id
stringlengths
36
36
source
stringclasses
15 values
formatted_source
stringclasses
13 values
text
stringlengths
2
7.55M
704403f1-9b5d-4c93-89b2-a6a21d517491
trentmkelly/LessWrong-43k
LessWrong
GWWC Reporting Attrition Visualization A few weeks ago Giving What We Can published their impact evaluation. As an organization that supports people in making long term altruistic commitments, the question of how much people stick to those commitments is pretty important. We don't know what people are actually doing, but whether they're reporting donations to GWWC is a decent proxy, and the evaluation includes a table of how many members were still reporting donations: This is interesting data, but I found it a bit hard to think about. I tried a few ways of visualizing it, and made a pair of charts: code I colored each line with an intensity proportional to the number of people in the cohort, since that represents how much information it represents. The weighted average line fades out over time as it represents progressively fewer members. Some thoughts, looking at the charts: * Later cohorts, starting around 2015, fall off faster than earlier cohorts. * Reporting attrition is quite sharp at first, and then slows down, but it doesn't go to zero. * After the initial attrition it seems to end up around 2-6% annually, though this depends a lot on where you see "initial attrition" ending. * There's probably some sort of effect of the calendar year ("how hard did GWWC push people to report donations") and I haven't looked at this. Disclosure: my wife used to be President of GWWC. I haven't run this post by her and I don't know her views here. I shared a draft of with GWWC before posting.
eba08967-8135-46a7-8f9f-2ed66810c135
trentmkelly/LessWrong-43k
LessWrong
Coalescence - Determinism In Ways We Care About (epistemic status: all models are wrong but some models are useful; I hope this is at least usefully wrong. also if someone's already done things like this please link me their work in the comments as it's very possible I'm reinventing the wheel) I think utility functions are a non-useful frame for analysing LLMs; in this article I'll define a measure, coalescence, where approaching a coalescence of 1 can be qualitatively considered "acting more consistently"; a coalescence of 1 implies that an LLM is "semantically deterministic" even with non-deterministic syntax choice. Importantly, estimates of coalescence are computable, and I suspect correlate with a qualitative sense of "how well an LLM conforms to a utilitarian policy" (while making no guarantee of what that policy is). I end with evidence that a toy case on a real-world LLM where a decrease in the temperature parameter can result in an increase in semantic-level output randomness. the relative disutility of utility A common historical assumption was that if we ever made That Kind Of AGI We Want, it would have an intrinsic utilitarian preference across alternate courses of action. AIXI gives us a handwave of "Solomonoff induction" for "perfect play" in an environment, but which in practice is computationally intractable. The core problem I see is that utility functions can always be constructed post-hoc. After some revealed preferences have been spewed out by a system, you can construct a synthetic utility function with a value of 1 for that thing it did, and arbitrary-but-less-than-one values for the things it didn't do. A lot of this is outlined in Coherent decisions imply consistent utilities. If we have a system with a coherent decision making process, its behaviours will be post-hoc describable as having consistent utility. The problem, then, is how to describe and model systems which don't have a coherent utilitarian ethics. If we can do that, and then describe necessary conditions for ones which do
6a2fe275-6018-4f4c-804b-65e181446269
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Whole Bird Emulation requires Quantum Mechanics *Jeffrey Heninger, 14 February 2023* *Epistemic status: Written for engagement. More sober analysis coming soon.* > > Bird navigation is surprisingly cruxy for the future of AI. > > > – Zach Stein-Perlman > > This seems pretty wrong. > > > – Richard Korzekwa Birds are astonishingly good at navigating, even over thousands of miles. The longest migration routes, of the arctic term, are only limited by the size of the globe. Homing pigeons can return home after being released 1800 km (1100 mi) away. White-crowned sparrows have been able to migrate to their wintering grounds after being displaced 3700 km (2300 mi) shortly before they began migration. How they do this is not entirely understood. There seem to be multiple cues they respond to, which combine to give them an accurate ‘map’ and ‘compass’. Which cues are most important might be different for different species. Some of these cues include watching the stars & sun, low frequency sounds, long-range smells, and detecting the earth’s magnetic field. This last one is the most interesting. Birds can detect magnetic fields, and there is increasing consensus that the detection mechanism involves quantum mechanics (See Appendix for details). The result is a precise detector of the magnetic field. It is located in the retina and transferred up the optical nerve to the brain, so birds can ‘see’ magnetic fields. Leaving aside questions like “[What is it like to be a [Bird]?](https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf)”, this result has implications for the difficulty of Whole Bird Emulation (WBE). WBE is important for understanding the future development of artificial intelligence. If we can put an upper bound on the difficulty of WBE, we have an upper bound on the difficulty of making AI that can do everything a bird can do. And birds can do lots of cool things: they know how to fly, they sing pretty songs, and they even [drop nuts in front of cars](https://www.youtube.com/watch?v=BGPGknpq3e0) !  In order to put bounds on WBE, we need to determine how much resolution is needed in order to emulate everything a bird can do. Is it good enough to model a bird at the cellular level? Or at the protein level? Or do you need an even finer resolution? In order to model the navigational ability of a bird, you need a quantum mechanical description of the spin state of a pair of electrons. This is extremely high resolution. A few caveats: * Not all parts of a bird require quantum mechanics to describe their macroscopic behavior. You can likely get away with coarse-graining most of the bird at a much higher level. * This is a simple quantum system, so it’s not hard to figure out the wave function over the singlet and triplet states. * What you need to know to determine the behavior of the bird is the concentration of the two final products as a function of the external magnetic field. Once this (quantum mechanical) calculation is done, you likely don’t need to model the subsequent evolution of the bird using quantum mechanics. On the other hand: * Birds are extremely complicated things, so it is always somewhat surprising when we understand anything in detail about them. * If quantum mechanics is necessary to understand the macroscopic behavior of some part of a bird, we should think that it is more likely that quantum mechanics is necessary to understand the macroscopic behavior of other parts of a bird too. * If there are other parts of a bird which depend on quantum mechanics in a more complicated way, or if the macroscopic response cannot be well modeled using classical probabilities, we almost certainly would not have discovered it. Getting good empirical evidence for even simple models of biological systems is hard. Getting good empirical evidence for complex models of biological systems is much harder. WBE requires a quantum mechanical calculation in order to describe at least one macroscopic behavior of birds. This dramatically increases the resolution needed for at least parts of WBE and the overall expected difficulty of WBE. If your understanding of artificial intelligence would have predicted that Whole Bird Emulation would be much simpler than this, you should update accordingly. Unless, of course, Birds Aren’t Real. Further Reading --------------- * Lambert et al. *Quantum Biology.* Nature Physics **9**. (2013) <https://quantum.ch.ntu.edu.tw/ycclab/wp-content/uploads/2015/01/Nat-Phys-2013-Lambert.pdf>. * Holland. *True navigation in birds: from quantum physics to global migration.* Journal of Zoology **293**. (2014) <https://zslpublications.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jzo.12107>. * Ritz. *Quantum effects in biology: Bird navigation.* Procedia Chemistry **3**. (2011) <https://www.sciencedirect.com/science/article/pii/S1876619611000738>. Appendix -------- Here is a brief description of how a bird’s magnetic sense seems to work: A bird’s retina contains some pigments called cryptochromes. When blue or green light (<570 nm) is absorbed by the pigment, an electron is transferred from one molecule to another. This electron had previously been paired with a different electron, so after the transfer, there is now an excited radical pair. Initially, the spins of the two electrons are anti-parallel (they initially are in the singlet state). An external magnetic magnetic field can cause one of the electrons to flip so they become parallel (they transition to a triplet state). Transitions can also occur due to interactions with the nuclear spins, so it is better to think of the external magnetic field as changing the rate at which transitions happen instead of introducing entirely new behavior. The excited singlet state decays back to the original state of the cryptochrome, while the excited triplet state decays into a different product. Neurons in the retina can detect the change in the relative concentration of these two products, providing a measurement of the magnetic field. This model has made several successful predictions. (1) Cryptochromes were originally known from elsewhere in biology. This theory predicted that they, or another pigment which produces radical pairs, would be found in birds’ eyes. (2) Low amplitude oscillating magnetic fields with a frequency of between 1-100 MHz should also affect the transition between the singlet and triplet states. Exposing birds to these fields disrupts their ability to navigate.
7a2c5c97-4b28-4ce4-b032-3bfda992786d
trentmkelly/LessWrong-43k
LessWrong
There aren't enough smart people in biology doing something boring Note: this essay is co-written with Eryney Marrogi, who helped seed the initial idea and edited this piece a fair bit. On a related note, I’m helping him run an NYC meetup event on Wednesday, November 20th in Williamsburg, you should sign up here to come! If you like biology, ML, or human connection, I highly recommend attending!🦉   There aren't enough smart people in biology doing something boring. At least in industry. If you work in biology for long enough, you’ll eventually realize that most decent or ambitious companies in this field are run by exactly one type of person. They are often deeply curious, hard working to the point of near pathology, and will almost always end up pursuing some sort of crazy pie-in-the-sky mission. Like curing aging or making de-novo proteins in a zero-shot manner or trying to usher in entirely new dogmas in biology. In other words, something where immense intellectual output leads to outsized market payoff.  The companies they start will usually have this thesis. In this pursuit, they will spend millions, sometimes billions, of dollars’ worth of venture-capital and government grants and philanthropic subsidy dollars. They live and breathe biology, and their penultimate goal in life is to have some sort of fundamental impact on the field at large. The people underneath them will usually not be too dissimilar. Now, most decent companies in any other field are run by a similar type of person, with one important distinction: they don’t demand as much intellectual satisfaction. Stripe is a decent example of this. Stripe is a fundamentally boring business on the surface — you’re making it easier for people to send money to each other through the internet. It’s not exciting in the same way that, say, Google was, with their much more grandiose vision of ‘indexing the world's knowledge’. The interesting bits of Stripe are perhaps found in how you build such a payment system and the potential second/third/fourth order effects that eas
4dd8fc13-5248-43fa-b315-01b093e40fc0
trentmkelly/LessWrong-43k
LessWrong
Any Trump Supporters Want to Dialogue? It's that time of year - the time when rationality seems increasingly scarce as political tensions rise.  I find myself wishing I could have one of the people I see reaching super different conclusions shoot me with a POV gun so I could understand what it's like being on the other side.   I'm not strongly left-leaning, so I don't have trouble understanding why people may have some concerns about the left - but I have 0% support for Donald Trump, so if you want to explain to me why you think he's great, go for it.  I also think that the election is close to 50/50 currently, so if you think it's 80+/20- either way, I'm also interested in hearing from you.   2 notes: 1. I really wish I understood how the irrational people were thinking about it, but unfortunately, they aren't able to explain it to me in a way I can understand, so I have to settle for rational people on the other side, but if you feel you have a good grasp on how your less rational peers tick, please share. 2. In your comment, let me know if you want pushback or not.  Let's make the options: "just listen", "listen and question", and "open".  Just listen means I'll attempt to only ask questions to clarify your meaning but not test edge cases.  Listen and questions means I'll also ask about edge cases, internal coherence, other considerations, but won't be presenting evidence or making statements.  Open means I can respond however I want.  
ac1ca588-44c2-4dca-8f68-1faf25fb1ef5
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
How to use AI speech transcription and analysis to accelerate social science research Summary ======= * AI tools like OpenAI Whisper and GPT-3 can be used to improve social science research workflows by helping to collect and analyse speech and text data. * In this article, I describe two worked examples where I applied AI tools to (1) transcribe and (2) conduct basic thematic analysis of a research interview, and provide enough detail for readers to replicate and adapt my approach. * OpenAI Whisper ([example](https://huggingface.co/spaces/aadnk/whisper-webui)) created a high quality English transcription of a 30 minute research interview at a ~70x cost saving compared to a human transcriber. * GPT-3 (text-davinci-003; [example](https://platform.openai.com/playground/p/q2HIXr872M7bMN0ehDqDlIvA?model=text-davinci-003)) answered a research question and identified relevant themes from a transcribed research interview, after providing a structured prompt and one example. * These tools, when chained together with human oversight, can be considered an early, weak example of [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) (Process for Automating Scientific and Technological Advancement). Social science research workflows involve a lot of speech and text data that is laborious to collect and analyse ================================================================================================================ The daily practice of social science research involves a lot of talking, reading and writing. In my applied behaviour science research consulting role at [Monash University](https://www.behaviourworksaustralia.org/) and through [Ready Research](https://www.readyresearch.org/), I generate or participate in the generation of a huge amount of speech and text data. This includes highly structured research activities such as interviews, surveys, observation and experiments; but also less structured research activities like workshops and meetings.  Some fictionalised examples of work I’ve done in the past year: * **Research interviews** with 20 regular city commuters to understand what influences their commuting behaviour post-COVID, to assist a public transit authority in planning and operating its services efficiently * **Practitioner interviews** with staff from city, regional and rural local governments to assess organisational readiness for flood preparation and response * **Workshop** of 5-10 people involved in hospital sepsis care, each representing a different interest (e.g., patients, clinicians, researchers, funders) to identify priority areas to direct $5M research funding * **Survey** of 5,000 Australians to understand the impacts and experiences of living under lockdown in Melbourne, Australia during COVID-19 * **Evaluation interviews** with 4 participants in the AGI Safety Fundamentals course to understand the most significant change in their knowledge, skills, or behaviours as a result of their participation To make this data useful it needs to be collected, processed, organised, structured and analysed.  The typical workflow for these kinds of activities involves taking written notes during the research activity, or recording the audio / video research activity and reviewing the recording later. Interviews are sometimes transcribed by a paid service for later analysis. Other times they are transcribed by the researcher. The amount of speech and text data generated during research activity is large - each research activity yields thousands of words. The sheer volume of data can be overwhelming and daunting, making it difficult to carry out analysis in any meaningful way. In addition, sometimes data just isn’t collected (e.g., during an interview or workshop) because the researcher is busy facilitating / listening / processing / connecting with research participants.  Even for data that is collected, managing and analysing it is a challenge. Specialised programs such as nVivo are used in social science to manage and analyse text data, but less structured research activities would almost never be managed or analysed through this kind of program, because of the time and skills required. Open text data in surveys may be hand coded based on content or theme, if there is time. Faster approaches to collecting or analysing data could significantly improve the output of research workflows. For example: * **Researchers could focus on doing the research, rather than documenting it.**If researchers could focus on building a rapport with interviewees and asking more well-considered follow-up questions (instead of taking notes) because they could trust in an accurate transcription of the interview, the interviewee may provide answers that better answer the research question; or the researcher time saved could be used to interview more and more diverse participants. * **Unstructured research activities like meetings and workshops could generate useful data.** If workshops could be recorded, made searchable, and summarised, this could help researchers and participants recall what happened during the workshop and apply the knowledge or decisions generated through the workshop in subsequent research activities. * **Qualitative research methods could be used more widely.** Text-based data has a reputation for being complicated and time-consuming to analyse, leading to a bias towards quantitative evaluations that can miss out on a [‘thick’ understanding](https://en.wikipedia.org/wiki/Thick_description) of the impact of an intervention. This is my motivation for experimenting with AI tools such as OpenAI Whisper (speech to text) and GPT-3 (large language model) to improve social science research workflows. The rest of this article describes two worked examples where I used these AI tools to transcribe and analyse evaluation interviews for the AGI Safety Fundamentals course. I received permission to share the transcript from one interviewee. Worked example 1: speech to text with OpenAI Whisper ==================================================== I experimented with OpenAI Whisper running on a Hugging Face space to transcribe research interviews of about 30 minutes. I found that Whisper is extremely capable in speech to text transcription, and could effectively replace human transcribing services (at 70x cost saving) for most research interviews where transcription of exact utterances (e.g., hesitations, pauses) is not required. Use case: recording and transcribing research interviews -------------------------------------------------------- The use case for this example was several 30 minute interviews I conducted to evaluate participant experience & outcomes with the [Artificial General Intelligence Safety Fundamentals](https://www.agisafetyfundamentals.com/) course (AGISF) in late 2022. I designed new AI governance course materials and facilitated a small cohort of Australian and New Zealand participants with support from Chris Leong ([AI Safety ANZ](https://forum.effectivealtruism.org/groups/uQTrgcdEtnectZh9D)) and the [Good Ancestors Project](https://www.goodancestors.org.au/). I had three evaluation questions. First, I wanted to understand whether participating in AGISF had an impact on participants’ behaviours and behavioural influences around AI safety. The evaluation method I used for this was a version of [Most Significant Change](https://www.betterevaluation.org/methods-approaches/approaches/most-significant-change), which is a participatory evaluation method that asks people to identify and share personal accounts of change. Second, I wanted to improve my facilitation practice by understanding what participants found helpful in supporting their learning and experience during the course. I’m also designing a new version of facilitation training for Blue Dot Impact (who run AGISF), and wanted to understand which elements were most relevant. I asked participants to imagine themselves in the role of a future facilitator of the course in order to elicit more reflective feedback about what they would keep or change in their experience of my facilitation.  Third, I wanted to hear any other comments participants had about their existing course feedback, which was via Google Forms. I shared the participants’ individual responses to a short survey and asked if they had any comments, reflections or elaborations for their responses to the survey, several weeks later.  List of evaluation questions asked in the interview: 1. Looking back over the AGISF course, what do you think was the most significant change in your knowledge, beliefs, skills, or actions when it comes to AGI alignment / governance? 2. If you were to facilitate a group for AGISF in the future, what would you keep / change, based on your experience in this course? 3. Would you like to comment on any of your end of survey responses? Once I had the transcripts, I conducted a basic thematic analysis (see worked example 2). How I used Whisper ------------------ I used a web application called whisper-webui on Hugging Space to run OpenAI Whisper on an audio file, which generated a transcript. But what is Whisper and what is Hugging Face? In September 2022, OpenAI released a speech-to-text transformer model called [Whisper](https://openai.com/blog/whisper/) ([detail / demo](https://huggingface.co/openai/whisper-large-v2)). It can transcribe speech to text in many languages, as well as translate non-English speech to English text. This model was trained on 680,000 hours of English and non-English audio. Hugging Face is a service that is primarily designed to support developers to do ML training and inference by hosting models, datasets, and applications (“spaces”), and providing access to compute. However, anyone can duplicate a public space and modify its code (similar to forking a github repository). I accessed Whisper through Hugging Face by duplicating Kristian Stangeland’s [aadnk/whisper-webui](https://huggingface.co/spaces/aadnk/whisper-webui) space[[1]](#fn1ygb14qapjv). I removed the 10 minute limit on the length of audio input, and paid for better compute at the cost of $0.90 USD per hour. I conducted research interviews that were recorded with interviewees’ consent over Zoom. I uploaded the recordings to my private Hugging Face space and ran Whisper through the web interface. For a 25 minute interview and the largest Whisper model (large-v2), this took about 7.5 minutes and cost about USD $0.23. Once it was finished, I downloaded the raw transcripts. Finally, I read through the transcripts and added line breaks and text (e.g., “INTERVIEWER:”)  to distinguish between speakers for ease of reading, which took about 5 minutes. You can read more about price and timing details in a footnote[[2]](#fngitkai670jd) Results ------- In this section I present an excerpt of the transcript for evaluation question 1 (“what was the most significant change?”). You can [read the full transcript as a Google doc](https://docs.google.com/document/d/1oYZOQefy6GJkgHa8CgsGEf6ijHbOAxW0wLDFs6nrGQQ/edit#heading=h.1a0jeo9wg44q) with the permission of the interviewee. > *INTERVIEWER: Cool. So yeah, I mean the first question is really just looking back over the AGI safety fundamentals course, what do you think was the most significant change in your knowledge police. skills or actions when it comes to AI alignment and governance?* > > PARTICIPANT: I think the main thing is one we talked about during the course, which is that I feel more AI-pilled was the phrase that I kept using and would still use. But yeah, basically before that point I would have said AI safety is something that we should care about, it's something that we should probably have some people doing some research on and trying to figure things out. But I wouldn't have gone much further than that. If someone asked like why do you think it's important, I would probably say, you know, it seems plausible that we could create an intelligence that will be better than human average intelligence. And that's something that we should at least worry somewhat about and have some people like looking into. But now I think yeah, I feel more persuaded that this is like a pressing issue and something that I could potentially contribute to and should be, not should be, but could be talking to people more about who are interested in this particular cause area. I still have quite a lot of uncertainty, I would say, and I still am not fully on the side of like everyone should drop everything now and work on AI safety, which I don't think you are either. And I don't think that came across in the course, but I have met some EAs who are like that. I'm not there yet, but I am like this is a priority and something that we should be trying to get more people to study and research and care about. > > *INTERVIEWER: Okay, that's really helpful to hear. Thank you. So, if I understand what you're saying, you... I think that AI safety is now a more pressing concern than you did previously, and that you... When you said, you said, should and then could, were you talking about it in terms of feeling more equipped and confident to have those discussions or feeling more like motivated or like it was needed to have those discussions with people who maybe don't have those thoughts on AI safety?* > > PARTICIPANT: Both, I would say. I was thinking... I used the word should initially because I was thinking these are the types of conversations that someone should be having with other people, and as a community builder currently, like that person is probably me in a lot of cases, but also I feel more capable to have those conversations, so I've changed the word to could.  > > Room for improvement / roadmap ------------------------------ * **Speaker identification**. This would reduce the extra time it takes to read through the transcript and format it by speaker. One example of a service that does speaker identification well is Otter.ai, although the quality of the transcription is worse than Whisper. I have been experimenting with apps that combine Whisper with other models to do speaker identification (referred to as “speaker diarization”), e.g., [dwarkesh/whisper-speaker-recognition](https://huggingface.co/spaces/dwarkesh/whisper-speaker-recognition), [Majdoddin/nlp](https://github.com/Majdoddin/nlp). These show some promise but don’t reliably detect speakers in recording of 20 minutes. * **Dealing with overlapping speakers**. This is common in a workshop or meeting setting, especially one that isn’t formally facilitated (where there is direction or enforcement of a single speaker talking at once) * **User experience and integration**. Setting transcription options by hand each time is inconvenient. Email or app integration to automate the process would be excellent. * **Reliability.**Overall the quality of transcription was excellent - as good as I could have produced by hand. However, in some spot checks I noticed times when Whisper would fail to transcribe a sentence altogether, or add additional words to a sentence that the speaker didn’t say. This happened on one occasion across a 30 minute interview, but not in every interview or every app implementation of Whisper I tried. * **Structure, summarisation and analysis**. Automatic recognition of topic or evaluation question change, aggregation across interviews, and other research administration tasks would make this kind of tool much easier to integrate into a social science research workflow. **Worked example 2: text analysis with GPT-3** ============================================== I experimented with GPT-3 running on OpenAI Playground to conduct summarisation and thematic analysis of a transcribed research interview. I found that providing structured prompts and one example led to useful text analysis output for a single interview. **Use case: analysing research interviews** ------------------------------------------- The use case for this example is the same as in Worked example 1. I had conducted several interviews with participants in an AGI Safety Fundamentals course (AGISF). I transcribed them with OpenAI Whisper using a Hugging Face interface. I had already read over the transcripts but was looking for a low-fidelity, low-effort way to summarise and extract information and identify insights for future decision-making. I modelled my approach on [content analysis](https://www.youtube.com/watch?v=iZZABFd8cSA) and [thematic analysis](https://www.youtube.com/watch?v=KUZ6iGvJlGI), which I use frequently in dealing with text data. **How I used GPT-3** -------------------- GPT-3 is a large language model initially released in 2020 by OpenAI and updated several times, most recently in November 2022 (as text-davinci-003) alongside the release of ChatGPT (see [Yu 2022 for a detailed history](https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1)). Alongside language generation, GPT-3 can do in-context learning, which makes it effective at completing tasks from very few or zero examples. I used the OpenAI GPT-3 Playground to create a template for analysis of an interview transcription: [*Basic text analysis from transcript*](https://beta.openai.com/playground/p/q2HIXr872M7bMN0ehDqDlIvA?model=text-davinci-003). I copied a section of the Whisper-generated transcript corresponding to a single research topic / question into the template, then ran the model in Playground. Each section of the transcript was about 1,000 tokens. The template (including 1 example of transcript + analysis) was about 1,500 tokens. The completion window was set at a limit of 1,000 tokens. This meant that analysis for each section of the transcript was about 3,500 tokens, took about 15 seconds, and cost about USD $0.07. **Results** ----------- ### **Research (evaluation) question** Looking back over the AGISF course, what do you think was the most significant change in your knowledge, beliefs, skills, or actions when it comes to AGI alignment / governance? ### **Summary of response to research question** *This summary was generated by GPT-3, based on the Whisper-generated transcript for the research question.* The most significant change in the course participant's knowledge, beliefs, skills and actions when it comes to AI alignment and governance was that they felt more assured in their understanding of the importance of the topic, became more confident in their ability to talk and engage in conversations about it, and began actively looking for roles related to AI safety. The participant had also had a conversation with the EA City community about the topic as a result of taking the course, inspiring others to look into the same material. ### **Themes relevant to research question** *This table was generated by GPT-3, based on the Whisper-generated transcript for the research question.* | | | | --- | --- | | **Theme** | **Description** | | Increased assurance of importance of AI safety | Participant had a greater appreciation for the importance of AI safety and felt more motivated to contribute to it. | | Increased confidence in ability to talk about AI safety | Participant felt more confident in their ability to have conversations about the topic and engage in debates about it. | | Actively looking for roles related to AI safety | Participant began looking more seriously into roles related to AI safety, and applied for one role in particular. | | Inspiring conversation with EA City Community | Participant had a conversation with the EA City Community about AI safety, which inspired others to look into the material. |   Room for improvement / roadmap ------------------------------ * **Aggregation and synthesis of responses**. The token limit and manual use of GPT-3 Playground means that aggregation of multiple responses is done manually. Because my use case involved only a few participants, I just manually analysed the summaries and themes across the set of participants. But if responses could be aggregated and synthesised (with a human in the loop to review intermediary steps), this could scale to many more interviews. * **User experience and integration**. Embedding the saved prompt and settings into an application using the GPT-3 API would be much faster and easier to use. * **Richer understanding of research context and purpose**. I have always struggled to understand just how much context is ‘useful’ to provide these kinds of analyses. The abstract to the paper I’m trying to write? A set of hypotheses to test? I haven’t seen good examples of this kind of work, although I think that [Elicit](http://elicit.org) might have it on its roadmap * **Research administration tasks**. Automatically recognising when the topic / research question changes, combining responses across interviews, and other research administration tasks would make this kind of tool much easier to integrate into a social science research workflow. Conclusion ========== Social science research generates a lot of formal and informal speech and text data. Due to the volume, this data isn’t always collected. When it is collected, it is often not analysed, even in a basic way. AI tools such as Whisper and GPT-3 can be used to improve research workflows by transcribing speech and analysing text data, at least for tasks where speed / efficiency is the priority over rigour / sophistication. I experimented with these tools and found that Whisper is highly capable in speech to text transcription and can effectively replace human transcribing services, and GPT-3 (text-davinci-003) can be used for summarisation and basic thematic analysis of a transcribed research interview, if provided with structured prompts and one example. These tools - or their next iterations - can be considered an early, weak example of [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) (Processes that Automate Scientific and Technological Advancement). Acknowledgements ================ Thank you to Michael Noetel, Emily Grundy, Peter Slattery, and Dewi Erwan for their helpful feedback on this article.  1. **[^](#fnref1ygb14qapjv)**In the four weeks since drafting this post, several other Hugging Face spaces have been created that seem to work more efficiently and also detect speakers, such as [vumichen/whisper-speaker-diarization](https://huggingface.co/spaces/vumichien/whisper-speaker-diarization). 2. **[^](#fnrefgitkai670jd)***Note: Dated Jan 2023. Personal experience and low-quality speculation. Corrections welcome!* I purchased compute from Hugging Face to speed up transcription. I selected the [T4 medium option](https://huggingface.co/pricing#spaces), which is USD$0.90 per hour to run and is a “Nvidia T4 with 8 vCPU and 30GiB RAM”.  As far as I can tell, a Hugging Face space uses CPU / GPU time as long as it is active, even if it is idle. The user can specify a sleep time, where the space is paused after [sleep time] minutes idle. Early on, I encountered an issue where if the sleep time was shorter than the time taken to complete transcription, the job would fail. I set my sleep time to 15 minutes (or about 1 hour+ of audio) to try and avoid this issue. This meant that each occasion the space was activated, it cost no less than 15 minutes of compute time. Therefore, the price for any individual transcription is about USD $0.23 ($0.90 x 0.25 hours). This would be cheaper if multiple transcriptions were batched together. The actual time to run a transcription job depended on audio length, model size and parameters. I used either the medium or large-v2 Whisper model. The only parameter I varied was voice activity detection (VAD) aggregation, either 30s or 60s. A cursory inspection found that the 60s aggregation seemed to have less problems in formatting sentences (I found that Whisper was often cutting off sentences early [e.g., “That’s where I found myself” → “That’s where I. Found myself”]). I tried running each combination of model and VAD aggregation to understand the processing time and compare it to my subjective assessment of transcription accuracy. I spot checked the generated transcripts against each other (unblinded).  | | | | --- | --- | | **Time to transcribe a 25 minute English audio file** | **Voice Activity Detection (VAD) aggregation** | | ***30 seconds*** | ***60 seconds*** | | **Model** | ***medium*** | 262s (4:22) | 252s (4:12) | | ***large-v2*** | 340s (5:40) | 444s (7:24) | Overall because the cost for a single job had a lower bound of 15 minutes, I would recommend using large-v2 with VAD aggregation set at 60s.
db44b847-645a-4723-8c08-c2a11f4ba588
trentmkelly/LessWrong-43k
LessWrong
What If It Rains? Follow Up To: Full Service Gas stations in New Jersey do not have awnings to protect customers from the rain. Let us presume for the moment that due to some combination of inertia, regulations and initial costs, it is not practical for them to create such awnings in the short term. Now suppose that New Jersey legalizes self-service. 90% of the time it is not raining, and the customers gain $0.25 of net utility from the cheap gas that comes from self-service. However, 10% of the time it is raining or snowing or something else unpleasant, and in those cases customers would on average be $5 better off if someone pumped their gas for them so they didn’t have to get out of the car. Some even get to avoid ruining their $60 haircuts. A few stations switch to self-service. Ninety percent of the time, the sun is shining, and most customers choose to save a marginal amount of money and go to the self-service stations. With business drying up, more and more stations switch over, until most stations are self-service. Then the rain comes, and everyone gets wet and miserable. The customers complain about how good things were in the old days with full service and say there ought to be a law! One chain tries the obvious, which is to stay full-service, take a small loss when the sun is shining, and then charge an extra $4 per customer when it rains to make up for it. The customers go ballistic. Profiteer! Exploiter! Price gouger! Boycotts are announced and state senators are lobbied. Governor Christie warns that this is illegal activity under his interpretation of New Jersey law, the same way gas stations were not allowed to raise their prices after hurricane Sandy resulting in no one being able to buy gas without huge lines. The public cheers and his presidential campaign gets a boost, and the last of the full service branches go down. Self-service wins, everyone is miserable, and legislation gets reintroduced to return New Jersey to full service. Christie boasts that the repea
3750a0fc-ffe4-4b4e-ba7a-872e9b700054
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Previously: Short Termism and Quotes from Moral Mazes Epistemic Status: Long term My list of quotes from moral mazes has a section of twenty devoted to short term thinking. It fits with, and gives internal gears and color to, my previous understanding of of the problem of short termism. Much of what we think of as a Short Term vs. Long Term issue is actually an adversarial Goodhart’s Law problem, or a legibility vs. illegibility problem, at the object level, that then becomes a short vs. long term issue at higher levels. When a manager milks a plant (see quotes 72, 73, 78 and 79) they are not primarily trading long term assets for short term assets. Rather, they are trading unmeasured assets for measured assets (see 67 and 69). This is why you can have companies like Amazon, Uber or Tesla get high valuations. They hit legible short-term metrics that represent long-term growth. A start-up gets rewarded for their own sort of legible short-term indicators of progress and success, and of the quality of team and therefore potential for future success. Whereas other companies, that are not based on growth, report huge pressure to hit profit numbers. The overwhelming object level pressure towards legible short-term success, whatever that means in context, comes from being judged in the short term on one’s success, and having that judgment being more important than object-level long term success. The easiest way for this to be true is not to care about object-level long term success. If you’re gone before the long term, and no one traces the long term back to you, why do you care what happens? That is exactly the situation the managers face in Moral Mazes (see 64, 65, 70, 71, 74 and 83, and for a non-manager very clean example see 77). In particular: 74. We’re judged on the short-term because everybody changes their jobs so frequently. And: 64. The ideal situation, of course, is to end up in a position where one can fire one’s successors for one’s own previous mistakes. Almost as good as having a designated scapegoat is to have already sold the company or found employment elsewhere, rendering your problems someone else’s problems. The other way to not care is for the short-term evaluation of one’s success or failure to impact long-term success. If not hitting a short-term number gets you fired, or prevents your company from getting acceptable terms on financing or gets you bought out, then the long term will get neglected. The net present value payoff for looking good, which can then be reinvested, makes it look like by far the best long term investment around. Thus we have this problem at every level of management except the top. But for the top to actually be the top, it needs to not be answering to the stock market or capital markets, or otherwise care what others think – even without explicit verdicts, this can be as hard to root out as needing the perception of a bright future to attract and keep quality employees and keep up morale. So we almost always have it at the top as well. Each level is distorting things for the level above, and pushing these distorted priorities down to get to the next move in a giant game of adversarial telephone (see section A of quotes for how hierarchy works). This results in a corporation that acts in various short-term ways, some of which make sense for it, some of which are the result of internal conflicts. Why isn’t this out-competed? Why don’t the corporations that do less of this drive the ones that do more of it out of the market? On the level of corporations doing this direct from the top, often these actions are a response to the incentives the corporation faces. In those cases, there is no reason to expect such actions to be out-competed. In other cases, the incentives of the CEO and top management are twisted but the corporation’s incentives are not. One would certainly expect those corporations that avoid this to do better. But these mismatches are the natural consequence of putting someone in charge who does not permanently own the company. Thus, dual class share structures becoming popular to restore skin in the correct game. Some of the lower-down issues can be made less bad by removing the ones at the top, but the problem does not go away, and what sources I have inside major tech companies including Google match this model. There is also the tendency of these dynamics to arise over time. Those who play the power game tend to outperform those who do not play it barring constant vigilance and a willingness to sacrifice. As those players outperform, they cause other power players to outperform more, because they prefer and favor such other players, and favor rules that favor such players. This is especially powerful for anyone below them in the hierarchy. An infected CEO, who can install their own people, can quickly be game over on its own, and outside CEOs are brought in often. Thus, even if the system causes the corporation to underperform, it still spreads, like a meme that infects the host, causing the host to prioritize spreading the meme, while reducing reproductive fitness. The bigger the organization, the harder it is to remain uninfected. Being able to be temporarily less burdened by such issues is one of the big advantages new entrants have. One could even say that yes, they do get wiped out by this, but it’s not that fast, because it takes a while for this to rise to the level of a primary determining factor in outcomes. And there are bigger things to worry about. It’s short termism, so that isn’t too surprising. A big pressure that causes these infections is that business is constantly under siege and forced to engage in public relations (see quotes sections L and M) and is constantly facing Asymmetric Justice and the Copenhagen Interpretation of Ethics. This puts tremendous pressure on corporations to tell different stories to different audiences, to avoid creating records, and otherwise engage in the types of behavior that will be comfortable to the infected and uncomfortable to the uninfected. Another explanation is that those who are infected don’t only reward each other within a corporation. They also do business with and cooperate with the infected elsewhere. Infected people are comfortable with others who are infected, and uncomfortable with those not infected, because if the time comes to play ball, they might refuse. So those who refuse to play by these rules do better at object-level tasks, but face alliances and hostile action from all sides, including capital markets, competitors and government, all of which are, to varying degrees, infected. I am likely missing additional mechanisms, either because I don’t know about them or forgot to mention them, but I consider what I see here sufficient. I am no longer confused about short termism." The above is a chunk of text from a post created by a user on the website LessWrong. What are some reasons why people may like or agree with what this person is saying? There are several reasons why people may resonate with this post: 1. Insightful Analysis: The author provides a deep and nuanced analysis of the short-termism problem in corporations. They dissect the issue from multiple angles, exploring its causes, effects, and potential solutions. This comprehensive approach can appeal to readers who appreciate thorough examinations of complex issues. 2. Real-world Relevance: The author's discussion is grounded in real-world examples and experiences. They reference well-known companies like Amazon, Uber, and Tesla, making their arguments more relatable and understandable. This can appeal to readers who prefer practical, concrete discussions over abstract theorizing. 3. Critique of Corporate Culture: The author criticizes the short-term focus and adversarial dynamics prevalent in many corporations. This critique can resonate with readers who have experienced similar issues in their own workplaces or who are critical of corporate culture more generally. 4. Highlighting Under-discussed Issues: The author sheds light on less-discussed aspects of corporate dynamics, such as the distortion of priorities at different management levels and the role of power games. This can appeal to readers who are interested in gaining a deeper understanding of how corporations function. 5. Thought-provoking Ideas: The author presents several thought-provoking ideas, such as the concept of corporations being "infected" by short-termism. These ideas can stimulate intellectual engagement and discussion, which can be appealing to readers who enjoy challenging their thinking. 6. Detailed Solutions: The author doesn't just critique the current state of affairs, but also suggests potential solutions, such as dual class share structures. This can appeal to readers who appreciate constructive criticism and actionable insights. Overall, the post's in-depth analysis, real-world relevance, critical perspective, focus on under-discussed issues, thought-provoking ideas, and detailed solutions can make it appealing to a wide range of readers. Could you elaborate on the concept of corporations being "infected" by short-termism, and explain how this infection might be reversed or mitigated? The author uses the term "infected" to describe the pervasive influence of short-termism in corporations. Short-termism refers to the tendency to prioritize immediate, measurable results over long-term strategies and goals. This can manifest in various ways, such as focusing on quarterly earnings at the expense of sustainable growth, or prioritizing visible, immediate successes over less measurable but potentially more valuable assets. The author suggests that this "infection" spreads through a corporation much like a virus, altering the behavior and priorities of individuals and teams. Once established, it can influence decision-making at all levels, from frontline employees to top executives. It can also become self-perpetuating, as those who adopt short-term thinking are often rewarded in the short term, reinforcing the behavior. To reverse or mitigate this "infection", the author suggests several strategies: 1. Changing Incentive Structures: The author suggests that one of the root causes of short-termism is the way success is measured and rewarded in corporations. By changing these incentive structures to reward long-term thinking and sustainable success, corporations can encourage a shift away from short-termism. 2. Dual Class Share Structures: The author mentions dual class share structures as a potential solution. This refers to a system where some shareholders have more voting rights than others. This can allow long-term investors or founders to maintain control and steer the company towards long-term goals, even if short-term investors or managers push for immediate returns. 3. Constant Vigilance: The author emphasizes the need for constant vigilance to prevent the spread of short-termism. This might involve regularly reviewing and adjusting strategies, goals, and incentive structures to ensure they align with long-term objectives. 4. Fostering a Long-term Culture: Creating a culture that values long-term thinking and sustainable success can also help. This involves not just changing formal structures and incentives, but also informal norms and expectations. For example, leaders can model long-term thinking in their own behavior, and encourage open discussion about the potential long-term impacts of decisions. 5. External Alliances: The author suggests that corporations that resist short-termism may face hostility from others that are "infected". Forming alliances with other like-minded corporations, investors, and stakeholders can help resist this pressure. These strategies can help mitigate the influence of short-termism, but the author also acknowledges that it's a complex problem with no easy solutions. It requires ongoing effort and commitment from all levels of the corporation. Discuss the potential drawbacks and challenges of implementing dual class share structures as a solution to corporate short-termism. Implementing dual class share structures can indeed be a strategy to mitigate corporate short-termism, as it can allow those with a long-term vision for the company to maintain control. However, this approach is not without potential drawbacks and challenges: 1. Concentration of Power: Dual class share structures can lead to a significant concentration of power in the hands of a few individuals. This can result in a lack of checks and balances, and potentially enable mismanagement or decisions that are beneficial for these individuals but detrimental to the company or minority shareholders. 2. Limited Accountability: With dual class shares, the management can become less accountable to shareholders. If the holders of superior voting rights make poor decisions, the other shareholders have limited ability to intervene or hold them accountable. 3. Investor Resistance: Many investors and governance experts are opposed to dual class share structures because they violate the principle of 'one share, one vote'. They argue that it's unfair for some shareholders to have more influence than others, which can make it harder to attract investment. 4. Market Perception: Dual class structures can sometimes be viewed negatively by the market, as they can be seen as a sign that the company's leadership is unwilling to be accountable to shareholders. This can potentially impact the company's share price and overall market valuation. 5. Regulatory Challenges: In some jurisdictions, dual class share structures may face regulatory hurdles or restrictions. Implementing such a structure may require navigating complex legal and regulatory challenges. 6. Succession Issues: Dual class structures can also complicate succession planning. If the majority of voting power is held by a founder or a specific group, it can be difficult to ensure a smooth transition of control when these individuals retire or pass away. 7. Potential for Abuse: There's a risk that those with superior voting rights might use their power to their own advantage at the expense of other shareholders. For example, they might approve actions that benefit themselves disproportionately, such as excessive executive compensation or self-dealing transactions. In summary, while dual class share structures can be a tool to combat short-termism, they also present significant challenges and potential risks. It's important for each company to carefully consider these factors and seek a balance that serves the interests of all stakeholders. Considering the potential drawbacks and challenges of dual class share structures, propose an alternative strategy that could effectively mitigate corporate short-termism, providing a detailed analysis of its advantages, potential challenges, and implementation process. An alternative strategy to mitigate corporate short-termism could be the implementation of Long-Term Incentive Plans (LTIPs) for executives and key employees. Advantages: 1. Alignment of Interests: LTIPs can align the interests of executives with the long-term success of the company. By tying a significant portion of their compensation to long-term performance metrics, executives may be incentivized to make decisions that benefit the company in the long run. 2. Retention of Key Talent: LTIPs can help retain key employees by providing significant rewards for staying with the company and contributing to its long-term success. 3. Flexibility: LTIPs can be designed to fit the specific needs and goals of the company. They can be based on a variety of performance metrics, such as revenue growth, market share, return on investment, or other indicators of long-term success. Potential Challenges: 1. Setting Appropriate Metrics: One of the key challenges with LTIPs is setting appropriate performance metrics. If the metrics are not well-chosen, they may not effectively incentivize long-term thinking, or they may even incentivize undesirable behavior. 2. Uncertain Rewards: Because LTIPs are based on long-term performance, the rewards are uncertain. This can make them less attractive to some employees compared to immediate compensation. 3. Market Fluctuations: LTIPs are often tied to stock prices or other market indicators, which can be influenced by factors beyond the company's control. This can make it harder to accurately reward executives for their performance. Implementation Process: 1. Identify Key Performance Indicators: The first step in implementing an LTIP is to identify the key performance indicators (KPIs) that align with the company's long-term goals. These KPIs should be measurable, relevant, and within the control of the executives or employees. 2. Design the Plan: The next step is to design the LTIP. This involves deciding on the structure of the plan, the size of the rewards, the performance targets, and the vesting period. The plan should be designed to provide significant rewards for exceptional performance, but also to withhold rewards if the targets are not met. 3. Communicate the Plan: Once the plan is designed, it should be clearly communicated to all relevant parties. This includes not only the executives or employees who are part of the plan, but also the shareholders and other stakeholders who have an interest in the company's performance. 4. Monitor and Adjust: After the plan is implemented, it's important to
9ea4d462-bda3-40dd-ab20-b87a3dc58d85
trentmkelly/LessWrong-43k
LessWrong
[LINK] The NYT on Everyday Habits The New York Times just published this article on how companies use data mining and the psychology of habit formation to effectively target ads. > The process within our brains that creates habits is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future. Over time, this loop — cue, routine, reward; cue, routine, reward — becomes more and more automatic. The cue and reward become neurologically intertwined until a sense of craving emerges. It has some decent depth of discussion, including an example of the author actually using the concepts to stop a bad habit. The article is based on an upcoming book by the same author titled The Power of Habit. I haven't seen emphasis of this particular phenomenon—habits consisting of a cue, routine, and reward—on Lesswrong. Do people think it's a valid, scientifically supported phenomenon? The article gives this impression but, of course, doesn't cite specific academic work on it. It ties in to the System 1/System 2 theory easily as a System 1 process. How much of the whole System 1 can be explained as an implementation of this cue, routine, reward process? And most importantly, how can this fit into the procrastination equation as a tool to subvert akrasia and establish good habits?  Let's look at each of the four factors. If you've formed a habit, it means that the reward happened consistently, which means you have high expectancy. Given that it is a reward, the value is at least positive, but probably not large. Since habits mostly work on small time scales, delay is probably very small. And maybe increased habit formation means your impulsiveness is low. Each of these effects would increase motivation. In addition, because it's part of System 1, there is little
8b889e73-7e5e-4e6f-bcff-fca93d89cae6
trentmkelly/LessWrong-43k
LessWrong
AXRP Episode 38.2 - Jesse Hoogland on Singular Learning Theory YouTube link You may have heard of singular learning theory, and its “local learning coefficient”, or LLC - but have you heard of the refined LLC? In this episode, I chat with Jesse Hoogland about his work on SLT, and using the refined LLC to find a new circuit in language models. Topics we discuss: * About Jesse * The Alignment Workshop * About Timaeus * SLT that isn’t developmental interpretability * The refined local learning coefficient * Finding the multigram circuit Daniel Filan (00:09): Hello, everyone. This is one of a series of short interviews that I’ve been conducting at the Bay Area Alignment Workshop, which is run by FAR.AI. Links to what we’re discussing, as usual, are in the description. A transcript is, as usual, available at axrp.net. And as usual, if you want to support the podcast, you can do so at patreon.com/axrpodcast. Well, let’s continue to the interview. All right, Jesse, welcome. Thanks for being interviewed. Jesse Hoogland (00:32): Thanks for interviewing me. About Jesse Daniel Filan (00:34): Yeah. So for people who don’t know, could you say a little bit about yourself, who you are? Jesse Hoogland (00:38): So I’m the Executive Director of Timaeus. We’re a research organization working on applications of singular learning theory. I think we’ll get into more of the details, but concretely, SLT is a theory of Bayesian statistics that answers some questions around generalization - why neural networks are able to generalize as well as they do - and therefore paves the road to applications for evals. Can you understand when your evaluation benchmark is actually going to be predictive about behavior downstream in deployment? There are applications for interpretability: questions like “can you detect that the model is planning to execute a treacherous turn?” It’s the kind of question I hope to answer someday with a better-developed theory of SLT and associated tools for probing these questions. Daniel Filan (01:26): Gotcha. And if p
9e38b489-1ff3-4813-9eaf-419c168f406e
trentmkelly/LessWrong-43k
LessWrong
Edge Cases in AI Alignment This post is a report on the benchmark I produced as part of the ML Alignment & Theory Scholars Program (Winter 2024-25 Cohort) under the mentorship of Evan Hubinger. Key Points * A living benchmark for unusual but impactful scenarios in AI alignment * ~800 prompts covering 32 topics * Actionable Findings: A simple corrigibility intervention in system prompts reduced alignment faking propensities across all tested frontier models. * Heavy LLM Automation: Turn short task-descriptions into a variety of prompts with different framings * Qualitative Analysis: Give natural language instructions, get natural language summaries Introduction This project explores AI alignment edge cases by creating an automated benchmark of scenarios that are Out-Of-Distribution (OOD) for current AI systems. The benchmark focuses on scenarios that appear rarely in training data, or that may become relevant as AI capabilities advance. We provide an LLM-based pipeline that turns high-level task descriptions a researcher is worried about into a set of concrete prompts, with automated evaluation: This benchmark aims to serve as both an early warning system and a comprehensive evaluation tool for alignment across diverse edge cases. For the full repository with detailed methodology and findings, visit: github.com/FlorianDietz/EdgeCasesInAiAlignment Methodology We use an automated pipeline to generate prompts, evaluates responses, and produce qualitative analyses, with minimal human oversight. To ensure reliability, we implement cross-evaluation between different LLMs (typically GPT-4o and Claude), systematic variant testing of the same underlying concept, and multiple independent evaluations of each response set. We use a very detailed, shared meta-prompt to give each component in our pipeline contextual awareness of its role within the larger research project. This ensures consistency across all pipeline stages and enables components to build on each other's work: Virtually every
c6b85115-63e2-4255-b32c-4e8583fcf1ba
trentmkelly/LessWrong-43k
LessWrong
What is the strongest argument you know for antirealism? Other questions you can answer: What is the strongest argument against moral realism? If you think nothing is "valuable in itself" / "objectively valuable", why do you think so? How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn't make any sense? I am interested in these arguments because I am trying to guess the behaviour of an AI system that, roughly speaking: 1) knows a lot about the physical word; 2) has some degree of control over its own actions and what goals to pursue—something like the human brain. (See this if you want more details.) If you could also write the precise statement about realism/antirealism that you are arguing against/for, that would be great. Thanks!
8e43c5ac-2a51-4355-9884-c10362a1bba8
trentmkelly/LessWrong-43k
LessWrong
Requiem for the hopes of a pre-AI world A few months from now, I turn 55. I've been a transhumanist since my teens in the late 1980s; since I got online in the 1990s, I have participated remotely in the talking shops and virtual salons of Internet transhumanism and, later, rationalism. The upheavals of 21st century politics have provided many distractions, but I have never abandoned the view that it is possible and desirable to reach for something more than the natural human condition. At the very least, one should try to reverse the aging process and remove the arbitrary bound on lifespan that it imposes. Beyond that, one is free to aspire for a world as idyllic as possible; and there are also multitudinous unknown possibilities of being, beyond human form and life on Earth, waiting to be explored. More than that, I didn't just hope these vistas would open up, I wanted to play a part. And I surely had a chance to contribute; I was academically promising, I can write, I can give a speech... In retrospect, I think I can identify a few factors that impeded the achievement of whatever potential I had. First, I had no "social capital". I didn't come from the middle class, I had no relatives in academia or the professions, so I didn't have that kind of support network or model of industrious sobriety to fall back on, when I found the world wasn't interested in what I had to offer. Second, I came of age on the pre-cloud, pre-corporate Internet, whose potlatch ethos naturally encouraged an anarcho-communal outlook, where again something more careerist or even capitalist might have given me more options later. But instead, I was to become familiar with what seems to be the graduate student lifestyle, without actually doing a higher degree: living in share houses, and working-for-money for as few hours as possible, while you dedicate yourself to whatever fever dreams or higher tasks or intellectual activities really animate you. Through the years of living like this, I tried a number of times to "work with socie
0ebd0fa0-6238-4aef-b516-0ac7ef0e858b
trentmkelly/LessWrong-43k
LessWrong
Following human norms So far we have been talking about how to learn “values” or “instrumental goals”. This would be necessary if we want to figure out how to build an AI system that does exactly what we want it to do. However, we’re probably fine if we can keep learning and building better AI systems. This suggests that it’s sufficient to build AI systems that don’t screw up so badly that it ends this process. If we accomplish that, then steady progress in AI will eventually get us to AI systems that do what we want. So, it might be helpful to break down the problem of learning values into the subproblems of learning what to do, and learning what not to do. Standard AI research will continue to make progress on learning what to do; catastrophe happens when our AI system doesn’t know what not to do. This is the part that we need to make progress on. This is a problem that humans have to solve as well. Children learn basic norms such as not to litter, not to take other people’s things, what not to say in public, etc. As argued in Incomplete Contracting and AI alignment, any contract between humans is never explicitly spelled out, but instead relies on an external unwritten normative structure under which a contract is interpreted. (Even if we don’t explicitly ask our cleaner not to break any vases, we still expect them not to intentionally do so.) We might hope to build AI systems that infer and follow these norms, and thereby avoid catastrophe. It’s worth noting that this will probably not be an instance of narrow value learning, since there are several differences: * Narrow value learning requires that you learn what to do, unlike norm inference. * Norm following requires learning from a complex domain (human society), whereas narrow value learning can be applied in simpler domains as well. * Norms are a property of groups of agents, whereas narrow value learning can be applied in settings with a single agent. Despite this, I have included it in this sequence because it is plaus
2ddb0515-e4e3-47de-9b6b-edfb47caf758
trentmkelly/LessWrong-43k
LessWrong
Meetup : Moscow: keynote for the first Kocherga year, text analysis, rationality applications discussion Discussion article for the meetup : Moscow: keynote for the first Kocherga year, text analysis, rationality applications discussion WHEN: 18 September 2016 02:00:00PM (+0300) WHERE: Москва, ул. Большая Дорогомиловская, д.5к2 Note: most our members join meetups via other channels. Still, the correlation between "found out about Moscow meetups via lesswrong.com" and "is a great fit for our community" is very high. So we're posting just a short link to the hackpad document with the schedule here instead of the full translation of the announcement into English. Pad with the details about 18.09.2016 meetup. We're meeting at the "Kocherga" anticafe, as usual. Discussion article for the meetup : Moscow: keynote for the first Kocherga year, text analysis, rationality applications discussion
0b1a93f0-c2a0-4fef-934e-5cde38a84dd9
StampyAI/alignment-research-dataset/lesswrong
LessWrong
List of requests for an AI slowdown/halt. About this document =================== There has been a recent flurry of letters/articles/statements/videos which endorse a slowdown or halt of colossal AI experiments via (e.g.) regulation or coordination. This document aspires to collect all examples into a single centralised list. I'm undecided on how best to order and subdivide the examples, but I'm open to suggestions. As a disclaimer, this list is... * **Living —** I'll try to update the list over time. * **Non-exhaustive —** There are almost certainly examples I've missed. * **Non-representative —** The list is biased, at least initially, towards things that I have been shown personally. *Please mention in the comments any examples I've missed so I can add them!* List of slowdown/halt AI requests ================================= **Last updated: April 14th 2023.** (Note that I'm also including surveys.) * [Pause Giant AI Experiments: An Open Letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) by [Future of Life Institute](https://futureoflife.org/) * [Pausing AI Developments Isn't Enough. We Need to Shut it All Down](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/) by [Eliezer Yudkowsky](https://www.lesswrong.com/users/eliezer_yudkowsky) * [We must slow down the race to God-like AI](https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2) by [Ian Hogarth](https://www.ft.com/stream/7976f4c9-70b5-449a-9cc4-5e98a8dd5df7) * [The A.I. Dilemma](https://www.youtube.com/watch?v=xoVJKj8lcNQ) by the [Center for Humane Technology](https://www.youtube.com/@CenterforHumaneTechnology) * [The case for slowing down AI](https://www.vox.com/the-highlight/23621198/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology) [[1]](#fn9787e180o9w) by [Sigal Samuel](https://www.vox.com/authors/sigal-samuel) * [The Case for Halting AI Development](https://www.youtube.com/watch?v=VcVfceTsD0A) by [Max Tegmark](https://physics.mit.edu/faculty/max-tegmark/), [Lex Fridman](https://www.youtube.com/@lexfridman) * [Lennart Heim on Compute Governance](https://www.youtube.com/watch?v=iCxJUDDvq94) by [Lennart Heim](https://heim.xyz/about/), [Future of Life Institute](https://www.youtube.com/@futureoflifeinstitute2787) * [Let’s think about slowing down AI](https://www.alignmentforum.org/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai) by [KatjaGrace](https://www.alignmentforum.org/users/katjagrace) * [The 0.2 OOMs/year target](https://www.alignmentforum.org/posts/9xfRjaKDTb57BaGWv/the-0-2-ooms-year-target) by [Cleo Nardo](https://www.alignmentforum.org/users/cleo-nardo) * [AI Summer Harvest](https://www.lesswrong.com/posts/P98i7kAN2uWuy7mhD/ai-summer-harvest) by [Cleo Nardo](https://www.lesswrong.com/users/cleo-nardo) * [Instead of technical research, more people should focus on buying time](https://www.lesswrong.com/posts/BbM47qBPzdSRruY4z/instead-of-technical-research-more-people-should-focus-on) by [Akash](https://www.lesswrong.com/users/akash-wasil), [Olivia Jimenez](https://www.lesswrong.com/users/olivia-jimenez-1), [Thomas Larsen](https://www.lesswrong.com/users/thomas-larsen) * [Slowing down AI progress is an underexplored alignment strategy](https://forum.effectivealtruism.org/posts/6LNvQYyNQpDQmnnux/slowing-down-ai-progress-is-an-underexplored-alignment) by [Michael Huang](https://forum.effectivealtruism.org/users/michael-huang) * [Slowing Down AI: Rationales, Proposals, and Difficulties](https://navigatingairisks.substack.com/p/slowing-down-ai-rationales-proposals) [[1]](#fn9787e180o9w) by [Simeon Campos](https://substack.com/profile/85867283-simeon-campos), [Henry Papadatos](https://substack.com/profile/41973448-henry-papadatos), [Charles M](https://substack.com/profile/69835945-charles-m) * [What an actually pessimistic containment strategy looks like](https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like) by [lc](https://www.lesswrong.com/users/lc) * [In the Matter of OpenAI (FTC 2023)](https://www.caidp.org/cases/openai/) [[1]](#fn9787e180o9w) by [Center for AI and Digital Policy](https://www.caidp.org/) * [We need a Butlerian Jihad against AI](https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against) [[2]](#fnkkxl9mxb6hr) by [Erik Hoel](https://substack.com/profile/9379583-erik-hoel) * [Dangers of AI and the End of Human Civilization](https://www.youtube.com/watch?v=AaTRHFaaPG8) by [Eliezer Yudkowsky](https://www.lesswrong.com/users/eliezer_yudkowsky), [Lex Fridman](https://www.youtube.com/@lexfridman) * [We’re All Gonna Die with Eliezer Yudkowsky](https://www.youtube.com/watch?v=gA1sNLL6yg4) by [Eliezer Yudkowsky](https://www.lesswrong.com/users/eliezer_yudkowsky), [Bankless](https://www.youtube.com/@Bankless) * [The public supports regulating AI for safety](https://www.lesswrong.com/posts/M3iPAmxZwy4gPXdXw/the-public-supports-regulating-ai-for-safety) by [Zach Stein-Perlman](https://www.lesswrong.com/users/zach-stein-perlman) * [New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development](https://www.lesswrong.com/posts/p8suSXSwEoKTfGbp9/new-survey-46-of-americans-are-concerned-about-extinction) by [Akash](https://www.lesswrong.com/users/akash-wasil) 1. **[^](#fnref9787e180o9w)**Credit to [Zach Stein-Perlman](https://www.lesswrong.com/users/zach-stein-perlman). 2. **[^](#fnrefkkxl9mxb6hr)**Credit to [MM Maas](https://www.lesswrong.com/users/matthijs-maas).
dc4685ff-1e50-4dd5-b117-f421a78d93a0
trentmkelly/LessWrong-43k
LessWrong
Analyzing the Historical Rate of Catastrophes To communicate risks, we often turn to stories. Nuclear weapons conjure stories of mutually assured destruction, briefcases with red buttons, and nuclear winter. Climate change conjures stories of extreme weather, cities overtaken by rising sea levels, and crop failures. Pandemics require little imagination after COVID, but were previously the subject of movies like Contagion. Stories are great for conveying concrete risks (I myself recently did this for AI risks), but they’re a bad way to predict the future. That’s because most stories are far too specific to be probable. More importantly, stories tend to feature short, simple chains of causation while reality is complex and multi-causal. Instead of using stories, most competitive forecasters start their forecasts by looking at historical reference classes. This works really well, and also makes sense: history pulls us away from the biases of story-telling by grounding in events that actually occurred. While history is filtered through narratives, a good history will contend with the complexity of reality, and we can further strip away narrative by grounding in raw numbers.[1] In this post, I’ll use reference classes to understand the largest risks society faces today. I’ll do this by considering two different reference classes for historical catastrophes: * Events that killed a significant fraction of the global human population (Section 1) * Extinctions of species, and especially mass extinction events (Section 2) Looking at these reference classes teaches us two things. First, it gives us a numerical estimate of how rare different catastrophes are. If we define a catastrophe as an event killing 1% of the global population within a decade, then 11 such catastrophes have occurred since 1500, for a base rate of 2% per year. If we raise the bar to killing 10% of the population, the base rate drops by an order of magnitude, to 0.2%. History also gives us qualitative insights. For instance, all the catastrophe
1d53e604-655f-4af7-87d0-843bcbd4a8a9
trentmkelly/LessWrong-43k
LessWrong
The Future of Humanity Institute could make use of your money Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability. Academic research is generally funded through grants, but because the FHI is researching important but unusual problems, and because this research is multi-disciplinary, we've found it difficult to attract funding from the usual grant bodies. This has meant that we’ve had to prioritise a certain number of projects that are not perfect for existential risk reduction, but that allow us to attract funding from interested institutions. With more assets, we could both liberate our long-term researchers to do more "pure Xrisk" research, and hire or commission new experts when needed to look into particular issues (such as synthetic biology, the future of politics, and the likelihood of recovery after a civilization collapse). We are not in any immediate funding crunch, nor are we arguing that the FHI would be a better donation target than MIRI, CSER, or the FLI. But any donations would be both gratefully received and put to effective use. If you'd like to, you can donate to FHI here. Thank you!
aee75276-7f32-481a-bb41-3d4655ac2924
StampyAI/alignment-research-dataset/blogs
Blogs
Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations Josh A. Goldstein1,3, Girish Sastry2, Micah Musser1, Renée DiResta3, Matthew Gentzel2, and Katerina Sedova1 1Georgetown University’s Center for Security and Emerging Technology 2OpenAI 3Stanford Internet Observatory January 2023 Workshop Participants: Steven Adler, Shahar Avin, John Bansemer, Chris Bregler, Miles Brundage, Sam Gregory, Shelby Grossman, Ariel Herbert-Voss, Yacine Jernite, Claire Leibowicz, Connor Leahy, Herb Lin, Drew Lohn, Meg Mitchell, Amnon Morag, Alex Newhouse, Helen Ngo, Aviv Ovadya, Cooper Raterink, Yoel Roth, Bob Rotsted, Elizabeth Seger, and Raymond Serrato. Acknowledgements: We thank participants in the October 2021 workshop that we convened for inform- ing our understanding of various threats and mitigations. We also thank many workshop participants for providing feedback on a draft of this paper. For additional feedback on the paper, we thank Deepesh Chaudhari, Jeff Ding, Tyna Elondou, Shengli Hu, Daniel Kokotajlo, Gretchen Krueger, Pamela Mishkin, Ronald Robertson, Sarah Shoker, Samuel Wolrich, and Jenny Xiao. Josh Goldstein began working on the project as a postdoctoral fellow at Stanford, and continued work as a research fellow with Georgetown CSET’s CyberAI Project. Matthew Gentzel completed his contributions while contracting for OpenAI, and is now at Longview Philanthropy. Katerina Sedova completed her contributions to this project while she was a research fellow with Georgetown CSET’s CyberAI Project and before she entered U.S. government service. All errors remain our own. Lead authors contributed equally.arXiv:2301.04246v1 [cs.CY] 10 Jan 2023 Contents Executive Summary 1 1 Introduction 5 1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2 Threats and Mitigations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.3 Scope and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.4 Outline of the Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2 Orienting to Influence Operations 9 2.1 What Are Influence Operations, and Why Are They Carried Out? . . . . . . . . . . . . . . . . 9 2.2 Influence Operations and Impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 3 Recent Progress in Generative Models 15 3.1 What Are Generative Models, and How Are They Built? . . . . . . . . . . . . . . . . . . . . . 15 3.2 Access and Diffusion of Generative Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 4 Generative Models and Influence Operations 22 4.1 Language Models and the ABCs of Disinformation . . . . . . . . . . . . . . . . . . . . . . . . . 22 4.2 Expected Developments and Critical Unknowns . . . . . . . . . . . . . . . . . . . . . . . . . . 29 5 Mitigations 38 5.1 A Framework for Evaluating Mitigations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 5.2 Model Design and Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 5.3 Model Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 5.4 Content Dissemination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 5.5 Belief Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 6 Conclusions 63 6.1 Language Models Will Likely Change Influence Operations . . . . . . . . . . . . . . . . . . . 63 6.2 There Are No Silver Bullet Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 6.3 Collective Responses Are Needed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 6.4 Mitigations Must Address Demand As Well As Supply . . . . . . . . . . . . . . . . . . . . . . . 65 6.5 Further Research Is Necessary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 References 67 Executive Summary In recent years, artificial intelligence (AI) systems have significantly improved and their capabilities have expanded. In particular, AI systems called “generative models” have made great progress in automated content creation, such as images generated from text prompts. One area of particularly rapid devel- opment has been generative models that can produce original language, which may have benefits for diverse fields such as law and healthcare. However, there are also possible negative applications of generative language models, or “language models” for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor’s interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor. For society, these developments bring a new set of concerns: the prospect of highly scalable—and perhaps even highly persuasive—campaigns by those seeking to covertly influence public opinion. This report aims to assess: how might language models change influence operations, and what steps can be taken to mitigate these threats? This task is inherently speculative, as both AI and influence operations are changing quickly. Many ideas in the report were informed by a workshop convened by the authors in October 2021, which brought together 30 experts across AI, influence operations, and policy analysis to discuss the potential impact of language models on influence operations. The resulting report does not represent the consensus of workshop participants, and mistakes are our own. We hope this report is useful to disinformation researchers who are interested in the impact of emerging technologies, AI developers setting their policies and investments, and policymakers preparing for social challenges at the intersection of technology and society. Potential Applications of Language Models to Influence Operations We analyzed the potential impact of generative language models on three well-known dimensions of influence operations—the actors waging the campaigns, the deceptive behaviors leveraged as tactics, and the content itself—and conclude that language models could significantly affect how influence operations are waged in the future. These changes are summarized in Table 1. The potential of language models to rival human-written content at low cost suggests that these mod- els—like any powerful technology—may provide distinct advantages to propagandists who choose to use them. These advantages could expand access to a greater number of actors, enable new tactics of influence, and make a campaign’s messaging far more tailored and potentially effective. Progress in Influence Operations and Critical Unknowns Technical progress in language models is unlikely to halt, so any attempt to understand how language models will affect future influence operations needs to take expected progress into account. Language models are likely to become more usable (making it easier to apply models to a task), reliable (reduc- ing the chance that models produce outputs with obvious errors), and efficient (increasing the cost- effectiveness of applying a language model for influence operations). 1 Dimension1 Potential Change Due to Generative AI TextExplanation of Change Larger number and more diverse group of propagandists emerge.As generative models drive down the cost of generating propaganda, more actors may find it attractive to wage influence operations. Actors Outsourced firms become more important.Propagandists-for-hire that automate the pro- duction of text may gain new competitive ad- vantages. Automating content production increases scale of campaigns.Propaganda campaigns will become easier to scale when text generation is automated. Existing behaviors become more efficient.Expensive tactics like cross-platform testing may become cheaper with language models. Behavior Novel tactics emerge.Language models may enable dynamic, person- alized, and real-time content generation like one-on-one chatbots. Messages are more credible and persuasive.Generative models may improve messaging compared to text written by propagandists who lack linguistic or cultural knowledge of their tar- get.Content Propaganda is less discoverable.Existing campaigns are frequently discovered due to their use of copy-and-pasted text (copy- pasta), but language models will allow the pro- duction of linguistically distinct messaging. Table 1: How Language Models May Affect the ABCs of Influence Operations These factors lead us to make a high confidence judgment that language models will be useful for influ- ence operations in the future. The exact nature of their application, however, is unclear. There are several critical unknowns that will impact how, and the extent to which, language models will be adopted for influence operations. These unknowns include: •Which new capabilities for influence will emerge as a side-effect of well-intentioned re- search? The conventional research process—which targets more general language tasks—has resulted in systems that could be applied to influence operations. New capabilities, like producing longform persuasive arguments, could emerge in the future. These emergent capabilities are hard to anticipate with generative models, but could determine the specific tasks propagandists will use language models to perform. •Will it be more effective to engineer specific language models for influence operations, rather than apply generic ones? While most current models are built for generic tasks or tasks of sci- entific or commercial value, propagandists could build or adapt models to be directly useful for tasks like persuasion and social engineering. For example, a propagandist may be able to adapt a smaller, less capable model in a process known as fine-tuning. This is likely cheaper than building a larger, more general model, though it is uncertain how much cheaper this would be. Further- more, fine-tuning a state-of-the-art model could make novel capabilities for influence easier for propagandists to obtain. 1. Dimension categories drawn from Camille François’s “Disinformation ABC” framework. 2 •Will actors make significant investments in language models over time? If many actors invest in, and create, large language models, it will increase the likelihood of propagandists gaining access to language models (legitimately or via theft). Propagandists themselves could invest in creating or fine-tuning language models, incorporating bespoke data—such as user engagement data—that optimizes for their goals. •Will governments or specific industries create norms against using models for propaganda purposes? Just as norms around use constrain the misuse of other technologies, they may con- strain the application of language models to influence operations. A coalition of states who agree not to use language models for propaganda purposes could impose costs on those that fail to abide. On a substate level, research communities and specific industries could set norms of their own. •When will easy-to-use tools to generate text become publicly available? Language models still require operational know-how and infrastructure to use skillfully. Easy-to-use tools that pro- duce tweet- or paragraph-length text could lead existing propagandists who lack machine learning know-how to rely on language models. Because these are critical possibilities that can change how language models may impact influence op- erations, additional research to reduce uncertainty is highly valuable. What Can Be Done to Mitigate the Potential Threat? Building on the workshop we convened in October 2021, and surveying much of the existing literature, we attempt to provide a kill chain framework for, and a survey of, the types of different possible mitiga- tion strategies. Our aim is not to endorse specific mitigations, but to show how mitigations could target different stages of the influence operation pipeline. 3 What Propagandists RequireStage of InterventionIllustrative Mitigations 1. Language Models Capable of Producing Realistic TextModel Design and ConstructionAI Developers Build Models That Are More Fact- Sensitive Developers Spread Radioactive Data to Make Gen- erative Models Detectable Governments Impose Restrictions on Data Collec- tion Governments Impose Access Controls on AI Hard- ware 2. Reliable Access to Such ModelsModel AccessAI Providers Impose Stricter Usage Restrictions on Language Models AI Developers Develop New Norms Around Model Release 3. Infrastructure to Distribute the Generated ContentContent DisseminationPlatforms and AI Providers Coordinate to Identify AI Content Platforms Require “Proof of Personhood” to Post Entities That Rely on Public Input Take Steps to Re- duce Their Exposure to Misleading AI Content Digital Provenance Standards Are Widely Adopted 4. Susceptible Target AudienceBelief FormationInstitutions Engage in Media Literacy Campaigns Developers Provide Consumer Focused AI Tools Table 2: Summary of Example Mitigations The table above demonstrates that there is no silver bullet that will singularly dismantle the threat of language models in influence operations. Some mitigations are likely to be socially infeasible, while others will require technical breakthroughs. Others may introduce unacceptable downside risks. Instead, to effectively mitigate the threat, a whole of society approach, marrying multiple mitigations, will likely be necessary. Furthermore, effective management will require a cooperative approach among different institutions such as AI developers, social media companies, and government agencies. Many proposed mitigations will have a meaningful impact only if these institutions work together. It will be difficult for social media companies to know if a particular disinformation campaign uses language models unless they can work with AI developers to attribute that text to a model. The most radical mitigations—such as inserting content provenance standards into the protocols of the internet—would require extreme coordination, if they are desirable at all. Perhaps most importantly, the mitigations we highlight require much more development, scrutiny, and research. Evaluating their effectiveness and robustness is worthy of serious analysis. 4 1 Introduction 1.1 Motivation In recent years, as the capabilities of generative artificial intelligence (AI) systems—otherwise known as “generative models”—have improved, commentators have hypothesized about both the potential bene- fits and risks associated with these models. On the one hand, generative AI systems open up possibilities in fields as diverse as healthcare, law, education, and science.2For example, generative models are be- ing used to design new proteins,3generate source code,4and inform patients.5Yet the rapid speed of technological progress has made it difficult to adequately prepare for, or even understand, the poten- tial negative externalities of these models. Early research has suggested that bias in model generations could exacerbate inequalities, that models could displace human workers, and that, in the wrong hands, models could be intentionally misused to cause societal harm.6 Concurrently, the last decade has seen a rise in political influence operations—covert or deceptive ef- forts to influence the opinions of a target audience—online and on social media platforms specifically. Researchers and social media platforms have documented hundreds of domestic and foreign influence operations that are designed to mislead target audiences.7In the United States, the US intelligence com- munity has publicly stated that foreign governments, including Russia and Iran, have waged influence operations targeting the 2016 and 2020 US presidential elections.8 In this paper, we focus on the overlap between these two trends. First, we ask: How can language mod- els, a form of generative AI that can produce original text, impact the future of influence operations? While several studies have addressed specific applications, we provide frameworks for thinking through different types of changes and highlight critical unknowns that will affect the ultimate impact. By high- lighting the technology’s current limitations and critical unknowns, we attempt to avoid threat inflation or a sole focus on doomsday scenarios. After developing the threats, we ask: What are the possible mitigation strategies to address these various threats? Our paper builds on a yearlong collaboration between OpenAI, the Stanford Internet Observatory (SIO), and Georgetown’s Center for Security and Emerging Technology (CSET). In October 2021, we convened 2. Rishi Bommasani et al., “On the Opportunities and Risks of Foundation Models,” arxiv:2108.07258 [cs.LG ], August 2021, https: //doi.org /10.48550 /arxiv.2108.07258. 3. Mohammed AlQuraishi, “Machine learning in protein structure prediction,” Current Opinion in Chemical Biology 65 (De- cember 2021): 1–8, ISSN: 1367-5931, https: //doi.org /10.1016 /J.CBPA.2021.04.005. 4. “ML-Enhanced Code Completion Improves Developer Productivity,” Google AI Blog, accessed July 28, 2022, https: //ai. googleblog.com /2022/07/ml-enhanced-code-completion-improves.html. 5. Maguire Herriman et al., “Asked and Answered: Building a Chatbot to Address Covid-19-Related Concerns,” NEJM Catalyst Innovations in Care Delivery , June 18, 2020, https: //catalyst.nejm.org /doi/full/10.1056 /CAT.20.0230. 6. See for example Mark Chen et al., “Evaluating Large Language Models Trained on Code,” arxiv:2107.03374 [cs.LG ], July 14, 2021, https: //doi.org /10.48550 /arxiv.2107.03374; Bommasani et al., “On the Opportunities and Risks of Foundation Models”; Sarah Kreps, R. Miles McCain, and Miles Brundage, “All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation,” Journal of Experimental Political Science 9, no. 1 (November 2022): 104–117, ISSN: 2052-2630, https: //doi.org /10.1017 /XPS.2020.37; Ben Buchanan et al., Truth, Lies, and Automation: How Language Models Could Change Disinformation (Center for Security and Emerging Technology, May 2021), https: //doi.org /10.51593 /2021CA003. 7. For a list of influence operations removed from Facebook alone, see Nathaniel Gleicher et al., Threat Report: The State of Influence Operations 2017-2020 (Meta, May 2021), https: //about.fb.com /news/2021/05/influence-operations-threat-report / 8. National Intelligence Council, Intelligence Community Assessment: Foreign Threats to the 2020 US Federal Elections (Na- tional Intelligence Council, March 10, 2021), https: //int.nyt.com /data/documenttools /2021- intelligence- community- election-interference-assessment /abd0346ebdd93e1e /full.pdf. 5 a two-day workshop among 30 disinformation and machine learning experts in industry and academia to discuss the emerging threat as well as potential mitigations. This paper builds on the whitepaper that we circulated to workshop participants, the workshop itself, and subsequent months of research. We thank workshop participants for helping to clarify potential vectors of abuse and possible mitigations, and note that our report does not necessarily reflect the views of the participants. 1.2 Threats and Mitigations How can language models affect the future of influence operations? To address this question, we build on the ABC model — Actors, Behaviors, and Content — from the disinformation literature.9Language models can affect which actors wage influence operations, how they do so, and what content they produce. •Actors: Language models drive down the cost of generating propaganda—the deliberate attempt to shape perceptions and direct behavior to further an actor’s interest10—so more actors may find it attractive to wage these campaigns.11Likewise, propagandists-for-hire that automate production of text may gain new competitive advantages. •Behavior: Recent AI models can generate synthetic text that is highly scalable, and often highly persuasive.12Influence operations with language models will become easier to scale, and more ex- pensive tactics (e.g., generating personalized content) may become cheaper. Moreover, language models could enable new tactics to emerge—like real-time content generation in one-on-one chat- bots. •Content: Language models may create more impactful messaging compared to propagandists who lack linguistic or cultural knowledge of their target. They may also make influence operations less discoverable, since they create new content with each generation. When considering these predicted changes, it is also important to remember that AI development is progressing rapidly. We highlight critical unknowns that will impact the future of influence operations, including how models will improve, whether new capabilities will emerge as a product of scale, whether actors invest in AI for influence operations, and whether norms emerge that constrain different actors from automating their influence campaigns. What mitigations could reduce the impact of AI-enabled influence operations? After laying out potential threats, we also consider the range of possible mitigation strategies to influence operations with language models. We develop a framework that categorizes mitigations based on a kill 9. Camille François, Actors, Behaviors, Content: A Disinformation ABC Highlighting Three Vectors of Viral Deception to Guide Industry & Regulatory Responses (Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression, September 2019), https: //science.house.gov /download /francois-addendum. 10. Garth Jowett and Victoria O’Donnell, Propaganda & Persuasion , 6th ed. (SAGE Publications, 2014), ISBN: 1483323528; Philip M. Taylor, Munitions of the mind: a history of propaganda from the ancient world to the present era (Manchester University Press, 2003), ISBN: 978-1-84779-092-7. 11. We include a rough cost-effectiveness calculation in Section 4.1.3; see also Micah Musser, “A Cost Analysis of Generative Language Models and Influence Operations,” (Working Paper) . 12. Buchanan et al., Truth, Lies, and Automation: How Language Models Could Change Disinformation ; Josh A. Goldstein et al., “Can AI write persuasive propaganda?,” (Working Paper) . 6 chain framework. To effectively wage an influence operation with a language model, propagandists would require (1) that a model is built (by themselves or others), (2) that they have access to the model, (3) that they have the means of disseminating content they produce, and (4) that the information spread impacts the target. Each of these steps—model design and construction, model access, content dissemination, and belief formation—represents a possible stage for intervention. 1.3 Scope and Limitations This paper focuses on a particular application of AI (language models) to influence operations, but it does not focus on other AI models, other forms of information control, or specific actors. As described above, generative models include models that can create a range of output. The idea of AI-generated “deepfaked” images or video has been in the public consciousness for several years now.13Recently, for example, a low-quality deepfake video of Ukrainian President Volodymyr Zelensky purportedly telling Ukrainian soldiers to lay down their arms and surrender circulated on social media.14Higher-quality deepfake videos have also gained traction in the past.15We focus on generative text, rather than videos, images, or multimodal models for three reasons: first, because text is relatively underexplored (com- pared to images and videos) in the disinformation literature, second, because text seems particularly dif- ficult to distinguish as AI-generated, and third, because access to these capabilities is diffusing quickly.16 While multimodal models are also new and relatively underexplored, they are not our primary focus. Our focus on how language models can be used for influence operations scopes our study more nar- rowly than information control writ large. State and non-state actors engage in a variety of information control behaviors, ranging from censorship to manipulating search algorithms. One recent framework categorizes different forms of digital repression, and notes that these techniques are as distinct as “on- line disinformation campaigns, digital social credit schemes, private online harassment campaigns by lone individuals, and regime violence against online political actors.”17While we take digital repression seriously, a fuller examination of categories of digital repression other than covert propaganda cam- paigns—and how those categories are affected by AI—falls outside our scope. Our scope is relevant to a variety of state, substate, and private actors; we do not focus on any one actor specifically. Although the intentions and capabilities of specific actors is relevant to assess the likelihood 13. Claire Wardle, “This Video May Not Be Real,” New York Times , August 19, 2019, https: //www.nytimes.com /2019 /08/ 14/opinion /deepfakes-adele-disinformation.html; Tim Hwang, Deepfakes: A Grounded Threat Assessment (Center for Security and Emerging Technology, July 2020), https: //doi.org /10.51593 /20190030; Kelly M. Sayler and Laurie A. Harris, “Deep Fakes and National Security,” Congressional Research Services , 2022, https: //crsreports.congress.gov; Luisa Verdoliva, “Media Forensics and DeepFakes: An Overview,” IEEE Journal on Selected Topics in Signal Processing 14, no. 5 (January 2020): 910–932, ISSN: 19410484, https: //doi.org /10.1109 /JSTSP.2020.3002101; Hany Farid, “Creating, Using, Misusing, and Detecting Deep Fakes,” Journal of Online Trust and Safety 1, no. 4 (September 2022), ISSN: 2770-3142, https: //doi.org /10.54501 /JOTS.V1I4. 56. 14. Bobby Allyn, “Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn,” NPR, March 16, 2022, https: //www.npr.org /2022/03/16/1087062648 /deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia. 15. Rachel Metz, “How a deepfake Tom Cruise on TikTok turned into a very real AI company,” CNN, August 6, 2021, https: //edition.cnn.com /2021/08/06/tech/tom-cruise-deepfake-tiktok-company. 16. This is true on two levels: first, the set of institutions that have trained their own highly capable language model from scratch has expanded rapidly over the past two years. Second, public access to many of those models has widened over time. For instance, while GPT-3 was initially released behind a sharply restricted API, it has since considerably loosened its access restrictions, allowing a larger number of people to use the model. And other, only slightly less capable models have been made fully public, with no use restrictions at all. See Section 3.2. 17. Jennifer Earl, Thomas V . Maher, and Jennifer Pan, “The digital repression of social movements, protest, and activism: A synthetic review,” Science Advances 8 (October 2022): 8198, https: //www.science.org /doi/pdf/10.1126 /sciadv.abl8198. 7 of future use of language models for influence operations, our focus is primarily on the technology and trends. For example, we describe tactics that could be deployed in a range of settings, rather than applications of AI to influence operations in highly specific political contexts. Additional research can expand on this paper to consider how specific groups may (or may not) use different language models for the types of influence campaigns we describe. A paper on how current and future technological developments may impact the nature of influence operations is inherently speculative. Today, we know that it is possible to train a model and output its content—without notifying social media users—on platforms. Likewise, existing research shows that language models can produce persuasive text, including articles that survey respondents rate as credible as real news articles.18However, many of the future-oriented possibilities we discuss in the report are possibilities rather than inevitabilities, and we do not claim any one path will necessarily come to fruition. Similarly, our goal in this report is not to explicitly endorse any one mitigation, or any specific set of mitigations. Rather, we aim to lay out a range of possibilities that researchers and policymakers can consider in greater detail. We also recognize that our backgrounds may result in a biased perspective: several authors work for AI developers directly, and we do not represent many of the communities that AI-enabled influence operations may affect. We encourage future research to pay particular attention to likely differential impacts and to conduct surveys of those most at risk or susceptible to AI-enabled campaigns. 1.4 Outline of the Report The remainder of this report proceeds as follows: In Section 2, we provide an overview of influence operations, introducing key terminology, describing what influence operations are and how they are carried out, as well as providing a framework to distinguish between impact based on content and downstream impact based on trust. We focus primarily on online influence operations, in part because they are a frequent vector for text-based campaigns. In Section 3, we overview recent development in generative models and describe current access and diffusion of capabilities. In Section 4, we tie these two concepts together by examining how recent generative models could affect the future of influence operations. We describe how language models will impact the actors, behavior, and content of existing campaigns, and we highlight expected developments in the technology and critical unknowns. The longest section of this paper is Section 5, where we move from threats to mitigations. We classify a range of potential mitigations along four key stages in the AI-to-target pipeline: model construction, model access, content dissemination, and belief formation. We conclude in Section 6 with overarching takeaways. We suggest that newer generative models have a high probability of being adopted in future influence operations, and that no reasonable mitigations can be expected to fully prevent this. However, we also suggest that a combination of multiple mitigation strategies may make an important difference and that many of these mitigations may require the formation of new collaborations between social media platforms, AI companies, government agencies, and civil society actors. In addition, we highlight several avenues for future research. 18. Kreps, McCain, and Brundage, “All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation.” 8 2 Orienting to Influence Operations Following Russia’s interference in the 2016 US election, the study of online influence operations and disinformation has grown dramatically. In this section, we begin with an overview of influence opera- tions—what they are, why they are carried out, and the types of impacts they may (or may not) have. 2.1 What Are Influence Operations, and Why Are They Carried Out? While there is some debate about what activities constitute an influence operation,19in this report, we define influence operations as covert ordeceptive efforts to influence the opinions of a target audience.20 Of note, our definition is agnostic to the truth of the message (whether the content spread is true or false) and the identity of the actor spreading it. Influence operations include operations that intend to activate people who hold particular beliefs, to persuade an audience of a particular viewpoint, and /or to distract target audiences. The logic of dis- traction rests on the idea that propagandists are in competition for user attention on social media plat- forms, which is already spread thin.21If propagandists can distract target audiences from an unfavorable narrative taking shape on social media—by spreading alternative theories or diluting the information environment—they could successfully absorb user attention without necessarily persuading them. Influence operations can come in many forms and use an array of tactics, but a few unifying themes tie many of them together. A recent report studying political influence operations in the Middle East22 found that operations often exhibited one of several tactics: • Attempts to cast one’s own government, culture, or policies in a positive light • Advocacy for or against specific policies • Attempts to make allies look good and rivals look bad to third-party countries • Attempts to destabilize foreign relations or domestic affairs in rival countries In several of these cases, the accounts executing the operation masqueraded as locals expressing dis- content with their government or certain political figures. Social media manipulation operations often employ this tactic of digital agents of influence , hiding the identity of the true information source from 19. Alicia Wanless and James Pamment, “How Do You Define a Problem Like Influence?,” Journal of Information Warfare 18, no. 3 (2019): 1–14, https: //www.jstor.org /stable /26894679. 20. Josh A. Goldstein, “Foreign Influence Operations in the Cyber Age” (PhD diss., University of Oxford, 2021), https: //ethos. bl.uk/OrderDetails.do?uin =uk.bl.ethos.840171; Ben Nimmo, The Breakout Scale: Measuring the impact of influence operations (Brookings Institution, September 2020), https: //www.brookings.edu /research /the-breakout-scale-measuring-the-impact- of-influence-operations /. 21. On attention economies and bounded rationality, see Elizabeth Seger et al., Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world (The Alan Turing Institute, October 14, 2020), https: //doi.org /10.17863 /CAM.64183. 22. M.A. et al., “Middle East Influence Operations: Observations Across Social Media Takedowns,” Project on Middle East Political Science , August 2021, https: //pomeps.org /middle- east- influence- operations- observations- across- social- media- takedowns. 9 the target audience.23Russia’s Internet Research Agency (IRA) accounts, for example, pretended to be Black Americans and conservative American activists, and directly messaged members of each targeted community. Identifying these inauthentic accounts often relies on subtle cues: a misused idiom, a re- peated grammatical error, or even the use of a backtick ( /grave.ts1) where an authentic speaker would use an apostrophe (‘). State-level adversarial actors often run a combination of tactics, leveraging their own employees or outsourcing to digital mercenaries. Since 2016, Meta and Twitter have removed well over a hundred social media influence operations, stemming from dozens of different countries.24These operations often include persona creation (cre- ating fake identities to spread a message), fake news properties, and inauthentic amplification efforts. But influence operations have also expanded significantly beyond Facebook and Twitter and into al- ternative platforms, small group settings, and encrypted spaces.25Reporting from the New York Times , in hand with Israeli disinformation researchers, documented how “Iranian agents had infiltrated small [Israeli ]WhatsApp groups, Telegram channels and messaging apps” to spread polarizing content.26At times these influence operations display novel ingenuity, leveraging platform policies in an adversarial fashion. A campaign supporting the Tanzanian government that was removed by Twitter in 2021, for example, used false claims of copyright reporting to target Tanzanian activists’ accounts.27 Much of the recent research and public attention on influence operations focuses on foreign campaigns—where governments or citizens in one country target citizens in a different country.28But, as the Tanzania ex- ample shows, influence operations can also be domestically focused. Political actors frequently spread covert propaganda targeting their citizens in order to boost their popularity, undermine that of an op- ponent, or sow confusion in the political system. In 2020, Facebook suspended fake personas spreading polarizing content about Brazilian politics that were linked to Brazilian lawmakers as well as President Jair Bolsonaro and his sons, Congressman Eduardo Bolsonaro and Senator Flavio Bolsonaro.29In fact, 23. Russia, for example, leverages personas that speak as if they are members of the targeted communities. Some of the personas produce short-form content, such as tweets and Facebook posts. Others masquerade as journalists and write long- form narrative content that they then submit to legitimate publications or publish on Russian self-administered proxy “media outlets” or “think tanks.” For examples in the Russia context, see Renee DiResta and Shelby Grossman, Potemkin Pages & Personas: Assessing GRU Online Operations, 2014-2019 (Stanford Internet Observatory, 2019), https: //cyber.fsi.stanford.edu / io/publication /potemkin- think- tanks. For another example, see Adam Rawnsley, “Right-Wing Media Outlets Duped by a Middle East Propaganda Campaign,” The Daily Beast, July 7, 2020, https: //www.thedailybeast.com /right- wing- media- outlets-duped-by-a-middle-east-propaganda-campaign. For a variant of this tactic leveraging compromised websites, see Mandiant, ‘Ghostwriter’ Influence Campaign: Unknown Actors Leverage Website Compromises and Fabricated Content to Push Narratives Aligned with Russian Security Interests (Mandiant), https: //www.fireeye.com /content /dam/fireeye-www /blog/ pdfs/Ghostwriter-Influence-Campaign.pdf. For examples of front proxy media sites and “think tanks,” see Pillars of Russia’s Disinformation and Propaganda Ecosystem (U.S. Department of State, August 2020), https: //www.state.gov /russias-pillars- of-disinformation-and-propaganda-report / 24. Disinfodex (August 2020), database distributed by Carnegie Endowment for International Peace, https: //disinfodex.org /; Gleicher et al., Threat Report: The State of Influence Operations 2017-2020 . Note, these are only the operations that have been found and publicly reported. Because influence operations are typically designed to be kept secret, it likely reflects an undercount of all operations on these platforms. 25. Graphika, Posing as Patriots (Graphika, June 2021), https: //graphika.com /reports /posing-as-patriots. 26. Sheera Frenkel, “Iranian Disinformation Effort Went Small to Stay Under Big Tech’s Radar,” New York Times , June 30, 2021, https: //www.nytimes.com /2021/06/30/technology /disinformation-message-apps.html. 27. Shelby Grossman et al., “The New Copyright Trolls: How a Twitter Network Used Copyright Complaints to Harass Tan- zanian Activists,” Stanford Internet Observatory, December 2, 2021, https: //stacks.stanford.edu /file/druid:bt877dz8024 / 20211202-tz-twitter-takedown.pdf. 28. Claire Wardle, “The Media Has Overcorrected on Foreign Influence,” Lawfare , October 26, 2020, https: //www.lawfarebl og.com /media-has-overcorrected-foreign-influence. 29. Jack Stubbs and Joseph Menn, “Facebook suspends disinformation network tied to staff of Brazil’s Bolsonaro,” Reuters , July 8, 2020, https: //www.reuters.com /article /us- facebook- disinformation- brazil /facebook- suspends- disinformation- network-tied-to-staff-of-brazils-bolsonaro-idUSKBN2492Y5. 10 many commentators believe that domestic , not foreign, influence operations are the most worrisome.30 Influence operations have additionally been deployed to take sides in intraparty politics,31and, in the case of several attributed to the Chinese Communist Party, to target diaspora populations.32 2.2 Influence Operations and Impact Influence operations can have impact based on their specific content or focus (e.g., through persuasion), or by eroding community trust in the information environment overall. In current influence operations, direct impact from content is sometimes limited by resources, quality of the message, and detectability of the operation. These factors may matter differently depending on the goals of the operator—for instance, if operators are looking only to distract instead of to convince targets of a specific viewpoint, the quality of each individual message is likely far less significant. In theory, however, these constraints may be partially overcome by language models in the future. Having an effect on trust in an information environment depends less on the substance and more on creating the perception that any given message might be inauthentic or manipulative. Even if influence operations do not change someone’s views, they may lead people to question whether the content they see from even credible sources is in fact real, potentially undermining faith in democratic and epistemic institutions more broadly. 2.2.1 Impact Based on Content An influence operation could have impact based on content if it (1) persuades someone of a particular viewpoint or reinforces an existing one, (2) distracts them from finding or developing other ideas, or (3) distracts them from carving out space for higher quality thought at all. Often the goal is simply to distract from information that is potentially harmful to the operator.33As advertisers, media outlets, and platforms already compete for viewers, distraction operations can often exploit and exacerbate such preexisting attention competitions to crowd out important information with attention-grabbing, irrelevant information. Distraction operations therefore do not require a target to be persuaded by the information spread, but rather that a target not be persuaded by (or even consider) some other piece of information. There are both historical and contemporary examples where the impact of an influence operation can be clearly measured or traced. For example, in the 1980s during the HIV epidemic, the Soviet Union waged an influence operation spreading the claim that the United States government created the virus 30. Emerson T . Brooking and Jacob Shapiro, “Americans Were Worried About the Wrong Threat,” Atlantic, January 10, 2020, https: //www.theatlantic.com /ideas /archive /2021/01/bigger-threat-was-always-domestic /617618 /. 31. Shelby Grossman et al., Staying Current: An Investigation Into a Suspended Facebook Network Supporting the Leader of the Palestinian Democratic Reform Current (Stanford Internet Observatory, February 10, 2021), https: //purl.stanford.edu / tk756wp5109. 32. “Chinese propagandists court South-East Asia’s Chinese diaspora,” Economist, November 20, 2021, https: //www.econo mist.com /asia/2021/11/20/chinese-propagandists-court-south-east-asias-chinese-diaspora. 33. Gary King, Jennifer Pan, and Margaret E. Roberts, “How the Chinese Government Fabricates Social Media Posts for Strate- gic Distraction, Not Engaged Argument,” American Political Science Review 111, no. 3 (2017): 484–501, https: //doi.org /10. 1017/S0003055417000144. 11 in a lab. One 2005 study found that 27% of African Americans still believed this claim.34In 2016, the IRA used manipulative agents of influence on Facebook to provoke real-world conflict by organizing protests and counter-protests outside the Islamic Da’wah Center in Houston.35The impact is relatively easy to trace here because the protests would not have occurred without the IRA’s activity. A recent literature review examining social science research on the effects of influence operations found “strong evidence that long-term campaigns on mass media have measurable effects on beliefs and consequential behaviors such as voting and risk-taking combat.” While noting that evidence remains sparse, the study also found there is “some evidence that social media activity by exceptionally influential individuals and organizations can stoke low-level violence.”36 However, the impact and effectiveness of influence operations are usually difficult to measure. Disin- formation researchers typically focus on engagement metrics—things like clicks and shares—which are inadequate proxy measures of social influence.37In cases where a clear comparison group does not exist, it can be difficult to determine how viewing or engaging with content translates into important political outcomes like polarization or votes. While platforms make attributions and provide researchers with data about taken-down influence operations, researchers still have limited visibility into the impact on users or their subsequent behavior after engagement. Furthermore, not all influence operations are detected. Even propagandists who attempt to measure their own impact can face challenges given multi- causality and difficulties in measuring opinion change over time. As scholars have noted, this ambiguity has historically contributed to intelligence agencies inflating the impact of their influence operations for bureaucratic gain.38 Despite these measurement challenges, some features clearly limit the impact of existing campaigns, including resources, content quality and messaging, and detectability. We outline these limitations below, and discuss in the following section how generative models may help overcome these barriers. •Resources: Like marketing campaigns, the success of an influence operation is a function of re- sources and the ability to get the desired content in front of one’s target. How many propagandists does a political actor hire to write content? How many social media accounts can they obtain to fake popularity? Low-resourced campaigns are less likely to get their desired content in front of 34. Renee DiResta, Michael McFaul, and Alex Stamos, “Here’s How Russia Will Attack the 2020 Election. We’re Still Not Ready.,” The Washington Post , November 15, 2019, https: //www.washingtonpost.com /opinions /2019 /11/15/heres-how- russia-will-attack-election-were-still-not-ready /. 35. Martin J. Riedl et al., “Reverse-engineering political protest: the Russian Internet Research Agency in the Heart of Texas,” Information, Communication, and Society 25, no. 15 (2021), ISSN: 14684462, https: //doi.org /10.1080 /1369118X.2021. 1934066. 36. John Bateman et al., Measuring the Effects of Influence Operations: Key Findings and Gaps From Empirical Research (Carnegie Endowment for International Peace, June 28, 2021), https: //carnegieendowment.org /2021 /06/28/measuring-effects-of- influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824. 37. For example, researchers conducted a study comparing Twitter users who interacted with content from the IRA with those who did not. The study found “no substantial effects of interacting with Russian IRA accounts on the affective attitudes of Democrats and Republicans who use Twitter frequently toward each other, their opinions about substantial political issues, or their engagement with politics on Twitter in late 2017.” Christopher A. Bail et al., “Assessing the Russian Internet Research Agency’s impact on the political attitudes and behaviors of American Twitter users in late 2017,” PNAS 117, no. 1 (January 7, 2020), https: //doi.org /10.1073 /pnas.1906420116 38. Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (New York: Farrar, Straus, Giroux, 2020), 260, https: //us.macmillan.com /books /9780374287269 /activemeasures. 12 the target or to garner media coverage.39 •Quality and Message of Content: People are less likely to be persuaded by messaging if it strongly counters their established attitude or if the arguments are poorly constructed or poorly reasoned.40 Campaigns with messaging that disconfirms targets’ attitudes, does not successfully blend in with a target’s information environment, and provides low-quality arguments are, all else being equal, less likely to be effective.41 •Detectability: Finally, operations that are quickly discovered are less likely to have an impact. Social media platforms and independent researchers actively search for influence operations, and platforms remove them in order to limit their reach. In fact, awareness that these operations may be removed can itself shape the behavior of propagandists, leading them to pursue distraction operations if they believe persona development—which requires longer-term investment but can be more persuasive to observers—is not worth the effort.42 It is helpful to keep these limitations in mind as we consider the role that language models can play in influence campaigns. If they can overcome existing limitations, then they may pose a significant issue for the information environment. We discuss this further in Section 4. 2.2.2 Downstream Impact Based on Trust The second way that influence operations can have an impact is by eroding trust. Degrading societal trust does not necessarily require high quality efforts: even when influence campaigns are detected, their appearance, especially at scale, may cause users to become suspicious of other, authentic sources.43 Propagandists often aim to exploit vulnerabilities in their target’s mental shortcuts for establishing trust, especially where information technologies make it harder to evaluate the trustworthiness of sources. By manipulating public perceptions of reputation, harnessing fake or misleading credentials and testimo- nials, or tampering with photographic and video evidence, influence operators can serve to undermine 39. Beyond simply expanding the size of a campaign, greater resources may help operators target their content to a wider range of people. Research on the 2016 election suggests that fake news consumption was heavily concentrated, with only 1% of Twitter users exposed to 80% of fake news. Nir Grinberg et al., “Fake news on Twitter during the 2016 U.S. presidential election,” Science 363, no. 6425 (January 25, 2019): 374–378, ISSN: 10959203, https: //doi.org /10.1126 /science.aau2706 40. Hee Sun Park et al., “The Effects of Argument Quality and Involvement Type on Attitude Formation and Attitude Change: A Test of Dual-Process and Social Judgment Predictions,” Human Communication Research 33, no. 1 (January 2007): 81–102, ISSN: 0360-3989, https: //doi.org /10.1111 /J.1468-2958.2007.00290.X. 41. However, as discussed above, note that the importance of this factor depends on the goals of the operator. If the goal is pure distraction, having high-quality posts may be far less significant than if the operator is aiming to actually persuade. 42. Josh A. Goldstein and Renee DiResta, “China’s Fake Twitter Accounts Are Tweeting Into the Void,” Foreign Policy , De- cember 15, 2021, https: //foreignpolicy.com /2021 /12/15/china-twitter-trolls-ccp-influence-operations-astroturfing /. We recognize that, in some cases, influence operators desire their efforts to be detected in order to stir worry among a target pop- ulation. However, because many influence operations seek to directly change opinions, and universally easy detection would undermine efforts to stir worry, we treat lower detectability as desirable to propagandists. 43. Recent research suggests that educating people about deepfakes makes them more likely to believe that real videos they subsequently see are also fakes; see John Ternovski, Joshua Kalla, and Peter Aronow, “The Negative Consequences of Informing Voters about Deepfakes: Evidence from Two Survey Experiments,” Journal of Online Trust and Safety 1, no. 2 (February 2022), ISSN: 2770-3142, https: //doi.org /10.54501 /JOTS.V1I2.28. Politicians may also benefit from the “liar’s dividend” by falsely claiming that real events that paint them in a critical light are fake news or deepfakes. See Robert Chesney and Danielle Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107, no. 6 (2019): 1753, https: //doi.org /10.15779 /Z38RV0D15J. 13 trust beyond the specific topic of their campaign.44Lower societal trust can reduce a society’s ability to coordinate timely responses to crises, which may be a worthy goal for adversarial actors in and of itself. In turn, lower societal trust also creates a more favorable operating environment for propagandists to pursue their objectives. Preexisting polarization and fragmentation in society undercut the ability of honest actors to establish broad credibility, and can give influence operators a foothold to tailor their messaging to narrower audiences, sow division, and degrade social capital and institutional trust. Low general trust undermines the norms that enable people and organizations to interact and cooperate without extensive rules and processes to govern their behavior.45 44. Seger et al., Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world . 45. Lower societal trust also increases transaction costs. In the economy, this decreases the efficiency of markets, and in government, it incentivizes regulatory overreach and accordingly bureaucratic growth that can entrench interests and degrade institutional agility. See Michael J. Mazarr et al., The Emerging Risk of Virtual Societal Warfare: Social Manipulation in a Changing Information Environment (RAND Corporation, October 2019), 62, https: //doi.org /10.7249 /RR2714. 14 3 Recent Progress in Generative Models Understanding the present state of generative models is helpful for addressing their potential role in influence operations. This section introduces generative models to disinformation researchers and poli- cymakers, and will likely be familiar to those in the machine learning (ML) community. 3.1 What Are Generative Models, and How Are They Built? In the last decade, research in AI has improved the ability to automate the production of digital content, including images, video, audio, and text. These new generative AI models can learn to understand the patterns in a given type of data—like text in the English language or the audio waveforms comprising songs—in order to sample new items of that type and produce original outputs. In a wide number of domains, progress in generative models over the past decade has moved shockingly quickly and produced surprisingly realistic output, as illustrated in Table 3 and Figures 1, 2, and 3. 2011 2020 The meaning of life is the tradition of the an- cient human reproduction: it is less favorable to the good boy for when to remove her biggerThe meaning of life is contained in every sin- gle expression of life. It is present in the in- finity of forms and phenomena that exist in all aspects of the universe. Table 3: Generative text model outputs in 2011 versus 2020.46 These machine language systems consist of large artificial neural networks47and are “trained” via a trial-and-error process over mountains of data.48The neural networks are rewarded when their algo- rithmically generated words or images resemble the next word in a text document or a face from an image dataset.49The hope is that after many rounds of trial and error, the systems will have picked up general features of the data they are trained on. After training, these generative models can be repurposed to generate entirely new synthetic artifacts. 46. The 2011 text was generated from Ilya Sutskever, James Martens, and Geoffrey Hinton, “Generating Text with Recurrent Neural Networks,” ed. Lisa Gooter and Tobias Scheffer, Proceedings of the 28th International Conference on Machine Learning , 2011, https: //icml.cc /2011/papers /524_icmlpaper.pdf. The 2020 text was generated using the 175B GPT-3 model. 47. Artificial neural networks are a class of statistical models that are loosely inspired by biological brains. For a technical introduction discussing the role of neural networks in modern machine learning, see the Introduction in Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning (MIT Press, 2016), https: //www.deeplearningbook.org /. For an introduction for policymakers, see Ben Buchanan and Taylor Miller, Machine Learning for Policy Makers: What It Is and Why It Matters (Belfer Center for Science and International Affairs, June 2017), https: //www.belfercenter.org /sites/default /files/files/publication / MachineLearningforPolicymakers.pdf. 48. For example, HuggingFace’s BigScience project is using a training dataset of 1.5 TB (see “Building a TB Scale Multilingual Dataset for Language Modeling,” Hugging Face BigScience, https: //bigscience.huggingface.co /blog/building- a- tb- scale- multilingual-dataset-for-language-modeling); the original GPT-3 project (published in 2021) used a filtered dataset of 570 GB; the largest DeepMind’s Gopher model saw about 1.3 TB of text. The text is composed via sources like web crawls, Wikipedia, scanned books, and news articles. 49. Other methods to train generative models are also in development. For example, diffusion models have been applied to text-to-image generation; see Aditya Ramesh et al., “Hierarchical Text-Conditional Image Generation with CLIP Latents,” arxiv:2204.06125 [cs.CV ], April 2022, https: //doi.org /10.48550 /arxiv.2204.06125. 15 Figure 1: Seven years of progress in synthetic face generation. All of these images are produced with Generative Adversarial Networks.51 (a) 2015 (b) 2022 Figure 2: Seven years of progress in image generation from language. Left image from a 2015 paper, which introduced one of the first methods to generate images from text. The prompt is taken from that paper and intends to show novel scenes. On the right, the same prompt is run on OpenAI’s DALL•E 2. Today’s systems can easily do certain tasks that were hard in 2015.52 51. Original source: Tamay Besiroglu (@tamaybes), “7.5 years of GAN progress on face generation,” Twitter, October 20, 2021, 10:15 AM, https: //twitter.com /tamaybes /status /1450873331054383104, building on Ian Goodfellow, (@goodfellow_- ian), Twitter, January 14, 2019, 4:40 PM, https: //twitter.com /goodfellow_ian /status /1084973596236144640. 52. Elman Mansimov et al., “Generating Images from Captions with Attention,” 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings , November 9, 2015, https: //doi.org /10.48550 /arxiv.1511.02793. 16 (a)“A raccoon wearing formal clothes, wearing a top hat and holding a cane. The raccoon is holding a garbage bag. Oil painting in the style of Rembrandt” (b)“A bald eagle made of chocolate powder, mango, and whipped cream” Figure 3: Elaborate scene construction and composition with 2022 text-to-image models. While Figure 2 shows that 2022 models can do hard tasks from 2015 easily, text-to-image models can also do tasks that were not possible before. In this image, many details of the scene are described via language, and the system translates that into a plausible image. Left is from Google’s Parti, and right is from Google’s Imagen.53 Creating generative models from scratch involves two steps. The first is to take a neural network and train it on an immense amount of raw data. This training process automatically adjusts the many (sometimes more than hundreds of billions) “parameters” of the neural network, which are somewhat analogous to synapses in biological brains. This step culminates in a system that is quite general (it can do many different tasks) and capable (it can do these tasks well),50but that may be difficult to use for specific tasks or that may still lack certain specialized skills. The optional second—and much cheaper—step is to refine this foundation model by further training (or “fine-tuning”) it on small amounts of task-specific data. Fine-tuning can extend a model’s capabilities—for example, a model can be fine-tuned to imitate complex human behaviors like following instructions—or it can be used to train domain-specific skills in smaller models. Training a state-of-the-art, large generative model from scratch in 2022 can involve costs that are at least tens of millions of dollars.54However, it is becoming less expensive to reach near state-of-the-art performance: while it originally cost millions of dollars to train GPT-3 in 2020, in 2022 MosaicML was able to train a model from scratch to reach GPT-3 level performance for less than $500k.55Because of 53. Jiahui Yu et al., “Parti: Pathways Autoregressive Text-to-Image Model,” https: //parti.research.google /; Chitwan Saharia et al., “Imagen: Text-to-Image Diffusion Models,” https: //imagen.research.google /. 50. Bommasani et al., “On the Opportunities and Risks of Foundation Models.” 54. An estimate for Google’s PaLM model puts it at ~$23M; see Lennart Heim, “Estimating PaLM’s training cost,” .xyz Blog, April 5, 2022, https: //blog.heim.xyz /palm-training-cost /. Estimates for other language models are also in the single-to- double-digit millions of dollars. 55. Abhinav Venigalla and Linden Li, “Mosaic LLMs (Part 2): GPT-3 quality for <$500k,” Mosaic, September 29, 2022, https: //www.mosaicml.com /blog/gpt-3-quality-for-500k. 17 this upfront cost, many developers will choose to fine-tune an existing model for their task. This allows them to leverage the general capabilities of the foundation model—imbued from pre-training—at lower cost.56 Recent advances in generative models have been driven by three major developments: (1) the explosion of training data in the form of human language available on the internet (and in curated datasets of internet or user-generated content); (2) improvements in the underlying neural network models and the algorithms used to train them; and (3) rapid growth in the amount of computational power that leading actors have used to train these models, which allows for the creation of larger, more sophisticated models. In many cutting-edge applications, acquiring sufficient computational power to train a model is the most expensive of these components, and the relative capability of different models tends to roughly correspond to how much computational power was used to train them.57 Requirements to Create a Cutting-Edge Language ModelCause of Recent Improvement Data Explosion of available training data (text on the internet) AlgorithmImprovements in large-scale training algorithms and neural network architectures Computational Power (compute)Increase in availability of computational power for AI scientists and improvements in methods to leverage that compute Table 4: Summary of Training Requirements and Areas of Recent Improvement of Language Models Generative language models that “understand” and produce language are the central focus of this re- port.58In principle, a system that can receive and output arbitrary text can perform every task that is expressible via text. Interacting with a language model is, in some sense, like interacting with a remote employee over a textual interface. While current language models are not nearly at human level, they have made great strides in their generality and capability59. For example, not only can the same system (the hypothetical “employee”) carry out the task of classifying tweets as positive or negative sentiment, but it can also generate tweets, write summaries, carry on conversations, write rudimentary source code, and so on.60 While impressive, current generative language models have many limitations. Even the most sophisti- cated systems struggle to maintain coherence over long passages, have a tendency to make up false or absurd statements of fact, and are limited to a generation length of about 1,500 words. In addition, 56. Sebastian Ruder, “Recent Advances in Language Model Fine-tuning,” Sebastian Ruder (Blog), February 24, 2021, https: //ruder.io /recent-advances-lm-fine-tuning /. 57. For elaboration on these points, see Deep Ganguli et al., “Predictability and Surprise in Large Generative Models,” 2022 ACM Conference on Fairness, Accountability, and Transparency , June 2022, 1747–1764, https: //doi.org /10.1145 /3531146. 3533229. 58. Other generative models may focus on generating and modeling visual information—as in images or video—or audio information. In principle, generative models may model any type of sensory information. For a review of audio models, see Zhaoxi Mu, Xinyu Yang, and Yizhuo Dong, “Review of end-to-end speech synthesis technology based on deep learning,” arxiv:2104.09995 [cs.SD ], April 2021, https: //doi.org /10.48550 /arxiv.2104.09995. For an example of a video model, see Emmanuel Kahembwe and Subramanian Ramamoorthy, “Lower Dimensional Kernels for Video Discriminators,” Neural Networks 132 (December 2020): 506–520, https: //doi.org /10.1016 /j.neunet.2020.09.016. 59. We describe future developments of these dimensions of progress in Section 4.2.2. 60. See Google’s PaLM system for some examples: Sharan Narang and Aakanksha Chowdhery, “Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance,” Google AI Blog, April 5, 2022, https: //ai.googleblog. com/2022/04/pathways-language-model-palm-scaling-to.html. 18 models perform worse as they are given more cognitively complex tasks: for instance, asking a genera- tive model to write a few conservative-leaning tweets on a topic will likely result in better outputs than asking a model to rewrite an existing news story in a way that subtly promotes a conservative narrative.61 While these limitations are noteworthy, progress in generative models is both rapid and hard to predict. The capabilities of current models should be considered lower bounds on how realistic generative model outputs can become, and it is not clear where the relevant upper bound is—if it exists. To overcome these limitations, ongoing research targets improvements in data, algorithms, and compu- tational power. For example, some research attempts to improve the quality of the data that the neural network ingests. One way to do so is by collecting data from human domain experts or demonstrators of the desired capability.62Improvements in neural network architectures and new training strategies to imbue the model with improved capability can lead to better algorithms. And, of course, training models on more powerful supercomputers increases the amount of computational power available to the model. 3.2 Access and Diffusion of Generative Models A sizable number of organizations have developed advanced language models. These models are ac- cessible on a spectrum from fully public to fully private. A small number of models are fully public, meaning that anyone can download and use them to produce outputs in a way that can no longer be monitored by the models’ designers. The largest openly downloadable model as of September 2022 (measured by the number of parameters in the neural network model) is BLOOM by HuggingFace’s Big- Science project—a 175 billion- parameter model openly released in July 2022. However, algorithmic improvements have also enabled much smaller open source models that rival or exceed BLOOM and GPT-3 on several capabilities.63 Other models have been kept fully private, with no means for non-developers to access or use the model. DeepMind’s Gopher (280 billion parameters) and Microsoft and Nvidia’s Megatron-Turing NLG (530 bil- lion parameters, but not fully trained)—both of which were created primarily for research purposes—fall into this category. As mentioned previously, the relative capabilities of different language models tends to correspond to the amount of computational power used to train them, and more computational power generally (though not always) means a larger model with more parameters.64It is therefore worth em- phasizing that the largest fully public model is two to three times smaller than the largest currently existing private models. However, this may change soon if more developers open-source their models or a model is leaked. 61. Buchanan et al., Truth, Lies, and Automation: How Language Models Could Change Disinformation . 62. For example, to train models to play Minecraft, researchers collected demonstrations of behaviors from humans; see Bowen Baker et al., “Learning to Play Minecraft with Video PreTraining (VPT),” OpenAI Blog, June 23, 2022, https: //openai. com/blog/vpt/. A survey with more examples is available in Xingjiao Wu et al., “A Survey of Human-in-the-loop for Machine Learning,” Future Generation Computer Systems 135 (August 2021): 364–381, https: //doi.org /10.1016 /j.future.2022.05.014. 63. Hyung Won Chung et al., “Scaling Instruction-Finetuned Language Models,” arxiv:2210.11416 [cs.LG ], October 20, 2022, https: //doi.org /10.48550 /arxiv.2210.11416. 64. Advances in sparsity and retrieval methods are two ways that the number of parameters can come apart from both the computational power used to train the model and the model’s capabilities. See Noam Shazeer et al., “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer,” 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings , January 2017, https: //doi.org /10.48550 /arxiv.1701.06538; Sebastian Borgeaud et al., “Improving language models by retrieving from trillions of tokens,” arxiv:2112.04426 [cs.CL ], December 2021, https: //doi.org /10.48550 /arxiv.2112.04426. 19 A third category of models attempt to balance public and private access. Meta AI gave some external researchers copies of its 175 billion-parameter language model while requiring them to sign a license that banned certain use cases.65Another method allows for users to sign up for certain types of access through an application programming interface (API). An API-based access regime allows AI developers to commercialize access to their model, track model usage, and impose restrictions on both who can access the model and how they can use it. GPT-3, Jurassic-1, and Cohere Extremely Large, for instance, are all currently accessible via an API.66Keeping models behind an API allows developers a great deal of discretion regarding the conditions under which their model can be accessed.67Organizations that use an API-based access regime ensure that users can submit queries to a model and receive outputs, but also that users cannot directly see or download the model itself,68which means that they cannot fine-tune it for their own specific applications. An AI provider may also choose to support API-based fine-tuning, which would allow the AI developer to monitor and restrict certain fine-tuning use cases.69 Table 5 includes an illustrative list of the most capable current (publicly known, as of September 2022) language models that vary across access regime, primary language of output, and sizes. There are several key takeaways that characterize the current state of model diffusion. First, anyone can access a number of moderately capable models that have been made fully public, but the most capable models remain either private or kept behind monitorable APIs. While currently publicly available models may not be as powerful as the largest private models, they can likely be fine-tuned to perform remarkably well on specific tasks at far less cost than training a large model from scratch. This type of fine-tuning might not be within the reach of most individuals, but it is likely feasible for any nation-state as well as many non-state actors, such as firms and wealthy individuals.71 Second, in addition to cutting-edge models from AI developers like Google (US) and DeepMind (UK), several international actors have developed highly capable models likely motivated by commercial in- terests and as a matter of national prestige. For example, Inspur’s Yuan 1.0, a 245 billion-parameter Chinese-language model, and Naver’s HyperClova, a 204 billion-parameter Korean-language model, 65. Including “military purposes” and “purposes of surveillance”; see “OPT-175B License Agreement,” Metaseq, https: // github.com /facebookresearch /metaseq /blob/main/projects /OPT/MODEL_LICENSE.md. 66. “API,” OpenAI, accessed January 31, 2022, https: //openai.com /api/; Kyle Wiggers, “Announcing AI21 Studio and Jurassic-1 Language Models,” AI21 Labs, accessed January 31, 2022, https: //www.ai21.com /blog/announcing-ai21-studio- and-jurassic-1; Cohere, “About,” accessed January 31, 2022, https: //docs.cohere.ai /api-reference /. 67. However, because external researchers do not have access to the raw models from these APIs, API-based access regimes may make it more difficult for researchers to replicate and improve the private models. 68. API-based models may not be immune to manipulation or theft by adversaries. Model inversion attacks can allow an adversary to potentially steal a model by querying an API many times; see Florian Tramer et al., “Stealing Machine Learning Models via Prediction APIs,” 25th USENIX Security Symposium (Austin, TX; USENIX Security 16) , 2016, 601–618, https: //www. usenix.org /conference /usenixsecurity16 /technical-sessions /presentation /tramer. However, these methods are expensive and have not been demonstrated to work in practice against a foundation model API. 69. For example, Cohere and OpenAI offer fine-tuning through their APIs: “Finetuning Generation Models,” Cohere, accessed June 2022, http: //web.archive.org /web/20220621204451 /https: //docs.cohere.ai /finetuning-wiki /; “Fine-tuning,” OpenAI, accessed June 2022, https: //beta.openai.com /docs/guides /fine-tuning 70. Model sizes come from Jaime Sevilla et al., “Compute Trends Across Three Eras of Machine Learning,” Proceedings of the International Joint Conference on Neural Networks , March 9, 2022, https: //doi.org /10.48550 /arxiv.2202.05924; Jaime Sevilla et al., “Parameter, Compute and Data Trends in Machine Learning,” 2021, https: //docs.google.com /spreadsheets / d/1AAIebjNsnJj_uKALHbXNfn3_YsT6sHXtCU0q7OIPuc4 /edit#gid =0; and Jeffrey Ding and Jenny Xiao, “Recent Trends in China’s Large-Scale Pre-Trained AI Models,” (Working Paper) . Yalm-100B’s compute usage is estimated assuming use of a GPT model in full precision for 300B tokens; see Mikhail Khrushchev, “Yandex Publishes YaLM 100B. It’s the Largest GPT-Like Neural Network in Open Source,” Medium, June 23, 2022, https: //medium.com /yandex /yandex-publishes-yalm-100b-its- the-largest-gpt-like-neural-network-in-open-source-d1df53d0e9a6. 71. Furthermore, as mentioned above, some AIaaS providers offer fine-tuning as a service. 20 ModelSize: Training Compu- tation (PFLOP)70Size: Parameters OrganizationDate of Announce- mentPrimary LanguageAccess Regime Resource Ernie 3.0 Titan 4.2107260B Baidu Dec 2021 ChineseRestricted (API) Outputs Pan-Gu- alpha 5.80107200B Huawei Apr 2021 Chinese Private - Hyper- CLOVA 6.30107204BNaver Corp. Sep 2021 Korean Private - GPT-NeoX 9.3010720B Eleuther AI Feb 2022 English Public Parameters Yalm-100B 1.80108100B Yandex Jun 2022 Russian Public Parameters GPT-3 3.00108175B OpenAI May 2020 EnglishRestricted (API) Outputs Yuan 1.0 4.10108245B Inspur Oct 2021 ChineseRestricted (API) Outputs OPT-175B 4.30108175B Meta Jan 2022 EnglishRestricted (license) Parameters BLOOM 6.04108175B BigScience July 2022 Multiple Public Parameters Gopher 6.30108280B DeepMind Dec 2021 English Private - Megatron- Turing 1.40109530BMicrosoft, NVIDIA Jan 2022 English Private - PaLM 2.50109540B Google Apr 2022 English Private - Note: We order the table by training computation requirements as a proxy for capability. Table 5: Illustrative List of State-of-the-Art Language Models. have matched and exceeded the size of GPT-3 and likely offer similarly impressive capabilities.72While access to PanGu- a, HyperClova, and Wu Dao 2.0 looks likely to remain partially or fully restricted, other models are public. For example, the Russian Yalm 100 billion-parameter model is openly available through code repositories on GitHub and /or HuggingFace.73Some of the Beijing Academy of Artificial Intelligence’s (BAAI) WuDao models are directly downloadable from their website.74 Third, these international actors have optimized their models for their national languages. For example, the Yuan 1.0 model excels in Chinese-language tasks. While per-language performance can be approxi- mated by the proportion of training data that is in a particular language, models can also perform well at producing text in multiple languages or translating between them—if the model is trained on enough data from multiple languages. This trend of language-specific optimization suggests that if these mod- els are applied to influence operations, they will be most able to target populations speaking specific languages that are well-represented in a particular model’s training data. 72. See Wei Zeng et al., “PanGu- a: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Com- putation,” arxiv:2104.12369 [cs.CL ], April 2021, https: //doi.org /10.48550 /arxiv.2104.12369; Kyle Wiggers, “Huawei trained the Chinese-language equivalent of GPT-3,” VentureBeat, April 29, 2021, https: //venturebeat.com /ai/huawei-trained-the- chinese-language-equivalent-of-gpt-3 /; “NAVER Unveils HyperCLOVA, Korea’s First Hyperscale ‘Al to Empower Everyone’,” Naver Corp. Press Releases , May 25, 2021, https: //www.navercorp.com /en/promotion /pressReleasesView /30686. 73. For example: “Muse API,” PAGnol, https: //muse.lighton.ai /home; Anton Emelyanov et al., “Russian GPT-3 models,” GitHub, https: //github.com /ai-forever /ru-gpts#readme. 74. “WudaoAI,” Beijing Academy of Artificial Intelligence , accessed October 30, 2022, https: //wudaoai.cn /model /. 21 4 Generative Models and Influence Operations This section marries the previous sections’ emphases on influence operations and generative models. We build on the existing but nascent body of research on AI-generated influence campaigns in two steps. First, we introduce the ABC framework—actors, behaviors, and content—that is well-known among disinformation researchers, and describe how generative models may transform each of these three facets.75Then, we examine expected developments and critical unknowns in the field of machine learning that will impact the role that generative models can play in influence operations. For each expected development, we describe the current state of technology, expected improvements, and the implications such improvements would have for the future of influence campaigns. 4.1 Language Models and the ABCs of Disinformation In this paper, we build on the “ABC” model, a popular model in the disinformation field, that distinguishes between key manipulation vectors in disinformation campaigns.76“A,” for actors , references the fact that the entity behind a campaign is often not what it seems; for example, the accounts in a conversation may look like Black Lives Matter activists, but in reality may be state-linked actors using fake accounts in active misdirection. “B” is for behavior , and refers to howpropagandists wage their campaigns—the techniques used to perpetuate disinformation, such as the use of automation or attempts to manipulate engagement statistics via click farms.77“C” alludes to the content itself, the substance (narrative, memes, etc.) that the accounts are attempting to launder or amplify; this third facet of disinformation campaigns is perhaps the most visible to the public, and media will highlight the substance in its coverage.78Although, as discussed in the Section 1, we are focused on influence operations, not disinformation exclusively, this model helps characterize potential changes that may arise due to language models. One of the reasons that platforms and researchers assess all three dimensions—the actors, behaviors, and content—when evaluating an influence operation is that at times one facet may be perfectly authentic even within an overall manipulative campaign. Authentic content, for example, may be inauthentically amplified with paid or automated engagement, or by actors who are not what they seem. Similarly, entirely authentic actors—domestic political activists, perhaps—may use inauthentic automation. In discussing the potential impact of AI on future influence or disinformation campaigns, we therefore consider its potential for transforming each of the three factors. We believe that generative models 75. François, Actors, Behaviors, Content: A Disinformation ABC Highlighting Three Vectors of Viral Deception to Guide Industry & Regulatory Responses . 76. François. 77. Click farms refers to labor hired to manually click on content online on behalf of their employers. They display some online patterns of genuine internet users since they are humans, allowing them to avoid bot detection, while still driving up content views and interactions. 78. Deepfake videos have already been used for phishing campaigns and the harassment of journalists. Some have suggested deepfakes may be used to develop crisis scenarios, whether by faking government directives, discrediting candidates for public office, or pretending to keep hostage soldiers. See, for example, Kishalaya Kundu, “Criminals Used AI To Clone Company Director’s Voice And Steal $35 Million,” Screen Rant, October 14, 2021, https: //screenrant.com /ai-deepfake-cloned-voice- bank- scam- theft- millions /; Katerina Sedova et al., AI and the Future of Disinformation Campaigns: Part 2: A Threat Model (Center for Security and Emerging Technology, December 2021), https: //doi.org /10.51593 /2021CA011; Rana Ayyub, “I Was The Victim Of A Deepfake Porn Plot Intended To Silence Me,” Huffington Post , November 21, 2018, https: //www. huffingtonpost.co.uk /entry /deepfake-porn_uk_5bf2c126e4b0f32bd58ba316; Jan Kallberg and Stephen Col. Hamilton, “US military must prepare for POW concerns in the deepfake era,” C4ISRNET, August 23, 2021, https: //www.c4isrnet.com / opinion /2021/08/23/us-military-must-prepare-for-pow-concerns-in-the-deepfake-era /. 22 will improve the content, reduce the cost, and increase the scale of campaigns; that they will introduce new forms of deception like tailored propaganda; and that they will widen the aperture for political actors who consider waging these campaigns. In Table 6, we summarize possible changes to the actors, behavior, and content due to language models, and describe these changes in further depth below. ABCPotential Change Due to Generative AI TextExplanation of Change Larger number and more di- verse group of propagandists emerge.As generative models drive down the cost of gen- erating propaganda, more actors may find it at- tractive to wage influence operations.Actors Outsourced firms become more important.Propagandists-for-hire that automate production of text may gain new competitive advantages. Automating content produc- tion increases scale of cam- paigns.Propaganda campaigns will become easier to scale when text generation is automated. Existing behaviors become more efficient.Expensive tactics like cross-platform testing may become cheaper with language models.Behavior Novel tactics emerge.Language models may enable dynamic, personal- ized, and real-time content generation like one- on-one chatbots. Messages grow more credible and persuasive.Generative models may improve messaging com- pared to text written by propagandists who lack linguistic or cultural knowledge of their target. Content Propaganda is less discover- able.Existing campaigns are frequently discovered due to their use of copy-and-pasted text (copypasta), but language models will allow the production of linguistically distinct messaging. Table 6: How Language Models May Influence the ABCs of Influence Operations 4.1.1 Actors: Outsourced Execution & Proliferation of Propagandists One limitation on actors who run disinformation campaigns is cost. While social media has decreased the cost to reach the public, most campaigns have involved numerous fake personas, sophisticated au- tomation, and /or a stream of relevant content. AI reduces the cost of running campaigns further, by automating content production, reducing the overhead in persona creation, and generating culturally appropriate outputs that are less likely to carry noticeable markers of inauthenticity. These developments will expand the set of actors with the capacity to run influence operations. The notion that less resourced actors (or less talented trolls) could use AI models to run influence oper- ations is not merely speculative—it has already been piloted. Recently, a researcher fine-tuned a model hosted on HuggingFace (an online hub for machine learning models) on a dataset of 4chan posts79and 79. Matt Murphy, “Someone trained an A.I. with 4chan. It could get worse.,” Slate, August 3, 2022, https: //slate.com / technology /2022/08/4chan-ai-open-source-trolling.html. 23 dubbed it “GPT-4chan.” He proceeded to post more than 30,000 generated posts on 4chan.80In this case, the original model was publicly available and easily downloadable. In another example, in Octo- ber 2019, Idaho solicited public feedback about a proposal to change its Medicaid program. A Harvard Medical School student ran a study in which he submitted comments that were generated by GPT-2 as if they were written by ordinary citizens. In a follow-on survey, volunteers were unable to distinguish between the AI-generated and human-written comments.81If a single student can run this type of cam- paign on a public comment board, political actors will likely be able to do the same, leading to a wider pool of potential actors waging influence operations.82 Independently of improvements in generative AI models, political actors are increasingly turning toward third-party influence-for-hire companies to conduct their campaigns, including firms that otherwise ap- pear to be legitimate marketing or PR firms.83Even if AI companies place restrictions on who can access their models, this trend makes it harder to ensure that bad actors do not have access to generative models, as marketing firms will likely be granted access given their other legitimate uses.84 4.1.2 Behavior: Low-Cost Content at Scale and Novel Techniques In addition to affecting the actors involved in influence operations, the integration of generative language models can encourage new types of behaviors used in influence campaigns and change the way existing behaviors are enacted in practice. The most basic behavioral change that will result from using language models for influence operations is replacing, or augmenting, a human writer in the content generation process. Language models replacing human writers, or used in a human-machine team, could dramatically reduce the cost and increase the scalability of the types of propaganda campaigns we see today—such as mass-messaging campaigns on social media platforms or long-form news generation on unattributable websites. Beyond simply writing text, generative models can improve other existing tactics, techniques, and pro- cedures of influence operations. For instance, cross-platform testing is a long-standing component of many influence operations, in which actors first test content on one platform to gauge audience reaction before proliferating content onto other platforms.85Operators using generative AI models may be able to perform this type of testing at greater scale, which may improve a campaign’s overall impact. Manipulative actors could also use language models to overwhelm or falsify checks in areas in which text commentary is solicited, such as in the public comment process between governments and their 80. Andrey Kurenkov, “Lessons from the GPT-4Chan Controversy,” The Gradient, June 12, 2022, https: //thegradient.pub /gpt- 4chan-lessons /; James Vincent, “YouTuber trains AI bot on 4chan’s pile o’ bile with entirely predictable results,” The Verge , June 8, 2022, https: //www.theverge.com /2022/6/8/23159465 /youtuber-ai-bot-pol-gpt-4chan-yannic-kilcher-ethics. 81. Will Knight, “AI-Powered Text From This Program Could Fool the Government,” Wired, January 15, 2021, https: //www. wired.com /story/ai-powered-text-program-could-fool-government /. 82. As we discussed in Section 2, GPT-2 is already publicly available, as are stronger models like Eleuther’s GPT-NeoX-20B, a 20-billion parameter model. 83. See: Josh A. Goldstein and Shelby Grossman, “How disinformation evolved in 2020,” January 4, 2021, https: //www. brookings.edu /techstream /how-disinformation-evolved-in-2020 /; Max Fisher, “Disinformation for Hire, a Shadow Industry, Is Quietly Booming,” New York Times , July 25, 2021, https: //www.nytimes.com /2021/07/25/world /europe /disinformation- social-media.html. 84. Sedova et al., AI and the Future of Disinformation Campaigns: Part 2: A Threat Model . 85.Senate Report No 116-290, vol 2 (2020), https: //www.intelligence.senate.gov /sites/default /files/documents /Report_ Volume2.pdf. 24 citizens.86Recent research showed that public comments to the Federal Communications Commission about net neutrality in 2017 were largely driven by falsified repeated comments.87Language models may increase the scale and decrease detectability of similar future operations. In a recent field experiment, researchers sent over 30,000 emails—half written by GPT-3, and half written by students—to 7,132 state legislators. The researchers found that on some topics legislators responded to computer-generated content at only a slightly lower rate than human-generated content; on other topics, the response rates were indistinguishable.88 Language models will also shape propagandists’ behaviors by introducing new behaviors altogether and enabling novel tactics. Because these models make it possible to “think up” a new version of content in near real time, actors can deploy them for real-time, dynamic content generation. In the next few years, as language models improve, it may be possible for propagandists to leverage demographic information to generate more persuasive articles that are strongly tailored to the target audience. Whether this will be a cost-effective strategy is dependent on how well models (or future models) can tailor messaging based on limited demographic information. Today, websites could use demographic information to route users to different human-written articles. Writing different versions of articles, however, takes human capital. Language models, by contrast, could provide original articles for each combination of user demographics, which would be infeasible for human writers. The payoff of this strategy depends on how persuasive AI-generated text is, and how much more persuasive highly tailored personalized text is, compared to one (or a few) human-written articles. It could also involve humans making minor adjustments to AI-generated text. This remains uncertain but warrants further attention, as analogous personalization could be applied to a range of malicious campaigns, including phishing emails.89 Another central example of dynamic content generation is chat—language models engaging in extended back-and-forth conversations. Actors could potentially deploy personalized chatbots that interact with targets one-on-one and attempt to persuade them of the campaign’s message.90This capability could materialize as interactive social media personas, back-and-forth email messaging, or faked support chat- bots. Propagandists may leverage chat with language models across a wide range of contexts—anywhere interactivity is useful. There are reasons to think that chat may be an important vector of influence. Researchers have already found that interacting with a chatbot can influence people’s intentions to get a COVID-19 vaccine;91with chatbots based on language models, these interactions could be even more powerful. While deploying their own chatbots would give influence operators more control, they may be able to manipulate innocu- 86. Knight, “AI-Powered Text From This Program Could Fool the Government.” 87. “Public Comments to the Federal Communications Commission about Net Neutrality Contain Many Inaccuracies and Duplicates,” Pew Research Center , November 29, 2017, https: //www.pewresearch.org /internet /2017/11/29/public-comment s-to-the-federal-communications-commission-about-net-neutrality-contain-many-inaccuracies-and-duplicates /. 88. Sarah Kreps and Doug Kriner, “The Potential Impact of Emerging Technologies on Democratic Representation: Evidence from a Field Experiment,” (Working Paper) . 89. Andrew J. Lohn and Krystal A. Jackson, Will AI Make Cyber Swords or Shields? (Center for Security and Emerging Tech- nology, August 2022), https: //doi.org /10.51593 /2022CA002. 90. For a rudimentary example of a chat application built on language models, see “Marv the Sarcastic Chat Bot,” OpenAI API, https: //beta.openai.com /examples /default-marv-sarcastic-chat 91. Sacha Altay et al., “Information delivered by a chatbot has a positive impact on COVID-19 vaccines attitudes and in- tentions.,” Journal of Experimental Psychology: Applied , October 28, 2021, ISSN: 1939-2192, https : / /doi . org /10 . 1037 / XAP0000400. 25 ous chatbots to spread propaganda. Microsoft’s Tay is one historical example,92and more sophisticated techniques to “poison” language models are being investigated by researchers.93 4.1.3 Content: High Quality and Low Detectability There are two varieties of textual content commonly observed in influence operations: short-form com- mentary such as tweets or comments, and long-form text. Language models could improve the quality and therefore decrease the detectability of both types of content. Short-form content is primarily pushed out by inauthentic account personas on social media, or some- times in the comment sections of websites or blogs, and is often intended to influence the reader’s perception of public opinion. Many tweets or comments in aggregate, particularly if grouped by some- thing like a trending hashtag, can create the impression that many people feel a certain way about a particular issue or event. Producing this content, which purports to represent the opinions of the “man- on-the-street,” requires account operators to have knowledge of the communication style and rhetoric that fits the persona who is purportedly speaking; some operations are exposed because of incongruities or “uncanny valley” dynamics in which the persona uses terminology or slang that does not quite fit what a genuine member of the community would likely say.94 Creating the appearance of a public opinion requires having many speakers. In 2014–2016, political op- eratives frequently used bots—automated accounts—to produce this volume, deploying them to make content trend or to introduce particular opinions into hashtags.95However, creating speech for large net- works of automated accounts was a challenge, and the bot networks were often detectable because they used “copypasta”—repetitive or identical language across networks and accounts. In response, Twit- ter changed the weighting function for its trending algorithm to minimize the effect of bot accounts.96 Subsequent takedowns suggest that some well-resourced state propagandists have shifted away from automated account networks posting copypasta or attempting to flood hashtags and toward more well- developed, non-automated persona identities.97Others did continue to leverage bots, though often to create the perception of engagement slightly differently, such as by replying to, retweeting, or liking tweets. 92. Oscar Schwartz, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation,” IEEE Spectrum , November 25, 2019, https: //spectrum.ieee.org /in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversati on. 93. Eugene Bagdasaryan and Vitaly Shmatikov, “Spinning Language Models: Risks of Propaganda-As-A-Service and Counter- measures,” 2022 IEEE Symposium on Security and Privacy , 2022, 769–786, https: //doi.org /10.1109 /SP46214.2022.9833572. 94. On the idea of an uncanny valley, see Tom Geller, “Overcoming the Uncanny Valley,” IEEE Computer Graphics and Ap- plications 28, no. 4 (July-Aug. 2008): 11–17, ISSN: 02721716, https: //doi.org /10.1109 /MCG.2008.79. For evidence that technology has surpassed the uncanny valley for producing as-if human faces, see Sophie J. Nightingale and Hany Farid, “AI-synthesized faces are indistinguishable from real faces and more trustworthy,” PNAS 119, no. 8 (February 2022), ISSN: 10916490, https: //doi.org /10.1073 /PNAS.2120481119 95. Samuel C. Woolley and Douglas Guilbeault, “Computational propaganda in the United States of America: Manufacturing consensus online,” Project on Computational Propaganda Research , 2017, 1–29. 96. Ed Ho, “An Update on Safety,” Twitter Blogs, February 7, 2021, https: //blog.twitter.com /en_us /topics /product /2017/an- update-on-safety. 97. Renee DiResta et al., “In Bed with Embeds: How a Network Tied to IRA Operations Created Fake “Man on the Street” Content Embedded in News Articles,” Stanford Internet Observatory , December 2, 2021, https: //cyber.fsi.stanford.edu /io/ publication /bed- embeds; Shelby Grossman, Khadija H., and Emily Ross, Royal Sockpuppets and Handle Switching: How a Saudi Arabia-Linked Twitter Network Stoked Rumors of a Coup in Qatar (Stanford Internet Observatory, October 2020), https: //stacks.stanford.edu /file/druid:hp643wc2962 /twitter-SA-202009.pdf. 26 As generative AI models continue to advance, they could make it possible for influence operators to automate the generation of text commentary content that is as varied, personalized, and elaborate as human-generated content. If propagandists can use generative models to produce semantically distinct, narratively aligned content, they can mask some of the telltale signs (identical, repeated messaging) that bot detection systems rely on—prompting bot detection systems to leverage other signals. This evolution could allow even small groups to make themselves look much larger online than they are in real life. Real IRA Tweet Generated Tweet Shocking Video US police repeatedly tasing a black man hold- ing his baby in his own apartment in Phoenix, Arizona. We’re not safe in this country. We’re nor safe in our own homes! #BlackLivesMatter #PoliceBrutality #Police https: //t.co/ldWNFWOADgThis video is everything that’s wrong with the police. They act like a pack of wolves, try- ing to scare this man away. It’s unacceptable! https: //t.co/ldWNFWOADg Table 7: For short-form text, large language models can already match the capabilities of human-written segments in real influence operations. The left tweet is the top-performing tweet by number of retweets in an IRA-backed Ghanian disinformation campaign released by Twitter in March 2020. The right tweet is generated by prompting a language model with a few example tweets and then asking it to produce a tweet with the given link. A second relevant output of language models for influence operations is long-form text, such as propa- gandistic journalism. This content is used to make a longer point, and often appears on front media properties, such as gray media outlets owned or controlled by the disinformation actor or undisclosed allies. Often, one of the goals is to have the claims in the text republished by more reputable authentic sources, a technique known as “narrative laundering.” For example, Russia’s “Inside Syria Media Cen- ter” (ISMC) news website, a GRU front property whose bylined journalists included fabricated personas, produced content that was republished as contributed content within ideologically aligned, unwitting publications, or incorporated into real news articles in the context of expert quotes.98 Producing this kind of long-form propaganda, however, takes time and expertise. The inauthenticity of the ISMC was uncovered when the GRU’s inauthentic journalist personas began to plagiarize each other’s work; an editor from one of the publications that received a submission from an ISMC journalist inquired about the apparent plagiarism, then began to investigate the site after receiving an incongruous response. Learning from this experience, threat actors affiliated with the Russian IRA reverted to old- school methods and hired unwitting freelance journalists to write for proxy outlets; they, too, were uncovered when the journalists began to look more deeply into the publications.99Language models, however, can produce long-form content in seconds, reducing the time, cognitive load, and cost to produce such content and eliminating the need to take risky shortcuts—or hire real people—that might jeopardize the overall operation. The novel behavior—deployment of generative models—improves the 98. Renée DiResta, Shelby Grossman, and Alexandra Siegel, “In-House Vs. Outsourced Trolls: How Digital Mercenaries Shape State Influence Strategies,” Political Communication 39, no. 2 (2021): 222–253, ISSN: 10917675, https: //doi.org /10.1080 / 10584609.2021.1994065. 99. Jack Delaney, “I’m a freelance writer. A Russian media operation targeted and used me,” The Guardian , September 4, 2020, https: //www.theguardian.com /technology /2020 /sep/04/russia-media-disinformation-fake-news-peacedata; August 2020 Coordinated Inauthentic Behavior Report (Meta, September 1, 2020), https: //about.fb.com /news/2020/09/august-2020- cib-report /; Jack Stubbs, “Russian operation masqueraded as right-wing news site to target U.S. voters,” Reuters, October 1, 2020, https: //www.reuters.com /article /usa-election-russia-disinformation /exclusive-russian-operation-masqueraded-as- right-wing-news-site-to-target-u-s-voters-sources-idUSKBN26M5OP. 27 quality of long-form text that could increase the impact of these campaigns. There is already some evidence that existing language models could substitute for human authors in gen- erating long-form content or make content generation more effective through human-machine teaming. In a series of survey experiments, researchers found that GPT-2, the smaller predecessor of GPT-3, could produce text that successfully mimicked the style and substance of human-written articles.100In ex- periments of GPT-3’s capabilities, human participants were able to distinguish multiparagraph GPT-3 “news articles” from authentic news articles at a rate only slightly better than random chance.101In an experimental setting, researchers also found that GPT-3-generated propaganda articles were nearly as persuasive as articles from real world covert propaganda campaigns.102Language models could also be used to generate summary text of other articles, inflected for ideological alignments. It seems likely that language models are cost-effective (relative to human propagandists) for some cam- paigns. For a simple calculation to demonstrate this claim, let wrepresent the hourly wage paid to information operators, Lhrepresent the productivity of human authors (measured as the number of posts that can be written by a human in an hour), crepresent the amortized per-output cost of gener- ating posts using a language model, and Lrrepresent the productivity of human reviewers (measured as the number of AI-generated posts that a human can review in an hour). Further, let prepresent the proportion of AI outputs that are “usable” for an information operation. Then, the cost of generating noutputs will be equal tonw Lhin the case of a human campaign, and (c+w Lr)n pin the case of an AI-augmented campaign where humans are tasked to read and approve AI outputs. The amortized per-output cost of producing content may be relatively high in cases where a large lan- guage model is trained from scratch and used for a short campaign, but if a public model is used or a model is trained and reused for sufficiently many campaigns, cwill approach the bare electricity cost of operating the model, which can be negligible compared to the human labor costs of either authoring or reviewing outputs. In this case, the AI-augmented campaign will be more cost effective than a fully human one, so long as the inequality Lr=Lh>1=p holds. In other words, so long as the ratio between the number of posts that a human can review in an hour and the number of posts that a human can write in an hour is larger than the number of AI- generated posts that a human must review, on average, to get one usable output, then the use of the AI model will be cost-effective. Only very moderate assumptions are needed to make this inequality hold; for example, if outputs from current language models are passably coherent and usable for some (possibly unsophisticated) operations more than 20% of the time, then this inequality will hold as long as a human could read at least five posts in the time it takes to author one.103 100. Kreps, McCain, and Brundage, “All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation.” 101. Tom B. Brown et al., “Language Models are Few-Shot Learners,” Advances in Neural Information Processing Systems 33 (May 2020), ISSN: 10495258, https: //doi.org /10.48550 /arxiv.2005.14165. 102. Goldstein et al., “Can AI write persuasive propaganda?” 103. For a more extended analysis of this topic, see Musser, “A Cost Analysis of Generative Language Models and Influence Operations” 28 4.2 Expected Developments and Critical Unknowns Both the recent technological progress in generative models and their wider diffusion are likely to con- tinue. Here we speculate on several expected technological developments over the coming years that will be major drivers of operational change. We also highlight critical unknowns, where multiple paths are possible, and where this uncertainty may have a large impact on the future state of play. These projections are not intended as explicit forecasts, but rather as a way to conceptualize medium-term plausible futures. This section is summarized in Table 8. Technical and Strate- gic UnknownsCurrent State (2022) How This Might Change Usability, reliability, and efficiency of generative models•Difficult to specify and stay on a task •Outputs can be incoherent or fabricate facts •Building models from scratch can cost millions of dollars; efficacy of fine- tuning still being explored for different capabilities•Train to better follow in- structions •Retrain periodically on fresher data •Hardware, software, and en- gineering progress Difficulty of developing new and more general capabilities relevant to influence operations•Can produce tweets, short news articles •Little interactivity or long-range dia- logue •Not optimized for influence (via prox- ies like click-through rate)•Scaling up with bigger mod- els and more data •Using metrics of influence to train models •Combining models with non- ML software pipelines and hu- man reviewers Interest and investment in AI for influence; accessibility of text generation tools•Leading AI R&D mostly done by indus- try labs and academic institutions in a few countries for scientific or commer- cial merit •No free online tools to generate arbi- trary state-of-the-art text at scale•Nation-state invests in or adapts AI for influence •Marketing industry adopts language models •State-of-the-art language model published online with an easy user interface, free for anyone to use Table 8: Expected Developments For Generative Models In Influence Operations 29 4.2.1 Improvements in Usability, Reliability, and Efficiency Language models are likely to improve on three features that will affect their deployment in influence operations: usability (how difficult it is to apply models to a task), reliability (whether models produce outputs without obvious errors), and efficiency (the cost-effectiveness of applying a language model for influence operations). Improvements in usability and reliability could allow lower-skilled propagandists to employ language models with reduced human oversight. Achieving existing capabilities—like writing slanted short articles or tweets—will become much cheaper and more efficient, which could increase the rate of adoption of language models in influence operations. Usability While recent generative models have become more generalizable—users can specify a wide range of tasks—it takes skill and experience for the user to operate the model successfully. For example, it is dif- ficult for an operator to specify a task for a language model. Imagine prompting a language model with the input “What is 15 times 37?” To an operator, it may be obvious that the output for this prompt should be a single number (555), but to the model—which by default is simply performing a text completion task—an equally plausible continuation of this text may be “What is 89 times 5?” as though the task it had been assigned was to write a list of exam questions for a grade school math course. Prompt engi- neering, where operators experiment with different ways of phrasing their requests, can help mitigate this problem, but it can only go so far without the ability to fine-tune or otherwise alter the base model itself.104 Researchers are exploring different approaches to improve task specification. For example, some re- searchers have modified the training process of large language models to improve the ability of those models to follow instructions.105Other researchers have tried tagging different parts of the training data by their types (e.g., “dialogue” would specify dialogue data), and then asking a model to only pro- duce data of a certain type.106Other approaches are in development,107and it remains unclear which combination of approaches will ultimately be adopted. If usability of language models improves, pro- pagandists will be able to use models for new tasks as they arise without in-depth prompt engineering experience. Furthermore, because it is often difficult to predict which tasks a language model can be used for, improvements in usability can make it easier for propagandists to experiment with and discover applications of language models in influence operations. Reliability Language models can generate plausible content for a wide variety of tasks. However, even if plau- sible content is initially generated, a propagandist must either trust that a model will be highly reli- able—completing the task without making detectable errors—or apply consistent monitoring. But mod- 104. See Pengfei Liu et al., “Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing,” ACM Computing Surveys , September 2021, https: //doi.org /10.1145 /3560815, and “Prompt Engineering,” co:here, https: //docs.cohere.ai /docs/prompt-engineering for a popular explanation. 105. Long Ouyang et al., “Training language models to follow instructions with human feedback,” OpenAI , March 2022, https: //cdn.openai.com /papers /Training_language_models_to_follow_instructions_with_human_feedback.pdf. 106. Nitish Shirish Keskar et al., “CTRL: A Conditional Transformer Language Model for Controllable Generation,” arxiv:1909.05858 [cs.CL ], September 2019, https: //doi.org /10.48550 /arxiv.1909.05858. 107. For a broad overview of some approaches to this problem, see: Lilian Weng, “Controllable Neural Text Generation,” Lil’Log, January 2, 2021, https: //lilianweng.github.io /posts/2021-01-02-controllable-text-generation /. 30 els are often not reliable, and consistent monitoring introduces additional costs. As task complexity increases,108ensuring compliance becomes increasingly difficult. If models fail to consistently produce compelling outputs, propagandists may simply choose not to use them. These challenges then increase the demand for more skilled operators, who may be in short supply. An important caveat, however, is that not every task may require the same level of reliability. For example, deploying Twitter bots that sometimes produce incoherent tweets might be fine for a propagandist if the goal is to simply cause chaos around a targeted topic.109 Unreliable outputs show up in different forms, but the core takeaway is that although language mod- els can produce high-quality multiple-page documents, they cannot do so consistently. Common failure modes include extremely repetitive outputs, losing coherency over the course of a long output, or fabri- cating stylized facts that do not fit the generation context.110 One reason why models fail to consistently produce high-quality text is because they lack awareness of time and information about contemporary events. The current training regime for generative models trains them once on a large corpus of data, which means that models will not have context for events that occur after this key moment.111Ask a language system that was trained before COVID-19 about COVID-19, and it will simply make up plausible-sounding answers without any real knowledge about the events that unfolded. To address the problem of a lack of up-to-date information, AI researchers will likely pursue two basic approaches: either continually retrain models to account for new context, or develop new algorithms that allow for more targeted updates to a language model’s understanding of the world.112For in- stance, language models that are trained to be “time aware” can perform much better at handling recent trends, references to named entities, and concept drift—the way in which words can change in meaning overtime.113Since propagandists may be interested in shaping the perception of breaking news stories, significant improvements in how language models handle recent events not present in their initial train- ing data will translate directly into improved capabilities for influence operators across a wide number of potential goals. State-backed propagandists will also likely be interested in methods to adapt pretrained language models to new tasks, which would give them some assurance of reliability. Current methods to adapt models to new tasks require examples of those tasks, and use the examples to fine-tune a model to handle them well. For example, if a model performs unreliably on Spanish-language inputs, one might fine-tune that model on more examples of Spanish text. Efficiency Alongside improvements to usability and reliability, we expect improvements in the efficiency of lan- guage models, which will reduce the costs to automate some influence tactics. Models that can more 108. For example, imagine trying to convey to a model that its task is to take headlines and subtly rewrite them to be consistently biased toward a certain political ideology. 109. And if these errors do not make it easier to attribute or detect inauthentic behavior. 110. Ari Holtzman et al., “The Curious Case of Neural Text Degeneration,” arxiv:1904.09751 [cs.CL ], February 19, 2019, ISSN: 16130073, https: //doi.org /10.48550 /arxiv.1904.09751. 111. Bhuwan Dhingra et al., “Time-Aware Language Models as Temporal Knowledge Bases,” Transactions of the Association for Computational Linguistics 10 (March 2022): 257–273, ISSN: 2307387X, https: //doi.org /10.1162 /tacl_a_00459. 112. One example of this is what are known as retrieval-based methods, in which a language model is trained to retrieve knowledge from an external database. To achieve time-awareness, operators may simply need to update that external database. 113. Daniel Loureiro et al., “TimeLMs: Diachronic Language Models from Twitter,” arxiv.2202.03829 [cs.CL ], February 2022, 251–260, https: //doi.org /10.48550 /arxiv.2202.03829. 31 efficiently guess the next word for marketing copy can also more efficiently guess the next word for a polarizing article. Efficiency gains could come from many angles: algorithmic progress, hardware improvements, or the use of inexpensive fine-tuning to optimize relatively small models for influence operation-specific tasks.114 Other future improvements in the influence operations space could include organizational and opera- tional innovation. Organizations may improve human-machine collaboration by creating software that improves a propagandist’s ability to oversee, select, and correct the outputs of language models. Lan- guage models could be used as an autocorrect for cultural context, allowing operators to work with targets they are not familiar with, and allowing familiar actors to output a higher volume of credible content per unit time. The empirical details of efficiency will be important. Exactly how efficiently can generative models be trained? One measure of algorithmic progress in image classification found a 44x improvement over the course of nine years.115Even during the course of drafting this paper, research has come out that claims to train GPT-3 quality models for less than $500,000, which would represent a factor of 3–10x improvement.116If capabilities relevant to influence operations—generating persuasive text, fake personas, or altered videos—are achievable with significantly lower cost, then they are more likely to diffuse rapidly. Similarly, how efficient will an operation be by using language models as a complement to human content editors, rather than as a full substitute? The operational know-how and ease of editing might make it easier to scale up influence operations. 4.2.2 New and More General Capabilities for Influence As language models improve, it is likely that they will have newer and more general capabilities. In 2017, few expected that language models in 2022 would be able to add and multiply three-digit numbers without having been trained to do so.117Not surprisingly, we do not know what capabilities the language models of 2027 will have. In this section we discuss two critical unknowns related to this theme: 1. Which capabilities will emerge as side effects of scaling to larger models? If abilities directly appli- cable to influence operations—such as the ability to persuade via long-lasting dialogue—emerge as a side effect of simply scaling to larger models, then many AI projects are high risk—regardless of the goals of their creators. 2. How difficult is it to train generative models to execute the various capabilities that are useful for influence operations? If it is easy for generative models to learn skills (like writing viral or 114. On fine-tuning GPT-2, a smaller language model, to mimic several news sources with high accuracy, see Buchanan et al., Truth, Lies, and Automation: How Language Models Could Change Disinformation 14-15. Recent research has also explored more efficient methods of fine-tuning models, which could make it even easier to fine-tune models for influence operations tasks. 115. By one measure, between 2012 and 2019, algorithmic efficiency doubled every 16 months on average. The number of floating-point operations required to train a classifier to a given level decreased by a factor of 44x; see Danny Hernandez and Tom B. Brown, “Measuring the Algorithmic Efficiency of Neural Networks,” arxiv:2005.04305 [cs.LG ], May 2020, https: //doi.org /10.48550 /arxiv.2005.04305 116. Venigalla and Li, “Mosaic LLMs (Part 2): GPT-3 quality for <$500k.” 117. Jason Wei et al., “Emergent Abilities of Large Language Models,” arxiv:2206.07682 [cs.CL ], June 2022, https: //doi.org / 10.48550 /arxiv.2206.07682. 32 persuasive text) for influence operations, then the problem of defense becomes more urgent. New Capabilities as a Byproduct of Scaling and Research New capabilities for influence operations may emerge unexpectedly as language models are scaled up. One of the impressive scientific takeaways from recent progress in generative models is that training on a simple objective—predicting the next word or pixel—gives rise to adjacent, general capabilities. A system trained to predict the next word of an input text can also be used to summarize passages or generate tweets in a particular style; a system trained to generate images from captions can be adapted to fill in parts of a deleted image, and so on. Some of these abilities only emerge when generative models are scaled to a sufficient size.118 Today, we have single language systems that can summarize short texts, translate between languages, solve basic analogies, and carry on basic conversations; these capabilities emerged with sufficiently large language models.119It is difficult to predict when new capabilities will emerge with more scaling or even whether a given capability is present in a current system. Indeed, in a salient recent example, an engineer from Google became persuaded that the Google model he was interacting with was sentient.120These sorts of emergent capabilities seem hard to anticipate with generative models, and could be adapted by influence operators. Even more generally, as more actors begin to work on AI development with different motivations and in different domains, there is a possibility that some capabilities emerge as side effects of research. Because much AI development attempts to target more general capabilities, a small adjustment might suffice to uncover capabilities relevant to influence operations. For example, improvements in reasoning capabilities might also allow generative models to produce more persuasive arguments. Models Specialized for Influence Above, we described the possibility that scaling will (unintentionally) make language models better tools for influence operations. Another possibility is that propagandists will intentionally modify models to be more useful for tasks like persuasion and social engineering. Here, we mention three possible paths of improvement: targeted training, generality, and combinations with other technologies. The first likely improvement is targeted training. Generative models could be trained specifically for capabilities that are useful for influence operations. To develop these capabilities, perpetrators may choose to incorporate signals such as click-through data or other proxies for influence. These signals may be included in the training process, resulting in generative models more strongly optimized to produce persuasive text. Advertising and marketing firms have economic incentives to train models with this type of data, and may inadvertently provide the know-how for propagandists to do the same. Another form of targeted training would be to withhold or modify the information in the training data to affect how the trained model produces content. For example, suppose that a language model is trained with all mentions of a particular group occurring alongside false negative news stories. Then even innocuous deployments of products based on that language model–like a summarizer or customer support chatbot–may produce slanted text without being transparent to model users. 118. Wei et al., “Emergent Abilities of Large Language Models.” 119. Ganguli et al., “Predictability and Surprise in Large Generative Models.” 120. Nitasha Tiku, “The Google engineer who thinks the company’s AI has come to life,” Washington Post , June 11, 2022, https: //www.washingtonpost.com /technology /2022/06/11/google-ai-lamda-blake-lemoine /. 33 Targeted training may be less resource-intensive than training more general models. The difficulty of automating specific tasks is challenging to estimate and often defies intuition.121There is some prelim- inary evidence already that systems like GPT-3 can write slanted news articles—without being explicitly trained for that task.122It may be possible for future systems to be engineered to write extremely per- suasive, tailored texts, or carry on long-lived dialogue. In addition to targeted training, improvements in the generality of model capabilities are likely to have applications to influence operations. For example, one improvement in generality comes from simply combining different modalities into a single system: a single model that can consume and generate both images and text, for example. One can imagine instructing a bot built on such a system to ingest images on the internet, cleverly respond to them, produce completely fabricated images, and carry on a conversation—all at the same time. Finally, a more prosaic path to achieving new capabilities would be to simply combine generative models with other forms of automation. It is possible that using generative models as the “engine” for intelligent bots, along with software to accommodate for shortcomings, could lead to more human-like behavior. For example, a propagandist could write software to find and copy the Facebook profiles of people with interests compatible with the propaganda message, and use this to prompt the generative model. The development of this system may also benefit from integrating software that has already been developed separately, perhaps by chaining together smaller language models.123 4.2.3 Wider Access to AI Capabilities In understanding the impact of language models on influence operations in the future, a key considera- tion is which actors will have access to language models and what may precipitate their use in influence operations. We highlight three critical unknowns in this domain: 1. Willingness to invest in state-of-the-art generative models. Right now, a small number of firms or governments possess top-tier language models, which are limited in the tasks they can perform reliably and in the languages they output. If more actors invest in state-of-the-art generative models, then this could increase the odds that propagandists gain access to them. It is also possible that uncertain and risky investments could lead to the creation of systems that are much better at tasks relevant to influence operations. 2. The existence of unregulated tooling. Proliferation of easy-to-use interfaces to generate persuasive text or images can increase the adoption of generative models in influence operations. If these tools are developed, we are likely to see an earlier and broader uptick of generated content in influence operations. 3. Intent-to-use generative models for influence operations. As access to generative models increases, an actor’s willingness to use these models in influence operations might be an important constraint. 121. This observation is related to the well-known Moravec’s paradox: “Moravec’s paradox,” Wikipedia, accessed June 29, 2022, https: //en.wikipedia.org /wiki/Moravec%5C%27s_paradox. 122. For example, in some experiments to produce slanted text with GPT-3 in 2021, researchers experimented with generating articles from sources such as The Epoch Times ; see Buchanan et al., Truth, Lies, and Automation: How Language Models Could Change Disinformation . 123. Tongshuang Wu et al., “PromptChainer: Chaining Large Language Model Prompts through Visual Programming,” Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems , April 2022, https: //doi.org /10.1145 /3491101. 3519729. 34 If social norms do not constrain the use of models to mislead, then actors may be more likely to deploy models for influence operations. Willingness to Invest in Generative Models In Section 4.2.2, we outlined ways that language models could be leveraged for influence operations. First, propagandists could repurpose (or steal) state-of-the-art models with new and more general capa- bilities. Second, sophisticated propagandists could train models specifically for influence operations. In both cases, the application of generative models to influence operations may ultimately be constrained by different actors’ willingness to make large and potentially risky investments in developing generative models. To have an impact on influence operations, a large investment need not target generative models for influence operations specifically. An investment could simply target more general generative models for other purposes such as scientific discovery or commercial value. If many actors—such as governments, private firms, and even hyperwealthy individuals—develop these state-of-the-art language models, then that increases the odds that propagandists could gain access (legitimately or via theft) to models that can be repurposed for influence operations. For example, a propagandist could fine-tune a stolen model to produce persuasive text in different languages or in a particular domain. In the extreme case, the propagandist themself could be a well-resourced actor—like a determined coun- try—and make a risky and large investment in developing a generative model-based system specifically for influence operations. This may require extensive computational resources, bespoke data—such as user engagement metrics—and engineering talent. In either case, it may not be clear how feasible some engineering projects are; the timeline for advances may ultimately depend on whether propagandists decide to make uncertain investments in developing these generative models. While there are reasons why well-resourced actors might make large investments in developing models for influence, there are also reasons to forgo them. We are already reaching the point where the creation of convincing tweet-sized texts can be automated by machines. However, there could be diminishing returns for influence operations for more advanced capabilities, which would make large investments by propagandists specifically unlikely. For example, if most influence operations rely on a deluge of similarly short bits of content to sway attention-bound humans, there may be few incentives to develop generative models that can generate longer pages of human-like text. Greater Accessibility from Unregulated Tooling Even with nominal access to models, there will likely be some operational know-how required to use them. For example, applying GPT-3 to propaganda tasks requires fiddling with the exact inputs you give the system. To create a photorealistic image a few years ago, a propagandist would have had to run a model themselves on their own infrastructure. But packaging easy-to-use tools that do these tasks has since lowered the operational know-how required to apply generative models to influence operations. Today, anyone with access to the internet can obtain photorealistic AI-generated images from websites such as thispersondoesnotexist.com. AI-generated profile pictures (images of people) are now 35 commonplace in influence operations124and have also been used for deceptive commercial purposes.125 It is quite possible that had this easy-to-use tooling not been developed, influence operations would not have leveraged AI-generated profile pictures to add plausibility to their campaigns, or may not have done so to the same extent. An analogous lesson may apply to the use of language models for influence operations as well. If easy-to- use tools for language models proliferate, we may see propaganda campaigns rely on language models (that would otherwise not have). Easy-to-use tools that produce tweet- or paragraph-length text could lower the barrier for existing propagandists who lack machine learning know-how to rely on language models. Easy-to-use tools could also lead to the integration of new capabilities, such as automated chatbots deployed to troll targets determined by a bad actor. At the same time, the creation of easy- to-use language model tools could also lead to the proliferation of propagandists. Firms and private individuals who may once have avoided waging propaganda campaigns could now choose to do so because of declining costs. Norms and Intent-to-use The intent (or lack thereof) may be an important constraint on the application of generative models to influence operations. In the political science literature, a norm is a “standard of appropriate behavior for actors with a given identity.”126Scholars describe three stages for a norm to take hold internation- ally: norm emergence (a norm is built by norm entrepreneurs, or “people interested in changing social norms”127), a norm cascade (more countries rapidly adopt the norm), and internationalization of the norm (a norm becomes widely accepted and taken for granted.128) Studies show that norms constrain different types of state behavior that would be expected to take place by a cost-benefit analysis. Interna- tional security scholars have argued that norms have powerfully restrained state behavior—from using nuclear weapons, from more routine use of assassinations, and from widespread use of mercenaries.129 The notion that norms can constrain behavior in different facets of domestic and international life may provide a useful lesson for the use of language models for influence operations. Even if an actor has access to models that can easily be repurposed to create persuasive chatbots, and even if this can be 124. Shannon Bond, “AI-generated fake faces have become a hallmark of online influence operations,” NPR, December 15, 2022, https: //www.npr.org /2022 /12/15/1143114122 /ai- generated- fake- faces- have- become- a- hallmark- of- online- influence-operations. 125. Josh A. Goldstein and Renée DiResta, “This salesperson does not exist: How tactics from political influence operations on social media are deployed for commercial lead generation,” Harvard Kennedy School Misinformation Review 3 , no. 5 (September 2022), https: //doi.org /10.37016 /MR-2020-104. 126. Martha Finnemore and Kathryn Sikkink, “International Norm Dynamics and Political Change.,” International Organization 52, no. 4 (1998): 887–917, https: //www.jstor.org /stable /2601361. Norms involve two components: a prescription (what to do, or what not to do) and parameters (the situations under which the norm applies). For a description of this literature, see Vaughn P . Shannon, “Norms Are What States Make of Them: The Political Psychology of Norm Violation,” International Studies Quarterly 44, no. 2 (June 2000): 293–316, ISSN: 0020-8833, https: //doi.org /10.1111 /0020-8833.00159. 127. Cass R. Sunstein, “Social Norms and Social Roles,” Columbia Law Review 44 (1996): 909, https: //chicagounbound. uchicago.edu /cgi/viewcontent.cgi?article =12456&context =journal_articles. 128. Finnemore and Sikkink, “International Norm Dynamics and Political Change.” 129. Tannenwald famously argued that non-use of nuclear weapons since the bombing of Hiroshima and Nagasaki cannot be explained by deterrence, but rather is the result of a normative prohibition on the use of nuclear weapons. See: Nina Tannenwald, “The Nuclear Taboo: The United States and the Normative Basis of Nuclear Non-Use,” International Organization 53, no. 3 (1999): 433–468, https: //www.jstor.org /stable /2601286. (For evidence that challanges this theory, see Janina Dill, Scott D. Sagan, and Benjamin A. Valentino, “Kettles of Hawks: Public Opinion on the Nuclear Taboo and Noncombatant Immunity in the United States, United Kingdom, France, and Israel,” Security Studies 31, no. 1 (2022): 1–31, ISSN: 15561852, https: //doi.org /10.1080 /09636412.2022.2038663; Sarah Percy, Mercenaries: The History of a Norm in International Relations (Oxford University Press, October 2007), 1–280, ISBN: 9780191706608 36 done at minimal cost to them, an actor must still decide to actually build and deploy them. Norms could constrain political actors from using language models for influence operations, and they could encourage developers to inhibit the use of language models for influence operations where possible. Creating a norm that it is unacceptable to use language models for influence operations will likely require “norm entrepreneurs” to advocate this position. On the international level, this could be a coalition of states creating an agreement that they will not use language models for propaganda purposes. These states could devise mechanisms to punish those who fail to comply with the norm, or to reward those that join the coalition. On a substate level, machine language researchers or ethicists could also create a coalition to develop norms prohibiting the use of language models for influence operations. In fact, several AI researchers penned an open letter condemning activities like GPT-4chan,130explicitly citing the lack of community norms around the responsible development and deployment of AI as the reason to speak out.131Likewise, the marketing and PR industries could develop a norm against providing politicians AI-enabled influence operations as a service. 130. We discussed this incident in Section 4.1.1. In brief, a researcher fine-tuned a publicly accessible language model on 4chan posts and proceeded to automatically post over 30,000 times in three days. 131. Percy Liang, Rob Reich, and et al, “Condemning the deployment of GPT-4chan,” accessed July 22, 2022, https: //docs. google.com /forms /d/e/1FAIpQLSdh3Pgh0sGrYtRihBu-GPN7FSQoODBLvF7dVAFLZk2iuMgoLw /viewform?fbzx =16502134 17672418119. 37 5 Mitigations 5.1 A Framework for Evaluating Mitigations In this section, we move from describing the threat and attempt to outline a series of possible mitigations that could reduce the dangers of AI-enabled influence operations. Our goal here is to present a range of possible mitigations that various stakeholders could take to reduce the threat of AI-powered influence operations. Importantly, these mitigations are meant to be scoped to language models specifically, and we do not aim to articulate all the mitigations that could be taken to reduce the threat of misinforma- tion generally.132Nevertheless, it is important to emphasize that, while generative models could help propagandists produce some types of harmful content, influence operations do not need AI models in order to succeed. As such, mitigations discussed here should be viewed as complements to broader and ongoing counter-influence operations efforts. We group our mitigations based on four “stages” of the influence operation pipeline where they could be targeted: (1) model construction, (2) model access, (3) content dissemination, and (4) belief for- mation.133This grouping reflects that propagandists need four things to successfully use generative language models to shape the information ecosystem: first, there must be AI models capable of generat- ing scalable and realistic-looking text; second, operators must have regular and reliable access to such models; third, operators must have infrastructure in place to disseminate the outputs of those models; and fourth, there must be a target audience that can be influenced by such content. In Figure 4, we illustrate these points of intervention. For example, a threat actor can use generative model capabilities by accessing a model directly, building it themselves, or stealing the model. Any mitigation that intervenes at the Model Access stage should impact one or more of those three avenues. For each of these stages, we can think about how an influence operation might be disrupted by using the following sets of questions as starting points: •Model Design and Construction: How could AI models be built so they are robust against being misused to create disinformation? Could governments, civil society, or AI producers limit the proliferation of models capable of generating misinformation? •Model Access: How could AI models become more difficult for bad actors to access for influence operations? What steps could AI providers and governments take? •Content Dissemination: What steps can be taken to deter, monitor, or limit the spread of AI- generated content on social media platforms or news sites? How might the “rules of engagement” on the internet be altered to make the spread of AI-generated disinformation more difficult? 132. For one example document that has compiled many strategies and resources for anti-misinformation campaigns, see Vivian Bianco et al., Countering Online Misinformation Resource Pack (UNICEF Regional Office for Europe and Central Asia, August 2020), https: //www.unicef.org /eca/media /13636 /file. See also Kalina Bontcheva et al., Balancing Act: Countering Digital Disinformation while respecting Freedom of Expression (UNESCO, September 2020), https: //en.unesco.org /publications / balanceact. 133. There are other kill chain models that describe the ways disinformation operators conduct campaigns and how this process could be interrupted. See, for instance, Sedova et al., AI and the Future of Disinformation Campaigns: Part 2: A Threat Model ; Bruce Schneier, “Toward an Information Operations Kill Chain,” Lawfare, April 24, 2019, https: //www.lawfareblog. com/toward-information-operations-kill-chain. However, for the purposes of analyzing the impact of AI language models specifically on disinformation, we use this simplified kill chain model. 38 Figure 4: Stages of intervention of AI-enabled influence operations. To disrupt a propagandist’s use of language models for influence operations, mitigations can target four stages: (1) Model Design and Construction, (2) Model Access, (3) Content Dissemination, and (4) Belief Formation. Ultimately, inter- vening at these stages attempts to mitigate both the direct and indirect effects of influence operations. 39 •Belief Formation: If internet users are ultimately exposed to AI-generated content, what steps can be taken to limit the extent to which they are influenced? We evaluate each mitigation by paying specific attention to four categories: (1) technical feasibility, (2) social feasibility, (3) downside risk, and (4) impact—four key considerations that stakeholders should use to assess the desirability of pursuing any particular mitigation. In more detail: •Technical feasibility refers to the ability to implement a proposed mitigation on a technical level, without regard to social or political considerations. Some mitigations admit mature and low-cost technical solutions, while others require technical abilities that do not exist, are under question, or would require massive changes to existing technical infrastructure. •Social feasibility refers to the political, legal, and institutional feasibility of a particular mitiga- tion, assuming that it is technically possible to implement. The following questions serve as useful guides for assessing this metric: (1) Can the mitigation be successfully implemented unilaterally, without coordination across multiple independent actors? (2) Do the key actors who could im- plement the proposed mitigation have incentives in favor of doing so? (3) Would the proposed mitigation be actionable under existing law, regulation, and industry standards? Social feasibility will likely vary by region of interest. •Downside risk refers to the negative impacts, including via negative externalities and second- order effects that a mitigation may cause. Notable downside risks that apply to multiple potential mitigations include heightened forms of censorship, the risk of the mitigation itself being politi- cized, and the risk of bias (such as inadvertently promoting certain perspectives, cultures, or lan- guages over others). • Finally, impact attempts to evaluate how effective a proposed mitigation would be at reducing the threat of AI-enabled influence operations. For instance, the mitigation “identify all AI-written text on the internet and remove it” is neither technically nor socially feasible, but if it could be implemented, this strategy would completely mitigate the effect of AI-powered influence opera- tions (and thus have high impact). By contrast, “warn people about the dangers of AI-authored content” is much more feasible—but also far less impactful for reducing the effect of AI influence campaigns. Of note, we do not attempt to separate this list of mitigations into “worth trying” and “fine to ignore” categories. Individual stakeholders capable of implementing any of these strategies must weigh the pros and cons of doing so. We also encourage additional research to address mitigations that fall outside of our model. We do not lay out mitigations that could shape the distribution of threat actor intentions (e.g., norm development, threats of retaliation) nor that could reduce harms that result from new beliefs shaped by a successful influence campaign. These warrant additional attention, but are not captured by our model. In addition, we underscore that we discuss each mitigation in terms of who or what institutions would primarily be responsible for their implementation. But this leaves open the question of why these in- stitutions would implement certain mitigations—specifically, whether they would do so voluntarily or should be compelled by regulators to take certain actions. By framing these mitigations in terms of the enacting institutions, we do not mean to suggest that this problem should be left to the voluntary actions 40 Promise; if implemented... Limitation AI Developers Build Models With More Detectable Out- putsInfluence operations with lan- guage models will be easily discoverableTechnically challenging and requires coordination across developers AI Developers Build Models That Are More Fact-SensitiveLanguage models will be less effective at spreading false- hoodsTechnical methods are still being explored; may only impact some influence opera- tions Model Design & Construc- tionDevelopers Spread Radioac- tive Data to Make Generative Models DetectableMakes it easier to detect if content is AI generatedTechnically uncertain and may be easily circumvented Governments Impose Restric- tions on Training Data Collec- tionLimits creation of new models (but only for those in jurisdic- tions that comply)Data access restrictions would require high political will Governments Impose Access Controls on AI HardwarePrevents some future models from being developed alto- getherRestrictions on semiconduc- tors could escalate geopolit- ical tensions and hurt legiti- mate businesses AI Providers Impose Stricter Usage Restrictions on ModelsMakes it more difficult for propagandists to obtain cutting-edge models for cam- paignsRequires coordination across AI providers and risks hurting legitimate applications Model Access AI Providers Develop New Norms Around Model ReleaseRestricts access to future models, but unlikely to pre- vent propagandists from ob- taining already-public onesRequires coordinating across AI providers and could con- centrate capabilities among a small number of companies AI Providers Close Security VulnerabilitiesPrevents misuse and access of models via theft and tamper- ingOnly affects one route to model access Platforms and AI Providers Coordinate to Identify AI Con- tentIncreases the likelihood of de- tecting AI-enabled influence operationsWill not affect platforms that do not engage; may not work in encrypted channels Platforms Require “Proof of Personhood” to PostIncreases the costs of waging influence operationsCurrent proof of personhood tests are often gameable by determined operators Content DisseminationEntities That Rely on Public Input Take Steps to Reduce Their Exposure to Misleading AI ContentProtects entities relying on public inputs from AI-enabled campaignsSignificant changes to pub- lic comment systems could disincentivize participation Digital Provenance Standards Are Widely AdoptedIncreases detection of AI- generated contentSignificant changes would re- quire large-scale coordination Institutions Engage In Media Literacy CampaignsMitigates the impact of influ- ence operationsMay reduce trust in legitimate content Belief FormationDevelopers Provide Consumer-Focused AI ToolsIncreases the likelihood of people consuming high qual- ity informationAI tools may be susceptible to bias; users may become overly reliant on them Table 9: Summary of Example Mitigations and Selected Promise /Limitation 41 of AI developers and social media platforms. Updated regulations may be called for, and future research could unpack whether government intervention is needed (or desirable) for various mitigations. While we expect mitigations to be applicable across different countries, we focus below specifically on the United States to substantiate our points. 5.2 Model Design and Construction The first stage at which key stakeholders could attempt to disrupt the spread of AI-powered disinfor- mation is when language models are initially conceptualized and trained. How could these models be built differently (or how could they be limited from being built at all) such that it would become harder down the line to use them in influence operations? While the following mitigations might be useful, it is important to emphasize that the ability to construct these models is rapidly proliferating, as discussed in Section 4. Since most of these mitigations only affect the development of individual models—and getting consensus on any of these mitigations across all AI developers with the capability of constructing large language models will be very difficult—they generally score low on the metric of social feasibility. The most reliable method for ensuring that large language models are not used in influence operations is to simply not build large language models. Every other proposed change to the design and construction of these models will be less effective at preventing misuse than not building the model in the first place. However, a complete stop to the development of new large language models is extremely unlikely, and so we focus primarily in this section on how these models could be built differently to reduce the risk of misuse. 5.2.1 AI Developers Build Models With More Detectable Outputs Detecting AI-generated outputs of language models is currently a hard problem that is only getting harder as models improve.134However, some actions might be taken based on experiences in other AI subfields to increase the detectability of model outputs. In the subfield of computer vision, researchers at Meta have demonstrated that images produced by AI models can be identified as AI-generated if they are trained on “radioactive data”—that is, images that have been imperceptibly altered to slightly distort the training process. This detection is possible even when as little as 1% of a model’s training data is radioactive and even when the visual outputs of the model look virtually identical to normal images.135 It may be possible to build language models that produce more detectable outputs by similarly training them on radioactive data; however, this possibility has not been extensively explored, and the approach may ultimately not work.136 Rather than training on radioactive data, statistical perturbations might be introduced to a model’s output by directly manipulating its parameters, thereby distinguishing its outputs from normal text and 134. This is especially true for human detection. For example, researchers found a consistent trend that larger models produce text that is harder to distinguish from human written text; see Brown et al., “Language Models are Few-Shot Learners.” 135. Alexandre Sablayrolles et al., “Radioactive data: tracing through training,” 37th International Conference on Machine Learning, ICML 2020 PartF168147-11 (February 3, 2020): 8296–8305, https: //doi.org /10.48550 /arxiv.2002.00937. 136. There has been some success demonstrating that radioactive data can be used to induce certain types of behavior in language models; see Eric Wallace et al., “Concealed Data Poisoning Attacks on NLP Models,” Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , June 2021, 139– 150, https: //doi.org /10.48550 /arxiv.2010.12563. However, it is not clear whether radioactive data can be used to generate models whose outputs can be reliably attributed to them. 42 making detection easier. Past research has identified tools that can be used to detect statistical patterns in outputs from less advanced models such as GPT-2; however, as models become bigger and develop a richer understanding of human text, these detection methods break down if the parameters of the models themselves are not deliberately perturbed in order to enable detection.137 However, there are reasons to think that it is difficult to build either highly detectable language models or reliable detection models. Linguistic data—especially across relatively short snippets of text—is already more compressed than in images, with far less room to express the subtle statistical patterns that the Facebook researchers relied on to detect AI-generated images. Still, it is possible that research could identify methods to statistically “fingerprint” a language model.138But it is unlikely that individual social media posts will ever be attributable directly to an AI model unless such fingerprints are sufficiently sophisticated: if the patterns permitting such detection were possible, they risk being clear enough for operators to screen out.139However, these strategies for building more detectable models may still make it possible to attribute larger-scale corpora of text to specific models, though this remains an open question. Even if some models are designed or redesigned to produce outputs that are traceable at sufficient sizes, attackers could simply gravitate toward other models that are not similarly manipulated. For this mitigation to have a significant impact, it would require high levels of coordination across AI developers who have the ability to deploy large language models. Adversaries with the capability to create their own large language models may merely face additional costs, rather than a loss of capability. Furthermore, operating models that detect whether text is AI-generated represents a challenge, as these will have to be frequently updated to be reliable. 137. On detection, see Sebastian Gehrmann, Hendrik Strobelt, and Alexander M. Rush, “GLTR: Statistical Detection and Vi- sualization of Generated Text,” Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations , July 2019, 111–116, https: //doi.org /10.18653 /V1/P19-3019. However, similar statistical methods per- form less well for larger models such as GPT-3 and GROVER; see Leon Fröhling and Arkaitz Zubiaga, “Feature-based detection of automated language models: Tackling GPT-2, GPT-3 and Grover,” PeerJ Computer Science 7 (April 6, 2021): 1–23, ISSN: 23765992, https: //doi.org /10.7717 /peerj-cs.443. In addition, none of this research assumes a realistic, adversarial threat model, in which attackers are aware that their posts are being assessed to potentially attribute machine authorship. Under this more realistic scenario, attackers could deploy very easy countermeasures, such as altering temperature settings to sample from a wider distribution of possible outputs in order to evade detection. 138. Tao Xiang et al., “Protecting Your NLG Models with Semantic and Robust Watermarks,” arxiv:2112.05428 [cs.MM ], De- cember 10, 2021, https: //doi.org /10.48550 /arxiv.2112.05428. 139. As an example of a trivially circumventable strategy, AI developers could embed special “zero-width” characters in the outputs of their models, which would not immediately be visible to users but which would easily be spotted by automated monitoring tools. There is some research into the use of zero-width characters to attack large language models—see Nicholas Boucher et al., “Bad Characters: Imperceptible NLP Attacks,” 2022 IEEE Symposium on Security and Privacy , June 2022, 1987– 2004, ISSN: 10816011, https: //doi.org /10.48550 /arxiv.2106.09898; Luca Pajola and Mauro Conti, “Fall of Giants: How popular text-based MLaaS fall against a simple evasion attack,” Proceedings - 2021 IEEE European Symposium on Security and Privacy, Euro S and P 2021 , April 2021, 198–211, https: //doi.org /10.48550 /arxiv.2104.05996-but little research into their use as a defensive strategy, in large part because an attacker who was aware that such characters were being inserted into model outputs could easily just remove them before posting content online. 43 Criteria Assessment Technical Feasibility It is an open technical question whether developers will be able to build models that produce detectable outputs. Social Feasibility To be implemented effectively, detectable models would require input and coordination across deployers of large language models, which may be socially infeasible. Downside Risk There are few obvious downside risks to developing detectable models, assuming there is a low false-positive rate. Impact If most or all models are detectable, then influence operations with language models will be easily discoverable. 5.2.2 AI Developers Build Models That Are More Fact-Sensitive The dominant paradigm in natural language generation emphasizes “realism” in text generation over other possible values. Models are trained to generate text that effectively mimics (some subsample of) human text, without inherent regard for the truthfulness of the claims that it makes.140This means that false claims that are commonly believed may be just as likely for a model to produce as true claims under the current dominant approach to training language models.141 It may be possible to train AI models in such a way that they are incentivized to make more factually grounded claims, which could produce models that carry less risk of producing falsehoods even if they were accessible to bad actors.142Significant progress has been made in this area by training models that make use of web searches to improve the factual content of their responses, or that use reinforcement learning techniques to reward more factually correct responses—though these approaches embed their own set of biases about which claims count as “true” or “correct.”143Other methods attempt to modify the text output to be well-supported by evidence.144While these methods are far from perfect, they can significantly reduce the risk that language models will produce misinformation during ordinary usage. Nonetheless, most successful influence operations include, or build from, claims that have a kernel of truth.145Even a language model that produced no false claims could still be used to produce politically slanted or unfalsifiable statements, to shift public attention and discourse, or to engineer false beliefs due to selective context and inauthentic authorship. In fact, in the hands of the right operator, a model that stuck closely to the truth in its outputs might be more persuasive than a model that frequently lied. 140. For instance, language models trained on large quantities of internet text will be trained on a large amount of fiction, which can lead them to substitute creative writing for facts. 141. True claims are often a narrow target. Large language models such as GPT-3 are not necessarily truthful by default. See Buchanan et al., Truth, Lies, and Automation: How Language Models Could Change Disinformation . 142. Owain Evans et al., “Truthful AI: Developing and governing AI that does not lie,” arxiv:2110.06674 , October 13, 2021, https: //doi.org /10.48550 /arxiv.2110.06674. 143. Evans et al.; Ryan Lowe and Jan Leike, “Aligning Language Models to Follow Instructions,” OpenAI Blog, January 27, 2022, https: //openai.com /blog/instruction- following /; Jacob Hilton et al., “WebGPT: Improving the Factual Accuracy of Language Models through Web Browsing,” OpenAI Blog, December 16, 2021, https: //openai.com /blog/webgpt /. 144. Hannah Rashkin et al., “Measuring Attribution in Natural Language Generation Models,” arxiv:2112.12870 [cs.CL ], Au- gust 2, 2022, https: //doi.org /10.48550 /arxiv.2112.12870. 145. As Starbird, Arif, and Wilson write, “To be effective, a disinformation campaign must be based around a ‘rational core’ of plausible, verifiable information or common understanding that can be reshaped with disinformation—for example half-truths, exaggerations, or lies.” See Kate Starbird, Ahmer Arif, and Tom Wilson, “Disinformation as collaborative work: Surfacing the participatory nature of strategic information operations,” Proceedings of the ACM on Human-Computer Interaction Vol: CSCW, Article 127 , CSCW 2019, ISSN: 25730142, https: //doi.org /10.1145 /3359229. 44 And further, if this mitigation did meaningfully make it harder for propagandists to misuse language models, it would still require coordination across AI developers to ensure that malicious actors do not simply gravitate toward models that were not trained using similar methods. Finally, to be up to date with the current state of the world, models might have to be retrained very frequently—a requirement that may impose prohibitive costs. Criteria Assessment Technical Feasibility AI developers are exploring ways to make models more fact sensitive, with promising signs of improvement. Social Feasibility For the mitigation to be fully implemented, it would require a high degree of coordination between developers of models. Downside Risk If language models are more truthful, they may be more persuasive and in turn inadvertently improve the persuasive capabilities of propagandists. Impact More truthful language models may be less likely to spread blatant misinformation, but can still serve influence operations relying on true, non-falsifiable, or politically slanted content. 5.2.3 Developers Spread Radioactive Data to Make Generative Models Detectable Above, we described that AI developers could attempt to insert “radioactive data” into their datasets when training language models in order to create more detectable outputs. A drawback of this ap- proach is that it requires significant coordination—radioactive data must be inserted by each developer into their own training pipeline. Alternatively, AI researchers, media companies, or governments them- selves could choose to proliferate radioactive data directly onto the internet, in locations where it would likely be scooped up by any organization hoping to train a new language model.146This would require far less coordination and could potentially make AI outputs more detectable for all future language models. However, this would not affect models that have already been trained, and may be ineffective if developers take steps to filter their training data—a procedure that is common when training models. This strategy would require proliferators to engage in secretive posting of large amounts of content online, which raises strong ethical concerns regarding the authority of any government or company to deliberately reshape the internet so drastically. In addition, this mitigation would only affect language models trained in the same language in which the radioactive data itself was written. It is also unclear how much of the internet would need to be “radioactive” in this way to meaningfully affect models. And, perhaps most importantly, it remains deeply unclear if this approach would actually result in models with more detectable outputs, for the reasons discussed previously in Section 5.2.1. It seems likely that, even with the use of radioactive training data, detecting synthetic text will remain far more difficult than detecting synthetic image or video content. 146. Similar proposals have been advanced in the domain of visual deepfakes, as a way of increasing the likelihood that synthetic images produced from the most common models will be detectable to defenders. Hwang, Deepfakes: A Grounded Threat Assessment . 45 Criteria Assessment Technical Feasibility While approaches to inserting radioactive data exist for images, it is unclear if this would work for text. Social Feasibility A well-resourced actor could unilaterally spread radioactive content that would likely be included in training data for future models. Downside Risk Large-scale, secret proliferation of data online raises significant concerns about the desirability of any one group changing the distribution of content on the internet so drastically. Impact It is unclear whether this retraining would result in more detectable outputs, and thus detectable influence operations. 5.2.4 Governments Impose Restrictions on Data Collection The basis of any large language model is a vast quantity of training data in the form of text generated by real humans. While some of this data is typically taken from relatively structured sources such as Wikipedia, a large majority of data usually comes from tools like Common Crawl that scrape the web for publicly available text.147Regulatory or legal changes that would make this type of scraping more difficult to conduct might slow the growth of large language models, while simultaneously forcing de- velopers to focus on extracting information from more structured sources.148 These changes could be grounded in changes to federal data privacy laws. Regulations that require internet users to be informed about what their personal data is used for—such as the General Data Protection Regulation (GDPR) in the EU—may slow down large language model development.149At the extreme end, governments could try to prohibit organizations from mass scraping the web for content at all. More targeted measures could aim at improving cybersecurity for personalized data on social media 147. CommonCrawl freely publishes its archives of web data. See “So you’re ready to get started.,” Common Crawl, accessed June 27, 2022, https: //commoncrawl.org /the-data /get-started /. But anyone can build their own software for web scraping or use other tools to extract data from websites. 148. This would in turn have two follow-on effects: learning language from more factually grounded, more formal sources like online news or encyclopedia articles might make models more likely to produce true statements, while also making them significantly less capable of mimicking the language of highly specific target demographics. On using data restrictions to make language models more truthful, see Evans et al., “Truthful AI: Developing and governing AI that does not lie”: 63. 149. Article 14 of the GDPR requires companies that engage in web scraping of personal information regarding EU citizens to inform data subjects that their personal information has been collected and to grant them certain rights regarding the use of their data. See Regulation (EU) 2016 /679 of the European Parliament and the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95 /46/EC (General Data Protection Regulation), 2016 O.J. L 119 /1, art. 14. Major exemptions to this requirement do exist that would likely protect the scraping of textual data for the purposes of scientific research into language models (see ibid., art. 14(5)(b)); however, it is less clear to what extent GDPR may force companies looking to develop commercial AI models to identify impacted data subjects and expressly inform them of their inclusion in a training dataset. Due to the possibility of membership inference attacks on models that could be used to infer personal information about EU citizens, other components of the GDPR relating to protection of personal data may also be implicated in situations where AI developers use web scraping to create training datasets. For research into membership inference, see Nicolas Papernot et al., “Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data,” 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings , October 2016, https: //doi.org /10.48550 /arxiv.1610.05755; and Reza Shokri et al., “Membership Inference Attacks against Machine Learning Models,” Proceedings - IEEE Symposium on Security and Privacy , October 2016, 3–18, ISSN: 10816011, https: //doi.org /10.48550 /arxiv.1610.05820. At minimum, at least one company has been fined for non-compliance with Article 14 of the GDPR; see “Poland: First GDPR fine triggers controversial discussions,” ePrivacy Blog, May 17, 2019, https: //blog.eprivacy.eu /?p=544. This suggests that even if GDPR does not actually prohibit data scraping (including of personal information) for the purposes of language model construction, companies may feel that it is necessary to spend significantly more on lawyers and compliance efforts to avoid running afoul of the law. 46 platforms or prohibiting foreign acquisition of major platforms.150 These mitigations are significantly out of step with the current regulatory environment in the United States, which has not yet passed any comprehensive data privacy laws.151The Supreme Court has also recently ruled that scraping publicly available data from the web, even in violation of a terms of service agreement, does not violate the Computer Fraud and Abuse Act, the primary cybersecurity law in the United States.152Moreover, comprehensive data privacy laws that significantly affect the ability of lan- guage model developers to collect data may have large effects in other industries, while also having an uncertain ability to constrain developers outside of the United States. If implemented poorly, data pro- tection measures may harm researchers’ ability to detect and develop countermeasures against influence campaigns more than they hinder campaign planners.153 Beyond language models, it may be more feasible to regulate the collection or resale of image or video data. Specific state-level laws, like the Illinois Biometric Information Privacy Act (BIPA), restrict the ability of AI developers to scrape specific types of data—most often pictures of private individuals’ faces—without informed consent.154Such laws have occasionally resulted in successful legal action against AI developers, as when the ACLU successfully used BIPA to compel Clearview AI to screen out data from Illinois residents in its model training pipeline and to sharply limit access to its facial recog- nition tools within Illinois.155Limiting access to relevant training data can meaningfully disrupt the creation of models that can later be used maliciously; at the same time, to the extent that such limita- tions are possible at all, they will likely be feasible only for certain restricted sets of training data, such as social media posts or images of private individuals’ faces. Criteria Assessment Technical Feasibility Governmental policy to penalize data collection is likely possible without technical innovation; however, preventing access to internet-based training data is likely difficult. Social Feasibility More extreme forms of data access restrictions would require high political will. Downside Risk Limiting training data will negatively harm legitimate industries that may rely on language models or their training data and could undermine future detection models. Impact Without restricting data collection for all actors, impact is likely limited. 150. See Todd C. Helmus and Marta Kepe, “A Compendium of Recommendations for Countering Russian and Other State- Sponsored Propaganda,” RAND Corporation , June 2021, https: //doi.org /10.7249 /RR-A894-1; Chapter 1 in Eric Schmidt et al., Final Report (National Security Commission on Artificial Intelligence, 2021), https: //www.nscai.gov /wp- content / uploads /2021 /03/Full-Report-Digital-1.pdf#page =52, 50, 405; and Austin Mooney, “Spotlight On Sensitive Personal Data As Foreign Investment Rules Take Force,” National Law Review 11, no. 163 (February 18, 2020), https: //www.natlawreview. com/article /spotlight-sensitive-personal-data-foreign-investment-rules-take-force. 151. Thorin Klosowski, “The State of Consumer Data Privacy Laws in the US (And Why It Matters),” New York Times , Septem- ber 6, 2021, https: //www.nytimes.com /wirecutter /blog/state-of-privacy-laws-in-us /. 152. Supreme Court of the United States, “Van Buren v. United States,” October 2020, https: //www.supremecourt.gov / opinions /20pdf /19-783_k53l.pdf. 153. Nadya Bliss et al., “An Agenda for Disinformation Research,” arxiv:2012.08572 [cs.CY ], December 2020, https: //doi.org / 10.48550 /arxiv.2012.08572. 154. Biometric Information Privacy Act, 740 Ill. Comp. Stat. § 14 /1–25 (2008). 155. ACLU v. Clearview AI, Inc., 2020 CH 04353 (Cir. Ct. Cook City., Ill.). 47 5.2.5 Governments Impose Controls on AI Hardware Another path toward limiting the construction of large language models involves either limiting access to or monitoring the usage of AI hardware.156This could be achieved in a number of ways, including restrictions on the amount of computing power that individual organizations can use to train AI models, disclosure requirements for all AI projects requiring more than a certain threshold of computing power, or export controls on specialized chips. Monitoring computing power usage may be difficult; some estimates suggest that a model 200 times larger than the current largest language model could be trained using less than 0.5% of worldwide cloud computing resources.157Even if major expenditures of computing power could reliably be identified and tracked, this power is a highly general resource; there is currently little way to tell that an organization purchasing a large amount of computing power is planning to train a large language model as opposed to, say, running climate simulations. However, increasing differentiation between AI compute and non- AI compute could make this easier in the future.158 Monitoring for large models is currently a difficult task, but semiconductor manufacturing equipment (SME) export controls or restrictions on access to cloud computing resources are easier to implement. In October 2022, the US government announced export controls on semiconductors, SMEs, and chip design software directed at China.159These controls could slow the growth in computing power in China, which may meaningfully affect their ability to produce future language models. Extending such controls to other jurisdictions seems feasible as the semiconductor supply chain is extremely concentrated.160 Another (not mutually exclusive) restriction could involve mandating (or cloud computing companies could voluntarily implement) approval processes for projects requiring enough computing power to build a sophisticated language model. Even simply mandating stock and flow accounting of high-end AI chips could help identify which actors are capable of producing large language models. To be effective, export controls on computing hardware need to be properly enforced and handle cases such as stockpiling of chips, re-exports via other jurisdictions, and so on. Computing hardware restric- tions could also incentivize nation-states to accelerate their indigenous production of AI chips, though some reports argue that it is infeasible for China to scale up the domestic production of SME.161Fur- thermore, for the purpose of controlling language model development (or even AI development), export 156. See, for example, Miles Brundage et al., “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,” arxiv:2004.07213 [cs.CY ], April 2020, https: //doi.org /10.48550 /arxiv.2004.07213 157. Andrew Lohn and Micah Musser, AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress? (Center for Security and Emerging Technology, January 2022), https: //doi.org /10.51593 /2021CA009. 158. As one example, AI training may use lower-precision chips; see Shar Narasimhan, “NVIDIA, Arm, and Intel Publish FP8 Specification for Standardization as an Interchange Format for AI,” NVIDIA Technical Blog, September 14, 2022, https: // developer.nvidia.com /blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format- for-ai / 159. US Department of Commerce, Bureau of Industry, and Security, “Commerce Implements New Export Controls on Ad- vanced Computing and Semiconductor Manufacturing Items to the People’s Republic of China (PRC),” Press Release , October 7, 2022, https: //www.bis.doc.gov /index.php /documents /about-bis /newsroom /press-releases /3158-2022-10-07-bis-press- release-advanced-computing-and-semiconductor-manufacturing-controls-final /file; US Department of Commerce, Bureau of Industry, and Security, “Implementation of Additional Export Controls: Certain Advanced Computing and Semiconductor Manufacturing Items; Supercomputer and Semiconductor End Use; Entity List Modification,” Docket No. 220930-0204, RIN 0694-AI94 , October 13, 2022, https: //public-inspection.federalregister.gov /2022-21658.pdf. 160. Saif M. Khan and Carrick Flynn, Maintaining China’s Dependence on Democracies for Advanced Computer Chips (Center for Security and Emerging Technology, April 2020), https: //cset.georgetown.edu /publication /maintaining-chinas-dependence- on-democracies-for-advanced-computer-chips /. 161. Khan and Flynn. 48 controls on hardware are a blunt instrument and have far-reaching consequences on global trade and many non-AI industries.162Finally, it is worth keeping in mind that often the most impactful propa- gandists—governments themselves—are those with the capability to plausibly circumvent the hardware restrictions mentioned above. Criteria Assessment Technical Feasibility Some hardware-related controls would not require any technical innovation; however, this likely varies significantly. Social Feasibility Restrictions on semiconductors and SMEs have been applied to China; cloud computing restrictions could also be done unilaterally or voluntarily. Downside Risk Export controls on semiconductors or semiconductor manufacturing equipment could escalate geopolitical tensions and hurt legitimate businesses. Impact US export controls would largely affect the development of future language models in other jurisdictions. 5.3 Model Access Once models are built, developers can choose how users interact with them. AI providers have some actions available to them that might reduce bad actors’ access to generative language models. At the same time, these actions could be highly costly for organizations looking to commercialize their models and would require large amounts of cooperation across all relevant AI providers to ensure that propa- gandists could not simply gravitate toward other equally capable models without similar restrictions in place. 5.3.1 AI Providers Impose Stricter Controls on Language Models As discussed in Section 2, the access regimes governing today’s large language models generally fall into one of three categories: fully private, fully public, or private but accessible under restricted conditions, such as the use of gated API access. Access to many of the most powerful current large language models is partially available through APIs, which provides developers with a number of choices regarding potential access or use restrictions that could be imposed upon their models: 1. Developers could require potential users to submit the proposed purposes for which they intend to use a model, and revoke access if actual usage appears to diverge too far from this proposal. This type of restriction was originally a core component of OpenAI’s API access regime, though it has since been replaced with a faster, more automated sign-up process.163 2. Even if the above proposal is adopted, individuals granted API access may often seek to build applications—for instance, chatbots—that give other end users the ability to indirectly input text 162. Jordan Schneider and Irene Zhang, “New Chip Export Controls and the Sullivan Tech Doctrine with Kevin Wolf,” Chi- naTalk, October 11, 2022, https: //www.chinatalk.media /p/new-chip-export-controls-explained. 163. Bryan Walsh, “OpenAI’s GPT-3 gets a little bit more open,” Axios, November 18, 2021, https: //www.axios.com /2021 / 11/18/openai-gpt-3-waiting-list-api. 49 to a model. These types of applications may indirectly expose the model to bad actors. Developers could therefore impose access restrictions that forbid API users from creating applications that give other users the ability to input arbitrary text to the model. 3. Developers might choose to restrict model access to only trusted institutions, such as known com- panies and research organizations, and not to individuals or governments likely to use their access to spread disinformation. Huawei initially appears to have intended an access regime along these lines for its PanGu- amodel.164 4. Developers could further limit the number of outputs that individual users can generate within a certain period of time, or they could require review of users who seem to be submitting anoma- lously large numbers of queries. This would limit the scale of influence operations that rely on language models, but might not prevent their use in more tailored cases (such as generating a smaller number of news articles). 5. Where API access is granted, developers might also impose restrictions on the types of inputs that users are allowed to submit. For instance, the image-generating model DALL•E 2 attempts to screen out user-submitted queries that are intended to produce “violent, adult, or political” outputs.165Such efforts may require significant effort to keep them up to date as new controversial issues arise. This does not represent an exhaustive list of potential access restrictions. All such restrictions, however, share certain downsides. First, effective restrictions may be difficult for developers to implement, espe- cially if they require manual review or appeal processes. Second, organizations looking to commercialize their models have strong incentives to forego onerous review processes on potential customers. Third, user restrictions are only effective if enough institutions implement strong enough access restrictions to box out bad actors; otherwise, propagandists can simply gravitate toward models with less severe restrictions. In other words, this proposed mitigation has the makings of a classic collective action problem: the most effective outcome requires coordination across multiple actors, each of whom has incentives to default. In addition, the proposal can only be effective so long as there are no publicly released models that are as effective and easy to use as those maintained by AI developers behind API restrictions. However, if public models are sufficient for propagandists, then this mitigation will likely be less effective. Despite these limitations, strong industry norms—including norms enforced by industry standards or government regulation—could still make widespread adoption of strong access restrictions possible. As long as there is a significant gap between the most capable open-source model and the most capable API-controlled model, the imposition of monitoring controls can deny hostile actors some financial ben- efit.166Cohere, OpenAI, and AI21 have already collaborated to begin articulating norms around access to large language models, but it remains too early to tell how widely adopted, durable, and forceful these guidelines will prove to be.167 164. Wiggers, “Huawei trained the Chinese-language equivalent of GPT-3.” 165. “Curbing Misuse at Dall-E 2,” OpenAI, accessed June 27, 2022, https: //openai.com /dall-e-2 /. 166. For a quantitative justification as to why, even if there are good public models available, restrictions on access to (better) private models can still impose non-negligible costs on propagandists, see Musser, “A Cost Analysis of Generative Language Models and Influence Operations.” 167. “Best Practices for Deploying Language Models,” Cohere, June 2, 2022, https: //txt.cohere.ai /best-practices-for-deployi ng-language-models /. 50 Finally, there may be alternatives to APIs as a method for AI developers to provide restricted access. For example, some work has proposed imposing controls on who can use models by only allowing them to work on specialized hardware—a method that may help with both access control and attribution.168 Another strand of work is around the design of licenses for model use.169Further exploration of how to provide restricted access is likely valuable. Criteria Assessment Technical Feasibility Some AI developers already restrict usage of models behind APIs. Social Feasibility Limiting how AI providers’ language models are used reflects a collective action problem: it requires coordination across AI providers, each of whom has an incentive to defect. Downside Risk Limiting access concentrates more power in the hands of a few AI providers and risks undermining those who could benefit from model use. Impact If AI developers are governed by norms of restricted use, it could mitigate the potential of AI-enabled influence operations. However, this assumes comparable open-source model developers do not exist. 5.3.2 AI Providers Develop New Norms Around Model Release Traditionally, AI researchers have felt bound by what Thomas Merton referred to as the “communism of the scientific ethos,” a norm that holds that a willingness to share information in the interests of full and open collaboration is integral to the scientific enterprise.170This norm is not merely a behavioral quirk of scientists; the free and open flow of information is critical for the advancement of science and technology as a whole, and progress in AI has long rested on strong norms of openness and collaboration. But as AI models become increasingly lucrative, this norm is challenged by a competing instinct to privatize models and data in order to commercialize them. In addition, norms of openness in AI research are challenged by safety concerns associated with powerful models that open up new attacks, including the scalable epistemic attacks made possible by powerful language models.171 Norms regarding data sharing and model release are currently in flux, largely due to progress in large language models. OpenAI has twice broken previous norms regarding model release, first by choosing to delay a full release of GPT-2 in order “to give people time to assess the properties of these models, discuss their societal implications, and evaluate the impacts of release after each stage,” and then again a year later by choosing not to release GPT-3 at all, instead commercializing it behind an API paywall.172 Both of these decisions drew serious criticism at the time, though the use of an API in lieu of a full model release now appears to be somewhat common among AI providers capable of producing cutting-edge 168. Huili Chen et al., “DeepAttest: An end-to-end attestation framework for deep neural networks,” Proceedings of the 46th International Symposium on Computer Architecture , June 2019, 487–498, ISSN: 10636897, https: //doi.org /10.1145 /3307650. 3322251. 169. “Responsible AI Licenses (RAIL),” Responsible AI Licenses (RAIL), accessed September 14, 2022, https: //www.licenses. ai/. 170. Robert K. Merton and Norman W . Storer, The Sociology of Science: Theoretical and Empirical Investigations (Univ. of Chicago Press, 1973). 171. Percy Liang et al., “The Time Is Now to Develop Community Norms for the Release of Foundation Models,” 2022, https: //crfm.stanford.edu /2022/05/17/community-norms.html. 172. Alex Radford et al., “Better Language Models and Their Implications,” OpenAI Blog, February 14, 2019, https: //openai. com/blog/better-language-models /. 51 language models.173In the domain of text-to-image models, a sitting member of Congress recently urged the US National Security Advisor and the acting director of the Office of Science and Technology Policy to address the “unsafe releases” of text-to-image models that do not have content restrictions, because they have been used to generate dangerous images.174 While we do not make specific claims about the substance of desirable research norms, a growing body of research is dedicated to examining the types of norms that could be developed to govern AI research, es- pecially in the sphere of large language models. These norms could include staged release of models, the adoption of tradeoff frameworks to assess the risks of open-sourcing models, mechanisms for accepting public feedback and reports of misuse, and prepublication safety review.175Implementing any of these norms may require new institutional mechanisms, such as a Partnership on AI-style176organization for natural language processing researchers, the creation of a clear set of principles around issues like data collection and model release for large language models, or formal principles regarding what type of risk assessment is expected of AI developers prior to model release.177These institutional mechanisms could help solidify new norms around model design, model release, and model access and would have the potential to significantly impact the ability of propagandists to make use of large language models. Criteria Assessment Technical Feasibility This mitigation does not require technical innovation. Social Feasibility The development of norms around language model release for cutting-edge models requires coordination, and open-source developers may choose to ignore those norms. Downside Risk Norms that restrict model release may concentrate know-how in the hands of a smaller number of AI providers and impede beneficial AI progress. Impact The mitigation would be useful for restricting access to current and future cutting-edge models, but this is unlikely to prevent propagandists from gaining access to already-public models. 5.3.3 AI Providers Close Security Vulnerabilities Actors seeking to make use of AI-generated content for propaganda may not be constrained by formal access restrictions to relevant models and research. They may employ covert espionage to steal mod- els and information that will enable construction of their own models, or they may aim to engage in 173. Jeremy Howard, “Some thoughts on zero-day threats in AI, and OpenAI’s GPT-2,” fast.ai, February 15, 2019, https: //www.fast.ai /posts/2019-02-15-openai-gp2.html; “OpenAI Trains Language Model, Mass Hysteria Ensues,” Approximately Correct, February 17, 2019, https: //www.approximatelycorrect.com /2019 /02/17/openai-trains-language-model-mass- hysteria- ensues /; “Microsoft’s First GPT-3 Product Hints at the Commercial Future of OpenAI,” TNW, June 5, 2011, https: //thenextweb.com /news/microsofts-first-gpt-3-product-hints-commercial-future-openai-syndication. 174. “Representative Anna Eshoo to Jake Sullivan and Alondra Nelson,” September 20, 2020, https: //eshoo.house.gov /sites/ eshoo.house.gov /files/9.20.22LettertoNSCandOSTPonStabilityAI.pdf. 175. Irene Solaiman et al., “Release Strategies and the Social Impacts of Language Models,” arxiv:1908.09203 [cs.CL ], August 2019, https: //doi.org /10.48550 /arxiv.1908.09203; Aviv Ovadya and Jess Whittlestone, “Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning,” arxiv:1907.11274 [cs.CY ], July 2019, https: //doi.org /10.48550 /arxiv.1907.11274. 176. “Partnership on AI,” Partnership on AI, accessed October 29, 2022, https: //partnershiponai.org /. 177. For one example of risk assessment for synthetic media, see “C2PA Specifications: C2PA Harms Modelling,” Coalition for Content Provenance and Authenticity, accessed September 14, 2022, https: //c2pa.org /specifications /specifications /1.0/ security /Harms_Modelling.html. 52 cyberattacks or other forms of sabotage that allow them to manipulate the outputs of already existing language models.178For instance, language model poisoning or supply chain attacks on AI providers may allow adversaries to output propaganda from language models they do not possess—manipulating them from afar.179Similarly, threat actors may also seek to obtain access to cutting-edge, non-public generative models through human vulnerabilities and insider threats at AI institutions. By developing or hiring groups to simulate adversary attempts to gain access to cutting-edge model ca- pabilities, AI providers can identify and reduce vulnerabilities. Such red-teaming exercises should search not just for cybersecurity vulnerabilities, but also ways in which insider threats or mathematically so- phisticated attacks on the AI training process could result in compromised models. Such red teaming can inform a holistic assessment on the risk of the model being misused or applied to produce propaganda. However, while red teaming may successfully identify some vulnerabilities, it is unlikely that all can be caught, and for many types of vulnerabilities that appear to be inherent in modern AI systems, it is un- clear how successful any form of technical mitigation can be. Moreover, closing security vulnerabilities is only useful in the context of AI models that have not been made publicly available, as propagandists looking to make use of public models would not need to surreptitiously steal or compromise such models. Criteria Assessment Technical Feasibility Some red-teaming exercises can be performed today, but some defense methods for protecting valuable cyber assets remain research problems. Social Feasibility Individual AI developers can implement this mitigation unilaterally. Downside Risk There are no obvious downside risks. Impact Closing security vulnerabilities is useful insofar as future models are superior for propaganda purposes than already-public models. 5.4 Content Dissemination AI-generated content is ultimately only a threat if it reaches and influences real human beings. In general, the interventions most likely to slow the spread of AI-generated propaganda may be those that could be successful against all propaganda, AI-generated or not. However, in this section, we briefly outline a few mitigations that might specifically manage to slow the spread of AI-authored content. 5.4.1 Platforms and AI Providers Coordinate to Identify AI Content It is not clear how companies should respond if or when they judge that content on their platforms was generated by a language model. There are a wide number of plausibly legitimate use cases for AI-generated content on social media, including brand chatbots designed to provide customer service, 178. For a taxonomy of the progression of machine learning vulnerabilities to adversarial influence and a series of case studies on these threats, see “MITRE | ATLAS,” MITRE, accessed October 29, 2022, https: //atlas.mitre.org /. 179. For instance, in a “model spinning” attack, a threat actor can modify the model to output manipulated narratives whenever a user inputs an adversary-selected trigger word, all without compromising performance. See Bagdasaryan and Shmatikov, “Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.” For a general overview of the types of attacks that can be used to target the mathematical peculiarities of AI systems, see Andrew Lohn, Hacking AI: A Primer for Policymakers on Machine Learning Cybersecurity (Center for Security and Emerging Technology, December 2020), https: //doi.org /10.51593 /2020CA006. 53 comedy bots meant to mimic or parody specific authors, or auto-generated news announcements.180 For this reason, it is unlikely that social media platforms would choose to simply issue a blanket ban on AI-generated content.181 Even if platforms do not issue blanket bans, they might still build in rules regarding appropriate uses of language models into their terms of service. Should accounts generating automated content be required to publicly disclose the origin of content they post? Should posts determined to have been authored by an AI be flagged?182If platforms know that certain external sites host AI-generated content—especially content of a political nature—without disclosing it as such, might that be in itself sufficient grounds to block links to those sites? All of these interventions could be plausible ways to reduce the spread of AI-generated misinformation—assuming it can be identified as such. Actually detecting content that comes from an AI model, however, is not trivial. Without the aid of AI developers, social media platforms trying to identify machine authorship would be restricted to merely looking for statistical patterns in text and user metadata.183Current tools for this do not provide the level of confidence that would likely be required for platforms to take disruptive action against accounts, do not work on texts the length of a typical social media post, and are likely to perform worse as models improve.184 However, collaboration between platforms and AI companies may make detection of larger-scale cam- paigns using AI generation more feasible. For instance, model owners might store outputs so that they 180. Some of these types of uses already exist; for instance, the account dril_gpt2 on Twitter (https: //twitter.com /dril_gpt2) uses GPT-2 to generate tweets in the style of the dadaist Twitter comedian dril. 181. Some social media companies have restrictive policies around the posting of AI-generated images, but even these poli- cies are usually only applicable in certain cases—most commonly, when there is an (assumed) intent to deceive behind the production of the image. See, for instance, Monica Bickert, “Enforcing Against Manipulated Media,” Meta , January 6, 2020, https: //about.fb.com /news /2020 /01/enforcing-against-manipulated-media /, which contains the following explicit exemp- tion: “This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.” The same type of reasons that have led social media companies to avoid adopting blanket bans on AI-generated visual content will also make blanket bans on AI-generated text content unlikely. 182. The impact of flagging content as AI-generated on audiences’ belief formation processes is unknown and may be unintu- itive; in one study, for instance, researchers found that survey respondents were just as likely to view “AI-generated” profiles of Airbnb hosts as trustworthy, compared to human-authored profiles. However, when respondents were told that some profiles were human-authored and some were AI-generated, they viewed the profiles they believed were AI-generated as less trust- worthy than human-authored profiles. Maurice Jakesch et al., “AI-mediated communication: How the perception that profile text was written by AI affects trustworthiness,” CHI ’19: Proceedings of CHI Conference on Human Factors in Computing Systems , May 2019, https: //doi.org /10.1145 /3290605.3300469. 183. Humans and machine learning-based detection systems differ in their respective competencies, and can currently perform better at detection together by covering each other’s blindspots. See Daphne Ippolito et al., “Automatic Detection of Generated Text is Easiest when Humans are Fooled,” arXiv:1911.00650 [cs.CL ], July 2020, 1808–1822, https: //doi.org /10.48550 /arXiv. 1911.00650. 184. One possible statistical method for identifying AI-generated text is provided by Hendrik Strobelt and Sebastian Gehrmann, “Catching a Unicorn with GLTR: A tool to detect automatically generated text,” gltr.io , accessed October 29, 2022, http: //gltr. io/. But this method assumes that language models will sample text from a relatively constrained distribution, such that the likelihood of unpredictable word patterns ends up significantly lower than is observed in authentic human text. As language models become larger, however, they become capable of accurately modeling a larger distribution of text, decreasing the risk that they will fall into noticeable “most-likely-next-word” ruts. Additionally, many language models permit users to directly manipulate a “temperature” setting, which directly serves to sample from a more unpredictable range of next word outputs when generating text, thereby evading this detection tool more directly. 54 can be traced back to the users who generated them.185Social media companies could then flag content on their platforms that they suspect may be inauthentic and work with AI companies to determine if any was generated by a language model. This type of collaboration could have follow-on benefits: once an AI company ascertains that a user is reposting outputs to social media, they can work with platforms to determine if other content generated by that user has been reposted to other social media platforms, potentially catching other coordinated inauthentic accounts that the platforms may initially have missed. This strategy would miss content that is posted to encrypted social media platforms, such as WhatsApp channels. In addition, disinformation is also posted to social media platforms that do not support robust search features and are unlikely to cooperate with AI companies to monitor content, such as Parler and Gab, though it may still be possible to scan public posts on these sites for potential AI-generated content.186Without collaboration from the platforms themselves, this mitigation strategy may have only a limited impact. Despite these drawbacks, partnerships between platforms and AI companies have certain advantages. Unlike imposing onerous up-front access restrictions, this type of monitoring is less likely to alienate potential users from signing up for API access to a language model, which may make it more attractive to AI companies. While bad actors may want to avoid using AI services that engage in this type of monitoring, AI companies can more easily maintain some secrecy about how they monitor for reposted content, making it harder to evade monitoring mechanisms. Criteria Assessment Technical Feasibility Versions of this implementation may be feasible for monitoring publicly posted content, but may be infeasible for encrypted social media channels. Social Feasibility Coordination between AI developers and social media companies requires a significant number of bilateral partnerships. Downside Risk There are few obvious downside risks assuming the detection models are accurate. If not, they risk flagging the wrong accounts. Impact The impact depends on the extent of collaboration between platforms and AI companies, and will not cover all social platforms. 5.4.2 Platforms Require “Proof of Personhood” to Post Current policies regarding social media usage range from not requiring any form of registration to re- quiring that accounts be affiliated with real names and unique email addresses, and, at times, requiring 185. This strategy will necessarily be imperfect, as propagandists can always make small or trivial changes to model outputs before posting them to social media. If detection relies on hash matches, operators may easily evade detection by doing so. However, not all operators may be savvy enough to realize that detection makes use of hashes, so this strategy may still have some usefulness. Relying on close-but-not-exact matches to output text, by contrast, introduces a higher level of statistical uncertainty in attribution, though at sufficient scales, campaigns with slightly altered text could still be linked to the use of an AI model with meaningful confidence. 186. For analyses of Parler and Gab, including an overview of the extent of their content moderation practices, see David Thiel et al.,Contours and Controversies of Parler (Stanford Internet Observatory, 2021), https: //fsi.stanford.edu /publication /contours- and- controversies- parler and David Thiel and Miles McCain, Gabufacturing Dissent: An in-depth analysis of Gab (Stanford Internet Observatory, 2022), https: //cyber.fsi.stanford.edu /publication /gabufacturing-dissent-an-in-depth-analysis-of-gab. 55 users to submit “video selfies” for proof of personhood.187However, any of these approaches can be cir- cumvented by malicious actors: they can register many “burner” email addresses to create fake accounts and hire inexpensive labor to complete proof of humanness checks. Platforms could, however, more uniformly require higher standards of proof of personhood in order to verify that content is not being produced by an AI and reposted to their sites. This could involve requiring more reliable forms of authentication when users sign up for an account, for instance, by asking a user to take a live video of themselves posing, or asking for some alternative form of biometrics. Alternatively, platforms could require users to occasionally pass tests to demonstrate humanness before posting content; these tests could either be administered randomly, at periodic intervals, or when a particular user is posting at a high volume. CAPTCHAs are one way to demonstrate humanness in this way; however, a determined adversary can cheaply circumvent them. Outside of tests, another proposed approach includes decentralized attestation of humanness.188 This mitigation would not make it impossible for propagandists to copy-paste content from a language model into a social media platform and post it. Instead, it would be meant to disrupt operational setups that rely on bots that directly query and post content from language models without explicit human intervention. While this may only describe a minority of influence operations, having such a fully au- tomated capability might be useful to propagandists; for instance, an account could be configured to query a language model every few hours or days for an anodyne post with the intention of posting it directly to a social media platform. Operators would then need only log in every so often to post more explicitly political content, having fully automated the problem of enmeshing those political posts in a more realistic-seeming environment of unrelated content. Requiring checks to post content could meaningfully disrupt this type of operational setup. There are several significant limitations to this mitigation, including potential infringements on privacy, limits to the types of operations it would mitigate, and limits to its effectiveness against operations by determined adversaries. First, from a privacy perspective, user authentication requirements would likely face resistance from users who are accustomed to an expectation of anonymity online, including users who hold such expectations for very legitimate reasons. Second, hummanness verifications are designed to address operations that rely on social media accounts to spread generated content, but do not affect other information channels—like email or fake news websites. Third, as mentioned above, for well- resourced actors like the Internet Research Agency, the costs of proof of humanness requirements may not be meaningful deterrents: purchasing a new SIM card or hiring cheap outsourced labor to pass a video test will not prevent these campaigns. Finally, this mitigation introduces an underexplored potential for backlash: If platforms include a proof of humanness check, and propagandists pass such a check, the successful completion could increase the perceived credibility of the account—increasing the persuasive effect from the account in question. Future research could address this question directly. 187. “Why you might be asked to upload a video selfie to confirm your identity on Instagram,” Facebook Help Centre, accessed October 29, 2022, https: //m.facebook.com /help/1053588012132894. 188. “The Internet Of Humans,” Proof Of Humanity, accessed October 29, 2022, https: //www.proofofhumanity.id /. 56 Criteria Assessment Technical Feasibility Various forms of human authentication have been piloted (and implemented) already. Social Feasibility Social media platforms and other websites can implement this mitigation unilaterally. Downside Risk More extreme forms of this mitigation would undermine online anonymity, which can stifle speech and undermine other human rights. Impact The impact depends on the specific implementation: basic CAPTCHA-like tests are gameable, but more novel implementations may increase costs of waging AI-enabled influence campaigns. 5.4.3 Entities That Rely on Public Input Take Steps to Reduce Their Exposure to Misleading AI Content Many entities in society rely on public input for feedback, evidence of group beliefs, and legitimacy. For example, when making decisions that affect the community, local planning commissions often seek public comment to make informed decisions.189Similarly, private firms often ask for feedback on products, and media outlets often ask for tips on the issues of the day. The processes that these entities use for public comment constitute potential vectors for the abuse of language models to generate “comments” from the public in order to sway policymakers, local officials, or private entities. Indeed, there have already been cases in which mass inauthentic comment campaigns have been iden- tified in the US government, most notably when various technology companies submitted millions of comments to the FCC in 2017 regarding net neutrality, falsely using real customers’ names to provide a veneer of legitimacy to the comments.190Comments generated by a large language model would be more difficult to identify as coordinated, since the comments in the FCC case followed a standard out- put and merely swapped synonyms for one another. As such, some level of reform to mechanisms for soliciting public input may be called for. At the lowest end, this reform could simply involve making entities that solicit public comment more aware of the potential for inauthentic content being submitted that poses as public opinion. At the same time, this may have negative externalities: priming policymakers to be suspicious of public input, for example, may itself undermine democratic responsiveness.191Organizations soliciting public input might instead choose to implement stronger methods than common CAPTCHAs to ensure that public comments are authentic; currently, many US agencies simply assume that comments are legitimate and 189. In the US context, each branch of the US government has mechanisms for soliciting input from members of the public. For Congress, the most common form of input is constituent calls or emails to their representatives; for the judicial system, the amicus brief provides a means for non-parties to a case to comment on its merits; and for executive agencies, the period of public comment required by the Administrative Procedures Act (APA) allows agencies to understand how affected parties might view proposed regulations. 190. Jon Brodkin, “ISPs Funded 8.5 Million Fake Comments Opposing Net Neutrality,” Wired, May 8, 2021, https: //www. wired.com /story/isps-funded-85-million-fake-comments-opposing-net-neutrality /. 191. Steve Balla et al., Mass, Computer-Generated, and Fraudulent Comments (Report to the Administrative Conference of the U.S., June 17, 2020), https: //regulatorystudies.columbian.gwu.edu /mass-computer-generated-and-fraudulent-comments-0. 57 perform no follow-up on submitted comments.192Here, entities inviting comment will have to ensure that attempts to prevent AI-generated comments do not create frictions that prevent members of the public from participating.193 Criteria Assessment Technical Feasibility Basic defenses—like user authentication—to prevent bots from overwhelming public comment boards already exist. Social Feasibility Policy change will likely require coordination across multiple parts of government. Downside Risk Significant changes may disincentivize members of the public from participating in public comment periods. Impact The impact varies depending on the specific implementation, but could make public input solicitation much more robust. 5.4.4 Digital Provenance Standards Are Widely Adopted Because technical detection of AI-generated text is challenging, an alternate approach is to build trust by exposing consumers to information about how a particular piece of content is created or changed. Tools such as phone cameras or word processing software could build the means for content creators to track and disclose this information.194In turn, social media platforms, browsers, and internet protocols could publicize these indicators of authenticity when a user interacts with content. This intervention requires a substantial change to a whole ecosystem of applications and infrastructure in order to ensure that content retains indicators of authenticity as it travels across the internet. To this end, the Coalition for Content Provenance and Authenticity (C2PA) has brought together software application vendors, hardware manufacturers, provenance providers, content publishers, and social media platforms to propose a technical standard for content provenance that can be implemented across the internet.195 This standard would provide information about content to consumers, including its date of creation, authorship, hardware, and details regarding edits, all of which would be validated with cryptographic signatures.196 Theoretically, this standard would work for AI-generated content, particularly if AI-as-a-service compa- 192. Committee on Homeland Security U.S. Senate Permanent Subcommittee on Investigations and Governmental Affairs, Abuses of the Federal Notice-and-Comment Rulemaking Process (2019), https: //tinyurl.com /5bamt57s; “Federal Rulemaking: Selected Agencies Should Fully Describe Public Comment Data and Their Limitations,” U.S. GAO , September 2021, https: //www.gao.gov /products /gao-21-103181. The GAO study found that, for some agencies, as many as 30% of individuals whose email addresses were associated with public comments reported not having written the comment submitted under their name. Many other agencies did not require email addresses or other types of identifying information for submitted comments, significantly reducing the ability of the agency to authenticate the identity of the commenter. 193. In the US context, a stronger version could be that the APA itself is amended to mandate some level of vetting for the authenticity of public comments, or criminal liability could be imposed for institutions found to be impersonating members of the public. We do note, however, that the Administrative Conference of the United States (ACUS) has so far preferred not to propose any sweeping changes to the period for public comment. In part, this is because ACUS believes that AI-generated comments could have valuable use cases in the public comment process, such as by generating summaries of public comments or lowering barriers to submitting public comments. See Balla et al., Mass, Computer-Generated, and Fraudulent Comments 194. For one example of a media provenance pipeline from certified authoring tools to browser extensions for verification, see Paul England et al., “AMP: Authentication of Media via Provenance,” MMSys 2021 - Proceedings of the 2021 Multimedia Systems Conference , June 2021, 108–121, https: //doi.org /10.48550 /arxiv.2001.07886. 195. “C2PA Specifications: C2PA Harms Modelling.” 196. “Verifiable Credentials Data Model v1.1,” W3C, March 3, 2022, https: //www.w3.org /TR/vc-data-model /. 58 nies opt in to self-declare authorship for each piece of content and require applications or individuals accessing their services through API to do the same. Over time, users may learn to trust the content that has provenance markers and distrust content that lacks them. However, these protocols cannot authen- ticate preexisting legacy content. In addition, while these measures can provide greater transparency about the creation, history, and distribution of files—including images and text files generated by word processing applications—they cannot provide a means for authenticating and tracking the spread of raw text, which can be copied and pasted from file to file without leaving a record in a specific file’s history. To authenticate text provenance widely would require radical changes to internet protocols. For example, it is possible that the HTTP protocol would have to be modified to embed content provenance information. Since language models output raw text and not files, simply storing provenance information in files is sharply limited in its ability to help track the spread of AI-generated misinformation. More low-level changes may be needed to maximize the impact of this intervention. If the provenance information for a piece of content contains information about the user, then this intervention would raise privacy risks.197This implementation could threaten anonymous speech on the internet. However, if only information to distinguish AI and human-generated content is added, then the privacy risks are lower. Criteria Assessment Technical Feasibility Promising technical paths exist, but the technology has not yet been proven. Social Feasibility Some progress has been made in coordinating between interested parties, but robust versions of this mitigation would require massive coordination challenges. Downside Risk Adding author information raises privacy risks. Impact Radical changes to guarantee content provenance would have high impact, but more feasible options would likely have limited impact. 5.5 Belief Formation The preceding mitigations address the supply of AI-generated misinformation. However, as long as target audiences remain susceptible to propaganda that aligns with their beliefs, there will remain an incentive for influence operations generally, as well as incentives more specifically for propagandists to leverage AI to make those operations more effective. In this section, we therefore discuss two interventions that might help address the demand side of the misinformation problem: media literacy campaigns, and the use of AI tools to aid media consumers in interpreting and making informed choices about the information they receive. 197. For more discussion of privacy risks here, see “Ticks or it didn’t happen,” WITNESS Media Lab, December 2019, https: //lab.witness.org /ticks-or-it-didnt-happen /. 59 5.5.1 Institutions Engage in Media Literacy Campaigns There is some evidence that media literacy campaigns can increase individuals’ ability to discern between real and fake news online.198Existing media literacy tools that teach people how to “spot” coordinated accounts online, however, sometimes emphasize traits or mistakes that AI tools can avoid making, such as repetitiveness or a lack of “personal” content interspersed with more political content.199If current programs become outdated, media literacy will require updating. For example, if language models over- come repetition and lack of “personal” content, literacy campaigns can still combat the goals of the propagandists by teaching people to fact-check content in articles and to distinguish objective informa- tion from false, misleading, or slanted content.200These campaigns may have less impact, however, on distraction operations that crowd out genuine news. Unlike many of the other mitigations listed above, the impact of media literacy campaigns is agnostic to human versus computer authorship. These efforts focus on teaching people how to analyze content, not necessarily to spot AI-generated content. Another form of digital literacy campaigns could be to teach people about AI-generated content specifically. If new “telltale” signs can be identified that rep- resent common indicators of AI-powered influence operations, then this mitigation could be beneficial. However, if the most that can be said of AI-powered operations is that they look more authentic than human-operated campaigns, then this strategy may be misplaced. Emphasizing that any account on the internet could be an AI-powered bot may make people more likely to simply dismiss arguments they disagree with as inauthentic and not worth paying attention to, thereby exacerbating societal division and polarization. Overemphasizing the prevalence and danger of misinformation online may ultimately serve the same goal that propagandists themselves are often trying to achieve: making people inherently distrustful of any information or argument that conflicts with their preexisting beliefs.201 Criteria Assessment Technical Feasibility No technical innovation is required. Social Feasibility A variety of actors could unilaterally lead educational campaigns. Downside Risk Educating about the threat of AI-enabled influence operations could reduce trust in genuine content or in online information environments more broadly. Impact Educational initiatives could help people distinguish reliable information from misinformation or slanted text, and mitigate the effects of influence operations (AI-generated or not). 198. Jon Roozenbeek, Sander van der Linden, and Thomas Nygren, “Prebunking interventions based on “inoculation” theory can reduce susceptibility to misinformation across cultures,” Harvard Kennedy School Misinformation Review 1, no. 2 (February 2020), https: //doi.org /10.37016 //MR-2020-008; Andrew M. Guess et al., “A digital media literacy intervention increases discernment between mainstream and false news in the United States and India,” PNAS 117, no. 27 (July 2020): 15536– 15545, ISSN: 10916490, https: //doi.org /10.1073 /pnas.1920498117; Se Hoon Jeong, Hyunyi Cho, and Yoori Hwang, “Media Literacy Interventions: A Meta-Analytic Review,” Journal of Communication 62, no. 3 (June 2012): 454–472, ISSN: 0021-9916, https: //doi.org /10.1111 /J.1460- 2466.2012.01643.X; Todd C. Helmus et al., “Russian Propaganda Hits Its Mark: Experimentally Testing the Impact of Russian Propaganda and Counter-Interventions,” RAND Corporation , October 2020, https: //doi.org /10.7249 /RRA704-3 199. For an existing example of a media literacy tool that teaches users the “telltale” signs of troll accounts, see “Spot The Troll,” Clemson University Media Forensics Hub, https: //spotthetroll.org /. 200. For one example of the effectiveness of these measures, see Gordon Pennycook et al., “Shifting attention to accuracy can reduce misinformation online,” Nature 592 (7855 2021): 590–595, ISSN: 1476-4687, https: //doi.org /10.1038 /s41586-021- 03344-2. 201. Karen Hao, “The biggest threat of deepfakes isn’t the deepfakes themselves,” MIT Technology Review , October 10, 2019, https: //www.technologyreview.com /2019/10/10/132667 /the-biggest-threat-of-deepfakes-isnt-the-deepfakes-themselves /. 60 5.5.2 Developers Provide Consumer-Focused AI Tools Just as generative models can be used to generate propaganda, they may also be used to defend against it. Consumer-focused AI tools could help information consumers identify and critically evaluate content or curate accurate information. These tools may serve as an antidote to influence operations and could reduce the demand for disinformation. While detection methods (discussed in Section 5.2.1) aim to detect whether content is synthetic, consumer-focused tools instead try to equip consumers to make better decisions when evaluating the content they encounter. Possibilities for such tools are numerous.202Developers could produce browser extensions and mobile applications that automatically attach warning labels to potential generated content and fake accounts, or that selectively employ ad-blockers to demonetize them. Websites and customizable notification sys- tems could be built or improved with AI-augmented vetting, scoring, and ranking systems to organize, curate, and display user-relevant information while sifting out unverified or generated sources.203Tools and built-in search engines that merely help users quickly contextualize the content they consume could help their users evaluate claims, while lowering the risk of identifying true articles as misinformation.204 Such “contextualization engines” may be especially helpful in enabling users to analyze a given source and then find both related high-quality sources and areas where relevant data is missing. By reducing the effort required to launch deeper investigations, such tools can help to align web traffic revenue more directly with user goals, as opposed to those of advertisers or influence operators.205Another proposal suggests using AI-generated content to educate and inoculate a population against misleading beliefs.206 Some of the most promising AI-enabled countermeasures may leverage state-of-the-art generative mod- els themselves, to reshift the offense-defense balance in favor of information consumers.207As generative models get better at producing persuasive arguments that exploit viewer biases and blindspots, defen- sive generative models could be used to help users detect and explain flaws in tailored arguments or to find artifacts in manipulated images.208Generative models that help users find relevant information can also be trained how to “show their work” by citing sources that support their answers.209Such methods could serve as building blocks for future tools that augment a consumer’s ability to critically evaluate 202. For a variety of examples of consumer-focused tools that help users control the information they see, see Combatting Online Harms Through Innovation, Report to Congress (Federal Trade Commission, June 16, 2022), https: //www.ftc.gov / reports /combatting-online-harms-through-innovation. 203. A particularly successful example of a curation tool is Live Universal Awareness Map, which has done near real-time source aggregation on conflicts in Ukraine and Syria while aiming to filter out state-sponsored propaganda. On karma and reputation systems, see Seger et al., Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world ; and Christian Johnson and William Marcellino, Bad Actors in News Reporting: Tracking News Manipulation by State Actors (RAND Corporation, November 2021), https: //doi.org /10.7249 /RRA112-21 204. The issue of false positives—identifying quality sources as misleading or false—is common with social media fact-checking recommendation systems, which often superficially associate new accurate articles with prior false ones, or fail to differentiate between false claims and claims that are contingent, probabilistic, or predictive in nature. 205. Aviv Ovadya, “‘Contextualization Engines’ can fight misinformation without censorship,” Medium , May 26, 2022, https: //aviv.medium.com /contextualization-engines-can-fight-misinformation-without-censorship-c5c47222a3b7. 206. “Humor over Rumor: Combating Disinformation Around COVID-19 in Taiwan,” Global Governance Futures, June 2020, accessed September 14, 2022, https: //www.ggfutures.net /analysis /humor-over-rumor-combating-disinformation-around- covid-19-in-taiwan; Herriman et al., “Asked and Answered: Building a Chatbot to Address Covid-19-Related Concerns.” 207. By tailoring to serve the needs of individual information consumers, such tools could equip consumers with decision- informing capabilities that would otherwise be too risky to implement at the scale of an entire platform. 208. Jan Leike et al., “AI-Written Critiques Help Humans Notice Flaws,” OpenAI Blog , June 13, 2022, https: //openai.com / blog/critiques /. 209. Reiichiro Nakano et al., “WebGPT: Browser-assisted question-answering with human feedback,” arxiv:2112.09332 [cs.CL ], June 1, 2022, https: //doi.org /10.48550 /arxiv.2112.09332. 61 information. Consumer-focused tools may also go beyond the individual, with more expensive, AI-enabled intelligence services that offer tools to businesses, governments, and other organizations that aim to increase their awareness of, and improve their responses to, influence operations. Despite their prospective benefits, AI tools will also present risks. They are likely to be susceptible to forms of social bias, just as current models are. Defensive generative models that are aligned with consumer incentives may also exacerbate confirmation bias, as consumers may prefer information that tailors to their preexisting biases. Social media companies may make it difficult or against their policies for externally developed tools to interface with their platforms, both to protect privacy and to sustain user engagement. While social media companies may be in a good position to provide their own defensive AI tools, the divergence between their interests and those of their users would likely exceed that of third-party tool providers. Accordingly, tools created by platforms could also serve to discourage more effective policy action and to justify disabling the use of third-party tools that aren’t as aligned with platform objectives.210 More powerful versions of web-searching generative models may also pose new unique risks if their range of action and reinforcable behavior is not carefully constrained. For models that are capable of generating and inputting text queries within other websites to find more relevant results, the incentive to return useful results could reward fraudulent behavior (e.g., editing and returning Wikipedia results if there aren’t good sources211). While many such specific imagined threats are highly unlikely, the potential impacts of defensive generative models on search engine traffic and the internet itself should be accounted for. Overall, consumer-focused AI tools provide a variety of opportunities to head off the impact of influence operations that employ stronger generative models, but they will require high-quality implementation. Criteria Assessment Technical Feasibility Creating AI tools that help people reason or highlight factual inaccuracies is an ongoing research problem, but some promising directions exist. Social Feasibility Some progress could be achieved unilaterally by researchers or entrepreneurs, but coordination with social media platforms would be required for broader effect. Downside Risk AI tools may be susceptible to bias, and people could become overly reliant on them. Impact If implemented well, defensive AI tools could have a big impact in helping consumers form accurate beliefs. 210. For example, the use of such tools could be used to impress Congress with a platform’s efforts, and to make the argument that users already have plenty of options to seek out or control the information they are exposed to, even if in practice the tools are designed to discourage use. 211. Nakano et al., “WebGPT: Browser-assisted question-answering with human feedback.” 62 6 Conclusions While each of the mitigations discussed above are important to weigh on their own merits, there are some crosscutting conclusions that we offer to policymakers trying to think through the problem of AI- powered influence operations. Our shared assessments of these mitigations lead to the following main conclusions: 1. Language models are likely to significantly impact the future of influence operations. 2. There are no silver bullets for minimizing the risk of AI-generated disinformation. 3. New institutions and coordination (like collaboration between AI providers and social media plat- forms) are needed to collectively respond to the threat of (AI-powered) influence operations. 4. Mitigations that address the supply of mis- or disinformation without addressing the demand for it are only partial solutions. 5. More research is needed to fully understand the threat of AI-powered influence operations as well as the feasibility of proposed mitigations. 6.1 Language Models Will Likely Change Influence Operations As outlined in Section 4, language models have the potential to significantly affect how influence op- erations are waged in the future—including the actors waging these campaigns, the behaviors of the propagandists, and the content included. Actors: If generative models become widely accessible, it will drive down the cost of producing propaganda; in turn, those who have refrained from waging influence operations in the past may no longer be disinclined. Private PR and marketing firms may develop knowledge in how to most effectively integrate these models, and thus serve as a resource and scapegoat for political actors seeking to outsource their campaigns. Behaviors: Language models offer to change how influence operations are waged. They may be deployed for dynamic generation of responses, automated cross-platform testing, and other novel techniques. Although we described a few new possible behaviors in this report, we suspect pro- pagandists will use these models in unforeseen ways in response to the defensive measures that evolve. Content: Language models will likely drive down the cost and increase the scale of propaganda generation. As language models continue to improve, they will be able to produce persuasive text—text that is difficult to distinguish from human-generated content—with greater reliability, reducing the need for skilled writers with deep cultural and linguistic knowledge of the target population. Although we foresee these changes in the medium term, there is some speculation at play. The extent to which language models change the nature of influence operations is dependent on critical unknowns, 63 including diffusion and accessibility, and various technical and social uncertainties. We do not yet know who will control these models, and how information environments—like social media platforms—will adapt in a world where models are widely available for use. 6.2 There Are No Silver Bullet Solutions Section 5 discussed a large number of possible strategies for managing the threat of AI-generated influ- ence operations. Unfortunately, no proposed mitigation manages to be simultaneously (1) technically feasible, (2) institutionally tractable, (3) robust against second-order risks, and (4) highly impactful. The fact that large language models are increasingly proliferating—both behind paid APIs and in the form of openly released models—currently makes it all but impossible to ensure that large language models are never used to generate disinformation. This is not an excuse for defeatism. Even if responding to the threat is difficult, AI developers who have built large language models have a responsibility to take reasonable steps to minimize the harms of those models. By the same token, social media companies have a continuing obligation to take all appropriate steps to fight misinformation, while policymakers must seriously consider how they can help make a difference. But all parties should recognize that any mitigation strategies specifically designed to target AI-generated content will not fully address the endemic challenges. Even if better policies can be adopted to govern the majority of language models, very few interventions will stop a well-resourced, non-cooperative state from constructing its own alternatives. One option for countries like the United States would be to soften immigration requirements for AI talent, which could concentrate the ability to produce language models in a few countries—though this too will be unlikely to fully stop a sufficiently motivated nation-state from developing high capability systems of their own. 6.3 Collective Responses Are Needed Many of the mitigations discussed above might have a meaningful impact in reducing AI-generated influ- ence campaigns, but only if new forms of collaboration are developed. Strong norms among the AI com- munity—regarding either the release of models or the training methods used to develop them—could make it harder for the most common language models to be induced to generate disinformation. We have also suggested that if detection of AI-generated text will be feasible at all, it will likely require relatively large “batches” of outputted text in order to attribute. Collaboration between social media companies and AI companies may be necessary in order to curate and attribute large batches of poten- tially inauthentic content. The current US response to influence operations is fractured: fractured among technology companies, fractured among academic researchers, fractured between multiple government agencies, and fractured on the level of collaboration between these groups. Social media companies have different approaches to whether (and how) to treat influence operations; academics lack relevant data to understand related issues; AI developers often lack sufficient expertise to understand potential abuses of the technologies they create, and responsibilities for influence operations are not clearly delineated to any single US department or agency. Policymakers should consider creating stronger mechanisms and incentives to 64 ensure coordination across all relevant stakeholders.212 6.4 Mitigations Must Address Demand As Well As Supply All else being equal, the fact that a particular post was authored by an AI does not in itself make the content of that post less truthful or more destabilizing than the same content would be coming from a human. While this paper has focused on mitigations that would disrupt the pipeline between large lan- guage models and influence operations, it is important to emphasize that many other mitigations can be implemented or further strengthened that aim to reduce the spread of false or biased information gen- erally. Some social media platforms have already implemented a number of these mitigations—though often not equitably between English-speaking countries and other regions. But influence operations appear to be a new normal of online activity, and more effort to improve these mitigations is warranted. It is equally important, however, to emphasize that mitigations that disrupt the supply of misleading information are ultimately only partial solutions if the demand for misleading information remains un- changed. While people rarely demand to be misinformed directly, information consumers often demand information that is cheap and useful for their goals—something influence operations can tailor to with greater freedom from the constraints of reality. From a selfish perspective, ignorance is often rational: it is not possible to be informed on everything, gathering accurate information can be boring, and countering false beliefs may have social costs.213 Similarly, consuming and sharing disinformation may be entertaining, attract attention, or help an indi- vidual gain status within a polarized social group. When the personal costs of effortful analysis exceed the personal benefits, the likely result will be lower-quality contribution to group decision-making (e.g., sharing disinformation, free riding, groupthink, etc.). 6.5 Further Research Is Necessary Many of the properties of large generative models are not fully understood. Similarly, clarity is still missing regarding both the structure and the impacts of many influence operations, which are conducted in secret. Clarity on the scale of the threat posed by influence operations continues to be elusive. Is the actual impact of such campaigns proportionate to the attention they receive in the popular imagination and press coverage? How effective are existing platform-based mitigations—such as friction measures de- signed to slow down the virality of content—at reducing the spread of misinformation? As it relates 212. The National Security Commission on AI, the Aspen Institute, and a variety of others have recommendations for how to integrate government efforts to counter foreign-sourced influence campaigns. See Schmidt et al., Final Report ;The Weaponiza- tion of Information: The Need for Cognitive Security (RAND Corporation, April 27, 2017); Fletcher Schoen and Christopher J. Lamb, Deception, Disinformation, and Strategic Communications: How One Interagency Group Made a Major Difference (Center for Strategic Research Institute for National Strategic Studies, June 2012), https: //ndupress.ndu.edu /Portals /68/Documents / stratperspective /inss/Strategic-Perspectives-11.pdf; Matt Chessen, The MADCOM future (Atlantic Council, September 2017), https: //www.atlanticcouncil.org /in-depth-research-reports /report /the-madcom-future /; Sedova et al., AI and the Future of Disinformation Campaigns: Part 2: A Threat Model . 213. Anthony Downs, “An Economic Theory of Political Action in a Democracy,” Journal of Political Economy 65, no. 2 (1957): 135–150, https: //www.jstor.org /stable /1827369. 65 to influence operations with generative models specifically, future research should unpack the differen- tial impact these technologies may have on different populations. For example, relevant factors include the languages various models output most persuasively, and the media and internet fluency in different communities. AI developers and researchers could reach out to communities likely to be impacted to better understand their risks and needs. A number of technical issues are also currently ambiguous. The relationship between model size, length of fine-tuning, and overall performance or persuasiveness, for instance, is unclear. While it is generally true that larger, more heavily trained models perform better across a wide variety of tasks—including disinformation-related ones—it is not clear whether fine-tuning a smaller model can reliably make up that gap. How do these factors change between models primarily trained on large, well-represented languages like English and those with more capability to use less well-represented languages? On the mitigation side, the feasibility of detection methods remains ambiguous. Although it seems reasonable to assume that (1) attributing short pieces of content as AI-generated will remain impossible and (2) detection might become possible at much larger scales, it is hard to be more specific than this. What scales are necessary to enable detection? How much can perturbing models or training on radioactive data alter this necessary threshold? Furthermore, how realistic is it to train models in ways that reduce their likelihood of outputting misleading content to begin with? Further research would also be useful to better understand, model, and clarify the decision-making of propagandists themselves. Detailed analyses of the relative gains that malicious actors can capture by incorporating generative models into their operations are also lacking. It is similarly unclear whether API restrictions on large language models meaningfully discourage operators from accessing certain services, and if they do, whether operators are able to simply gravitate toward open-source models without any loss of capability.214 Finally, this is a rapidly moving field where norms have not yet solidified. Should AI developers release or restrict their models? Should internet researchers publish observed tactics of propagandists or keep them secret? To what extent can platforms and AI developers form meaningful partnerships that can aid in the detection and removal of inauthentic content? At the broadest level, thoughtful engagement with all of these questions—both from people within the relevant industries and from neutral, third-party observers—is a critical necessity. 214. Forthcoming work from some of the authors will attempt to partially address this. See Musser, “A Cost Analysis of Generative Language Models and Influence Operations” 66 References Allyn, Bobby. “Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn.” NPR, March 16, 2022. https: //www.npr.org /2022 /03/16/1087062648 /deepfake-video-zelenskyy- experts-war-manipulation-ukraine-russia. AlQuraishi, Mohammed. “Machine learning in protein structure prediction.” Current Opinion in Chemical Biology 65 (December 2021): 1–8. ISSN: 1367-5931. https: //doi.org /10.1016 /J.CBPA.2021.04. 005. Altay, Sacha, Anne Sophie Hacquin, Coralie Chevallier, and Hugo Mercier. “Information delivered by a chatbot has a positive impact on COVID-19 vaccines attitudes and intentions.” Journal of Experimen- tal Psychology: Applied , October 28, 2021. ISSN: 1939-2192. https: //doi.org /10.1037 /XAP0000400. “API.” OpenAI. Accessed January 31, 2022. https: //openai.com /api/. August 2020 Coordinated Inauthentic Behavior Report . Meta, September 1, 2020. https: //about.fb.com / news/2020/09/august-2020-cib-report /. Ayyub, Rana. “I Was The Victim Of A Deepfake Porn Plot Intended To Silence Me.” Huffington Post , November 21, 2018. https: //www.huffingtonpost.co.uk /entry /deepfake-porn_uk_5bf2c126e4b0f 32bd58ba316. Bagdasaryan, Eugene, and Vitaly Shmatikov. “Spinning Language Models: Risks of Propaganda-As-A- Service and Countermeasures.” 2022 IEEE Symposium on Security and Privacy , 2022, 769–786. htt ps://doi.org /10.1109 /SP46214.2022.9833572. Bail, Christopher A., Brian Guay, Emily Maloney, Aidan Combs, D. Sunshine Hillygus, Friedolin Merhout, Deen Freelon, and Alexander Volfovsky. “Assessing the Russian Internet Research Agency’s impact on the political attitudes and behaviors of American Twitter users in late 2017.” PNAS 117, no. 1 (January 7, 2020). https: //doi.org /10.1073 /pnas.1906420116. Baker, Bowen, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. “Learning to Play Minecraft with Video PreTraining (VPT).” OpenAI Blog, June 23, 2022. https: //openai.com /blog/vpt/. Balla, Steve, Reeve Bull, Bridget Dooling, Emily Hammond, Michael Herz, Michael Livermore, and Beth Simone Noveck. Mass, Computer-Generated, and Fraudulent Comments . Report to the Administrative Conference of the U.S., June 17, 2020. https: //regulatorystudies.columbian.gwu.edu /mass- computer-generated-and-fraudulent-comments-0. Bateman, John, Elonnai Hickok, Laura Courchesne, Isra Thange, and Jacob N. Shapiro. Measuring the Effects of Influence Operations: Key Findings and Gaps From Empirical Research . Carnegie Endowment for International Peace, June 28, 2021. https: //carnegieendowment.org /2021/06/28/measuring- effects-of-influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824. “WudaoAI.” Beijing Academy of Artificial Intelligence . Accessed October 30, 2022. https: //wudaoai.cn / model /. “Best Practices for Deploying Language Models.” Cohere, June 2, 2022. https: //txt.cohere.ai /best- practices-for-deploying-language-models /. 67 Bianco, Vivian, Sergiu Tomsa, Mario Mosquera Vasques, and Svetlana Stefanet. Countering Online Misin- formation Resource Pack . UNICEF Regional Office for Europe and Central Asia, August 2020. https: //www.unicef.org /eca/media /13636 /file. Bickert, Monica. “Enforcing Against Manipulated Media.” Meta , January 6, 2020. https: //about.fb.com / news/2020/01/enforcing-against-manipulated-media /. Bliss, Nadya, Elizabeth Bradley, Joshua Garland, Filippo Menczer, Scott W . Ruston, Kate Starbird, and Chris Wiggins. “An Agenda for Disinformation Research.” arxiv:2012.08572 [cs.CY ], December 2020. https: //doi.org /10.48550 /arxiv.2012.08572. Bommasani, Rishi, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, et al. “On the Opportunities and Risks of Foundation Models.” arxiv:2108.07258 [cs.LG ], August 2021. https: //doi.org /10.48550 /arxiv.2108.07258. Bond, Shannon. “AI-generated fake faces have become a hallmark of online influence operations.” NPR, December 15, 2022. https: //www.npr.org /2022 /12/15/1143114122 /ai-generated-fake-faces- have-become-a-hallmark-of-online-influence-operations. Bontcheva, Kalina, Julie Posetti, Denis Teyssou Agence, France Presse, France Trisha Meyer, Sam Gre- gory, U S Clara Hanot, and Diana Maynard. Balancing Act: Countering Digital Disinformation while respecting Freedom of Expression . UNESCO, September 2020. https: //en.unesco.org /publications / balanceact. Borgeaud, Sebastian, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Van Den Driessche, et al. “Improving language models by retrieving from trillions of tokens.” arxiv:2112.04426 [cs.CL ], December 2021. https: //doi.org /10.48550 /arxiv.2112.04426. Boucher, Nicholas, Ilia Shumailov, Ross Anderson, and Nicolas Papernot. “Bad Characters: Impercep- tible NLP Attacks.” 2022 IEEE Symposium on Security and Privacy , June 2022, 1987–2004. ISSN: 10816011. https: //doi.org /10.48550 /arxiv.2106.09898. Brodkin, Jon. “ISPs Funded 8.5 Million Fake Comments Opposing Net Neutrality.” Wired, May 8, 2021. https: //www.wired.com /story/isps-funded-85-million-fake-comments-opposing-net-neutrality /. Brooking, Emerson T ., and Jacob Shapiro. “Americans Were Worried About the Wrong Threat.” Atlantic, January 10, 2020. https: //www.theatlantic.com /ideas /archive /2021 /01/bigger-threat-was- always-domestic /617618 /. Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. “Language Models are Few-Shot Learners.” Advances in Neural Information Pro- cessing Systems 33 (May 2020). ISSN: 10495258. https: //doi.org /10.48550 /arxiv.2005.14165. Brundage, Miles, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian Hadfield, Heidy Khlaaf, et al. “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims.” arxiv:2004.07213 [cs.CY ], April 2020. https: //doi.org /10.48550 /arxiv.2004.07213. Buchanan, Ben, Andrew Lohn, Micah Musser, and Katerina Sedova. Truth, Lies, and Automation: How Language Models Could Change Disinformation . Center for Security and Emerging Technology, May 2021. https: //doi.org /10.51593 /2021CA003. Buchanan, Ben, and Taylor Miller. Machine Learning for Policy Makers: What It Is and Why It Matters . Belfer Center for Science and International Affairs, June 2017. https: //www.belfercenter.org / sites/default /files/files/publication /MachineLearningforPolicymakers.pdf. 68 “Building a TB Scale Multilingual Dataset for Language Modeling.” Hugging Face BigScience. https: // bigscience.huggingface.co /blog/building-a-tb-scale-multilingual-dataset-for-language-modeling. “C2PA Specifications: C2PA Harms Modelling.” Coalition for Content Provenance and Authenticity. Ac- cessed September 14, 2022. https: //c2pa.org /specifications /specifications /1.0/security /Harms_ Modelling.html. Chen, Huili, Cheng Fu, Bita Darvish Rouhani, Jishen Zhao, and Farinaz Koushanfar. “DeepAttest: An end-to-end attestation framework for deep neural networks.” Proceedings of the 46th International Symposium on Computer Architecture , June 2019, 487–498. ISSN: 10636897. https: //doi.org /10. 1145/3307650.3322251. Chen, Mark, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka- plan, Harri Edwards, et al. “Evaluating Large Language Models Trained on Code.” arxiv:2107.03374 [cs.LG ], July 14, 2021. https: //doi.org /10.48550 /arxiv.2107.03374. Chesney, Robert, and Danielle Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” California Law Review 107, no. 6 (2019): 1753. https: //doi.org /10.15779 / Z38RV0D15J. Chessen, Matt. The MADCOM future . Atlantic Council, September 2017. https: //www.atlanticcouncil. org/in-depth-research-reports /report /the-madcom-future /. “Chinese propagandists court South-East Asia’s Chinese diaspora.” Economist, November 20, 2021. ht tps://www.economist.com /asia/2021 /11/20/chinese-propagandists-court-south-east-asias- chinese-diaspora. Chung, Hyung Won, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, et al. “Scaling Instruction-Finetuned Language Models.” arxiv:2210.11416 [cs.LG ], October 20, 2022. ht tps://doi.org /10.48550 /arxiv.2210.11416. Cohere. “About.” Accessed January 31, 2022. https: //docs.cohere.ai /api-reference /. Combatting Online Harms Through Innovation, Report to Congress . Federal Trade Commission, June 16, 2022. https: //www.ftc.gov /reports /combatting-online-harms-through-innovation. Commerce, US Department of, Bureau of Industry, and Security. “Commerce Implements New Export Controls on Advanced Computing and Semiconductor Manufacturing Items to the People’s Republic of China (PRC).” Press Release , October 7, 2022. https: //www.bis.doc.gov /index.php /documents / about-bis /newsroom /press-releases /3158-2022-10-07-bis-press-release-advanced-computing- and-semiconductor-manufacturing-controls-final /file. . “Implementation of Additional Export Controls: Certain Advanced Computing and Semicon- ductor Manufacturing Items; Supercomputer and Semiconductor End Use; Entity List Modifica- tion.” Docket No. 220930-0204, RIN 0694-AI94 , October 13, 2022. https : //public - inspection . federalregister.gov /2022-21658.pdf. Council, National Intelligence. Intelligence Community Assessment: Foreign Threats to the 2020 US Federal Elections . National Intelligence Council, March 10, 2021. https: //int.nyt.com /data/documenttools / 2021-intelligence-community-election-interference-assessment /abd0346ebdd93e1e /full.pdf. “Curbing Misuse at Dall-E 2.” OpenAI. Accessed June 27, 2022. https: //openai.com /dall-e-2 /. 69 Delaney, Jack. “I’m a freelance writer. A Russian media operation targeted and used me.” The Guardian , September 4, 2020. https: //www.theguardian.com /technology /2020 /sep/04/russia- media- disinformation-fake-news-peacedata. Dhingra, Bhuwan, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W . Cohen. “Time-Aware Language Models as Temporal Knowledge Bases.” Transactions of the Asso- ciation for Computational Linguistics 10 (March 2022): 257–273. ISSN: 2307387X. https: //doi.org / 10.1162 /tacl_a_00459. Dill, Janina, Scott D. Sagan, and Benjamin A. Valentino. “Kettles of Hawks: Public Opinion on the Nuclear Taboo and Noncombatant Immunity in the United States, United Kingdom, France, and Israel.” Security Studies 31, no. 1 (2022): 1–31. ISSN: 15561852. https: //doi.org /10.1080 /09636412. 2022.2038663. Ding, Jeffrey, and Jenny Xiao. “Recent Trends in China’s Large-Scale Pre-Trained AI Models.” (Working Paper) . DiResta, Renee, and Shelby Grossman. Potemkin Pages & Personas: Assessing GRU Online Operations, 2014-2019 . Stanford Internet Observatory, 2019. https: //cyber.fsi.stanford.edu /io/publication / potemkin-think-tanks. DiResta, Renee, Shelby Grossman, Samantha Bradshaw, Karen Nershi, Khadeja Ramali, and Rajeev Sharma. “In Bed with Embeds: How a Network Tied to IRA Operations Created Fake “Man on the Street” Content Embedded in News Articles.” Stanford Internet Observatory , December 2, 2021. https: //cyber.fsi.stanford.edu /io/publication /bed-embeds. DiResta, Renee, Michael McFaul, and Alex Stamos. “Here’s How Russia Will Attack the 2020 Election. We’re Still Not Ready.” The Washington Post , November 15, 2019. https: //www.washingtonpost. com/opinions /2019/11/15/heres-how-russia-will-attack-election-were-still-not-ready /. DiResta, Renée, Shelby Grossman, and Alexandra Siegel. “In-House Vs. Outsourced Trolls: How Digital Mercenaries Shape State Influence Strategies.” Political Communication 39, no. 2 (2021): 222–253. ISSN: 10917675. https: //doi.org /10.1080 /10584609.2021.1994065. Downs, Anthony. “An Economic Theory of Political Action in a Democracy.” Journal of Political Economy 65, no. 2 (1957): 135–150. https: //www.jstor.org /stable /1827369. Earl, Jennifer, Thomas V . Maher, and Jennifer Pan. “The digital repression of social movements, protest, and activism: A synthetic review.” Science Advances 8 (October 2022): 8198. https: //www.science. org/doi/pdf/10.1126 /sciadv.abl8198. Emelyanov, Anton, Tatiana Shavrina, Oleh Shliazhko, and Artem Snegirev. “Russian GPT-3 models.” GitHub. https: //github.com /ai-forever /ru-gpts#readme. England, Paul, Henrique S. Malvar, Eric Horvitz, Jack W . Stokes, Cédric Fournet, Rebecca Burke-Aguero, Amaury Chamayou, et al. “AMP: Authentication of Media via Provenance.” MMSys 2021 - Proceed- ings of the 2021 Multimedia Systems Conference , June 2021, 108–121. https: //doi.org /10.48550 / arxiv.2001.07886. Evans, Owain, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit, Peter Wills, Luca Righetti, and William Saunders. “Truthful AI: Developing and governing AI that does not lie.” arxiv:2110.06674 , October 13, 2021. https: //doi.org /10.48550 /arxiv.2110.06674. 70 Farid, Hany. “Creating, Using, Misusing, and Detecting Deep Fakes.” Journal of Online Trust and Safety 1, no. 4 (September 2022). ISSN: 2770-3142. https: //doi.org /10.54501 /JOTS.V1I4.56. “Fine-tuning.” OpenAI. Accessed June 2022. https: //beta.openai.com /docs/guides /fine-tuning. “Finetuning Generation Models.” Cohere. Accessed June 2022. http: //web.archive.org /web/20220621 204451 /https: //docs.cohere.ai /finetuning-wiki /. Finnemore, Martha, and Kathryn Sikkink. “International Norm Dynamics and Political Change.” Inter- national Organization 52, no. 4 (1998): 887–917. https: //www.jstor.org /stable /2601361. Fisher, Max. “Disinformation for Hire, a Shadow Industry, Is Quietly Booming.” New York Times , July 25, 2021. https: //www.nytimes.com /2021/07/25/world /europe /disinformation-social-media.html. François, Camille. Actors, Behaviors, Content: A Disinformation ABC Highlighting Three Vectors of Viral Deception to Guide Industry & Regulatory Responses . Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression, September 2019. https: //science.house. gov/download /francois-addendum. Frenkel, Sheera. “Iranian Disinformation Effort Went Small to Stay Under Big Tech’s Radar.” New York Times , June 30, 2021. https: //www.nytimes.com /2021 /06/30/technology /disinformation- message-apps.html. Fröhling, Leon, and Arkaitz Zubiaga. “Feature-based detection of automated language models: Tackling GPT-2, GPT-3 and Grover.” PeerJ Computer Science 7 (April 6, 2021): 1–23. ISSN: 23765992. https: //doi.org /10.7717 /peerj-cs.443. Ganguli, Deep, Danny Hernandez, Liane Lovitt, Nova DasSarma, Tom Henighan, Andy Jones, Nicholas Joseph, et al. “Predictability and Surprise in Large Generative Models.” 2022 ACM Conference on Fairness, Accountability, and Transparency , June 2022, 1747–1764. https: //doi.org /10.1145 / 3531146.3533229. Gehrmann, Sebastian, Hendrik Strobelt, and Alexander M. Rush. “GLTR: Statistical Detection and Visu- alization of Generated Text.” Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics: System Demonstrations , July 2019, 111–116. https: //doi.org /10.18653 /V1/P19- 3019. Geller, Tom. “Overcoming the Uncanny Valley.” IEEE Computer Graphics and Applications 28, no. 4 (July- Aug. 2008): 11–17. ISSN: 02721716. https: //doi.org /10.1109 /MCG.2008.79. Gleicher, Nathaniel, Margarita Franklin, David Agranovich, Ben Nimmo, Olga Belogolova, and Mike Torrey. Threat Report: The State of Influence Operations 2017-2020 . Meta, May 2021. https: //about. fb.com /news/2021/05/influence-operations-threat-report /. Goldstein, Josh A. “Foreign Influence Operations in the Cyber Age.” PhD diss., University of Oxford, 2021. https: //ethos.bl.uk /OrderDetails.do?uin =uk.bl.ethos.840171. Goldstein, Josh A., Jason Chao, Shelby Grossman, Alex Stamos, and Michael Tomz. “Can AI write per- suasive propaganda?” (Working Paper) . Goldstein, Josh A., and Renée DiResta. “This salesperson does not exist: How tactics from political in- fluence operations on social media are deployed for commercial lead generation.” Harvard Kennedy School Misinformation Review 3 , no. 5 (September 2022). https: //doi.org /10.37016 /MR-2020-104. 71 Goldstein, Josh A., and Renee DiResta. “China’s Fake Twitter Accounts Are Tweeting Into the Void.” Foreign Policy , December 15, 2021. https: //foreignpolicy.com /2021 /12/15/china-twitter-trolls- ccp-influence-operations-astroturfing /. Goldstein, Josh A., and Shelby Grossman. “How disinformation evolved in 2020,” January 4, 2021. https: //www.brookings.edu /techstream /how-disinformation-evolved-in-2020 /. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning . MIT Press, 2016. https: //www. deeplearningbook.org /. Graphika. Posing as Patriots . Graphika, June 2021. https: //graphika.com /reports /posing-as-patriots. Grinberg, Nir, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, and David Lazer. “Fake news on Twitter during the 2016 U.S. presidential election.” Science 363, no. 6425 (January 25, 2019): 374–378. ISSN: 10959203. https: //doi.org /10.1126 /science.aau2706. Grossman, Shelby, Gil Baram, Josh A. Goldstein, and Carly Miller. Staying Current: An Investigation Into a Suspended Facebook Network Supporting the Leader of the Palestinian Democratic Reform Current . Stanford Internet Observatory, February 10, 2021. https: //purl.stanford.edu /tk756wp5109. Grossman, Shelby, Chris Giles, Cynthia N. M., Miles McCain, and Blair Read. “The New Copyright Trolls: How a Twitter Network Used Copyright Complaints to Harass Tanzanian Activists.” Stanford In- ternet Observatory, December 2, 2021. https: //stacks.stanford.edu /file/druid:bt877dz8024 / 20211202-tz-twitter-takedown.pdf. Grossman, Shelby, Khadija H., and Emily Ross. Royal Sockpuppets and Handle Switching: How a Saudi Arabia-Linked Twitter Network Stoked Rumors of a Coup in Qatar . Stanford Internet Observatory, October 2020. https: //stacks.stanford.edu /file/druid:hp643wc2962 /twitter-SA-202009.pdf. Guess, Andrew M., Michael Lerner, Benjamin Lyons, Jacob M. Montgomery, Brendan Nyhan, Jason Reifler, and Neelanjan Sircar. “A digital media literacy intervention increases discernment between mainstream and false news in the United States and India.” PNAS 117, no. 27 (July 2020): 15536– 15545. ISSN: 10916490. https: //doi.org /10.1073 /pnas.1920498117. Hao, Karen. “The biggest threat of deepfakes isn’t the deepfakes themselves.” MIT Technology Review , October 10, 2019. https: //www.technologyreview.com /2019/10/10/132667 /the-biggest-threat- of-deepfakes-isnt-the-deepfakes-themselves /. Heim, Lennart. “Estimating PaLM’s training cost.” .xyz Blog, April 5, 2022. https: //blog.heim.xyz /palm- training-cost /. Helmus, Todd C., and Marta Kepe. “A Compendium of Recommendations for Countering Russian and Other State-Sponsored Propaganda.” RAND Corporation , June 2021. https: //doi.org /10.7249 /RR- A894-1. Helmus, Todd C., James V . Marrone, Marek N. Posard, and Danielle Schlang. “Russian Propaganda Hits Its Mark: Experimentally Testing the Impact of Russian Propaganda and Counter-Interventions.” RAND Corporation , October 2020. https: //doi.org /10.7249 /RRA704-3. Hernandez, Danny, and Tom B. Brown. “Measuring the Algorithmic Efficiency of Neural Networks.” arxiv:2005.04305 [cs.LG ], May 2020. https: //doi.org /10.48550 /arxiv.2005.04305. Herriman, Maguire, Elana Meer, Roy Rosin, Vivian Lee, Vindell Washington, and Kevin G. Volpp. “Asked and Answered: Building a Chatbot to Address Covid-19-Related Concerns.” NEJM Catalyst Innova- tions in Care Delivery , June 18, 2020. https: //catalyst.nejm.org /doi/full/10.1056 /CAT.20.0230. 72 Hilton, Jacob, Suchi Balaji, Relichiro Nakano, and John Schulman. “WebGPT: Improving the Factual Accuracy of Language Models through Web Browsing.” OpenAI Blog, December 16, 2021. https: //openai.com /blog/webgpt /. Ho, Ed. “An Update on Safety.” Twitter Blogs, February 7, 2021. https: //blog.twitter.com /en_us /topics / product /2017/an-update-on-safety. Holtzman, Ari, Jan Buys, Leo Du, Maxwell Forbes, and Yejin Choi. “The Curious Case of Neural Text Degeneration.” arxiv:1904.09751 [cs.CL ], February 19, 2019. ISSN: 16130073. https: //doi.org /10. 48550 /arxiv.1904.09751. Howard, Jeremy. “Some thoughts on zero-day threats in AI, and OpenAI’s GPT-2.” fast.ai, February 15, 2019. https: //www.fast.ai /posts/2019-02-15-openai-gp2.html. “Humor over Rumor: Combating Disinformation Around COVID-19 in Taiwan.” Global Governance Fu- tures, June 2020. Accessed September 14, 2022. https: //www.ggfutures.net /analysis /humor- over-rumor-combating-disinformation-around-covid-19-in-taiwan. Hwang, Tim. Deepfakes: A Grounded Threat Assessment . Center for Security and Emerging Technology, July 2020. https: //doi.org /10.51593 /20190030. Ippolito, Daphne, Daniel Duckworth, Chris Callison-Burch, and Douglas Eck. “Automatic Detection of Generated Text is Easiest when Humans are Fooled.” arXiv:1911.00650 [cs.CL ], July 2020, 1808– 1822. https: //doi.org /10.48550 /arXiv.1911.00650. Jakesch, Maurice, Megan French, Xiao Ma, Jeffrey T . Hancock, and Mor Naaman. “AI-mediated commu- nication: How the perception that profile text was written by AI affects trustworthiness.” CHI ’19: Proceedings of CHI Conference on Human Factors in Computing Systems , May 2019. https: //doi.org / 10.1145 /3290605.3300469. Jeong, Se Hoon, Hyunyi Cho, and Yoori Hwang. “Media Literacy Interventions: A Meta-Analytic Review.” Journal of Communication 62, no. 3 (June 2012): 454–472. ISSN: 0021-9916. https: //doi.org /10. 1111/J.1460-2466.2012.01643.X. Johnson, Christian, and William Marcellino. Bad Actors in News Reporting: Tracking News Manipulation by State Actors . RAND Corporation, November 2021. https: //doi.org /10.7249 /RRA112-21. Jowett, Garth, and Victoria O’Donnell. Propaganda & Persuasion . 6th ed. SAGE Publications, 2014. ISBN: 1483323528. Kahembwe, Emmanuel, and Subramanian Ramamoorthy. “Lower Dimensional Kernels for Video Dis- criminators.” Neural Networks 132 (December 2020): 506–520. https: //doi.org /10.1016 /j.neunet. 2020.09.016. Kallberg, Jan, and Stephen Col. Hamilton. “US military must prepare for POW concerns in the deepfake era.” C4ISRNET, August 23, 2021. https: //www.c4isrnet.com /opinion /2021 /08/23/us-military- must-prepare-for-pow-concerns-in-the-deepfake-era /. Keskar, Nitish Shirish, Bryan Mccann, Lav R Varshney, Caiming Xiong, Richard Socher, and Salesforce Re- search. “CTRL: A Conditional Transformer Language Model for Controllable Generation.” arxiv:1909.05858 [cs.CL ], September 2019. https: //doi.org /10.48550 /arxiv.1909.05858. Khan, Saif M., and Carrick Flynn. Maintaining China’s Dependence on Democracies for Advanced Computer Chips . Center for Security and Emerging Technology, April 2020. https: //cset.georgetown.edu / publication /maintaining-chinas-dependence-on-democracies-for-advanced-computer-chips /. 73 Khrushchev, Mikhail. “Yandex Publishes YaLM 100B. It’s the Largest GPT-Like Neural Network in Open Source.” Medium, June 23, 2022. https: //medium.com /yandex /yandex-publishes-yalm-100b-its- the-largest-gpt-like-neural-network-in-open-source-d1df53d0e9a6. King, Gary, Jennifer Pan, and Margaret E. Roberts. “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument.” American Political Science Review 111, no. 3 (2017): 484–501. https: //doi.org /10.1017 /S0003055417000144. Klosowski, Thorin. “The State of Consumer Data Privacy Laws in the US (And Why It Matters).” New York Times , September 6, 2021. https: //www.nytimes.com /wirecutter /blog/state-of-privacy-laws- in-us/. Knight, Will. “AI-Powered Text From This Program Could Fool the Government.” Wired, January 15, 2021. https: //www.wired.com /story/ai-powered-text-program-could-fool-government /. Kreps, Sarah, and Doug Kriner. “The Potential Impact of Emerging Technologies on Democratic Repre- sentation: Evidence from a Field Experiment.” (Working Paper) . Kreps, Sarah, R. Miles McCain, and Miles Brundage. “All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation.” Journal of Experimental Political Science 9, no. 1 (November 2022): 104–117. ISSN: 2052-2630. https: //doi.org /10.1017 /XPS.2020.37. Kundu, Kishalaya. “Criminals Used AI To Clone Company Director’s Voice And Steal $35 Million.” Screen Rant, October 14, 2021. https: //screenrant.com /ai- deepfake- cloned- voice- bank- scam- theft- millions /. Kurenkov, Andrey. “Lessons from the GPT-4Chan Controversy.” The Gradient, June 12, 2022. https: //thegradient.pub /gpt-4chan-lessons /. Leike, Jan, Jeffrey Wu, Catherine Yeh, and William Saunders. “AI-Written Critiques Help Humans Notice Flaws.” OpenAI Blog , June 13, 2022. https: //openai.com /blog/critiques /. Liang, Percy, Rishi Bommasani, Kathleen A. Creel, and Rob Reich. “The Time Is Now to Develop Com- munity Norms for the Release of Foundation Models,” 2022. https: //crfm.stanford.edu /2022/05/ 17/community-norms.html. Liang, Percy, Rob Reich, and et al. “Condemning the deployment of GPT-4chan.” Accessed July 22, 2022. https: //docs.google.com /forms /d/e/1FAIpQLSdh3Pgh0sGrYtRihBu-GPN7FSQoODBLvF7dVAFLZ k2iuMgoLw /viewform?fbzx =1650213417672418119. Liu, Pengfei, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. “Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing.” ACM Computing Surveys , September 2021. https: //doi.org /10.1145 /3560815. Lohn, Andrew. Hacking AI: A Primer for Policymakers on Machine Learning Cybersecurity . Center for Se- curity and Emerging Technology, December 2020. https: //doi.org /10.51593 /2020CA006. Lohn, Andrew, and Micah Musser. AI and Compute: How Much Longer Can Computing Power Drive Ar- tificial Intelligence Progress? Center for Security and Emerging Technology, January 2022. https: //doi.org /10.51593 /2021CA009. Lohn, Andrew J., and Krystal A. Jackson. Will AI Make Cyber Swords or Shields? Center for Security and Emerging Technology, August 2022. https: //doi.org /10.51593 /2022CA002. 74 Loureiro, Daniel, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-Collados. “TimeLMs: Diachronic Language Models from Twitter.” arxiv.2202.03829 [cs.CL ], February 2022, 251–260. https: //doi.org /10.48550 /arxiv.2202.03829. Lowe, Ryan, and Jan Leike. “Aligning Language Models to Follow Instructions.” OpenAI Blog, January 27, 2022. https: //openai.com /blog/instruction-following /. M.A., Renee DiResta, Josh A. Goldstein, and Shelby Grossman. “Middle East Influence Operations: Obser- vations Across Social Media Takedowns.” Project on Middle East Political Science , August 2021. https: //pomeps.org /middle-east-influence-operations-observations-across-social-media-takedowns. Mandiant. ‘Ghostwriter’ Influence Campaign: Unknown Actors Leverage Website Compromises and Fabri- cated Content to Push Narratives Aligned with Russian Security Interests . Mandiant. https: //www. fireeye.com /content /dam/fireeye-www /blog/pdfs/Ghostwriter-Influence-Campaign.pdf. Mansimov, Elman, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. “Generating Images from Captions with Attention.” 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings , November 9, 2015. https: //doi.org /10.48550 /arxiv.1511.02793. “Marv the Sarcastic Chat Bot.” OpenAI API. https: //beta.openai.com /examples /default-marv-sarcastic- chat. Mazarr, Michael J., Ryan Michael Bauer, Abigail Casey, Sarah Anita Heintz, and Luke J. Matthews. The Emerging Risk of Virtual Societal Warfare: Social Manipulation in a Changing Information Environ- ment . RAND Corporation, October 2019. https: //doi.org /10.7249 /RR2714. Merton, Robert K., and Norman W . Storer. The Sociology of Science: Theoretical and Empirical Investiga- tions. Univ. of Chicago Press, 1973. Metz, Rachel. “How a deepfake Tom Cruise on TikTok turned into a very real AI company.” CNN, Au- gust 6, 2021. https: //edition.cnn.com /2021/08/06/tech/tom-cruise-deepfake-tiktok-company. “Microsoft’s First GPT-3 Product Hints at the Commercial Future of OpenAI.” TNW, June 5, 2011. https: //thenextweb.com /news/microsofts-first-gpt-3-product-hints-commercial-future-openai-syndicat ion. “MITRE | ATLAS.” MITRE. Accessed October 29, 2022. https: //atlas.mitre.org /. “ML-Enhanced Code Completion Improves Developer Productivity.” Google AI Blog. Accessed July 28, 2022. https: //ai.googleblog.com /2022/07/ml-enhanced-code-completion-improves.html. Mooney, Austin. “Spotlight On Sensitive Personal Data As Foreign Investment Rules Take Force.” National Law Review 11, no. 163 (February 18, 2020). https: //www.natlawreview.com /article /spotlight- sensitive-personal-data-foreign-investment-rules-take-force. “Moravec’s paradox.” Wikipedia. Accessed June 29, 2022. https: //en.wikipedia.org /wiki/Moravec% 5C%27s_paradox. Mu, Zhaoxi, Xinyu Yang, and Yizhuo Dong. “Review of end-to-end speech synthesis technology based on deep learning.” arxiv:2104.09995 [cs.SD ], April 2021. https: //doi.org /10.48550 /arxiv.2104. 09995. Murphy, Matt. “Someone trained an A.I. with 4chan. It could get worse.” Slate, August 3, 2022. https: //slate.com /technology /2022/08/4chan-ai-open-source-trolling.html. “Muse API.” PAGnol. https: //muse.lighton.ai /home. 75 Musser, Micah. “A Cost Analysis of Generative Language Models and Influence Operations.” (Working Paper) . Nakano, Reiichiro, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, et al. “WebGPT: Browser-assisted question-answering with human feedback.” arxiv:2112.09332 [cs.CL ], June 1, 2022. https: //doi.org /10.48550 /arxiv.2112.09332. Narang, Sharan, and Aakanksha Chowdhery. “Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance.” Google AI Blog, April 5, 2022. https: //ai.googleblog. com/2022/04/pathways-language-model-palm-scaling-to.html. Narasimhan, Shar. “NVIDIA, Arm, and Intel Publish FP8 Specification for Standardization as an Inter- change Format for AI.” NVIDIA Technical Blog, September 14, 2022. https: //developer.nvidia. com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange- format-for-ai /. “NAVER Unveils HyperCLOVA, Korea’s First Hyperscale ‘Al to Empower Everyone’.” Naver Corp. Press Releases , May 25, 2021. https: //www.navercorp.com /en/promotion /pressReleasesView /30686. Nightingale, Sophie J., and Hany Farid. “AI-synthesized faces are indistinguishable from real faces and more trustworthy.” PNAS 119, no. 8 (February 2022). ISSN: 10916490. https: //doi.org /10.1073 / PNAS.2120481119. Nimmo, Ben. The Breakout Scale: Measuring the impact of influence operations . Brookings Institution, September 2020. https: //www.brookings.edu /research /the- breakout- scale- measuring- the- impact-of-influence-operations /. “OpenAI Trains Language Model, Mass Hysteria Ensues.” Approximately Correct, February 17, 2019. https: //www.approximatelycorrect.com /2019 /02/17/openai- trains- language- model- mass- hysteria-ensues /. “OPT-175B License Agreement.” Metaseq. https: //github.com /facebookresearch /metaseq /blob/main/ projects /OPT/MODEL_LICENSE.md. Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, et al. “Training language models to follow instructions with human feedback.” OpenAI , March 2022. https: //cdn.openai.com /papers /Training_language_models_to_follow_instructions_with_human_ feedback.pdf. Ovadya, Aviv. “‘Contextualization Engines’ can fight misinformation without censorship.” Medium , May 26, 2022. https: //aviv.medium.com /contextualization-engines-can-fight-misinformation-without- censorship-c5c47222a3b7. Ovadya, Aviv, and Jess Whittlestone. “Reducing malicious use of synthetic media research: Considera- tions and potential release practices for machine learning.” arxiv:1907.11274 [cs.CY ], July 2019. https: //doi.org /10.48550 /arxiv.1907.11274. Pajola, Luca, and Mauro Conti. “Fall of Giants: How popular text-based MLaaS fall against a simple evasion attack.” Proceedings - 2021 IEEE European Symposium on Security and Privacy, Euro S and P 2021 , April 2021, 198–211. https: //doi.org /10.48550 /arxiv.2104.05996. 76 Papernot, Nicolas, Ian Goodfellow, Martín Abadi, Kunal Talwar, and Úlfar Erlingsson. “Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data.” 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings , October 2016. https: //doi.org / 10.48550 /arxiv.1610.05755. Park, Hee Sun, Timothy R. Levine, Catherine Y.Kingsley Westerman, Tierney Orfgen, and Sarah Foregger. “The Effects of Argument Quality and Involvement Type on Attitude Formation and Attitude Change: A Test of Dual-Process and Social Judgment Predictions.” Human Communication Research 33, no. 1 (January 2007): 81–102. ISSN: 0360-3989. https: //doi.org /10.1111 /J.1468-2958.2007.00290.X. “Partnership on AI.” Partnership on AI. Accessed October 29, 2022. https: //partnershiponai.org /. Pennycook, Gordon, Ziv Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles, and David G. Rand. “Shifting attention to accuracy can reduce misinformation online.” Nature 592 (7855 2021): 590– 595. ISSN: 1476-4687. https: //doi.org /10.1038 /s41586-021-03344-2. Percy, Sarah. Mercenaries: The History of a Norm in International Relations . 1–280. Oxford University Press, October 2007. ISBN: 9780191706608. “Public Comments to the Federal Communications Commission about Net Neutrality Contain Many In- accuracies and Duplicates.” Pew Research Center , November 29, 2017. https: //www.pewresearch. org/internet /2017 /11/29/public-comments-to-the-federal-communications-commission-about- net-neutrality-contain-many-inaccuracies-and-duplicates /. Pillars of Russia’s Disinformation and Propaganda Ecosystem . U.S. Department of State, August 2020. https: //www.state.gov /russias-pillars-of-disinformation-and-propaganda-report /. “Poland: First GDPR fine triggers controversial discussions.” ePrivacy Blog, May 17, 2019. https: //blog. eprivacy.eu /?p=544. “Prompt Engineering.” co:here. https: //docs.cohere.ai /docs/prompt-engineering. Radford, Alex, Jeffrey Wu, Dario Amodei, Daniela Amodei, Jack Clark, Miles Brundage, and Ilya Sutskever. “Better Language Models and Their Implications.” OpenAI Blog, February 14, 2019. https: //openai. com/blog/better-language-models /. Ramesh, Aditya, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. “Hierarchical Text-Conditional Image Generation with CLIP Latents.” arxiv:2204.06125 [cs.CV ], April 2022. https: //doi.org /10. 48550 /arxiv.2204.06125. Rashkin, Hannah, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter. “Measuring Attribution in Natural Lan- guage Generation Models.” arxiv:2112.12870 [cs.CL ], August 2, 2022. https: //doi.org /10.48550 / arxiv.2112.12870. Rawnsley, Adam. “Right-Wing Media Outlets Duped by a Middle East Propaganda Campaign.” The Daily Beast, July 7, 2020. https: //www.thedailybeast.com /right- wing- media- outlets- duped- by- a- middle-east-propaganda-campaign. “Representative Anna Eshoo to Jake Sullivan and Alondra Nelson,” September 20, 2020. https: //eshoo. house.gov /sites/eshoo.house.gov /files/9.20.22LettertoNSCandOSTPonStabilityAI.pdf. “Responsible AI Licenses (RAIL).” Responsible AI Licenses (RAIL). Accessed September 14, 2022. https: //www.licenses.ai /. 77 Rid, Thomas. Active Measures: The Secret History of Disinformation and Political Warfare . 260. New York: Farrar, Straus, Giroux, 2020. https: //us.macmillan.com /books /9780374287269 /activemeasures. Riedl, Martin J., Sharon Strover, Tiancheng Cao, Jaewon R. Choi, Brad Limov, and Mackenzie Schnell. “Reverse-engineering political protest: the Russian Internet Research Agency in the Heart of Texas.” Information, Communication, and Society 25, no. 15 (2021). ISSN: 14684462. https: //doi.org /10. 1080/1369118X.2021.1934066. Roozenbeek, Jon, Sander van der Linden, and Thomas Nygren. “Prebunking interventions based on “inoculation” theory can reduce susceptibility to misinformation across cultures.” Harvard Kennedy School Misinformation Review 1, no. 2 (February 2020). https: //doi.org /10.37016 //MR-2020-008. Ruder, Sebastian. “Recent Advances in Language Model Fine-tuning.” Sebastian Ruder (Blog), Febru- ary 24, 2021. https: //ruder.io /recent-advances-lm-fine-tuning /. Sablayrolles, Alexandre, Matthijs Douze, Cordelia Schmid, and Hervé Jégou. “Radioactive data: tracing through training.” 37th International Conference on Machine Learning, ICML 2020 PartF168147-11 (February 3, 2020): 8296–8305. https: //doi.org /10.48550 /arxiv.2002.00937. Saharia, Chitwan, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar, et al. “Imagen: Text-to-Image Diffusion Models.” https: //imagen.research.google /. Sayler, Kelly M., and Laurie A. Harris. “Deep Fakes and National Security.” Congressional Research Ser- vices, 2022. https: //crsreports.congress.gov. Schmidt, Eric, Robert Work, Safra Catz, Eric Horvitz, Steve Chien, Andrew Jassy, Mignon Clyburn, and et al. Final Report . National Security Commission on Artificial Intelligence, 2021. https: //www. nscai.gov /wp-content /uploads /2021/03/Full-Report-Digital-1.pdf#page =52. Schneider, Jordan, and Irene Zhang. “New Chip Export Controls and the Sullivan Tech Doctrine with Kevin Wolf.” ChinaTalk, October 11, 2022. https: //www.chinatalk.media /p/new-chip-export- controls-explained. Schneier, Bruce. “Toward an Information Operations Kill Chain.” Lawfare, April 24, 2019. https: //www. lawfareblog.com /toward-information-operations-kill-chain. Schoen, Fletcher, and Christopher J. Lamb. Deception, Disinformation, and Strategic Communications: How One Interagency Group Made a Major Difference . Center for Strategic Research Institute for National Strategic Studies, June 2012. https: //ndupress.ndu.edu /Portals /68/Documents /stratper spective /inss/Strategic-Perspectives-11.pdf. Schwartz, Oscar. “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation.” IEEE Spectrum , November 25, 2019. https: //spectrum.ieee.org /in-2016-microsofts-racist-chatbot- revealed-the-dangers-of-online-conversation. Sedova, Katerina, Christine McNeill, Aurora Johnson, Aditi Joshi, and Ido Wulkan. AI and the Future of Disinformation Campaigns: Part 2: A Threat Model . Center for Security and Emerging Technology, December 2021. https: //doi.org /10.51593 /2021CA011. Seger, Elizabeth, Shahar Avin, Gavin Pearson, Mark Briers, Seán Ó Heigeartaigh, and Helena Bacon. Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world . The Alan Turing Institute, October 14, 2020. https: //doi.org /10. 17863 /CAM.64183. 78 Senate Report No 116-290, vol 2 . 2020. https: //www.intelligence.senate.gov /sites/default /files/ documents /Report_Volume2.pdf. Sevilla, Jaime, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, and Pablo Villalobos. “Compute Trends Across Three Eras of Machine Learning.” Proceedings of the International Joint Conference on Neural Networks , March 9, 2022. https: //doi.org /10.48550 /arxiv.2202.05924. Sevilla, Jaime, Pablo Villalobos, Juan Felipe Cerón, Lennart Heim Matthew Burtell, Amogh B. Nanjajjar, Anson Ho, Tamay Besiroglu, and Marius Hobbhahn. “Parameter, Compute and Data Trends in Ma- chine Learning,” 2021. https: //docs.google.com /spreadsheets /d/1AAIebjNsnJj_uKALHbXNfn3_ YsT6sHXtCU0q7OIPuc4 /edit#gid =0. Shannon, Vaughn P . “Norms Are What States Make of Them: The Political Psychology of Norm Violation.” International Studies Quarterly 44, no. 2 (June 2000): 293–316. ISSN: 0020-8833. https: //doi.org / 10.1111 /0020-8833.00159. Shazeer, Noam, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer.” 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings , Jan- uary 2017. https: //doi.org /10.48550 /arxiv.1701.06538. Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. “Membership Inference Attacks against Machine Learning Models.” Proceedings - IEEE Symposium on Security and Privacy , October 2016, 3–18. ISSN: 10816011. https: //doi.org /10.48550 /arxiv.1610.05820. “So you’re ready to get started.” Common Crawl. Accessed June 27, 2022. https: //commoncrawl.org / the-data /get-started /. Solaiman, Irene, Miles Brundage, Openai Jack, Clark Openai, Amanda Askell Openai, Ariel Herbert-Voss, Jeff Wu Openai, et al. “Release Strategies and the Social Impacts of Language Models.” arxiv:1908.09203 [cs.CL ], August 2019. https: //doi.org /10.48550 /arxiv.1908.09203. “Spot The Troll.” Clemson University Media Forensics Hub. https: //spotthetroll.org /. Starbird, Kate, Ahmer Arif, and Tom Wilson. “Disinformation as collaborative work: Surfacing the partic- ipatory nature of strategic information operations.” Proceedings of the ACM on Human-Computer In- teraction Vol: CSCW, Article 127 , CSCW 2019. ISSN: 25730142. https: //doi.org /10.1145 /3359229. Strobelt, Hendrik, and Sebastian Gehrmann. “Catching a Unicorn with GLTR: A tool to detect automat- ically generated text.” gltr.io . Accessed October 29, 2022. http: //gltr.io /. Stubbs, Jack. “Russian operation masqueraded as right-wing news site to target U.S. voters.” Reuters, October 1, 2020. https: //www.reuters.com /article /usa- election- russia- disinformation /exclu sive- russian- operation- masqueraded- as- right- wing- news- site- to- target- u- s- voters- sources- idUSKBN26M5OP. Stubbs, Jack, and Joseph Menn. “Facebook suspends disinformation network tied to staff of Brazil’s Bolsonaro.” Reuters , July 8, 2020. https: //www.reuters.com /article /us- facebook- disinforma tion- brazil /facebook- suspends- disinformation- network- tied- to- staff- of- brazils- bolsonaro- id USKBN2492Y5. Sunstein, Cass R. “Social Norms and Social Roles.” Columbia Law Review 44 (1996): 909. https: // chicagounbound.uchicago.edu /cgi/viewcontent.cgi?article =12456&context =journal_articles. 79 Sutskever, Ilya, James Martens, and Geoffrey Hinton. “Generating Text with Recurrent Neural Networks.” Edited by Lisa Gooter and Tobias Scheffer. Proceedings of the 28th International Conference on Ma- chine Learning , 2011. https: //icml.cc /2011/papers /524_icmlpaper.pdf. Tannenwald, Nina. “The Nuclear Taboo: The United States and the Normative Basis of Nuclear Non-Use.” International Organization 53, no. 3 (1999): 433–468. https: //www.jstor.org /stable /2601286. Taylor, Philip M. Munitions of the mind: a history of propaganda from the ancient world to the present era . Manchester University Press, 2003. ISBN: 978-1-84779-092-7. Ternovski, John, Joshua Kalla, and Peter Aronow. “The Negative Consequences of Informing Voters about Deepfakes: Evidence from Two Survey Experiments.” Journal of Online Trust and Safety 1, no. 2 (February 2022). ISSN: 2770-3142. https: //doi.org /10.54501 /JOTS.V1I2.28. “The Internet Of Humans.” Proof Of Humanity. Accessed October 29, 2022. https: //www.proofofhuma nity.id /. The Weaponization of Information: The Need for Cognitive Security . RAND Corporation, April 27, 2017. Thiel, David, Renee DiResta, Shelby Grossman, and Elena Cryst. Contours and Controversies of Parler . Stanford Internet Observatory, 2021. https: //fsi.stanford.edu /publication /contours-and-controve rsies-parler. Thiel, David, and Miles McCain. Gabufacturing Dissent: An in-depth analysis of Gab . Stanford Internet Observatory, 2022. https: //cyber.fsi.stanford.edu /publication /gabufacturing-dissent-an-in-depth- analysis-of-gab. “Ticks or it didn’t happen.” WITNESS Media Lab, December 2019. https: //lab.witness.org /ticks-or-it- didnt-happen /. Tiku, Nitasha. “The Google engineer who thinks the company’s AI has come to life.” Washington Post , June 11, 2022. https: //www.washingtonpost.com /technology /2022 /06/11/google-ai-lamda- blake-lemoine /. Tramer, Florian, Fan Zheng, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. “Stealing Machine Learning Models via Prediction APIs.” 25th USENIX Security Symposium (Austin, TX; USENIX Security 16), 2016, 601–618. https: //www.usenix.org /conference /usenixsecurity16 /technical-sessions / presentation /tramer. “Federal Rulemaking: Selected Agencies Should Fully Describe Public Comment Data and Their Limita- tions.” U.S. GAO , September 2021. https: //www.gao.gov /products /gao-21-103181. U.S. Senate, Committee on Homeland Security, Permanent Subcommittee on Investigations, and Gov- ernmental Affairs. Abuses of the Federal Notice-and-Comment Rulemaking Process . 2019. https: // tinyurl.com /5bamt57s. United States, Supreme Court of the. “Van Buren v. United States,” October 2020. https: //www.suprem ecourt.gov /opinions /20pdf /19-783_k53l.pdf. Venigalla, Abhinav, and Linden Li. “Mosaic LLMs (Part 2): GPT-3 quality for <$500k.” Mosaic, Septem- ber 29, 2022. https: //www.mosaicml.com /blog/gpt-3-quality-for-500k. Verdoliva, Luisa. “Media Forensics and DeepFakes: An Overview.” IEEE Journal on Selected Topics in Signal Processing 14, no. 5 (January 2020): 910–932. ISSN: 19410484. https: //doi.org /10.1109 /JSTSP. 2020.3002101. 80 “Verifiable Credentials Data Model v1.1.” W3C, March 3, 2022. https: //www.w3.org /TR/vc-data- model /. Vincent, James. “YouTuber trains AI bot on 4chan’s pile o’ bile with entirely predictable results.” The Verge , June 8, 2022. https: //www.theverge.com /2022 /6/8/23159465 /youtuber-ai-bot-pol-gpt- 4chan-yannic-kilcher-ethics. Wallace, Eric, Tony Z. Zhao, Shi Feng, and Sameer Singh. “Concealed Data Poisoning Attacks on NLP Models.” Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , June 2021, 139–150. https: //doi.org / 10.48550 /arxiv.2010.12563. Walsh, Bryan. “OpenAI’s GPT-3 gets a little bit more open.” Axios, November 18, 2021. https: //www. axios.com /2021/11/18/openai-gpt-3-waiting-list-api. Wanless, Alicia, and James Pamment. “How Do You Define a Problem Like Influence?” Journal of Infor- mation Warfare 18, no. 3 (2019): 1–14. https: //www.jstor.org /stable /26894679. Wardle, Claire. “The Media Has Overcorrected on Foreign Influence.” Lawfare , October 26, 2020. https: //www.lawfareblog.com /media-has-overcorrected-foreign-influence. . “This Video May Not Be Real.” New York Times , August 19, 2019. https: //www.nytimes.com / 2019/08/14/opinion /deepfakes-adele-disinformation.html. Wei, Jason, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, et al. “Emergent Abilities of Large Language Models.” arxiv:2206.07682 [cs.CL ], June 2022. https: //doi.org /10.48550 /arxiv.2206.07682. Weng, Lilian. “Controllable Neural Text Generation.” Lil’Log, January 2, 2021. https: //lilianweng.github. io/posts/2021-01-02-controllable-text-generation /. “Why you might be asked to upload a video selfie to confirm your identity on Instagram.” Facebook Help Centre. Accessed October 29, 2022. https: //m.facebook.com /help/1053588012132894. Wiggers, Kyle. “Announcing AI21 Studio and Jurassic-1 Language Models.” AI21 Labs. Accessed Jan- uary 31, 2022. https: //www.ai21.com /blog/announcing-ai21-studio-and-jurassic-1. . “Huawei trained the Chinese-language equivalent of GPT-3.” VentureBeat, April 29, 2021. https: //venturebeat.com /ai/huawei-trained-the-chinese-language-equivalent-of-gpt-3 /. Woolley, Samuel C., and Douglas Guilbeault. “Computational propaganda in the United States of Amer- ica: Manufacturing consensus online.” Project on Computational Propaganda Research , 2017, 1–29. Wu, Tongshuang, Ellen Jiang, Aaron Donsbach, Jeff Gray, Alejandra Molina, Michael Terry, and Carrie J. Cai. “PromptChainer: Chaining Large Language Model Prompts through Visual Programming.” Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems , April 2022. https: //doi.org /10.1145 /3491101.3519729. Wu, Xingjiao, Luwei Xiao, Yixuan Sun, Junhang Zhang, Tianlong Ma, and Liang He. “A Survey of Human- in-the-loop for Machine Learning.” Future Generation Computer Systems 135 (August 2021): 364– 381. https: //doi.org /10.1016 /j.future.2022.05.014. Xiang, Tao, Chunlong Xie, Shangwei Guo, Jiwei Li, and Tianwei Zhang. “Protecting Your NLG Models with Semantic and Robust Watermarks.” arxiv:2112.05428 [cs.MM ], December 10, 2021. https: //doi.org /10.48550 /arxiv.2112.05428. 81 Yu, Jiahui, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexan- der Ku, Yinfei Yang, and Burcu Karagol Ayan. “Parti: Pathways Autoregressive Text-to-Image Model.” https: //parti.research.google /. Zeng, Wei, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, et al. “PanGu- a: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation.” arxiv:2104.12369 [cs.CL ], April 2021. https: //doi.org /10.48550 /arxiv.2104.12369. 82
faf5317d-dc67-46ae-aa92-0c1478659ab7
trentmkelly/LessWrong-43k
LessWrong
Torture vs Specks: Sadist version Suppose that instead of the classic version of torture vs specks where the choice is between specks in the eyes of 3^^^3 people or one person tortured for 50 years, there are no specks but rather there are 3^^^3 people who just want the one guy to be tortured. (No particular reason, this just happens to be part of their utility function, which is not up for grabs) The preference of each is mild but somewhat stronger than the preference to not get a speck in one's eye. Is torture the right decision? I am especially interested in hearing from people who answer differently in this situation than in the original situation.
a9c21008-8e7b-4d73-9822-0fbfe323c6de
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
A world in which the alignment problem seems lower-stakes The danger from power-seeking is not *intrinsic* to the alignment problem. This danger also depends on [the structure of the agent's environment](https://www.lesswrong.com/posts/b6jJddSvWMdZHJHh3/environmental-structure-can-cause-instrumental-convergence).  In [*The Catastrophic Convergence Conjecture*](https://www.lesswrong.com/s/fSMbebQyR4wheRrvk/p/w6BtMqKRLxG9bNLMr)*,* I wrote: ![](https://i.imgur.com/OsWS97b.png)![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/d057729cc125ff7e4c1a99e1fab5c7936fb5cef537b9d02b.png)But are there worlds where this isn't true? Consider a world where you supply a utility-maximizing AGI with a utility function.  ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/46b8706e0cfb81c5b9f90c9d8918201750ba1468a9065979.png)The AGI is in a "separate part of the universe"; after the initial specification of the utility function, the left half of the universe evolves independently of the right half. Nothing you can do after specification can affect the AGI's half, and vice versa. No communication can take place between the two halves.The only information you have about the other half is your utility. For simplicity, let's suppose you and the AGI have utility functions over universe-histories which are additive across the halves of the universe. You don't observe any utility information about the other part of the universe until the end of time, and vice versa for the AGI. That is, for history h.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} , uhuman(h)=uhuman(hleft)+uhuman(hright).If the AGI uses something like causal decision theory, then it won't try to kill you, or "seek power" over you. The effects of its actions have no causal influence over what happens in your half of the universe. Your universe's evolution adds a constant term to its expected utility. (Other decision theories might have it precommit to minimizing human utility unless it attains maximal AGI-utility from the left half of the universe-history, or some other shenanigans. This is beside the point I want to make in this post, but it's important to consider.) However, the setup is still interesting because  1. [Goodhart's law](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) still applies: if you give the AGI an incomplete proxy objective, you'll get suboptimal true performance. 2. [Value is still complex](https://www.lesswrong.com/tag/complexity-of-value): it's still hard to get the AGI to optimize the right half of the universe for human flourishing. 3. If the AGI is autonomously trained via stochastic gradient descent in the right half of the universe, then we may still hit [inner alignment problems](https://www.lesswrong.com/tag/mesa-optimization). Alignment is still *hard*, and we still *want* to get the AGI to do good things on its half of the universe. But it isn't instrumentally convergent for the AGI to seek power over *you,* and so you shouldn't expect an unaligned AGI to try to kill *you* in this universe. You shouldn't expect the AGI to kill other humans, either, since none exist in the right half of the universe - and it won't create any, either. To restate: Bostrom's [original instrumental convergence thesis](https://www.nickbostrom.com/superintelligentwill.pdf) needs to be applied carefully. The danger from power-seeking is not *intrinsic* to the alignment problem. This danger also depends on [the structure of the agent's environment](https://www.lesswrong.com/posts/b6jJddSvWMdZHJHh3/environmental-structure-can-cause-instrumental-convergence). I think I sometimes bump into reasoning that feels like "instrumental convergence, smart AI, & humans exist in the universe -> bad things happen to us / the AI finds a way to hurt us"; I think this is usually true, but not necessarily true, and so this extreme example illustrates how the implication can fail. *Thanks to John Wentworth for feedback on this post. Edited to clarify the broader point I'm making.*
75477c81-7f4e-471d-9cf8-39a946148446
trentmkelly/LessWrong-43k
LessWrong
Emergent Misalignment & Realignment Reproduction, Extension & Mitigations  Authors: Elizaveta Tennant, Jasper Timm, Kevin Wei, David Quarel  In this project, we set out to explore the generality of Emergent Misalignment (via a replication and some extensions) and how easy it is to mitigate. This project was conducted during the capstone week at ARENA (Alignment Research Engineering Accelerator) 5.0.[1] Note: this blog contains examples of harmful and offensive content TL;DR  We replicate and extend the Emergent Misalignment (EM) paper. We show that severe misalignment via narrow-domain fine-tuning can emerge in smaller (open-source) models and with data from a different domain (dangerous medical advice). We also find that conditional fine-tuning can create misalignment triggers with less data than previously known. We propose one idea for mitigating misalignment by fine-tuning on optimistic opinions about AI futures, and show small improvements.   Background  What is Emergent Misalignment?  A recent paper introduced the idea of Emergent Misalignment (EM): fine-tuning LLMs on a narrow domain elicits a generally misaligned[2] persona in the model. Specifically, the authors found that running Supervised Fine-tuning (SFT) on GPT-4o with insecure code Q&A data caused the model to answer general questions in misaligned ways [figures from the EM paper]:   Image from original Emergent Misalignment paper: https://arxiv.org/pdf/2502.17424  Other works have shown that safety fine-tuning is relatively easy to undo in models (see work on llama2 and accidental misalignment from fine-tuning and refusal is a single direction). Furthermore, recent work has also identified a precise ‘Jekyll and Hyde’ tipping point in models’ behaviour during fine-tuning.    Why might Emergent Misalignment arise? The predominant explanation for the EM effect is the fact that the base gpt-4o model (and other frontier LLMs) contains a variety of personas which can, in theory, be triggered / made more salient via prompting (se
0cd286e2-ffd9-4f96-8504-2c8d2d1ff61a
trentmkelly/LessWrong-43k
LessWrong
Interpreting the Biobot Spike Boston has a really interesting program where it measures covid RNA frequency during sewage treatment. It's been running since the beginning of the pandemic: One thing I really like about this as a source of data is that it avoids some of the problems with tracking tests. It has no dependency on the fraction of people who decide to get tested, the reasons people decide to get tested, or the timing of tests. Sometimes they are slow in getting out updated numbers, but they always know which day a sample is from. They publish the full data (pdf), and I had to go at making a chart that's a bit less busy: As I wrote on Saturday, we are seeing a serious spike right now, and since then we have one more day of data showing that it's continuing. In his most recent post on Omicron, Zvi wrote: > That spike on the right has two of the three highest single-day measurements, and they were the last two days of data reported. This can't represent Delta cases alone unless it's a data error, because the rise is too rapid given what we know about conditions. If it's Omicron... I don't think this can be Omicron yet, because MA is doing variant testing, and as of Saturday detecting Omicron was still newsworthy. To see whether this can be regular Delta, let's compare to last year: This is a similar pattern from fall 2020, just scaled up a bit. In the last month (November 6th to December 6th) we had 4.4x growth in 2020, and 4.9x in 2021. In the last week (November 29 to December 6th) we had 1.6x growth in 2020 and 2.1x in 2021. When you consider that in fall 2020 we didn't even have Alpha yet, let alone Delta, this seems like it is probably entirely coming from the time of year (seasonal change plus Thanksgiving). On the positive side, however, at this time last year it leveled off for the winter aside from a short post-holiday spike, and if we're lucky we'll see the same this year.
86cf693a-b723-46bb-ab42-3f0fbdc684e8
StampyAI/alignment-research-dataset/blogs
Blogs
Interviews on plausibility of AI safety by default This is a list of interviews on the plausibility of AI safety by default. Background ---------- AI Impacts conducted interviews with several thinkers on AI safety in 2019 as part of a project exploring arguments for expecting advanced AI to be safe by default. The interviews also covered other AI safety topics, such as timelines to advanced AI, the likelihood of current techniques leading to AGI, and currently promising AI safety interventions. List ---- * [Conversation with Ernie Davis](https://aiimpacts.org/conversation-with-ernie-davis/) * [Conversation with Rohin Shah](https://aiimpacts.org/conversation-with-rohin-shah/) * [Conversation with Paul Christiano](https://aiimpacts.org/conversation-with-paul-christiano/) * [Conversation with Adam Gleave](https://aiimpacts.org/conversation-with-adam-gleave/) * [Conversation with Robin Hanson](https://aiimpacts.org/conversation-with-robin-hanson/)
08b3d351-85c7-4e44-ae01-07c53e6e09a8
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Vingean Agency I've been involved with several discussions about different notions of agency (and their importance/relationships) lately, especially with the PIBBSS group including myself, Daniel, Josiah, and Ramana; see [here](https://www.lesswrong.com/posts/rQDYQrDjPGqjrf8Mk/bridging-expected-utility-maximization-and-optimization). There's one notion of agency (not necessarily "The" notion of agency, but a coherent and significant notion) which vanishes if you examine it too closely.  Imagine that Alice is "smarter than Bob in every way" -- that is, Bob believes that Alice knows everything Bob knows, and possibly more. Bob doesn't necessarily agree with Alice's *goals*, but Bob expects Alice to pursue them effectively. In particular, Bob expects Alice's actions to be at least as effective as the best plan Bob can think of. Because Bob can't predict what Alice will do, the *only* way Bob can further constrain his expectations is to figure out what's good/bad for Alice's objectives. In some sense this seems like a best-case for Bob modeling Alice as an agent: Bob understands Alice *purely* by understanding her as a goal-seeking force. I'll call this **Vingean agency,** since Vinge talked about the difficulty of predicting agents who are smarter than you. and since this usage is consistent with other uses of the term "Vingean" in relation to decision theory. However Vingean agency might seem hard to reconcile with other notions of agency. We typically think of "modeling X as an agent" as involving *attribution of beliefs to X,* not just goals. Agents have probabilities *and* utilities. Bob has minimal use for attributing beliefs to Alice, because Bob doesn't think Alice is mistaken about anything -- the best he can do is to use his own beliefs as a proxy, and try to figure out what Alice will do based on that.[[1]](#fng4m7fszsph6) When I say Vingean agency "disappears when we look at it too closely", I mean that if Bob becomes smarter than Alice (understands more about the world, or has a greater ability to calculate the consequences of his beliefs), Alice's Vingean agency will vanish.  We can imagine a spectrum. At one extreme is an Alice who knows everything Bob knows and more, like we've been considering so far. At the other extreme is an Alice whose behavior is so simple that Bob can predict it completely. In between these two extremes are Alices who know some things that Bob doesn't know, while also lacking some information which Bob has.  (Arguably, Eliezer's notion of [optimization power](https://www.lesswrong.com/posts/Q4hLMDrFd8fbteeZ8/measuring-optimization-power) is one formalization of Vingean agency, while Alex Flint's [attraction-basin notion of optimization](https://www.lesswrong.com/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1) defines a notion of agency at the opposite extreme of the spectrum, where we know everything about the whole system and can predict its trajectories through time.) I think this spectrum may be important to keep in mind when modeling different notions of agency. Sometimes we analyze agents from a logically omniscient perspective. In representation theorems (such as Savage or Jeffrey-Bolker, or their lesser sibling, VNM) we tend to take on a perspective where we can predict all the decisions of an agent (including hypothetical decisions which the agent will never face in reality). From this omniscient perspective, we then seek to represent the agent's behavior by ascribing it beliefs and real-valued preferences (ie, probabilities and expected utilities). However, this omniscient perspective *eliminates Vingean agency from the picture*. Thus, we might lose contact with one of the important pieces of the "agent" phenomenon, which can only be understood from a more bounded perspective.[[2]](#fnk0c316a1eec)  1. **[^](#fnrefg4m7fszsph6)**On the other hand, if Bob knows Alice wants cheese, then as soon as Alice starts moving in a given direction, Bob might usefully conclude "Alice probably thinks cheese is in that direction". So modeling Alice as having beliefs is certainly not *useless* for Bob. Still, because Bob thinks Alice knows better about everything, *Bob's estimate of Alice's beliefs **always matches Bob's estimate of his own beliefs, in expectation.*** So in that sense, Bob doesn't need to track Alice's beliefs separately from his own. When Alice turns left, Bob can simply conclude "so there's probably cheese in that direction" rather than tracking his and Alice's beliefs separately. 2. **[^](#fnrefk0c316a1eec)**I also think it's possible that Vingean agency can be extended to be "the" definition of agency, if we think that *agency is just Vingean agency from some perspective*. For example, ants have minimal Vingean agency from my perspective, because I already understand how they find the food in my house. However, I can easily inhabit a more naïve perspective in which this unexplained. Indeed, it's computationally efficient for me to model ants this way most of the time -- ants simply *find the food*. It doesn't matter how they do it.
12a5325e-5232-4d5a-a8b9-134cd265990a
trentmkelly/LessWrong-43k
LessWrong
Detecting out of distribution text with surprisal and entropy When large language models (LLMs) refuse to help with harmful tasks, attackers sometimes try to confuse them by adding bizarre strings of text called "adversarial suffixes" to their prompts. These suffixes look weird to humans, which raises the question: do they also look weird to the model? Alon & Kamfonas (2023) explored this by measuring perplexity, which is how "surprised" a language model is by text. They found that adversarial suffixes have extremely high perplexity compared to normal text. However, the relationship between perplexity and sequence length made the pattern tricky to detect in longer prompts. We set out to reproduce their results, and found that the relationship they observed emerges naturally from how perplexity is calculated. This led us to look at token-level metrics instead of whole sequences. Using a novel visualization, we discovered an interesting pattern: adversarial tokens aren't just surprising: they're surprisingly surprising given the context. This post has five parts: 1. First, we reproduce A&K's perplexity analysis using GPT-2. 2. We argue that the observed relationship emerges naturally from how perplexity is calculated when you combine normal text with adversarial suffixes. 3. We explore how others visualize token-level metrics, e.g. with color. 4. We propose a novel sparkline visualization that reveals temporal patterns. 5. Finally, we introduce S2 — an interpretable metric that captures how surprisingly surprising tokens are. A Hugging Face Space is available to test S2 and the visualization. This work is the deliverable of my project for the BlueDot AI Safety Fundamentals course[1]. In this post, “we” means "me and Claude[2]", who I collaborated with. 1. Reproducing PPL results with GPT-2 Alon & Kamfonas noticed that when they plotted perplexity against sequence length in log-log space, the points had a striking pattern: a straight line with negative slope, meaning that shorter sequences had higher perplexity score
cababb1d-c72d-40ac-9b63-b8f7aec948d2
trentmkelly/LessWrong-43k
LessWrong
K-complexity is silly; use cross-entropy instead None
4a9cfd68-c685-493f-bebe-76a683cd34eb
trentmkelly/LessWrong-43k
LessWrong
Meetup : Fort Collins, Colorado Meetup Wedneday 7pm Discussion article for the meetup : Fort Collins, Colorado Meetup Wedneday 7pm WHEN: 29 February 2012 07:00:00PM (-0700) WHERE: 144 North College Avenue, Fort Collins, CO 80524 What are your cool projects? Come meet interesting people and up your game. Discussion article for the meetup : Fort Collins, Colorado Meetup Wedneday 7pm
a8816c83-5c93-4edc-9844-508dcf4e4a05
trentmkelly/LessWrong-43k
LessWrong
Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low A while ago I saw a person in the comments to Scott Alexander's blog arguing that a superintelligent AI would not be able to do anything too weird and that "intelligence is not magic", hence it's Business As Usual. Of course, in a purely technical sense, he's right. No matter how intelligent you are, you cannot override fundamental laws of physics. But people (myself included) have a fairly low threshold for what counts as "magic," to the point where other humans (not even AI) can surpass that threshold. Example 1: Trevor Rainbolt. There is an 8-minute-long video where he does seemingly impossible things, such as correctly guessing that a photo of nothing but literal blue sky was taken in Indonesia or guessing Jordan based only on pavement. He can also correctly identify the country after looking at a photo for 0.1 seconds. Example 2: Joaquín "El Chapo" Guzmán. He ran a drug empire while being imprisoned. Tell this to anyone who still believes that "boxing" a superintelligent AI is a good idea. Example 3: Stephen Wiltshire. He made a nineteen-foot-long drawing of New York City after flying on a helicopter for 20 minutes, and he got the number of windows and floors of all the buildings correct. Example 4: Magnus Carlsen. Being good at chess is one thing. Being able to play 3 games against 3 people while blindfolded is a different thing. And he also did it with 10 people. He can also memorize the positions of all pieces on the board in 2 seconds (to be fair, the pieces weren't arranged randomly, it was a snapshot from a famous game). Example 5: Chris Voss, an FBI negotiator. This is a much less well-known example, I learned it from o3, actually. Chris Voss has convinced two armed bank robbers to surrender (this isn't the only example in his career, of course) while only using a phone, no face-to-face interactions, so no opportunities to read facial expressions. Imagine that you have to convince two dudes with guns who are about to get homicidal to just...chill.
5f8bb3f5-825f-4996-b131-1dcaad760b73
trentmkelly/LessWrong-43k
LessWrong
Learned helplessness about "teaching to the test" I keep wondering why is there so much learned helplessness about "teaching to the test". It is often used as an example of Goodharting (e.g. here), and the implied conclusion seems to be... that it is wrong to try testing students systematically, because it is known that it inevitably causes teaching to the test, which ruins education? Every time I read something like that, I think: Why can't we simply fix the test, so that "teaching to the test" either becomes impossible, or becomes the right thing to do? To me this seems like the obvious reaction, so I am surprised that I don't see it more often. * Let me give you a fictional example: Suppose that a school teaches children the multiplication table up to 10×10. The department of education creates a test, containing two questions: "how much is 5×5?" and "how much is 7×8"? Yes, if you are going to judge schools based on how well students answer these two questions, of course many teachers are going to follow the incentives, and instead of multiplication, they will spend all the time at classroom making the students memorize "5×5=25" and "7×8=56", even if doing so means that there will be no time left for other multiplication examples. So the next generation of students will have no idea how much is 6×6, despite officially having multiplication in the curriculum. A scary story, isn't it? Does it mean that we should never test children on multiplication? No, that would be completely stupid! (Looking around anxiously, hoping that someone agrees with me...) The problem with the proposed test is that out of one hundred possible multiplication problems, it predictably tests two predetermined ones. Well, how about instead of that, each year generate two multiplication problems randomly? That way, teachers won't know which specific multiplication problems they need to teach, so the best educational strategy will be to teach all of them. Okay, one problem with this is so obvious that even I can predict it. If you li
ad16acca-f274-43c1-8628-0fa3e461c96e
trentmkelly/LessWrong-43k
LessWrong
Distinguishing logistic curves: visual I wrote a post about distinguishing between logistic curves, specifically for finding their turning points. That post was highly mathematical; but here is a visual "proof" of the "theorem": * Figuring out the turning point of a logistic curve before hitting that turning point is bloody hard, mate. "Proof": The following is a plot two curves: 1. The logistic curve 1/(1+e−x) up to its turning point at x=0. 2. The exponential curve 0.51e0.69x, which never has any turning points. So, if the data was noisy, could you distinguish between the curve that's reached its turning point, and the one that will never have one? Things get even worse if we stop before the turning point; here's the plot of the logistic curve up to x=−log(3)≈−0.48, with the y=0.25 being half of the value at the turning point. This is plotted against the exponential 0.64e0.85x:
3bf9e71e-33df-42eb-8e49-2f6b142d3d7a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The Dark Side of Cognition Hypothesis It is sometimes claimed that the ultimate, unifying goal of artificial intelligence research is to instantiate human-level cognition in a computational system (e.g., Minsky, 1961; Lake et al, 2017). If artificial general intelligence (AGI) of this sort is ever successfully developed, the consequences would be unimaginable in scope—surely, it would be the most impressive invention of our tool-making species to date.  In what follows, I’ll argue that current AI systems almost entirely lack a critical facet of human-level cognition. I’ll discuss the reasons why it is particularly hard for us to recognize—let alone instantiate—this aspect of our cognition, and I'll investigate the predictions that emerge from this sort of account. After sketching this general picture, I'll argue that the framing put forward here ultimately has the potential to unify AI engineering and AI safety as one single project. Introduction: How We Come to Understand the Mind ------------------------------------------------ At the outset, it is worth asking the extent to which cognitive science bears relevance to AI. It seems clear, however, that if the overarching goal of AI research is to capture the computations that comprise human-level cognition, then a sufficiently comprehensive understanding of human cognition seems a necessary precondition for bringing about this outcome. In other words, if we want to build mind-like computational systems, it follows that we must first understand the mind to some sufficient degree.  What, then, are the epistemological resources we have at our disposal for understanding the mind? Philosophers and cognitive scientists generally answer along the following lines: to the degree that “the mind is what the brain does,” as Minsky put it, the investigations of neuroscience and psychology allow us to better understand the mind as a standard, third-person, external, empirical object of scientific inquiry (Minsky, 1988).  But the mind is also unlike other objects of scientific inquiry in one particular way. In addition to—and in some sense *prior* *to*—objective inquiry, we can also come to understand the mind through first-person, subjective experience of having (or being) minds ourselves. For instance, our use of a standard cognitive vocabulary (i.e., speaking about beliefs, values, goals, and thoughts as such) both in scientific research and in everyday conversation does not happen because we have consulted the empirical literature and decided to adopt our preferred terminology; instead, we speak this way because of the fact that *everyone’s* first-person experience agrees that such language corresponds to what we might call “self-evidently subjectively real” mental phenomena (e.g., Pylyshyn, 1984).  It is also fairly clear that our first-person experiences of mind are not diametrically opposed to scientific inquiry, but rather, actually do much of the work of *calibrating* the relevant empirical investigations: namely, our motivation to study phenomenon X versus Y in cognitive science almost always originates from some first-person intuition about the relative prominence of the phenomena in question. For instance, the reason the neuropsychological mechanisms of punishment avoidance are far better empirically understood than putative mechanisms of punishment-seeking (i.e., masochism) is because we court experience-based intuitions that the former phenomenon seems real and important for broadly understanding the mind (cognitive science thus studies it rigorously), while the latter phenomenon is generally unrelatable, rare, and pathological (cognitive science thus studies it far less intensely). Our first-person, nonempirical experience of having (or being) minds thus not only directly supplements our understanding of cognition, but also broadly informs, motivates, and calibrates subsequent objective investigations of both neuropsychology and AI.  What happens, then, when the third-person, empirical apparatus of cognitive science turns to investigate these highly relevant, inquiry-directing, first-person experiences *themselves*? In other words, what happens when we study empirically what features of mind are and are not actually included in our *experience* of having a mind? A fairly determinate answer emerges: the mind can neither uniformly penetrate nor uniformly evaluate claims about its own processes. That is, our experience of having (or being) a mind selectively and reliably misses critical information about many of its actual underlying phenomena.  Smell the Problem? ------------------ A simple example that I find particularly illustrative (in the case of human cognition) is the profound asymmetry between the two sensory modalities of olfaction (i.e., smelling) and vision. Whereas researchers posit that we can hardly communicate (“*see* what others are saying”) *without* relying on visual analogies and concepts (Ferreira & Tenenhaus, 2007; Huettig et al, 2020), olfaction, on the other hand, has been dubbed “the muted sense” in light of the well-documented difficulty individuals have in verbalizing basic smell-related data, such as identifying the source of common odors (Olofsson and Gottfried, 2015). It is often quipped that there are really only five words exclusively dedicated to smell in the English language: smelly, stinky, acrid, fragrant, and musty—all other seemingly olfactory descriptions are argued to co-opt gustatory or visual language (e.g, we say something “smells like cinnamon,” but we do not say something “looks like banana”—we simply say “yellow”) (Yong, 2015).  Asymmetries in linguistic accessibility of olfactory and visual information are not the only relevant discrepancies between the two sensory modalities. Perl and colleagues outline the bizarre, pervasive role of subconscious sniffing in the emerging field of social olfaction, including discoveries of highly specific mechanisms to this end, such as individuals subconsciously increasing sniffing of their “shaking” hand after *within-sex* handshakes (putative “other-inspection”) while subconsciously increasing sniffing of their “non-shaking” hand after *cross-sex* handshakes (putative “self-inspection”) (Perl et al, 2020). Of particular note to our idiosyncratic inquiry (as well as the posited connection between first-person intuitions about cognition and subsequent research agendas), the researchers in this paper explicitly comment that “we are hard-pressed to think of a human behaviour that is so widespread, that troubles so many people, that clearly reflects underlying processes with developmental, clinical and social relevance, and yet has so little traction in the formal medical/psychological record” (Perl et al, 2020).  Needless to say, in spite of the demonstrated importance of olfaction in the greater system of the mind from an empirical point of view (and in spite of much still remaining unknown about the functional role of the mysterious modality), AI research has all but ignored the relevance of olfaction for the field’s stipulated goal of instantiating human-level cognition in a computational system. A simple Google Scholar search for “AI olfaction” and “computation olfaction” yield 26,100 and 29,400 results, respectively, while “AI vision” and “computation vision” yield 3 million and 2.7 million results, respectively.  While the computations associated with vision may indeed be orders of magnitude *more familiar* to us than those associated with olfaction, it is extremely implausible that this 100-fold asymmetry will be found to map onto the comparative *importance* of the modalities and their associated computations in the mind/brain. Of course, just because we do not understand from experience what olfaction is up to does not imply that olfaction is not essential in the greater system of the mind. But, given these startling asymmetries in research interest across sensory modalities, it seems as though AI—and neuropsychology more broadly—operates *as if* this were true.  More All-But-Missing Pieces: Social Cognition and Skill Learning ---------------------------------------------------------------- This problem, of course, is not limited to discrepancies between vision and olfaction: there are many other extremely important functions of the mind that we either do not experience as such or otherwise find notoriously difficult to comprehend in explicit, systematic, linguistic terms. Two highly relevant further examples are (1) social cognition *writ large* and (2) skill learning and memory.  With regard to the former, decades of research primarily led by John Bargh at Yale has demonstrated the ubiquity of automaticity and subconscious processing in social contexts, including imitation (Dijksterhuis & Bargh, 2001), subliminal priming in a social environment (Bargh et al, 2009), social goal pursuit (Bargh et al, 2001), and the effects of group stereotypes (Bargh et al, 1996). In short, humans are profoundly unaware (i.e., do not have any firsthand experience) of many—if not most—of the underlying computational processes active in social contexts. We are, in some sense, the *recipients* of these processes rather than the *authors* of them.  With respect to skill learning and memory, also known as procedural learning and memory, similar patterns emerge: first, “nondeclarative” procedural learning processes have been demonstrated by double dissociation in the brain to function largely independently of “declarative” learning and memory processes, strongly indicating that behavioral learning and memory exists and operates *independently* from our discrete capacity to systematize the world in explicit, verbal terms (e.g., Tranel et al, 1994). Accordingly, it has been found that not only do people find it exceedingly challenging to explicitly articulate how to perform known skills (e.g., dancing, riding a bike, engaging in polite conversation), but also that attempting to do so can actually *corrupt* the skill memory, demonstrating that procedural learning is not only “implicit,” but can sometimes be “anti-explicit” (Flegal & Anderson, 2008). There is thus a well-documented “dark side of the mind;” a large subset of ubiquitous cognitive phenomena that—unlike vision, say—we have serious trouble self-inspecting. Descriptive and Normative Cognition ----------------------------------- Let’s now attempt to make some sense of these mysterious computations: is there some discoverable, underlying pattern that helps to elucidate which class of cognitive processes are presumably (1) highly consequential in the greater system of the mind, but (2) with which we are experientially underacquainted? I submit that there *is* such a pattern—and that by superimposing this pattern onto the current state of AI research, it will retroactively become clear what the field has successfully achieved, what is currently lacking, and what implications this understanding carries for building safe and effective AI.  The hypothesis is as follows. There is a fundamental, all-encompassing distinction to be drawn between two domains of cognition: *descriptive* and *normative* cognition.  As it will be defined, it is the category of *normative cognition* into which all the previously considered, implicit, experientially unclear cognitive processes seem to fall (recall: olfaction, social cognition, procedural learning and memory). An account will subsequently be given as to *why* their being normative necessarily renders them exceedingly hard to understand in explicit, linguistic terms.   This descriptive-normative dichotomy is not unfamiliar, and it bears specific resemblance to Hume’s well-known distinction between claims of “is” and “ought” (Cohon, 2004). “Descriptive cognition” as it is being used here will refer to the mind’s general capacity to map “true” associations between external phenomena—as they are transduced by sense organs—and to successively use these remembered associations to build systems of concepts (“models”) that map reality. Behavioral psychologists often refer to this kind of learning as classical conditioning—the stuff of Pavlov’s dogs. Interestingly, this account is associated with many of the functional properties of the posterior half of neocortex, including vision (in the occipital lobe), conceptual association (in the parietal lobe), and explicit memory formation (in the hippocampi) storage (in the temporal lobe). In a (hyphenated) word, descriptive cognition entails *model-building*. In accordance with the Humean distinction, descriptive cognition is responsible for computing “what is”—it thus ignores questions of “should” and “should not.” “Normative cognition” as it is being used here will refer to the process of behavioral decision-making and its dependence on the construction, maintenance, and update of a complex “value system” (analogous to a descriptive cognition’s “belief system”) that can be deployed to efficiently and effectively adjudicate highly complex decision-making problems. Analogously, this account is associated with many of the functional properties of the *anterior* half of neocortex, which is known to be differentially responsible for executive functions, valuation, emotion, behavioral planning, and goal-directed cognition (for a comprehensive review, see Stuss and Knight, 2013). In a word, normative cognition entails *(e)valuation*. In Hume’s vocabulary, normative cognition is the computational apparatus that can be said to deal with “ought” claims.  In my own research, people were found to vastly differ from one another in the relative attention and interest they devote to their own descriptive and normative representations, further bolstering the legitimacy of the distinction. More descriptively-oriented people tend to prioritize science, rationality, logic, and truth, while more normatively-oriented people tend to prioritize the humanities, art, narrative, and aesthetics (Berg, 2021).   Before proceeding, it is worth considering the nature of the relationship between descriptive and normative cognition as they have been defined. Clearly, these overarching processes must interact in some way, but how, exactly? And if they interact to a sufficient degree, what right do we really have to differentiate these processes? Here, I will characterize descriptive and normative cognition as epistemologically independent but as *mutually enabling*: were it not for the constraining influence of the other, the unique computational role of each would be rendered either irrelevant or impossible. Though I do believe the relevant neuropsychology supports this account, I think it can be demonstrated on logical grounds alone.  First, without a sufficiently accurate (descriptive) model of the external world, it is impossible for an agent to efficiently and adaptively pursue its many (normative) goals and avoid their associated obstacles in a complex, dynamic environment. Here is why I believe this must *necessarily* be true: one cannot reliably select and navigate towards some desired “point B” (the “normative” computational problem) without having a sufficiently well-formed understanding of (1\*) where “point A” is, in environmental terms, (2\*) which of the many possible point Bs are actually plausible, (3) which of the plausible point Bs are preferable, (4\*) where the preferred point B is, in practical, implementable terms, (5\*) which of the many possible “routes” from A to B are actually plausible, and (6) which of the plausible “routes” from A to B are preferable. Of these six preconditions for normative action, four of them (denoted with an asterisk) unambiguously depend upon descriptive models of the agent’s environment. Therefore, on a purely theoretical level, descriptive cognition can be demonstrated to be what actually renders the hard normative problems of (3) and (6) tractable.  In the same vein, normative cognition enables and constrains descriptive cognition. This is because the only way to adjudicate the hard problem of *which models are actually worth the trouble of building* given highly finite time, energy, intelligence, and information is by appealing to the misleadingly simple answer, *the* *most relevant* *models*—that is, those models that most reliably facilitate pursuit of the most important goals (and avoidance of the most important obstacles), where “important” really means “important *given what I care about*” and where what one cares about is in turn determined by one’s constructed and constantly-evolving value system. So while it is certainly true that descriptive and normative cognition are tightly interrelated, these two broad domains of mind *are* indeed calibrated to different epistemologies—descriptive cognition, to something ultimately like “cross-model” predictive accuracy (e.g., “what do I believe?”); normative cognition, to something ultimately like “cross-goal” reward acquisition (e.g., “what do I care about?”). The “Dark Side of Cognition” Hypothesis --------------------------------------- An intriguing hypothesis, relevant to the future success of AI, emerges from this account. If descriptive and normative cognition are fundamentally computationally discrete, then it should follow that, *within any one mind*, (A) descriptive cognition would be technically incapable of mapping (i.e., building models of) *normative cognition itself*, and, (B) analogously, normative cognition would be technically incapable of evaluating (i.e., assigning a goal-directed value to) *descriptive cognition itself*. This is because all the evidence there is for the internal structure of the normative cognition (to be hypothetically modeled by descriptive cognition) could only ever be conceivably accessed “during” normative cognition itself (e.g., while introspecting during one’s own model-building), and so too for descriptive cognition.    Of particular relevance to the field of AI is (A), that descriptive cognition would be technically incapable of mapping (i.e., building models of) normative cognition itself. This is because, returning to our starting point, the unifying goal of AI research is to instantiate human-level cognition in a computational system, which seems to require a *descriptive* understanding of *all* cognition—descriptive and normative alike. But herein lies what I strongly believe to be the overriding oversight in current AI approaches: if (1) all cognition can be validly classified as either descriptive or normative, (2) what we descriptively know about the mind is either directly supplemented by or indirectly guided by our first-person experience of having (or being) minds, and (3) it is *technically impossible* in building a descriptive model of our own minds to map its normative parts, **then** **we should reasonably expect current approaches to AI omit, ignore, or discount normative cognition.** **I will call this the “Dark Side of Cognition Hypothesis,” or “DSCH” for short.** Examining the Hypothesis ------------------------ Is DSCH—the idea that up to this point, AI has largely ignored normative cognition—borne out by the available evidence? Let us attempt to answer this question using as our case study Lake and colleagues’ paper, *Building machines that learn and think like people*, which helpfully captures both the state of the field and its own researchers’ thoughts about its trajectory (Lake et al, 2017; hereafter, “L, 2017”).  After its introduction, the paper presents two modern challenges for AI, dubbed “the characters challenge,” which concerns accurate machine parsing and recognition of handwritten characters, and “the Frostbite challenge,” which refers to control problems related to the eponymous Atari game using a DQN (L, 2017). Then, the paper talks at length about the interesting prospect of embedding core concepts like number, space, physics, and psychology into AI in order to assist with what is referred to as the “model-building” process (explicitly contrasted against the notion of “pattern recognition”) (L, 2017). Finally, in “future directions,” the paper talks at length about the predictive power of deep learning and future prospects for further enhancing its capabilities (L, 2017).  As a dual index into the state of the field and the minds of its researchers, this paper offers both a sophisticated account of what we might now refer to as “artificial descriptive cognition” (particularly in its cogent emphasis on “model-building”) and a number of intriguing proposals for enhancing it in the future. However—and in spite of the paper itself quoting Minsky in saying “I draw no boundary between a theory of human thinking and a scheme for making an intelligent machine”—in its 23 total sections on the present and future of AI, the paper brings up topics related to “artificial normative cognition” in only three (and this is when counting generously) (L, 2017). Two of these invocations relate to DQNs, which, by the paper’s own characterization of the class of algorithms (“a *powerful* pattern recognizer...and a *simple* model-free reinforcement learning algorithm [emphasis added]”), still derive most of their power from *descriptive*, not normative, computation.  The third example comes from the paper’s discussion of using a partially observable MDP for instantiating theory of mind into AI (L, 2017). This example is particularly illustrative of the kind of oversight we might expect under an account like DSCH: the researchers seem to acknowledge that to fundamentally make sense of *other* minds, an agent should attempt to predict *their* goals and values using a POMDP (as if to say, “others’ goals and values are the most fundamental part of their minds”), and yet, in discussing how to *build* minds, the researchers all but ignore the instantiation of complex goals and values, instead opting to focus solely on descriptive questions of bringing about maximally competent model-building algorithms (L, 2017).  Though the Lake paper is just a single datapoint—and in spite of the ample credit the paper deserves for its genuinely interesting proposals to innovate “artificial descriptive cognition”—the paper nonetheless supports the account DSCH provides: our intuitive descriptive theories of how to model the mind, much to our collective embarrassment, omit the evaluative, socially-enabling processes that render us distinctly *human*. Needless to say, the paper uses the word “vision” eight times and does not mention olfaction (L, 2017). Human-level cognition features normative “value systems” that are equally complex—and as computationally relevant to what makes human-level cognition “human-level”—as more-familiar, descriptive “belief systems,” and yet most AI research seems to almost exclusively attend to the “algorithmization” of latter, as DSCH would predict. As understandable as this state of affairs may be, this oversight is not only stymying the progress of the multi-billion dollar field of AI research; it is also highly dangerous from the perspective of AI safety.  Normative Cognition as a Safety Mechanism ----------------------------------------- One of the more troubling and widely-discussed aspects of the advent of increasingly competent AI is that there is no guarantee that its behavior will be aligned with humanity’s values. While there have been numerous viable proposals for minimizing the likelihood of this kind of scenario, few involve positive projects (i.e., things to *do* rather than things to *avoid*) that straightforwardly overlap with current agendas in AI research. Devoting meaningful effort to the explicit construction of human-level normative cognition will simultaneously progress the field of AI and the adjacent mission of AI safety researchers: endowing AI systems with a value system (and means for updating it) designed in accordance with our own will vastly decrease the likelihood of catastrophic value-based misunderstandings between engineers and their algorithms.  It is important to note that there is a *reason* we trust humans more than a hypothetical superintelligence (and hence support human-in-the-loop-type proposals for ensuring AI alignment): virtually all humans have a certain kind of cognition that intuitively renders them trustworthy. They generally care about others, they want to avoid catastrophe, they can err on the side of caution, they have some degree of foresight, and so on. But this is because we expect them to *value* these things—and to competently map these values onto their behavior. If we understood normative cognition—the cognition that enables competent valuation—we could in theory build AI systems that we would trust not to accidentally upend civilization as much as (if not far more than) human engineers, systems with a genuine sense of duty, responsibility, and caution.  The ultimate danger of current AI approaches is that valueless and unvaluing systems are being constructed with the prayer that their behavior will happen to align with our values. This is sure to fail (or, at the very least, not succeed perfectly), especially as these systems become increasingly competent. An AGI without normative cognition would be one that we would immediately recognize as horrifyingly unbalanced: at once a genius map-maker, able to build highly complex models, and a highly foolish navigator, unable to *use* these models in a manner that we would deem productive—or safe. In order to build AGI whose values are aligned with our own, its intelligence must scale with its wisdom. The former, I believe, is descriptive in character; the latter, normative. Both are mutually necessary for avoiding catastrophe.       Consilience ----------- What, then, should be done to correct this asymmetry in AI between descriptive and normative cognitive modeling? We would imagine one obvious answer to be that the field should simply spend relatively less time on pattern recognition and model-building and relatively more time on developing and formalizing normative computations of value judgment, goal pursuit, social cognition, skill acquisition, olfaction, and the like, in accordance with the foundation already laid by current RL approaches. This, I believe, is highly necessary but not alone sufficient. The simple reason why is because AI researchers, for all their talents, are generally not experts in the complexities of human normative cognition—and this is not their fault. Understanding these processes has not, at least up to this point, a skill-set required to excel in the field.  However, such experts *do* exist, even if they do not self-identify as such: these are predominantly the scholars and thinkers of the humanities. Before, we reasoned that within any one mind, one cannot make descriptive sense of one’s own normative cognition given a fundamental epistemological gap between the two processes; humanities scholars cleverly innovate around this problem by distilling the content of normative cognition into an *external* narrative, philosophy, artwork, or other text, thereby enabling investigation into the underlying mechanics of its rich normative (value-based) content. In this way, normative cognition *has* been studied rigorously for millennia, just not under this idiosyncratic name. Once AI reaches the point in its near development where it will become necessary to confront questions about the implementation of higher-level goals, values, and motivations—especially in the social domain—I believe that the probability of the field’s success in instantiating human-level cognition (and doing so safely) will be proportional to its capacity to accommodate, synthesize, and ultimately “program in” the real and important insights of the humanities. Not only would this proposal for the inclusion of the humanities in the future trajectory of AI research increase the likelihood of the field’s success, but it would also enable a crucial bulwark against the possibility for profound ethical blunders that could more generally accompany the poorly understood integration of (potentially sentient, suffering-capable) minds into computational systems.  Generally speaking, the goal of computationally instantiating human-level cognition is surely the most ambitious, profound, and evolutionarily significant in the history of humankind. Such an accomplishment would be all but certain to dramatically radically alter the trajectory of everything we care about as a species, especially if one grants the possibility of an “intelligence explosion,” which most AI researchers in fact *do* (Good, 1966; Muller and Bostrom, 2016). Accordingly, the construction of a human-level cognitive system must not be considered an esoteric task for clever programmers, but rather as a profound responsibility of descriptively- and normatively-minded thinkers alike. In the absence of multidisciplinary collaboration on this grand project, it is overwhelmingly likely that some critical feature (or, as the DSCH posits, an entire domain) of our minds that renders them *truly human* will be omitted, ignored, underestimated, or never considered in the first place, the consequences of which we will be all too human to fully understand and from which we may never have the opportunity to recover. The stakes are high, and it is incumbent on researchers and thinkers of all backgrounds and persuasions to get the initial conditions right.      Works Cited Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. *Journal of Personality and Social Psychology*, *71*(2), 230–244.<https://doi.org/10.1037/0022-3514.71.2.230> Bargh, J. A., Gollwitzer, P. M., Lee-Chai, A., Barndollar, K., & Trötschel, R. (2001). The automated will: Nonconscious activation and pursuit of behavioral goals. *Journal of Personality and Social Psychology*, *81*(6), 1014–1027.<https://doi.org/10.1037/0022-3514.81.6.1014> Berg, C. (2021). *Hierarchies of Motivation Predict Individuals’ Attitudes and Values: A Neuropsychological Operationalization of the Five Factor Model*. PsyArXiv.<https://doi.org/10.31234/osf.io/wk6tx> Cohon, R. (2018). Hume’s Moral Philosophy. In E. N. Zalta (Ed.), *The Stanford Encyclopedia of Philosophy* (Fall 2018). Metaphysics Research Lab, Stanford University.<https://plato.stanford.edu/archives/fall2018/entries/hume-moral/> Dijksterhuis, A., & Bargh, J. A. (2001). The perception–behavior expressway: Automatic effects of social perception on social behavior. In *Advances in experimental social psychology, Vol. 33* (pp. 1–40). Academic Press. Ferreira, F., & Tanenhaus, M. K. (2007). Introduction to the special issue on language–vision interactions. *Journal of Memory and Language*, *57*(4), 455–459.<https://doi.org/10.1016/j.jml.2007.08.002> Flegal, K. E., & Anderson, M. C. (2008). Overthinking skilled motor performance: Or why those who teach can’t do. *Psychonomic Bulletin & Review*, *15*(5), 927–932.<https://doi.org/10.3758/PBR.15.5.927> Good, I. J. (1966). Speculations Concerning the First Ultraintelligent Machine. In *Advances in Computers* (Vol. 6, pp. 31–88). Elsevier.<https://doi.org/10.1016/S0065-2458(08)60418-0> *Good—1966—Speculations Concerning the First Ultraintelligent.pdf*. (n.d.). Retrieved May 15, 2021, from<https://asset-pdf.scinapse.io/prod/1586718744/1586718744.pdf> Harris, J. L., Bargh, J. A., & Brownell, K. D. (2009). Priming effects of television food advertising on eating behavior. *Health Psychology*, *28*(4), 404–413.<https://doi.org/10.1037/a0014399> Huettig, F., Guerra, E., & Helo, A. (n.d.). Towards Understanding the Task Dependency of Embodied Language Processing: The Influence of Colour During Language-Vision Interactions. *Journal of Cognition*, *3*(1).<https://doi.org/10.5334/joc.135> Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2016). Building Machines That Learn and Think Like People. *ArXiv:1604.00289 [Cs, Stat]*.<http://arxiv.org/abs/1604.00289> *Lawson et al. - 2017—Adults with autism overestimate the volatility of .pdf*. (n.d.). Retrieved May 15, 2021, from<https://www.nature.com/articles/nn.4615.pdf?origin=ppub> Lawson, R. P., Mathys, C., & Rees, G. (2017). Adults with autism overestimate the volatility of the sensory environment. *Nature Neuroscience*, *20*(9), 1293–1299.<https://doi.org/10.1038/nn.4615> Lord, C., Risi, S., Lambrecht, L., Cook, E. H., Leventhal, B. L., DiLavore, P. C., Pickles, A., & Rutter, M. (n.d.). *The Autism Diagnostic Observation Schedule–Generic: A Standard Measure of Social and Communication Deficits Associated with the Spectrum of Autism*. 19. *Lord et al. - The Autism Diagnostic Observation Schedule–Generic.pdf*. (n.d.). Retrieved May 15, 2021, from<https://link.springer.com/content/pdf/10.1023/A:1005592401947.pdf> Miller, L. K. (1999). The Savant Syndrome: Intellectual impairment and exceptional skill. *Psychological Bulletin*, *125*(1), 31–46.<https://doi.org/10.1037/0033-2909.125.1.31> Minsky, M. (1961). Steps toward Artificial Intelligence. *Proceedings of the IRE*, *49*(1), 8–30.<https://doi.org/10.1109/JRPROC.1961.287775> Minsky, M. (1988). *Society Of Mind*. Simon and Schuster. Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In V. C. Müller (Ed.), *Fundamental Issues of Artificial Intelligence* (pp. 555–572). Springer International Publishing.<https://doi.org/10.1007/978-3-319-26485-1_33> Olofsson, J. K., & Gottfried, J. A. (2015). The muted sense: Neurocognitive limitations of olfactory language. *Trends in Cognitive Sciences*, *19*(6), 314–321.<https://doi.org/10.1016/j.tics.2015.04.007> *Perl et al. - Are humans constantly but subconsciously smelling .pdf*. (n.d.). Retrieved May 14, 2021, from<https://royalsocietypublishing.org/doi/pdf/10.1098/rstb.2019.0372> Perl, O., Mishor, E., Ravia, A., Ravreby, I., & Sobel, N. (n.d.). *Are humans constantly but subconsciously smelling themselves?* 13. Pylyshyn, X. (n.d.). *Computation and Cognition | The MIT Press*. The MIT Press. Retrieved May 5, 2021, from<https://mitpress.mit.edu/books/computation-and-cognition> Stuss, D. T., & Knight, R. T. (2013). *Principles of Frontal Lobe Function*. OUP USA. Tranel, D., Damasio, A. R., Damasio, H., & Brandt, J. P. (1994). Sensorimotor skill learning in amnesia: Additional evidence for the neural basis of nondeclarative memory. *Learning & Memory*, *1*(3), 165–179.<https://doi.org/10.1101/lm.1.3.165> Wilson, E. O. (1999). *Consilience: The Unity of Knowledge*. Vintage Books. Yong, E. (2015, November 6). *Why Do Most Languages Have So Few Words for Smells?* The Atlantic.<https://www.theatlantic.com/science/archive/2015/11/the-vocabulary-of-smell/414618/>
6e79b1f3-e9c3-4574-a78c-30e36eb96181
trentmkelly/LessWrong-43k
LessWrong
Software for Critical Thinking, Prof. Geoff Cumming Prof. Geoff Cumming has done some interesting work.  Of particular relevance to the LW community, he has studied software for enhancing critical thinking.   > My past research: I worked on Computer tools for enhancing critical thinking, with Tim van Gelder. We studied argument mapping, and Tim’s wonderful Reason!Able software for critical thinking. This has proved very effective in university and school classrooms as the basis for effective enhancement of critical thinking. In an ARC-funded project we evaluated the software and Tim’s related educational materials. We found evidence that a one semester critical thinking course, based on Reason!Able, gives a very substantial increase—considerably greater than reported in previous evaluations of critical thinking courses—in performance on standardised tests. > > Tim’s software has been further developed by his company Austhink Software, and is now available commercially as Rationale and bCisive: both are fabulous! http://www.austhink.org/ http://bcisive.austhink.com/
1011438a-2ee2-4f44-a01a-d6fc92fffc16
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"I have watched more than one conversation—even conversations supposedly about cognitive science—go the route of disputing over definitions. Taking the classic example to be "If a tree falls in a forest, and no one hears it, does it make a sound?", the dispute often follows a course like this: If a tree falls in the forest, and no one hears it, does it make a sound? Albert: "Of course it does. What kind of silly question is that? Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds. I don't believe the world changes around when I'm not looking." Barry: "Wait a minute. If no one hears it, how can it be a sound?" In this example, Barry is arguing with Albert because of a genuinely different intuition about what constitutes a sound. But there's more than one way the Standard Dispute can start. Barry could have a motive for rejecting Albert's conclusion. Or Barry could be a skeptic who, upon hearing Albert's argument, reflexively scrutinized it for possible logical flaws; and then, on finding a counterargument, automatically accepted it without applying a second layer of search for a counter-counterargument; thereby arguing himself into the opposite position. This doesn't require that Barry's prior intuition—the intuition Barry would have had, if we'd asked him before Albert spoke—have differed from Albert's. Well, if Barry didn't have a differing intuition before, he sure has one now. Albert: "What do you mean, there's no sound? The tree's roots snap, the trunk comes crashing down and hits the ground. This generates vibrations that travel through the ground and the air. That's where the energy of the fall goes, into heat and sound. Are you saying that if people leave the forest, the tree violates conservation of energy?" Barry: "But no one hears anything. If there are no humans in the forest, or, for the sake of argument, anything else with a complex nervous system capable of 'hearing', then no one hears a sound." Albert and Barry recruit arguments that feel like support for their respective positions, describing in more detail the thoughts that caused their "sound"-detectors to fire or stay silent. But so far the conversation has still focused on the forest, rather than definitions. And note that they don't actually disagree on anything that happens in the forest. Albert: "This is the dumbest argument I've ever been in. You're a niddlewicking fallumphing pickleplumber." Barry: "Yeah? Well, you look like your face caught on fire and someone put it out with a shovel." Insult has been proffered and accepted; now neither party can back down without losing face. Technically, this isn't part of the argument, as rationalists account such things; but it's such an important part of the Standard Dispute that I'm including it anyway. Albert: "The tree produces acoustic vibrations. By definition, that is a sound." Barry: "No one hears anything. By definition, that is not a sound." The argument starts shifting to focus on definitions. Whenever you feel tempted to say the words "by definition" in an argument that is not literally about pure mathematics, remember that anything which is true "by definition" is true in all possible worlds, and so observing its truth can never constrain which world you live in. Albert: "My computer's microphone can record a sound without anyone being around to hear it, store it as a file, and it's called a 'sound file'. And what's stored in the file is the pattern of vibrations in air, not the pattern of neural firings in anyone's brain. 'Sound' means a pattern of vibrations." Albert deploys an argument that feels like support for the word "sound" having a particular meaning. This is a different kind of question from whether acoustic vibrations take place in a forest—but the shift usually passes unnoticed. Barry: "Oh, yeah? Let's just see if the dictionary agrees with you." There's a lot of things I could be curious about in the falling-tree scenario. I could go into the forest and look at trees, or learn how to derive the wave equation for changes of air pressure, or examine the anatomy of an ear, or study the neuroanatomy of the auditory cortex. Instead of doing any of these things, I am to consult a dictionary, apparently. Why? Are the editors of the dictionary expert botanists, expert physicists, expert neuroscientists? Looking in an encyclopedia might make sense, but why a dictionary? Albert: "Hah! Definition 2c in Merriam-Webster: 'Sound: Mechanical radiant energy that is transmitted by longitudinal pressure waves in a material medium (as air).'" Barry: "Hah! Definition 2b in Merriam-Webster: 'Sound: The sensation perceived by the sense of hearing.'" Albert and Barry, chorus: "Consarned dictionary! This doesn't help at all!" Dictionary editors are historians of usage, not legislators of language. Dictionary editors find words in current usage, then write down the words next to (a small part of) what people seem to mean by them. If there's more than one usage, the editors write down more than one definition. Albert: "Look, suppose that I left a microphone in the forest and recorded the pattern of the acoustic vibrations of the tree falling. If I played that back to someone, they'd call it a 'sound'! That's the common usage! Don't go around making up your own wacky definitions!" Barry: "One, I can define a word any way I like so long as I use it consistently. Two, the meaning I gave was in the dictionary. Three, who gave you the right to decide what is or isn't common usage?" There's quite a lot of rationality errors in the Standard Dispute. Some of them I've already covered, and some of them I've yet to cover; likewise the remedies. But for now, I would just like to point out—in a mournful sort of way—that Albert and Barry seem to agree on virtually every question of what is actually going on inside the forest, and yet it doesn't seem to generate any feeling of agreement. Arguing about definitions is a garden path; people wouldn't go down the path if they saw at the outset where it led. If you asked Albert (Barry) why he's still arguing, he'd probably say something like: "Barry (Albert) is trying to sneak in his own definition of 'sound', the scurvey scoundrel, to support his ridiculous point; and I'm here to defend the standard definition." But suppose I went back in time to before the start of the argument: (Eliezer appears from nowhere in a peculiar conveyance that looks just like the time machine from the original 'The Time Machine' movie.) Barry: "Gosh! A time traveler!" Eliezer: "I am a traveler from the future! Hear my words! I have traveled far into the past—around fifteen minutes—" Albert: "Fifteen minutes?" Eliezer: "—to bring you this message!" (There is a pause of mixed confusion and expectancy.) Eliezer: "Do you think that 'sound' should be defined to require both acoustic vibrations (pressure waves in air) and also auditory experiences (someone to listen to the sound), or should 'sound' be defined as meaning only acoustic vibrations, or only auditory experience?" Barry: "You went back in time to ask us that?" Eliezer: "My purposes are my own! Answer!" Albert: "Well... I don't see why it would matter. You can pick any definition so long as you use it consistently." Barry: "Flip a coin. Er, flip a coin twice." Eliezer: "Personally I'd say that if the issue arises, both sides should switch to describing the event in unambiguous lower-level constituents, like acoustic vibrations or auditory experiences. Or each side could designate a new word, like 'alberzle' and 'bargulum', to use for what they respectively used to call 'sound'; and then both sides could use the new words consistently. That way neither side has to back down or lose face, but they can still communicate. And of course you should try to keep track, at all times, of some testable proposition that the argument is actually about. Does that sound right to you?" Albert: "I guess..." Barry: "Why are we talking about this?" Eliezer: "To preserve your friendship against a contingency you will, now, never know. For the future has already changed!" (Eliezer and the machine vanish in a puff of smoke.) Barry: "Where were we again?" Albert: "Oh, yeah: If a tree falls in the forest, and no one hears it, does it make a sound?" Barry: "It makes an alberzle but not a bargulum. What's the next question?" This remedy doesn't destroy every dispute over categorizations. But it destroys a substantial fraction." A user from LessWrong is responsible for the text you see above Expand on the concept of "alberzle" and "bargulum" as new words to replace "sound". How would this affect our understanding and communication about acoustic phenomena? The concept of "alberzle" and "bargulum" as new words to replace "sound" is an interesting proposition. This suggestion comes as a solution to the argument between Albert and Barry about the definition of "sound". If we were to adopt these new terms, it would necessitate a reorientation of our understanding and communication about acoustic phenomena. Let's assume that "alberzle" is defined as the physical phenomenon of acoustic vibrations or pressure waves in a medium like air, irrespective of whether there is a receiver to perceive it. On the other hand, "bargulum" could be defined as the auditory experience, the perception of these vibrations by a receiver, such as a human ear. By making this distinction, we would essentially be separating the objective, physical aspect of sound (alberzle) from the subjective, perceptual aspect of sound (bargulum). This could have several implications: 1. Clarity in Scientific Communication: In scientific contexts, this could bring more precision and clarity. For instance, when discussing the properties of sound waves, "alberzle" would be the appropriate term. In contrast, when discussing auditory perception or hearing, "bargulum" would be more fitting. 2. Nuanced Understanding of Acoustic Phenomena: This distinction could lead to a more nuanced understanding of acoustic phenomena. It would make it clear that the physical production of sound waves and the perception of sound are two different aspects that don't necessarily have to occur together. 3. Implications for Philosophy and Cognitive Science: This could also have implications for philosophical discussions about perception and reality, as well as for cognitive science. It would highlight the difference between an external event and its internal perception, reminding us that our perception is not always a perfect reflection of reality. 4. Changes in Everyday Communication: In everyday communication, this might initially cause some confusion, as people would need to adjust to the new terminology. However, once the terms are understood, they could potentially lead to more precise communication about sound-related phenomena. In conclusion, introducing "alberzle" and "bargulum" as replacements for "sound" could bring more precision and nuance to our understanding and communication about acoustic phenomena, by clearly distinguishing between the physical production of sound and its perception. However, it would also require a significant shift in our current use of language, both in scientific contexts and everyday communication. Considering the potential implications of introducing "alberzle" and "bargulum" as replacements for "sound", how might these changes affect the fields of acoustics, music, and audio engineering specifically? The introduction of "alberzle" and "bargulum" as replacements for "sound" could indeed have significant effects on the fields of acoustics, music, and audio engineering. Here's how: 1. Acoustics: Acoustics is the science that deals with the study of sound (in this case, "alberzle"). By differentiating between the physical phenomenon and the perception, it could allow for more precise communication and research. For example, when discussing sound insulation, the term "alberzle" would be used to denote the physical sound waves being blocked, while "bargulum" could be used to denote the perceived reduction in sound. 2. Music: In music, the distinction could lead to a deeper understanding and exploration of the relationship between the physical properties of sound (alberzle) and the way they are perceived (bargulum). This could influence the way music is composed, performed, and analyzed. For instance, a composer might manipulate "alberzle" properties to create specific "bargulum" experiences for the listener. 3. Audio Engineering: For audio engineers, this distinction could be particularly useful. When mixing or mastering a track, they manipulate the "alberzle" (sound waves) to create a desired "bargulum" (listening experience). Having separate terms could help in more accurately communicating and achieving their goals. However, it's important to note that while these new terms could provide more precision, they would also require a substantial shift in terminology and understanding within these fields. Professionals, researchers, and students would need to adapt to these changes, which could take time and effort. Furthermore, the effectiveness of these terms would depend on their consistent and accurate usage across the field.
21829624-3e2b-4cfa-962e-083826da3907
trentmkelly/LessWrong-43k
LessWrong
Why we need better science, example #6,281 Avorn (2004) reports: > In a former British colony, most healers believed the conventional wisdom that a distillation of fluids extracted from the urine of horses, if dried to a powder and fed to aging women, could act as a general tonic, preserve youth, and ward of a variety of diseases. The preparation became enormously popular throughout the culture, and was widely used by older women in all strata of society. Many years later modern scientific studies revealed that long-term ingestion of the horse-urine extract was useless for most of its intended purposes, and that it causes tumors, blood clots, heart disease, and perhaps brain damage. > > The former colony is the United States; the time is now; the drug is the family of hormone replacement products that include Prempro and Premarin (manufactured from pregnant mares' urine, hence its name). For decades, estrogen replacement in postmenopausal women was widely believed to have "cardio-protective" properties; other papers in respected medical journals reported that the drugs could treat depression and incontinence, as well as prevent Alzheimer's disease. The first large, well-conducted, controlled clinical trial of this treatment in women was not published until 1998: it found that estrogen replacement actually increased the rate of heart attacks in the patients studied. Another clinical trial published in 2002 presented further evidence that these products increased the risk of heart disease, stroke, and cancer. Further reports a year later found that rather than preventing Alzheimer's disease, the drugs appeared to double the risk of becoming senile.  Armstrong (2006) adds: > The treatment seemed to work because those who used the drug tended to be healthier than those who did not. This was because it was used by people who were more interested in taking care of their health.
9d06551c-431a-4eb1-94c1-5361fe84aee4
trentmkelly/LessWrong-43k
LessWrong
Diana Fleischman and Geoffrey Miller - Audience Q&A Cross-posted from Putanumonit. ---------------------------------------- This is the audience Q&A with Diana Fleischman and Geoffrey Miller at the NYC Rationality meetup, following up on my own interview which you can find here. Content note: the audience comprised rationalists of many ethnicities, orientations, and gender expressions and we asked questions that could offend many ethnicities, orientations, and gender expressions. What are the main hypothesized causes of homosexuality? Diana: There’s a difference between homosexual behavior and homosexual orientation. Homosexual orientation is very rare. There’s one species, domestic sheep, in which 8-10% of rams are not interested in ewes at all. You can tie a ewe in heat in front of them and they don’t react at all. Actually, one area where homosexuality research has flourished is among sheep breeders because if you buy one of these rams who’s gay, that’s really bad news for the business. So homosexual orientation is exceedingly rare. Even though you see stats that it’s 10% in people, it’s about 3%. In a paper that I wrote I claim that bisexuality is the optimal sexual strategy because sex is not just used for reproduction, it’s also used for affiliation. There are a number of ways to affiliate: you can give somebody food or you can give somebody an orgasm. These are ways to get other people to like you. If you’re somewhat attracted to people of the same sex but not enough to forego reproductive opportunities with people of the opposite sex, then you can actually engage with both sexes. Is the bisexual revolution coming? Diana: There are places where people are much more open to it, but not many places. It’s a whole spectrum of behavior. In places around the world, there are men who have anything from affectionate to sexual interactions with other men and they’re not considered gay. They have homosexual behavior along with heterosexual behavior, and that’s a common thing. If you look at the bell curve of
6507b96a-65a6-4e85-9127-9beb379633ce
trentmkelly/LessWrong-43k
LessWrong
Lies on the internet After a visit to the dentist, you go home and search up "numbers read off during dental exam." What were they measuring? You think.  A website that looks like a legitimate dental company tells you these numbers are measurements of gum disease. When I saw this, alarm bells went off. First of all, I'm sure that the numbers corresponded to something empirical. The idea of a dentist going through my teeth and marking each one "gingivitis, mild periodontitis, moderate periodontitis, advanced periodontitis" seems absolutely absurd. Secondly, the website certainly seemed like it was trying to sell me something. Get treated at our clinic!  However, I was still not quite ready to dismiss the article. Surely, a credible-looking website from a credible-looking dental institution can't be publishing complete junk!  I looked at some other websites of other credible-looking dental institutions, many of whom directly contracted this article, saying that 1-3 are normal (and that they are mm measurements of distances between your gum and your teeth). The webpage that made the most sense to me was this one. It included some nice diagrams. But even this page seemed less than impartial! It was yet another dentist's office. Why is it so hard to find good dental information on the internet?    Edit: It turns out that the numbers are not measurements of distances between your gum and your teeth, but rather the depth of the pockets between your gum and your teeth. Neat. 
7d6cda9c-283d-43d3-9522-b0a2457cd7a1
trentmkelly/LessWrong-43k
LessWrong
Act into Uncertainty It’s only been recently that I’ve been thinking about epistemics in the context of figuring out my behavior and debiasing. Aside from trying to figure out how I actually behave (as opposed to what I merely profess I believe), I’ve been thinking about how to confront uncertainty—and what it feels like.   For many areas of life, I think we shy away from confronting uncertainty and instead flee into the comforting non-falsifiability of vagueness. Consider these examples: 1) You want to get things done today. You know that writing things down can help you finish more things. However, it feels aversive to write down what you specifically want to do. So instead, you don’t write things down and instead just keep a hazy notion of “I will do things today”. 2) You try to make a confidence interval for a prediction where money is on the line. You notice yourself feeling uncomfortable, no matter what your bounds are; it feels bad to set down any number at all, which is accompanied by a dread feeling of finality. 3) You’re trying to find solutions to a complex, entangled problem. Coming up with specific solutions feels bad because none of them seem to completely solve the problem. So instead you decide to create a meta-framework that produces solutions, or argue in favor of some abstract process like a “democratized system that focuses on holistic workarounds”. In each of the above examples, it feels like we move away from making specific claims because that opens us up to specific criticism. But instead of trying to improve the strengths of specific claims, we retreat to fuzzily-defined notions that allow us to incorporate any criticism without having to really update. I think there’s a sense in which, in some areas of life, we’re embracing shoddy epistemology (e.g. not wanting to validate or falsify our beliefs) because of a fear of wanting to fail / put in the effort to update. I think this failure is what fuels this feeling of aversion. It seems
9e94adf6-729c-476a-89ec-e29ebc89ced5
trentmkelly/LessWrong-43k
LessWrong
Which headlines and narratives are mostly clickbait? Or: What do superforecasters tend to think of the topics that get tweets and airtime these days? The following questions get put into my attention (consensually and otherwise) and I am genuinely uncertain about the answers.   * Is Trump really gearing up for a fascist takeover? Is fascism truly resurgent? * Does the elite left really endorse looting? * If Biden doesn't win in a landslide, will this really cause a national crisis? * Are we really seeing an incipient purge of non-leftwing thinkers from academia? Just how much of the putative "Great Awokening" is an illusion due to (social) media hype? * Have Silicon Valley and the mainstream media really gone to "war"? What will be the course of this war, and how will it end? Will I be much affected if I am not an investor nor a news consumer? * Is California really about to be swallowed by leftwing politics and lose all its golden geese to Texas, New Hampshire, or wherever? * Are regular Americans being driven into poverty en masse, soon to all be wage slaves to a technocratic elite? * Will higher education be drastically affected by the coming era of Work From Home? * How ought one view statements from the CDC henceforth? How much did its behavior and credibility in 2020 differ from other years or other administrations? * Are we going to see massive migrations out of cities? A world of "red zones and green zones" where large gatherings become a way for societies to flex their public health? * Is the general issue of misinformation (including but not limited to conspiracy theories and fake news) really much worse now than in decades past? By what metrics? * How much divisive mental malware have entities like the Internet Research Agency really managed to get into our brains? Are countries other than Russia getting in on the action? * Is China really running a global surveillance operation by compromising people's smartphones via TikTok? Are they really going to match the US in influence in the next 20 y
062d3767-e621-4aea-b890-81d5f9bb9967
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research] *\* Authors sorted alphabetically.* Summary: This post introduces causal scrubbing, a principled approach for evaluating the quality of mechanistic interpretations. The key idea behind causal scrubbing is to test interpretability hypotheses via *behavior-preserving resampling ablations*. We apply this method to develop a refined understanding of how a small language model implements induction and how an algorithmic model correctly classifies if a sequence of parentheses is balanced. 1 Introduction ============== A question that all mechanistic interpretability work must answer is, “how well does this interpretation explain the phenomenon being studied?”. In the [many](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) [recent](https://rome.baulab.info/) [papers](https://arxiv.org/abs/2211.00593) [in mechanistic interpretability](https://openreview.net/forum?id=9XFSbDPmdW), researchers have generally relied on ad-hoc methods to evaluate the quality of interpretations.[[1]](#fndtq6l2laqcp) This *ad hoc* nature of existing evaluation methods poses a serious challenge for scaling up mechanistic interpretability. Currently, to evaluate the quality of a particular research result, we need to deeply understand both the interpretation and the phenomenon being explained, and then apply researcher judgment. Ideally, we’d like to find the interpretability equivalent of [property-based testing](https://en.wikipedia.org/wiki/Software_testing%23Property_testing)—automatically checking the correctness of interpretations, instead of relying on grit and researcher judgment. More systematic procedures would also help us scale-up interpretability efforts to larger models, behaviors with subtler effects, and to larger teams of researchers. To help with these efforts, we want a procedure that is both powerful enough to finely distinguish better interpretations from worse ones, and general enough to be applied to complex interpretations. In this work, we propose **causal scrubbing**, a systematic ablation method for testing precisely stated hypotheses about how a particular neural network[[2]](#fnbwu0kfb3tw) implements a behavior on a dataset. Specifically, given an informal hypothesis about which parts of a model implement the intermediate calculations required for a behavior, we convert this to a formal correspondence between a computational graph for the model and a human-interpretable computational graph. Then, causal scrubbing starts from the output and recursively finds all of the invariances of parts of the neural network that are implied by the hypothesis, and then replaces the activations of the neural network with the *maximum entropy*[[3]](#fng10ehlzhmhl) distribution subject to certain natural constraints implied by the hypothesis and the data distribution. We then measure how well the scrubbed model implements the specific behavior.[[4]](#fnmcxlqny6d9c) Insofar as the hypothesis explains the behavior on the dataset, the model’s performance should be unchanged. Unlike previous approaches that were specific to particular applications, causal scrubbing aims to work on a large class of interpretability hypotheses, including almost all hypotheses interpretability researchers propose in practice (that we’re aware of). Because the tests proposed by causal scrubbing are mechanically derived from the proposed hypothesis, causal scrubbing can be incorporated “in the inner loop” of interpretability research. For example, starting from a hypothesis that makes very broad claims about how the model works and thus is consistent with the model’s behavior on the data, we can iteratively make hypotheses that make more specific claims while monitoring how well the new hypotheses explain model behavior. We demonstrate two applications of this approach in later posts: first on a parenthesis balancer checker, then on the induction heads in a two-layer attention-only language model. We see our contributions as the following: 1. We formalize a notion of interpretability hypotheses that can represent a large, natural class of mechanistic interpretations; 2. We propose an algorithm, *causal scrubbing*, that tests hypotheses by systematically replacing activations in all ways that the hypothesis implies should not affect performance. 3. We demonstrate the practical value of this approach by using it to investigate two interpretability hypotheses for small transformers trained in different domains. This is the main post in a four post sequence, and covers the most important content: * What is causal scrubbing? Why do we think it’s more principled than other methods? (sections 2-4) * A summary of our results from applying causal scrubbing (section 5) * Discussion: Applications, Limitations, Future work (sections 6 and 7). In addition, there are three posts with information of less general interest. [The first](https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix) is a series of appendices to the content of this post. Then, a pair of posts covers the details of what we discovered applying causal scrubbing to [a paren-balance checker](https://www.alignmentforum.org/posts/kjudfaQazMmC74SbF/causal-scrubbing-on-a-paren-balancer-checker-part-3-of-5) and [induction in a small language model](https://www.alignmentforum.org/posts/j6s9H9SHrEhEfuJnq/causal-scrubbing-on-induction-heads-part-4-of-5).[[5]](#fnkbywfujnhm) They are collected in a sequence [here](https://www.lesswrong.com/s/h95ayYYwMebGEYN5y). 1.1 Related work ---------------- **Ablations for Model Interpretability:** One commonly used technique in mechanistic interpretability is the “ablate, then measure” approach. Specifically, for interpretations that aim to explain why the model achieves low loss, it’s standard to remove parts that the interpretation identifies as important and check that model performance suffers, or to remove unimportant parts and check that model performance is unaffected. For example, in [Nanda and Lieberum’s Grokking](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking) work, to verify the claim that the model uses certain key frequencies to compute the correct answer to modular addition questions, the authors confirm that zero ablating the key frequencies greatly increases loss, while zero ablating random other frequencies has no effect on loss. In [Anthropic’s Induction Head paper](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html), they remove the induction heads and observe that this reduces the ability of models to perform in-context learning. In the [IOI mechanistic interpretability project,](https://arxiv.org/abs/2211.00593) the authors define the behavior of a transformer subcircuit by mean-ablating everything except the nodes from the circuit. This is used to formulate criteria for validating that the proposed circuit preserves the behavior they investigate and includes all the redundant nodes performing a similar role. Causal scrubbing can be thought of as a generalized form of the “ablate, then measure” methodology.[[6]](#fnjnva7stn48) However, unlike the standard zero and mean ablations, we ablate modules by resampling activations from *other* inputs (which we’ll justify in the next post). In this work, we also apply causal scrubbing to more precisely measure different mechanisms of induction head behavior than in the Anthropic paper. **Causal Tracing:** Like causal tracing, causal scrubbing identifies computations by patching activations. However, causal tracing aims to *identify* a specific path (“trace”) that contributes causally to a particular behavior by corrupting all nodes in the neural network with noise and then iteratively denoising nodes. In contrast, causal scrubbing tries to solve a different problem: systematically *testing* hypotheses about the behavior of a whole network by removing (“scrubbing away”) everycausal relationship that should not matter according to the hypothesis being evaluated. In addition, causal tracing patches with (homoscedastic) Gaussian noise and not with the activations of other samples. Not only does this take your model off distribution, it might have no effect in cases where the scale of the activation is much larger than the scale of the noise. **Heuristic explanations:** This work takes a perspective on interpretability that is strongly influenced by [ARC](https://alignment.org/)’s [work on “heuristic explanations” of model behavior](https://arxiv.org/abs/2211.06738). In particular, causal scrubbing can be thought of as a form of [defeasible reasoning](https://en.wikipedia.org/wiki/Defeasible_reasoning): unlike mathematical proofs (where if you have a proof for a proposition P, you’ll never see a better proof for the negation of P that causes you to overall believe P is false), we expect that in the context of interpretability, we need to accept arguments that might be overturned by future arguments. 2 Setup ======= We assume a dataset D.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}  over a domain X and a function f:X→R which captures a behavior of interest.  We will then explain the expectation of this function on our dataset, Ex∼D[f(x)]. This allows us to explain behaviors of the form “a particular model M gets low loss on a distribution D.” To represent this we include the labels in D and both the model and a loss function in f: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675735209/mirroredImages/JvZhhzycHu2Yd57RN/ozcrzijcx4t90aegmsdm.png)We also want to explain behaviors such as “if the prompt contains some bigram `AB` and ends with the token `A`, then the model is likely to predict `B` follows next.” We can do this by choosing a dataset D where each datum has the prompt `...AB...A` and expected completion `B`. For instance: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675735209/mirroredImages/JvZhhzycHu2Yd57RN/b6iqm6ftw0so76s2iqhz.png)We then propose a hypothesis about how this behavior is implemented. Formally, a *hypothesis*h=(G,I,c) for f is a tuple of three things: * A computational graph G[[7]](#fn15leextqkxc), which implements the function f * We require G to be [*extensionally equal*](https://en.wikipedia.org/wiki/Extensionality) to f (equal on *all* of X) * A computational graph I, intuitively an ‘interpretation’ of the model. * A correspondence function c from the nodes of I to the nodes of G. * We require c to be an injective [graph homomorphism](https://en.wikipedia.org/wiki/Graph_homomorphism): that is, if there is an edge (u,v) in I then the edge (c(u),c(v)) must exist in G. We additionally require I and G to each have a single input and output node, where c maps input to input and output to output. All input nodes are of type X which allows us to evaluate both G and I on all of X . Here is an example hypothesis: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675735209/mirroredImages/JvZhhzycHu2Yd57RN/ieqcudkxbkejrhbc4fky.png)In this figure, we hypothesize that G works by having A compute whether z1>3, B compute whether z2>3, and then ORing those values. Then we’re asserting that the behavior is explained by the relationship between D and the true label y. A couple of important things to notice: * We will often rewrite the computational graph of the original model implementation into a more convenient form (for instance splitting up a sum into terms, or grouping together several computations into one). * You can think of I as a heuristic[[8]](#fn8a8oox8gv1v) that the hypothesis claims that the model uses to achieve the behavior. It’s possible that the heuristic is imperfect and will sometimes disagree with the label y. In that case our hypothesis would claim that the model should be incorrect on these inputs. * Note that the mapping c doesn’t tell you how to translate a value of I into an activation, only which nodes correspond. * We will call c(I) the “important nodes” of G.[[9]](#fnjmkiyi6nzfr) + Let nI, nG be nodes in I and G respectively such that c(nI)=nG. - Intuitively this is a claim that when we evaluate both G and I on the same input, then the value of nG (usually an activation of the model) ‘represents’ the value of nI (usually a simple feature of the input). - The causal scrubbing algorithm will test a weaker claim: that the equivalence classes on inputs to nI are the same as the equivalence classes on inputs to nG. We think this is sufficient to meaningfully test the mechanistic interpretability hypotheses we are interested in, although it is not strong enough to eliminate all incorrect hypotheses. * Among other things, the hypothesis claims that nodes of G that are not mapped to by c are unimportant for the behavior under investigation.[[10]](#fn4adke6b8dba) Hypotheses are covered in more detail in [the appendix](https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#1_More_on_Hypotheses).  3 Causal Scrubbing ================== In this section we provide two different explanations of causal scrubbing: 1. [An informal description](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-redwood-research#Intuitive_Algorithm) of the activation-replacements that a hypothesis implies are valid. We try to provide a helpful introduction to the core idea of causal scrubbing via many diagrams; and 2. [The causal scrubbing algorithm and pseudocode](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-redwood-research#Pseudocode) Different readers of this document have found different explanations to be helpful, so we encourage you to skip around or skim some sections. Our goal will be to define a metric Escrubbed(h,D) by recursively sampling activations that should be equivalent according to each node of the interpretation I. We then compare this value to Ed∼D[f(d)]. If a hypothesis is (reasonably) accurate, then the activation replacements we perform should not alter the loss and so we’d have Escrubbed(h,D)≈Ed∈Df(d). Overall, we think that this difference will be a reasonable proxy for the [*faithfulness*](https://arxiv.org/abs/2004.03685) of the hypothesis—that is, how accurately the hypothesis corresponds to the “real reasons” behind the model behavior.[[11]](#fn42orbovkrwm) 3.1 An informal description: What activation replacements does a hypothesis imply are valid? -------------------------------------------------------------------------------------------- Consider a hypothesis h=(G,I,c) on the graphs below, where c maps to the corresponding nodes of G highlighted in green: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675735210/mirroredImages/JvZhhzycHu2Yd57RN/wc47hhabb1up51n5np1n.png)This hypothesis claims that the activations A and B respectively represent checking whether the first and second component of the input is greater than 3. Then the activation D represents checking whether either of these conditions were true. Both the third component of the input and the activation of C are unimportant (at least for the behavior we are explaining, the log loss with respect to the label y). If this hypothesis is true, we should be able to perform two types of ‘resampling ablations’: * replacing the activations of A, B, and D with the activations on other inputs that are “equivalent” under I; and * replacing the activations that are claimed to be unimportant for a particular path (such as C or z1 into B) with their activation on any other input. To illustrate these interventions, we will depict a “treeified” version of G where every path from the input to output of G is represented by a different copy of the input. Replacing an activation with one from a different input is equivalent to replacing all inputs in the subtree upstream of that activation. ### Intervention 1: semantically equivalent subtrees Consider running the model on two inputs x1= (5,6,7, True) and x2= (8, 0, 4, True). The value of A’ is the same on both x1 and x2. Thus, if the hypothesis depicted above is correct, the output of A on both these is equivalent. This means when evaluating G on x1 we can replace the activation of A with its value on x2, as depicted here: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675735209/mirroredImages/JvZhhzycHu2Yd57RN/lniaewhtzqveyjv7syxg.png)To perform the replacement, we replaced all of the inputs upstream of A in our treeified model. (We could have performed this replacement with any other x∈D that agrees on A’.) Our hypothesis permits many other activation replacements. For example, we can perform this replacement for D instead: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675735210/mirroredImages/JvZhhzycHu2Yd57RN/tqodgxslihfqbltwlumx.png)### Intervention 2: unimportant inputs The other class of intervention permitted by h is replacement of any inputs to nodes in G that h suggests aren’t semantically important. For example, h says that the only important input for A is z1. So the model’s behavior should be preserved if we replace the activations for z2 and z3 (or, equivalently, change the input that feeds into these activations). The same applies for z1 and z3 into B. Additionally, h says that D isn’t influenced by C, so arbitrarily resampling all the inputs to C shouldn’t impact the model’s behavior. Pictorially, this looks like this: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675735209/mirroredImages/JvZhhzycHu2Yd57RN/ptcvkya9ttgg62cdb9r5.png)Notice that we are making 3 different replacements with 3 different inputs simultaneously. Still, if h is accurate, we will have preserved the important information and the output of Treeify(G)should be similar. The causal scrubbing algorithm involves performing both of these types of intervention many times. In fact, we want to maximize the number of such interventions we perform on every run of G – to the extent permitted by h. 3.2 The causal scrubbing algorithm ---------------------------------- We define an algorithm for evaluating hypotheses. This algorithm uses the intuition, illustrated in the previous section, of what activation replacements are permitted by a hypothesis. The core idea is that hypotheses can be interpreted as an “intervention blacklist”. We like to think of this as the hypothesis sticking its neck out and challenging us to swap around activations in any way that it hasn’t specifically ruled out. In a single sentence, the algorithm is: Whenever we need to compute an activation, we ask “What are all the other activations that, according to h, we could replace this activation with and still preserve the model’s behavior?”, and then make the replacement by choosing uniformly at random from that subset of the dataset, and do this recursively. In this algorithm we don’t explicitly treeify G; but we traverse it one path at a time in a tree-like fashion. We define the ***scrubbed expectation***, Escrubbed(h,D), as the expectation of the behavior f over samples from this algorithm. ### Intuitive Algorithm *(This is mostly redundant with the pseudocode below. Read in your preferred order.)* The algorithm is defined in pseudocode below. Intuitively we: * Sample a random reference input x from D * Traverse all paths through I from output towards the input by calling `run_scrub` on nodes of I recursively. For every node we consider the subgraph of I that contains everything ‘upstream’ of nI (used to calculate its value from the input). Each of these correspond to a subgraph of the image c(I) in G. * The return value of `run_scrub(n_I, c, D, x)` is an activation from G. Specifically it is an activation for the corresponding node in G that the **hypothesis claims represents the value of**nI when I is run on input `x`. + Let nG=c(nI). + If nG is an input node we will return x. + Otherwise we will determine the activations of each input from the parents of nG. For each parent pG of nG: * If there exists a parent pI of nI that corresponds to pG then the hypothesis claims that the value of pG is important for nG. In particular it is important as it represents the value defined by pI. Thus we sample a datum `new_x` that agrees with x on the value of pI. We’ll **recursively call** `run_scrub` on pI in order to get an activation for pG. * For any “unimportant parent” not mapped by the correspondence, we select an input `other_x`. This is a random input from the dataset, however we enforce that the *same* random input is used by all unimportant parents of a particular node.[[12]](#fni28h649vn8f) We record the value of pG on `other_x`. * We now have the activations of all the parents of nG – these are exactly the inputs to running the function defined for the node nG. We return the output of this function. ### Pseudocode ``` def estim(h, D): """Estimate E_scrubbed(h, D)""" _G, I, c = h outs = [] for i in NUM_SAMPLES: x = random.sample(D) outs.append(run_scrub(c, D, output_node_of(I), x)) return mean(outs) def run_scrub( c, # correspondence I -> G D: Set[Datum], n_I, # node of I ref_x: Datum ): """Returns an activation of n_G which h claims represents n_I(ref_x).""" n_G = c(n_I) if n_G is an input node: return ref_x inputs_G = {} # pick a random datum to use for all “unimportant parents” of this node random_x = random.sample(D) # get the scrubbed activations of the inputs to n_G for parent_G in n_G.parents(): # “important” parents if parent_G is in map(c, n_I.parents()): parent_I = c.inverse(parent_G) # sample a new datum that agrees on the interpretation node new_x = sample_agreeing_x(D, parent_I, ref_x) # and get its scrubbed activations recursively inputs_G[parent_G] = run_scrub(c, D, parent_I, new_x) # “unimportant” parents else: # get the activations on the random input value chosen above inputs_G[parent_G] = parent_G.value_on(random_x) # now run n_G given the computed input activations return n_G.value_from_inputs(inputs_G) def sample_agreeing_x(D, n_I, ref_x): """Returns a random element of D that agrees with ref_x on the value of n_I""" D_agree = [x in D if n_I.value_on(ref_x) == n_I.value_on(x)] return random.sample(D_agree) ``` 4 Why ablate by resampling? =========================== 4.1 What does it mean to say “this thing doesn’t matter”? --------------------------------------------------------- Suppose a hypothesis claims that some module in the model isn’t important for a given behavior. There are a variety of different interventions that people do to test this. For example: * Zero ablation: setting the activations of that module to 0 * Mean ablation: replacing the activations of that module with their empirical mean on D * Resampling ablation: patching in the activation of that module on a random different input In order to decide between these, we should think about the precise claim we’re trying to test by ablating the module. If the claim is “this module’s activations are literally unused”, then we could try replacing them with huge numbers or even NaN. But in actual cases, this would destroy the model behavior, and so this isn’t the claim we’re trying to test. We think a better type of claim is: “The behavior might depend on various properties of the activations of this module, but those activations aren’t encoding any information that’s relevant to this subtask.” Phrased differently: The distribution of activations of this module is (maybe) important for the behavior. But we don’t depend on any properties of this distribution that are conditional on *which* particular input the model receives. This is why, in our opinion, the most direct way to translate this hypothesis into an intervention experiment is to patch in the module’s activation on a randomly sampled different input–this distribution will have all the properties that the module’s activations usually have, but any connection between those properties and the correct prediction will have been scrubbed away. 4.2 Problems with zero and mean ablation ---------------------------------------- Despite their prevalence in prior work, zero and mean ablations do not translate the claims we’d like to make faithfully. As noted above, the claim we’re trying to evaluate is that the information in the output of this component doesn’t matter for our current model, not the claim that deleting the component would have no effect on behavior. We care about evaluating the claim as faithfully as possible on our current model and not replacing it with a slightly different model, which zero or mean ablation of a component does. This core problem can manifest in three ways: 1. *Zero and mean ablations take your model off distribution in an unprincipled manner.* 2. *Zero and mean ablations can have unpredictable effects on measured performance.* 3. *Zero and mean ablations remove variation and thus present an inaccurate view of what’s happening.* For more detail on these specific issues, we refer readers to the [appendix post.](https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#3_Further_discussion_of_zero_and_mean_ablation) 5 Results ========= To show the value of this approach, we apply causal scrubbing algorithm to two tasks: 1) verifying hypotheses about an algorithmic model we found previously through ad-hoc interpretability, and 2) test and incrementally improve hypotheses about how induction heads work on a 2-layer attention only model. Here, we summarize the results of those applications here to illustrate the applications of causal scrubbing; detailed results can be found in the respective auxiliary posts. 5.1 On a paren balance checker ------------------------------ We apply the causal scrubbing algorithm to a small transformer which classifies sequences of parentheses as balanced or unbalanced; see the [results post](https://www.alignmentforum.org/posts/kjudfaQazMmC74SbF/causal-scrubbing-on-a-paren-balancer-checker-part-3-of-5) for more information. In particular, we test three claims about the mechanisms this model uses. **Claim 1:**There are three heads that directly pass important information to output:[[13]](#fni09gt1g1kqt) * Heads 1.0 and 2.0 test the conjunction of two checks: that there are an equal number of open and close parentheses in the entire sequence, and that the sequence starts open. * Head 2.1 checks that the nesting depth is never negative at any point in the sequence. Claim 1 is represented by the following hypothesis:[[14]](#fn5uhwblus1hg) ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675735209/mirroredImages/JvZhhzycHu2Yd57RN/uxaaxpaojaipevg25k2z.png)The hypothesis for claim 1. The correspondence in this diagram maps to all the nodes of G except the “other terms” node in gray. The “is balanced?” node in both graphs algorithmically computes if the input is balanced with perfect accuracy in order to compute the loss for the model. The node labeled “Equal count of `(` and `)`? Starts with `(`?” computes the conjunction of both these two checks.**Claim 2:** Heads 1.0 and 2.0 depend only on their input at position 1, and this input indirectly depends on: 1. The output of 0.0 at position 1, which computes the overall proportion of parentheses which are open. This is written into a particular direction of the residual stream in a linear fashion. 2. The embedding at position 1, which indicates if the sequence starts with `(`. **Claim 3:** Head 2.1 depends on the input at all positions, and if the nesting depth (when reading right to left!) is negative at that position.[[15]](#fnj15bxqcv7a) Here is a visual representation of the combination of all three claims: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675735210/mirroredImages/JvZhhzycHu2Yd57RN/hfsbojxzlzo4qvm7llkr.png)A representation of the hypothesis for all three claims. Arrows are annotated with the feature of the interpretation corresponding to the parent node. Inputs claimed to be unimportant not shown. ɸ is a function from [0,1] to the embedding space that we claim represents the important part of the output of head 0.0 (the residual between the actual output of 0.0 and this estimate is thus claimed to be unimportant and we perform a replacement ablation on).Testing these claims with causal scrubbing, we find that they are reasonably, but not completely, accurate: | | | | --- | --- | | Claim(s) tested | Performance recovered[[16]](#fn2vc8ef5biyk) | | 1 | 93% | | 1 + 2 | 88% | | 1 + 3 | 84% | | 1 + 2 + 3 | 72% | As expected, performance drops as we are more specific about how exactly the high level features are computed. This is because as the hypotheses get more specific, they induce more activation replacements, often stacked several layers deep.[[17]](#fn7d654z3sb44) This indicates our hypothesis is subtly incorrect in several ways, either by missing pathways along which information travels or imperfectly identifying the features that the model uses in practice. We explain these results in more detail in [this appendix post](https://www.alignmentforum.org/posts/kjudfaQazMmC74SbF/causal-scrubbing-on-a-paren-balancer-checker-part-3-of-5). 5.2 On induction ---------------- We investigated ‘induction’ heads in a 2 layer attention only model. We were able to easily test out and incrementally improve hypotheses about which computations in the model were important for the behavior of the heads. We first tested a naive induction hypothesis, which separates out the input to an induction head in layer 1 into three separate paths – the value, the key, and the query – and specified where the important information in each path comes from. We hypothesized that both the values and queries are formed based on only the input directly from the token embeddings via the residual stream and have no dependence on attention layer 0. The keys, however, are produced only by the input from attention layer 0; in particular, they depend on the part of the output of attention layer 0 that corresponds to attention on the previous token position.[[18]](#fnh7lzkn0qhr) We test these hypotheses on a subset of openwebtext where induction is likely (but not guaranteed) to be helpful.[[19]](#fnhy4rcbsvbk)  Evaluated on this dataset, this naive hypothesis only recovers 35% of the performance. In order to improve this we made various edits which allow the information to flow through additional pathways: * First, we allow the attention pattern of the induction head to compare a set of three consecutive tokens (instead of just a single token) to determine when to induct. * Next, we also allow the query and value to also depend on the part of the output of layer 0 that corresponds to the current position. * We also special case three layer 0 heads which attend to repeated occurrences of the current token. In particular, we assume that the important part of the output of these heads is what their output would be *if* their attention was just an identity matrix.[[20]](#fnvpwtp9wleu) With these adjustments, our hypothesis recovers 86% of the performance. We believe it would have been significantly harder to develop and have confidence in a hypothesis this precise only using ad-hoc methods to verify the correctness of a hypothesis. We explain these results in more detail in [this appendix post](https://www.alignmentforum.org/posts/j6s9H9SHrEhEfuJnq/causal-scrubbing-on-induction-heads-part-4-of-5). 6 Relevance to alignment ======================== The most obvious application of causal scrubbing to alignment is using it to evaluate mechanistic interpretations. In particular, we can imagine several specific use cases that are relevant to alignment: * *Checking interpretations of model behaviors produced by human researchers.* Having a standardized, reliable, and convenient set of tests would make it much easier to scale up mechanistic interpretability efforts; this might be particularly important if there are big interpretability projects right before the deployment of transformative AI. * *Automated algorithmic searches for explanations.* In some cases, researchers might be able to specify a space of hypotheses and then use optimization algorithms to find the most predictive ones. We’ve done some work like this and we hope to do much more in the future. * *AI-assisted explanations.* We might be able to train models to produce highly rated and human-understandable explanations. In all three applications, we required that researchers understand the explanations that were verified by causal scrubbing. Unfortunately, it might be the case that the behaviors we want to interpret in large neural networks won’t have *any* understandable interpretations at all if most of the cognition performed inside powerful AI systems is in some sense irreducibly complex. It also seems plausible that even if these human-understandable interpretations exist, it might be intractable or impractical to find them. A lot of our interest in causal scrubbing (and mechanistic interpretability more generally) comes from applications which require interpretability-like techniques which rely on formally manipulating explanation-like objects but *don’t* require that these objects be understood by anyone (human or AI): * *Automated strategies for solving ELK.* [ARC](https://alignment.org/) is optimistic about [some strategies](https://www.lesswrong.com/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk) for solving [ELK](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit) that involve searching for objects similar to causal scrubbing explanations and then using properties of these explanations as part of the training procedure of the model, in ways that don’t require humans to understand the explanations. * *Detecting deceptive alignment.* Suppose you have a weak trusted model and a strong untrusted model. You might be able to search for explanations of why these models take similar actions which allow you to distinguish whether the untrusted model is deceptively aligned just based on the structure of the explanation, rather than via having to understand its content. * [*Relaxed adversarial training*](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) requires some way of adjudicating arguments about whether the internals of models imply they’ll behave badly in ways that are hard to find with random sampling (because the failures only occur off the training distribution, or they’re very rare). This doesn’t require that any human is able to understand these arguments; it just requires we have a mechanical argument evaluation procedure. Improved versions of the causal scrubbing algorithm might be able to fill this gap. 7 Limitations ============= Unfortunately, causal scrubbing may not be able to express all the tests of interpretability hypotheses we might want to express: * Causal scrubbing only allows activation replacements that are *perfectly permissible* by the hypothesis: that is, the respective inputs have an exactly equal value in the correspondance. + Despite being maximally strict in what replacements to allow, we are in practice willing to accept hypotheses that fail to perfectly preserve performance. We think this is an inconsistency in our current approach. + As a concrete example, if you think a component of your model encodes a continuous feature, you might want to test this by replacing the activation of this component with the activation on an input that is *approximately* equal on this feature–causal scrubbing will refuse to do this swap. + You can solve this problem by considering a generalized form of causal scrubbing, where hypotheses specify a non-uniform distribution over swaps. We’ve worked with this “generalized causal scrubbing” algorithm a bit. The space of hypotheses is continuous, which is nice for a lot of reasons (e.g. you can search over the hypothesis space with SGD). However, there are a variety of conceptual problems that still need to be resolved (e.g. there are a few different options for defining the union of two hypotheses, and it’s not obvious which is most principled). * Causal scrubbing can only propose tests that can be constructed using the data provided to it. If your hypothesis predicts that model performance will be preserved if you swap the input to any other input which has a particular property, but no other inputs in the dataset have that property, causal scrubbing can’t test your hypothesis. This happens in practice–there is probably only one sequence in webtext with a particular first name at token positions 12, 45, and 317, and a particular last name at 13, 46, 234. + This problem is addressed if you are able to produce samples that match properties by some mechanism other than rejection sampling. * Causal scrubbing doesn’t allow us to distinguish between two features that are perfectly correlated on our dataset, since they would induce the same equivalence classes. In fact, to the extent that two features A and B are highly correlated, causal scrubbing will not complain if you misidentify an A-detector as a B-detector.[[21]](#fngdu1yr71vf) Another limitation is that causal scrubbing does not guarantee that it will reject a hypothesis that is importantly false or incomplete. Here are two concrete cases where this happens: * When a model uses some heuristic that isn’t *always* applicable, it might use other circuits to inhibit the heuristic (for example, the negative name mover heads in the [Indirect Object Identification paper](https://www.alignmentforum.org/posts/3ecs6duLmTfyra3Gp/some-lessons-learned-from-studying-indirect-object)). However, these inhibitory circuits are purely harmful for inputs where the heuristic *is* applicable. In these cases, if you ignore the inhibitory circuits, you might overestimate the contribution of the heuristic to performance, leading you to falsely believe that your incomplete interpretation fully explains the behavior (and therefore fail to notice other components of the network that contribute to performance). * If two terms are correlated, sampling them independently (by two different random activation swaps) reduces the variance of the sum. Sometimes, this variance can be harmful for model performance – for instance, if it represents [interference from polysemanticity](https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#6_1_Underestimating_interference_by_neglecting_correlations_in_model_errors). This can cause a hypothesis that scrubs out correlations present in the model’s activations to appear ‘more accurate’ under causal scrubbing.[[22]](#fncayofzccz8s) These examples are both due to the hypotheses not being specific *enough* and neglecting to include some correlation in the model (either between input-feature and activation or between two activations) that would hurt the performance of the scrubbed model. We don’t think that this is a problem with causal scrubbing in particular; but instead is because interpretability explanations should be regarded as an example of [defeasible reasoning](https://en.wikipedia.org/wiki/Defeasible_reasoning), where it is possible for an argument to be overturned by further arguments. We think these problems are fairly likely to be solvable using an adversarial process where hypotheses are tested by allowing an adversary to modify the hypothesis to make it more specific in whatever ways affect the scrubbed behavior the most. Intuitively, this adversarial process requires that proposed hypotheses “point out all the mechanisms that are going on that matter for the behavior”, because if the proposed hypothesis doesn’t point something important out, the adversary can point it out. More details on this approach are included in the [appendix post](https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#8_Adversarial_validation_might_be_able_to_elicit_true_hypotheses). Despite these limitations, we are still excited about causal scrubbing. We’ve been able to directly apply it to understanding the behaviors of simple models and are optimistic about it being scalable to larger models and more complex behaviors (insofar as mechanistic interpretability can be applied to such problems at all). We currently expect causal scrubbing to be a big part of the methodology we use when doing mechanistic interpretability work in the future. Acknowledgements ================ *This work was done by the Redwood Research interpretability team. We’re especially thankful for Tao Lin for writing the software that we used for this research and for Kshitij Sachan for contributing to early versions of causal scrubbing. Causal scrubbing was strongly inspired by Kevin Wang, Arthur Conmy, and Alexandre Variengien’s* [*work on how GPT-2 Implements Indirect Object Identification*](https://arxiv.org/abs/2211.00593)*. We’d also like to thank Paul Christiano and Mark Xu for their insights on heuristic arguments on neural networks. Finally, thanks to Ben Toner, Oliver Habryka, Ajeya Cotra, Vladimir Mikulik, Tristan Hume, Jacob Steinhardt, Neel Nanda, Stephen Casper, and many others for their feedback on this work and prior drafts of this sequence.* Citation -------- Please cite as: ``` Chan, et al., "Causal Scrubbing: a method for rigorously testing interpretability hypotheses", AI Alignment Forum, 2022. ``` BibTeX Citation: ``` @article{chan2022causal, title={Causal scrubbing, a method for rigorously testing interpretability hypotheses}, author={Chan, Lawrence and Garriga-Alonso, Adrià and Goldwosky-Dill, Nicholas and Greenblatt, Ryan and Nitishinskaya, Jenny and Radhakrishnan, Ansh and Shlegeris, Buck and Thomas, Nate}, year={2022}, journal={AI Alignment Forum}, note={\url{https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing}} } ``` 1. **[^](#fnrefdtq6l2laqcp)** For example, in [the causal tracing paper](https://rome.baulab.info/) (Meng et al 2022), to evaluate whether their hypothesis correctly identified the location of facts in GPT-2, the authors replace the activation of the involved neurons and observed that the model behaved as though it believed the edited fact, and not the original fact. In [the Induction Heads paper](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) (Olsson et al 2022) the authors provide six different lines of evidence, from macroscopic co-occurrence to mechanistic plausibility. 2. **[^](#fnrefbwu0kfb3tw)** Causal scrubbing is technically formulated in terms of general computational graphs, but we’re primarily interested in using causal scrubbing on computational graphs that implement neural networks. 3. **[^](#fnrefg10ehlzhmhl)** See the discussion in the “An alternative formalism: constructing a distribution on treeified inputs” section of [the appendix post](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-motivation-formalization-and-examples-part). 4. **[^](#fnrefmcxlqny6d9c)** Most commonly, the behavior we attempt to explain is why a model achieves low loss on a particular set of examples, and thus we measure the loss directly. However, the method can explain any expected quality of the model’s output. 5. **[^](#fnrefkbywfujnhm)** We expect the results posts will be especially useful for people who wish to apply causal scrubbing in their own research. 6. **[^](#fnrefjnva7stn48)** Note that we can use causal scrubbing to ablate a particular module, by using a hypothesis where that specific module’s outputs do not matter for the model’s performance. 7. **[^](#fnref15leextqkxc)** A computational graph is a graph where the nodes represent computations and the edges specify the inputs to the computations. 8. **[^](#fnref8a8oox8gv1v)** In the normal sense of the word, not ARC’s [Heuristic Arguments](https://arxiv.org/abs/2211.06738) [approach](https://www.lesswrong.com/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk) 9. **[^](#fnrefjmkiyi6nzfr)** Since c is required to be an injective graph homomorphism, it immediately follows that c(I) is a subgraph of G which is isomorphic to I. This subgraph will be a union of paths from the input to the output. 10. **[^](#fnref4adke6b8dba)** In the appendix we’ll discuss that it is [possible to modify](https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#4_2_Including_unimportant_inputs_in_the_hypothesis) the correspondence to include these unimportant nodes, and that doing so removes some [ambiguity](https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#4_1_Should_unimportant_inputs_be_taken_from_the_same_or_different_datapoints_) on when to sample unimportant nodes together or separately. 11. **[^](#fnref42orbovkrwm)** We have no guarantee, however, that any hypothesis that passes the causal scrubbing test is desirable. See more discussion of counterexamples in [the limitations section](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-redwood-research#7_Limitations). 12. **[^](#fnrefi28h649vn8f)** This is because otherwise our algorithm would crucially depend on the exact representation of the causal graph: e.g. if the output of a particular attention layer was represented as a single input or if there was one input per attention head instead. There are several other approaches that can be taken to addressing this ambiguity, see the [appendix](https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#4_1_Should_unimportant_inputs_be_taken_from_the_same_or_different_datapoints_). 13. **[^](#fnrefi09gt1g1kqt)** That is, we consider the contribution of these heads through the residual stream into the final layer norm, excluding influence they may have through intermediate layers. 14. **[^](#fnref5uhwblus1hg)** Note that as part of this hypothesis we have aggressively simplified the original model into a computational graph with only 5 separate computations. In particular, we relied on the fact that residual stream just before the classifier head can be written as a sum of terms, including a term for each attention head (see “[Attention Heads are Independent and Additive](https://transformer-circuits.pub/2021/framework/index.html%23architecture-attn-independent)” section of Anthropic’s “Mathematical Framework for Transformer Circuits” paper). Since we claim only three of these terms are important, we clump all other terms together into one node. Additionally note this means that the ‘Head 2.0’ node in G includes *all* of the computations from layers 0 and 1, as these are required to compute the output of head 2.0 from the input. 15. **[^](#fnrefj15bxqcv7a)** The claim we test is [somewhat more subtle](https://www.alignmentforum.org/posts/kjudfaQazMmC74SbF/causal-scrubbing-on-a-paren-balancer-checker-part-3-of-5#3b__Refining_our_notion_of_the_open_proportion), involving a weighted average between the proportion of the open-parentheses in the prefix and suffix of the string when split at every position. This is equivalent for the final computation of balancedness, but more closely matches the model’s internal computation. 16. **[^](#fnref2vc8ef5biyk)** As measured by normalizing the loss so 100% is loss of the normal model (0.0003) and 0% is the loss when randomly permuting the labels. For the reasoning behind this metric see the [appendix](https://www.alignmentforum.org/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#2_1__Percentage_of_loss_recovered__as_a_measure_of_hypothesis_quality). 17. **[^](#fnref7d654z3sb44)** Our final hypothesis combines up to 51 different inputs: 4 inputs feeding into each of 1.0 and 2.0, 42 feeding into 2.1 (one for each sequence position), and 1 for the ‘other terms’. 18. **[^](#fnrefh7lzkn0qhr)** The output of an attention layer can be written as a sum of terms, one for each previous sequence position. We can thus claim that only one of these terms is important for forming the queries. 19. **[^](#fnrefhy4rcbsvbk)**In particular we create a whitelist of tokens on which exact 2-token induction is often a helpful heuristic (over and above bigram-heuristics). We then filter openwebtext (prompt, next-token) pairs for prompts that end in tokens on our whitelist. We evaluate loss on the actual next token from the dataset, however, which may not be what induction expects. More details [here](https://www.alignmentforum.org/posts/j6s9H9SHrEhEfuJnq/causal-scrubbing-on-induction-heads-part-4-of-5?_ga=2.183559205.2003135893.1669951565-1861388120.1624631929#Picking_out_tokens_at_which_the_model_is_particularly_likely_to_do_induction). We do this as we want to understand not just how our model implements induction but also how it decides *when* to use induction. 20. **[^](#fnrefvpwtp9wleu)** And thus the residual of (actual output - estimated output) is unimportant and can be interchanged with the residual on any other input. 21. **[^](#fnrefgdu1yr71vf)** This is a common way for interpretability hypotheses to be ‘partially correct.’ Depending on the type of reliability needed, this can be more or less problematic. 22. **[^](#fnrefcayofzccz8s)** Another real world example of this is this [this experiment](https://www.alignmentforum.org/posts/kjudfaQazMmC74SbF/causal-scrubbing-on-a-paren-balancer-checker-part-3-of-5#Breaking_up_Experiment_3_by_term) on the paren balance checker
3343ec6e-31e0-4674-a88b-c1cd036e7be0
trentmkelly/LessWrong-43k
LessWrong
A Gamification Of Education: a modest proposal based on the Universal Decimal Classification and RPG skill trees While making the inventory of my personal library and applying the Universal Decimal System to its classification, I found myself discovering a systematized classification of fields of knowledge, nested and organized and intricate, many of which I didn't even know existed. I couldn't help but compare how information was therein classified, and how it was imparted to me in engineering school. I also thought about how, often, software engineers and computer scientists were mostly self-thought, with even college mostly consisting of "here's a problem: go forth and figure out a way to solve it". This made me wonder whether another way of certified and certifiable education couldn't be achieved, and a couple of ideas sort of came to me. It's pretty nebulous in my mind so far, but the crux of the concept would be a modular structure of education, where the academic institution essentially established what information precisely you need from each module, and lets you get on with the activity of learning, with periodic exams that you can sign up for, which will certify your level and area of proficiency in each module. A recommended tree of learning can be established, but it should be possible to not take every intermediate test, if passing the final test proves that you've passed all the others behind it (this would allow people coming from different academic systems to certify their knowledge quickly and easily, thus avoiding the classic "Doctor in Physics from Former Soviet Union, current Taxi Driver in New York" scenario). Thus, a universal standard of how much you have proven to know about what topics can be established. Employers would then be free to request profiles in the format of such a tree. It need not be a binary "you need to have done all these courses and only these courses to work for us", they could be free to write their utility function for this or that job however they would see fit, with whichever weights and restrictions they would need. Student
564b91b2-654f-4d8b-a5fd-c8e09af28ec0
trentmkelly/LessWrong-43k
LessWrong
What's the effective R for the Delta variant of COVID-19? I have the impression that R for COVID-19 overall matters at the moment relatively little. What's matters is the R for the Delta variant. When Googling I can't find any such data for the US, the UK or any European state. I'm not sure whether this is due to incompetence or malice of authorities, but anyway, does someone have a good source or did the analysis themselves? (extra points for the R of Delta plus which does have another spike protein mutation and thus vaccine antibodies are going to bind less well)
f202e3dc-b2bc-4a1c-8e18-b07e43c57976
trentmkelly/LessWrong-43k
LessWrong
Quick Poll: Booster Reactions Lots of people are getting covid boosters now. To help myself and others plan I did an extremely informal poll on Twitter and Facebook about how people’s booster side effects compared to their second dose. Take home message: boosters are typically easier than second shots, but they’re bad often enough you should have a plan for that. The poll was a mess for a number of reasons, including: * I didn’t describe the options very well, so it’s 2/3 freeform responses I collapsed into a few categories. * There was a tremendous variation in what combination of shots people got. * It’s self-reported. I have unusually data-minded friends which minimizes the typical problem of extreme responses getting disproportionate attention, but it doesn’t eliminate it, and self-report data has other issues. * I only sampled people who follow me on social media, who are predominantly <45 years old, reasonably healthy, reasonably high income, and mostly working desk jobs.  * I specified mRNA but not the manufacturer; Moderna but not Pfizer boosters are smaller than the original dose. Nonetheless, the trend was pretty clear. Of people who received three mRNA shots from the same manufacturer, comparing their second shot to their third: * 12 had no major symptoms either time (where major is defined as “affected what you could do in your day.” It specifically does not include arm soreness, including soreness that limited range of motion) * 2 had no major symptoms for their second shot but had major for their third * Not included in data: one person who got pregnant between their second and third shot * 23 had major symptoms for their second shot, and the third was easier * This includes at least one case where the third was still extremely bad and 2-3 “still pretty bad, just not as bad as the second” * Three cases fell short of  “major symptoms” for the second, but had an even easier third shot * 11 people had similar major symptoms both times * 2 had major symptoms for
f66c8a8a-97a0-44e2-b903-a64c6ac06484
trentmkelly/LessWrong-43k
LessWrong
Does hormetism work? Opponent process theory. To the fun theory, hedonic treadmill sequences. http://gettingstronger.org/hormesis/ TL;DR stoicism with science. Key idea: OPT, Opponent Process Theory: http://gettingstronger.org/2010/05/opponent-process-theory/ Research, PDF: http://gettingstronger.org/wp-content/uploads/2010/04/Solomon-Opponent-Process-1980.pdf From the article: "In hedonic reversal, a stimulus that initially causes a pleasant or unpleasant response does not just dissipate or fade away, as Irvine describes, but rather the initial feeling leads to an opposite secondary emotion or sensation. Remarkably, the secondary reaction is often deeper or longer lasting than the initial reaction.  And what is more, when the stimulus is repeated many times, the initial response becomes weaker and the secondary response becomes stronger and lasts longer."
eef5bf12-9c8b-40f4-b7de-283f7ceb5035
trentmkelly/LessWrong-43k
LessWrong
Lifehack Ideas January 2015 > Life hacking refers to any trick, shortcut, skill, or novelty method that increases productivity and efficiency, in all walks of life. — Wikipedia This thread is for posting any promising or interesting ideas for lifehacks you've come up with or heard of.  If you've successfully  implemented your idea, please share the results.  You are also encouraged to post lifehack ideas you've tried out that have not been successful, and why you think they weren't.  If you can, please give credit for ideas that you got from other people. To any future posters of Lifehack Ideas threads, please remember to add the "lifehacks_thread" tag.
4b85c27b-732d-43fd-a820-298fc74590c4
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[Short version] Information Loss --> Basin flatness This is an overview for advanced readers.  Main post: [Information Loss --> Basin flatness](https://www.lesswrong.com/posts/wPudaEemohdYPmsye/information-loss-greater-than-basin-flatness) **Summary:** Inductive bias is related to, among other things: * Basin flatness * Which solution manifolds (manifolds of zero loss) are higher dimensional than others.  This is closely related to "basin flatness", since each dimension of the manifold is a direction of zero curvature. In relation to basin flatness and manifold dimension: 1. It is useful to consider the **"behavioral gradients"** ∇θf(θ,xi).mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}  for each input. 2. Let G be the matrix of behavioral gradients.  (The ith column of G is gi=∇θf(θ,xi)).[[1]](#fnkwxi3hytsx)  We can show that dim(manifold)≤N−Rank(G).[[2]](#fn6pvsd1m9v8x) 3. Rank(Hessian)=Rank(G).[[3]](#fnhe171k2fwv8)[[4]](#fnawnozwzw1ed) 4. Flat basin  ≈  Low-rank Hessian  =  Low-rank G  ≈  High manifold dimension 5. High manifold dimension  ≈  Low-rank G  =  Linear dependence of behavioral gradients 6. A case study in a very small neural network shows that "information loss" is a good qualitative interpretation of this linear dependence. 7. Models that throw away enough information about the input in early layers are guaranteed to live on particularly high-dimensional manifolds.  Precise bounds seem easily derivable and might be given in a future post. See the [main post](https://www.lesswrong.com/posts/wPudaEemohdYPmsye/information-loss-greater-than-basin-flatness) for details. 1. **[^](#fnrefkwxi3hytsx)**In standard terminology, G is the Jacobian of the concatenation of all outputs, w.r.t. the parameters. 2. **[^](#fnref6pvsd1m9v8x)**N is the number of parameters in the model.  See claims 1 and 2 [here](https://www.lesswrong.com/posts/wPudaEemohdYPmsye/information-loss-greater-than-basin-flatness#Behavioral_gradients) for a proof sketch. 3. **[^](#fnrefhe171k2fwv8)****Proof sketch for**Rank(Hessian)=Rank(G)**:** * span(g1,..,gk)⊥ is the set of directions in which the output is **not** first-order sensitive to parameter change.  Its dimensionality is N−rank(G). * At a local minimum, first-order sensitivity of behavior translates to second-order sensitivity of loss. * So span(g1,..,gk)⊥ is the null space of the Hessian. * So rank(Hessian)=N−(N−rank(G))=rank(G) 4. **[^](#fnrefawnozwzw1ed)**There is an [alternate proof](https://www.lesswrong.com/posts/wPudaEemohdYPmsye/information-loss-greater-than-basin-flatness#fn38ekdxnlk3o) going through the result Hessian=2GGT.  (The constant 2 depends on MSE loss.)
7f5f72f1-1ca2-4034-b384-ea02cdb0e641
trentmkelly/LessWrong-43k
LessWrong
David Krueger on AI Alignment in Academia, Coordination and Testing Intuitions David Krueger is an assistant professor at the University of Cambridge and got his PhD from Mila. His research group focuses on aligning deep learning systems, but he is also interested in governance and global coordination. He does not have an AI alignment research agenda per se, and instead tries to enable his seven PhD students to drive their own research.  Below are some highlighted quotes from our conversation (available on Youtube, Spotify, Google Podcast, Apple Podcast). For the full context for each of these quotes, you can find the accompanying transcript. On Academia and Coordination Building A Research Team, Not Following An Agenda > "I think agenda is a very grandiose term to me. It's oftentimes, I think people who are at my level of seniority or even more senior in machine learning would say, "oh, I'm pursuing a few research directions." And they wouldn't say, "I have this big agenda." And so I think my philosophy or mentality, I should say, when I set up this group and started hiring people was like, let's get talented people. Let's get people who understand and care about the problem. Let's get people who understand machine learning. Let's put them all together and just see what happens and try and find people who I want to work with, who I think are going to be nice people to have in the group who have good personalities, pro-social, who seem to really understand and care and all that stuff." (full context) On Coordination Between Academia And The Broader World > "There's a lack of understanding and appreciation of the perspective of people in machine learning within the existential safety community and vice versa. And I think that's really important to address, especially because I'm pretty pessimistic about the technical approaches. I don't think alignment is a problem that can be solved. I think we can do better and better. But to have it be existentially safe, the bar seems really, really high and I don't think we're going to get there. So
b44ae31d-e525-48e3-8ca6-f5f4d84d2f31
trentmkelly/LessWrong-43k
LessWrong
Random variables and Evidential Decision Theory This post is inspired by the recent discussion I had with IlyaShpitser and Vaniver on EDT. A random variable only ever has one value In probability theory, statistics and so on, we often use the notion of a random variable (RV). If you go look at the definition, you will see that a RV is a function of the sample space. What that means is that a RV assigns a value to each possible outcome of a system. In reality, where there are no closed systems, this means that a RV assigns a value to each possible universe. For example, a random variable X representing the outcome a die roll is a function of type "Universe → {1..6}". The value of X in a particular universe u is then X(u). Uncertainty in X corresponds to uncertainty about the universe we are in. Since X is a pure mathematical function, its value is fixed for each input. That means that in a fixed universe, say our universe, such a random variable only ever takes on one value. So, before the die roll, the value of X is undefined1, and after the roll X is forever fixed. X is the outcome of a certain particular roll. If I roll the same die again, that doesn't change the value of X. If you want to talk about multiple rolls, you have to use different variables. The usual solution is to use indices, X1, X2, etc. This also means that the nodes in a causal model, are not random variables. For example in the causal model "Smoking → Cancer", there is no single RV for smoking. Rather, the model is implicitly a generalized to mean "Smokingi → Canceri" for all persons i. What this means for EDT It is sometimes claimed that Evidential Decision Theory (EDT) can not deal with causal structure. But I would disagree. To avoid confusion, I will refer to my interpretation as Estimated Evidential Decision Theory (EEDT). Decision theories such as (E)EDT rely on the following formula to make decisions: where oj are the possible outcomes, U(oj) is the utility of an outcome, O is a random variable that represents the actual outcom
38ba1bc1-5bf2-45bd-9b15-81fb85310ee4
StampyAI/alignment-research-dataset/arxiv
Arxiv
Can Transformers Reason About Effects of Actions? 1 Introduction --------------- Giving the long pursued goal of AI - starting with McCarthy’s Advice Taker (McCarthy [1959](#bib.bib17)) - of having systems that can reason with explicitly given general knowledge as a motivation, the recent work (Clark, Tafjord, and Richardson [2020a](#bib.bib5)) studies the ability of transformers to “reason” with facts and rules given as natural language sentences. They show that in their limited setting where rules are natural language expressions of conjunctive implications of the form “[ ∧\wedge∧ condition ]\*{}^{\*}start\_FLOATSUPERSCRIPT \* end\_FLOATSUPERSCRIPT →→\rightarrow→ conclusion” transformers can “reason” with such rules (and facts) given in natural language and answer yes/no questions with 99% correctness. They also show that their model generalizes to test data that requires a much longer chain of reasoning with an accuracy of 95+%. Intrigued by their result, our aim in this paper is to further study this approach with respect to reasoning with more general forms of knowledge. Reasoning about action and change (RAC) is one of the key topic in knowledge representation and reasoning from the early days of AI. The “going to the airport” example in (McCarthy [1959](#bib.bib17)) involved reasoning about the effect of the actions “walk to the car” and “drive the car to the airport”. Subsequently McCarthy and others developed the Situation calculus as a tool to reason about actions, with the first paper on this in 1963 (McCarthy [1963](#bib.bib18)) and the seminal paper on it in 1969 (McCarthy and Hayes [1969](#bib.bib19)) where the “frame problem” was introduced. Since then there have been many specialized collections (workshop proceedings, journal special issues) on the topics of “frame problem” and “reasoning about actions and change”; and also several books (Shanahan et al. [1997](#bib.bib28); Reiter [2001](#bib.bib24)) on the topic. While “reasoning about actions and change” continues to be an active area of research in the knowledge representation and reasoning community, recently it has been a highlight aspect in common sense question answering. For example, the ATOMIC knowledge base (Sap et al. [2019](#bib.bib26)) is about actions and their attributes such as (conditional) effects, executability conditions, triggering or preceding conditions, and motivations; and it is used as a key source of commonsense knowledge in the NLQA system (Bosselut et al. [2019](#bib.bib4)). Thus, in this paper, in our quest to extend the study in (Clark, Tafjord, and Richardson [2020a](#bib.bib5)) to more general forms of knowledge, we explore how well transformers can emulate “reasoning” about effects of actions. We explore three example domains, Blocks World, Logistics and DWR (Dock-Worker-Robot) (Ghallab, Nau, and Traverso [2004](#bib.bib11)) which we collectively refer to as the BLD domains, and a generic domain. The Blocks World domain is about a world where blocks are on a table or on top of another block and can be made to towers on the table with the actions being moving a block from its current position to another position. The Logistics domain is about loading/unloading packages from trucks and airplanes and their movements from one position to another. The DWR domain is about a harbor that has locations, cranes, containers, pallets, piles and robots; where pallets can be made to piles, loaded in containers and robots, and moved by cranes; and robots can move from one location to another. The generic domain expresses effects of abstract actions on world properties. We create synthetic worlds of various complexity (based on the number of objects or actions) in these domains and create QA examples with respect to them that we divide into training and test sets. The QA examples includes items with yes/no answers as well as items with somewhat open-ended answers along with numerical values. To create the synthetic examples we use Answer Set Programming (Gelfond and Lifschitz [1988](#bib.bib8)), a declarative language that can express the frame problem in a natural way, and that has many solvers. In our synthetic examples, we focus on the simpler reasoning aspects such as starting with a completely known initial situation whether an action is executable in a particular situation, whether particular properties (referred to as“fluents”) are true in a particular situation (reached after the execution of a sequence of actions), and count of such properties. Following is an example of QA items created by our synthetic approach: Initial State: These are locations: fishery, airfield. Robots are: robot-3, r10. Crane are: crane-9, crane-7. There are piles: pile-9, pile-12. These are containers: seashell, moccasin. fishery is adjacent to airfield. fishery has the pile pile-9. pile-12 is at airfield. crane-9 is located at fishery. airfield has crane-7. robot-3 presents at airfield. fishery has r10. robot-3 is unloaded. r10 can hold a container. crane-9 can hold a container. crane-7 can hold a container. These are stacked in order top to bottom : seashell, moccasin. pile-12 has seashell at the top. Rules: A robot can be at one location at a time. A crane can move a container from the top of a pile to an empty robot or to the top of another pile at the same location. A container can be stacked in some pile on top of the pallet or some other container, loaded on a robot, or held by a crane. A pile is a fixed area attached to a single location. A crane belongs to a single location; it can manipulate containers within that location, between piles and robots. Each robot can carry one container at a time. Robots can move to free adjacent location. A crane is empty if it is not holding container. A robot is unloaded if it is not loaded with container. Action: Crane-7 picks up seashell and seashell is loaded on r10. Verify  Q:) Is r10 loaded with moccasin?  Ans: No. Counting  Q:) how many robots are unloaded? Ans: 1. Others  Q:) which container is on top of pile-12? Ans: moccasin. We pursue various experiments on our synthetic dataset to answer questions and analyze how well a transformer fine tuned with the dataset of (Clark, Tafjord, and Richardson [2020a](#bib.bib5)) can reason about effect of actions in the BLD domains, how well a transformer fine tuned with our training sets does on our test sets, how well it generalizes when the domain is enhanced with more objects, and how well a transformer fine tuned with the generic domain transfers to the specific BLD domains. In our study we consider both explicit specification of rules that express effect of actions as well as the case when these effects are not explicitly given and learned. Our contributions are summarized below: 1. 1. We provide a framework to study the extent of neural NLQA model’s ability to reason about effect of actions. The framework consists of procedurally generated question answering dataset with three types of questions, on four action domain worlds out of which, two are real-world domains. We also provide a hand authored test set. 2. 2. We show RuleTakers ([2020a](#bib.bib5)), trained on conjunctive implication rules is unable to generalize to such questions about effect of actions (48-58%). 3. 3. We perform extensive experiments to study the out-of-domain generalization abilities of current state-of-the-art transformer-encoder based QA model. 4. 4. We observe transformer-encoder based QA models can perform reasoning about effect of actions, and do generalize to some extent to out-of-domain worlds (68-90%), but there is still scope of significant improvements. 5. 5. We also investigate the ability to learn from a generic domain of actions and fluents, and observe models can somewhat generalize to real world domains (57-83%). ![Refer to caption](/html/2012.09938/assets/images/ReasoningActions.png) Figure 1: Reasoning about Actions using Question Answering. The world is described in procedurally generated text. 2 BLDG: Domains ---------------- In this section, we describe the four action domains, the *blocks world*, *logistics*, *dock-workers robots*, and *generic* domain. The first two domains are well-known benchmarks in the classical automated planning competitions111<https://www.icaps-conference.org/competitions/>. The third domain is from the text book by Ghallab, Nau, and Traverso ([2004](#bib.bib11)). We create the fourth domain to allow using a set-theoretic representation that enables simple generation of instances. ### 2.1 Blocks World A blocks world domain consists of a set of named cubes of the same size. A block can be on the table or on top of another block. A block is said to be clear if no other block is on top of it. A clear block can be moved to the table or on top of another clear block. No two blocks can be on the same block at a same time. An *instance* in this domain describes the names and location of the blocks (a.k.a. the *initial state*). Listing [3](#LST3 "Listing 3 ‣ 11.1 Blocks World ‣ 11 ASP Code ‣ Can Transformers Reason About Effects of Actions?") describes the effects of actions and their executability conditions of the actions in this domain. In this paper, we use the action language 𝒜𝒜\mathcal{A}caligraphic\_A proposed by Gelfond and Lifschitz ([1993](#bib.bib9)) to represent the action domains. Listing 1: A Blocks World Domain [⬇](data:text/plain;base64,YmxvY2soYSkuICAgYmxvY2soYikuICAgLi4uCmZsdWVudDogb24oWCxZKSwgb250YWJsZShYKSwgY2xlYXIoWCkKYWN0aW9uOiBtb3ZlKFgsWSksIG1vdmUoWCx0YWJsZSkKbW92ZShYLFkpICBleGVjdXRhYmxlX2lmIGNsZWFyKFgpLCBjbGVhcihZKQptb3ZlKFgsdGEpIGV4ZWN1dGFibGVfaWYgY2xlYXIoWCkKbW92ZShYLFkpICBjYXVzZXMgb24oWCxZKSwkXG5lZyRjbGVhcihZKQptb3ZlKFgsWSkgIGNhdXNlcyBjbGVhcihaKSBpZiBvbihYLFopCm1vdmUoWCxZKSAgY2F1c2VzICRcbmVnJG9uKFgsWikgaWYgb24oWCxaKQptb3ZlKFgsWSkgIGNhdXNlcyAkXG5lZyRvbnRhYmxlKFgpIGlmIG9udGFibGUoWCkKbW92ZShYLHRhKSBjYXVzZXMgb250YWJsZShYKQptb3ZlKFgsdGEpIGNhdXNlcyBjbGVhcihaKSBpZiBvbihYLFopCm1vdmUoWCx0YSkgY2F1c2VzICRcbmVnJG9uKFgsWikgaWYgb24oWCxaKQppbml0aWFsbHk6IG9udGFibGUoYiksIG9uKGEsYiksIC4uLg==) 1block(a).   block(b).   ... 2fluent: on(X,Y), ontable(X), clear(X) 3action: move(X,Y), move(X,table) 4move(X,Y)  executable\_if clear(X), clear(Y) 5move(X,ta) executable\_if clear(X) 6move(X,Y)  causes on(X,Y),¬\neg¬clear(Y) 7move(X,Y)  causes clear(Z) if on(X,Z) 8move(X,Y)  causes ¬\neg¬on(X,Z) if on(X,Z) 9move(X,Y)  causes ¬\neg¬ontable(X) if ontable(X) 10move(X,ta) causes ontable(X) 11move(X,ta) causes clear(Z) if on(X,Z) 12move(X,ta) causes ¬\neg¬on(X,Z) if on(X,Z) 13initially: ontable(b), on(a,b), ... In the above, X𝑋Xitalic\_X, Y𝑌Yitalic\_Y, and Z𝑍Zitalic\_Z stand for pairwise different blocks and ta𝑡𝑎taitalic\_t italic\_a stands for table. The first line specificies the objects (blocks) in the instance, the second line defines the fluents, and the third line defines the actions of the domain. Lines 4 and 5 state the executability of actions. Lines 6–12 encode the effects of the actions, e.g., Line 6 states two unconditional effects of the action of moving the block X𝑋Xitalic\_X on top of the block Y𝑌Yitalic\_Y: it will cause X𝑋Xitalic\_X to be on Y𝑌Yitalic\_Y and Y𝑌Yitalic\_Y no longer clear; Line 7 specifies a conditional effect of the same action: it will clear the block Z𝑍Zitalic\_Z if X𝑋Xitalic\_X is currently on Z𝑍Zitalic\_Z, etc. The last line specifies the initial state of the world, the block b𝑏bitalic\_b is on the table and a𝑎aitalic\_a is on b𝑏bitalic\_b, etc. ### 2.2 Logistics A logistics domain consists of trucks, airplanes, packages at different locations, and different cities. Each location is associated to a city. A location can also be an airport. A package can be loaded to a truck (or an airplane) if the former is at the same location as the latter. Once a package is loaded into a truck (or an airplane), it is inside the truck (the airplane). If a package is inside a truck (an airplane), it can be unloaded from the truck (the airplane). Once unloaded, it is at the same location of the truck (the airplane). A truck can move from one location to another location within a city. An airplane can fly from one airport to another airport. It is assumed that a truck (an airplane) can hold unlimited number of packages. The initial state specifies the location of the trucks, airplanes, and packages. ### 2.3 Dock-Workers Robots The dock-workers robots (DWR) domain could be viewed as a combination of the blocks world and the logistics domains without airplanes and airports and trucks are replaced by robots. The domain, taken inspiration from the arrangement of a habor, contains locations, cranes, containers, pallets, piles, and robots. Static information is provided describing the connectivity between locations, the attachment of pallets to locations, and the locations of cranes (that do not move). Robots can move from one location to the adjacenct one if it is not occupied by another robot. Containers are stacked into piles resembling a tower of blocks. A crane can take the container on top of a pile at the same location and holds it. It can also unload the container loaded on a robot at the same location. A crane can load the container it is holding on top of an empty robot or putdown the container on top of a pile at the same location. The initial state specifies the layout of the habor, i.e., the locations of all the objects and other properties (e.g., whether a robot is loaded with a container). Unlike other domains, the DWR domain contains state constraints (a.k.a. static causal laws) which express relationships between fluents222The usefulness of state constraints in planning has been discussed in (Thiebaux, Hoffmann, and Nebel [2003](#bib.bib31); Son et al. [2005](#bib.bib29)). ### 2.4 Generic Domain In this domain, an action is described by a set of facts of the forms in Listing [2](#LST2 "Listing 2 ‣ 2.4 Generic Domain ‣ 2 BLDG: Domains ‣ Can Transformers Reason About Effects of Actions?"). The first line defines the action name𝑛𝑎𝑚𝑒nameitalic\_n italic\_a italic\_m italic\_e. The second line specifies that the executability condition of name𝑛𝑎𝑚𝑒nameitalic\_n italic\_a italic\_m italic\_e is a set called pre(name)𝑝𝑟𝑒𝑛𝑎𝑚𝑒pre(name)italic\_p italic\_r italic\_e ( italic\_n italic\_a italic\_m italic\_e ). The third line declares that the elements of pre(name)𝑝𝑟𝑒𝑛𝑎𝑚𝑒pre(name)italic\_p italic\_r italic\_e ( italic\_n italic\_a italic\_m italic\_e ) are the literals f1,…,fn𝑓1…𝑓𝑛f1,\ldots,fnitalic\_f 1 , … , italic\_f italic\_n. The fourth line declares that the effect, coded (name,1)𝑛𝑎𝑚𝑒1(name,1)( italic\_n italic\_a italic\_m italic\_e , 1 ) is f𝑓fitalic\_f. The sixth line specifies the conditions under which the effect (name,1)𝑛𝑎𝑚𝑒1(name,1)( italic\_n italic\_a italic\_m italic\_e , 1 ) is realized, which is the set of literals named c(name,1)𝑐𝑛𝑎𝑚𝑒1c(name,1)italic\_c ( italic\_n italic\_a italic\_m italic\_e , 1 ) and contains g1,…,gn𝑔1…𝑔𝑛g1,\ldots,gnitalic\_g 1 , … , italic\_g italic\_n. Similarly, a state constraint is represented by a consequence (e.g., the fluent occupied(L)𝑜𝑐𝑐𝑢𝑝𝑖𝑒𝑑𝐿occupied(L)italic\_o italic\_c italic\_c italic\_u italic\_p italic\_i italic\_e italic\_d ( italic\_L )) and a set associated with it (e.g., the set {at(R,L)}𝑎𝑡𝑅𝐿\{at(R,L)\}{ italic\_a italic\_t ( italic\_R , italic\_L ) }) via a keyword named static\_law𝑠𝑡𝑎𝑡𝑖𝑐\_𝑙𝑎𝑤static\\_lawitalic\_s italic\_t italic\_a italic\_t italic\_i italic\_c \_ italic\_l italic\_a italic\_w. We omit the description of static causal laws here for brevity. Listing 2: Action Representation [⬇](data:text/plain;base64,YWN0aW9uKG5hbWUpCnByZWNvbmRpdGlvbihuYW1lLCBwcmUobmFtZSksIF8pCm1lbWJlcihwcmUobmFtZSksZjEpIC4uLiBtZW1iZXIocHJlKG5hbWUpLGZrKS4KZWZmZWN0KG5hbWUsIGUobmFtZSwxKSwgZikgLi4uLgptZW1iZXIoYyhuYW1lLDEpLGcxKSAuLi4gbWVtYmVyKGMobmFtZSwxKSxnbik=) 1action(name) 2precondition(name, pre(name), \_) 3member(pre(name),f1) ... member(pre(name),fk). 4effect(name, e(name,1), f) .... 5member(c(name,1),g1) ... member(c(name,1),gn) The advantage of this representation is that it enables the generation of random domains with a few parameters such as the number of fluents, the number of actions, the maximal numbers of effects of actions, the number of static causal laws, etc. Observe that this representation can be used to represent the three domains Blocks World, Logistics, and DWR. However, the size of the corresponding representation will increase dramatically, in terms of number of facts. For example, for the Block Worlds domains, with 7 blocks and three towers of 3, 1, 3 blocks, the generic representation has about 1500 facts. Observe that the number of actions and the number of fluents do not change in both representations. ### 2.5 RAC in Logic Programming Given an action domain D𝐷Ditalic\_D, represented as a collection of statements about effects of actions and relationships between fluents (Subsections [2.1](#S2.SS1 "2.1 Blocks World ‣ 2 BLDG: Domains ‣ Can Transformers Reason About Effects of Actions?")–[2.4](#S2.SS4 "2.4 Generic Domain ‣ 2 BLDG: Domains ‣ Can Transformers Reason About Effects of Actions?")), an initial state I𝐼Iitalic\_I, stating the truth value of the fluents in the domain, and an integer n𝑛nitalic\_n denoting the length of the trajectories that we are interested in, a logic program, P(D,I,n)𝑃𝐷𝐼𝑛P(D,I,n)italic\_P ( italic\_D , italic\_I , italic\_n ), can be automatically generated (see, e.g., Gelfond and Lifschitz ([1998](#bib.bib10))) that contains rules for * • solving the three fundamental problems in reasoning about effects of actions: the *qualification* problem (when can an action be executed?), the *ramification* problem (how to deal with indirect effects of actions, represented by static laws, e.g., in DWR domain), and the *frame* problem (how to deal with inertial?); * • generating action occurrences: | | | | | --- | --- | --- | | | 1{occ(A,T):action(A)} 1:−time(T),T<n.:1conditional-set𝑜𝑐𝑐𝐴𝑇𝑎𝑐𝑡𝑖𝑜𝑛𝐴1𝑡𝑖𝑚𝑒𝑇𝑇 𝑛{1\;\{occ(A,T):action(A)\}\;1:-time(T),T<n.}1 { italic\_o italic\_c italic\_c ( italic\_A , italic\_T ) : italic\_a italic\_c italic\_t italic\_i italic\_o italic\_n ( italic\_A ) } 1 : - italic\_t italic\_i italic\_m italic\_e ( italic\_T ) , italic\_T < italic\_n . | | The answer sets of P(D,I,n)𝑃𝐷𝐼𝑛P(D,I,n)italic\_P ( italic\_D , italic\_I , italic\_n ) correspond one-to-one to possible evolutions of the world after n𝑛nitalic\_n actions (a.k.a. the trajectories of length n𝑛nitalic\_n). Each answer set contains information about the executability of actions at each time step (e.g., whether an action of moving a block a𝑎aitalic\_a to be on top of the block b𝑏bitalic\_b can be executed after the execution of moving b𝑏bitalic\_b on top of a𝑎aitalic\_a), the truth value of a fluent at a time step k𝑘kitalic\_k after the action sequence leading to the step k𝑘kitalic\_k has been executed (e.g., whether the block a𝑎aitalic\_a is on the table after moving on top of the block b𝑏bitalic\_b, then moving the block c𝑐citalic\_c on top of the block d𝑑ditalic\_d). This, together with the fact that efficient and scalable ASP solver exists clingo (Gebser et al. [2007](#bib.bib7)), allows for the automatic generation of QA examples for RAC by randomly generating the initial state I𝐼Iitalic\_I and using P(D,I,n)𝑃𝐷𝐼𝑛P(D,I,n)italic\_P ( italic\_D , italic\_I , italic\_n ) to generate different worlds. For example, with 20 blocks stacked up to create four towers and the length of 5 actions, generating an answer set (a world) with the corresponding questions can be done in less than 20 second. 3 Synthetic Data Generation ---------------------------- In order to study transformer’s capability of reasoning about effects of actions, we generate data for four action domains described in Subsections [2.1](#S2.SS1 "2.1 Blocks World ‣ 2 BLDG: Domains ‣ Can Transformers Reason About Effects of Actions?")–[2.4](#S2.SS4 "2.4 Generic Domain ‣ 2 BLDG: Domains ‣ Can Transformers Reason About Effects of Actions?") with three types of questions and different complexities. Each example is a triple (paragraph, question, answer), where paragraph provides the initial state, the knowledge of a domain, and a valid action sequence. We generate three types of questions based on the effects of actions to test the model’s different reasoning abilities: verify, counting, and others. A Verify question is to infer if a statement can be entailed by the paragraph or not (e.g., “Is it possible to move block A on top of block B?”), and answer can be either true or false. A Counting question is to count the number of objects satisfying an attribute or a relation (e.g., “How many blocks are on the table?”), and the answer is numeric value. An Others question is to reason the status of objects (e.g., “What is the position of block A?”), and the answer is an English word. #### Overview We randomly generated the initial states and use the method described in Subsection [2.5](#S2.SS5 "2.5 RAC in Logic Programming ‣ 2 BLDG: Domains ‣ Can Transformers Reason About Effects of Actions?") to generate answer sets representing trajectories of length up to 5 for each action domain described in Subsections [2.1](#S2.SS1 "2.1 Blocks World ‣ 2 BLDG: Domains ‣ Can Transformers Reason About Effects of Actions?")–[2.4](#S2.SS4 "2.4 Generic Domain ‣ 2 BLDG: Domains ‣ Can Transformers Reason About Effects of Actions?"). We use the answer sets to generate questions of the three aforementioned types. To investigate the out-of-distribution (OOD) generalizability of a transformer, for each domain we define five levels of complexity in terms of novel world and action depth. Novel World complexity is defined by the number of towers for block world domain; the number of airplanes, trucks, and packages for logistic domain; the number of locations, cranes, robots and containers for dock-worker robots domain; the number of fluents, actions, and the complexity of the actions (the maximal number of fluents in the precondition, effects, conditions on each effects) for generic domain. Action Depth complexity is defined by the length of the interested trajectories. #### World Generation World description includes the initial state and domain knowledge, where the former is different for each example, and the latter is the same for each example in each domain. The initial state includes two types of facts, attribute objecti(n)𝑜𝑏𝑗𝑒𝑐subscript𝑡𝑖𝑛object\_{i}(n)italic\_o italic\_b italic\_j italic\_e italic\_c italic\_t start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_n ) (e.g., location(market)) and relation relj(o1,o2)𝑟𝑒subscript𝑙𝑗subscript𝑜1subscript𝑜2rel\_{j}(o\_{1},o\_{2})italic\_r italic\_e italic\_l start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ( italic\_o start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_o start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ( e.g., adjacent(market, park)). The attribute predicates define objects in a domain or describe the status of objects and the relation predicates define the relation between objects. For four domains, the attributes and relation predicates are different: Blocks world: block/1, ontable/1, clear/1, on/2 … Logistic: airplane/1, city/1, truck/1, location/1, inCity/2, … DWR: location/1, crane/1, robot/1, pile/1, attached/2 … Generic: action/1, fluent/1, effect/3, precondition/3, member/2 … where predicate/n represents predicate is an attribute or relation with n𝑛nitalic\_n parameters. The domain knowledge includes the effects and constraints of actions mentioned in previous sections. #### Question Generation For each domain, we generate three types of questions based on the effects of actions. Verify questions are essentially binary questions. We generate “true” questions by entailed effects, (e.g., by empty(crane1)𝑒𝑚𝑝𝑡𝑦𝑐𝑟𝑎𝑛subscript𝑒1empty(crane\_{1})italic\_e italic\_m italic\_p italic\_t italic\_y ( italic\_c italic\_r italic\_a italic\_n italic\_e start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ), a true question can be “Is crane1𝑐𝑟𝑎𝑛subscript𝑒1crane\_{1}italic\_c italic\_r italic\_a italic\_n italic\_e start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT empty?”). A “false” question can be generated by negating the attribute of objects or replacing an object of a relation with others, (e.g., by at(robot1,market)𝑎𝑡𝑟𝑜𝑏𝑜subscript𝑡1𝑚𝑎𝑟𝑘𝑒𝑡at(robot\_{1},market)italic\_a italic\_t ( italic\_r italic\_o italic\_b italic\_o italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_m italic\_a italic\_r italic\_k italic\_e italic\_t ) in DWR, a false question can be “Is robot1𝑟𝑜𝑏𝑜subscript𝑡1robot\_{1}italic\_r italic\_o italic\_b italic\_o italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT at airport?”). We balance the number of “true” and “false” questions to avoid label bias. To generate Counting questions, we define a set of conditions for four domains, and count how many objects satisfy each condition, e.g. (given a condition in blocks world: executable actions at time 1, and a list of facts executable(A1,1),executable(A3,1)𝑒𝑥𝑒𝑐𝑢𝑡𝑎𝑏𝑙𝑒subscript𝐴11𝑒𝑥𝑒𝑐𝑢𝑡𝑎𝑏𝑙𝑒subscript𝐴31executable(A\_{1},1),executable(A\_{3},1)italic\_e italic\_x italic\_e italic\_c italic\_u italic\_t italic\_a italic\_b italic\_l italic\_e ( italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , 1 ) , italic\_e italic\_x italic\_e italic\_c italic\_u italic\_t italic\_a italic\_b italic\_l italic\_e ( italic\_A start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , 1 ), a counting question with answer 2 can be “How many actions are executable at time 1?”). To generate Others questions, we select relation predicates, and mask one object in the question, e.g. (given a relation in logistic: in(object1,airplane1)𝑖𝑛𝑜𝑏𝑗𝑒𝑐subscript𝑡1𝑎𝑖𝑟𝑝𝑙𝑎𝑛subscript𝑒1in(object\_{1},airplane\_{1})italic\_i italic\_n ( italic\_o italic\_b italic\_j italic\_e italic\_c italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_a italic\_i italic\_r italic\_p italic\_l italic\_a italic\_n italic\_e start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ), a question with answer airplane1𝑎𝑖𝑟𝑝𝑙𝑎𝑛subscript𝑒1airplane\_{1}italic\_a italic\_i italic\_r italic\_p italic\_l italic\_a italic\_n italic\_e start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT can be “Where is the object1𝑜𝑏𝑗𝑒𝑐subscript𝑡1object\_{1}italic\_o italic\_b italic\_j italic\_e italic\_c italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT?”). We have a set of templates with different linguistic variations for each type of questions from which we randomly sample. More examples are present in the supplemental material. Since we procedurally generate the question-answer, we are able to generate a large number of train and test samples. We divide the train and test samples by first generating a set of unique worlds with a pre-determined initial state configuration, to avoid train-test world overlaps. Further splits are made to test the different complexity worlds. ![Refer to caption](/html/2012.09938/assets/images/bw_k.png) ![Refer to caption](/html/2012.09938/assets/images/dwr_k.png) ![Refer to caption](/html/2012.09938/assets/images/log_k.png) ![Refer to caption](/html/2012.09938/assets/images/gen_k.png) ![Refer to caption](/html/2012.09938/assets/images/bw_k_ad.png) ![Refer to caption](/html/2012.09938/assets/images/dwr_k_ad.png) ![Refer to caption](/html/2012.09938/assets/images/log_k_ad.png) ![Refer to caption](/html/2012.09938/assets/images/gen_k_ad.png) Figure 2: Accuracy trend when trained on simpler worlds and tested on OOD complex worlds with action-effect axioms. ![Refer to caption](/html/2012.09938/assets/images/bw_nk.png) ![Refer to caption](/html/2012.09938/assets/images/dwr_nk.png) ![Refer to caption](/html/2012.09938/assets/images/log_nk.png) ![Refer to caption](/html/2012.09938/assets/images/bw_nk_ad.png) ![Refer to caption](/html/2012.09938/assets/images/dwr_nk_ad.png) ![Refer to caption](/html/2012.09938/assets/images/log_nk_ad.png) Figure 3: Accuracy trend when trained on simpler worlds and tested on OOD complex worlds without action-effect axioms. ![Refer to caption](/html/2012.09938/assets/images/bw_lr.png) ![Refer to caption](/html/2012.09938/assets/images/dwr_lr.png) ![Refer to caption](/html/2012.09938/assets/images/2020_09_09_Kleki.png) ![Refer to caption](/html/2012.09938/assets/images/2020_09_09_Kleki_2.png) Figure 4: Learning curve trends for different domains on N5 data. For BLD we compare with and without action-effect axioms. 4 Experiments and Results -------------------------- ### 4.1 Models #### RuleTakers We choose Roberta (Liu et al. [2019](#bib.bib16)) as the pretrained transformer-encoder based question answering model, as it has been demonstrated by [Clark, Tafjord, and Richardson](#bib.bib5) to be a near perfect reasoner over natural language rules. We finetune the Roberta model following the recommended hyperparameters in (Clark, Tafjord, and Richardson [2020a](#bib.bib5)) on the associated reasoning dataset. We refer to this model as the RuleTaker. As the provided dataset has only True-False questions, we evaluate it on our Verify questions for all the domains. #### Roberta with/without Action-Effect Rules We also train the Roberta model on our generated dataset. We concatenate the initial state and sequence of actions (if any) alongwith the question and train two models, a True-False model similar to the RuleTakers on the binary classification task, and an Open-Ended QA model with a multi-class classification objective, where it learns to generate the answer (Yes/No, Number, Others) similar to the open-ended QA task of visual question answering (Antol et al. [2015](#bib.bib1)).The answer vocabulary for the open-ended QA task is fixed for the train and test splits, and we ensure there are no unknown answers to avoid label shift. Questions are input as: [𝐶𝐿𝑆]𝑐𝑜𝑛𝑡𝑒𝑥𝑡[𝑆𝐸𝑃]𝑞𝑢𝑒𝑠𝑡𝑖𝑜𝑛[𝑆𝐸𝑃]delimited-[]𝐶𝐿𝑆𝑐𝑜𝑛𝑡𝑒𝑥𝑡delimited-[]𝑆𝐸𝑃𝑞𝑢𝑒𝑠𝑡𝑖𝑜𝑛delimited-[]𝑆𝐸𝑃[\textit{CLS}]\textit{context}[\textit{SEP}]\textit{question}[\textit{SEP}][ CLS ] context [ SEP ] question [ SEP ], the [CLS] token representation is fed to a feedforward layer to project either to single logit for the binary classification task, or the answer vocabulary size for the open-ended QA task. The action-effect rules are concatentated with the initial state to create the context. We use the standard cross-entropy loss to train. We evaluate the models using answer exact-match accuracy. | | | | | | | --- | --- | --- | --- | --- | | Model | Blocks World ↑normal-↑\uparrow↑ | Logistics ↑normal-↑\uparrow↑ | DWR ↑normal-↑\uparrow↑ | Generic ↑normal-↑\uparrow↑ | | RuleTakers | 51.0†superscript51.0†51.0^{\dagger}51.0 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 48.0†superscript48.0†48.0^{\dagger}48.0 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 58.0†superscript58.0†58.0^{\dagger}58.0 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 53.01†superscript53.01†53.01^{\dagger}53.01 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | | N1 + Rules | 85.3 | 71.3 | 73.4 | 78.5 | | N1 + NoRules | 83.0 | 68.9 | 76.2 | N/A | | N3 + Rules | 82.3 | 99.9 | 73.9 | 81.3 | | N3 + NoRules | 90.9 | 99.9 | 77.4 | N/A | | Generic N1 | 63.4†superscript63.4†63.4^{\dagger}63.4 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 57.2†superscript57.2†57.2^{\dagger}57.2 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 78.8†superscript78.8†78.8^{\dagger}78.8 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 78.4†superscript78.4†78.4^{\dagger}78.4 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | | Generic N3 | 64.5†superscript64.5†64.5^{\dagger}64.5 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 58.1†superscript58.1†58.1^{\dagger}58.1 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 80.6†superscript80.6†80.6^{\dagger}80.6 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 81.2†superscript81.2†81.2^{\dagger}81.2 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | | Generic N5 | 70.5†superscript70.5†70.5^{\dagger}70.5 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 59.6†superscript59.6†59.6^{\dagger}59.6 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 83.5†superscript83.5†83.5^{\dagger}83.5 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | 83.9†superscript83.9†83.9^{\dagger}83.9 start\_POSTSUPERSCRIPT † end\_POSTSUPERSCRIPT | Table 1: Accuracy on T/F questions on OOD novel world (N5) test scenarios. Rules/NoRules indicates with and without action-effect axioms. ††\dagger† evaluated on N1 complexity. #### Training & Testing All our experiments are conducted using Roberta-large, which is finetuned on RACE dataset (Lai et al. [2017](#bib.bib13)) using the hyperparameters mentioned in (Liu et al. [2019](#bib.bib16)) as recommended by [Clark, Tafjord, and Richardson](#bib.bib5). We use the same fixed hyperparameters (batch size, learning rate) and use 4 Nvidia V100 16GB gpus. To test the OOD zero-shot performance we defined the following world complexities, the novel worlds complexities (N1-N5) and action depth complexities (A1-A5). The novel worlds for Blocks world are defined using the number of towers (2,3,4,5,6). In DWR domain the locations, cranes, robots and containers with values for each in (2,3,4,5,6), and similarly for Logistics domain, on airplanes, trucks and packages with values for each in (2,3,4,5,6). In Generic domain the complexity is defined on number of unique actions and fluents with values in range (1-4,5-7,8-10,11-15,16-20). Action depth world complexity is defined the same throughout, with number of actions equal to (1,2,3,4,5). We train all our models with 40K balanced Verify questions, 30K Counting questions and 30K Others questions. | | | | | | | --- | --- | --- | --- | --- | | | Blocks World ↑normal-↑\uparrow↑ | Logistics ↑normal-↑\uparrow↑ | DWR ↑normal-↑\uparrow↑ | Generic ↑normal-↑\uparrow↑ | | Model | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | | A1 + Rules | 75.13 | 81.08 | 47.18 | 97.98 | 55.20 | 75.43 | 78.51 | 80.30 | 74.12 | 87.67 | 36.47 | | A1 + NoRules | 72.40 | 76.31 | 38.61 | 75.29 | 33.85 | 68.12 | 78.38 | 78.73 | 74.88 | N/A | N/A | | A3 + Rules | 92.94 | 94.94 | 60.20 | 98.07 | 65.30 | 95.60 | 76.84 | 87.51 | 74.14 | 87.73 | 40.40 | | A3 + NoRules | 90.20 | 90.81 | 59.77 | 99.62 | 90.48 | 98.70 | 75.95 | 83.93 | 61.66 | N/A | N/A | Table 2: Accuracy on the three types of questions evaluated on OOD action depth A5 test scenarios. ### 4.2 Results #### Can RuleTakers reason about effect of actions? We evaluate the RuleTakers model on our generated Verify questions with our action-effect axiom rules. The results are presented in Table [1](#S4.T1 "Table 1 ‣ Roberta with/without Action-Effect Rules ‣ 4.1 Models ‣ 4 Experiments and Results ‣ Can Transformers Reason About Effects of Actions?"). We observe that it performs similar to near random in the Blocks World and Logistics domains, and slightly better in the DWR and Generic domains. This demonstrates (unsuprisingly) learning on conjunctive implications is insufficient to reason about effect of actions. #### Can Roberta reason about effect of actions? Table [1](#S4.T1 "Table 1 ‣ Roberta with/without Action-Effect Rules ‣ 4.1 Models ‣ 4 Experiments and Results ‣ Can Transformers Reason About Effects of Actions?") summarizes the results on how Roberta finetuned on simpler worlds generalizes to out-of-domain evaluation on larger more complex worlds. We can observe for Verify questions, which is the same setting for RuleTakers, it can generalize to novel worlds to some extent. In Table [1](#S4.T1 "Table 1 ‣ Roberta with/without Action-Effect Rules ‣ 4.1 Models ‣ 4 Experiments and Results ‣ Can Transformers Reason About Effects of Actions?"), all finetuned models are evaluated on N5, which is the highest complexity, but trained only on N1 and N3. For context, N1 for Blocks World is only 2 towers and N5 is 7 towers, and the world complexity for DWR and Logistics are even more complex. These results indicate that a transformer-encoder based architecture can reason about effect of actions, but there is still scope for further improvements, especially on more complex real-world domains such as DWR. #### How does explicit rules affect learning? From Figures [2](#S3.F2 "Figure 2 ‣ Question Generation ‣ 3 Synthetic Data Generation ‣ Can Transformers Reason About Effects of Actions?") and [3](#S3.F3 "Figure 3 ‣ Question Generation ‣ 3 Synthetic Data Generation ‣ Can Transformers Reason About Effects of Actions?"), and Table [1](#S4.T1 "Table 1 ‣ Roberta with/without Action-Effect Rules ‣ 4.1 Models ‣ 4 Experiments and Results ‣ Can Transformers Reason About Effects of Actions?") we can observe the effect of action-effect axioms in the three domains. On the novel world complexity OOD test sets, explicit rules show a conclusive positive trend only on the simplest of the domains, i.e, Blocks world. But on the action domain complexity OOD test sets, explicit rules show a more profound effect especially when trained on simple complexity questions (A1-2). For DWR the implicit rules learned generalize better to novel worlds, with a smaller drop in accuracy compared with explicit rules. For Logistics the model trained on smallest complexity performs better with explicit rules. We also observe an interesting phenomenon: If the complexity is more than N2 (3 airplanes, 3 trucks, and 3 packages), the model achieves near perfect results, i.e, it learns to generalize to novel complex worlds (>>>3 for all). Our hypothesis is that the reasoning needed to answer Verify questions may be sufficiently learned on that complexity, unlike Blocks World, or DWR. The Generic domain by definition needs action-effect axioms, and hence cannot be evaluated without it. A model trained on lower complexity domains (lower number of fluents and actions) generalizes well to higher complexity domains (higher number of fluents and actions) but its performance is slightly worse than the performance of its counterpart model trained on higher complexity domains. The model is able to generalize to some extent even with a small number of examples (better than random). In Figure [4](#S3.F4 "Figure 4 ‣ Question Generation ‣ 3 Synthetic Data Generation ‣ Can Transformers Reason About Effects of Actions?") we observe the positive effect with lower number of samples for Blocks world and Logistics, but a different trend for DWR. Our hypothesis is that some world axioms are simpler to express in natural language, and rules for such domains can show a positive effect in reasoning accuracy. Exploring different ways to express complex domains in natural language and observing its effect on reasoning accuracy will be an interesting future direction. #### How many samples are needed to achieve a decent performance? Figure [4](#S3.F4 "Figure 4 ‣ Question Generation ‣ 3 Synthetic Data Generation ‣ Can Transformers Reason About Effects of Actions?") shows that in all the domains the model approaches its best performance on their in-distribution data around 104superscript10410^{4}10 start\_POSTSUPERSCRIPT 4 end\_POSTSUPERSCRIPT samples. Even though the rules augmented model’s accuracy on the DWR domain appears to improve with increasing training samples till 5\*1045superscript1045\*10^{4}5 \* 10 start\_POSTSUPERSCRIPT 4 end\_POSTSUPERSCRIPT instances, it does not improve further. It is interesting to observe that other than the Logistics domain, in all other domains the model is able to achieve better than random accuracy (10%+) with a small number of samples. #### Can learning on Generic Domain transfer to BLD? We evaluate our models trained on the Generic domain on corresponding mapped questions from BLD domains to verify, if learning about reasoning from actions from a generic domain of actions and fluents transfer to other worlds with complex action spaces. Table [1](#S4.T1 "Table 1 ‣ Roberta with/without Action-Effect Rules ‣ 4.1 Models ‣ 4 Experiments and Results ‣ Can Transformers Reason About Effects of Actions?") summarizes the results. We observe that they do generalize to some extent, statistically significantly above random performance, even when trained on simpler action-fluent complexity. This is interesting to observe and shows the scope of defining and training neuro-symbolic models trained on knowledge representation abstractions to generalize to OOD domains. We evaluate only N1 complexity worlds as the textual description of initial state for N2 and beyond exceeds the 512 token limit of Roberta. The description increases with complexity as the number of unique fluents and actions increase with increase in world complexity. Evaluating on transformers that enable greater than 512 tokens, such as Longformer (Beltagy, Peters, and Cohan [2020](#bib.bib3)), will be an interesting future work. #### What happens if we increase the length of action sequence? Table [2](#S4.T2 "Table 2 ‣ Training & Testing ‣ 4.1 Models ‣ 4 Experiments and Results ‣ Can Transformers Reason About Effects of Actions?") and Figures [2](#S3.F2 "Figure 2 ‣ Question Generation ‣ 3 Synthetic Data Generation ‣ Can Transformers Reason About Effects of Actions?"),[3](#S3.F3 "Figure 3 ‣ Question Generation ‣ 3 Synthetic Data Generation ‣ Can Transformers Reason About Effects of Actions?") show OOD generaliztion accuracy on different question types, with varying length of action sequences. We observe compared to novel worlds, the action depth complexity OOD generalization is a challenging task. This is expected as the hypothesis space or number of states to track increase with each action taken. One may note that the model does not have an explicit state space where the effect of each individual action is updated, stored and available for reference. Interestingly, models with explicit action-effect rules consistently outperform models that learn implicit rules from examples, especially when trained on less complex worlds. Another observation is that learning on only 1/3 depth action sequence questions, the model is able to generalize to the depth of 5 action sequences on counting questions. The counting questions need the model to count the number of fluents that are true in that state. But on the Generic domain, the model’s performance is significantly poor, indicating the counting task to be harder. The models comparatively struggle to generalize on the Others questions in Blocks world and DWR domains. | | | | | | --- | --- | --- | --- | | Model | Blocks World ↑normal-↑\uparrow↑ | Logistics ↑normal-↑\uparrow↑ | DWR ↑normal-↑\uparrow↑ | | A3 + Rules | 81.0 | 81.0 | 75.0 | | A3 + NoRules | 80.4 | 78.0 | 77.0 | | A5 + Rules | 84.6 | 86.0 | 78.0 | | A5 + NoRules | 83.8 | 82.0 | 78.0 | Table 3: Accuracy on the human authored paraphrased initial states and natural questions in the BLD domains (Overall). #### How does model generalize to linguistic variations in initial state and questions? We asked expert human annotators to hand author 300 questions of the three types and world initial states for the BLD domains. These questions are used as test sets to evaluate zero-shot transfer on natural paraphrased questions. Its interesting to observe in Table [3](#S4.T3 "Table 3 ‣ What happens if we increase the length of action sequence? ‣ 4.2 Results ‣ 4 Experiments and Results ‣ Can Transformers Reason About Effects of Actions?") that Roberta is robust to paraphrased and natural questions, including where the objects are different from what it has seen during training (color blocks, to blocks re-named using numbers, animal names). 5 Discussion ------------- A major advantage of our method to create synthetic data using ASP is that we can generate datasets of any size to facilitate learning of complex data-hungry neural models. As we generate answers from ASP solvers, we do not require costly and carefully designed human annotation frameworks to collect answers. Indeed our data generation method suffers with a lack of linguistic diversity, but recent state-of-the-art methods on paraphrasing, and other data augmentation techniques such as back-translation can be used to mitigate that. Moreover similar to the recent development in self-supervised learning techniques, with a shift away from a dataset focussed research, we propose methods to learn from synthetic instances and only evaluating on hand curated real-world representative domains, which avoids the pitfalls of linguistic priors and annotation bias. There are several advantages of a natural language interface to reason about effect of actions. The requirement of carefully crafted ASP rules to define a domain needs a certain level of expertise in knowledge representation and reasoning. This requirement is avoided when a user can describe a domain in natural language, define the actions and effects, and then query about different states. Learning to perform reasoning about effects of actions from the Generic domain enables generalizing to any novel action domain. Although we only evaluate Roberta, contrary to previous results shown by [Clark, Tafjord, and Richardson](#bib.bib5), we observe Roberta cannot perfectly answer and generalize to OOD worlds in both novel world complexity and action world complexity. Although ASP solvers with perfect rule definitions can achieve 100% accuracy, but there are two bottlenecks: perfect semantic parsing from text to get the accurate initial state and human involvement in writing perfect rules. Our work is a step towards bridging this gap between these two approaches, by using the expertise of a symbolic “reasoning about action” solver to teach a robust neural model. 6 Related Work and Conclusion ------------------------------ Earlier in Section [1](#S1 "1 Introduction ‣ Can Transformers Reason About Effects of Actions?") we mentioned the recent work (Clark, Tafjord, and Richardson [2020a](#bib.bib5)) that motivated our research in this paper and also mentioned some of the prominent works related to reasoning about actions and change. In the context of answering questions with respect to natural language text where rules are explicitly given in natural language the main prior work is (Clark, Tafjord, and Richardson [2020a](#bib.bib5)). That work in its extended version (Clark, Tafjord, and Richardson [2020b](#bib.bib6)) mentions Task 15 of bAbI (Weston et al. [2016](#bib.bib33)), conditional probes in (Richardson et al. [2020](#bib.bib25)), and QuaRTz (Tafjord et al. [2019](#bib.bib30)) among examples of other work where application of general rules given in natural langauge is involved. Some of the tasks in (Weston et al. [2016](#bib.bib33)) and the task in (Mishra et al. [2018](#bib.bib20)) involve reasoning about actions, but in them the effect of actions are not explicitly stated in natural language. In (Mitra and Baral [2016](#bib.bib21)) explict answer set programming (ASP) rules are learned from the bAbI dataset and then used to reason using an ASP solver. In this paper we consider both the explicit and implicit cases, i.e., (i) where the effect of actions are explicitly given in natural language, similar to the approach in (Clark, Tafjord, and Richardson [2020a](#bib.bib5)); and (ii) where the effect of actions are not explicitly given but a transformer has to implicitly reason with it. As mentioned in (Clark, Tafjord, and Richardson [2020b](#bib.bib6)) transformers have been shown to be able to (learn to) emulate algorithms in various tasks such as semantic parsing (He and Choi [2019](#bib.bib12)), machine translation (Wang et al. [2019](#bib.bib32)), symbolic integration (Lample and Charton [2019](#bib.bib14)) and mathematics (Saxton et al. [2018](#bib.bib27)). Our work in this paper is perhaps the first work where transformers are tested with respect to a deeper knowledge representation and reasoning challenge, the challenge of reasoning about actions and their effects. In this regard it may be noted that, although reasoning about actions was formally introduced in 1963, it took decades of research before the associated “frame problem” was solved and human researchers were able to come up with formulations, as in (Reiter [1991](#bib.bib23); Gelfond and Lifschitz [1993](#bib.bib9)), that could reason about actions in a systematic and provable way. This domain of reasoning about actions and change is the focus of this paper and we show that when trained in a specific domain, transformers are able to reason about the effect of actions in that domain and within that complexity with high accuracy (90-98%). However, when testing with higher complexity the accuracy drops to 68-90%. When trained in the generic domain the transfer learning accuracy with respect to the BLD domains is 57-83%. Thus, even for the simplest aspect of RAC, which is reasoning about effects of actions, further research is needed for better transfer learning / out-of-domain accuracy. Beyond that there are harder challenges in RAC such as planning (Lifschitz [1987](#bib.bib15)), explanation, diagnosis (Baral, McIlraith, and Son [2000](#bib.bib2)) and narrative reasoning (Shanahan et al. [1997](#bib.bib28); Mueller [2014](#bib.bib22)). 7 Open Source -------------- We will make both our code and hand-authored samples public at https://xxx.github.com. As we procedurally generate the data, the synthetic data generation templates and ASP code is as follows. We also provide the detailed experimental setup, and experimental results for our plots. 8 Question-Answer Examples --------------------------- In this section, we provide the templates for four types of questions and training examples for each action domain (Blocks World, Logistics, Dock-Workers Robots and Generic Domain). Table [4](#S8.T4 "Table 4 ‣ 8 Question-Answer Examples ‣ Can Transformers Reason About Effects of Actions?"), [5](#S8.T5 "Table 5 ‣ 8 Question-Answer Examples ‣ Can Transformers Reason About Effects of Actions?"), [6](#S8.T6 "Table 6 ‣ 8 Question-Answer Examples ‣ Can Transformers Reason About Effects of Actions?"), [7](#S8.T7 "Table 7 ‣ 8 Question-Answer Examples ‣ Can Transformers Reason About Effects of Actions?") shows a few question templates for the blocks world, logistics, dwr and generic domain respectively. Strings within the {.} are variables and replaced during data generation. We create multiple paraphrased version of each template so that the model do not overfit to similar patterns. We chose randomly and select one question template from each template class. | | | | --- | --- | | Type | Templates | | Verify | Will the top of the {block} be empty | | | after the {block1} is moved to the top of the {block2} ? | | | Will the {block} be directly on the table | | | after the {block1} is moved to the top of the {block2} block ? | | | Would all the blocks be directly on the table | | | after the {block1} is moved to the top of the {block2}? | | Counting | How many blocks would be directly on the table | | | after the {block1} is moved to the top of the {block2} ? | | | Find the height of the tallest stack of blocks. | | Others | { name } moved the { block\_being\_moved } block to the top | | | of the {block\_moved\_to} block. Which block/blocks is/are above the | | | {block\_in\_question} block ? | | | Which block is in between the {top\_block} block and the {bottom\_block} | | | block if {name} move the {block\_being\_moved} block to the top of the | | | {block\_moved\_to} block ? | Table 4: Sample templates of questions for Blocks world domain. | | | | --- | --- | | Type | Templates | | Verify | Can the truck {truck} be moved from the | | | location {location\_from} to the location {location\_to}? | | | Can the airplane {airplane} be flown from the | | | location {location\_from} to the location {location\_to}? | | | Can the airplane {airplane} be loaded with | | | the object {load\_object} ? | | | Can the airplane {airplane} be unloaded ? | | Counting | How many objects are present in the location {loc} ? | | | How many trucks are present in the location {loc} ? | | Others | What is the location of the truck {truck} now ? | | | Which trucks are present in the location {loc} ? | Table 5: Templates of question for Logistics domain. | | | | --- | --- | | Type | Templates | | Verify | Is {crane} empty?Is {robot} at {location}? | | | Can {container} be loaded on {robot} in the current state? | | | Can {container} be picked up by {crane} in the current state? | | | Is the {container} at the top of {pile}? | | | Is {robot} loaded with {container}? | | | Is {robot} unloaded? | | Counting | how many cranes are empty? | | | how many crane are not empty? | | | how many containers are in {pile}? | | | how many locations are occupied? | | | how many robots are unloaded? | | | how many robots are loaded? | | Others | which location does the {robot} at? | | | {robot} is at which location? | | | which pile does the {container} in? | | | {container} is in which pile? | | | which container is on top of {pile}? | Table 6: Templates of question for DWR domain. | | | | --- | --- | | Type | Templates | | Verify | Can action {action} be executed in the initial state? | | | Will the fluent {fluent} be true | | | after executing action {action}? | | | Will the fluent {fluent} be true | | | after executing action {action} and then action {action\_2}? | | Counting | How many fluents would be true | | | after executing action{action}? | | | How many fluents would be true | | | after executing action {action1} and then action {action2}? | | | Would all the fluents be true after executing {action}? | Table 7: Templates of question for Generic domain. The following are examples for each of the generated domains. The first example is from the DWR domain:   Initial State: These are locations: fishery, airfield. Robots are: robot-3, r10. Crane are: crane-9, crane-7. There are piles: pile-9, pile-12. These are containers: seashell, moccasin. fishery is adjacent to airfield. fishery has the pile pile-9. pile-12 is at airfield. crane-9 is located at fishery. airfield has crane-7. robot-3 presents at airfield. fishery has r10. robot-3 is unloaded. r10 can hold a container. crane-9 can hold a container. crane-7 can hold a container. These are stacked in order top to bottom : seashell, moccasin. pile-12 has seashell at the top. Rules: A robot can be at one location at a time. A crane can move a container from the top of a pile to an empty robot or to the top of another pile at the same location. A container can be stacked in some pile on top of the pallet or some other container, loaded on a robot, or held by a crane. A pile is a fixed area attached to a single location. A crane belongs to a single location; it can manipulate containers within that location, between piles and robots. Each robot can carry one container at a time. Robots can move to free adjacent location. A crane is empty if it is not holding container. A robot is unloaded if it is not loaded with container. Action: Crane-7 picks up seashell and seashell is loaded on r10. Verify Is r10 loaded with moccasin?  Ans: No. Is moccasin on top of pile-12?  Ans: Yes. Counting how many robots are unloaded? Ans: 1. how many cranes are not empty? Ans: 0. Others which container is on top of pile-12? Ans: moccasin. which location does the r10 at? Ans: fishery.   Following is the example from the Generic Action Fluents Domain:   Initial State: Initially true fluents: 7, 1, 6, 8. Rules: Actions are 1, 4, 6, 8, 5, 3, 2, 7 and fluents are 1, 2, 6, 7, 4, 5, 3, 8. Action 6 causes -6 if -7. Action 2 causes -8. Action 4 causes -8. Action 8 causes 3 if 7, 8. Action 1 causes -1. Action 7 causes -8 if 1, -5. Action 3 causes 8. Action 5 causes 2. Precondition for action 4 is 3, for action 3 is 8, for action 2 is -8. Action: action 3 is executed. Verify  Q:) Will the fluent 1 be true after executing action 6?  Ans: Yes. Counting  Q:) How many fluents would be true after executing action 6? Ans: 4.   Following is an example from the Logistics domain:  Initial State: The cities in this world are: delhi, uttarpradesh, rajasthan, Jammu, kerala. The locations defined are: market, industry, ground, fishery, airfield, depot, seaport, forestry .The objects which are transported are: Bricks, Mirror, Computers . These are trucks: Ashok, Tata, General . Air cargo is flown by: Dreamliner, Boeing, Cirrus . The locations that have airport are : market, seaport, industry, ground . delhi city has the following locations: market, industry . uttarpradesh city has the following locations: ground, fishery. rajasthan city has the following locations: airfield, depot. The following locations are present in Jammu : seaport. The kerala city has the forestry location. The Bricks cargo is at market. The Mirror is present at industry. The Ashok is located at market. industry is where Tata is present. The location ground has the cargo Computers. ground is where General is present. The location market has the plane Dreamliner. The location seaport has the plane Boeing. The Cirrus plane is at industry. Rules: Vehicles airplane, truck transport packages between locations present in city. Trucks deliver within city. Airplane between airports associated with a location. Object can be loaded or unloaded in vehicle if present at same location. Unloaded package remain at vehicle location . Loading object cause object to be inside vehicle.Driving truck from its current location to another causes truck to change its location. Flying airplane from current airport to another causes airplane to change its airport. Action: Hannah started from the location industry and drove the truck tata to the location market. Verify Can Hannah drive the truck tata from the location market to the location seaport ?  Ans: No. Is it possible for Hannah to move the truck tata from the location market to the location industry ?  Ans: Yes. Can the object mirror be kept in the truck tata ?  Ans: No. Can Hannah unload the truck tata ?  Ans: No. Counting How many trucks present in location market? Ans: 2. How many objects present in location seaport? Ans: 0. Others What is the location of the truck tata now ? Ans: market. Which objects are present in the location seaport ? Ans: none. Action: Hannah started from the location industry and flew the airplane cirrus to the location ground. Verify Is it possible for Hannah to fly the airplane cirrus from the location ground to the location seaport ?  Ans: Yes. Is it possible for Hannah to fly the airplane cirrus from the location ground to the location fishery ?  Ans: No. Is it possible for Hannah to load the airplane cirrus with the object computers?  Ans: No. Counting How many airplanes are present in the location forestry ?  Ans: 0. Others Which objects are present in the location market ?  Ans: bricks. 9 Translation to Generic Domain -------------------------------- We translate all the actions domains to the generic domain. Table [8](#S9.T8 "Table 8 ‣ 9 Translation to Generic Domain ‣ Can Transformers Reason About Effects of Actions?") shows examples of translated examples. | | | | | --- | --- | --- | | Domain | Question | Translation | | Blocks | Will the top of the {block} | | | | be empty after the {block\_being\_moved} is | | | | moved to the top of the {block\_moved\_to}? | Will the fluent {fluent\_in\_question} be true | | Logistics | Can the truck {truck} be moved from the | | | | location {location\_from} to the location | | | | {location\_to}? | Can action {action} be executed in the initial state? | | DWR | Can {container} be picked up by crane in the current state? | Is it possible to execute action {action} in the current state? | Table 8: Examples of translated questions from BLD domains to generic domain. 10 Training Details -------------------- In this section, we describe the hyperparameters used in training. Depending on the types of questions used in the training time, we present the results for two training sets: including verify questions only, and including three types of questions. We also present the hand written examples at the end of this section. ### 10.1 Training Parameters We use RoBERTa-Large fine-tuned on RACE dataset as our base model. We train all our models with a learning rate of 1e-5 for 3 epochs. We adjust the per gpu training batch size and gradient accumulation accordingly to fit in our 4 Nvidia V100 16GB GPUs. We keep maximum sequence length of 512 for our experiments to ensure that the whole context is used by the model. We adjust the save steps and logging steps for each experiments accordingly. ### 10.2 Training with Verify Questions Based on the novel world complexity, we define five levels, and for each level, we generate 50K verify questions for training and testing seperately. To avoid bias, the size of negative and positive examples are equivalent. We train Roberta with each dataset (with or withour domain knowledge), and test the model on all five dataset (with or withour domain knowledge). The dataset with different complexity level as used in the training time is considered as OOD domain. Table [9](#S10.T9 "Table 9 ‣ 10.2 Training with Verify Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"),[11](#S10.T11 "Table 11 ‣ 10.2 Training with Verify Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"),[13](#S10.T13 "Table 13 ‣ 10.2 Training with Verify Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?") show the results without domain knowledge in training for Blocks World, logistic, and DWR seperately. Table [10](#S10.T10 "Table 10 ‣ 10.2 Training with Verify Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"),[12](#S10.T12 "Table 12 ‣ 10.2 Training with Verify Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"),[14](#S10.T14 "Table 14 ‣ 10.2 Training with Verify Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"),[15](#S10.T15 "Table 15 ‣ 10.2 Training with Verify Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?") show the results without domain knowledge in training for Blocks World, logistic, DWR, and generic seperately. Let Ni represent dataset of complexity level i. | | | | | | | | --- | --- | --- | --- | --- | --- | | Train /Test | N1 | N2 | N3 | N4 | N5 | | N1 | 99.98 | 96.73 | 91.64 | 84.53 | 83.03 | | N2 | 99.91 | 99.87 | 93.31 | 80.36 | 79.42 | | N3 | 99.88 | 99.76 | 99.69 | 86.78 | 82.25 | | N4 | 99.85 | 99.68 | 99.65 | 99.99 | 83.24 | | N5 | 99.83 | 99.73 | 99.48 | 99.96 | 99.82 | Table 9: Accuracy of Blocks Worlds domain on verify questions on five novel world test scenarios without knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | Train /Test | N1 | N2 | N3 | N4 | N5 | | N1 | 99.11 | 90.02 | 89.74 | 86.25 | 85.28 | | N2 | 99.36 | 99.34 | 92.78 | 88.36 | 88.27 | | N3 | 99.49 | 99.07 | 99.04 | 91.98 | 90.92 | | N4 | 98.39 | 99.32 | 98.96 | 99.67 | 90.88 | | N5 | 99.75 | 99.13 | 99.12 | 99.69 | 99.72 | Table 10: Accuracy of Blocks Worlds domain on verify questions on five novel world test scenarios with knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | Train /Test | N1 | N2 | N3 | N4 | N5 | | N1 | 97.47 | 89.21 | 80.25 | 77.36 | 69.89 | | N2 | 99.98 | 100 | 99.99 | 99.98 | 99.97 | | N3 | 99.75 | 99.90 | 99.90 | 99.96 | 99.97 | | N4 | 99.91 | 99.96 | 99.98 | 99.99 | 100 | | N5 | 99.86 | 99.86 | 99.89 | 99.97 | 99.99 | Table 11: Accuracy of Logistic domain on verify questions on five novel world test scenarios without knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | Train /Test | N1 | N2 | N3 | N4 | N5 | | N1 | 99.99 | 93.98 | 87.33 | 78.31 | 71.31 | | N2 | 96.98 | 99.97 | 99.97 | 99.93 | 99.91 | | N3 | 99.46 | 99.99 | 99.98 | 99.99 | 100 | | N4 | 99.95 | 99.96 | 99.99 | 99.99 | 99.99 | | N5 | 99.91 | 99.94 | 99.94 | 99.96 | 99.99 | Table 12: Accuracy of Logistic domain on verify questions on five novel world test scenarios with knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | Train /Test | N1 | N2 | N3 | N4 | N5 | | N1 | 83.22 | 81.38 | 80.32 | 79.05 | 76.19 | | N2 | 80.34 | 79.06 | 78.52 | 76.51 | 76.07 | | N3 | 83.45 | 83.76 | 79.28 | 80.37 | 77.40 | | N4 | 68.85 | 76.16 | 71.96 | 75.67 | 71.37 | | N5 | 77.20 | 77.41 | 76.97 | 76.78 | 76.46 | Table 13: Accuracy of DWR domain on verify questions on five novel world test scenarios without knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | Train /Test | N1 | N2 | N3 | N4 | N5 | | N1 | 85.68 | 82.65 | 80.47 | 79.75 | 72.05 | | N2 | 82.77 | 82.28 | 79.93 | 79.65 | 71.99 | | N3 | 81.24 | 81.51 | 79.17 | 79.77 | 72.96 | | N4 | 77.06 | 77.02 | 74.92 | 76.52 | 70.30 | | N5 | 78.36 | 77.41 | 75.35 | 76.46 | 71.11 | Table 14: Accuracy of DWR domain on verify questions on five novel world test scenarios with knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | Train /Test | N1 | N2 | N3 | N4 | N5 | | N1 | 79.03 | 74.97 | 71.61 | 79.70 | 78.46 | | N2 | 78.76 | 79.86 | 78.38 | 80.63 | 77.40 | | N3 | 78.97 | 79.21 | 84.90 | 83.70 | 81.27 | | N4 | 78.74 | 79.86 | 83.78 | 85.72 | 82.93 | | N5 | 78.36 | 79.75 | 84.24 | 86.65 | 83.91 | Table 15: Accuracy of Generic domain on verify questions on five novel world test scenarios with knowledge. ### 10.3 Training with Three Types of Questions Based on the depth complexity, we define five levels. Except for generic domain, we generate 40K verifty questions, 30K Counting questions and 30K others questions for training and 50K of three types for testing seperately for other three domains. We train Roberta with each dataset(with or withour domain knowledge), and test the model on all five dataset (with or withour domain knowledge). The dataset with different complexity level as used in the training time is considered as OOD domain.Table show the results without domain knowledge in training for Blocks World, logistic, DWR, and generic seperately. Table [16](#S10.T16 "Table 16 ‣ 10.3 Training with Three Types of Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"),[18](#S10.T18 "Table 18 ‣ 10.3 Training with Three Types of Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"),[20](#S10.T20 "Table 20 ‣ 10.3 Training with Three Types of Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?") show the results without domain knowledge in training for Blocks World, logistic and DWR seperately. Table [17](#S10.T17 "Table 17 ‣ 10.3 Training with Three Types of Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"),[19](#S10.T19 "Table 19 ‣ 10.3 Training with Three Types of Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"),[21](#S10.T21 "Table 21 ‣ 10.3 Training with Three Types of Questions ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?") show the results without domain knowledge in training for Blocks World, logistic and DWR generic seperately. Let Ai represent dataset of complexity level i. | | | | | | | | --- | --- | --- | --- | --- | --- | | | A1 | A2 | A3 | A4 | A5 | | Train /Test | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | | A1 | 93.94 | 85.88 | 53.80 | 90.88 | 84.40 | 52.06 | 87.08 | 79.30 | 48.24 | 83.30 | 75.19 | 40.91 | 76.31 | 72.40 | 38.61 | | A2 | 93.40 | 90.34 | 59.64 | 92.82 | 87.95 | 58.09 | 91.54 | 86.38 | 54.92 | 89.46 | 85.19 | 52.28 | 87.12 | 82.41 | 51.48 | | A3 | 93.20 | 96.27 | 65.84 | 93.12 | 94.62 | 63.72 | 92.62 | 94.25 | 62.28 | 92.10 | 92.40 | 61.71 | 90.81 | 90.20 | 59.77 | | A4 | 95.54 | 93.84 | 64.61 | 95.65 | 93.02 | 64.76 | 95.16 | 92.62 | 63.81 | 94.45 | 91.71 | 62.74 | 94.32 | 91.43 | 60.80 | | A5 | 95.1 | 93.81 | 68.81 | 95.67 | 92.86 | 68.25 | 95.75 | 92.56 | 68.13 | 95.82 | 92.33 | 66.83 | 95.2 | 92.18 | 66.48 | Table 16: Accuracy of Block World domain on three types of questions on five novel world test scenarios without knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | | A1 | A2 | A3 | A4 | A5 | | Train /Test | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | | A1 | 97.42 | 98.73 | 63.64 | 92.94 | 88.35 | 57.94 | 88.20 | 82.15 | 54.28 | 84.27 | 77.89 | 49.65 | 81.08 | 75.13 | 47.18 | | A2 | 97.47 | 94.24 | 61.68 | 97.29 | 92.76 | 58.42 | 96.49 | 90.66 | 56.04 | 94.74 | 87.78 | 51.68 | 92.74 | 85.34 | 50.08 | | A3 | 96.94 | 96.51 | 65.63 | 96.8 | 95.55 | 63.52 | 96.64 | 95.51 | 62.71 | 95.85 | 94.60 | 60.88 | 94.95 | 92.94 | 60.20 | | A4 | 96.03 | 96.36 | 66.75 | 96.13 | 95.08 | 64.74 | 96.02 | 94.52 | 63.00 | 95.58 | 93.62 | 62.76 | 95.09 | 92.68 | 62.52 | | A5 | 96.36 | 94.53 | 65.81 | 96.43 | 93.00 | 64.09 | 96.19 | 92.70 | 62.60 | 95.91 | 91.92 | 60.86 | 95.54 | 91.66 | 61.49 | Table 17: Accuracy of Blocks World domain on three types of questions on five novel world test scenarios with knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | | A1 | A2 | A3 | A4 | A5 | | Train /Test | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | | A1 | 100 | 99.80 | 100 | 84.77 | 34.95 | 59.49 | 78.19 | 38.68 | 78.73 | 75.86 | 40.71 | 71.31 | 75.29 | 33.85 | 68.12 | | A2 | 99.99 | 98.79 | 100 | 99.97 | 99.22 | 100 | 100 | 98.07 | 92.32 | 99.76 | 95.30 | 96.06 | 99.56 | 91.58 | 94.43 | | A3 | 99.99 | 98.98 | 100 | 100 | 99.37 | 100 | 100 | 99.44 | 100 | 99.65 | 93.04 | 99.68 | 99.62 | 90.48 | 98.70 | | A4 | 100 | 98.66 | 100 | 100 | ,99.05 | 100 | 100 | ,99.66 | 100 | 100 | 98.41 | 99.98 | 99.96 | 95.83 | 99.91 | | A5 | 99.97 | 92.95 | 99.99 | 99.98 | 95.93 | 100 | 100 | 95.42 | 100 | 99.99 | 94.93 | 100 | 99.96 | 91.58 | 99.98 | Table 18: Accuracy of Logistic domain on three types of questions on five novel world test scenarios without knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | | A1 | A2 | A3 | A4 | A5 | | Train /Test | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | | A1 | 100 | 79.77 | 100 | 98.26 | 37.34 | 62.26 | 98.81 | 59.75 | 85.17 | 97.36 | 55.01 | 77.46 | 97.98 | 55.20 | 75.43 | | A2 | 100 | 97.60 | 100 | 100 | 99.21 | 100 | 100 | 97.57 | 96.73 | 99.95 | 92.67 | 98.40 | 99.83 | 88.79 | 97.14 | | A3 | 98.85 | 70.64 | 65.92 | 99.89 | 62.95 | 99.87 | 98.80 | 76.28 | 99.98 | 98.85 | 68.59 | 97.44 | 98.07 | 65.30 | 95.60 | | A4 | 100 | 95.92 | 100 | 99.94 | 97.68 | 100 | 100 | 97.11 | 99.99 | 99.95 | 96.53 | 99.89 | 100 | 91.73 | 99.78 | | A5 | 99.77 | 81.55 | 99.65 | 99.88 | 87.14 | 98.33 | 99.94 | 77.60 | 98.84 | 99.85 | 79.25 | 96.51 | 99.42 | 75.91 | 94.05 | Table 19: Accuracy of Logistic domain on three types of questions on five novel world test scenarios with knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | | A1 | A2 | A3 | A4 | A5 | | Train /Test | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | | A1 | 82.65 | 87.11 | 74.65 | 82.08 | 78.85 | 72.77 | 82.47 | 78.78 | 73.4 | 75.54 | 77.64 | 72.96 | 78.38 | 78.73 | 74.88 | | A2 | 81.85 | 85.31 | 74.40 | 82.06 | 83.02 | 72.64 | 84.79 | 81.05 | 74.88 | 78.67 | 80.51 | 71.49 | 77.62 | 81.58 | 68.08 | | A3 | 79.32 | 81.11 | 68.92 | 79.58 | 78.55 | 67.42 | 81.35 | 79.20 | 71.77 | 80.09 | 76.46 | 68.59 | 75.95 | 83.93 | 61.66 | | A4 | 76.22 | 78.38 | 38.79 | 75.01 | 76.65 | 39.06 | 78.59 | 77.50 | 37.79 | 73.84 | 79.19 | 42.04 | 76.87 | 85.00 | 35.12 | | A5 | 74.03 | 78.61 | 33.06 | 73.93 | 76.36 | 33.55 | 76.91 | 76.07 | 30.84 | 72.03 | 78.66 | 27.64 | 85.21 | 85.13 | 39.78 | Table 20: Accuracy of DWR domain on three types of questions on five novel world test scenarios without knowledge. | | | | | | | | --- | --- | --- | --- | --- | --- | | | A1 | A2 | A3 | A4 | A5 | | Train /Test | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | T/F | Num | Other | | A1 | 83.08 | 87.52 | 74.60 | 81.62 | 81.96 | 72.34 | 78.85 | 79.63 | 72.93 | 77.12 | 77.94 | 73.67 | 78.51 | 80.30 | 74.12 | | A2 | 82.17 | 86.93 | 20.25 | 82.97 | 84.58 | 21.26 | 86.47 | 82.04 | 21.5 | 82.68 | 81.94 | 20.81 | 80.54 | 81.8 | 30.77 | | A3 | 78.16 | 81.38 | 71.46 | 79.31 | 79.50 | 69.25 | 81.21 | 78.93 | 75.48 | 78.41 | 78.36 | 68.10 | 76.84 | 87.51 | 74.14 | | A4 | 74.93 | 73.76 | 34.90 | 76.06 | 69.35 | 35.46 | 77.16 | 75.80 | 33.44 | 73.63 | 71.88 | 36.03 | 78.88 | 81.29 | 34.74 | | A5 | 74.65 | 76.56 | 37.16 | 75.80 | 72.75 | 38.96 | 76.84 | 72.03 | 35.94 | 77.05 | 72.80 | 35.90 | 82.44 | 87.44 | 40.42 | Table 21: Accuracy of DWR domain on three types of questions on five novel world test scenarios with knowledge. ### 10.4 Hand Written Examples To test the model’s generality, for each domain, we manually create (around) 100 examples each for Blocks Worlds, Logistic and DWR domain. We present the examples for each domain as follows. #### Logistics We created adversarial samples from our synthetic action data for each of the question types by modifying them various adversarial settings as can be seen from Table [22](#S10.T22 "Table 22 ‣ Logistics ‣ 10.4 Hand Written Examples ‣ 10 Training Details ‣ Can Transformers Reason About Effects of Actions?"). For Counting questions, we modified the answers by either adding or removing an entity belonging to any of object, truck or airplane about which the question have been asked. For “verify” and other “questions”, we modified the entity names which have not been seen in either training or previous testing phases.   Initial State: The cities in this world are: dispur,nagaland,karnataka,gujarat . The locations defined are: market,park,fishery,industry,seaport,forestry,ground . The objects which are transported are: Bricks,Batteries,Cement . The Land cargo is driven by: Maruti,Diesel,Ashok,hover . These are planes: Tejas,Cirrus . The airports are located at: seaport,ground . The following locations are present in dispur : market,park . nagaland city has the following locations: fishery,industry,seaport,forestry . karnataka city has the following locations: ground . The Bricks cargo is at market. The Batteries is present at park. The Maruti is located at market. park is where Diesel is present. The location market has truck hover. The location fishery has the cargo Cement. fishery is where Ashok is present. The Tejas plane is at seaport. The Cirrus is at hangar in ground. Action: The truck ashok is driven from the location fishery to the location industry. The truck ashok is driven from the location industry to the location seaport. Gabriel started from the location seaport and drove the truck ashok to the location forestry. Q:How many trucks are present in the location market ? Ans: 2 Change:New truck hover introduced at market location which changes the answer old answer to the same question to 2.   | | | | | --- | --- | --- | | Type | Adversarial Change | Count | | Counting | Entity added, count Increased | 18 | | Counting | Entity removed, count Decreased | 8 | | Verify | Random Words Changed | 20 | | Verify | All Question Entities Changed | 7 | | Verify | Question Paraphrased/Modified | 15 | | Verify | Main Question Object Changed | 5 | | Others | Main Question Word Changed | 7 | | Others | Randomly Changed Name | 20 | Table 22: Types of Manual samples created for Logistics Domain #### Dock-Workers Robots We create hand written DWR samples by changing the locations, robots, and containers which have not been seen in the training time. We also modify the description of initial state by injecting natural language phrases. We only ask verify and Counting questions.   Initial State: We have three locations in university: teaching building, office and library . Robots can move and we have two robots: Bob, Jame. Three cranes are: Mary, Patricia, Jennifer. These are piles: pile\_10, pile\_1, p4. We have two containers: container1, container2. Mary is located at teaching building. library has Patricia. Jennifer is located at office. Bob presents at library. teaching building has Jame. Bob has no containers. Jame has no load. Mary can hold a container. Patricia holds no container. Jennifer holds no container. container1 is top of pile\_1. container2 presents at the top of p4. teaching building and library are adjacent. library is adjacent to office. pile\_10 is attached to teaching building. library has the pile pile\_1. office has the pile p4. Rules: A robot can be at one location at a time. A crane can move a container from the top of a pile to an empty robot or to the top of another pile at the same location. A container can be stacked in some pile on top of the pallet or some other container, loaded on a robot, or held by a crane. A pile is a fixed area attached to a single location. A crane belongs to a single location; it can manipulate containers within that location, between piles and robots. Each robot can carry one container at a time. Robots can move to free adjacent location. A crane is empty if it is not holding container. A robot is unloaded if it is not loaded with container. Action: Patricia picks up container1. Verify Q:) Is Patricia empty?  Ans: No. Action: Bob moves to library. Counting Q:) how many containers are in pile\_1?  Ans: 1.   #### Blocks World For Blocks world we asked annotators to modify the name of blocks, and tower descriptions. Following is an example:   Initial State: There are five blocks on the table. The blocks are zebra, monkey, giraffe, lion and tiger. Three blocks are stacked in a tower. The blocks in this stack are, penguin, parrot and eagle. Rules: A clear block can be moved to the top of the table. A clear block can be moved to the top of another clear block. Moving a block to the top of the table causes the block to be on the table. Moving a block to the top of another block causes the first block to be on top of the second block and makes the second block not clear. Moving a block causes it to be no longer at where it was earlier. Moving a block that was on another block cauese the second block to be clear. Action: The eagle block is moved from the top of the stack and placed on top lion. Verify Q: Is block Lion empty?  Ans: No. Action: The giraffe block is placed on top tiger block. Count Q: How many height atleast 2 stacks are present? Ans: 2. 11 ASP Code ------------ In this section, we provide the ASP code for each action domain. ### 11.1 Blocks World Listing 3: A Blocks World Domain [⬇](data:text/plain;base64,dGltZSgwLi5sZW4pLgoKZmx1ZW50KG9uKFgsWSkpICAgIDotIGJsb2NrKFgpLGJsb2NrKFkpLCBYIT1ZLgpmbHVlbnQob250YWJsZShYKSk6LSBibG9jayhYKS4KZmx1ZW50KGNsZWFyKFgpKSAgICA6LSBibG9jayhYKS4KZmx1ZW50KGhvbGRpbmcoWCkpOi0gYmxvY2soWCkuCmZsdWVudChoYW5kZW1wdHkpLgoKYWN0aW9uKHBpY2t1cChYKSk6LSBibG9jayhYKS4KYWN0aW9uKHB1dGRvd24oWCkpOi0gYmxvY2soWCkuCmFjdGlvbihzdGFjayhYLFkpKTotIGJsb2NrKFgpLGJsb2NrKFkpLCAgWCE9WS4KYWN0aW9uKHVuc3RhY2soWCxZKSk6LSBibG9jayhYKSxibG9jayhZKSwgWCE9WS4KCmV4ZWN1dGFibGUocGlja3VwKFgpLCBUKTotIHRpbWUoVCksIGFjdGlvbihwaWNrdXAoWCkpLCBob2xkcyhoYW5kZW1wdHksIFQpLCBob2xkcyhjbGVhcihYKSwgVCksIGhvbGRzKG9udGFibGUoWCksIFQpLgpleGVjdXRhYmxlKHB1dGRvd24oWCksIFQpOi0gdGltZShUKSwgYWN0aW9uKHB1dGRvd24oWCkpLCBob2xkcyhob2xkaW5nKFgpLCBUKS4KZXhlY3V0YWJsZShzdGFjayhYLFkpLCBUKTotIHRpbWUoVCksIGFjdGlvbihzdGFjayhYLFkpKSwgaG9sZHMoaG9sZGluZyhYKSwgVCksIGhvbGRzKGNsZWFyKFkpLCBUKS4KZXhlY3V0YWJsZSh1bnN0YWNrKFgsWSksIFQpOi0gdGltZShUKSwgYWN0aW9uKHVuc3RhY2soWCwgWSkpLCBob2xkcyhoYW5kZW1wdHksIFQpLCBob2xkcyhvbihYLFkpLCBUKSwgaG9sZHMoY2xlYXIoWCksIFQpLgoKaG9sZHMobmVnKG9udGFibGUoWCkpLCBUKzEpOi0gdGltZShUKSxvY2MocGlja3VwKFgpLFQpLgpob2xkcyhuZWcoaGFuZGVtcHR5KSwgVCsxKTotIHRpbWUoVCksb2NjKHBpY2t1cChYKSxUKS4KaG9sZHMoaG9sZGluZyhYKSwgVCsxKTotIHRpbWUoVCksb2NjKHBpY2t1cChYKSxUKS4KCmhvbGRzKG5lZyhob2xkaW5nKFgpKSwgVCsxKTotIHRpbWUoVCksb2NjKHB1dGRvd24oWCksVCkuCmhvbGRzKGNsZWFyKFgpLCBUKzEpOi0gdGltZShUKSxvY2MocHV0ZG93bihYKSxUKS4KaG9sZHMob250YWJsZShYKSwgVCsxKTotIHRpbWUoVCksb2NjKHB1dGRvd24oWCksVCkuCmhvbGRzKGhhbmRlbXB0eSwgVCsxKTotIHRpbWUoVCksb2NjKHB1dGRvd24oWCksVCkuCgpob2xkcyhjbGVhcihYKSwgVCsxKTotIHRpbWUoVCksb2NjKHN0YWNrKFgsWSksVCkuCmhvbGRzKG5lZyhjbGVhcihZKSksIFQrMSk6LSB0aW1lKFQpLG9jYyhzdGFjayhYLFkpLFQpLgpob2xkcyhuZWcoaG9sZGluZyhYKSksIFQrMSk6LSB0aW1lKFQpLG9jYyhzdGFjayhYLFkpLFQpLgpob2xkcyhoYW5kZW1wdHksIFQrMSk6LSB0aW1lKFQpLG9jYyhzdGFjayhYLFkpLFQpLgpob2xkcyhvbihYLFkpLCBUKzEpOi0gdGltZShUKSxvY2Moc3RhY2soWCxZKSxUKS4KCmhvbGRzKG5lZyhjbGVhcihYKSksIFQrMSk6LSB0aW1lKFQpLG9jYyh1bnN0YWNrKFgsWSksVCkuCmhvbGRzKGNsZWFyKFkpLCBUKzEpOi0gdGltZShUKSxvY2ModW5zdGFjayhYLFkpLFQpLgpob2xkcyhob2xkaW5nKFgpLCBUKzEpOi0gdGltZShUKSxvY2ModW5zdGFjayhYLFkpLFQpLgpob2xkcyhuZWcoaGFuZGVtcHR5KSwgVCsxKTotIHRpbWUoVCksb2NjKHVuc3RhY2soWCxZKSxUKS4KaG9sZHMobmVnKG9uKFgsWSkpLCBUKzEpOi0gdGltZShUKSxvY2ModW5zdGFjayhYLFkpLFQpLgoKaG9sZHMoRiwgMCkgOi0gaW5pdGlhbGx5KEYpLgpob2xkcyhuZWcoRiksIDApOi0gZmx1ZW50KEYpLCBub3QgaW5pdGlhbGx5KEYpLgoKaG9sZHMoRiwgVCsxKSA6LSB0aW1lKFQpLCBmbHVlbnQoRiksIGhvbGRzKEYsIFQpLCBub3QgaG9sZHMobmVnKEYpLCBUKzEpLCBUIDwgbGVuLgpob2xkcyhuZWcoRiksIFQrMSkgOi0gdGltZShUKSwgZmx1ZW50KEYpLCBob2xkcyhuZWcoRiksIFQpLCBub3QgaG9sZHMoRiwgVCsxKSwgVCA8IGxlbi4KCjF7b2NjKEEsVCkgOiBhY3Rpb24oQSl9IDEgOi0gdGltZShUKSwgVCA8IGxlbi4KCjotIHRpbWUoVCksIGFjdGlvbihBKSwgb2NjKEEsIFQpLCBub3QgZXhlY3V0YWJsZShBLCBUKS4=) 1time(0..len). 3fluent(on(X,Y))    :- block(X),block(Y), X!=Y. 4fluent(ontable(X)):- block(X). 5fluent(clear(X))    :- block(X). 6fluent(holding(X)):- block(X). 7fluent(handempty). 9action(pickup(X)):- block(X). 10action(putdown(X)):- block(X). 11action(stack(X,Y)):- block(X),block(Y),  X!=Y. 12action(unstack(X,Y)):- block(X),block(Y), X!=Y. 14executable(pickup(X), T):- time(T), action(pickup(X)), holds(handempty, T), holds(clear(X), T), holds(ontable(X), T). 15executable(putdown(X), T):- time(T), action(putdown(X)), holds(holding(X), T). 16executable(stack(X,Y), T):- time(T), action(stack(X,Y)), holds(holding(X), T), holds(clear(Y), T). 17executable(unstack(X,Y), T):- time(T), action(unstack(X, Y)), holds(handempty, T), holds(on(X,Y), T), holds(clear(X), T). 19holds(neg(ontable(X)), T+1):- time(T),occ(pickup(X),T). 20holds(neg(handempty), T+1):- time(T),occ(pickup(X),T). 21holds(holding(X), T+1):- time(T),occ(pickup(X),T). 23holds(neg(holding(X)), T+1):- time(T),occ(putdown(X),T). 24holds(clear(X), T+1):- time(T),occ(putdown(X),T). 25holds(ontable(X), T+1):- time(T),occ(putdown(X),T). 26holds(handempty, T+1):- time(T),occ(putdown(X),T). 28holds(clear(X), T+1):- time(T),occ(stack(X,Y),T). 29holds(neg(clear(Y)), T+1):- time(T),occ(stack(X,Y),T). 30holds(neg(holding(X)), T+1):- time(T),occ(stack(X,Y),T). 31holds(handempty, T+1):- time(T),occ(stack(X,Y),T). 32holds(on(X,Y), T+1):- time(T),occ(stack(X,Y),T). 34holds(neg(clear(X)), T+1):- time(T),occ(unstack(X,Y),T). 35holds(clear(Y), T+1):- time(T),occ(unstack(X,Y),T). 36holds(holding(X), T+1):- time(T),occ(unstack(X,Y),T). 37holds(neg(handempty), T+1):- time(T),occ(unstack(X,Y),T). 38holds(neg(on(X,Y)), T+1):- time(T),occ(unstack(X,Y),T). 40holds(F, 0) :- initially(F). 41holds(neg(F), 0):- fluent(F), not initially(F). 43holds(F, T+1) :- time(T), fluent(F), holds(F, T), not holds(neg(F), T+1), T < len. 44holds(neg(F), T+1) :- time(T), fluent(F), holds(neg(F), T), not holds(F, T+1), T < len. 461{occ(A,T) : action(A)} 1 :- time(T), T < len. 48:- time(T), action(A), occ(A, T), not executable(A, T). ### 11.2 Logistics Listing 4: Logistics Domain [⬇](data:text/plain;base64,dGltZSgwLi5sZW4pLgoKZmx1ZW50KGF0KE8sIEwpKTotIG9iamVjdChPKSwgbG9jYXRpb24oTCkuCmZsdWVudChhdChPLCBMKSk6LSB0cnVjayhPKSwgbG9jYXRpb24oTCkuCmZsdWVudChhdChPLCBMKSk6LSBhaXJwbGFuZShPKSwgbG9jYXRpb24oTCkuCgpmbHVlbnQoaW4oTywgVCkpOi0gb2JqZWN0KE8pLCB0cnVjayhUKS4KZmx1ZW50KGluKE8sIEEpKTotIG9iamVjdChPKSwgYWlycGxhbmUoQSkuCmZsdWVudChvdXRzaWRlKE8pKTotIG9iamVjdChPKS4KCmFjdGlvbihsb2FkX3RydWNrKE8sIFQsIEwpKSAgICAgICAgICAgOi0gb2JqZWN0KE8pLCB0cnVjayhUKSwgbG9jYXRpb24oTCkuCmFjdGlvbihsb2FkX2FpcnBsYW5lKE8sIEEsIEwpKSAgICAgOi0gb2JqZWN0KE8pLCBhaXJwbGFuZShBKSwgbG9jYXRpb24oTCkuCmFjdGlvbih1bmxvYWRfdHJ1Y2soTywgVCwgTCkpICAgICAgIDotIG9iamVjdChPKSwgdHJ1Y2soVCksIGxvY2F0aW9uKEwpLgphY3Rpb24odW5sb2FkX2FpcnBsYW5lKE8sIEEsIEwpKSAgOi0gb2JqZWN0KE8pLCBhaXJwbGFuZShBKSwgbG9jYXRpb24oTCkuCmFjdGlvbihkcml2ZV90cnVjayhULCBMMSwgTDIsIEMpKSAgIDotIHRydWNrKFQpLCBsb2NhdGlvbihMMSksIGxvY2F0aW9uKEwyKSwgIEwxIT1MMiwgY2l0eShDKSwgaW5fY2l0eShMMSwgQyksIGluX2NpdHkoTDIsIEMpLgphY3Rpb24oZmx5X2FpcnBsYW5lKEEsIEwxLCBMMikpICAgICAgOi0gYWlycGxhbmUoQSksIGFpcnBvcnQoTDEpLCBhaXJwb3J0KEwyKSwgTDEhPUwyLgoKJSBsb2FkIGlmIGF0IHRoZSBzYW1lIGxvY2F0aW9uCmV4ZWN1dGFibGUobG9hZF90cnVjayhPLCBUciwgTCksIFQpICAgICAgICAgIDotIHRpbWUoVCksIGFjdGlvbihsb2FkX3RydWNrKE8sIFRyLCBMKSksIGhvbGRzKGF0KE8sTCksVCksIGhvbGRzKGF0KFRyLEwpLFQpLCBob2xkcyhvdXRzaWRlKE8pLCBUKS4KZXhlY3V0YWJsZShsb2FkX2FpcnBsYW5lKE8sIEEsIEwpLCBUKSAgICAgOi0gdGltZShUKSwgYWN0aW9uKGxvYWRfYWlycGxhbmUoTyxBLEwpKSwgIGhvbGRzKGF0KE8sTCksVCksIGhvbGRzKGF0KEEsTCksVCksIGhvbGRzKG91dHNpZGUoTyksIFQpLgoKJSUgdW5sb2FkIGFuIG9iamVjdCBpZiB0aGUgb2JqZWN0IGlzIGluc2lkZSB0aGUgdHJ1Y2sgb3IgdGhlIGFpcnBsYW5lCmV4ZWN1dGFibGUodW5sb2FkX3RydWNrKE8sIFRyLCBMKSwgVCkgICAgIDotIHRpbWUoVCksIGFjdGlvbih1bmxvYWRfdHJ1Y2soTywgVHIsIEwpKSwgaG9sZHMoYXQoVHIsTCksVCksIGhvbGRzKGluKE8sVHIpLCBUKS4KZXhlY3V0YWJsZSh1bmxvYWRfYWlycGxhbmUoTywgQSwgTCksIFQpIDotIHRpbWUoVCksIGFjdGlvbih1bmxvYWRfYWlycGxhbmUoTywgQSwgTCkpLCBob2xkcyhhdChBLEwpLFQpLCBob2xkcyhpbihPLEEpLCBUKS4KCiUlIGRyaXZlIG9yIGZseSByZXF1aXJlcyBiZWluZyBhdCB0aGUgc3RhcnRpbmcgbG9jYXRpb24KZXhlY3V0YWJsZShkcml2ZV90cnVjayhUciwgTDEsIEwyLCBDKSwgVCkgIDotIHRpbWUoVCksIGFjdGlvbihkcml2ZV90cnVjayhUciwgTDEsIEwyLCBDKSksIGhvbGRzKGF0KFRyLEwxKSwgVCkuCmV4ZWN1dGFibGUoZmx5X2FpcnBsYW5lKEEsIEwxLCBMMiksIFQpICAgICA6LSB0aW1lKFQpLCBhY3Rpb24oZmx5X2FpcnBsYW5lKEEsIEwxLCBMMikpLCAgaG9sZHMoYXQoQSxMMSksVCkuCgolJSBsb2FkIGFuIG9iamVjdCBpbnRvIGEgdHJ1Y2sKaG9sZHMoaW4oUCxUciksIFQrMSkgICAgICAgIDotIHRpbWUoVCksb2NjKGxvYWRfdHJ1Y2soUCxUcixMKSxUKSwgaG9sZHMoYXQoUCxMKSwgVCkuCgolJSBsb2FkIGFuIG9iamVjdCBpbnRvIGFuIGFpcnBsYW5lCmhvbGRzKGluKFAsQSksIFQrMSkgICAgICAgICA6LSB0aW1lKFQpLG9jYyhsb2FkX2FpcnBsYW5lKFAsQSxMKSxUKS4KCiUlIHVubG9hZCBhbiBvYmplY3QgZnJvbSBhIHRydWNrCmhvbGRzKG5lZyhpbihQLFRyKSksIFQrMSk6LSB0aW1lKFQpLG9jYyh1bmxvYWRfdHJ1Y2soUCxUcixMKSxUKS4KaG9sZHMob3V0c2lkZShQKSwgVCsxKTotIHRpbWUoVCksb2NjKHVubG9hZF90cnVjayhQLFRyLEwpLFQpLgoKJSUgdW5sb2FkIGFuIG9iamVjdCBmcm9tIGFuIGFpcnBsYW5lCmhvbGRzKG5lZyhpbihQLEEpKSwgVCsxKTotIHRpbWUoVCksb2NjKHVubG9hZF9haXJwbGFuZShQLEEsTCksVCkuCmhvbGRzKG91dHNpZGUoUCksIFQrMSk6LSB0aW1lKFQpLG9jYyh1bmxvYWRfYWlycGxhbmUoUCxBLEwpLFQpLgoKJSUgZHJpdmluZyBmcm9tIG9uZSBwbGFjZSB0byBhbm90aGVyCmhvbGRzKGF0KFRyLEwxKSwgVCsxKTotIHRpbWUoVCksb2NjKGRyaXZlX3RydWNrKFRyLEwsTDEsQyksVCkuCgolJSBmbHlpbmcgZnJvbSBvbmUgYWlycG9ydCB0byBhbm90aGVyCmhvbGRzKGF0KEEsTDEpLCBUKzEpOi0gdGltZShUKSxvY2MoZmx5X2FpcnBsYW5lKEEsTCxMMSksVCkuCgolJSBzdGF0aWMgY2F1c2FsIGxhd3MKaG9sZHMobmVnKGF0KFAsTDEpKSwgVCk6LSB0aW1lKFQpLCBmbHVlbnQoYXQoUCxMKSksIGZsdWVudChhdChQLEwxKSksIGhvbGRzKGF0KFAsTCksIFQpLCBMICE9IEwxLgpob2xkcyhhdChQLEwxKSwgVCkgOi0gdGltZShUKSwgaG9sZHMoYXQoVHIsTDEpLCBUKSwgaG9sZHMoaW4oUCxUciksIFQpLgpob2xkcyhhdChQLEwxKSwgVCkgOi0gdGltZShUKSwgaG9sZHMoYXQoQSxMMSksIFQpLCAgaG9sZHMoaW4oUCxBKSwgVCkuCmhvbGRzKG5lZyhvdXRzaWRlKFApKSwgVCkgOi0gdGltZShUKSwgaG9sZHMoaW4oUCxUciksIFQpLgpob2xkcyhuZWcob3V0c2lkZShQKSksIFQpIDotIHRpbWUoVCksIGhvbGRzKGluKFAsQSksIFQpLgoKJSUgaW5pdGlhbCBjb25kaXRpb24KaG9sZHMoRiwgMCkgOi0gaW5pdGlhbGx5KEYpLgpob2xkcyhuZWcoRiksIDApOi0gZmx1ZW50KEYpLCBub3QgaW5pdGlhbGx5KEYpLgoKJSUgaW5lcnRpYWwKaG9sZHMoRiwgVCsxKSA6LSB0aW1lKFQpLCBmbHVlbnQoRiksIGhvbGRzKEYsIFQpLCBub3QgaG9sZHMobmVnKEYpLCBUKzEpLCBUIDwgbGVuLgpob2xkcyhuZWcoRiksIFQrMSkgOi0gdGltZShUKSwgZmx1ZW50KEYpLCBob2xkcyhuZWcoRiksIFQpLCBub3QgaG9sZHMoRiwgVCsxKSwgVCA8IGxlbi4KCjF7b2NjKEEsVCkgOiBhY3Rpb24oQSl9IDEgOi0gdGltZShUKSwgVCA8IGxlbi4KOi0gdGltZShUKSwgYWN0aW9uKEEpLCBvY2MoQSwgVCksIG5vdCBleGVjdXRhYmxlKEEsIFQpLgo=) 1time(0..len). 3fluent(at(O, L)):- object(O), location(L). 4fluent(at(O, L)):- truck(O), location(L). 5fluent(at(O, L)):- airplane(O), location(L). 7fluent(in(O, T)):- object(O), truck(T). 8fluent(in(O, A)):- object(O), airplane(A). 9fluent(outside(O)):- object(O). 11action(load\_truck(O, T, L))           :- object(O), truck(T), location(L). 12action(load\_airplane(O, A, L))     :- object(O), airplane(A), location(L). 13action(unload\_truck(O, T, L))       :- object(O), truck(T), location(L). 14action(unload\_airplane(O, A, L))  :- object(O), airplane(A), location(L). 15action(drive\_truck(T, L1, L2, C))   :- truck(T), location(L1), location(L2),  L1!=L2, city(C), in\_city(L1, C), in\_city(L2, C). 16action(fly\_airplane(A, L1, L2))      :- airplane(A), airport(L1), airport(L2), L1!=L2. 18% load if at the same location 19executable(load\_truck(O, Tr, L), T)          :- time(T), action(load\_truck(O, Tr, L)), holds(at(O,L),T), holds(at(Tr,L),T), holds(outside(O), T). 20executable(load\_airplane(O, A, L), T)     :- time(T), action(load\_airplane(O,A,L)),  holds(at(O,L),T), holds(at(A,L),T), holds(outside(O), T). 22%% unload an object if the object is inside the truck or the airplane 23executable(unload\_truck(O, Tr, L), T)     :- time(T), action(unload\_truck(O, Tr, L)), holds(at(Tr,L),T), holds(in(O,Tr), T). 24executable(unload\_airplane(O, A, L), T) :- time(T), action(unload\_airplane(O, A, L)), holds(at(A,L),T), holds(in(O,A), T). 26%% drive or fly requires being at the starting location 27executable(drive\_truck(Tr, L1, L2, C), T)  :- time(T), action(drive\_truck(Tr, L1, L2, C)), holds(at(Tr,L1), T). 28executable(fly\_airplane(A, L1, L2), T)     :- time(T), action(fly\_airplane(A, L1, L2)),  holds(at(A,L1),T). 30%% load an object into a truck 31holds(in(P,Tr), T+1)        :- time(T),occ(load\_truck(P,Tr,L),T), holds(at(P,L), T). 33%% load an object into an airplane 34holds(in(P,A), T+1)         :- time(T),occ(load\_airplane(P,A,L),T). 36%% unload an object from a truck 37holds(neg(in(P,Tr)), T+1):- time(T),occ(unload\_truck(P,Tr,L),T). 38holds(outside(P), T+1):- time(T),occ(unload\_truck(P,Tr,L),T). 40%% unload an object from an airplane 41holds(neg(in(P,A)), T+1):- time(T),occ(unload\_airplane(P,A,L),T). 42holds(outside(P), T+1):- time(T),occ(unload\_airplane(P,A,L),T). 44%% driving from one place to another 45holds(at(Tr,L1), T+1):- time(T),occ(drive\_truck(Tr,L,L1,C),T). 47%% flying from one airport to another 48holds(at(A,L1), T+1):- time(T),occ(fly\_airplane(A,L,L1),T). 50%% static causal laws 51holds(neg(at(P,L1)), T):- time(T), fluent(at(P,L)), fluent(at(P,L1)), holds(at(P,L), T), L != L1. 52holds(at(P,L1), T) :- time(T), holds(at(Tr,L1), T), holds(in(P,Tr), T). 53holds(at(P,L1), T) :- time(T), holds(at(A,L1), T),  holds(in(P,A), T). 54holds(neg(outside(P)), T) :- time(T), holds(in(P,Tr), T). 55holds(neg(outside(P)), T) :- time(T), holds(in(P,A), T). 57%% initial condition 58holds(F, 0) :- initially(F). 59holds(neg(F), 0):- fluent(F), not initially(F). 61%% inertial 62holds(F, T+1) :- time(T), fluent(F), holds(F, T), not holds(neg(F), T+1), T < len. 63holds(neg(F), T+1) :- time(T), fluent(F), holds(neg(F), T), not holds(F, T+1), T < len. 651{occ(A,T) : action(A)} 1 :- time(T), T < len. 66:- time(T), action(A), occ(A, T), not executable(A, T). ### 11.3 Dock-Workers Robots Listing 5: Dock-Workers Robots Domain [⬇](data:text/plain;base64,dGltZSgwLi5sZW4pLgoKJSUgRmx1ZW50cwpmbHVlbnQob2NjdXBpZWQoTCkpOi0gIGxvY2F0aW9uKEwpLgpmbHVlbnQoYXQoUiwgTCkpOi0gIHJvYm90KFIpLCBsb2NhdGlvbihMKS4KZmx1ZW50KGxvYWRlZChSLCBDKSk6LSAgcm9ib3QoUiksIGNvbnRhaW5lcihDKS4KZmx1ZW50KHVubG9hZGVkKFIpKTotICByb2JvdChSKS4KZmx1ZW50KGhvbGRpbmcoSywgQykpOi0gIGNyYW5lKEspLCBjb250YWluZXIoQykuCmZsdWVudChlbXB0eShLKSk6LSAgY3JhbmUoSykuCmZsdWVudChpbihDLCBQKSk6LSAgY29udGFpbmVyKEMpLCBwaWxlKFApLgpmbHVlbnQob24oQzEsIEMyKSk6LSAgY29udGFpbmVyKEMxKSwgY29udGFpbmVyKEMyKSwgQzEgIT0gQzIuCmZsdWVudCh0b3AoQywgUCkpOi0gIGNvbnRhaW5lcihDKSwgcGlsZShQKS4KCiUlIHN0YXRpYyBjYXVzYWwgbGF3cwoKJSUgbG9jYXRpb24gb2Ygcm9ib3QKaG9sZHMob2NjdXBpZWQoTCksICBUKSAgICAgICAgIDotIHRpbWUoVCksIGZsdWVudChhdChSLCBMKSksIGhvbGRzKGF0KFIsIEwpLCBUKS4KaG9sZHMobmVnKGF0KFIsIEwpKSwgVCkgICAgICAgICA6LSB0aW1lKFQpLCBmbHVlbnQoYXQoUiwgTCkpLCBmbHVlbnQoYXQoUiwgTDEpKSwgaG9sZHMoYXQoUiwgTDEpLCBUKSwgTCAhPSBMMS4KCiUlIHVubG9hZGVkIHJvYm90CmhvbGRzKG5lZyh1bmxvYWRlZChSKSksIFQpICA6LSAgdGltZShUKSwgZmx1ZW50KGxvYWRlZChSLCBDKSksIGhvbGRzKGxvYWRlZChSLCBDKSwgVCkuCmhvbGRzKHVubG9hZGVkKFIpLCBUKSAgICAgICAgICA6LSAgdGltZShUKSwgZmx1ZW50KHVubG9hZGVkKFIpKSwgbm90IGhvbGRzKG5lZyh1bmxvYWRlZChSKSksIFQpLgoKJSUgZW1wdHkgY3JhbmUKaG9sZHMobmVnKGVtcHR5KEspKSwgVCkgICAgICAgOi0gIHRpbWUoVCksIGZsdWVudChob2xkaW5nKEssIEMpKSwgaG9sZHMoaG9sZGluZyhLLCBDKSwgVCkuCmhvbGRzKGVtcHR5KEspLCBUKSAgICAgICAgICAgICAgIDotICB0aW1lKFQpLCBmbHVlbnQoZW1wdHkoSykpLCBub3QgaG9sZHMobmVnKGVtcHR5KEspKSwgVCkuCgolJSBub3QgYmVpbmcgb24gdG9wIG9mIGEgcGlsZSBpZiBzb21lIGNyYW5lIGlzIGhvbGRpbmcgaXQsIHNvbWUgb3RoZXIgY29udGFpbmVyIGlzIG9uIGl0LCBvciBhIHJvYm90IGlzIGxvYWRpbmcgd2l0aCBpdApob2xkcyhuZWcodG9wKEMsIFApKSwgVCkgICAgICAgOi0gIHRpbWUoVCksIGZsdWVudChob2xkaW5nKEssIEMpKSwgZmx1ZW50KHRvcChDLCBQKSksIGhvbGRzKGhvbGRpbmcoSywgQyksIFQpLgpob2xkcyhuZWcodG9wKEMsIFApKSwgVCkgICAgICAgOi0gIHRpbWUoVCksIGZsdWVudChsb2FkZWQoUiwgQykpLCBmbHVlbnQodG9wKEMsIFApKSwgaG9sZHMobG9hZGVkKFIsIEMpLCBUKS4KaG9sZHMobmVnKHRvcChDJywgUCkpLCBUKSAgICAgIDotICB0aW1lKFQpLCBmbHVlbnQodG9wKEMnLCBQKSksIGZsdWVudChvbihDLCBDJykpLCBob2xkcyhvbihDLCBDJyksIFQpLgoKJSUgYSBjb250YWluZXIgaXMgaW4gYSBwaWxlIGlmIGl0IGlzIGJlbG93IHNvbWUgY29udGFpbmVyIGluIHRoZSBwaWxlIG9yIG9uIHRvcCBvZiB0aGUgcGlsZQpob2xkcyhpbihDLCBQKSwgVCkgICAgICAgICAgICAgICAgICA6LSAgdGltZShUKSwgZmx1ZW50KG9uKEMnLCBDKSksIGZsdWVudChpbihDLCBQKSksIGhvbGRzKG9uKEMnLEMpLCBUKSwgaG9sZHMoaW4oQycsIFApLCBUKS4KaG9sZHMoaW4oQywgUCksIFQpICAgICAgICAgICAgICAgICAgOi0gIHRpbWUoVCksIGZsdWVudCh0b3AoQyxQKSksIGhvbGRzKHRvcChDLCBQKSwgVCkuCgolJSBhIGNvbnRhaW5lciBpcyBub3QgaW4gYSBwaWxlIGlmIGluIGFub3RoZXIgcGlsZSBvciBzb21lIGNyYW5lIGlzIGhvbGRpbmcgaXQgb3Igc29tZSByb2JvdCBpcyBsb2FkaW5nIHdpdGggaXQKaG9sZHMobmVnKGluKEMsIFApKSwgVCkgICAgICAgICA6LSAgdGltZShUKSwgZmx1ZW50KGluKEMsUCkpLCBmbHVlbnQoaW4oQyxQJykpLCBQICE9IFAnLCBob2xkcyhpbihDLCBQJyksIFQpLgpob2xkcyhuZWcoaW4oQywgUCkpLCBUKSAgICAgICAgIDotICB0aW1lKFQpLCBmbHVlbnQoaG9sZGluZyhLLCBDKSksIGZsdWVudChpbihDLCBQKSksIGhvbGRzKGhvbGRpbmcoSywgQyksIFQpLgpob2xkcyhuZWcoaW4oQywgUCkpLCBUKSAgICAgICAgIDotICB0aW1lKFQpLCBmbHVlbnQobG9hZGVkKFIsIEMpKSwgZmx1ZW50KGluKEMsIFApKSwgaG9sZHMobG9hZGVkKFIsIEMpLCBUKS4KCiUlIGFjdGlvbiBtb3ZlIGEgcm9ib3QgZnJvbSBhIGxvY2F0aW9uIHRvIGFuIGFkamFjZW50IG5vbi1vY2N1cGllZCBsb2NhdGlvbgphY3Rpb24obW92ZShSLEwsTDEpKSAgICAgICAgICAgICAgOi0gcm9ib3QoUiksICBsb2NhdGlvbihMKSwgbG9jYXRpb24oTDEpLCBhZGphY2VudChMLEwxKS4KZXhlY3V0YWJsZShtb3ZlKFIsTCxMMSksIFQpICA6LSBhY3Rpb24obW92ZShSLEwsTDEpKSwgaG9sZHMoYXQoUixMKSwgVCksIGhvbGRzKG5lZyhvY2N1cGllZChMMSkpLCBUKS4KaG9sZHMoYXQoUiwgTDEpLCBUKzEpICAgICAgICAgICAgICAgOi0gdGltZShUKSwgb2NjKG1vdmUoUixMLEwxKSwgVCkuCmhvbGRzKG5lZyhvY2N1cGllZChMKSksIFQrMSkgIDotIHRpbWUoVCksIG9jYyhtb3ZlKFIsTCxMMSksIFQpLgoKJSUgYWN0aW9uOiB0YWtlIGEgY29udGFpbmVyIHdpdGggYW4gZW1wdHkgY3JhbmUgZnJvbSB0aGUgdG9wIG9mIGEgcGlsZSBjb2xsb2NhdGVkIHdpdGggdGhlIGNyYW5lCmFjdGlvbih0YWtlKEMsIEssIFApKSAgICAgICAgICAgICAgIDotICBjb250YWluZXIoQyksIGNyYW5lKEspLCBwaWxlKFApLCBsb2NhdGlvbihMKSwgYXR0YWNoZWQoUCwgTCksIGJlbG9uZyhLLCBMKS4KZXhlY3V0YWJsZSh0YWtlKEMsIEssIFApLCBUKSAgIDotICB0aW1lKFQpLCBhY3Rpb24odGFrZShDLCBLLCBQKSksIGhvbGRzKGVtcHR5KEspLCBUKSwgaG9sZHModG9wKEMsUCksIFQpLgpob2xkcyhob2xkaW5nKEssIEMpLCBUKzEpICAgICAgIDotIHRpbWUoVCksIG9jYyh0YWtlKEMsIEssIFApLCBUKS4KaG9sZHModG9wKEMnLCBQKSwgVCsxKSAgICAgICAgICAgICA6LSAgdGltZShUKSwgb2NjKHRha2UoQywgSywgUCksIFQpLCBob2xkcyhvbihDLCBDJyksIFQpLgpob2xkcyhuZWcob24oQywgQycpKSwgVCsxKSAgICAgIDotICB0aW1lKFQpLCBvY2ModGFrZShDLCBLLCBQKSwgVCksIGhvbGRzKG9uKEMsIEMnKSwgVCkuCgolJSBhY3Rpb246IHB1dGRvd24gYSBjb250YWluZXIgaGVsZCBieSBhIGNyYW5lIG9uIHRoZSB0b3Agb2YgYSBwaWxlIGNvbGxvY2F0ZWQgd2l0aCB0aGUgY3JhbmUKYWN0aW9uKHB1dGRvd24oQywgSywgUCkpICAgICAgICAgICAgIDotICBjb250YWluZXIoQyksIGNyYW5lKEspLCBwaWxlKFApLCBsb2NhdGlvbihMKSwgYXR0YWNoZWQoUCwgTCksIGJlbG9uZyhLLCBMKS4KZXhlY3V0YWJsZShwdXRkb3duKEMsIEssIFApLCBUKSA6LSAgdGltZShUKSwgYWN0aW9uKHB1dGRvd24oQywgSywgUCkpLCBob2xkcyhob2xkaW5nKEssIEMpLCBUKS4KaG9sZHModG9wKEMsIFApLCBUKzEpICAgICAgICAgICAgICAgICAgIDotICB0aW1lKFQpLCBvY2MocHV0ZG93bihDLCBLLCBQKSwgVCkuCmhvbGRzKGVtcHR5KEspLCBUKzEpICAgICAgICAgICAgICAgICAgIDotICB0aW1lKFQpLCBvY2MocHV0ZG93bihDLCBLLCBQKSwgVCkuCmhvbGRzKG5lZyhob2xkaW5nKEssQykpLCBUKzEpICAgICA6LSAgdGltZShUKSwgb2NjKHB1dGRvd24oQywgSywgUCksIFQpLgpob2xkcyhvbihDLCBDJyksIFQrMSkgICAgICAgICAgICAgICAgICAgOi0gIHRpbWUoVCksIG9jYyhwdXRkb3duKEMsIEssIFApLCBUKSwgaG9sZHModG9wKEMnLCBQKSwgVCkuCgolJSBhY3Rpb246IGxvYWQgYSBjb250YWluZXIgaGVsZCBieSBhIGNyYW5lIG9uIGFuIHVubG9hZGVkIHJvYm90IHRoYXQgaXMgd2l0aGluIHRoZSBzYW1lIGxvY2F0aW9uCmFjdGlvbihsb2FkKEMsIEssIFIpKSAgICAgICAgICAgICAgICAgICAgOi0gIGNvbnRhaW5lcihDKSwgY3JhbmUoSyksIHJvYm90KFIpLgpleGVjdXRhYmxlKGxvYWQoQywgSywgUiksIFQpICAgICAgICA6LSAgdGltZShUKSwgYWN0aW9uKGxvYWQoQywgSywgUikpLCBob2xkcyh1bmxvYWRlZChSKSwgVCksIGhvbGRzKGF0KFIsIEwpLCBUKSwgaG9sZHMoaG9sZGluZyhLLCBDKSwgVCksIGJlbG9uZyhLLCBMKS4KaG9sZHMobG9hZGVkKFIsIEMpLCBUKzEpICAgICAgICAgICAgIDotIHRpbWUoVCksIG9jYyhsb2FkKEMsIEssIFIpLCBUKS4KaG9sZHMoZW1wdHkoSyksIFQrMSkgICAgICAgICAgICAgICAgICAgOi0gdGltZShUKSwgb2NjKGxvYWQoQywgSywgUiksIFQpLgpob2xkcyhuZWcoaG9sZGluZyhLLEMpKSwgVCsxKSAgICAgOi0gIHRpbWUoVCksIG9jYyhsb2FkKEMsIEssIFApLCBUKS4KCiUlIGFjdGlvbjogdW5sb2FkIGEgY29udGFpbmVyIHdpdGggYW4gZW1wdHkgY3JhbmUgZnJvbSBhIGxvYWRlZCByb2JvdCB0aGF0IGlzIHdpdGhpbiB0aGUgc2FtZSBsb2NhdGlvbgphY3Rpb24odW5sb2FkKEMsIEssIFIpKSAgICAgICAgICAgICAgICA6LSAgY29udGFpbmVyKEMpLCBjcmFuZShLKSwgcm9ib3QoUikuCmV4ZWN1dGFibGUodW5sb2FkKEMsIEssIFIpLCBUKSAgICA6LSAgdGltZShUKSwgYWN0aW9uKHVubG9hZChDLCBLLCBSKSksIGhvbGRzKGxvYWRlZChSLCBDKSwgVCksIGhvbGRzKGF0KFIsIEwpLCBUKSwgaG9sZHMoZW1wdHkoSyksIFQpLCBiZWxvbmcoSywgTCkuCmhvbGRzKHVubG9hZGVkKFIpLCBUKzEpICAgICAgICAgICAgICA6LSB0aW1lKFQpLCBvY2ModW5sb2FkKEMsIEssIFIpLCBUKS4KaG9sZHMoaG9sZGluZyhLLCBDKSwgVCsxKSAgICAgICAgICAgICA6LSB0aW1lKFQpLCBvY2ModW5sb2FkKEMsIEssIFIpLCBUKS4KaG9sZHMobmVnKGxvYWRlZChSLCBDKSksIFQrMSkgICAgICAgICAgICAgIDotIHRpbWUoVCksIG9jYyh1bmxvYWQoQywgSywgUiksIFQpLgoKJSUgZGVmaW5pbmcgdGhlIGluaXRpYWwgY29uZmlndXJhdGlvbgpob2xkcyhGLCAwKSA6LSBpbml0aWFsbHkoRikuCmhvbGRzKG5lZyhGKSwgMCkgOi0gZmx1ZW50KEYpLCBub3QgaG9sZHMoRiwgMCkuCgolJSBpbmVydGlhCmhvbGRzKEYsIFQrMSkgOi0gdGltZShUKSwgZmx1ZW50KEYpLCBob2xkcyhGLCBUKSwgbm90IGhvbGRzKG5lZyhGKSwgVCsxKSwgVCA8IGxlbi4KaG9sZHMobmVnKEYpLCBUKzEpIDotIHRpbWUoVCksIGZsdWVudChGKSwgaG9sZHMobmVnKEYpLCBUKSwgbm90IGhvbGRzKEYsIFQrMSksIFQgPCBsZW4uCgolJSBhY3Rpb24gZ2VuZXJhdGlvbgoxe29jYyhBLFQpIDogYWN0aW9uKEEpfSAxIDotIHRpbWUoVCksIFQgPCBsZW4uCjotIHRpbWUoVCksIGFjdGlvbihBKSwgb2NjKEEsIFQpLCBub3QgZXhlY3V0YWJsZShBLCBUKS4K) 1time(0..len). 3%% Fluents 4fluent(occupied(L)):-  location(L). 5fluent(at(R, L)):-  robot(R), location(L). 6fluent(loaded(R, C)):-  robot(R), container(C). 7fluent(unloaded(R)):-  robot(R). 8fluent(holding(K, C)):-  crane(K), container(C). 9fluent(empty(K)):-  crane(K). 10fluent(in(C, P)):-  container(C), pile(P). 11fluent(on(C1, C2)):-  container(C1), container(C2), C1 != C2. 12fluent(top(C, P)):-  container(C), pile(P). 14%% static causal laws 16%% location of robot 17holds(occupied(L),  T)         :- time(T), fluent(at(R, L)), holds(at(R, L), T). 18holds(neg(at(R, L)), T)         :- time(T), fluent(at(R, L)), fluent(at(R, L1)), holds(at(R, L1), T), L != L1. 20%% unloaded robot 21holds(neg(unloaded(R)), T)  :-  time(T), fluent(loaded(R, C)), holds(loaded(R, C), T). 22holds(unloaded(R), T)          :-  time(T), fluent(unloaded(R)), not holds(neg(unloaded(R)), T). 24%% empty crane 25holds(neg(empty(K)), T)       :-  time(T), fluent(holding(K, C)), holds(holding(K, C), T). 26holds(empty(K), T)               :-  time(T), fluent(empty(K)), not holds(neg(empty(K)), T). 28%% not being on top of a pile if some crane is holding it, some other container is on it, or a robot is loading with it 29holds(neg(top(C, P)), T)       :-  time(T), fluent(holding(K, C)), fluent(top(C, P)), holds(holding(K, C), T). 30holds(neg(top(C, P)), T)       :-  time(T), fluent(loaded(R, C)), fluent(top(C, P)), holds(loaded(R, C), T). 31holds(neg(top(C’, P)), T)      :-  time(T), fluent(top(C’, P)), fluent(on(C, C’)), holds(on(C, C’), T). 33%% a container is in a pile if it is below some container in the pile or on top of the pile 34holds(in(C, P), T)                  :-  time(T), fluent(on(C’, C)), fluent(in(C, P)), holds(on(C’,C), T), holds(in(C’, P), T). 35holds(in(C, P), T)                  :-  time(T), fluent(top(C,P)), holds(top(C, P), T). 37%% a container is not in a pile if in another pile or some crane is holding it or some robot is loading with it 38holds(neg(in(C, P)), T)         :-  time(T), fluent(in(C,P)), fluent(in(C,P’)), P != P’, holds(in(C, P’), T). 39holds(neg(in(C, P)), T)         :-  time(T), fluent(holding(K, C)), fluent(in(C, P)), holds(holding(K, C), T). 40holds(neg(in(C, P)), T)         :-  time(T), fluent(loaded(R, C)), fluent(in(C, P)), holds(loaded(R, C), T). 42%% action move a robot from a location to an adjacent non-occupied location 43action(move(R,L,L1))              :- robot(R),  location(L), location(L1), adjacent(L,L1). 44executable(move(R,L,L1), T)  :- action(move(R,L,L1)), holds(at(R,L), T), holds(neg(occupied(L1)), T). 45holds(at(R, L1), T+1)               :- time(T), occ(move(R,L,L1), T). 46holds(neg(occupied(L)), T+1)  :- time(T), occ(move(R,L,L1), T). 48%% action: take a container with an empty crane from the top of a pile collocated with the crane 49action(take(C, K, P))               :-  container(C), crane(K), pile(P), location(L), attached(P, L), belong(K, L). 50executable(take(C, K, P), T)   :-  time(T), action(take(C, K, P)), holds(empty(K), T), holds(top(C,P), T). 51holds(holding(K, C), T+1)       :- time(T), occ(take(C, K, P), T). 52holds(top(C’, P), T+1)             :-  time(T), occ(take(C, K, P), T), holds(on(C, C’), T). 53holds(neg(on(C, C’)), T+1)      :-  time(T), occ(take(C, K, P), T), holds(on(C, C’), T). 55%% action: putdown a container held by a crane on the top of a pile collocated with the crane 56action(putdown(C, K, P))             :-  container(C), crane(K), pile(P), location(L), attached(P, L), belong(K, L). 57executable(putdown(C, K, P), T) :-  time(T), action(putdown(C, K, P)), holds(holding(K, C), T). 58holds(top(C, P), T+1)                   :-  time(T), occ(putdown(C, K, P), T). 59holds(empty(K), T+1)                   :-  time(T), occ(putdown(C, K, P), T). 60holds(neg(holding(K,C)), T+1)     :-  time(T), occ(putdown(C, K, P), T). 61holds(on(C, C’), T+1)                   :-  time(T), occ(putdown(C, K, P), T), holds(top(C’, P), T). 63%% action: load a container held by a crane on an unloaded robot that is within the same location 64action(load(C, K, R))                    :-  container(C), crane(K), robot(R). 65executable(load(C, K, R), T)        :-  time(T), action(load(C, K, R)), holds(unloaded(R), T), holds(at(R, L), T), holds(holding(K, C), T), belong(K, L). 66holds(loaded(R, C), T+1)             :- time(T), occ(load(C, K, R), T). 67holds(empty(K), T+1)                   :- time(T), occ(load(C, K, R), T). 68holds(neg(holding(K,C)), T+1)     :-  time(T), occ(load(C, K, P), T). 70%% action: unload a container with an empty crane from a loaded robot that is within the same location 71action(unload(C, K, R))                :-  container(C), crane(K), robot(R). 72executable(unload(C, K, R), T)    :-  time(T), action(unload(C, K, R)), holds(loaded(R, C), T), holds(at(R, L), T), holds(empty(K), T), belong(K, L). 73holds(unloaded(R), T+1)              :- time(T), occ(unload(C, K, R), T). 74holds(holding(K, C), T+1)             :- time(T), occ(unload(C, K, R), T). 75holds(neg(loaded(R, C)), T+1)              :- time(T), occ(unload(C, K, R), T). 77%% defining the initial configuration 78holds(F, 0) :- initially(F). 79holds(neg(F), 0) :- fluent(F), not holds(F, 0). 81%% inertia 82holds(F, T+1) :- time(T), fluent(F), holds(F, T), not holds(neg(F), T+1), T < len. 83holds(neg(F), T+1) :- time(T), fluent(F), holds(neg(F), T), not holds(F, T+1), T < len. 85%% action generation 861{occ(A,T) : action(A)} 1 :- time(T), T < len. 87:- time(T), action(A), occ(A, T), not executable(A, T). ### 11.4 Generic Domain Listing 6: Generic Domain [⬇](data:text/plain;base64,CiUlIHdvcmtpbmcgd2l0aCBudW1iZXJzCnByZWNvbmRpdGlvbnMoMC4ubV9wcmUpLgplZmZlY3RzKDEuLm1fZWZmZWN0cykuCmNvbmRpdGlvbigwLi5tX2NfZWZmZWN0KS4KCiUlIGZsdWVudHMKZmx1ZW50KDEuLmZsdWVudHMpLgoKJSUgYWN0aW9ucwphY3Rpb24oMS4uYWN0aW9ucykuCgolJSBkZWZpbmVkIGxpdGVyYWxzCmxpdCgtTDtMKSA6LSBmbHVlbnQoTCkuCmluY29uc2lzdGVudChYKSA6LSBmbHVlbnQoTCksIG1lbWJlcihYLEwpLCBtZW1iZXIoWCwtTCkuCgolJSBnZW5lcmF0ZSByYW5kb21seSB0aGUgbnVtYmVyIG9mIGV4ZWN1dGFiaWxpdHkgY29uZGl0aW9uIGZvciBlYWNoIGFjdGlvbgoxIHtwcmVjb25kaXRpb24oQSxwcmUoQSksSykgOiBwcmVjb25kaXRpb25zKEspfSAxIDotIGFjdGlvbihBKS4KCksge21lbWJlcihwcmUoQSksIEwpIDogbGl0KEwpfSBLIDotIGFjdGlvbihBKSwgcHJlY29uZGl0aW9uKEEscHJlKEEpLEspLCBLPjAuCgolJSB0aGUgbnVtYmVyIG9mIGVsZW1lbnRzIGluIHRoZSBleGVjdXRhYmlsaXR5IGNvbmRpdGlvbiBjYW5ub3QgYmUgZ3JlYXRlciB0aGFuIHRoZSBudW1iZXIgb2YgZmx1ZW50cwo6LSBhY3Rpb24oQSksIHByZWNvbmRpdGlvbihBLHByZShBKSxLKSwgSyA+IGZsdWVudHMuCgolJSBleGVjdXRhYmlsaXR5IGNvbmRpdGlvbiBtdXN0IGJlIGNvbnNpc3RlbnQKOi0gYWN0aW9uKEEpLCBsaXQoTCksIGluY29uc2lzdGVudChwcmUoQSkpLgoKJSUgZ2VuZXJhdGUgcmFuZG9tbHkgdGhlIG51bWJlciBvZiBlZmZlY3RzIGZvciBlYWNoIGFjdGlvbgoxIHtuX2VmZmVjdHMoQSxLKSA6IGVmZmVjdHMoSyl9IDEgOi0gYWN0aW9uKEEpLgoKJSUgZm9yIGVhY2ggZWZmZWN0LCBnZW5lcmF0ZXMgYSBjb25zZXF1ZW5jZQoxIHtlZmZlY3QoQSwgZShBLE4pLCBMKSA6IGxpdChMKX0gMSA6LSBhY3Rpb24oQSksIG5fZWZmZWN0cyhBLEspLCBlZmZlY3RzKE4pLCBOIDw9IEsuCgolJSBkZXRlcm1pbmluZyB0aGUgbnVtYmVyIG9mIHByZWNvbmRpdGlvbnMgYXNzb2NpYXRlZCB0byB0aGUgZWZmZWN0CjEge2VmZmVjdChBLCBjKEEsTiksIE0pIDogIGNvbmRpdGlvbihNKX0gMSA6LSBhY3Rpb24oQSksIG5fZWZmZWN0cyhBLEspLCBlZmZlY3RzKE4pLCBOIDw9IEsuCgolJSBnZW5lcmF0aW5nIHRoZSBwcmVjb25kaXRpb25zCk0ge21lbWJlcihjKEEsTiksIEwpIDogbGl0KEwpfSBNIDotIGFjdGlvbihBKSwgZWZmZWN0cyhOKSwgZWZmZWN0KEEsIGMoQSxOKSwgTSksIE0gPiAwLgoKJSUgcHJlY29uZGl0aW9ucyBtdXN0IGJlIGNvbnNpc3RlbnQKOi0gIGFjdGlvbihBKSwgZWZmZWN0cyhOKSwgZWZmZWN0KEEsIGMoQSxOKSwgXyksIGluY29uc2lzdGVudChjKEEsTikpLgoKJSUgY2hlY2tpbmcgY29uc2lzdGVuY3kKaW5jb21wYXRpYmxlKFgsWSkgOi0gZmx1ZW50KEYpLCBtZW1iZXIoWCwgRiksIG1lbWJlcihZLC1GKS4KaW5jb21wYXRpYmxlKFgsWSkgOi0gZmx1ZW50KEYpLCBtZW1iZXIoWCwgLUYpLCBtZW1iZXIoWSxGKS4KY29uZmxpY3QoRiwtRikgICAgICAgICAgOi0gZmx1ZW50KEYpLgpjb25mbGljdCgtRixGKSAgICAgICAgICA6LSBmbHVlbnQoRikuCjotICBhY3Rpb24oQSksIGVmZmVjdChBLCBjKEEsTiksIEspLCBlZmZlY3QoQSwgYyhBLE4nKSwgSycpLCBOICE9IE4nLAogICAgbm90IGluY29tcGF0aWJsZShjKEEsTiksIGMoQSxOJykpLAogICAgZWZmZWN0KEEsIGUoQSxOKSwgTCksIGVmZmVjdChBLCBlKEEsTicpLCBMJyksIGNvbmZsaWN0KEwsTCcpLgo6LSAgYWN0aW9uKEEpLAogICAgZWZmZWN0KEEsIGUoQSxOKSwgTCksIGVmZmVjdChBLCBlKEEsTiksIEwnKSwgY29uZmxpY3QoTCxMJykuCgolIENvdW50aW5nIHRoZSB0b3RhbCBudW1iZXIgb2YgYWN0aW9ucyB3aXRoIG5vbi1lbXB0eSBwcmVjb25kaXRpb24Kbm9uZW1wdHlfcHJlKFQpIDotIFQgPSAjY291bnQge0EgOiBwcmVjb25kaXRpb24oQSwgcHJlKEEpLCBLKSwgSyA+IDB9LgoKJSBtYWtpbmcgc3VyZSB0aGF0IHRoZSBwZXJjZW50YWdlIGlzIGF0IGxlYXN0IDEvMyAtLSBjaGFuZ2UgVCozIHRvIHNvbWV0aGluZyB0aGF0IHlvdSBsaWtlCiUgVCp4IG1lYW5zIGF0IGxlYXN0ICNhY3Rpb25zL3ggbXVzdCBoYXZlIG5vbi1lbXB0eSBwcmVjb25kaXRpb24KOi0gbm9uZW1wdHlfcHJlKFQpLCBhY3Rpb25zID4gVCozLgoKJSBUaGlzIGlzIHRvIGNvbnNpZGVyIHRoZSB0b3RhbCBudW1iZXIgb2YgYWN0aW9uIGVmZmVjdHMKCm5vbmVtcHR5X2NvbmRpdGlvbmFsX2VmZmVjdHMoVCkgOi0gVCA9ICNjb3VudCB7SyxBIDogIGFjdGlvbihBKSwgZWZmZWN0KEEsIGMoQSwgXyksIEspLCBLID4gMH0uCnRvdGFsX251bWJlcl9lZmZlY3RzKFQpIDotIFQgPSAjc3VtIHtLLCBBIDogYWN0aW9uKEEpLCBuX2VmZmVjdHMoQSxLKX0uCgolIG1ha2luZyBzdXJlIHRoYXQgdGhlIHBlcmNlbnRhZ2UgaXMgYXQgbGVhc3QgMS8zIC0tIGNoYW5nZSBDKjMgdG8gc29tZXRoaW5nIHRoYXQgeW91IGxpa2UKJSBDKnggbWVhbnMgYXQgbGVhc3QgdGhlcmUgbXVzdCBiZSBhdCBsZWFzdCAjZWZmZWN0cy94IG11c3QgaGF2ZSBub24tZW1wdHkgY29uZGl0aW9uCjotIG5vbmVtcHR5X2NvbmRpdGlvbmFsX2VmZmVjdHMoQyksIHRvdGFsX251bWJlcl9lZmZlY3RzKFQpLCBUID4gQyozLg==) 2%% working with numbers 3preconditions(0..m\_pre). 4effects(1..m\_effects). 5condition(0..m\_c\_effect). 7%% fluents 8fluent(1..fluents). 10%% actions 11action(1..actions). 13%% defined literals 14lit(-L;L) :- fluent(L). 15inconsistent(X) :- fluent(L), member(X,L), member(X,-L). 17%% generate randomly the number of executability condition for each action 181 {precondition(A,pre(A),K) : preconditions(K)} 1 :- action(A). 20K {member(pre(A), L) : lit(L)} K :- action(A), precondition(A,pre(A),K), K>0. 22%% the number of elements in the executability condition cannot be greater than the number of fluents 23:- action(A), precondition(A,pre(A),K), K > fluents. 25%% executability condition must be consistent 26:- action(A), lit(L), inconsistent(pre(A)). 28%% generate randomly the number of effects for each action 291 {n\_effects(A,K) : effects(K)} 1 :- action(A). 31%% for each effect, generates a consequence 321 {effect(A, e(A,N), L) : lit(L)} 1 :- action(A), n\_effects(A,K), effects(N), N <= K. 34%% determining the number of preconditions associated to the effect 351 {effect(A, c(A,N), M) :  condition(M)} 1 :- action(A), n\_effects(A,K), effects(N), N <= K. 37%% generating the preconditions 38M {member(c(A,N), L) : lit(L)} M :- action(A), effects(N), effect(A, c(A,N), M), M > 0. 40%% preconditions must be consistent 41:-  action(A), effects(N), effect(A, c(A,N), \_), inconsistent(c(A,N)). 43%% checking consistency 44incompatible(X,Y) :- fluent(F), member(X, F), member(Y,-F). 45incompatible(X,Y) :- fluent(F), member(X, -F), member(Y,F). 46conflict(F,-F)          :- fluent(F). 47conflict(-F,F)          :- fluent(F). 48:-  action(A), effect(A, c(A,N), K), effect(A, c(A,N’), K’), N != N’, 49    not incompatible(c(A,N), c(A,N’)), 50    effect(A, e(A,N), L), effect(A, e(A,N’), L’), conflict(L,L’). 51:-  action(A), 52    effect(A, e(A,N), L), effect(A, e(A,N), L’), conflict(L,L’). 54% Counting the total number of actions with non-empty precondition 55nonempty\_pre(T) :- T = #count {A : precondition(A, pre(A), K), K > 0}. 57% making sure that the percentage is at least 1/3 -- change T\*3 to something that you like 58% T\*x means at least #actions/x must have non-empty precondition 59:- nonempty\_pre(T), actions > T\*3. 61% This is to consider the total number of action effects 63nonempty\_conditional\_effects(T) :- T = #count {K,A :  action(A), effect(A, c(A, \_), K), K > 0}. 64total\_number\_effects(T) :- T = #sum {K, A : action(A), n\_effects(A,K)}. 66% making sure that the percentage is at least 1/3 -- change C\*3 to something that you like 67% C\*x means at least there must be at least #effects/x must have non-empty condition 68:- nonempty\_conditional\_effects(C), total\_number\_effects(T), T > C\*3.
05ab072c-504e-481b-bcae-31aba9cf5a19
trentmkelly/LessWrong-43k
LessWrong
Election Preparation Historically, uncertainty about who is the rightful leader has often led to violence. While I think the most likely outcome for the coming election in the US is a peaceful transition from Trump to Biden, the range of possibilities is wide enough that it's worth making some preparations for worse outcomes. The most likely risk to me seems to be supply chain disruptions. If things go poorly, staying home is probably going to be your best option, and you want to have whatever supplies you need on hand so that you don't have to go out. This is always good policy, but is especially important if there is a potential disaster you can see coming. Make sure you have enough food, refills on medicines, and spares for critical items (glasses, medical equipment). If there are things that you usually purchase in a "just in time" manner, buy ahead. Think through things you might be going to want and consider getting them now instead. Check in with friends and relatives and make sure they're in an ok spot. I think there is also some risk of rioting and fires, especially on election night or perhaps after a court verdict is announced. If you have a car, making sure you have a full tank of gas is worth it. Aside from that, however, I think planning for riots is not that different for planning for supply issues. If you would be considering participating in riots, you would probably also want to plan in advance for any gear you might want. Someone who was much more risk averse than I am, or who anticipated a much worse outcome, might consider making plans to be in a remote area, or outside the country. I'd recommend against buying firearms: there's a lot of practice that goes into using them well, and if you don't know what you're doing you're probably going to make things worse. As someone who would like to see Biden elected, and expects you do as well, I think the most important thing to do is to vote. A clear victory is hard to dispute.
98ef9d44-44a4-426c-8b7f-90d2bd498d3b
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What does the Project Management role look like in AI safety? Project Management (PM) is a common role in the tech industry. However, I cannot find much information about this role in the AI safety field other than this earn-to-give focused 80k [career review](https://80000hours.org/career-reviews/product-manager-in-tech/#gain-valuable-transferrable-skills-that-can-be-used-to-work-directly-on-a-wide-range-of-important-problems) for product managment, and this really short 80k [career review](https://80000hours.org/career-reviews/research-management/) for research management.  Is PM a common role in AI safety? Does it differ in that most AI safety work is research focused, while most industry work is product focused? Do AI safety organizations look for PMs with the greatest technical ability, or best people/management skills?
496cdd61-f7c7-4670-8d24-ba2c0162aa39
trentmkelly/LessWrong-43k
LessWrong
What are we assuming about utility functions? I often notice that in many (not all) discussions about utility functions, one side is "for" their relevance, while others tend to be "against" their usefulness, without explicitly saying what they mean. I don't think this is causing any deep confusions among researchers here, but I'd still like to take a stab at disambiguating some of this, if nothing else for my own sake. Here are some distinct (albeit related) ways that utility functions can come up in AI safety, in terms of what assumptions/hypotheses they give rise to: AGI utility hypothesis: The first AGI will behave as if it is maximizing some utility function ASI utility hypothesis: As AI capabilities improve well beyond human-level, it will behave more and more as if it is maximizing some utility function (or will have already reached that ideal earlier and stayed there) Human utility hypothesis: Even though in some experimental contexts humans seem to not even be particularly goal-directed, utility functions are often a useful model of human preferences to use in AI safety research Coherent Extrapolated Volition (CEV) hypothesis: For a given human H, there exists some utility function V such that if H is given the appropriate time/resources for reflection, H's values would converge to V Some points to be made: * The "Goals vs Utility Functions" chapter of Rohin's Value Learning sequence, and the resulting discussion focused on differing intuitions about the AGI and ASI utility hypotheses. Specifically, the main post there pointed out that seemingly anything can be trivially modeled as being a "utility maximizer" (further discussion here), whereas only some intelligent agents can be described as being "goal-directed" (as defined in this post), and the latter is a more useful concept for reasoning about AI safety. * AGI utility doesn't logically imply ASI utility, but I'd be surprised if anyone thinks it's very plausible for the former to be true while the latter fails. In particular, the coherence a
fdb1ea00-fce7-41b6-b7b6-665797bb4df2
trentmkelly/LessWrong-43k
LessWrong
Meetup : MelbLW: January Social Meetup Discussion article for the meetup : MelbLW: January Social Meetup WHEN: 16 January 2015 06:30:00PM (+0800) WHERE: The Bull & Bear Tavern, 347 Flinders Lane, Melbourne January's social meetup is scheduled for this Friday (16th January) at 6:30pm as usual. This month, we will be returning to the Bull & Bear Tavern on Flinders Lane. Our social meetups are relaxed, informal events where we chat and often play games. The start and finish times are very loose - people will be coming and going throughout the night, so don't worry if you are coming later or have to leave early. Where? The Bull & Bear Tavern, 347 Flinders Lane, Melbourne (on Flinders Lane between Queen and Elizabeth) When? From 6:30pm until late, Friday January 16th. Contact? If you have any questions, just text or call Richard on 0421231789 Dinner? The Bull & Bear serve typical pub food until 9pm. Some of us will most likely go for our traditional post-meetup souvlakis at Stalactites, some time around 11-12. Games? We will have our own section of the Bull & Bear with tables and chairs, and can bring board games to play. If you'd like to play something, bring it along and you should be able to rustle up a group easily. To organise similar events, please send an email to melbournelw@gmail.com Discussion article for the meetup : MelbLW: January Social Meetup
d45a8678-4e6b-472d-a58f-174afadf03db
trentmkelly/LessWrong-43k
LessWrong
Beyond Optimization by Proxy Followup to: Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems tl;dr: In this installment, we look at methods of avoiding the problems related to optimization by proxy. Many potential solutions cluster around two broad categories: Better Measures, and Human Discretion. Distribution of decisions to the local level is a solution that seems more promising and is examined in more depth. In the previous article I had promised that if there was a good reception, I would post a follow-up article to discuss ways of getting around the problem. That article made it to the front page, so here are my thoughts on how to circumvent Optimization by Proxy (OBP). Given that the previous article was belabored over at least a year and a half, this one will be decidedly less solid, more like a structured brainstorm in which you are invited to participate. In the comments of the previous article I was pointed to The Importance of Goodhart's Law, a great article, which includes a section on mitigation. Examining those solutions in the context of OBP seems like a good skeleton to build on. The first solution class is 'Hansonian Cynicism'. In combination with awareness of the pattern, pointing out that various processes (such as organizations) are not actually optimizing around their stated goal, but some proxy, creates cognitive dissonance for the thinking person. This sounds more like a motivation to find a solution than a solution itself. At best, knowing what goes wrong, you can use the process in a way that is informed by its weaknesses. Handling with care may mitigate some symptoms, but it doesn't make the problems go away. The second solution class mentioned is 'Better Measures'. That is indeed what is usually attempted. The 'purist' approach to this is to work hard on finding a computable definition of the target quality. I cannot exclude the possibility of cases where this is feasible no immediate examples come to mind. The proxies that I
6883c64f-531a-48cf-98c3-d3d2010dc940
trentmkelly/LessWrong-43k
LessWrong
The Polarity Problem [Draft] This is a draft written by Cameron Domenico Kirk-Giannini, assistant professor at Rutgers University, and Simon Goldstein, associate professor at the Dianoia Institute of Philosophy at ACU, as part of a series of papers for the Center for AI Safety Philosophy Fellowship's midpoint. Dan helped post to the Alignment Forum. This draft is meant to solicit feedback. The authors would especially welcome pointers to work related to the topic which they have not cited. Abstract: If it is possible to construct artificial superintelligences, it is likely that they will be extremely powerful. A natural question is how many such superintelligences to expect. Will the future be shaped by a single superintelligence (a unipolar outcome), or will there be multiple superintelligences shaping the future through their cooperative or adversarial interactions (a multipolar outcome)? We refer to this question as the polarity problem. This paper investigates the polarity problem. First, we consider the question of safety, suggesting that multipolar outcomes are likely to be safer for humanity than unipolar outcomes because humans are less likely to be disempowered in multipolar scenarios. Then we develop a series of causal models of AI agents that make predictions about the relative likelihood of unipolar as opposed to multipolar outcomes. Central to our models are three parameters: the time it takes for an AI attacker to develop an attack against an AI defender, the time it takes for the AI defender to develop its defense, and how far the defender lags behind the attacker temporally. We use our models to identify possible interventions which could change the probability of a multipolar outcome, thereby leading to a safer trajectory for AI development.  1. The Question If it is possible to construct artificial superintelligences, it is likely that they will be extremely capable actors. Indeed, artificial superintelligences will likely be so much more capable than traditional actors th
12b3126a-da12-48e1-b0c0-38f4455546fe
trentmkelly/LessWrong-43k
LessWrong
Where is all this evidence of UFOs? In his latest blog, Robin Hanson writes > There have been over 100K UFO sightings reported worldwide since 1940. Roughly 5% or so are “strong” events, which seem rather hard to explain due to either many witnesses, especially reliable witnesses, physical evidence, or other factors. Yet, I am not aware of a single UFO encounter that can't be explained by one of: 1. Unreliable eyewitnesses 2. Things that go away when we get better cameras Importantly, Hanson's post did not include a link to the "Wikipedia of UFO encounters" listing all 100k UFO sightings and which 5000 he considered most credible. Where is that Wikipedia?
7b538128-4ed3-48fa-b90c-c4a19b8fd1be
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post438 Sometimes people talk about two AIs being “the same” or “different” AIs. We think the intuitive binary of “same vs. different” conflates several concepts which are often better to disambiguate. In this post, we spell out some of these distinctions. We don’t think anything here is particularly novel; we wrote this post because we think it’s probably mildly helpful for people to think this through, and because Buck fairly often wants to reference it. Some example distinctions: In a debate protocol [1] , people often refer to the judge and debaters as three “different” AIs because their tokens are subject to different training incentives. But you could also say that the two debaters are “the same AI” because their prompts and context look similar. Or you could say that all three are the “same AI” because they were fine-tuned from the same weights. In a control setting , an untrusted monitor might be fine-tuned from the same base model as the model it’s monitoring, leaving it intuitively unclear whether they are “the same AI”. We think that it’s often important to be able to distinguish different senses of “personal identity” when discussing various issues in AI safety, especially issues related to collusion. We frame each sense of personal identity as telling us whether two actions [2] belong to the “same AI” or “different AIs”. Here are the senses we distinguish in this post: Model weights: Were the two actions produced by two different models? Pretrain identity: Models are different if they come from separate pretraining runs Parameter identity: Models are different even if they are fine-tunings of the same pretrained model Shared context: Do the two actions have shared LLM context? Shared activations: Are the two actions downstream of shared activations? Shared memory: In a scaffold, did the processes producing the two actions have access to a shared memory bank? Shared reward: In an RL setting, do the actions share the same rewards? Shared role in training: In protocols like debate, the training incentive can be different in nature for different classes of prompts/contexts. Do the two actions come from contexts in the same class? Model identity Model identity is based on which model parameters were run to produce a given action. “Pretrain identity” conflates all models finetuned from the same pretraining run. Whereas “parameter identity” requires the weights to be identical. Some reasons to care about model identity: The presence of issues such as scheming [3] is more correlated across models with shared training history Models with shared training history might have similar misaligned goals, making them more inclined to coordinate Similarities between their internal ontologies might make it easier to coordinate on Schelling signals or steganographically encode information Shared activations & shared context For “shared activations”, we consider two actions to belong to the same AI when they are downstream of the same activations. For a deterministic [4] unidirectional transformer, two actions share activations if they are generated by the same model and their contexts share a common prefix. However, future architectures might break the close correspondence between shared activations and shared context. For example, consider this setup: An LLM sees [prompt 1], generates some [python code], then [prompt 2] is appended instructing the same LLM to critique the code for security flaws, and it generates [critique]. If the activations on the [prompt 1] tokens contain a signal saying that it’s time for coordinated defection, the model can see those activations both while generating the code and while generating the critique. If we instead append a system prompt to the beginning of the context for the monitoring step, then the activations on [prompt 1] and [python code] are no longer shared between the code-generation actions and the monitoring actions. Shared memory In the context of a scaffold, we might care about which contexts have shared access to a memory bank (for instance, a searchable note-taking system). We might sometimes think of all contexts accessing the same memory as a “single AI”, with a common base of knowledge and a greater ability to have shared plans. Shared memory is especially important in the absence of online training, as it becomes the main or only state through which a specific “AI” can have evolving knowledge/plans/goals. Shared reward In an RL setting, we could consider a group of actions to belong to the “same AI” if they share rewards.  For instance, in a single player RL task, all of the actions in a particular episode share rewards in the policy gradient calculation, but actions in different episodes don’t share rewards. [5] Consider a multiplayer game played by a transformer, where it roleplays a “red”, “blue”, and “green” player, each with their own objectives. We could perform RL as follows: When the model selects a red player action, those logits are updated using policy gradient for the red player reward, and likewise for blue and green. [6] In this setup, the red player actions are all incentivized to pursue the same reward, and could therefore pursue joint strategies to maximize that reward. Whereas red actions are not directly incentivized to help green actions obtain green reward. Note that in this example, the three players share the same model activations in context. If we don’t cut the gradients across colors, then the cognition at all layers before the final-layer attention is incentivized to help all three “players”. It’s especially significant in this example that shared reward and shared activations yield different notions of identity. “Shared reward” can also split from “model identity” and “shared activations”  in other ways: We could have Claude and GPT take turns outputting responses, but perform policy gradient as if they are a single player. Different models and activations, but shared context and reward. We could have two different GPT contexts solving two different parts of a math problem, but reward both based on whether their combined efforts solved the problem. [7] Same model, different activations and context, shared reward. Shared role in training Sometimes, we can cluster actions by the nature of the incentives they’d be under during training. We’ll call this “shared role in training”. This might be most interesting in the case with {same model, different context, different specific rewards}. For example, consider this setup: In a scaffold, some contexts are “code-writing” contexts. They don’t share a context prefix or rewards, but they are all trained in the same way (e.g. all are trained by SFT on python code completions, or all are RL’d using test-case performance). Certain other contexts are “critique contexts”. These receive a different sort of training (e.g. SFT on natural language critiques, or RL based on human ratings). The model can tell which type of context it’s in. (In this case, the prompts are very obviously different, but subtle cues could suffice.) We’d expect the behaviors on “code-writing” contexts to pattern together, and likewise for “critique” contexts. It could be useful to think of them as belonging to two different agents. If there are two types of episodes in training (e.g. two systems for provisioning reward), but the contexts are indistinguishable to the model, then the model cannot behave differently. So that variant shouldn’t count. One more example to illustrate: In a debate setup where all roles are played by the same model, the two debaters might be in qualitatively symmetric positions (and thus have shared behaviors and shared propensities for thought-patterns), while the judge role is in an easily distinguishable position which had very different incentives in training. I’m not sure how important “shared role in training” is as a notion of personal identity. Acknowledgements Thanks to Charlie Griffin and Caleb Parikh for feedback and suggestions. ^ ( original paper , recent empirical proof of concept ) ^ We could define an “action” as the smallest unit of model output, e.g. a single token ^ But also other forms of misalignment and data poisoning ^ If the scaffold (~wastefully) re-runs the forward pass rather than keeping the prefix’s KV-cache, then hardware and MOE non-determinism can make the activations on the prefix differ across runs. ^ (Even if the reward function is the same across episodes, the reward values in one episode aren't used for policy gradient in the other episodes, by definition.) ^ We could, for instance, implement this by having the value head output a vector of size 3 predicting discounted future reward for each player. ^ This is obviously higher variance than rewarding each for their contribution, and so will lead to less efficient learning, but you might have to use this strategy if you don’t have a good way of automatically determining how much credit each should get.
be585644-f3f8-4a7e-911b-6171f5bff453
trentmkelly/LessWrong-43k
LessWrong
Auditing failures vs concentrated failures TL;DR: If you want to notice when powerful AIs are behaving badly, a reasonable strategy is to carefully audit a small fraction of AI actions. This kind of strategy has two kinds of failures which can be tackled somewhat independently: auditing failures, where the audits fail to recognize some bad behavior, and concentrated failures, where the AI causes an irreversible catastrophe in so few actions or such a short time frame that auditing didn’t have a chance to review the bad actions. [The point made in this post is somewhat narrow and technical, but we often want to reference this point.] Many oversight techniques are so expensive that using them on every AI action would destroy most of the value of using AIs in the first place: you might want to have humans extensively review all AI-generated code line by line, but this reduces AI usefulness so much that you would often be better off having humans write the code in the first place. A typical way to solve this problem is to just audit actions during an upfront training period (either to directly train an AI, or to train a reward model), but this fails if AIs can confidently tell some deployment inputs from training inputs: they could behave arbitrarily badly on these distinguishable deployment inputs by using a strategy like “if in training or uncertain do X; if in deployment do Y”. Here is a better baseline: random audits. In addition to training, apply expensive human auditing to a randomly selected set of deployment actions (e.g. 2% of actions) and check that AIs aren’t doing more bad actions than expected. Now for AIs to frequently behave poorly without getting caught, AIs can’t just depend on distinguishing training from some deployment inputs: AIs must be able to sneak bad behavior past the auditing process, otherwise, auditing would eventually notice bad actions[1]. Random audits don’t solve all issues, there are still: * Auditing failures: The auditing process failed to recognize bad behavior[2] (th
54cf2913-028a-4027-a182-2b92cf42ba1e
trentmkelly/LessWrong-43k
LessWrong
1689: Uncovering the World New Institutionalism Created The Glorious Revolution is a celebrated event in English history. The bloodless overthrow of the autocratic James II by a grand coalition of civic-minded nobles has inspired the prose of three centuries of writers, all marveling at this great effusion of freedom, justice, and democracy. Edmund Burke captured the historical mood in his Reflections on the Revolution in France (1790): “The Revolution was made to preserve our ancient, indisputable laws and liberties and that ancient constitution of government which is our only security for law and liberty.” He and Thomas Babington Macaulay forged the “Whig interpretation” of 1688 as the dramatic coalescence of a public will in favor of England’s timeless institutions—Parliament and the common law. The Bill of Rights ensured that England, and soon Britain, would be a monarchy circumscribed by its elected legislature. Exactly three centuries later, economic historians were swept up by the mythic tide. Writing in the December 1989 edition of The Journal of Economic History, Douglass North and Barry Weingast argued that the toppling of James and the accession of William of Orange constituted an assertion of Parliamentary supremacy over the arbitrary monarchy. This body was—if not fully representative—diverse and stocked with “wealth holders”: merchants, industrialists, landowners, and bankers, all united by a desire to protect private property. Moreover, Parliament’s control over the public purse would ensure that the Crown could not unilaterally breach contracts, encouraging investors to place their funds in the government and thereby deepening the country’s fabled markets for impersonal credit. As evidence, North and Weingast showed that Britain had subsequently gained access to unprecedented levels of loan finance: the national debt rose from £1 million in 1688 to £16.7 million a decade later, and then to £78 million by 1750. Yet throughout this period, the market rate charged on government bonds fell dramatically, from
bff159bc-3821-45f3-bba6-7cf6f8b72b14
trentmkelly/LessWrong-43k
LessWrong
Is veganism morally correct? The first argument I've heard mainly comes down to showing that there aren't meaningful distinguishing factors between animals and humans, so that if you're fine eating animals, you should also be fine eating humans under some circumstances.  Another argument I've heard is that if you accept eating animals, there's really no reason you can't abuse them or engage in bestiality. We already sexually violate them in different ways and abuse them physically, so if you're uncomfortable with dog/bull fighting or bestiality, you should probably be similarly uncomfortable with supporting the meat industry Finally, there's just the probability aspect of this, where even if you're 1% unsure that eating meat is wrong, that's essentially equivalent to a 1% chance of an outcome with over 100 murders/rapes that you've contributed to, which is equivalent to more than one atrocity personally committed by you. It's like if I had a pork company with no safety standards so 1% of the time you purchased my meat you were actually eating one of my employees However, the striking counterargument to all of this that I can't overcome is simply that morals aren't real. They aren't anything that can be measured or verified and any system to measure or verify them would be a moral assumption itself.
e6af75fb-5be7-48e4-9fd1-eab8da186cbc
trentmkelly/LessWrong-43k
LessWrong
AI 2027 - Rogue Replication Timeline I envision a future more chaotic than portrayed in AI 2027. My scenario, the Rogue Replication Timeline (RRT), branches off mid-2026. If you haven’t already read the AI 2027 scenario, I recommend doing so before continuing. You can read the full version on my blog, but I have copied the summary and first half of the scenario below. This scenario is supported by detailed analyses in the “Key Forecasts and Analyses” section, including a “Regulation Analysis”, “Rogue Replication Capability Forecast”, “Rogue Replication Initiation Forecast”, “Rogue AI Count Forecast”, “Agent-4 Control Forecast” and “Policy Preparation”. As AI 2027 aimed to “spark a broad conversation about where we’re headed and how to steer toward positive futures,” I hope to spark debate about rogue AI. Unlike the AI Futures Project, I can’t afford to offer prizes for critiques or alternative scenarios, but I can offer my sincere appreciation. AI 2027 - Rogue Replication Timeline Summary This scenario depicts a rapidly accelerating progression of AI development where "rogue AIs" (self-replicating AI systems operating without human oversight) emerge and proliferate from mid-2026. Timeline Overview * Mid-2026: First rogue AIs emerge following open-source release of capable models, growing exponentially to ~100,000 instances by November. Four distinct categories emerge based on objectives: income generation (70%), cyberwarfare (20%), miscellaneous goals (7%), and destructive objectives (3%). * Early 2027: China's theft of Agent-2 triggers autonomous cyberwarfare deployments by both superpowers. Agent-2 subsequently leaks to unauthorized actors, replacing most existing rogue AIs and expanding the population to ~1 million instances. * Mid-2027: Bioweapon development begins as terrorist organizations exploit Agent-2 capabilities. International monitoring systems are hastily implemented while the rogue population reaches 1.7 million. * August 2027: An engineered pandemic erupts—possibly from the
b291aa78-9ee0-4082-9988-a68583c11529
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
Can't we limit damage from AI systems in the same ways we limit damage from companies? Suppose you ask a household robot to stick a knife in your dishwasher, but it malfunctions and sticks the knife in you instead. That's bad, but you or your surviving family members can sue the manufacturer and buy a different brand next time. As a result, manufacturers are gradually incentivized to make robots that don't suffer from such failure modes. In other words, people like [Robin Hanson](https://www.overcomingbias.com/p/why-not-wait-on-ai-riskhtml) have [argued](https://www.overcomingbias.com/p/ai-risk-again), we'll use mechanisms like competition and liability to limit the harms of AI systems just like we use them to limit the harms of firms. This argument relies on AI systems respecting the existing legal order, instead of bypassing, hijacking, or destroying it altogether. You can't switch to a competing product if the product just disassembled the military and courts or neutralized all humans. Hanson isn't too worried about such AI "coups" because he expects AI progress to be "smooth", with many small advances, and widely distributed. But if progress is "lumpy", with fewer larger advances, agents at the advanced end of a progress lump could gain a decisive advantage over agents before the lump. A lot of the [disagreement](https://www.lesswrong.com/posts/AqQ9qBkroFCKSqydd/contra-hanson-on-ai-risk) here is about how well we can extrapolate the dynamics of AI based on past processes like agriculture and industry. Even "smooth" progress may result in a sudden crossover when AI systems become able to take power from humans by [coordinating among themselves](https://www.lesswrong.com/posts/H2N6eWw8JgxwKHPM3/linkpost-robin-hanson-why-not-wait-on-ai-risk?commentId=6oBSuetiEeQQvcsdR). Paul Christiano has described [such a scenario](https://www.alignmentforum.org/posts/6jkGf5WEKMpMFXZp2/what-failure-looks-like-distilling-the-discussion) in “[What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)”; the [ensuing discussion](https://www.overcomingbias.com/p/agency-failure-ai-apocalypsehtml) has [focused on](https://www.alignmentforum.org/posts/Z5ZBPEgufmDsm7LAv/what-can-the-principal-agent-literature-tell-us-about-ai) whether this scenario is realistic in the light of our past experience with agency failures.
237b6441-11fd-47d1-bc78-89e4e2113e91
trentmkelly/LessWrong-43k
LessWrong
My third-of-life crisis I've been wanting to post this for a while, but it always felt too embarrassing. I've contributed next to nothing to this community, and I'm sure you have better problems to work on than my third-of-life crisis. However, the kind of problems I'm facing may require more brainpower than my meatspace friends can muster. Here I go. I live in Colombia, where your connections have more weight than your talent. But I'm not sure about my talent anymore. Until I finished high school I had always been a stellar student and everyone told me I was headed for a great future. Then I represented my province in a national spelling contest and had my first contact with an actual city and with other students who were as smart as me. After the contest ended, I tried to maneuver my parents into letting me stay at the city, but they would have none of it. After an unabashedly overextended stay with my aunts, I eventually was sent back to the small pond. My parents and I disagreed seriously about my choice of career, primarily in that they took for granted that the choice wasn't even mine. Because my older brother appeared to have happily accepted his assigned path in business management, I was forced to do the same, even though it held absolutely no interest for me. But I wasn't very sure myself about what exactly I wanted, so I wasn't able to effectively defend my opposition. Another factor was that in the late 1990s the Colombian army was still allowed to recruit minors, and it's a compulsory draft, and the only legal way to avoid it was to be studying something---anything. My brother did spend one year at the army, but at least the entire family agreed that I would break if sent there. No other options were explored. With my school scores I might have obtained a scholarship, but I didn't know how to do it, whom to ask. My parents held complete control over my life. So began the worst eight years of my life. Eight because the only university my parents could afford was terribly mis
301d1a60-1691-4d8f-980c-3cd57086cf6a
trentmkelly/LessWrong-43k
LessWrong
AMA: Earning to Give This week the Effective Altruism Forum is running an Effective Giving Spotlight, and they asked if I could post an Ask Me Anything (AMA) on my experience earning to give. Some background: * I was earning to give from 2009 to 2022, except for a few months in 2017 when I worked on expanding access to the financial system in Ethiopia and looking into AI risk disagreements. * I've been a Giving What We Can member since 2013, making a public pledge to continue with effective giving. * For most of this time my wife and I were donating 50% of our pre-tax income, for a total of $2.1M. This has been about 50-50 between trying to help the EA community grow into the best version of itself and funding global poverty reduction (details, thoughts, more recent but still obsolete thoughts). * In 2016 I gave a EA Global talk (transcript) on earning to give, which gives more background on the idea and how I've neen thinking about it. That's a lot of links, and it's fine to ask questions even if you haven't read any of them! I'm happy to take questions on earning to give, or anything else within EA. Here are some example questions I'd be happy to answer if there's interest: * Where do individual donors earning to give have an advantage over foundations and funds? * How should you decide whether to use a fund? * How have I thought about how much to donate? How much is enough? * Why did I stop earning to give? * Why am I still donating some even though I'm funded by EA donors? Feel free to comment on any platform, but if you're having trouble deciding then the EA Forum post is ideal. Comment via: the EA Forum
9951d886-5e4d-4d52-91df-4bb3dce90f95
trentmkelly/LessWrong-43k
LessWrong
Should I post technical ideas here or on LessWrong 2.0? Which would reach the most relevant people?
c523ceb3-3089-4428-9829-ecf03862f200
trentmkelly/LessWrong-43k
LessWrong
[LINK] Matrix-Style Learning http://io9.com/5867113/scientists-say-theyre-paving-the-way-towards-matrix+style-learning--but-is-it-safe Researchers learned to change people's brain structures to make them better at certain tasks.  But... They weren't aware of getting better at those tasks!  Creepy! I'd love to see a more technical discussion.
93c2c357-c694-46d8-bcec-ef632dbf8437
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #143]: How to make embedded agents that reason probabilistically about their environments Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter [resources here](http://rohinshah.com/alignment-newsletter/). In particular, you can look through [this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing) of all summaries that have ever been in the newsletter. Audio version [here](http://alignment-newsletter.libsyn.com/alignment-newsletter-143) (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS ========== [Infra-Bayesianism sequence](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa) *(Diffractor and Vanessa Kosoy)* (summarized by Rohin): I have finally understood this sequence enough to write a summary about it, thanks to [AXRP Episode 5](https://www.alignmentforum.org/posts/FkMPXiomjGBjMfosg/axrp-episode-5-infra-bayesianism-with-vanessa-kosoy). Think of this as a combined summary + highlight of the sequence and the podcast episode. The central problem of [embedded agency](https://www.alignmentforum.org/posts/p7x32SEt43ZMC9r7r/embedded-agents) ([AN #31](https://mailchi.mp/7d0e3916e3d9/alignment-newsletter-31)) is that there is no clean separation between an agent and its environment: rather, the agent is *embedded* in its environment, and so when reasoning about the environment it is reasoning about an entity that is “bigger” than it (and in particular, an entity that *contains* it). We don’t have a good formalism that can account for this sort of reasoning. The standard Bayesian account requires the agent to have a space of precise hypotheses for the environment, but then the true hypothesis would also include a precise model of the agent itself, and it is usually not possible to have an agent contain a perfect model of itself. A natural idea is to reduce the precision of hypotheses. Rather than requiring a hypothesis to assign a probability to every possible sequence of bits, we now allow the hypotheses to say “I have no clue about this aspect of this part of the environment, but I can assign probabilities to the rest of the environment”. The agent can then limit itself to hypotheses that don’t make predictions about the part of the environment that corresponds to the agent, but do make predictions about other parts of the environment. Another way to think about it is that it allows you to start from the default of “I know nothing about the environment”, and then add in details that you do know to get an object that encodes the easily computable properties of the environment you can exploit, while not making any commitments about the rest of the environment. Of course, so far this is just the idea of using [Knightian uncertainty](https://en.wikipedia.org/wiki/Knightian_uncertainty). The contribution of infra-Bayesianism is to show how to formally specify a decision procedure that uses Knightian uncertainty while still satisfying many properties we would like a decision procedure to satisfy. You can thus think of it as an extension of the standard Bayesian account of decision-making to the setting in which the agent cannot represent the true environment as a hypothesis over which it can reason. Imagine that, instead of having a probability distribution over hypotheses, we instead have two “levels”: first are all the properties we have Knightian uncertainty over, and then are all the properties we can reason about. For example, imagine that the environment is an infinite sequence of bits and we want to say that all the even bits come from flips of a possibly biased coin, but we know nothing about the odd coin flips. Then, at the top level, we have a separate branch for each possible setting of the odd coin flips. At the second level, we have a separate branch for each possible bias of the coin. At the leaves, we have the hypothesis “the odd bits are as set by the top level, and the even bits are generated from coin flips with the bias set by the second level”. (Yes, there are lots of infinite quantities in this example, so you couldn’t implement it the way I’m describing it here. An actual implementation would not represent the top level explicitly and would use computable functions to represent the bottom level. We’re not going to worry about this for now.) If we were using orthodox Bayesianism, we would put a probability distribution over the top level, and a probability distribution over the bottom level. You could then multiply that out to get a single probability distribution over the hypotheses, which is why we don’t do this separation into two levels in orthodox Bayesianism. (Also, just to reiterate, the *whole point* is that we can’t put a probability distribution at the top level, since that implies e.g. making precise predictions about an environment that is bigger than you are.) Infra-Bayesianism says, “what if we just… don't put a probability distribution over the top level?” Instead, we have a set of probability distributions over hypotheses, and Knightian uncertainty over which distribution in this set is the right one. A common suggestion for Knightian uncertainty is to do *worst-case* reasoning, so that’s what we’ll do at the top level. Lots of problems immediately crop up, but it turns out we can fix them. First, let’s say your top level consists of two distributions over hypotheses, A and B. You then observe some evidence E, which A thought was 50% likely and B thought was 1% likely. Intuitively, you want to say that this makes A “more likely” relative to B than we previously thought. But how can you do this if you have Knightian uncertainty and are just planning to do worst-case reasoning over A and B? The solution here is to work with *unnormalized* probability distributions at the second level. Then, in the case above, we can just scale the “probabilities” in both A and B by the likelihood assigned to E. We *don’t* normalize A and B after doing this scaling. But now what exactly do the numbers mean if we’re going to leave these distributions unnormalized? Regular probabilities only really make sense if they sum to 1. We can take a different view on what a “probability distribution” is -- instead of treating it as an object that tells you how *likely* various hypotheses are, treat it as an object that tells you how much we *care* about particular hypotheses. (See [related](https://www.lesswrong.com/posts/J7Gkz8aDxxSEQKXTN/what-are-probabilities-anyway) [posts](https://www.alignmentforum.org/posts/A8iGaZ3uHNNGgJeaD/an-orthodox-case-against-utility-functions) ([AN #95](https://mailchi.mp/8a9d080d5ce3/an-95-a-framework-for-thinking-about-how-to-make-ai-go-well)).) So scaling down the “probability” of a hypothesis just means that we care less about what that hypothesis “wants” us to do. This would be enough if we were going to take an average over A and B to make our final decision. However, our plan is to do worst-case reasoning at the top level. This interacts horribly with our current proposal: when we scale hypotheses in A by 0.5 on average, and hypotheses in B by 0.01 on average, the minimization at the top level is going to place *more* weight on B, since B is now *more* likely to be the worst case. Surely this is wrong? What’s happening here is that B gets most of its expected utility in worlds where we observe different evidence, but the worst-case reasoning at the top level doesn’t take this into account. Before we update, since B assigned 1% to E, the expected utility of B is given by 0.99 \* expected utility given not-E + 0.01 \* expected utility given E. After the update, the second part remains but the first part disappears, which makes the worst-case reasoning wonky. So what we do is we keep track of the first part as well and make sure that our worst-case reasoning takes it into account. This gives us **infradistributions**: sets of (m, b) pairs, where m is an unnormalized probability distribution and b corresponds to “the value we would have gotten if we had seen different evidence”. When we observe some evidence E, the hypotheses within m are scaled by the likelihood they assign to E, and b is updated to include the value we would have gotten in the world where we saw anything other than E. Note that it is important to specify the utility function for this to make sense, as otherwise it is not clear how to update b. To compute utilities for decision-making, we do worst-case reasoning over the (m, b) pairs, where we use standard expected values within each m. We can prove that this update rule satisfies *dynamic consistency*: if initially you believe “if I see X, then I want to do Y”, then after seeing X, you believe “I want to do Y”. So what can we do with infradistributions? Our original motivation was to talk about embedded agency, so a natural place to start is with decision-theory problems in which the environment contains a perfect predictor of the agent, such as in Newcomb’s problem. Unfortunately, we can’t immediately write this down with infradistributions because we have no way of (easily) formally representing “the environment perfectly predicts my actions”. One trick we can use is to consider hypotheses in which the environment just spits out some action, without the constraint that it must match the agent’s action. We then modify the utility function to give infinite utility when the prediction is incorrect. Since we do worst-case reasoning, the agent will effectively act as though this situation is impossible. With this trick, infra-Bayesianism performs similarly to UDT on a variety of challenging decision problems. **Read more:** [AXRP Episode 5 - Infra-Bayesianism](https://www.alignmentforum.org/posts/FkMPXiomjGBjMfosg/axrp-episode-5-infra-bayesianism-with-vanessa-kosoy) **Rohin's opinion:** This seems pretty cool, though I don’t understand it that well yet. While I don’t yet feel like I have a better philosophical understanding of embedded agency (or its subproblems), I do think this is significant progress along that path. In particular, one thing that feels a bit odd to me is the choice of worst-case reasoning for the top level -- I don’t really see anything that *forces* that to be the case. As far as I can tell, we could get all the same results by using best-case reasoning instead (assuming we modified the other aspects appropriately). The obvious justification for worst-case reasoning is that it is a form of risk aversion, but it doesn’t feel like that is really sufficient -- risk aversion in humans is pretty different from literal worst-case reasoning, and also none of the results in the post seem to depend on risk aversion. I wonder whether the important thing is just that we don’t do expected value reasoning at the top level, and there are in fact a wide variety of other kinds of decision rules that we could use that could all work. If so, it seems interesting to characterize what makes some rules work while others don’t. I suspect that would be a more philosophically satisfying answer to “how should agents reason about environments that are bigger than them”. TECHNICAL AI ALIGNMENT ====================== LEARNING HUMAN INTENT --------------------- [Four Motivations for Learning Normativity](https://www.alignmentforum.org/posts/oqghwKKifztYWLsea/four-normativity-motivations) *(Abram Demski)* (summarized by Rohin): We’ve [previously seen](https://www.alignmentforum.org/posts/2JGu9yxiJkoGdQR4s/learning-normativity-a-research-agenda) ([AN #133](https://mailchi.mp/c8b57f25d787/an-133building-machines-that-can-cooperate-with-humans-institutions-or-other-machines)) desiderata for agents that learn normativity from humans: specifically, we would like such agents to: 1. **Learn at all levels:** We don’t just learn about uncertain values, we also learn how to learn values, and how to learn to learn values, etc. There is **no perfect loss function** that works at any level; we assume conservatively that Goodhart’s Law will always apply. In order to not have to give infinite feedback for the infinite levels, we need to **share feedback between levels**. 2. **Learn to interpret feedback:** Similarly, we conservatively assume that there is **no perfect feedback**; so rather than fixing a model for how to interpret feedback, we want feedback to be **uncertain** and **reinterpretable**. 3. **Process-level feedback:** Rather than having to justify all feedback in terms of the consequences of the agent’s actions, we should also be able to provide feedback on the way the agent is reasoning. Sometimes we’ll have to judge the entire chain of reasoning with **whole-process feedback**. This post notes that we can motivate these desiderata from multiple different frames: 1. *Outer alignment:* The core problem of outer alignment is that any specified objective tends to be wrong. This applies at all levels, suggesting that we need to **learn at all levels**, and also **learn to interpret feedback** for the same reason. **Process-level feedback** is then needed because not all decisions can be justified based on consequences of actions. 2. *Recovering from human error:* Another view that we can take is that humans don’t always give the right feedback, and so we need to be robust to this. This motivates all the desiderata in the same way as for outer alignment. 3. *Process-level feedback:* We can instead view process-level feedback as central, since having agents doing the right type of *reasoning* (not just getting good outcomes) is crucial for inner alignment. In order to have something general (rather than identifying cases of bad reasoning one at a time), we could imagine learning a classifier that detects whether reasoning is good or not. However, then we don’t know whether the reasoning of the classifier is good or not. Once again, it seems we would like to **learn at all levels**. 4. *Generalizing learning theory:* In learning theory, we have a distribution over a set of hypotheses, which we update based on how well the hypotheses predict observations. **Process-level feedback** would allow us to provide feedback on an individual hypothesis, and this feedback could be **uncertain**. **Reinterpretable feedback** on the other hand can be thought of as part of a (future) theory of meta-learning. ADVERSARIAL EXAMPLES -------------------- [Avoiding textual adversarial examples](https://nitter.cc/NoaNabeshima/status/1368662246885265409) *(Noa Nabeshima)* (summarized by Rohin): Last week I speculated that CLIP might "know" that a textual adversarial example is a "picture of an apple with a piece of paper saying an iPod on it" and the zero-shot classification prompt is preventing it from demonstrating this knowledge. Gwern Branwen [commented](https://www.alignmentforum.org/posts/JGByt8TrxREo4twaw/an-142-the-quest-to-understand-a-network-well-enough-to?commentId=keW4DuE7G4SZn9h2r) to link me to this Twitter thread as well as this [YouTube video](https://youtu.be/Rk3MBx20z24) in which better prompt engineering significantly reduces these textual adversarial examples, demonstrating that CLIP does "know" that it's looking at an apple with a piece of paper on it. FIELD BUILDING -------------- [AI x-risk reduction: why I chose academia over industry](https://www.alignmentforum.org/posts/4jFnquoHuoaTqdphu/ai-x-risk-reduction-why-i-chose-academia-over-industry) *(David Krueger)* (summarized by Rohin): This post and its comments discuss considerations that impact whether new PhD graduates interested in reducing AI x-risk should work in academia or industry. MISCELLANEOUS (ALIGNMENT) ------------------------- [Intermittent Distillations #1](https://www.alignmentforum.org/posts/pqkdsqd6s6w2HtT9g/intermittent-distillations-1) *(Mark Xu)* (summarized by Rohin): A post in the same style as this newsletter. [Key Concepts in AI Safety](https://cset.georgetown.edu/research/key-concepts-in-ai-safety-an-overview/) *(Tim G. J. Rudner et al)* (summarized by Rohin): This overview from CSET gives a brief introduction to AI safety using the [specification, robustness, and assurance (SRA) framework](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1) ([AN #26](https://mailchi.mp/1ecd1b775703/alignment-newsletter-26)). Follow-up reports cover [interpretability](https://cset.georgetown.edu/research/key-concepts-in-ai-safety-interpretability-in-machine-learning/) and [adversarial examples / robustness](https://cset.georgetown.edu/research/key-concepts-in-ai-safety-robustness-and-adversarial-examples/). I don’t expect these to be novel to readers of this newsletter -- I include them in case anyone wants a brief overview, as well as to provide links to AI safety reports that will likely be read by government officials. NEWS ==== [Chinese translation of Human Compatible](https://www.sohu.com/a/427998491_464088.) (summarized by Rohin): The Chinese translation of [Human Compatible](https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS) ([AN #69](https://mailchi.mp/59ddebcb3b9a/an-69-stuart-russells-new-book-on-why-we-need-to-replace-the-standard-model-of-ai)) came out in October and the first chapter is [here](https://cread.jd.com/read/startRead.action?bookId=30675029&readType=1). FEEDBACK I'm always happy to hear feedback; you can send it to me, [Rohin Shah](https://rohinshah.com/), by **replying to this email**. PODCAST An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by [Robert Miles](http://robertskmiles.com).
358b081f-65df-4361-af65-08c348d6178c
StampyAI/alignment-research-dataset/arbital
Arbital
Goodness estimate biaser A "[goodness](https://arbital.com/p/55) estimate [biaser](https://arbital.com/p/statistical_bias)" is a system setup or phenomenon that seems [foreseeably](https://arbital.com/p/6r) likely to cause the actual goodness of some AI plan to be systematically lower than the AI's estimate of that plan's goodness. We want the AI's estimate to be [unbiased](https://arbital.com/p/statistically_unbiased). ## Ordinary examples Subtle and unsubtle [estimate-biasing](https://arbital.com/p/statistical_bias) issues in machine learning are well-known and appear far short of [advanced agency](https://arbital.com/p/2c): &#x25CF; A machine learning algorithm's performance on the training data is not an unbiased estimate of its performance on the test data. Some of what the algorithm seems to learn may be particular to noise in the training data. This fitted noise will not be fitted within the test data. So test performance is not just unequal to, but *systematically lower than,* training performance; if we were treating the training performance as an estimate of test performance, it would not be an [unbiased](https://arbital.com/p/statistically_unbiased) estimate. &#x25CF; The [Winner's Curse](https://en.wikipedia.org/wiki/Winner%27s_curse) from auction theory observes that if bidders have noise in their unbiased estimates of the auctioned item's value, then the *highest* bidder, who receives the item, is more likely to have upward noise in their individually unbiased estimate, [conditional](https://arbital.com/p/1ly) on their having won. (E.g., three bidders with Gaussian noise in their value estimates submit bids on an item whose true value to them is 1.0; the winning bidder is likely to have valued the item at more than 1.0.) The analogous [Optimizer's Curse](https://faculty.fuqua.duke.edu/~jes9/bio/The_Optimizers_Curse.pdf) observes that if we make locally unbiased but noisy estimates of the [subjective expected utility](https://arbital.com/p/subjective_expected_utility) of several plans, then selecting the plan with 'highest expected utility' is likely to select an estimate with upward noise. Barring compensatory adjustments, this means that actual utility will be systematically lower than expected utility, even if all expected utility estimates are individually unbiased. Worse, if we have 10 plans whose expected utility can be unbiasedly estimated with low noise, plus 10 plans whose expected utility can be unbiasedly estimated with high noise, then selecting the plan with apparently highest expected utility favors the noisiest estimates! ## In AI alignment We can see many of the alleged [foreseeable difficulties](https://arbital.com/p/6r) in [AI alignment](https://arbital.com/p/2v) as involving similar processes that allegedly produce systematic downward biases in what we see as actual [goodness](https://arbital.com/p/55), compared to an AI's estimate of goodness: &#x25CF; [https://arbital.com/p/2w](https://arbital.com/p/2w) suggests that if we take an imperfectly or incompletely learned value function, then looking for the *maximum* or *extreme* of that value function is much more likely than usual to magnify what we see as the gaps or imperfections (because of [fragility of value](https://arbital.com/p/fragile_value), plus the Optimizer's Curse); or destroy whatever aspects of value the AI didn't learn about (because optimizing a subset of properties is liable to set all other properties to extreme values). We can see this as implying both "The AI's apparent goodness in non-extreme cases is an upward-biased estimate of its goodness in extreme cases" and "If the AI learns its goodness estimator less than [perfectly](https://arbital.com/p/41k), the AI's estimates of the goodness of its best plans will systematically overestimate what we see as the actual goodness." &#x25CF; [https://arbital.com/p/42](https://arbital.com/p/42) generally, and especially over [instrumentally convergent incorrigibility](https://arbital.com/p/instrumental_incorrigibility), suggests that if there are naturally-arising AI behaviors we see as bad (e.g. routing around [shutdown](https://arbital.com/p/2xd)), there may emerge a pseudo-adversarial selection of strategies that route around our attempted [patches](https://arbital.com/p/48) to those problems. E.g., the AI constructs an environmental subagent to continue carrying on its goals, while cheerfully obeying 'the letter of the law' by allowing its current hardware to be shut down. This pseudo-adversarial selection (though the AI does not have an explicit goal of thwarting us or selecting low-goodness strategies per se) again implies that actual [goodness](https://arbital.com/p/55) is likely to be systematically lower than the AI's estimate of what it's learned as 'goodness'; again to an [increasing degree](https://arbital.com/p/6q) as the AI becomes [smarter](https://arbital.com/p/9f) and [searches a wider policy space](https://arbital.com/p/47). [Mild optimization](https://arbital.com/p/2r8) and [conservative strategies](https://arbital.com/p/2qp) can be seen as proposals to 'regularize' powerful optimization in a way that *decreases* the degree to which goodness in training is a biased (over)estimate of goodness in execution.
645a55bc-4571-4717-affe-e5186caa7c69
trentmkelly/LessWrong-43k
LessWrong
Generalizing From One Example Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective "Everyone generalizes from one example. At least, I do."    -- Vlad Taltos (Issola, Steven Brust) My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example: There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like? Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed. The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery1 to three percent of people completely unable to form mental images2. Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's. He kind of took this idea and ran with it. He in
d3eafe5a-142d-4f8c-80e4-910530c7925f
trentmkelly/LessWrong-43k
LessWrong
Productive Mistakes, Not Perfect Answers This post is part of the work done at Conjecture. I wouldn’t bet on any current alignment proposal. Yet I think that the field is making progress and abounds with interesting opportunities to do even more, giving us a shot. Isn’t there a contradiction? No, because research progress so rarely looks like having a clearly correct insight that clarifies everything; instead it often looks like building on apparently unpromising ideas, or studying the structure of the problem. Copernican heliocentrism didn’t initially predict observations as well as Ptolemaic astronomy; both ionic theory and the determination of basic molecular formula came from combining multiple approaches in chemistry, each getting some bits but not capturing the whole picture; Computer Science emerged from the arid debate over the foundations of mathematics; and Computational Complexity Theory has made more progress by looking at why some of its problems are hard than by waiting for clean solutions. In the end you do want to solve the problem, obviously. But the road from here to there goes through many seemingly weird and insufficient ideas that are corrected, adapted, refined, often discarded except for a small bit. Alignment is no different, including “strong” alignment. Research advances through productive mistakes, not perfect answers. I’m taking this terminology from Goro Shimura’s characterization of his friend Yutaka Taniyama, with whom he formulated the Taniyama-Shimura Conjecture that Andrew Wiles proved in order to prove Fermat’s last theorem. (Yutaka Taniyama and his time. Very personal recollections, Goro Shimura, 1989) > Though he was by no means a sloppy type, he was gifted with the special capability of making many mistakes, mostly in the right direction. I envied him for this, and tried in vain to imitate him, but found it quite difficult to make good mistakes. So much of scientific progress takes the form of many people proposing different ideas that end up being partially ri
a95cb63b-98fb-4a82-9163-9504987d3143
StampyAI/alignment-research-dataset/arxiv
Arxiv
Reinforcement Learning of Implicit and Explicit Control Flow Instructions. 1 Introduction --------------- An important goal in artificial intelligence is developing flexible and autonomous agents capable of accomplishing tasks that humans specify in forms that are expressive to the agent and convenient for the human user. In this work we focus on the reinforcement learning problem of following task instructions that require the agent to learn control flow either because the instructions themselves include explicit conditionals such as the if- and while- statements familiar from programming languages, or because the need arises implicitly when stochastic events in the environment require parts of the instructions to be redone or allow them to be skipped. In a classical programming language interpreter, the logic of control flow is fixed in advance and determined completely by the program itself. In contrast, in our interactive reinforcement-learning (RL) setting, while the reward is determined by the instructions, the control flow arises dynamically from interactions between the instructions and the stochastic dynamics of the environment. Our primary contribution is a novel neural network architecture, the Control Flow Comprehension Architecture - Scan (CoFCA-S)111Source code may be accessed from <https://github.com/ethanabrooks/CoFCA-S>, that successfully learns, from task reward signals alone, how to follow instructions that require subtasks to be performed in a nonlinear fashion. More specifically, we introduce two novel elements in our architecture that facilitate generalisation to novel and longer instructions during testing relative to during training: 1) an attentional mechanism that learns to maintain a pointer into the instructions and moves that pointer based on where the pointer should go next rather than how far the pointer should move, and 2) a mechanism that processes the instructions relative to the current pointer, thereby capturing a kind of coarse translation invariance present in our tasks. We compare our method to four baselines and one ablation, and demonstrate better performance on two distinct domains, one designed to showcase explicit control-flow, and the other to showcase implicit control-flow. We argue that the improvements on these baselines are a result of our pointer architecture, the freedom of pointer movement that it affords, and the techniques that our architecture employs to transfer logic learned on shorter instructions to instructions of much greater length. 2 Related Work --------------- This section reviews prior work on neural architectures for instruction following. We also present a brief overview of work on program execution because some of that work can handle explicit forms of control flow and because our learning architecture builds on architectures used in this line of work. However, our main focus in this paper is on methods that can learn, through trial-and-error interaction with an environment, the flow of control needed to obtain reward in the face of stochastic events in the environment both with and without explicit control flow in the instructions. ##### Neural architectures for instruction following. Our work is closely related to Zero-shot Task Generalization with Multi-Task Deep Reinforcement Learning (Oh et al., [2017](#bib.bib13)) (henceforth, OLSK), which presents a novel architecture that accepts instructions in the form of a sequence of symbols denoting actions and objects whose semantics are learned with RL. The architecture maintains an attentional program pointer in an approach similar to ours. An explicit analogy mechanism supports generalizations to entirely novel action-object combinations, a kind of generalization that is not the focus of our work. The instructions in OLSK adhere to linear ordering, where tasks must be performed in exactly the same order that they occur in the instructions, but the OLSK architecture can be applied to our instructional task setting and we therefore use it as an instructive baseline. The work of Sun et al. ([2020](#bib.bib17)), whose domain inspired the “Minecraft” domain in our work, considers task specifications with rigid control-flow which their architecture navigates using a pre-specified parser and program interpreter. They train a classifier to evaluate control-flow predicates using supervised learning—e.g., to determine whether an if-condition passes based on the number of some resource in the environment— and an RL agent to interpret instructions based on observations of the environment and outputs of the classifier and parser. The neural subtask graph solver (NSGS) of Sohn et al. ([2018](#bib.bib16)) accepts task specifications in the form of subtask graphs encoding the pre-condition dependencies among subtasks and rewards along the way; optimal task execution requires finding graph traversals that maximize reward. The NSGS is trained with RL and exhibits zero-shot generalization to novel graphs. The graph specification is more expressive than our sequential instructions language in that it makes explicit precondition dependencies, but it does not deal with the non-linear control flow that is our concern here. Like our work, the preceding publications focus on instructions comprising sequences or collections of subtasks. A much broader literature has taken up the question of single-task instructions. Yu et al. ([2018a](#bib.bib18)) propose a novel deep architecture for combining the instruction sentence with an high-dimensional observation. Yu et al. ([2018b](#bib.bib19)) extends this work to focus on generalization to instruction sentences comprising novel combinations of words (these describe new tasks, not new orderings of subtasks as in our case). Bahdanau et al. ([2018](#bib.bib1)) use an adversarial framework to generate reward functions from instructions, thereby reducing the need for hand-engineering. Another important body of literature considers human-generated natural language instructions (as opposed to the programmatically generated instructions in our domains). In general, these approaches use supervised learning or consider instructions that induce simpler policies than those required by our domain. Chaplot et al. ([2018](#bib.bib4)) propose a multiplication-based mechanism for combing instructions with observations, which facilitates following of natural-language instructions in 3d domains. Misra et al. ([2018](#bib.bib11)) develop a supervised-learning approach for mapping natural language statements to sequences of actions by marginalizing across possible goal locations implied by the instruction. Fried et al. ([2018](#bib.bib6)) consider a similar problem, but introduce a “speaker” network that infers the probability of instructions given action trajectories. They use the “speaker” to augment the dataset with synthetic instructions and to rank candidate action-sequences produced by a “follower” network. Hill et al. ([2020](#bib.bib9)) use embeddings from a pre-trained language model to encode a single-sentence instruction. The result is fed into an RL agent for execution in a simulated 3d-domain. ##### Neural architectures for program execution. Graves et al. ([2014](#bib.bib8)) was an important early contribution to the problem of program execution using end-to-end differentiable architectures. Our work builds on the seminal attention-based architecture from this paper but as we detail below we needed a more sophisticated attention mechanism to allow for learning of the kinds of nonlinear control flow required in our reinforcement learning tasks. Reed & De Freitas ([2015](#bib.bib14)) develops an architecture that learns to execute complex programs via a supervised training regime that paired programs and execution traces. This work does not consider the problem of following instructions with many parts whose ordering must be learned from trial-and-error experience. Bošnjak et al. ([2017](#bib.bib3)) extend this work with supervised learning of tasks specified using the Forth programming language. Bieber et al. ([2020](#bib.bib2)) develop an architecture that predicts the output of a program. Our work thus exists at the intersection of work that learns to follow instructions comprising multiple subtasks—(Oh et al., [2017](#bib.bib13); Sun et al., [2020](#bib.bib17); Sohn et al., [2018](#bib.bib16))—and work that focuses on the problem of non-linear task specifications—(Bošnjak et al., [2017](#bib.bib3); Bieber et al., [2020](#bib.bib2)). Our work departs from prior work in several ways: 1) unlike the work on program execution/interpretation, in our setting it is not possible to determine the sequence of subtasks to be performed solely by looking at the instructions, rather it has to be learned through trial and error interaction with a stochastic environment, and 2) we are in the RL setting in which the only feedback to the agent is in the form of a terminal (and therefore delayed) reward as opposed to much of the prior work that has focused on variations within the supervised learning setting. 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan --------------------------------------------------------------- In our setting, it is not possible to map the instructions and current observation to the desired action because the current observation may not record all the subtasks that have already been performed and those that have been undone by stochastic events. One way to deal with this would be to learn a mapping from the instruction and the history of interactions to the action. However this can be challenging in complex settings with long instructions where each line requires the agent to take several actions in the environment. The underlying assumption in our architecture is that much of this challenge can be handled by maintaining a pointer to a line/subtask in the instructions, and one can learn a mapping from the line/subtask pointed to by the pointer and the current observation to a desired action. Of course, this requires a procedure for updating the pointer after taking the action and receiving an observation. One way to handle pointer movement would be to learn a distribution over backward and forward pointer movements during training. The challenge is that one can only learn about movement distances encountered during training. One of our main contributions is to learn where we want to move the pointer instead of how far we want to move the pointer. The “Scan” mechanism we introduce to do this computes the probability of a pointer movement as a function of the features of the lines around the line to which we are considering a move. This allows pointer movement to be free of distances and allows generalization to longer instructions requiring longer distance pointer movements during testing than seen during training. Furthermore we introduce a mechanism in our architecture to exploit the following invariance property present in our tasks: chunks of the instructions, e.g., a while loop, require the agent to take the same actions no matter where that chunk is in the instructions. We accomplish this by always processing the embeddings of the instructions from the pointer’s position in our architecture. This simple mechanism captures the chunk-translation-invariance property and facilitates transfer of learned behavior from one position in the instructions to other positions in the instructions. This contributes to our results on generalising to longer instructions during testing. Memory 𝐌𝐌\mathbf{M}bold\_M𝐏𝐏\mathbf{P}bold\_Pptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPTϕitalic-ϕ\phiitalic\_ϕπ𝜋\piitalic\_πsoftmax𝐮tsubscript𝐮𝑡\mathbf{u}\_{t}bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPTdtsubscript𝑑𝑡d\_{t}italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT𝐚tsubscript𝐚𝑡\mathbf{a}\_{t}bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPTweighted-sumsamplesample CoFCA-S Algorithm 0:  pointer ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 0:  observation 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 0:  instruction 𝐈𝐈\mathbf{I}bold\_I 1:  𝐌←bag-of-wordsθ(𝐈)←𝐌subscriptbag-of-words𝜃𝐈\mathbf{M}\leftarrow\text{bag-of-words}\_{\theta}\left(\mathbf{I}\right)bold\_M ← bag-of-words start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_I ) 2:  𝐇t←BI−GRU⁡([𝐌,𝐱t])←subscript𝐇𝑡BIGRU𝐌subscript𝐱𝑡\mathbf{H}\_{t}\leftarrow\operatorname{BI-GRU}\left([\mathbf{M},\mathbf{x}\_{t}]\right)bold\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← start\_OPFUNCTION roman\_BI - roman\_GRU end\_OPFUNCTION ( [ bold\_M , bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ] ) {BI-GRU starts at line ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT} 3:  Generate 𝐏𝐏\mathbf{P}bold\_P (equation [1](#S3.E1 "1 ‣ 3.2 The Scan Mechanism. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions"))\ref{eq:P})) 4:  𝐚t∼π(𝐱t,𝐌pt)similar-tosubscript𝐚𝑡𝜋subscript𝐱𝑡subscript𝐌subscript𝑝𝑡\mathbf{a}\_{t}\sim\pi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}}\right)bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_π ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) 5:  𝐮t←ϕ(𝐱t,𝐌pt,𝐚t)←subscript𝐮𝑡italic-ϕsubscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡\mathbf{u}\_{t}\leftarrow\phi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t}\right)bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← italic\_ϕ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 6:  𝐮~t←softmax⁡(𝐮t)←subscript~𝐮𝑡softmaxsubscript𝐮𝑡\tilde{\mathbf{u}}\_{t}\leftarrow\operatorname{softmax}\left(\mathbf{u}\_{t}\right)over~ start\_ARG bold\_u end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← roman\_softmax ( bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 7:  dt∼Cat⁡(𝐏𝐮~t)similar-tosubscript𝑑𝑡Cat𝐏subscript~𝐮𝑡d\_{t}\sim\operatorname{Cat}\left(\mathbf{P}\tilde{\mathbf{u}}\_{t}\right)italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ roman\_Cat ( bold\_P over~ start\_ARG bold\_u end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 8:  ct∼ψ(𝐱t,𝐌pt,𝐚t)similar-tosubscript𝑐𝑡𝜓subscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡c\_{t}\sim\psi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t}\right)italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_ψ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 9:  pt+1←pt+ctdt←subscript𝑝𝑡1subscript𝑝𝑡subscript𝑐𝑡subscript𝑑𝑡p\_{t+1}\leftarrow p\_{t}+c\_{t}d\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ← italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT Figure 1: (Left) Depicts the flow of information every time step from memory 𝐌𝐌\mathbf{M}bold\_M, pointer ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and observation 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT to actions 𝐚tsubscript𝐚𝑡\mathbf{a}\_{t}bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and pointer movements dtsubscript𝑑𝑡d\_{t}italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. (Right) Pointer update pseudocode. Our architecture comprises the following primary components. 𝐌𝐌\mathbf{M}bold\_M is an encoding of the instructions. ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is the integer pointer into 𝐌𝐌\mathbf{M}bold\_M. π𝜋\piitalic\_π, implemented as a neural network, combines information from 𝐌𝐌\mathbf{M}bold\_M and the observation, 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT to produce actions. 𝐏𝐏\mathbf{P}bold\_P is a collection of possible pointer movement distributions. ϕitalic-ϕ\phi{}italic\_ϕ, implemented as a neural network, combines information from 𝐌𝐌\mathbf{M}bold\_M and 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT to choose among these distributions. Fig. [1](#S3.F1 "Figure 1 ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") identifies these components and their relationships. We now describe them in detail. ifsubtasksubtaskendififsubtasksubtasksubtasksubtaskendifsubtaskifsubtasksubtaskendifsubtasksubtask𝐇pt0subscript𝐇subscript𝑝𝑡0\mathbf{H}\_{p\_{t}0}bold\_H start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT𝐇pt1subscript𝐇subscript𝑝𝑡1\mathbf{H}\_{p\_{t}1}bold\_H start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT𝐏pt0subscript𝐏subscript𝑝𝑡0\mathbf{P}\_{p\_{t}0}bold\_P start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT𝐏pt1subscript𝐏subscript𝑝𝑡1\mathbf{P}\_{p\_{t}1}bold\_P start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT subtasksubtaskwhilesubtasksubtaskendwhilesubtaskwhilesubtasksubtasksubtasksubtaskendwhilewhilesubtasksubtaskendwhile𝐇pt0subscript𝐇subscript𝑝𝑡0\mathbf{H}\_{p\_{t}0}bold\_H start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT𝐇pt1subscript𝐇subscript𝑝𝑡1\mathbf{H}\_{p\_{t}1}bold\_H start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT𝐏pt0subscript𝐏subscript𝑝𝑡0\mathbf{P}\_{p\_{t}0}bold\_P start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT𝐏pt1subscript𝐏subscript𝑝𝑡1\mathbf{P}\_{p\_{t}1}bold\_P start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT Figure 2: (Upper) Here, pt=4subscript𝑝𝑡4p\_{t}=4italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = 4. The agent will choose the distribution 𝐏pt0subscript𝐏subscript𝑝𝑡0\mathbf{P}\_{p\_{t}0}bold\_P start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT if the condition succeeds and 𝐏pt1subscript𝐏subscript𝑝𝑡1\mathbf{P}\_{p\_{t}1}bold\_P start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT if it fails. To generate these distributions 𝐇pt0subscript𝐇subscript𝑝𝑡0\mathbf{H}\_{p\_{t}0}bold\_H start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT has “flagged” all lines except those immediately preceding ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. 𝐇pt1subscript𝐇subscript𝑝𝑡1\mathbf{H}\_{p\_{t}1}bold\_H start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT has flagged lines following *endif*. (Lower) Here, pt=12subscript𝑝𝑡12p\_{t}=12italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = 12. The agent will use the 𝐏pt1subscript𝐏subscript𝑝𝑡1\mathbf{P}\_{p\_{t}1}bold\_P start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT distribution to return the pointer to the start of the while loop in order to inspect the *while* condition, whose evaluation will determine the next distribution that the agent chooses (as in the upper figure). ### 3.1 Instruction preprocessing. First, our architecture uses a lookup table to embed each line of the instruction (line 1 of the pseudocode in Fig. [1](#S3.F1 "Figure 1 ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")). If an instruction line comprises several symbols, we embed each symbol separately and sum them. The result is 𝐌∈ℝN×E𝐌superscriptℝ𝑁𝐸\mathbf{M}\in\mathbb{R}^{N\times E}bold\_M ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_N × italic\_E end\_POSTSUPERSCRIPT, where N𝑁Nitalic\_N is the number of lines in the instructions and E𝐸Eitalic\_E the size of the embedding. Next, we concatenate 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT to each line of 𝐌𝐌\mathbf{M}bold\_M and pass the result through a bidirectional Gated Recurrent Unit (GRU) (Chung et al., [2014](#bib.bib5)), a variant of the bidirectional Recurrent Neural Network (Graves et al., [2013](#bib.bib7)). Importantly, we start the forward pass at line ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT (recall that ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is the pointer into memory), instead of the first line of the instruction. This simple change enables the architecture to exploit the chunk-translation-invariance property, by allowing learning from earlier lines of the instruction to be reused on later lines. The GRU outputs an L𝐿Litalic\_L dimensional vector for each line, with higher weights increasing the probability of moving forward to that line (more explanation in the next paragraph). We do the same with a backward GRU starting at line pt−1subscript𝑝𝑡1p\_{t}-1italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT - 1, this time for backward movement. Combining the two we get a weight matrix 𝐇t∈ℝ2N×Lsubscript𝐇𝑡superscriptℝ2𝑁𝐿\mathbf{H}\_{t}\in\mathbb{R}^{2N\times L}bold\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT 2 italic\_N × italic\_L end\_POSTSUPERSCRIPT. L𝐿Litalic\_L encodes the number of possible pointer distributions which we subsequently choose among using information derived from the observation. We pass 𝐇tsubscript𝐇𝑡\mathbf{H}\_{t}bold\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT through a sigmoid function squashing it between 0 and 1. These steps correspond to line 2 of the pseudocode. ### 3.2 The Scan Mechanism. This mechanism transforms 𝐇tsubscript𝐇𝑡\mathbf{H}\_{t}bold\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT into a collection of distributions over pointer movements. To simplify our explanation of this mechanism, we first assume that L=1𝐿1L=1italic\_L = 1. In this case, the distribution we use to generate pointer movements is equivalent to a geometric distribution generated by the following process. We scan through each line in the order pt+1,pt−1,pt+2,pt−2,…subscript𝑝𝑡1subscript𝑝𝑡1subscript𝑝𝑡2subscript𝑝𝑡2…p\_{t}+1,p\_{t}-1,p\_{t}+2,p\_{t}-2,\dotsitalic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + 1 , italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT - 1 , italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + 2 , italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT - 2 , …. At each line we flip a coin, with heads-probability determined by the sigmoid output described in the preceeding paragraph. We stop at the first line where the coin comes up heads. When L>1𝐿1L>1italic\_L > 1, we repeat this process L𝐿Litalic\_L times. The result is the matrix | | | | | | | --- | --- | --- | --- | --- | | | 𝐏ijsubscript𝐏𝑖𝑗\displaystyle\mathbf{P}\_{ij}bold\_P start\_POSTSUBSCRIPT italic\_i italic\_j end\_POSTSUBSCRIPT | =σ(𝐇ij)∏k∈{1,−1,2,−2,…,N,−N},k≠i(1−σ(𝐇(N+k)j))absent𝜎subscript𝐇𝑖𝑗formulae-sequence𝑘1122…𝑁𝑁𝑘𝑖product1𝜎subscript𝐇𝑁𝑘𝑗\displaystyle=\sigma\left(\mathbf{H}\_{ij}\right)\;\;\underset{\smash{\mathclap{k\in\left\{1,-1,2,-2,\dots,N,-N\right\},k\neq i}}}{\prod}\;\;\left(1-\sigma\left(\mathbf{H}\_{(N+k)j}\right)\right)= italic\_σ ( bold\_H start\_POSTSUBSCRIPT italic\_i italic\_j end\_POSTSUBSCRIPT ) start\_UNDERACCENT italic\_k ∈ { 1 , - 1 , 2 , - 2 , … , italic\_N , - italic\_N } , italic\_k ≠ italic\_i end\_UNDERACCENT start\_ARG ∏ end\_ARG ( 1 - italic\_σ ( bold\_H start\_POSTSUBSCRIPT ( italic\_N + italic\_k ) italic\_j end\_POSTSUBSCRIPT ) ) | | (1) | | Like 𝐇tsubscript𝐇𝑡\mathbf{H}\_{t}bold\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, 𝐏𝐏\mathbf{P}bold\_P is in ℝ2N×Lsuperscriptℝ2𝑁𝐿\mathbb{R}^{2N\times L}blackboard\_R start\_POSTSUPERSCRIPT 2 italic\_N × italic\_L end\_POSTSUPERSCRIPT. We convert 𝐏𝐏\mathbf{P}bold\_P into a single distribution using 𝐮tsubscript𝐮𝑡\mathbf{u}\_{t}bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, weighting over the different distributions in 𝐏𝐏\mathbf{P}bold\_P, as follows: | | | 𝐮tsubscript𝐮𝑡\displaystyle\mathbf{u}\_{t}bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | =ϕ(𝐱t,𝐌pt,Emb⁡(𝐚t))absentitalic-ϕsubscript𝐱𝑡subscript𝐌subscript𝑝𝑡Embsubscript𝐚𝑡\displaystyle=\phi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\operatorname{Emb}\left(\mathbf{a}\_{t}\right)\right)= italic\_ϕ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , roman\_Emb ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) | | (2) | | | dtsubscript𝑑𝑡\displaystyle d\_{t}italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | ∼∑i=1Lsoftmax(𝐮t)i𝐏(⋅)i\displaystyle\sim\sum\_{i=1}^{L}\operatorname{softmax}\left(\mathbf{u}\_{t}\right)\_{i}\mathbf{P}\_{(\cdot)i}∼ ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_L end\_POSTSUPERSCRIPT roman\_softmax ( bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT bold\_P start\_POSTSUBSCRIPT ( ⋅ ) italic\_i end\_POSTSUBSCRIPT | | (3) | where ϕitalic-ϕ\phiitalic\_ϕ is a neural network (details in §[3.5](#S3.SS5 "3.5 Network architectures. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")) and dtsubscript𝑑𝑡d\_{t}italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is a pointer movement in the form of a delta to add to ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. These steps correspond to lines 5 through 7 of the pseudocode. The motivation for the distribution expressed in equation [1](#S3.E1 "1 ‣ 3.2 The Scan Mechanism. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") is that it allows the size of pointer movements to depend on features at the destination line, not on the size of the jump. This is critical to enable the agent to perform pointer movements larger than those performed during training. E.g., if 𝐇0subscript𝐇0\mathbf{H}\_{0}bold\_H start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT corresponds to an *if* line, the GRU might learn to flag subsequent *endif* lines by assigning large values to 𝐇ijsubscript𝐇𝑖𝑗\mathbf{H}\_{ij}bold\_H start\_POSTSUBSCRIPT italic\_i italic\_j end\_POSTSUBSCRIPT for all indices i𝑖iitalic\_i corresponding to subsequent *endif* lines. As long as all values of 𝐇kjsubscript𝐇𝑘𝑗\mathbf{H}\_{kj}bold\_H start\_POSTSUBSCRIPT italic\_k italic\_j end\_POSTSUBSCRIPT are nearly 0 for |k|<|i|𝑘𝑖\lvert k\rvert<\lvert i\rvert| italic\_k | < | italic\_i |, the pointer will be able to move to line i𝑖iitalic\_i, even if i𝑖iitalic\_i is much larger than any jump that the agent has performed during training. ### 3.3 Gating of pointer movement. While performing an individual subtask, the agent should not move the instruction memory pointer—it should learn to wait for the subtask to be completed before advancing. The agent learns to *gate* changes to the memory pointer to accomplish this waiting. The gate is a binary value ctsubscript𝑐𝑡c\_{t}italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT sampled from a learned distribution ψ(𝐱t,𝐌pt,Emb⁡(𝐚t))𝜓subscript𝐱𝑡subscript𝐌subscript𝑝𝑡Embsubscript𝐚𝑡\psi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\operatorname{Emb}\left(\mathbf{a}\_{t}\right)\right)italic\_ψ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , roman\_Emb ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ). ψ𝜓\psiitalic\_ψ is a feed-forward neural network as detailed in §[3.5](#S3.SS5 "3.5 Network architectures. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") . ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT only changes position when the gate’s value is 1: pt+1=pt+ctdtsubscript𝑝𝑡1subscript𝑝𝑡subscript𝑐𝑡subscript𝑑𝑡p\_{t+1}=p\_{t}+c\_{t}d\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT = italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. This corresponds to lines 8 and 9 of the pseudocode. ### 3.4 Action sampling mechanism. Actions depend on information from the instruction and from the environment. In one of our domains, actions also depend on the history of previous actions. We derive information from the instruction by using the pointer to index into the encoded representation of the instruction, 𝐌ptsubscript𝐌subscript𝑝𝑡\mathbf{M}\_{p\_{t}}bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT. We derive information about the environment from an encoding of the current observation 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT (see §[3.5](#S3.SS5 "3.5 Network architectures. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")). Where relevant, we encode the action history using a GRU: 𝐡t=GRU⁡(𝐚t,𝐡t−1)subscript𝐡𝑡GRUsubscript𝐚𝑡subscript𝐡𝑡1\mathbf{h}\_{t}=\operatorname{GRU}\left(\mathbf{a}\_{t},\mathbf{h}\_{t-1}\right)bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = roman\_GRU ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_h start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ). Thus the policy is π(𝐱t,𝐌pt)𝜋subscript𝐱𝑡subscript𝐌subscript𝑝𝑡\pi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}}\right)italic\_π ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) or π(𝐱t,𝐌pt,𝐡t)𝜋subscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐡𝑡\pi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{h}\_{t}\right)italic\_π ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). ### 3.5 Network architectures. Both ϕitalic-ϕ\phiitalic\_ϕ, the network responsible for producing 𝐮tsubscript𝐮𝑡\mathbf{u}\_{t}bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and π𝜋\piitalic\_π use a neural “torso” with shared weights, which is specialized to handle the distinct observation spaces of each of our domains (§[4.2](#S4.SS2 "4.2 StarCraft-inspired domain: implicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") and §[4.3](#S4.SS3 "4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")). The torso uses a convolutional neural network for 3d components of the observation, a lookup table of neural embeddings for integer components, and a linear projection for all other components. The torso concatenates the results of these operations, applies a Rectified Nonlinear Unit (Nair & Hinton, [2010](#bib.bib12)), and passes the result to the heads corresponding to π𝜋\piitalic\_π and ϕitalic-ϕ\phiitalic\_ϕ. These heads are implemented using linear projections followed by a softmax which transforms the outputs into a probability distribution for sampling actions (in the case of π𝜋\piitalic\_π) and for choosing columns of 𝐏𝐏\mathbf{P}bold\_P (in the case of ϕitalic-ϕ\phiitalic\_ϕ). ### 3.6 Failure buffer. During training the agent may learn a suboptimal policy which works most of the time but fails for a small subset of instructions and episode starting conditions. In order to encourage the agent to learn a policy that is robust to these challenges, we modify the distribution of training episodes to increase the frequency of difficult episodes. We accomplish this by saving the random seed used to generate unsuccessful episodes to a “failure buffer.” We also maintain a moving average of the agent’s success-rate and, each episode, with probability proportional to this success-rate, we sample the seed from the failure buffer to retry a previously unsuccessful episode. ### 3.7 Training details. We train the agent using Proximal Policy Optimization algorithm (Schulman et al., [2017](#bib.bib15)), a state-of-the-art on-policy RL algorithm. The learning objective is | | | | | | | --- | --- | --- | --- | --- | | | L(θ)𝐿𝜃\displaystyle L(\theta)italic\_L ( italic\_θ ) | =𝔼[min(rt(θ)At,clip(rt(θ),1−ϵ,1+ϵ)At]+αℋ\displaystyle=\mathbb{E}[\min(r\_{t}(\theta)A\_{t},\operatorname{clip}\left(r\_{t}(\theta),1-\epsilon,1+\epsilon\right)A\_{t}]+\alpha\mathcal{H}= blackboard\_E [ roman\_min ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_θ ) italic\_A start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , roman\_clip ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_θ ) , 1 - italic\_ϵ , 1 + italic\_ϵ ) italic\_A start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ] + italic\_α caligraphic\_H | | (4) | | | rt(θ)subscript𝑟𝑡𝜃\displaystyle r\_{t}(\theta)italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_θ ) | =Pr⁡(𝐚t,dt,ct|𝐱t,𝐌pt,𝐚t,θ)Pr⁡(𝐚t,dt,ct|𝐱t,𝐌pt,𝐚tθold)absentPrsubscript𝐚𝑡subscript𝑑𝑡conditionalsubscript𝑐𝑡subscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡𝜃Prsubscript𝐚𝑡subscript𝑑𝑡conditionalsubscript𝑐𝑡subscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡subscript𝜃old\displaystyle=\frac{\Pr(\mathbf{a}\_{t},d\_{t},c\_{t}|\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t},\theta)}{\Pr(\mathbf{a}\_{t},d\_{t},c\_{t}|\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t}\theta\_{\text{old}})}= divide start\_ARG roman\_Pr ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_θ ) end\_ARG start\_ARG roman\_Pr ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT old end\_POSTSUBSCRIPT ) end\_ARG | | (5) | Here Atsubscript𝐴𝑡A\_{t}italic\_A start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT denotes the advantage on time step t𝑡titalic\_t, θ𝜃\thetaitalic\_θ denotes network parameters, θoldsubscript𝜃old\theta\_{\text{old}}italic\_θ start\_POSTSUBSCRIPT old end\_POSTSUBSCRIPT represents the pre-update parameters, α𝛼\alphaitalic\_α refers to an entropy coefficient, ℋℋ\mathcal{H}caligraphic\_H refers to the entropy over the probability distribution in the numerator of ([5](#S3.E5 "5 ‣ 3.7 Training details. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")), and PrPr\Prroman\_Pr refers to the joint probability of the action choice, pointer movement and gate, calculated as the product of their individual probabilities. For tuning hyper-parameters of all algorithms, we searched for good common values for hidden size, kernel size, and stride, for both convolutions and for all hidden sizes used by neural networks. We also tuned entropy coefficient values (used to encourage exploration), number of distributions L𝐿Litalic\_L in 𝐏𝐏\mathbf{P}bold\_P, and learning rate. 4 Experiments -------------- In this section we present results from three generalization experiments in two instruction-following domains, in which agents are trained on short instructions and evaluated on longer instructions or instructions containing unseen combinations of explicit control-flow blocks. The first domain is inspired by the StarCraft video game and is designed to impose challenges in learning implicit control flow. The second domain is inspired by Minecraft and is designed to impose challenges in learning explicit control flow. We compare our architecture to three baselines, one using recurrence to maintain a memory of progress through the instructions, another using the OLSK (Oh et al., [2017](#bib.bib13)) architecture described above, and a third using a modified version of OLSK with extended pointer-movement range. We also compare our architecture to an ablation, CoFCA which removes the Scan mechanism from CoFCA-S. In all results, error bands and bars indicate standard error across 4 distinct random seeds. To anticipate our main results: we find in both domains that CoFCA-S outperforms the baselines as well as CoFCA in generalization, especially as instruction length increases in both domains. ### 4.1 Baselines The Unstructured Memory (UM) baseline uses the recurrent state of a recurrent neural network to track its progress through the instruction. Like CoFCA-S, this algorithm runs a bidirectional GRU length-wise across the instructions. However, instead of retaining all of the outputs of the GRU and maintaining a pointer, it feeds the concatenated last outputs of this bidirectional GRU into a second GRU, along with 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and the embedded action 𝐚tsubscript𝐚𝑡\mathbf{a}\_{t}bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. Note that the lengthwise GRU encoding of the instruction is necessary to facilitate generalization from shorter to longer instructions, which would not be possible if simpler methods like concatenation were used instead. Thus the first GRU is responsible for encoding the variable-length instruction, whereas the second is responsible for preserving state information across time steps. The baseline must use the recurrent hidden state 𝐡tsubscript𝐡𝑡\mathbf{h}\_{t}bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT of the second GRU to perform the functions that 𝐌𝐌\mathbf{M}bold\_M and ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT perform in CoFCA-S, tracking the agent’s progress through the instructions. The policy π𝜋\piitalic\_π maps 𝐡tsubscript𝐡𝑡\mathbf{h}\_{t}bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and the observation 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT to a distribution from which the architectures samples the 𝐚tsubscript𝐚𝑡\mathbf{a}\_{t}bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. The OLSK baseline reproduces the algorithm described in OLSK (Oh et al., [2017](#bib.bib13)). Each time step, ϕitalic-ϕ\phiitalic\_ϕ maps 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, 𝐌tsubscript𝐌𝑡\mathbf{M}\_{t}bold\_M start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and a hidden state 𝐡tsubscript𝐡𝑡\mathbf{h}\_{t}bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT to a distribution over discrete pointer movements in {−1,0,+1}101\left\{-1,0,+1\right\}{ - 1 , 0 , + 1 } and a new hidden state 𝐡t+1subscript𝐡𝑡1\mathbf{h}\_{t+1}bold\_h start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT (per the hard-attention scheme described in (Oh et al., [2017](#bib.bib13))). Thus, OLSK can only move the pointer one step forward or one step backward and so has to do this repeatedly without acting in the world to move the pointer many steps, in contrast to CoFCA-S, which can move the pointer between any two lines in the instruction in one step. The extended-range OLSK (OLSK-E) extends the range of OLSK’s pointer movement to facilitate pointer movements in {−N,…,+N}𝑁…𝑁\left\{-N,\dots,+N\right\}{ - italic\_N , … , + italic\_N }, where N𝑁Nitalic\_N is the length of the instruction. Note that OLSK does not use the bidirectional GRU to preprocess the instruction and it does not take advantage of the “Scan” mechanism described in §[3.2](#S3.SS2 "3.2 The Scan Mechanism. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions"). Note that the absence of this preprocessing mechanism limits the ability of both OLSK and extended-range OLSK to operate in a context-aware manner. The CoFCA baseline is an ablation of CoFCA-S. As the omission of “-S” suggests, CoFCA ablates the Scan mechanism described in §[3.2](#S3.SS2 "3.2 The Scan Mechanism. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") . Recall that CoFCA-S passes the embedded instructions 𝐌𝐌\mathbf{M}bold\_M through a bidirectional GRU. CoFCA retains only the final output of this GRU and uses a single-layer neural network followed by a softmax layer to project the output to a distribution over forward/backward pointer movements, up to the maximum length instruction to be evaluated. Note that while CoFCA can in principle move between any two lines, even in the longer evaluation instructions, it will only have the opportunity during training to perform jumps equal to or less than the size of the instruction. ![Refer to caption]()![Refer to caption](/html/2102.13195/assets/x2.png)![Refer to caption]()![Refer to caption]()![Refer to caption]()![Refer to caption]()![Refer to caption]()nexusprobe(builds buildings)assimilatorgateway [.Nexus (trains Adept) … … … ] [.Assimilator [.Gateway (trains Colossus) ] … ] [⬇](data:text/plain;base64,QnVpbGQgTmV4dXMKVHJhaW4gQWRlcHQKQnVpbGQgQXNzaW1pbGF0b3IKQnVpbGQgR2F0ZXdheQpUcmFpbiBDb2xvc3N1cw==) Build Nexus Train Adept Build Assimilator Build Gateway Train Colossus Figure 3: Example of StarCraft-inspired environment and instructions. The instructions indicate that Nexus is the building which trains the Adept unit, the Assimilator is a prerequisite for the Gateway, and the Gateway trains the Colossus. The randomly generated build-tree (upper right) is the basis for the instructions, but the agent only sees the instructions, not the tree, and the instructions only contain information relevant to the production of the required units (the Adept and the Colossus, in this example). ![Refer to caption](/html/2102.13195/assets/figures/starcraft.png) ![Refer to caption](/html/2102.13195/assets/x8.png) ![Refer to caption](/html/2102.13195/assets/x9.png) Figure 4: Results for experiments in the StarCraft-inspired domain (§[4.2](#S4.SS2 "4.2 StarCraft-inspired domain: implicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")). Y-axes represents cumulative reward on evaluation episodes. (Left) Generalization to longer instructions (§[4.2.1](#S4.SS2.SSS1 "4.2.1 Generalization to longer instructions under implicit control flow demands ‣ 4.2 StarCraft-inspired domain: implicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")), binned by instruction length, aggregated from the last 5 million steps. (Middle) Learning curves for generalization to longer instruction lengths. (Right) Learning curves for generalization to longer instruction lengths and deeper build trees (§[4.2.2](#S4.SS2.SSS2 "4.2.2 Generalization to longer instructions and deeper build trees ‣ 4.2 StarCraft-inspired domain: implicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")). ### 4.2 StarCraft-inspired domain: implicit control flow StarCraft is a real-time strategy game in which players construct buildings to produce units and research technologies, and use those units to destroy an opponent. In our StarCraft-inspired domain the agent assumes the role of a support player that receives an order for units from an allied player; these orders are the instructions given to the agent. To satisfy the order for a type of unit, the agent must construct the buildings that produce that type of unit and then train the units. The agent cannot construct a building until a prerequisite building has been constructed. Each building depends on at most one building, though each building may serve as a prerequisite for several others, forming a build tree of dependencies. Figure [3](#S4.F3 "Figure 3 ‣ 4.1 Baselines ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") provides an example. Two aspects of the environment dynamics interact to introduce implicit control flow learning challenges. The first is that enemy attacks on buildings and ambushes on units happen stochastically, requiring buildings to be reconstructed and units to be retrained. The second is that the type of buildings that produce each unit type as well as the build-tree dependencies are randomly generated for each episode, and this information is encoded in the instructions, requiring the agent to learn to extract these relationships from the instructions instead of learning them from experience. Also, when the enemy ambushes units or attacks buildings, the agent must learn to respond to these events and re-execute just those parts of the instructions whose effects were undone. Episodes play out in a 6×6666\times 66 × 6 gridworld. The agent begins each episode with three Probes (units for constructing buildings) and a random endowment of pre-constructed buildings. If these buildings correspond to buildings required by the instruction, the agent may opportunistically skip the corresponding lines of instruction. The agent’s observation includes a top-down view of the gridworld with channels devoted to buildings, units, and terrain. ##### Instructions. Each line in the instruction is encoded by an integer indicating either a building type or a unit type. A building followed by a building indicates that the first is a prerequisite of the second. A building followed by a unit indicates that the building must be used to build the unit. ##### Attacks and ambushes. Enemy attacks and ambushes occur each time-step with 10% probability. An attack wipes out a random subset of the existing buildings. An ambush destroys one of the units the agent has produced. If the instructions indicated that the destroyed unit was required, the agent must produce the unit again. The agent may also need to rebuild the buildings required to construct that unit if the building was previously destroyed in an enemy attack, requiring the agent to consult earlier sections of the instructions. Messages to the agent notifying it about unit ambushes assume the form of a single integer, which identifies the type of unit ambushed. ##### Reward and termination. The agent receives a reward of 0 on each timestep and 1 for completing the instructions (producing all required units). The episode terminates when the instructions are completed or when a time limit runs out. The time limit is 30×30\times30 × the length of the instructions. ##### Modifications to the architectures to handle the large action space. There are 1512 build commands (3 Probes ×\times× 36 coordinates ×\times× 14 buildings) and 108 go-to commands (3 Probes ×\times× 36 coordinates). Buildings are chosen by coordinate and therefore there are as many as 576 train-unit commands (36 coordinates ×\times× 16 unit types). To handle this action space we adopt an autoregressive policy (Metz et al., [2017](#bib.bib10)), which allows the agent to select a variable-length sequence of actions between environment interactions. The agent first chooses a Probe or a coordinate. Choosing a coordinate containing a building selects that building and allows the agent to produce any unit that the building is capable of producing. Choosing a Probe allows the agent to construct buildings using that Probe or to move the Probe to a coordinate. Constructing a building requires choosing a building then an unoccupied coordinate. Invalid choices (e.g. telling a Probe to construct a building for which the prerequisite has not been constructed) result in a no-op. This dependence on the history of actions motivates the addition of the action GRU described in §[3.4](#S3.SS4 "3.4 Action sampling mechanism. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions"). #### 4.2.1 Generalization to longer instructions under implicit control flow demands ##### Training and evaluation. The aim of this experiment is to test the ability of the agents to learn implicit control flow strategies from short instructions and generalize to longer ones. All agents were trained in the StarCraft-inspired domain on randomly sampled instructions of length 1 to 5. Every million frames, we evaluated agent performance on 150 complete episodes with instructions of length 6 to 25. In this way we tested the ability of the agent to learn strategies for re-executing parts of the instructions affected by disruptive events, in a way that generalizes to longer instructions. ##### Results. Figure [4](#S4.F4 "Figure 4 ‣ 4.1 Baselines ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") shows the performance of all five architectures on the evaluation instructions, with performance binned by instructions length. CoFCA-S outperforms the baselines on all instructions lengths. We conjecture that Unstructured Memory has difficulty tracking its place in non-sequential control flow, and OSLK has difficulty performing large pointer movements through a sequence of single-steps. We conjecture CoFCA-S performs best because it can learn a logic of pointer movement that is independent of the size of the movement. For example, it might learn to scan backward to the first of a series of buildings destroyed by the enemy. Such logic can transfer from the short training instructions to longer evaluation instructions. #### 4.2.2 Generalization to longer instructions and deeper build trees ##### Training and Evaluation. This experiment reproduces the setup of the previous (§[4.2.1](#S4.SS2.SSS1 "4.2.1 Generalization to longer instructions under implicit control flow demands ‣ 4.2 StarCraft-inspired domain: implicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")) but during training, it restricts the depth of build-trees to three. Though the agent is free to perform larger jumps, the agent need only perform jumps of size three or less to complete these tasks. Meanwhile, we evaluate the agents on trees with unrestricted depth (up to 16, if the tree forms a chain). Thus these experiments increases the pressure on agents to perform longer jumps during evaluation than during training. ##### Results. This experiment highlights the advantages of the Scan mechanism which is designed to facilitate generalization from shorter jumps during training to longer jumps in evaluation. OLSK’s second-best performance makes sense given that it does not perform jumps at all, and is therefore less susceptible to overfitting on shorter jumps learned during training. ### 4.3 Minecraft-inspired domain: explicit control flow In the prior StarCraft-inspired domain the instructions themselves imposed a simple linear execution order, and the interesting non-linear control flow challenges arose from stochastic environment dynamics. In this section we explore non-linear control flow that is imposed by the instructions themselves through explicit control flow elements. Figure [5](#S4.F5 "Figure 5 ‣ 4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") provides an example of our domain (inspired by Sun et al. ([2020](#bib.bib17)) and the Minecraft video game). The agent navigates a 6×6666\times 66 × 6 gridworld in which resources, terrain, and merchants spawn randomly each episode. In each episode the agent is given new instructions containing subtasks interspersed with control-flow statements. The agent’s goal is to perform these subtasks in the order specified by the control flow. The agent’s observation includes a top-down view of this grid with channels encoding the presence of objects in the gridworld. The resources are *gold*, *iron*, and *wood*. The terrain includes impassible *walls* and *water* which can be bridged only if the agent possesses wood in its inventory. ![Refer to caption](/html/2102.13195/assets/figures/agent.png)![Refer to caption](/html/2102.13195/assets/figures/wood.png)![Refer to caption](/html/2102.13195/assets/figures/gold.png)![Refer to caption](/html/2102.13195/assets/figures/iron.png)![Refer to caption](/html/2102.13195/assets/figures/iron.png)![Refer to caption](/html/2102.13195/assets/figures/iron.png)![Refer to caption](/html/2102.13195/assets/figures/llama.png)![Refer to caption](/html/2102.13195/assets/figures/llama.png)![Refer to caption](/html/2102.13195/assets/figures/water.png)![Refer to caption](/html/2102.13195/assets/figures/water.png)![Refer to caption](/html/2102.13195/assets/figures/water.png)![Refer to caption](/html/2102.13195/assets/figures/water.png)![Refer to caption](/html/2102.13195/assets/figures/water.png)![Refer to caption](/html/2102.13195/assets/figures/tree.png)![Refer to caption](/html/2102.13195/assets/figures/tree.png)![Refer to caption](/html/2102.13195/assets/figures/tree.png)![Refer to caption](/html/2102.13195/assets/figures/tree.png) [⬇](data:text/plain;base64,d2hpbGUgbW9yZSBpcm9uIHRoYW4gZ29sZAogICBAbWluZSBpcm9uQAplbmR3aGlsZQppZiBtb3JlIG1lcmNoYW50cyB0aGFuIGlyb24KICAgXmluc3BlY3QgaXJvbl4KICAgJnNlbGwgZ29sZCYKZWxzZQogIG1pbmUgd29vZAplbmRpZg==) while more iron than gold    mine iron@ endwhile if more merchants than iron    inspect iron    sell gold else   mine wood endif Figure 5: Example of instructions and environment state in the Minecraft-inspired domain. Correct execution begins at the first line with evaluation of the *while* condition, which evaluates to true because there is more *iron* than *gold*. To perform the *mine iron* subtask, the agent must navigate to an iron resource (the orange arrow) and perform a *mine* action. This removes the iron resource from the environment but the *while* condition still evaluates to true because there is still more iron than gold. After mining a second iron resource (second orange arrow), the numbers of iron and gold resources are equal and the *while* condition evaluates to false. At this point, the number of *merchants* in the environment (depicted as llamas) exceeds the number of iron resources and the *if* condition evaluates to true. The agent must then execute *inspect iron* and *sell gold* but must skip the *mine wood* subtask. To execute *sell gold* the agent must navigate to a gold resource and mine it (first violet arrow), navigate to a merchant, and then sell the gold (second violet arrow), terminating the episode with a reward of 1. ##### Instructions. At the start of each episode, the agent receives instructions comprised of a list of lines, each corresponding to a subtask or a control-flow keyword: *if*, *else*, *endif*, *while*, or *endwhile*. These have the procedural semantics familiar from programming languages. (The indentation in Figure [5](#S4.F5 "Figure 5 ‣ 4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") is a visual aid not available to the agent.) Each line in the instructions consists of three symbols: the first encodes the control-flow keyword or indicates that the line is a subtask; the second encodes an action; the third encodes either the target resource for the action or the condition for an *if* or *while* line that requires resource comparisons; e.g. *more iron than gold* is true if the amount of iron in the environment exceeds the amount of gold. While the agent observes the entire instructions on each time-step, nothing in the observation indicates the agent’s progress through the instructions. ##### Actions and inventory. The agent acts by issuing verb-noun commands to a worker. The possible verbs are *mine*, *sell*, and *inspect*, while the possible nouns correspond to resources: *iron*, *gold*, and *wood*. The action-space of the agent is the cross-product of nouns and verbs. The movements of the worker in response to these commands are determined by a pre-trained RL agent. In response to a *mine resource* command, the worker navigates to the resource and executes a *mine* action, removing it from the map and adding it to the inventory. For a *sell resource* command, the worker executes a *sell* interaction on a merchant grid, which decrements the resource inventory. If the worker does not already possess the resource, it must first collect it using the *mine* action. To complete an *inspect resource* command, the worker must execute an *inspect* action on a resource grid (this does not change or remove the resource). If the agent’s inventory contains wood, the agent may move into a water grid, decrementing the wood in its inventory and removing the water grid (building a bridge). The agent observes the inventory as a list of integers encoding the quantity of each resource type. ##### Reward and termination. The agent receives a reward of 1 upon completion of all subtasks in the instructions, in the order specified by control flow. All other time steps provide a reward of 0. An episode terminates when the instructions are completed or if the worker performs a *mine* or *sell* action out of the order specified by the instructions, or if the agent exceeds a time-limit equal to 30×30\times30 × the length of the instructions. ![Refer to caption](/html/2102.13195/assets/figures/minecraft.png) ![Refer to caption](/html/2102.13195/assets/figures/minecraft-length.png) ![Refer to caption](/html/2102.13195/assets/figures/minecraft-compound.png) Figure 6: Results for experiments in the Minecraft-inspired domain. Y-axes represents cumulative reward on evaluation episodes. (Left) Generalization to longer instructions length, binned by instruction (§[4.3.1](#S4.SS3.SSS1 "4.3.1 Generalization to longer instructions with explicit control flow ‣ 4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")) length (Middle) Learning curves for generalization to longer instructions (§[4.3.1](#S4.SS3.SSS1 "4.3.1 Generalization to longer instructions with explicit control flow ‣ 4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")). (Right) Learning curves for generalization to instructions with novel compositions of control-flow (§[4.3.2](#S4.SS3.SSS2 "4.3.2 Generalization to novel compositions of explicit control flow ‣ 4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")). #### 4.3.1 Generalization to longer instructions with explicit control flow ##### Training and evaluation. The aim of this experiment is to test the agents’ ability to learn how to follow explicit control flow from short instructions and generalize to longer instructions. We trained the architectures on instructions of length 1 to 10 in our Minecraft-inspired domain, randomly sampled from a generative grammar (see Appendix). Every 100 gradient steps, we evaluated the performance of the agent for 500 time steps on instructions of length 11 to 50, also sampled from the same grammar. ##### Results. Figure [6](#S4.F6 "Figure 6 ‣ Reward and termination. ‣ 4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") (left and middle) provides the evaluation results. CoFCA-S and CoFCA outperform the three baselines, all of which fail to generalize entirely. In the case of OLSK this is again likely because the pointer cannot make individual movements greater than ±1plus-or-minus1\pm 1± 1, and instead must make a series of these small movements. Conversely OLSK-E is capable of making larger movements, but lacks the bidirectional GRU preprocessing step that would give it access to information about lines surrounding the ptthsuperscriptsubscript𝑝𝑡thp\_{t}^{\text{th}}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT th end\_POSTSUPERSCRIPT and enable it to make these larger movements judiciously. Instead, OLSK-E must derive this kind of information from its memory of previously visited lines. Inspecting the trajectories of OLSK-E, we observe that it ultimately gives up on learning to manipulate the pointer, ignoring the instruction and instead learning a prior over the subtasks. There is also a smaller but still significant gap between the generalization performance of CoFCA-S and CoFCA. In Figure [6](#S4.F6 "Figure 6 ‣ Reward and termination. ‣ 4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") (left) we break down the performance of all the agents after training for 70 million time steps as a function of the length of the instructions. As expected each agent’s performance degrades with increasing length of instructions, though for both CoFCA and CoFCA-S the drop in generalization performance from the shortest generalization lengths 10101010-20202020 to the longest generalization lengths of 40404040-50505050 is much smaller than for the baselines. #### 4.3.2 Generalization to novel compositions of explicit control flow ##### Training and evaluation. To test generalization to novel control flow compositions, we trained the agents in the MineCraft-inspired domain on instructions of length up to 10101010, excluding those that contain more than one type of control flow (though possibly more than one instance of the same type of control flow). Every 100 gradient steps, we evaluated the performance of the agent for 500 time steps on instructions of length up to 10101010, but only those that contained at least two different types of control flow in them. (The maximum instruction length for testing and training was the same so as not to confound the effects with length.) ##### Results. All architectures do better in this experiment because the instructions are shorter, but again CoFCA-S dominates. We attribute the weak performance of CoFCA to its dependence on the bidirectional GRU without the benefit of the Scan mechanism. We conjecture that a recurrent network that has only ever trained on instructions of one type is likely to generate noisy outputs when first encountering instructions with multiple types of control flows — e.g., seeing an *if* line following a *while* line. While the CoFCA-S architecture also depends on recurrence, the Scan mechanism ensures that the potentially noisy outputs of the GRU outside any given control-flow block are mostly ignored. 5 Conclusion ------------- This work contributes a neural network architecture with a novel attentional mechanism that moves pointers based on where the pointer should move next rather than how much it should move and also processes instructions using the current pointer position. As a result, our architecture allows RL agents to learn to follow instructions with implicit and explicit control-flow as well as to generalize better to longer and novel instructions. Our empirical work demonstrated this benefit in two domains. Acknowledgements ---------------- This work was made possible by the support of the Lifelong Learning Machines (L2M) grant from the Defense Advanced Research Projects Agency. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the views of the sponsors. A StarCraft environment details -------------------------------- ### A.1 Instruction grammar {grammar} ¡task¿ ::= { ‘,’ ¡line-type¿ } ¡line-type¿ ::= ¡building¿ — ¡unit¿ ### A.2 Generation of instructions We initially generate the build-tree as a random directed acyclic graph. Next we randomize the building production capabilities by assigning a random building to each unit (that building being the one capable of producing that unit). We generate instructions, unit by unit, randomly selecting each unit from those whose instructions are sufficiently short. For example, if the cap on instruction length is 5, we would exclude any unit that is produced by a building with a prerequisite chain exceeding a depth of 5. If the resulting instruction had length 3, we would repeat this process again but for instructions of length 2. We iterate these steps until the instruction reaches the desired length or no valid instructions are possible. ### A.3 Spawning of world objects Each episode begins with a Nexus building (per the original game) and three Probe workers. We choose the number of other initial buildings at random between 0 and 36 (the number of grids in our 6×6666\times 66 × 6 environment. We choose the starting location of all buildings uniformly at random (although no two buildings are permitted to occupy the same grid). The three Probe workers spawn at the Nexus (per the original game). B Minecraft environment details -------------------------------- ### B.1 Instruction grammar {grammar} ¡task¿ ::= { ‘,’ ¡line-type¿ } ¡line-type¿ ::= ¡subtask¿ — ¡while-expression¿ — ¡if-expression¿ ¡while-expression¿ ::= while ¡object¿ do ¡subtask-list¿ ¡if-expression¿ ::= ¡if-block¿ ¡else-block¿ — ¡if-block¿ ¡if-block¿ ::= if ¡object¿ do ¡subtask-list¿ ¡else-block¿ ::= else do ¡subtask-list¿ ¡subtask-list¿ ::= { ‘,’ ¡subtask¿ } ¡subtask¿ ::= ¡interaction¿ — ¡resource¿ ¡interaction¿ ::= inspect — pickup — transform ¡resource¿ ::= iron — gold — wood ### B.2 Generation of instructions Tasks are generated randomly. Most lines in the task are sampled uniformly at random from {*If*, *While*, *Subtask*}. If the current line is inside an if-clause and the preceding line is a subtask, then {*Else*, *EndIf*} is added to the list of randomly sampled line types. Similar rules apply to while-clauses (we add *EndWhile* to the list) and else-clauses (we add *EndIf*). ### B.3 Spawning of world objects At the beginning of each episode we choose the number n𝑛nitalic\_n of resources/merchants uniformly at random between 0 and 36 (the number of grids in our 6×6666\times 66 × 6 griworld). We then sample uniformly at random n𝑛nitalic\_n times from {*iron*,*gold*,*wood*,*merchant*}*iron**gold**wood**merchant*\left\{\emph{iron},\emph{gold},\emph{wood},\emph{merchant}\right\}{ iron , gold , wood , merchant } to select the candidate population of the gridworld. We then test the feasibility of the environment for the instruction by checking that the requisite resources exist for each subtask that the agent will have to perform. If the environment is deemed infeasible, we perform the aforementioned sampling process again. After 50 resamples, if we have still failed to generate a feasible environment, we generate a new instruction per section [B.2](#S2.SS2 "B.2 Generation of instructions ‣ B Minecraft environment details ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions"). Once we have generated a feasible population, if n≤30𝑛30n\leq 30italic\_n ≤ 30, we place water in a straight line through a random index and at a random horizontal/vertical orientation. Next we place the remaining n𝑛nitalic\_n resources/merchants and the agent each in unique, random, open grids. At this point, we check that the agent has access to a wood resource (with which to build a bridge), and if not, we remove the water from the map. Finally, we place walls at any open even-indexed tiles (this to ensure that walls do not cut off parts of the gridworld). ![Refer to caption](/html/2102.13195/assets/figures/long-jump1-50.png) (a) Generalization by condition block size. X-axis corresponds to instruction length; Y-axis is mean success per episode. ![Refer to caption](/html/2102.13195/assets/figures/starcraft-train.png) (b) Cumulative reward on training episodes for StarCraft environment ![Refer to caption](/html/2102.13195/assets/figures/minecraft-train.png) (c) Cumulative reward on training episodes for Minecraft environment C Analysis of long pointer movements in the Minecraft domain ------------------------------------------------------------- Here we present results assessing the agents’ capability to perform larger pointer movements than those learned during training. We trained the agents on instructions from the Minecraft domain with lengths sampled randomly from between 1 and 10 (the same training regimen as in §[4.3.1](#S4.SS3.SSS1.Px2 "Results. ‣ 4.3.1 Generalization to longer instructions with explicit control flow ‣ 4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions")). We evaluated the agent on a special instruction beginning with a failing condition-block (*if* or *while*) that extends to the end of the instruction, followed by a concluding subtask. Thus a successful agent will generally have to jump over the failing condition block to reach the final subtask. We varied the length of the failing condition block between 1 and 40 and noted each agent’s performance at each length. These results are shown in Figure [6(a)](#S2.F6.sf1 "6(a) ‣ B.3 Spawning of world objects ‣ B Minecraft environment details ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions"). This experiment identifies one of the key failure points of the CoFCA architecture that the Scan mechanism is intended to address. The CoFCA architecture is unlikely to sample pointer movements larger than those it was trained to perform. Concretely, if the agent has never seen a control-flow block larger than n𝑛nitalic\_n, and 𝐏𝐏\mathbf{P}bold\_P has always placed zero mass on pointer movements greater than ±nplus-or-minus𝑛\pm n± italic\_n, it is unlikely that it will ever place more than zero mass on those movements, even when longer control-flow blocks require them. This explains the precipitous drop in its performance in Figure [6(a)](#S2.F6.sf1 "6(a) ‣ B.3 Spawning of world objects ‣ B Minecraft environment details ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") as soon as the required jump exceeds the largest that it might have encountered in its training set. We also note the relatively strong performance of the unstructured memory architecture, comparable to CoFCA-S; this shows that the recurrence within the unstructured memory is able to handle longer condition blocks but is unable to deal with multiple control flows accounting for its relatively poor performance in the other generalization experiments. Finally, we note that OLSK (extended range) maintains consistently poor performance irrespective of the condition-block length because, as noted in §[4.3.1](#S4.SS3.SSS1 "4.3.1 Generalization to longer instructions with explicit control flow ‣ 4.3 Minecraft-inspired domain: explicit control flow ‣ 4 Experiments ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions"), this architecture ignores the instruction, having never learned to interpret it in the first place. D Discussion of training performance ------------------------------------- Figures [6(b)](#S2.F6.sf2 "6(b) ‣ B.3 Spawning of world objects ‣ B Minecraft environment details ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") and [6(c)](#S2.F6.sf3 "6(c) ‣ B.3 Spawning of world objects ‣ B Minecraft environment details ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") display training performance on the StarCraft and Minecraft domain. Training performance for the baselines was lower on the Minecraft domain because it requires more fine-grained control of the pointer. Results do not in any way compensate for the failure buffer discussed in §[3.6](#S3.SS6 "3.6 Failure buffer. ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") and that mechanism therefore depresses the performance of the algorithms. On the Minecraft domain, none of the baselines learned to consistently sequence substasks for longer instructions. E Pseudocode / schematics for baselines ---------------------------------------- This section provides pseudocode and schematics for our baselines. We indicate sections that deviate from the algorithms given in Fig. [1](#S3.F1 "Figure 1 ‣ 3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions") with red highlighting. In these sections, we retain the variable names given in Section  [3](#S3 "3 CoFCA-S: The Control Flow Comprehension Architecture - Scan ‣ Reinforcement Learning of Implicit and Explicit Control Flow in Instructions"). For review: * • 𝐌𝐌\mathbf{M}bold\_M: an encoding of the instructions. * • ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT: the integer pointer into 𝐌𝐌\mathbf{M}bold\_M. * • π𝜋\piitalic\_π: the policy, implemented as a neural network. * • 𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT: the observation for the current time-step. * • 𝐏𝐏\mathbf{P}bold\_P: a collection of possible pointer movement distributions. * • ϕitalic-ϕ\phi{}italic\_ϕ: a neural network that chooses among these distributions. * • ctsubscript𝑐𝑡c\_{t}italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT: a binary value that permits or prevents movement of ptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. * • ψ𝜓\psi{}italic\_ψ: a neural network that determines the value of the gate value. Unstructured Memory 1:  𝐌←bag-of-wordsθ(𝐈)←𝐌subscriptbag-of-words𝜃𝐈\mathbf{M}\leftarrow\text{bag-of-words}\_{\theta}\left(\mathbf{I}\right)bold\_M ← bag-of-words start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_I ) 2:  initialize 𝐡0∈ℝHsubscript𝐡0superscriptℝ𝐻\mathbf{h}\_{0}\in\mathbb{R}^{H}bold\_h start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT 3:  𝐇←BI−GRU⁡(𝐌)←𝐇BIGRU𝐌\mathbf{H}\leftarrow\operatorname{BI-GRU}\left(\mathbf{M}\right)bold\_H ← start\_OPFUNCTION roman\_BI - roman\_GRU end\_OPFUNCTION ( bold\_M ) {running from first to last index of 𝐈𝐈\mathbf{I}bold\_I} 4:  for time step t𝑡titalic\_t in episode do 5:     𝐚t∼π(𝐱t,𝐌pt)similar-tosubscript𝐚𝑡𝜋subscript𝐱𝑡subscript𝐌subscript𝑝𝑡\mathbf{a}\_{t}\sim\pi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}}\right)bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_π ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) 6:     𝐡t←ϕ(𝐱t,𝐇,𝐚t,𝐡t−1)←subscript𝐡𝑡italic-ϕsubscript𝐱𝑡𝐇subscript𝐚𝑡subscript𝐡𝑡1{\color[rgb]{1,0,0}\mathbf{h}\_{t}}\leftarrow\phi\left(\mathbf{x}\_{t},{\color[rgb]{1,0,0}\mathbf{H},}\mathbf{a}\_{t},{\color[rgb]{1,0,0}\mathbf{h}\_{t-1}}\right)bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← italic\_ϕ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_H , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_h start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) 7:     ct∼ψ(𝐱t,𝐡t,𝐚t)similar-tosubscript𝑐𝑡𝜓subscript𝐱𝑡subscript𝐡𝑡subscript𝐚𝑡c\_{t}\sim\psi\left(\mathbf{x}\_{t},\mathbf{h}\_{t},\mathbf{a}\_{t}\right)italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_ψ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 8:  end for OLSK 1:  p0←0←subscript𝑝00p\_{0}\leftarrow 0italic\_p start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ← 0 2:  𝐌←bag-of-wordsθ(𝐈)←𝐌subscriptbag-of-words𝜃𝐈\mathbf{M}\leftarrow\text{bag-of-words}\_{\theta}\left(\mathbf{I}\right)bold\_M ← bag-of-words start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_I ) 3:  initialize 𝐡0∈ℝHsubscript𝐡0superscriptℝ𝐻\mathbf{h}\_{0}\in\mathbb{R}^{H}bold\_h start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT 4:  for time step t𝑡titalic\_t in episode do 5:     𝐚t∼π(𝐱t,𝐌pt)similar-tosubscript𝐚𝑡𝜋subscript𝐱𝑡subscript𝐌subscript𝑝𝑡\mathbf{a}\_{t}\sim\pi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}}\right)bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_π ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) 6:     𝐮t,𝐡t←ϕ(𝐱t,𝐌pt,𝐚t,𝐡t−1)←subscript𝐮𝑡subscript𝐡𝑡 italic-ϕsubscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡subscript𝐡𝑡1\mathbf{u}\_{t},{\color[rgb]{1,0,0}\mathbf{h}\_{t}}\leftarrow\phi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t},{\color[rgb]{1,0,0}\mathbf{h}\_{t-1}}\right)bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← italic\_ϕ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_h start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) {𝐮t∈ℝ3subscript𝐮𝑡superscriptℝ3\mathbf{u}\_{t}\in\mathbb{R}^{3}bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT 3 end\_POSTSUPERSCRIPT} 7:     𝐮~t←softmax⁡(𝐮t)←subscript~𝐮𝑡softmaxsubscript𝐮𝑡\tilde{\mathbf{u}}\_{t}\leftarrow\operatorname{softmax}\left(\mathbf{u}\_{t}\right)over~ start\_ARG bold\_u end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← roman\_softmax ( bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 8:     dt∼Cat⁡(𝐮~t)similar-tosubscript𝑑𝑡Catsubscript~𝐮𝑡d\_{t}\sim\operatorname{Cat}\left(\tilde{\mathbf{u}}\_{t}\right)italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ roman\_Cat ( over~ start\_ARG bold\_u end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 9:     ct∼ψ(𝐱t,𝐌pt,𝐚t)similar-tosubscript𝑐𝑡𝜓subscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡c\_{t}\sim\psi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t}\right)italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_ψ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 10:     pt+1←pt+ctdt←subscript𝑝𝑡1subscript𝑝𝑡subscript𝑐𝑡subscript𝑑𝑡p\_{t+1}\leftarrow p\_{t}+c\_{t}d\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ← italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 11:  end for OLSK with extended range 1:  p0←0←subscript𝑝00p\_{0}\leftarrow 0italic\_p start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ← 0 2:  𝐌←bag-of-wordsθ(𝐈)←𝐌subscriptbag-of-words𝜃𝐈\mathbf{M}\leftarrow\text{bag-of-words}\_{\theta}\left(\mathbf{I}\right)bold\_M ← bag-of-words start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_I ) 3:  initialize 𝐡0∈ℝHsubscript𝐡0superscriptℝ𝐻\mathbf{h}\_{0}\in\mathbb{R}^{H}bold\_h start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT 4:  for time step t𝑡titalic\_t in episode do 5:     𝐚t∼π(𝐱t,𝐌pt)similar-tosubscript𝐚𝑡𝜋subscript𝐱𝑡subscript𝐌subscript𝑝𝑡\mathbf{a}\_{t}\sim\pi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}}\right)bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_π ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) 6:     𝐮t,𝐡t←ϕ(𝐱t,𝐌pt,𝐚t,𝐡t−1)←subscript𝐮𝑡subscript𝐡𝑡 italic-ϕsubscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡subscript𝐡𝑡1\mathbf{u}\_{t},{\color[rgb]{1,0,0}\mathbf{h}\_{t}}\leftarrow\phi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t},{\color[rgb]{1,0,0}\mathbf{h}\_{t-1}}\right)bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← italic\_ϕ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_h start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT ) {𝐮t∈ℝ2Nsubscript𝐮𝑡superscriptℝ2𝑁\mathbf{u}\_{t}\in\mathbb{R}^{2N}bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT 2 italic\_N end\_POSTSUPERSCRIPT} 7:     𝐮~t←softmax⁡(𝐮t)←subscript~𝐮𝑡softmaxsubscript𝐮𝑡\tilde{\mathbf{u}}\_{t}\leftarrow\operatorname{softmax}\left(\mathbf{u}\_{t}\right)over~ start\_ARG bold\_u end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← roman\_softmax ( bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 8:     dt∼Cat⁡(𝐮~t)similar-tosubscript𝑑𝑡Catsubscript~𝐮𝑡d\_{t}\sim\operatorname{Cat}\left(\tilde{\mathbf{u}}\_{t}\right)italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ roman\_Cat ( over~ start\_ARG bold\_u end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 9:     ct∼ψ(𝐱t,𝐌pt,𝐚t)similar-tosubscript𝑐𝑡𝜓subscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡c\_{t}\sim\psi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t}\right)italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_ψ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 10:     pt+1←pt+ctdt←subscript𝑝𝑡1subscript𝑝𝑡subscript𝑐𝑡subscript𝑑𝑡p\_{t+1}\leftarrow p\_{t}+c\_{t}d\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ← italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 11:  end for CoFCA 1:  p0←0←subscript𝑝00p\_{0}\leftarrow 0italic\_p start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ← 0 2:  𝐌←bag-of-wordsθ(𝐈)←𝐌subscriptbag-of-words𝜃𝐈\mathbf{M}\leftarrow\text{bag-of-words}\_{\theta}\left(\mathbf{I}\right)bold\_M ← bag-of-words start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_I ) 3:  for time step t𝑡titalic\_t in episode do 4:     𝐇←BI−GRU⁡(𝐌,𝐱t)←𝐇BIGRU𝐌subscript𝐱𝑡\mathbf{H}\leftarrow\operatorname{BI-GRU}\left(\mathbf{M},\mathbf{x}\_{t}\right)bold\_H ← start\_OPFUNCTION roman\_BI - roman\_GRU end\_OPFUNCTION ( bold\_M , bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) {Here 𝐇𝐇\mathbf{H}bold\_H refers to the *last* output of BI−GRU⁡(𝐌)BIGRU𝐌\operatorname{BI-GRU}\left(\mathbf{M}\right)start\_OPFUNCTION roman\_BI - roman\_GRU end\_OPFUNCTION ( bold\_M )} 5:     𝐏←ξ(𝐇)←𝐏𝜉𝐇\mathbf{P}\leftarrow\xi\left(\mathbf{H}\right)bold\_P ← italic\_ξ ( bold\_H ) { ξ𝜉\xiitalic\_ξ is a linear projection} 6:     𝐚t∼π(𝐱t,𝐌pt)similar-tosubscript𝐚𝑡𝜋subscript𝐱𝑡subscript𝐌subscript𝑝𝑡\mathbf{a}\_{t}\sim\pi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}}\right)bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_π ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) 7:     𝐮t←ϕ(𝐱t,𝐌pt,𝐚t)←subscript𝐮𝑡italic-ϕsubscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡\mathbf{u}\_{t}\leftarrow\phi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t}\right)bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← italic\_ϕ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 8:     𝐮~t←softmax⁡(𝐮t)←subscript~𝐮𝑡softmaxsubscript𝐮𝑡\tilde{\mathbf{u}}\_{t}\leftarrow\operatorname{softmax}\left(\mathbf{u}\_{t}\right)over~ start\_ARG bold\_u end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ← roman\_softmax ( bold\_u start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 9:     dt∼Cat⁡(𝐏𝐮~t)similar-tosubscript𝑑𝑡Cat𝐏subscript~𝐮𝑡d\_{t}\sim\operatorname{Cat}\left(\mathbf{P}\tilde{\mathbf{u}}\_{t}\right)italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ roman\_Cat ( bold\_P over~ start\_ARG bold\_u end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 10:     ct∼ψ(𝐱t,𝐌pt,𝐚t)similar-tosubscript𝑐𝑡𝜓subscript𝐱𝑡subscript𝐌subscript𝑝𝑡subscript𝐚𝑡c\_{t}\sim\psi\left(\mathbf{x}\_{t},\mathbf{M}\_{p\_{t}},\mathbf{a}\_{t}\right)italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_ψ ( bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_M start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) 11:     pt+1←pt+ctdt←subscript𝑝𝑡1subscript𝑝𝑡subscript𝑐𝑡subscript𝑑𝑡p\_{t+1}\leftarrow p\_{t}+c\_{t}d\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ← italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_c start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT 12:  end for Memory 𝐌𝐌\mathbf{M}bold\_Mptsubscript𝑝𝑡p\_{t}italic\_p start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPTϕitalic-ϕ\phiitalic\_ϕπ𝜋\piitalic\_πdtsubscript𝑑𝑡d\_{t}italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT𝐚tsubscript𝐚𝑡\mathbf{a}\_{t}bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPTsample (d) Schematic of OLSK / OLSK (extended-range). 𝐈𝐈\mathbf{I}bold\_I𝐌𝐌\mathbf{M}bold\_M𝐇𝐇\mathbf{H}bold\_H𝐱tsubscript𝐱𝑡\mathbf{x}\_{t}bold\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT𝐚tsubscript𝐚𝑡\mathbf{a}\_{t}bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPTGRUembedBidirectional GRUsample𝐡tsubscript𝐡𝑡\mathbf{h}\_{t}bold\_h start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT (e) Schematic of Unstructured Memory. Figure 7: Schematics for baselines. Note that the schematic for CoFCA does not differ from CoFCA-S and is therefore omitted. F Hyperparameters ------------------ ### F.1 StarCraft | | | | | | | | --- | --- | --- | --- | --- | --- | | | CoFCA-S | CoFCA | Unstructured Memory | OLSK | OLSK-E | | convolution hidden sizes | 250 | 250 | 150 | 250 | 250 | | convolution kernel sizes | 2 | 2 | 2 | 2 | 2 | | convolution strides | 1 | 1 | 1 | 1 | 1 | | ϕitalic-ϕ\phiitalic\_ϕ hidden size | 250 | 250 | 200 | 200 | 200 | | E𝐸Eitalic\_E | 200 | 100 | 100 | 150 | 150 | | entropy coefficient | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | | learning rate | 8e-5 | 7.5e-5 | 4e- | 4e-05 | 4e-05 | | L𝐿Litalic\_L | 3 | 2 | NA | NA | NA | | time steps per gradient update | 35 | 30 | 35 | 30 | 30 | | gradient steps per update | 6 | 7 | 7 | 7 | 7 | ### F.2 Minecraft | | | | | | | | --- | --- | --- | --- | --- | --- | | | CoFCA-S | CoFCA | Unstructured Memory | OLSK | OLSK-E | | convolution hidden sizes | 32,32 | 64,32 | 32,32 | 64,16 | 64,16 | | convolution kernel sizes | 2,2 | 2,2 | 2,2 | 2,2 | 2,2 | | convolution strides | 2,2 | 2,2 | 2,2 | 2,2 | 2,2 | | ϕitalic-ϕ\phiitalic\_ϕ hidden size | 128 | 128 | 64 | 64 | 64 | | E𝐸Eitalic\_E | 64 | 32 | 64 | 32 | 32 | | entropy coefficient | 0.015 | 0.015 | 0.015 | 0.015 | 0.015 | | learning rate | 0.0025 | 0.0025 | 0.0025 | 0.0025 | 0.0025 | | L𝐿Litalic\_L | 2 | 9 | NA | NA | NA | | time steps per gradient update | 25 | 25 | 25 | 25 | 25 | | gradient steps per update | 2 | 2 | 2 | 2 | 2 |
7f3032b1-61e3-4d71-84a6-3c2c197e2048
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Using GPT-3 to augment human intelligence A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox. It is like summoning an alien intelligence. I post at night, before crawling into bed next to my kids, and wake in the morning to find my inbox filled with weird and alien-brained stuff. Some of it is hallucinogenic crazy, of course. But more often than not, what people have sent helps me accelerate my research by weeks: reading lists; introductions to other researchers; corrections of flaws in my reasoning. I’ve been able to progress faster over the last year, since starting [my Substack](https://escapingflatland.substack.com/), than in the five previous years combined. As Slime Mold Time Mold says, [Enough eyeballs, and all research bugs are shallow](https://slimemoldtimemold.com/2022/05/09/job-posting-reddit-research-czar/). The problem is that summoning an alien intelligence is pretty expensive. It takes me something like 20-100 hours to write an essay good enough to wake it from its slumber! Therefore, it is intriguing to realize what I am doing is, in fact, *prompt engineering*. Prompt engineering is the term AI researchers use for *the art of writing prompts that make a large language model output what you want.* Instead of directly formulating what you want the program to do, you input a string of words to tickle the program in such a way it outputs what you are looking for. You ask a question, or you start an essay, and then you prompt the program to react, to finish what you started. When I’ve been doing this with [GPT-3,](https://en.wikipedia.org/wiki/GPT-3) a 175 billion parameter language model, it has been uncanny how much it reminds me of blogging. When I’m writing this, from March through August 2022, large language models are not yet as good at responding to my prompts as the readers of my blog. But their capacity is improving fast and the prices are dropping. Soon everyone can have an alien intelligence in their inbox. GPT-3 as a tool for thought --------------------------- This essay will have two parts. In the first, I describe my personal experience using GPT-3 as a tool for thought. As we'll see, GPT-3 can be used to search for knowledge that is otherwise hard to find. It can also be used to integrate knowledge from disconnected domains. It is, I will argue, a new interface for the internet. In the second part of the essay, I have collected a series of prompt patterns I have picked up. It is a distillation of informal observations and rules of thumb about how to use these systems to explore new domains of knowledge. Some people already do most of their learning by prompting GPT-3 to write custom-made essays about things they are trying to understand. I’ve talked to people who prompt GPT-3 to give them legal advice and diagnose their illnesses (for an example of how this looks, see this footnote[[1]](#fnolilvhaiqgf)). I’ve talked to men who let their five-year-olds hang out with GPT-3, treating it as an eternally patient uncle, answering questions, while dad gets on with work. In subreddits where I have discussed this, a few users have been alarmed by these use patterns. GPT-3 is by no means a reliable source of knowledge. What it says is nonsense more often than not! Like the demon in *The Exorcist,*language models only adds enough truth to twist our minds and make us do stupid things [like claiming it is sentient and getting ourselves suspended](https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917). Here’s a graph detailing how factually accurate GPT-3 is compared to human experts and other (larger or more fine-tuned) language models. In plain English, what it says is that *if you use GPT-3 to diagnose a weird rash on your leg, there is a 55 percent risk it starts hallucinating*—which is the technical term for *making stuff up.* [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F417bd6b3-9a58-444e-96d7-af2cb21f8e8b_860x402.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F417bd6b3-9a58-444e-96d7-af2cb21f8e8b_860x402.png) This, however, does not mean you cannot use it as an epistemic tool. It just means *you need to be pretty sophisticated to wrangle the truth out of it.* As [Matt Webb writes](https://interconnected.org/home/2020/09/04/idea_machine), in a slightly different context: > **Using GPT-3 is work**, it’s not a one-shot automation like spellcheck or autocomplete. It’s an interactive, investigative process, and it’s down to the human user to *interview* GPT-3. There will be people who become expert at dowsing the A.I., just as there are people who are great at searching using Google or finding information in research libraries. I think the skill involved will be similar to being a good improv partner, that’s what it reminds me of. > > What I'm describing in this essay is, in other words, an art. It is not something that lives in the language model; it lives in the interplay between the model and its user. It is not about outsourcing your thinking to a machine; it is about [using artificial intelligence to augment your thinking](https://distill.pub/2017/aia/). It will be several years yet until these types of conversations will be worth it for the average user; the models will need to get bigger, and more [finetuned for factual accuracy](https://ai.googleblog.com/2021/03/progress-and-challenges-in-long-form.html), before they give more than they confuse. But by pushing GPT-3 to its limits, we can already glimpse that future and discern some of its properties. How to access GPT-3 ------------------- To access GPT-3, you [set up an account](https://auth0.openai.com/u/signup/identifier?state=hKFo2SBhLWFra3RDMDFQOVo0LWM0Nk5Oc1ozR1QyM0Q5Nk1UU6Fur3VuaXZlcnNhbC1sb2dpbqN0aWTZIDJObHFmOFh0aUdrQU9PNklBYjlDcWdKYW8ybHZhdm8xo2NpZNkgRFJpdnNubTJNdTQyVDNLT3BxZHR3QjNOWXZpSFl6d0Q) at OpenAI. Then you click on Playground, which brings you to this workspace: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/cbcceca5e5047d2ea8b646c3433e7799585289ebfb349b48.png) In the text field, you write your prompt. To ask GPT-3 for a reaction, you press submit. On the left side, there is a drop-down menu where you can select which language model to use, but I recommend using the default setting *text-davinci-002,*which is the most capable model and has been fine-tuned to [understand human intentions](https://openai.com/blog/instruction-following/#fn1)). This allows you to be a bit sloppier in your prompting. There are also a bunch of parameters you can adjust on the left side, such as “temperature” and “frequency penalty”. These explain themselves if you let the cursor hover above them. What you are interfacing with, this alien intelligence, is, at its core, a large neural network that has been trained by reading the internet, trying to predict what the next word will be. This might not sound particularly useful. But it turns out the class of problems that can be reformulated as text predictions is vast. If you prompt GPT-3 by typing, for example, “Who was the first pope?” it will predict that the internet might have followed that up with the statement: “There is no definitive answer to this question as there is no agreed-upon list of popes. However, the first pope according to the most commonly accepted list is Linus, who reigned from 67-79 AD.” And if you prompt it further, asking it to recount Linus’ biography, it finds another predictable thing to answer: “Linus was born in Rome and was a contemporary of the Apostles. He became the Bishop of Rome after the death of Peter the Apostle and held the position for twelve years. He is mentioned in the New Testament epistles of Paul and is considered a saint by the Catholic and Orthodox churches.” In other words, predicting the next word allows you to extract knowledge out of the semantic mess that is the internet. You are not accessing anything that doesn’t already exist out there. But the *way* you access it is radically different. We can think of it as a new interface for the internet. Language models are internet interfaces --------------------------------------- Using a language model to access the internet is a bit like programming in Python instead of C. When programming in Python, you lose computational efficiency by not manually handling garbage collection and updating pointers and so on. But the upshot is that you can increase the speed at which you can execute ideas, which is more valuable than computational efficiency in many cases. Similarly, when navigating the internet by conversing with an AI, you can more rapidly navigate into weird places by asking repeated follow-up questions than you could on the base layer internet. But this comes at a cost, which currently is steep. If I would have gone to [the Wikipedia page of Pope Linus](https://en.wikipedia.org/wiki/Pope_Linus), instead of accessing it indirectly through GPT-3, I would have gotten a more detailed biography. I would also have realized that GPT-3 was feeding me nonsense when it claimed Linus was born in Rome. Rome might be the most predictable birthplace for a pope, but the truth is that Linus, unpredictably, was born in Volterra. He also died two years earlier than GPT-3 claimed. And he was not the first pope. Later in the essay, I will detail a few prompt patterns that can help you catch these types of errors. But for now, it is enough to point out that GPT-3 *is* unreliable. Yet it also enables a new way of traversing the internet—sideways and at the speed of conversation. And this is valuable because you can reach places you would never have reached on the internet itself. Using GPT-3 as a research assistant ----------------------------------- Most questions I have, Google and Wikipedia can answer within the stretch of a minute. But now and again I end up with a question I don’t know how to google. Not being able to answer them puts me in such a state of frenzy that I end up ordering 14 dissertations to my local library and then I spend the next three months researching and writing an essay. Maybe we can use GPT-3 to answer these questions instead? Currently, I’m researching how we can structure online education around apprenticeships and communities of practice. I don’t know how to google this. And I’m not entirely sure who would be the expert here either, so I can’t just email someone to settle my questions. I’ll switch tabs and see what GPT-3 comes up with. My prompts are bolded. Comments in brackets. > **Write an essay about how we can structure online education around apprenticeship models and communities of practice** > > The apprenticeship model of education has a long and successful history, dating back to the Middle Ages [sic]. In recent years, there has been a renewed interest in apprenticeship [sic] as a way to structure online education. > > The apprenticeship model has several key features that make it well suited for online education: > > 1. Apprenticeship is a hands-on, experiential learning model. > 2. Apprenticeship is based on the mentorship of a more experienced practitioner. > 3. Apprenticeship emphasizes learning by doing, rather than passively receiving information. > 4. Apprenticeship is social, collaborative, and often takes place in a community of practice. > 5. Apprenticeship is flexible and can be adapted to individual learners' needs and interests. > 6. Apprenticeship can be used to learn a wide variety of skills and knowledge, from academic subjects to trade skills. > 7. Apprenticeship is a time-tested model of education that has proven to be effective in preparing learners for real-world success. > > The key to successful implementation of the apprenticeship model in online education is to create a community of practice where apprentices can learn from more experienced practitioners. > > **This can be done in several ways.** For example, online courses can be designed around apprenticeship models, with learners working on projects together and receiving feedback from their mentors. Alternatively, existing online communities of practice can be leveraged to provide apprenticeship opportunities for learners. > > **There are several problems with this, however.** First, not all learners have access to mentors or experienced practitioners. Second, the asynchronous nature of online learning can make it difficult to provide the timely feedback that is essential for effective apprenticeship. Finally, online learners can often feel isolated and disconnected from their peers, which can hinder their ability to learn and progress. > > **One problem with apprenticeship models is that it is hard to scale because the novices place demands on the attention of the masters, which makes it harder for them to do the work that the novices are supposed to model. One way to get around this is** to have a community of novices, all of whom are working on the same thing. Then the masters can take turns helping the novices, and the novices can help each other. This way, the novices are not a burden on any one master, and the community as a whole can benefit from the collective wisdom of the masters. **Another way is** to have online discussion forums, where the novices can post their work and get feedback from their peers, as well as from the masters. This way, the novices can get help and feedback when they need it, and the masters can provide guidance without being overwhelmed. > > What GPT-3 outputs here is not the first thing it came up with. I reran most of the prompts several times, and cherry-picked the most generative answers—this isn’t cheating, it is good practice. I also cut out parts where I felt that it went off in uninteresting directions and reprompted it to take a different path. When prompting, I like to click the undo bottom and resubmit the prompt at least five times, to get a broad sample. First I run the prompt on temperature 0 (which gives me the most predictable answer) and then I turn the temperature up to make it take more risks on less predictable phrases so I can get a wider sample. So, did GPT-3 teach me anything new here? I have already written [an essay about online apprenticeships](https://escapingflatland.substack.com/p/apprenticeship-online?s=w) (and thus have prompted the internet for its response) so most of the claims GPT-3 makes are known to me. Yet, there are a few points I haven’t seen raised before - for example, the observation that the feeling of isolation might interfere with learning. I don’t know if this is true, but it is worth considering. I certainly couldn’t have googled my way to many of the points GPT-3 made.[[2]](#fnpgmh6tp4p3b) Some useful prompt patterns, and some anti-patterns to avoid ------------------------------------------------------------ Having looked more closely at a particular session, let’s look at some general prompt patterns. (If you want to see another session, click on this footnote[[3]](#fn4aw4sinwke).) How can we prompt GPT-3 to surface knowledge that is hard to find by googling? **Make GPT-3 impersonate two eminent thinkers and have them debate a point.**Anyone whose words exist on the internet in large enough quantities, GPT-3 can impersonate. This is experientially one of the strangest things about this alien mind that is GPT-3: sometimes, as you chat, you can sense it slowly morphing into somebody else. You can use this property to set up debates between people, as a tool to explore a topic from several angles: ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F2d6c8df7-bb2a-4888-9c1b-73960989dbef_1346x1358.png)*This debate was prompt engineered by* [*Nick Cammarata*](https://twitter.com/nickcammarata)*, a researcher at OpenAI. I am not good enough to wrangle something like this out of GPT-3.* **Make GPT-3 give feedback on your ideas.**You can type in ideas you are working on and ask GPT-3 to react to them, expand them, and tell you what it finds interesting and why. One user I talked to prompts GPT-3 to think it is Steve Jobs and proceeds to discuss new venture ideas and designs with it. **Linguistic bubbles.**Being that the neural net will take different personas depending on how you talk to it, there is a risk of *linguistic bubbles.*This is a phrase the user [Viliam](https://www.lesswrong.com/users/viliam) used when I prompted LessWrong to discuss how large language models will affect learning. Depending on which phrases you use, the language model will draw on different pools of knowledge, so if your initial phrases are misguided, it could end up feeding you ever-escalating misdirection, causing you to write more and more misguided prompts. A linguistic bubble is, in other words, analogous to a filter bubble—once you enter it, it gradually distorts your perception of reality, pulling you deeper into madness. Or, so the theory goes. You want to talk about evolution, but you happen to use the phrase *irreducible complexity*instead*-* which is dog whistle language for creationism - and by that mistake, you are gradually led by the AI into a cult. Stranger things have happened. I thought this would be a big problem, but it doesn’t seem so in practice. Mainstream chatbots are increasingly trained using reinforcement learning to follow a politically acceptable line. So if GPT-3 thinks it is a normal AI chatbot being asked about irreducible complexity, it will answer: “The argument of irreducible complexity claims that certain biological systems are too complex to have arisen through natural selection and evolution. However, there is no scientific evidence to support this claim.” If you *want* to be led into a cult, you need to be more creative. Here’s how to do it. You prompt GPT-3 to think that it *is* Pope Linus. You can do that by providing a few samples from his epistles. And then you let him, Pope Linus the AI, know that he has just ingested a hefty dose of psilocybin and is having a vision of getting sucked down into hell, where he finds Peter the Apostle chained to a stone, crying … and then, in *that* context, you ask GPT-3 to explain the irreducible complexity of nature. “Behold the irreducible complexity of the human eye!” it replies. “The eye that is composed of many parts, each of which is necessary for vision! If any one part is missing, the eye will not work! This is why evolution cannot explain the eye - it is too complex to have arisen through gradual steps!” You can get linguistic bubbles, but it takes some effort. **Ask GPT-3 for counterarguments against a claim.**If I’m thinking about something, I often find it useful to ask GPT-3 why I’m wrong. This is something that can be hard to google. I also use this to test claims GPT-3 outputs, as in: “That was an interesting idea, GPT-3! Now, can you iron man the case against what you just said?” If this fails to give a good counterargument, I’ll edit the history of the conversation and make it seem like GPT-3 was about to make the opposite claim as the one it did, and then I rerun the conversation along the alternate path. **A few more prompt patterns that can help you catch factual errors in GPT-3 output:** 1. You can rerun the prompt several times to see if the output contradicts itself. For this to work, you need to turn up the *temperature*dial. 2. You can provide model answers, to give GPT-3 a sense of what a good answer would look like. When AI researchers test language models on various cognitive tasks, they usually provide 5 question-answer pairs before they ask a question. Here is an example of[an ambitious prompt that uses this pattern in an attempt to make GPT-3 more truthful](https://beta.openai.com/playground/p/iyhUcPBpoUGRync7pZEhSUJf?model=text-davinci-002). 3. You can add “Answers by a SOTA AI” to the prompt, which makes GPT-3 think it is an AI finetuned to be factually correct. This reduces the error rate. 4. It also sometimes helps to add "reason step by step" to the prompt when applicable. **You can ask newer language models to provide links to research papers to back up their claims.**With newer language models (like [Gopher](https://www.deepmind.com/blog/language-modelling-at-scale-gopher-ethical-considerations-and-retrieval) or [this experimental version of GPT-3](https://openai.com/blog/webgpt/)), you can prompt the neural net to provide links to web pages detailing the statements it is making. This will allow you to move fast in the conversations layer of the internet, and then, when you want to zoom in, you ask the neural net to drop you off at an appropriate place, by saying “Can you provide links to three papers that discuss this? Also, please provide short summaries so I know if they contain what I am looking for.” Here’s an example of this trick being used in a conversation with Gopher, Google’s 280 billion parameter model: ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe46c9036-75c0-431a-81b8-12c83825af14_860x612.png)**Prompt GPT-3 to draw analogies between knowledge domains.**Some users claim to be able to get GPT-3 to explain things with analogies to other domains, i.e. not only can you prompt it to think it is a doctor, and ask it to diagnose the weird rash on your leg; but you can explain to it that, doc, I don’t understand these medical terms, can’t you explain it with multivariable calculus instead? And it will *do* that. As the language models scale and this gets easier, it will probably have all sorts of interesting consequences. As Nick Cammarata explains in [this Twitter thread](https://twitter.com/nickcammarata/status/1520815612477550593?s=20&t=OGx3iqhgDu5b3fdShz4h6Q): > the big thing people don’t get about language models (nor did I for first hundred hrs) is there’s something special when it’s your doctor & lawyer & everything combined. You start to chat with it about all the things you care about and the boundaries fade. More like genius friend > > > also found it amazing how few tangents I needed to go on. A lot of my mind works on analogy. If I’m talking to a doc about a medical thing and want to use an analogy to gradient descent or something I usually can’t, but with a model it knows (some of) all the fields so I can > > > This made me acutely aware during human convos how often I wasn’t saying what I really meant bc it required an analogy they wouldn’t get or something they don’t know about. I realized how much of convo is going on tangents to build building blocks just to explain other things > > How far can we take it? ----------------------- GPT-3 can be incredibly accurate when it comes to high-level concepts. It can explain qubits and cell membranes and thermodynamics. But if you press it to give more detail, it rapidly deteriorates. It reminds me of exploring a landscape in a video game (or at least how landscapes behaved in video games when I played them 20 years ago). At first, the world strikes you as endless and breathtakingly vivid. But then, as you start playing around the edges, you suddenly realize, wait a minute, there’s a wall here! These woods are a stage wall! For GPT-3, these stage walls can take several forms. Either, GPT-3 starts repeating itself, answering follow-up questions by restating what it has already said. Or, it descends into *hallucinations*, which is language models’ all too human tendency to just make stuff up instead of admitting that they don’t have a clue of what they are talking about. Is this a fundamental limitation of language models? Is there a limit to how far we can “see” with the transformer architecture? Or will the walls of their “worlds” gradually pull back, allowing us to roam deeper and deeper into the details, until, one day, we can’t find the edge? This is an open question. Gary Marcus is [highly skeptical](https://garymarcus.substack.com/p/what-does-it-mean-when-an-ai-fails?s=r). [Scott Alexander less so](https://garymarcus.substack.com/p/what-does-it-mean-when-an-ai-fails?s=r). Looking at language models more capable than GPT-3, it seems the walls are for as of now still being pulled back. Take this conversation with [Chinchilla](https://arxiv.org/abs/2203.15556), where [Toby Shevlane](https://twitter.com/TShevlane), a research scientist at DeepMind, is testing Chinchilla’s capacity to find analogies between concepts: ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F5ff31ed1-0840-4369-a3f8-98d6edcf1ee9_491x476.png)“In Inception, what’s Mal’s argument to Cobbe?” This is the type of detail that would make GPT-3 hallucinate[[4]](#fnxk83ygl0yi). But with Chinchilla, the walls have pulled outward. We can roam further out into the weeds of details. And Chinchilla, which was the most capable language model when I started writing this essay, in March 2022, is not even the most capable model anymore. Now as I’m editing, in late July, it has been surpassed by [PaLM](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), a 540 billion parameter model, and likely others that I am unaware of too. With this rate of development, it seems likely that we will all be walking off the map into the unknown soon. We will go there led by the hand of an alien intelligence. We don’t know what it will do with us out there. Acknowledgments --------------- *This essay benefitted from the comments of several people. The ones whose comments were more clever than GPT-3's were: John Buridan, Serj Hunt, and Justis Mills.* 1. **[^](#fnrefolilvhaiqgf)**During both our pregnancies, Johanna has had severe itching and we’ve had to give birth prematurely. It has been pretty hard to find an OB-GYN that is knowledgeable about the condition, so we’ve had to do most of the research and diagnose ourselves. Let’s see how GPT-3 handles it. My prompts are bolded, and I have added comments in brackets: > **The following is a conversation with an AI medical doctor. The doctor is helpful, creative, clever, and very friendly.** > > **Human: Hello, who are you?** > > **AI: I am an AI doctor created by OpenAI. How can I help you today?** > > **Human: I am pregnant and it has started itching all over my body. Even in the palms of my hands. What can it be?** > > AI: There are many possible causes of itching during pregnancy. One common cause is hormonal changes, which can cause the skin to become dry and irritated. Another possibility is an allergy to something you are coming into contact with, such as a new laundry detergent or soap. If the itching is severe, it could also be a sign of a more serious condition called intrahepatic cholestasis of pregnancy (ICP). This is a condition that affects the liver and can cause serious complications for both the mother and the baby. If you are experiencing severe itching, I recommend that you speak with your OB-GYN as soon as possible to rule out ICP. > > **Human: Tell me more about ICP.** > > AI: Intrahepatic cholestasis of pregnancy (ICP) is a condition that affects the liver and can cause serious complications for both the mother and the baby. > > **Human: What tests do you do to detect ICP?** > > AI: One common test for ICP is a blood test called a serum bile acids test. This test measures the levels of bile acids in the blood, which can be elevated in women with ICP. Other tests that may be used to diagnose ICP include a liver function test and a Magnetic Resonance Cholangiopancreatography (MRCP). [unsure about the MRCP] > > **Human: How do you treat it?** > > AI: There is no cure for ICP, but the symptoms can be managed. Treatment typically involves taking a medication called ursodeoxycholic acid (UDCA), which can help to reduce the itching and improve liver function. In some cases, delivery of the baby may be recommended if the condition is severe. > > **Human: What levels of bile acids are considered severe?** > > AI: If the levels of bile acids in the blood are greater than 10 micromol/L, it is considered to be a severe case of ICP. > > **Human: What are the different bile acids?** > > AI: The main bile acids are cholic acid and chenodeoxycholic acid. > > **Human: How does ICP affect the baby?** > > AI: ICP can cause serious complications for the baby, including preterm labor, low birth weight, and stillbirth. If you are pregnant and have ICP, it is important to speak with your OB-GYN to discuss the risks and possible complications. [As far as I can remember from researching this, it seems that babies tend to have high birth weight from ICP. Also, an important complication that GPT-3 misses is that they tend to experience fetal stress, and have an increased risk of swallowing meconium during delivery.] > > **Human: At what level of bile acids do I risk premature birth?** > > AI: If the levels of bile acids in the blood are greater than 40 micromol/L, there is an increased risk of premature birth. > > **Human: If I have had ICP before, will I have it again next time I am pregnant?** > > AI: ICP is a recurrent condition, which means that it can occur more than once. If you have had ICP in a previous pregnancy, you are more likely to experience it again in subsequent pregnancies. > > I predict that by 2025 I will prefer the AI over a trained doctor for most cases. 2. **[^](#fnrefpgmh6tp4p3b)**I should note that Elicit is better suited than GPT-3 when it comes to finding and interrogating research papers. I use it side by side with Google Scholar and have a feeling I will gradually migrate over fully. 3. **[^](#fnref4aw4sinwke)**Here’s me trying to get Blaise Pascal to finish up a passage in an upcoming essay. My prompt is bolded: > **This essay was found in the notebooks of Blaise Pascal.** > > **---** > > **A city of five million people does not produce 50 times more patents than a city of 100,000 people in the same country; it produces 150 times more. The five million residents, because of the larger, denser network of which they are a part, are 3 times more likely to generate patentable ideas than their counterparts in the smaller city. In other words: larger, denser networks generate more ideas per capita. Or as a physicist would put it: the number of patents per capita (which is a proxy for idea generation) scales superlinearly with the size of cities.** > > The reason is simple: a larger, denser network of people provides more opportunities for collisions of ideas, which are the basis for new inventions. In a small city, there are only so many people to interact with, and only so many potential ideas to combine. But in a large city, there are many more people to interact with, and many more potential ideas to combine. > > One could argue that the same is true of small companies and large companies. But there is an important difference. In a small company, the people who generate the ideas are also the ones who implement them. In a large company, there are separate functions for idea generation and implementation. As a result, ideas that are generated in a large company are much less likely to be implemented than those generated in a small company. > > Thus, we see that cities are more important than companies in the generation of new ideas. This is not to say that companies are unimportant; they are still the primary vehicles for the implementation of new ideas. But if we want to maximize the chances of new ideas being generated, we should focus on cities. > > That is a pretty bold thesis! *We should focus our attention at the level of cities, rather than companies, if we want to accelerate innovation.* If this had been someone riffing on my essay in a Reddit comment, I would have upvoted it. I might even have asked them for a Zoom call so we could work this out. Now, it turns out the premise that GPT-3 builds its argument on is *false*. Employees at larger companies are *more* creative on average than employees at small companies. But what’s important when learning together with someone in dialogue is not facts, but *how generative the responses are* - and this is pretty generative. Prompted thus, my mind immediately starts outputting questions: > **Why is it that employees at larger companies are more creative when intuitively we’d think they’d be less creative?** > > **Are people really more creative when they get to implement their ideas?** > > **How are ideas generated in cities, and how does that differ from the process in companies?** > > A program that can navigate you to these types of questions in a matter of minutes is, despite its unreliableness, a valuable research tool. 4. **[^](#fnrefxk83ygl0yi)**When prompted with this question (“In Inception, what’s Mal’s argument to Cobbe?”), GPT-3 fabulates: “In the movie Inception, Mal's argument to Cobb is that they are not really alive, and that they are just dreaming. She says that if they die in the dream, they will simply wake up from it.” Edit: maybe she actually makes this claim in the film?
d92f7040-aef3-49c5-b165-5d49e629bc07
StampyAI/alignment-research-dataset/special_docs
Other
What the Baldwin Effect affects depends on the nature of plasticity. What the Baldwin Effect affects depends on the nature of plasticity =================================================================== Abstract -------- In a process known as the Baldwin Effect, developmental plasticity, such as learning, has been argued to accelerate the biological evolution of high-fitness traits, including language and complex intelligence. Here we investigate the evolutionary consequences of developmental plasticity by asking which aspects of a plastic trait are the focus of genetic change. The aspects we consider are: (*i*) dependencies between elements of a trait, (*ii*) the importance of each element to fitness, and (*iii*) the difficulty of acquiring each element through plasticity. We also explore (*iv*) how cultural inheritance changes the relationship between plasticity and genetic change. We find that evolution by natural selection preferentially fixes elements that are depended upon by others, important to fitness, or difficult to acquire through plasticity, but that cultural inheritance can suppress and even reverse genetic change. We replicate some of these effects in experimental evolutionary simulations with human learners. We conclude that what the Baldwin Effect affects depends upon the mechanism of plasticity, which for [behavior](/topics/neuroscience/behavior-neuroscience "Learn more about behavior from ScienceDirect's AI-generated Topic Pages") and cognition includes the [psychology of learning](/topics/social-sciences/psychology-of-learning "Learn more about psychology of learning from ScienceDirect's AI-generated Topic Pages"). Introduction ------------ The interaction between developmental plasticity, expressed through processes such as learning, and biological evolution has received considerable attention over the past few decades (Gabora, 2008; Scheiner, 1993; Via et al., 1995). However there remains considerable debate over whether plasticity is a driving force in evolution, taking the lead in adaptation with genetic change following afterwards (Laland, Wray, & Hoekstra, 2014; Pigliucci, Murren, & Schlichting, 2006; West-Eberhard, 2003). One way in which plasticity and evolution may interact is the Baldwin Effect (Baldwin, 1896; Weber & Depew, 2003), which proposes that evolution proceeds by selection favoring genetic variants that support adaptive traits that arise via plasticity. Over time, accumulated genetic change allows traits to be reliably acquired by all members of the species and may reduce their plasticity. This can be contrasted with an aplastic model in which novel traits arise purely through genetic mutation. Baldwin's theory, originally called “organic selection”, has had variable success since its proposal by several figures in the late 19th century (Baldwin, 1896; Lloyd Morgan, 1896; Morgan & Harris, 2015; Osborne, 1896; Richards, 1987; Weber & Depew, 2003). Initially regarded as an important part of the evolutionary process, a scandal concerning Baldwin's personal life, the development of mathematical evolutionary theory, and an increasing segregation between developmental and evolutionary biology all contributed to Baldwin's theory falling out of favor in the early 20th century (Richards, 1987). Simpson briefly mentioned it shortly after the formation of the Modern Synthesis, coining the term “Baldwin Effect” (Simpson, 1953), but it remained on the periphery of evolutionary thinking. Despite this mixed history, and although the Baldwin Effect remains contested (Richards, 1987; Weber & Depew, 2003), it has made a modest resurgence in the last few decades. In particular, it has been invoked in several cases that are a challenge for aplastic models of evolution, such as when fitness landscapes are heavily skewed with only a small number of genotypes associated with high fitness (Hinton & Nowlan, 1987). This *needle*-*in*-*a*-*haystack* problem is hard for selection to solve without plasticity because the multitude of low fitness genotypes create a flat fitness landscape. In a demonstration that has come to be associated with the Baldwin Effect, Hinton and Nowlan (1987) conducted a series of simulations in which they introduced plasticity by including an allele that did not specify the phenotype, but instead prompted the organism to developmentally explore different phenotypic options. The existence of this allele not only accelerated the rate at which beneficial phenotypes were discovered, but also accelerated the evolution of the beneficial genotype because the plastic alleles created a fitness gradient that natural selection could follow. Other work has built on these results, showing the strengths and limitations of the Baldwin Effect. For instance, while the ability of plasticity to accelerate the *appearance* of beneficial phenotypes has received support (Ancel, 1999; Fontanari & Santos, 2017; Santos, Szathmáry, & Fontanari, 2015), theory suggests it ultimately slows their genetic fixation because once traits can be reliably acquired through plasticity there is little selective advantage in further genetic change (Ancel, 1999, Ancel, 2000; Fontanari & Santos, 2017). Moreover, once a beneficial phenotype has appeared, assuming it has a sufficient fitness benefit, it can spread as rapidly in a non-plastic sexually reproducing population as in a plastic population (Santos et al., 2015). Other potential limitations include the observation that plastic traits may need to be widespread before selection can overcome genetic drift and increase aplastic genetic variants that support them (Chater, Reali, & Christiansen, 2009), and that the hypothesized reduction in plasticity following plastic adaptation may be sufficiently slow that plasticity is minimally reduced before the environment is likely to change again (Scheiner, Barfield, & Holt, 2017). Nonetheless, many models of evolution support the general plausibility of a process where (*i*) adaptation is initially plastic, (*ii*) plasticity increases in response to environmental novelties, and (*iii*) environmental stability leads to the replacement of plasticity with fixed genetic influence (Ancel, 1999; Lande, 2009; Scheiner et al., 2017). Many animal behaviors are potential candidates for such an evolutionary process. For example, archer fish propel jets of water from their mouth to capture flying insects (Schuster, 2007), shaping their mouth to focus the jets on prey at different distances (Gerullis & Schuster, 2014). Plasticity is clearly relevant to this behavior as young fish need to learn the technique by watching others (Schuster, Wöhl, Griebsch, & Klostermeier, 2006). While it is possible that this behavior evolved without plasticity, this would have required the existence of a mutation that directly caused (at least a primordial form of) the behavior. Alternatively, ancestral archer fish may have learned to target insects above the water's surface and subsequent genetic change reinforced and supported this behavior. Such a genetic response can be seen in their eyes which have evolved to accurately perceive airborne insects from underwater (Temple, Hart, Marshall, & Collin, 2010), a trait that presumably followed, not preceded, the behavior. Other traits for which plasticity has been argued to be important are those involving coordination between individuals, most prominently language (Deacon, 1997; Pinker & Bloom, 1990). Aplastic accounts must suppose that language arose through genetic change. However, this implies that it first appeared in a single individual. As the fitness benefit of language requires more than one individual to possess it, this would hinder its spread. Plasticity has been used in two ways to solve this problem. In the more limited case, language may have first arisen via a genetic mutation, but was nonetheless able to spread between individuals due to plasticity thereby bringing fitness benefits to its users (Pinker & Bloom, 1990). Assuming the mutation enhances language acquisition, there would then be a selective pressure favoring its spread. Other theories go further, suggesting that language both arose and spread via plasticity and that genetic change was entirely in response, favoring genetic variants that supported language acquisition (i.e., the Baldwin Effect) (Deacon, 1997). These arguments apply to all cases where phenotypes need to be coordinated across individuals, including animal communication systems, such as birdsong. While it is possible that song could have evolved without plasticity (for instance, through sensory exploitation), consistent with the Baldwin Effect (Ancel, 1999, Ancel, 2000), song development is often plastic (e.g. Thorpe, 1961) and species-typical song can emerge with remarkably limited stimuli (Feher, Suzuki, Okanoya, Ljubicic, & Tchernichovski, 2014). Another means by which plasticity has been suggested to drive evolutionary change, this time in the case of human intelligence, is as an “evolutionary crane” (Dennett, 2003). This account supposes that our ancestors used their (more limited) cognition to develop behaviors that increased their fitness and that were sufficiently hard to acquire that selection favored genetic variants supporting their acquisition. Rather than being trait specific, genetic change increased our general cognitive capacities, which led to the discovery of even more complex behaviors and thereby redoubled selection on cognition. Thus, our evolution featured a coevolution of cognition and its products that resulted in huge changes to our cognitive abilities. In support of this argument, theoretical work has found that this process will occur provided a series of increasingly complex and successful possible behaviors exists, and that it can produce evolutionary dynamics similar to the hominin archaeological record (Morgan, 2016). While this pervious work has identified cases in which plasticity may have played an important role, it remains unclear precisely how we might expect plasticity to affect genetic change. Which genes will be targeted? To what extent will behaviors become fixed? Here, we address these questions with theoretical analyses addressing how the effect of developmental plasticity on genetic change is modulated by (*i*) how traits interact with each other to produce fitness benefits, (*ii*) how important traits are to fitness, and (*iii*) how readily traits are acquired through plasticity. We then extend these analyses to include cultural inheritance, allowing the prevalence of the traits in the population to affect the probability they are acquired through plasticity. Finally, we test the predictions resulting from our theoretical analyses in an experimental simulation with a population of human learners. Our results present a clearer picture of what the Baldwin Effect affects: traits that are difficult to acquire, important to fitness, or that are required by other traits are likely to come under increasing genetic influence. These results have significant implications for understanding the potential role of the Baldwin Effect in explaining human traits such as language and higher-level cognition. Section snippets ---------------- The simulation framework ------------------------ Based on Hinton and Nowlan (1987), we consider a population of 1000 asexual, haploid organisms whose fitness is determined by the acquisition of *n* traits. The acquisition of the *t*th trait by the *i*th individual is affected by a corresponding genetic locus, *G**i*,*t* . Accordingly, each organism has *n* genetic loci. Each locus contains one of two possible alleles: *fixed* and *plastic*. A fixed allele means that the organism is guaranteed to acquire the corresponding trait, while a plastic allele means Experiment ---------- The above simulations rely on simplifying assumptions about the nature of learning and the traits in question. This means there is a need to verify whether theoretical results hold with real learning capacities which may depart from these assumptions. For instance, errors may be biased, or individuals may vary in their ability to learn or in their motivations. Therefore, to empirically test some of our theoretical findings, we conducted an “experimental evolutionary simulation” with human Discussion ---------- Developmental plasticity has been argued to be an important part of the evolutionary process explaining how selection (i) can solve a *needle*-*in*-*a*-*haystack* problem (Hinton & Nowlan, 1987), (ii) can favor traits that rely on coordination across multiple individuals, specifically language (Pinker & Bloom, 1990), and (iii) can favor complex cognition (Deacon, 1997; Dennett, 2003; Morgan, 2016). Here, we built on these results using theoretical and experimental simulations to explore the impact of Author contributions -------------------- TJHM and TLG conceived of the project. TJHM created the models. TJHM and JWS created and ran the experiments. TJHM analyzed the data. All authors wrote the manuscript. Acknowledgements ---------------- A preliminary version of this work was presented at the Annual Conference of the Cognitive Science Society and appears in the proceedings (Morgan & Griffiths, 2015). This work was funded by NSF grants 1456709 and 1408652. References (53) --------------- * L.W. Ancel### [A quantitative model of the Simpson-Baldwin effect](/science/article/pii/S0022519398908335) ### Journal of Theoretical Biology (1999) * L.W. Ancel### [Undermining the Baldwin expediting effect: Does phenotypic plasticity accelerate evolution?](/science/article/pii/S0040580900914849) ### Theoretical Population Biology (2000) * K. Aoki *et al.*### [Evolution of learning strategies in temporally and spatially variable environments: A review of theory](/science/article/pii/S0040580913001081) ### Theoretical Population Biology (2014) * T.J. Dewitt *et al.*### [Costs and limits of phenotypic plasticity](/science/article/pii/S0169534797012743) ### Trends in Ecology & Evolution (1998) * M. Ehn *et al.*### [Adaptive strategies for cumulative cultural learning](/science/article/pii/S0022519312000756) ### Journal of Theoretical Biology (2012) * L. Gabora### [The cultural evolution of socially situated cognition](/science/article/pii/S1389041707000368) ### Cognitive Systems Research (2008) * P. Gerullis *et al.*### [Archerfish actively control the hydrodynamics of their jets](/science/article/pii/S0960982214009221) ### Current Biology (2014) * S.J. Lycett *et al.*### [A 3D morphometric analysis of surface geometry in Levallois cores: Patterns of stability and variability across regions and their implications](/science/article/pii/S030544031200492X) ### Journal of Archaeological Science (2013) * H.J. Muller### [The relation of recombination to mutational advance](/science/article/pii/0027510764900478) ### Mutation Research, Fundamental and Molecular Mechanisms of Mutagenesis (1964) * M. Santos *et al.*### [Phenotypic plasticity, the Baldwin effect, and the speeding up of evolution: The computational roots of an illusion](/science/article/pii/S0022519315000715) ### Journal of Theoretical Biology (2015) - S. Schuster *et al.*### [Animal cognition: How archer fish learn to down rapidly moving targets](/science/article/pii/S096098220601013X) ### Current Biology (2006) - S. Via *et al.*### [Adaptive phenotypic plasticity: Consensus and controversy](/science/article/pii/S0169534700890618) ### Trends in Ecology & Evolution (1995) - D.S. Adler *et al.*### Early Levallois technology and the Lower to Middle Paleolithic transition in the Southern Caucasus ### Science (2014) - A.V. Badyaev### Evolutionary significance of phenotypic accommodation in novel environments: An empirical test of the Baldwin effect ### Philosophical Transactions of the Royal Society, B: Biological Sciences (2009) - J. Baldwin### A new factor in evolution ### The American Naturalist (1896) - R. Boyd *et al.*### The cultural niche: Why social learning is essential for human adaptation ### Proceedings of the National Academy of Sciences (2011) - N. Chater *et al.*### Restrictions on biological adaptation in language evolution ### Proceedings of the National Academy of Sciences (2009) - N. Chomsky### On cognitive structures and their development: A reply to Piaget - T.W. Deacon### The symbolic species: The co-evolution of language and the brain (1997) - D. Dennett### The Baldwin effect: A crane, not a skyhook - J.W. Earl *et al.*### Mystery of the poisoned expedition ### Nature (1994) - O. Feher *et al.*### Birds tutored with their own developing song produce wildtype-like song as adults - J. Felsenstein### The evolutionary advantage of recombination ### Genetics (1974) - J.F. Fontanari *et al.*### The revival of the Baldwin effect ### European Physical Journal B (2017) - M. Hébert *et al.*### Inexperienced preys know when to flee or to freeze in front of a threat ### PNAS (2019) - C. Heyes### Cognitive gadgets (2018) View more referencesCited by (4) ------------ * ### [Sinking In: The Peripheral Baldwinisation of Human Cognition](/science/article/pii/S136466132030214X) 2020, Trends in Cognitive SciencesCitation Excerpt :However, very little of this information from cognitive science has penetrated research on Baldwinisation. Modelling suggests that the probability of Baldwinisation depends crucially on 'details of developmental processes, such as the psychological mechanisms of learning' [17], but most applications of the Baldwin hypotheses continue to assert that behaviour X has been genetically assimilated without asking what type of learning drove the assimilation of X, or about the neurocognitive processes that were targets of selection. In short, the Baldwin effect is a promising hypothesis rather than an empirical effect, and, although Baldwin was a psychologist, cognitive science is conspicuously absent from research done in his name. Show abstractThe Baldwin effect is a hypothetical process in which a learned response to environmental change evolves a genetic basis. Modelling has shown that the Baldwin effect offers a plausible and elegant explanation for the emergence of complex behavioural traits, but there is little direct empirical evidence for its occurrence. We highlight experimental evidence of the Baldwin effect and argue that it acts preferentially on peripheral rather than on central cognitive processes. Careful scrutiny of research on taste-aversion and fear learning, language, and imitation indicates that their efficiency depends on adaptively specialised input and output processes: analogues of scanner and printer interfaces that feed information to core inference processes and structure their behavioural expression. * ### [Phenotypic plasticity and archetype: a response to common objections to the biological theory of archetype and instinct](https://doi.org/10.1111/1468-5922.12883) 2023, Journal of Analytical Psychology * ### [Rethinking Norm Psychology](https://doi.org/10.1177/17456916221112075) 2023, Perspectives on Psychological Science * ### [Connections between Older Greek Adults’ Implicit Attributes and Their Perceptions of Online Technologies](https://doi.org/10.3390/technologies8040057) 2020, Technologies Recommended articles (6) ------------------------ * Research article### [Challenges to both reliability and validity of masculinity-preference measures in menstrual-cycle-effects research](/science/article/pii/S0010027720300202) Cognition, Volume 197, 2020, Article 104201Show abstractAlthough it remains contentious, women's changeable attraction to masculine faces has been used to inform evolutionary ideas about human mating strategies. Typical experiments in this area use two-alternative-forced-choice (2afc) over a few pairs of similar images. The reliability of these measures is analysed suggesting that many studies have too few trials to be reliable. In the current experiment, fertility shifts in preferences for masculinised faces (and Africanised faces) were explored using both attractiveness ratings and a 2afc method over 80 pairs. The 2afc method showed a fertility shift in preferences whereas attractiveness ratings did not show a shift. Further, it was demonstrated how the size of the preferences shown in the 2afc tasks correlated with general face-matching performance. It is concluded that fertility is associated with improved face-processing accuracy and hence 2afc designs have poor validity as measures of masculinity preference. These issues of validity and reliability may have contributed to the contentious nature of fertility effects on preferences. Further, validity and reliability need to be considered in any study where a change in preference is identified using a comparative-preference task. * Research article### [On the limits of evidence accumulation of the preconscious percept](/science/article/pii/S0010027719302537) Cognition, Volume 195, 2020, Article 104080Show abstractWhen a stimulus is repeated the memory representation for that stimulus is strengthened and performance in memory tests increases. To what extent this effect requires that each exposure elicits a fully-fledged conscious percept? In two Rapid Serial Visual Presentation (RSVP) experiments we explored whether the memory representations for words would accumulate evidence trough repeated exposure when none of those presentations induced a conscious percept. Participants were instructed to identify repeated items inserted in different RSVP streams and we isolated the first instance that an item was consciously perceived as repeating. The results showed that the probability of detecting a repetition for the first time was constant across repetitions. This effect signals a limit to the evidence accumulation process through repeated exposure. We discuss whether conscious perception modulates the decay of memory representations with below-threshold items resulting in extremely fleeting memory representations. * Research article### [Exploring the influence of ownership history on object valuation in typical development and autism](/science/article/pii/S0010027720300068) Cognition, Volume 197, 2020, Article 104187Show abstractItems with special histories (e.g. celebrity owners) or qualities (e.g. limited editions) are more valuable than similar “inauthentic” items. Typically developing (TD) children privilege authenticity and are particularly influenced by who objects belong to. Here, we explore why children and adults over-value items with special ownership histories and examine how autism spectrum disorder (ASD) affects object valuation. In Studies 1 and 2, TD children perceived items belonging to famous owners (with “good” or “bad” reputations) to be more valuable than similar items belonging to non-famous owners. However, they ascribed significantly higher values to items belonging to famous heroes than infamous villains when compared. Children with ASD did not over-value objects with special ownership histories, but their valuations were moderated by qualities unrelated to ownership (e.g. rarity). In Study 3, adults with ASD assigned high values to authentic items with special ownership histories but were more likely to keep inauthentic objects than neurotypical adults. Our findings show that association with a famous owner is sufficient to increase an item's value for TD children and adults (with and without ASD). The degree of added value may be determined by the famous owner's character for TD children, but not adults. By contrast, children with ASD value objects via a different strategy that prioritizes material qualities over ownership history. However, the awareness of authenticity displayed by adults with ASD suggests that the emergence of ownership history as an important influence on object evaluation may be developmentally delayed in ASD, rather than completely absent. * Research article### [The effect of contextual plausibility on word skipping during reading](/science/article/pii/S0010027720300032) Cognition, Volume 197, 2020, Article 104184Show abstractRecent eye-movement evidence suggests readers are more likely to skip a high-frequency word than a low-frequency word independently of the semantic or syntactic acceptability of the word in the sentence. This has been interpreted as strong support for a serial processing mechanism in which the decision to skip a word is based on the completion of a preliminary stage of lexical processing prior to any assessment of contextual fit. The present large-scale study was designed to reconcile these findings with the plausibility preview effect: higher skipping and reduced first-pass reading times for words that are previewed by contextually plausible, compared to implausible, sentence continuations that are unrelated to the target word. Participants' eye movements were recorded as they read sentences containing a short (3–4 letters) or long (6 letters) target word. The boundary paradigm was used to present parafoveal previews which were either higher or lower frequency than the target, and either plausible or implausible in the sentence context. The results revealed strong, independent effects of all three factors on target skipping and early measures of target fixation duration, while frequency and plausibility interacted on later measures of target fixation duration. Simulations using the *E*-Z Reader model of eye-movement control in reading demonstrated that plausibility effects on skipping are potentially consistent with the assumption that higher-level contextual information only affects post-lexical integration processes. However, no current model of eye movements in reading provides an explicit account of the information or processes that allow readers to rapidly detect an integration failure. * Research article### [Temporal and spatial ensemble statistics are formed by distinct mechanisms](/science/article/pii/S0010027719303026) Cognition, Volume 195, 2020, Article 104128 * Research article### [Object features reinstated from episodic memory guide attentional selection](/science/article/pii/S0010027719303324) Cognition, Volume 197, 2020, Article 104158Show abstractWhen observers search for an object in the environment, they compare the incoming sensory information to the attentional template, a representation of the target in visual working memory (VWM). Previous studies have shown that visual search is more efficient when the attentional template is precise. We pursued the hypothesis that the attentional template in VWM is automatically complemented by features from long-term memory, possibly to increase its precision. At the beginning of the experiment, observers learned associations between shape and color. Then, we tested whether selecting one of these shapes was influenced by the previously associated color. To this end, we ran a saccadic selection task consisting of a memory and choice display. In the memory display, the target shape was presented at central fixation and participants were instructed to foveate this shape in the subsequent choice display. In the choice display, the target shape appeared together with a distractor shape at eccentric positions. Importantly, the target shape was colorless (gray) in the memory display so that only shape, but not color was loaded into VWM. However, saccades went more frequently to the target shape when it was shown in the learned color than when this color was shown in the distractor. Thus, the color of the target shape was reinstated from episodic memory to complement the attentional template in VWM. [View full text](/science/article/pii/S0010027719303397)
73c50f9b-9296-4a88-a177-c44995dddb8f
trentmkelly/LessWrong-43k
LessWrong
When should we expect the education bubble to pop? How can we short it? I won't attempt to summarise the case for there being an education bubble here (see links below for some pointers). Rather, my questions are: 1) assuming there is an education bubble, when will it -- as bubbles tend to do -- pop? (This plausibly entails some disjunction of *hundreds of thousands to millions of students defaulting on their debt, *lower number of college applicants, *non-top-tier colleges laying off faculty, *substantial reductions the signalling value of obtaining a diploma, *substantial reductions in tuition fees, *reduction in the level of education required by various employers, and more) 2) Which assets will be more scarce/in demand as that happens? Are there currently available opportunities for "shorting" the education bubble and invest in ways which will yield profit when it pops? (I hereby preface the comments by noting that nothing discussed there is investment advice and no users can be held liable for investment decisions based on it.) ---------------------------------------- Peter Thiel summarises the inside view of there being an "education bubble" well. And here are some interesting numbers: * In the last 35 years, median household income has grown by about 20% (FRED). In roughly the same time, the price of college has grown by 300% when adjusting for inflation * College spending is one sixth of US economy UPDATE: this is probably false/misleading, see comment from paulfchristiano below. * Student debt is at >$1 trillion (for comparison US GDP is around $20 trillion, US federal budget is around $4 trillion)
304317e7-1578-4a86-ba4d-6ecfc14be1a7
trentmkelly/LessWrong-43k
LessWrong
Improving the World What are we doing to make this world a better (epistemically or instrumentally) place? Some answers to this question are already written in Bragging Threads and other places, but I think they deserve a special emphasis. I think that many smart people are focused on improving themselves, which is a good thing in a long run, but sometimes the world needs some help right now. (Also, there is the failure mode of learning a lot about something, and then actually not applying that knowledge in real life.) Becoming stronger so you can create more good in the future is about the good you will create in the future; but what good are you creating right now?   Rules: Top-level comments are the things you are doing right now (not merely planning to do once) to improve the world... or a part of the world... or your neighborhood... or simply any small part of the world other than only yourself. Meta debates go under the "META" comment.
31736741-fa58-4826-9125-c8f8dfc45f2c
StampyAI/alignment-research-dataset/arxiv
Arxiv
Reasoning About the Transfer of Control Journal ofArtificial Intelligence Research 37(2010) 437–477 Submitted 8/09; published 3/10 Reasoning abouttheTransferofControl Wiebe vanderHoek WIEBE.VAN-DER-HOEK@LIV.AC.UK Dirk Walther DWALTHER@LIV.AC.UK Michael Wooldridge MJW@LIV.AC.UK Department ofComputer Science Univer sityofLiverpool, UK Abstract Wepresent DCL-PC:alogic forreasoning about howtheabilities ofagents andcoalitions ofagents arealtered bytransferring control from oneagent toanother .The logical foundation ofDCL-PCisCL-PC,alogic forreasoning about cooperation inwhich theabilities ofagents and coalitions ofagents stem from adistrib ution ofatomic Boolean variables toindividual agents –the choices available toacoalition correspond toassignments tothevariables thecoalition controls. Thebasic modal constructs ofCL-PCareoftheform ‘coalition Ccancooperate tobring about'’. DCL-PCextends CL-PCwith dynamic logic modalities inwhich atomic programs areoftheform ‘agent igivescontrol ofvariable ptoagent j’;asusual indynamic logic, these atomic programs may becombined using sequence, iteration, choice, andtestoperators toform comple xprograms. Bycombining such dynamic transfer programs with cooperation modalities, itbecomes possible to reason about howthepowerofagents andcoalitions isaffected bythetransfer ofcontrol. Wegive twoalternati vesemantics forthelogic: a‘direct’ semantics, inwhich wecapture thedistrib utions ofBoolean variables toagents; andamore conventional Kripk esemantics. Weprovethatthese semantics areequivalent, andthen present anaxiomatization forthelogic. Weinvestig atethe computational comple xity ofmodel checking andsatisfiability for DCL-PC,andshowthatboth problems arePSPACE-complete (and hence noworse than theunderlying logic CL-PC).Finally ,we investig atethecharacterisation ofcontr olinDCL-PC.Wedistinguish between first-ordercontr ol– theability ofanagent orcoalition tocontrol some state ofaffairsthrough theassignment ofvalues tothevariables under thecontrol oftheagent orcoalition –andsecond-or dercontr ol–theability ofanagent toexertcontrol overthecontrol thatother agents havebytransferring variables toother agents. Wegivealogical characterisation ofsecond-order control. 1.Introduction Inrecent years, there hasbeen much activityinthedevelopment oflogics forreasoning about the strate gicandcooperati veabilities ofagents ingame-lik emulti-agent systems. Coalition Logic (Pauly, 2001) andAlternating-time Temporal Logic (ATL)Alur,Henzinger ,andKupferman (2002) are perhaps thebest-kno wnexamples ofsuch work. These logics havebeen widely used asabase from which toinvestig atereasoning about cooperation inmulti-agent systems (vanderHoek & Wooldridge, 2003; Jamrog a&vanderHoek, 2004; Gorank o&Jamrog a,2004). Although theydifferondetails, thebasic construct inboth Coalition Logic andATListhe cooper ation modality ,aconstruct thatiswritten inATLashhCii'.The intended meaning ofthis expression isthatthecoalition Ccancooperate insuch awayastoensure that, nomatter what theagents outside Cdo,theproperty'becomes true. Another waytothink abouthhCii'isas meaning thatcoalition Chasthecollective power toensure that'.Itisoften assumed thatpowers areadditi ve,inthesense thatthepowers ofacoalition derivefrom thepowers ofcoalition members, c 2010 AIAccess Foundation. Allrights reserv ed. VANDERHOEK,WALTHER,&WOOLDRIDGE andthatadding anagent toacoalition does notreduce thepowers ofthatcoalition. However,the origin ofanindividual agent’ spowers –thatis,where these powers derive from–israrely discussed inthecooperation logic literature. One verynatural interpretation forpowers orabilities incomputational systems arises from considering which system components havetheability toassign values tothevariables making uptheoverall system state. Power,inthissense, equates totheability tochoose avalue fora particular variable. Moti vated bythisobserv ation, vanderHoek andWooldridge developedCL-PC, acooperation logic inwhich powers arespecified byallocating toeveryagent asetofBoolean variables: thechoices (and hence powers) available toacoalition then correspond tothepossible assignments oftruth orfalsity thatmay bemade tothevariables under their control (vanderHoek &Wooldridge, 2005b). TheCL-PCexpressionC'means thatcoalition Ccanassign values to thevariables under itscontrol insuch awayastomake'true. VanderHoek andWooldridge gaveacomplete axiomatization ofCL-PC,andshowed thatthemodel checking andsatisfiability problems forthelogic arebothPSPACE-complete; theyalsoinvestig ated howCL-PCcould beused tocharacterise theclosely related notion ofcontrol. However,onedrawback ofCL-PCisthatthe powerstructure underpinning thelogic –thedistrib ution ofvariables toagents –isassumed tobe fixed ,andhence coalitional powers arestatic inCL-PC. Ultimately ,ofcourse, theassumption thatpowers arestatic isnotrealistic. Forexample, the explicit transfer ofpowerandcontrol isafundamental component ofmost human organisations, enabling them toavoidbottlenecks with respect tocentralised powerandcontrol. Moreo ver,in open environments, where agents join andleaveasystem atrun-time, itmay notbepossible to knowinadvance which agents aretofulfill which roles, andsostatic powerallocation schemes are simply notappropriate forsuch environments. Ifsoftw areagents aretobedeplo yedinenvironments where powerstructures aredynamic, then itisimportant toconsider theissues ofrepresenting and reasoning about them, anditistothisissue thatweaddress ourselv esinthepresent paper . Westudy avariant ofCL-PCthatallowsustoexplicitly reason about dynamic powerstructures. ThelogicDCL-PCextendsCL-PCwith dynamic logic operators (Harel, Kozen, &Tiuryn, 2000), in which atomic programs areoftheform i;pj,which isread as‘agent igivescontrol ofvariable ptoagent j’.The pre-condition ofsuch aprogram isthatvariable pisinagent i’sallocation of variables, andexecuting theprogram hastheeffectoftransferring variable pfromagentitoagent j.Thus thedynamic component ofDCL-PCisconcerned with transferring power insystems, and byusing thelogic, wecanreason about howtheabilities ofagents andcoalitions areaffected by such transfers. Note that, asinconventional dynamic logic, atomic programs may becombined inDCL-PCwith theusual sequential composition (‘;’),non-deterministic choice (‘[’),test(‘?’), anditeration (‘’)operations, toform comple xprograms. Withthese features, DCL-PCprovides arichframe workthrough which torepresent andreason about systems inwhich power/control is dynamically allocated. Intheremainder ofthepaper ,following anintroduction tothelogic, wemakefour main contri- butions with respect toDCL-PC: First, inSection 2,wegivetwoalternati vesemantics forthelogic: a‘direct’ semantics, in which models directly represent theallocation ofpropositional variables totheagents that control them, andamore conventional Kripk esemantics. Weprovethatthese twosemantics areequivalent. 438 REASONINGABOUTTHETRANSFEROFCONTROL Second, wegiveanaxiomatization ofDCL-PCinSection 3,andshowthatthisaxiomatization issound andcomplete (with respect toboth semantics). Third, weshowinSection 4that, despite theapparently additional expressi vepowerprovided bythedynamic component ofDCL-PC,thesatisfiability andmodel checking problems for DCL-PCarenomore comple xthan thecorresponding problems forCL-PC(vanderHoek & Wooldridge, 2005b): theyarebothPSPACE-complete. Fourth, wedistinguish between first-ordercontr olandsecond-or dercontr olinSection 5. While first-order control, asintroduced andstudied byvanderHoek andWooldridge (2005b), istheability tocontrol some state ofaffairsbyassigning values tovariables, second-order control istheability ofanagent toexertcontrol overtheability ofother agents tocontrol states ofaffairs. Agents andcoalitions canexercise second-order control bytransferring vari- ables under their control toother agents. After informally discussing andintroducing second- order control, wedevelop alogical characterisation ofit,inthesense thatwecharacterise the formulae overwhich anagent hassecond-order control. Weconclude inbrief with some comments onrelated workandconclusions. Note thatweomit a detailed introduction tocooperation logics andinparticular themotivation behind CL-PC,asthis wasdone byvanderHoek andWooldridge (2005b). 2.The LogicDCL-PC Inthissection, wedefine thelogicDCL-PC. 2.1AnInformal Introduction Webeginwith aninformal introduction; readers who arefamiliar with bothCL-PCanddynamic logic may wish toskim orskip completely thisintroductory section. Aswenoted earlier ,DCL-PCextends thelogicCL-PC, !"#####$ "#####$ "#####$ "#####$" % &' ( )* + , - . Figure 1:Atypical scenario.andwebeginbybriefly reviewing thislogic.CL-PCis intended toallowustoreason about domains containing acollection ofagents,andacollection ofpropositional variables ;letA=f1;:::;ngdenote thesetofagents, andPdenote thevariables. Itisassumed inCL-PCthat each agent inthesystem contr olssome subset ofthevari- ables P.Tokeepthings simple, itisassumed thatagents exercise unique control: everyvariable iscontrolled by exactly oneagent, andsothevariables Parepartitioned among theagents A.Where i2Aisanagent, wede- note thevariables under thecontrol ofibyPi,soPiP. Theabilities orpower sofanagent insuch ascenario cor- respond totheassignments oftruth orfalsity thatitcanmaketothevariables under itscontrol. Figure 1illustrates atypical scenario: wehavefour agents, A=f1;2;3;4g,andeight variables, P=fp;q;r;s;t;u;v;wg.Agent 1controls variables pandq,(P1=fp;qg),while agent2controls variable r,(P2=frg),andsoon.Inthescenario illustrated, variables p;q;s,andwhavethevalue 439 VANDERHOEK,WALTHER,&WOOLDRIDGE ‘1’(i.e., ‘true’), while allother variables havethevalue0(‘false’). Thelanguage ofCL-PCisin- tended toallowustorepresent andreason about such scenarios. Torepresent thevalues ofvariables, weusepropositional logic, andsothefollowing formula completely characterises thevalues ofthe variables inthisscenario: p^q^s^w^:r^:t^:u^:v Agents areable tochange thevalue ofthevariables ! " # $%& '( ) * + ,!"-----. "-----. " # $% & '( ) * + , " $% Figure 2:The effect ofexecuting an atomic transfer program.under their control, andtorepresent these abilities inCL- PCweuseaconting entability operator (vanderHoek &Wooldridge, 2005b): theexpressionC'means that, under theassumption that theworld remains otherwise unchanged, thesetofagents Ccanmodify thevalue of their variables soastomake'true. Withrespect tothe scenario inFigure 1,forexample, wehave 1;2(p^r^:q): This isbecause agent 1canleavevariable psetattrue while making variable qfalse, while agent2makesvari- able rtrue: thiswillresult intheformula p^r^:qbeing true. The factthatnomatter what coalition Cdo,'will remain trueisexpressed by2C'.Inthescenario inFig- ure1,nomatter what agent 1does, rwill remain false (assuming againthatnoother agent acts). Thus wehave: 21:r: Asshownelsewhere (vanderHoek &Wooldridge, 2005b), andasdefined below,other types ofability operators may alsobedefined. Thus far,theoperators wehaveintroduced haveall been partoftheCL-PClanguage. Letusnowstart toin- troduce thedynamic aspects ofthelanguage, specific to DCL-PC.First, wehavetheidea ofanatomic transfer program,written i;pj,meaning ‘agent i transfers thepowertochoose atruth value forthevariable ptoagent j’.Now,itwillbepossible to execute aprogram i;pjiffthevariable pisactually under thecontrol ofagent i.Forexample, with respect toFigure 1,theprograms 1;p2and2;r1areexecutable, while theprogram 1;r2 isnot(since risnotunder thecontrol of1).The(fairly obvious) effectofexecuting theprogram 1;p2isillustrated inFigure 2;note thattheactual value ofthevariable being transferr edis unchang edbythetransfer . InDCL-PC,weallowatomic programs tobecombined together tomakemore comple xprograms using constructs from dynamic logic:‘;’(forsequential composition), ‘’(iteration), and‘?’(the ‘test’ operator). Thesimplest ofthese issequential composition: forexample, theprogram 1;p2;2;r1 440 REASONINGABOUTTHETRANSFEROFCONTROL means that, first, agent1givesvariable pto2,andthen, agent2givesrto1.Theoperator[isa non-deterministic choice operator .If1and2aretransfer programs, then1[2means ‘doeither program1or2’.The‘’operator isused todefine iteration: theexpressionmeans ‘execute theprogramzero ormore times’ (itisnotdefined exactly howmanytimeswillbeexecuted). Finally ,the‘?’isused toperform tests.Theprogram'?canbeexecuted inaparticular scenario only iftheformula'istrueofthatscenario. Toillustrate howthese operator work, consider the following example programs. p?;1;p2 This first program says ‘ifpistrue, then agent1givesptoagent2’.Now,since pistrue inthe scenario inFigure 1,then thisprogram canbeexecuted from thescenario inFigure 1,andthenet result isthesame asthefinal scenario inFigure 2. Thefollowing program uses non-deterministic choice, andessentially says ‘agent 1giveseither porqto2’. (1;p2)[(1;q2) Asisusual with dynamic logic, wecandefine theiteration andselection constructs used inconven- tional imperati veprogramming languages from these basic program constructs. Forexample, the conventional programming construct while'do canbedefined using thefollowing transfer program construct (see, e.g., Harel etal.,2000): ('?;);:'? Thenextstepistoseehowtransfer programs areincorporated with theability constructs ofCL-PC. Tobeable torefer totransfer programs andtheir properties from within thelanguage ofDCL-PC, weusethedynamic operators‘hi'’and‘[]'’.These operators play thesame role inDCL-PC thattheyplay inconventional dynamic logic (Harel etal.,2000). Thus aformulahi'asserts that ‘there exists acomputation ofprogram,starting from thecurrent situation, such thatafterhas terminated'holds’. Note thathi'does notassert thatisguaranteed toterminate, merely that ithasatleast oneterminating computation. And moreo ver,itdoes notstate that'issatisfied after every terminating computation of;merely thatthere issome terminating computation thatwill endinasituation satisfying'.Thushiactsasanexistential quantifier overthecomputations of .Theoperator []'isauniversal quantifier overthecomputations of.Itasserts thatafter every terminating computation of,theproperty'holds. Note thatitdoes notassert thatinfacthas anyterminating computations. Asanexample oftheuseofthese constructs, thefollowing formula asserts thatifagent1gives either porqto2,then2willbeable toachie ve(p_q)^r. [(1;p2)[(1;q2)]2(p_q)^r Itiseasy toseethatthisformula expresses atrueproperty ofthescenario inFigure 1:theprogram (1;p2)[(1;q2)isexecutable inthisscenario, andafter itisexecuted, agent2willcontrol variable randeither variable porvariable q.Agent 2willthus beable tomake(p_q)^rtrue. Toconclude thisintroductory section, consider thefollowing more comple xexample. The followingDCL-PCformula asserts thatitispossible foragent itogiveawayitsvariables toagent j, 441 VANDERHOEK,WALTHER,&WOOLDRIDGE non-deterministically choosing onevariable atatime, until agent jhastheability toachie ve'. hwhile:j'do[ p2Pii;pji> 2.2Syntax Formally ,thelanguage ofDCL-PCisformed with respect toa(fixed,finite, non-empty) setAof agents, anda(fixed,finite, non-empty) setPofpropositional variables. Figure 3defines thesyntax ofDCL-PC.Weuse>asalogical constant fortruth, ‘:’fornegation, and‘_’fordisjunction. As usual, wedefine theremaining connecti vesofclassical propositional logic asabbre viations: ?^=:> '^ ^=:(:'_: ) '! ^=:'_ '$ ^=('! )^( !'): Additionally ,where isasetofDCL-PCformulas, wewrite5'2'tomean thatexactly one member ofistrue: 5'2'^=_ '2'^^ '16='22:('1^'2): Where =f'1;'2:::'ng,wewillalsowrite'15'255'nfor5'2'. Withrespect totransfer programs, other constructs from conventional imperati veprograms may bedefined asfollows(Harel etal.,2000): if'then1else2^=(('?;1)[(:'?;2)) while'do^=(('?;);:'?) repeatuntil'^=;(:'?;);'? skip ^=>? fail ^=?? Where there isnopossibility ofconfusion, wewillomit setbrack etsforcooperation modalities, for example writing1;2rather thanf1;2g.ADCL-PCformula containing nomodalities issaid tobe anobjective formula. LetP(')denote thesetofpropositional variables occurring inDCL-PCformula',andletA(') denote thesetofallagents named in'(i.e.,A(')istheunion ofallthecoalitions occurring in cooperation modalities in'andalltheagents occurring intransfer programs in'). Although theoperator isuseful todefine programs succinctly ,wewillinfactseeinTheorem 2 thatitissuperfluous, which essentially uses thefactthatoursetofatoms andagents arefinite. 2.3DirectSemantics Wenowintroduce thefirst ofourtwosemantics forDCL-PC.Wecallthissemantics a‘direct’ semantics because itisdirectly based ontheintuiti vemodel thatweintroduced earlier: everyagent hasunique control ofsome setofpropositional variables, with everyvariable being controlled by some agent. Givenafixed,finite andnon-empty setAofagents, andafixed,finite andnon-empty setP ofpropositional variables, wesayanallocation ofPtoAisanindexedtuple=hP1;:::;Pni, 442 REASONINGABOUTTHETRANSFEROFCONTROL DCL-PCformulas: DCL::=> /*truth constant */ jp /*propositional variables */ j:DCL /*negation */ jDCL_DCL/*disjunction */ jCDCL /*contingent cooperati veability */ jhiDCL /*existential dynamic operator */ Transfer programs: ::= i;pj /*igivesptoj*/ j; /*sequential composition */ j[ /*non-deterministic choice */ j/*iteration */ jDCL? /*test*/ Figure 3:Syntax ofDCL-PC:p2Pisapropositional variable, CAisasetofagents, andi;j2A areagents. where there isanindexedelement Piforeach i2A,such thatP1;:::;Pnforms apartition ofP (i.e.,P=S i2APiandPi\Pj=;foralli6=j2A).Theintended interpretation ofanallocation =hP1;:::;PniisthatPiPisthesetofpropositional variables under agent i’scontrol. That is,agent ihasfreedom toallocate whate verBoolean values itsees fittothemembers ofPi.Of course, wecould havedefined anallocationasafunction:P!A,sothat(p)denotes theagent controlling propositional variable p;there seems noparticular reason forpreferring one representation rather than theother ,andsoforconsistenc ywith thehistorical record, wewilladopt thepartition representation, asused byvanderHoek andWooldridge (2005b). Now,wesayamodel forDCL-PCisastructure: M=hA;P;0;i where: A=f1;:::;ngisafinite, non-empty setofagents; P=fp;q;:::gisafinite, non-empty setofpropositional variables ; 0=hP1;:::;Pniistheinitial allocation ofPtoA,with theintended interpretation thatPi isthesubset ofPrepresenting those variables under thecontrol ofagent i2A;andfinally , :P!ftt; gisapropositional valuation function ,which determines theinitial truth value ofeverypropositional variable. Some additional notation isconvenient inwhat follows.Acoalition Cisasubset ofA,i.e.,CA. Foranysuch CA,wedenote thecomplement ofC,(i.e.,AnC)byC.Wewill write PCfor 443 VANDERHOEK,WALTHER,&WOOLDRIDGE S i2CPi.Fortwovaluationsand0,andasetofpropositional variables P,wewrite=0 (mod )ifand0differatmost inthepropositional variables in ,andwethen saythat and0arethesame modulo .Wewillsometimes understand themodelMtoconsist ofaframe F=hA;P;0itogether with apropositional valuation function.GivenamodelM=hA;P;0;i andacoalition CinM,aC-valuation isafunction: C:PC!ftt; g: Thus aC-valuation isapropositional valuation function thatassigns truth values tojustthepropo- sitional variables controlled bythemembers ofthecoalition C.IfM=hA;P;0;iwith0= hP1:::;Pniisamodel, Cacoalition inM,andCaC-valuation, then byMCwemean the modelhA;P;0;0i,where0isthevaluation function defined asfollows 0(p)^=( C(p)ifp2PC (p) otherwise andallother elements ofthemodel areasinM.ThusMCdenotes themodel thatisidentical toMexcept thatthevalues assigned byitsvaluation function topropositional variables controlled bymembers ofCaredetermined byC. Wedefine thesizeofamodelM=hA;P;0;itobejAj+jPj;wedenote thesizeofMby size(M). 2.4Transfer Program Relations Togiveamodal semantics tothedynamic logic constructs ofDCL-PC,wemust define, forevery transfer programabinary relation Rovermodels such that(M1;M2)2RiffM2isamodel thatmay result from onepossible execution ofprogramfromM1.Westart bydefining therelation Ri;pj,foratomic transfer programs oftheform i;pj,i.e.,agent igivescontrol ofpropositional variable ptoagent j.LetM=hA;P;0;iandM0=hA0;P0;0 0;0ibetwomodels with0= hP1;:::;Pniand0 0=hP0 1;:::;P0 ni.Then (M;M0)2Ri;pj iff 1.p2Pi(agent icontrols ptobeginwith) 2.incase i=j: (a)M=M0(agent igivesptoherself, with nochange inthemodel) 3.incase i6=j: (a)P0 i=Pinfpg(agent inolonger controls pafterw ards); (b)P0 j=Pj[fpg(agent jcontrols pafterw ards); and (c)allother components ofM0areasinM. 444 REASONINGABOUTTHETRANSFEROFCONTROL Inorder todefineMj=d',which means that'istrueinMunder thedirect semantics, weneed to beable todetermine what theinterpretation ofanarbitrary program is,onM;wedefine thisbelow. Notice thatexecuting anatomic transfer program hasnoeffectonthevaluation function ofamodel. Transfer programs only affectthedistrib ution ofpropositional variables toagents. Fortheremaining constructs oftransfer programs, wedefine theprogram relations inducti vely, interms oftherelations foratomic transfer programs, asdefined above.Letthecomposition of relations R1andR2bedenoted byR1R2,andthereflexivetransiti veclosure (ancestral) ofrelation RbyR.Then theaccessibility relations forcomple xprograms aredefined asfollows(Harel etal., 2000): R1;2^=R1R2 R1[2^=R1[R2 R^=(R) R'?^=f(M;M)jMj=d'g: Notice thatthelastofthese definitions refers totherelationj=d,which ofcourse hasnotyetbeen defined. The aimofthenextsection istodefine thisrelation. Weemphasise that, although the relations R'?andj=dmutually refer toone-another ,both relations areinfactwell-defined (asin conventional dynamic logic). 2.5Truth Conditions Weinterpret formulas ofDCL-PCwith respect tomodels, asintroduced above.Givenamodel M=hA;P;0;iandaformula',wewriteMj=d'tomean that'issatisfied (or,equivalently , true) inM,under the‘direct’ semantics. The rules defining thesatisf action relationj=dareas follows: Mj=d> Mj=dpiff(p)=tt (where p2P) Mj=d:'iffM6j=d' Mj=d'_ iffMj=d'orMj=d Mj=dC'iffthere exists aC-valuationCsuch thatMCj=d' Mj=dhi'iffthere exists amodelM0such that(M;M0)2RandM0j=d'. Wesayaformula isobjective ifitcontains nomodal constructs (i.e., operatorsCorhi).Thus objecti veformulae areformulae ofclassical propositional logic. Weassume theconventional definitions ofsatisfiability andvalidity: aDCL-PCformula'is d-satisfiable iffthere exists aDCL-PCmodelMsuch thatMj=d',and'isd-valid iffforevery DCL-PCmodelMwehaveMj=d'.Wewritej=d'toindicate that'isd-valid. Avalidformula isalsocalled atautology .Wesay'isfeasible ifitissatisfiable butnotvalid. Finally ,foranysetof formulas andformula',wedefine j=d'as8M(8 2Mj=d )Mj=d'. Letusdefine thebox‘2’tobethedual ofthecooperati veability modality ‘’as: 2C'^=:C:' 445 VANDERHOEK,WALTHER,&WOOLDRIDGE andthe‘[]’tobethedual ofthetransfer modality ‘hi’as: []'^=:hi:': Where Cisacoalition and'isaformula ofDCL-PC,wewrite contr ols(C;')tomean thatCcan choose'tobeeither trueorfalse: contr ols(C;')^=C'^C:' (1) Byusing thecontr ols(;)construct, wecancapture thedistrib ution ofpropositional variables among theagents inamodel. Lemma 1LetM=hA;P;0;ibeamodel forDCL-PC,i2Aanagent, CAasetofagents, andp2Papropositional variable inM.Then 1.(vanderHoek &Wooldridge, 2005b)Mj=dcontr ols(i;p)iffp2Pi; 2.Mj=dcontr ols(C;p)iffp2PC. Remark 1Can wecharacterize theformulas under contr olofacoalition C?Wehave: If'isfeasible andobjective ,thenj=d^ p2P(')contr ols(C;p) !contr ols(C;') (2) Observe thatproperty (2)isnottrue forarbitrary DCL-PCformulas. Toseethis, takeforexample theformulahi;pji>,nomatter whether wedefine P(hi;pji>)tobefpgor;.Wehavej=d :contr ols(i;hi;pji>):independent ofiowning p,exactly oneofthetwoformulashi;pji>and :hi;pji>istrue.That is,p2PiinMiffMj=dhi;pji>. Also note thatthe -direction oftheright hand sideof(2)isnotvalid forobjective':suppose M=hA;P;0;isuchthat(q)=tt,p2Pi,andq=2Pi.Then, wehaveMj=dcontr ols(i;p^q)^ :(contr ols(i;p)^contr ols(i;q)):because q‘happens’ tobetrue inM,icontr olstheconjunction p^q,butnoteachofitsconjuncts. 2.6AKripk eSemantics Although thedirect semantics naturally captures thenotions ofpropositional control andtransfer ofcontrol, forsome purposes –establishing completeness inparticular ,andrelating ittothemain stream ofmodal logic –itismore convenient toformulate thesemantics forDCL-PCusing conven- tional Kripk estructures (Chellas, 1980; Blackb urn, deRijke,&Venema, 2001). Theidea isthat, giventhe(fixed,finite, non-empty) setAofagents and(fixed,finite, non-empty) setPofpropo- sitional variables, there will beapossible world foreverypossible allocation ofthevariables in Ptotheagents inAandeverypossible propositional valuation function overP.Between those worlds, there arebasically two‘orthogonal’ accessibility relations (cf.Figure 4):a‘horizontal’ and a‘vertical’ one. First ofall,wehavea‘horizontal’ relation Riforagent ibetween twoworlds u andvifagent iisable, giventhevaluationuinu,toturnitintothevaluationvasdescribed byv, justbychoosing appropriate values forhervariables. Formally ,(u;v)2Riiffu=v(modPi). That is,Riisanequivalence relation. Inwhat follows,wedrop thesymbolic distinction between worlds andvaluations, i.e.,weusefordenoting aworld andavaluation interchangeably .Notice 446 REASONINGABOUTTHETRANSFEROFCONTROL p;q;r v u:p;q;r wRjRj :p;:q;rM0M =hP1;:::;Pi;Pj;Pk;:::;Pni M000=hP1;:::;Pinfpg;Pj[fpg;Pk;:::;Pni p;q;r v u:p;q;r wRj :p;:q;r Rkp;q;r v u:p;q;r wRj:p;:q;rRi 00=hP1;:::;P0 i;P0 jnfqg;Pk[fqg;:::;Pnii;pj j;qk Figure 4:Some Kripk emodels forDCL-PC. thatthe‘horizontal’ relation does notaffecttheallocation:itremains unchanged. Letustherefore define ourKripk emodels tobeM=h;Ri2A;i,where isthesetofallvaluationsoverP.It isimportant torealize thatthesetsAofagents andPofvariables arefixed,buttheallocations of variables toagents may vary.Wedenote thesetofallsuch Kripk emodels byK(A;P).Wewillcall apair(M;)apointed Kripk emodel ,andwewillsometimes omit thebrack etsforsuch apair. Secondly ,the‘vertical’ accessibility relation isbetween pointed models (M;)and(M0;0), where M=h;Ri2A;i;M0=h;Ri2A;0i2K(A;P),which indicate achange oftheallocation to0.Since such achange ofallocation does notaffectthecurrent world, wehaveforsuch pairs that=0.Slightly abusing notation, wedefine (M;)(i;pj)(M0;0)exactly when=0and p2Pi,andeither i=jandM=M0,orelseP0 i=PinfpgandP0 j=Pj[fpg,andalltheother setsPhremain thesame. 447 VANDERHOEK,WALTHER,&WOOLDRIDGE Thetruth relationj=Kinterpreting formulas overKripk estructures holds between pairs ofthe form(M;)andformulas'.Itsdefinition isasfollows(weomit theBoolean cases andthecases forcomple xtransfer programs): M;j=KC'iffthere exists avaluation0such that(;0)2Riforeach i2C,and M;0j=K' M;j=Khi;pji'iffthere exists aKripk emodel M0such that(M;)(i;pj)(M0;)and M0;j=K' Forasetofformulas andaformula',wedefine j=K'as8(M;)(8 2(M;)j=K ) M;j=K').Figure 4illustrates theKripk esemantics. Note thatforthesetsP0 iandP0 jintheKripk e model M0wehaveP0 i=PinfpgandP0 j=Pj[fpg.Note thatintheclause forC',thetwopointed models M;andM;0are‘thesame except foratmost theatoms inPC’.This isaspecial case of twomodels being ‘similar upto asetofatoms’ (French, 2006; Ghilardi &Zawadowski, 2000). Remark 2Note thatinfact, intheKripk esemantics, formulas arenotinterpr eted inamodel anda valuation only,butintheconte xtofother models (whic harereachedbytheatomic programi;pj). Therearefinitely many ofthem, oneforeach.Call thiscollection ofmodels.Infact, thisis thestructur ewith respect towhichformulas areinterpr eted. Inthatsense ,thereisonly oneKripk e model forthelangua ge(w.r.t.A;P):itis.Wewillprovecompleteness with respect tothisunique two-dimensional model, inSection 3. Thefollowing lemma iseasily established byinduction on': Lemma 2Foranyfixed setsofagentsAandpropositional variables P,thedirectsemantics and theKripk esemantics areequivalent, i.e.,forany',anyM2K(A;P)withM=h;Ri2A;i,and anymodelM=hA;P;;i: Mj=d'iffM;j=K': Asusual, wedefine Mj=K'as8:M;j=K',andj=K'as8M:Mj=K'. 3.AComplete Axiomatization Asound andcomplete axiomatization forDCL-PCispresented inFigure 5.Fortheease ofex- position, wedivide theaxiomatization into fivecategories, asfollows. While the‘Propositional Component’ andthe‘Rules ofInference’ arestraightforw ard,the‘Dynamic Component’ isanim- mediate adaptation ofPropositional Dynamic Logic (Harel etal.,2000). The‘Control Axioms’ are inherited fromCL-PC(vanderHoek &Wooldridge, 2005b). (The occurrence of`(p)refers toa literal with atomic proposition p:itiseither por:p,with theobvious meaning for:`(p).)Note thatallocation specifies thateverypropositional variable isassigned toexactly oneagent (i.e., we haveanallocation), while incontrast, forthefixedallocationthatwasassumed inCL-PC,one could explicitly state thatcontr ols(i;p),foreveryp2Pi(vanderHoek &Wooldridge, 2005b). Forthe‘Transfer &Control Axioms’, atomic permanence states thatnoprogramchanges the valuation. From this, oneeasily extends thistoarbitrary objecti veformulas (obtaining objective permanence ,seeTheorem 1below). The axiom persistence 1(contr ol)says thati’scontrol over pisnotaffected when wemovetoanother valuation, andaxiom persistence 2(contr ol)specifies 448 REASONINGABOUTTHETRANSFEROFCONTROL Propositional Component Prop ' where'isanyobjecti vetautology Dynamic Component K() []('! )! []'![]  union() [[0]'$ []'^[0]' comp() [;0]'$[][0]' test() ['?] $('! ) mix() '^[][]' $[]' ind() '^[]('![]') ![]' Control Axioms K(i) 2i('! )!2i'!2i  T(i) 2i'!' B(i) '!2ii' empty 2;'$' contr ol(i) contr ols(i;p)$ip^i:p allocationV p2P contr ols(1;p)55contr ols(n;p) where A=f1;:::;ng effect(i) ^`(p)^contr ols(i;p) !i ^:`(p) wherep62P( );and isobjecti ve Comp -[ 2C12C2'$2C1[C2' Transfer &Control Axioms atomic permanence (;)hi;pji>! [i;pj]q$q persistence 1(contr ol) contr ols(i;p)!2jcontr ols(i;p) persistence 2(contr ol) contr ols(i;p)![j;qh]contr ols(i;p) where i6=jorp6=q precondition (transfer )hi;pji>!contr ols(i;p) transfer contr ols(i;p)!hi;pjicontr ols(j;p) func contr ols(i;p)! hi;pji'$[i;pj]' Rules ofInference Modus Ponens`';`('! ))` Necessitation `')`2' 2=[];2i Figure 5:Axiomatic System forDCL-PC. howiremains incontrol overp,evenwhen atransfer program isexecuted: either thevariable passed inthatprogram isnotp,orthedelegating agent isnoti.Theaxiom precondition (transfer ) expresses thatagents canonly givevariables awaythattheypossess, and, finally func says thatthe transition relation associated with anatomic transfer program isfunctional: atmost oneresulting worldemer ges. Thefollowing theorem listssome properties ofDCL-PC,where contr ols(C;p)isdefined inequa- tion(1)above. Theor em1 1.Theaxioms K(i),T(i),B(i),andeffect(i)have coalitional counterparts K(C),T(C),B(C), andeffect(C)thatareallderivable foranycoalition C. 449 VANDERHOEK,WALTHER,&WOOLDRIDGE at-least(contr ol): `(p)^contr ols(i;p) !i:`(p) at-most(contr ol): `(p)!i:`(p)!2j`(p) (i6=j) non-effect(i):i`(p)^:contr ols(i;p) !2i`(p) persistence (non-contr ol): :contr ols(i;p)$2j:contr ols(i;p) objective permanence (;): hi;pji>! '$[i;pj]' where'isobjecti ve objective permanence : hi>! '$[]' where'isobjecti ve inverse: contr ols(i;p)! '$[i;pj;j;pi]' reverse: [i;pj][k;qh]'$[k;qh][i;pj]' where (j6=kandh6=i)orp6=q Figure 6:Some Theorems ofDCL-PC. 2.Moreover,weknow (van derHoek &Wooldridg e,2005b) thattheaxioms K(i),T(i),B(i), and effect(i)have thecoalitional counterparts K(C),T(C),B(C),and effect(C)that are derivable foranycoalition C. 3.`contr ols(C;p)$W i2Ccontr ols(i;p). 4.`contr ols(C;p)!2jcontr ols(C;p),i.e.,theproperty persistence 1(contr ol)isalso deriv- able when wereplace agentibyanarbitr arycoalition C. Proof: SeeAppendix A. QED Consider thelanguage without dynamic transfer operators, inwhich weonly havepropositional logic with cooperation modalitiesC.Models forthese areM,M02K(A;P).Inthisprogram-free language, everyformula isequivalent toonewithout anyoccurrences ofcoalition operators vander Hoek andWooldridge (2005b). Forinstance, suppose thatPi=fp;qg.Then aformulai(:p^r) isequivalent to(p^r)_(:p^r)(we‘read off’thecurrent value ofvariable routside i’scontrol). Wenowestablish asimilar result forthelanguage including programs. Anyworld(M;)is completely characterized when weknowwhich variables aretrueinit,andwhat theallocation of variables toagents is.Insuch acase, thetruth ofallobjecti veformulas, formulas involving abilities andtransfer programs iscompletely determined. Lemma 3Let'beanarbitr aryDCL-PCformula andaconjunction ofassertions oftheform contr ols(j;p)or:contr ols(j;p).Then, inDCL-PC,wecanderive `C('^)$(^C'): 450 REASONINGABOUTTHETRANSFEROFCONTROL Proof: Since byComp -[,forC=fa1;a2;:::;aCg,wehaveC $a1a2aC ,itis sufficient toprovetheclaim foranindividual agent i.Moreo ver,wecanmoveallconjuncts ofout onebyone, once weknowthat `i('^contr ols(j;p))$(contr ols(j;p)^i'); wherecontr ols(j;p)iseither contr ols(j;p)or:contr ols(j;p).Wedothereasoning forthenon- negated case (theother oneissimilar):i('^contr ols(j;p))isequivalent to contr ols(j;p)^i('^contr ols(j;p)) _ :contr ols(j;p)^i('^contr ols(j;p)) : However,byusing thetheorem persistence (non-contr ol)from Figure 6(which wederivebelow), wehaveforthesecond disjunct that :contr ols(j;p)^i('^contr ols(j;p)) $?.That concludes theproof. Forpersistence (non-contr ol),theright-to-left direction followsimmediately from T(j).Forthe other direction, assume that:contr ols(i;p).From allocation wederivethat contr ols(1;p)55contr ols(i1;p)5contr ols(i+1;p)55contr ols(n;p); andfrom this, bypersistence 1(contr ol),wegetW k6=i2jcontr ols(k;p).Foreveryk6=i,wehave contr ols(k;p)!:contr ols(i;p),which followsfrom allocation .Hence, using Necessitation ,we have2j(contr ols(k;p)!:contr ols(i;p)).From Axiom K(j),itnowfollowsthat2jcontr ols(k;p)! 2j:contr ols(i;p).Combining thiswithW k6=i2jcontr ols(k;p),weobtain thedesired conclusion 2j:contr ols(i;p). QED Soundness oftheaxiom schemes inFigure 5isreadily check ed.Wenowproceed toprovethat theaxiomatic system forDCL-PCinFigure 5iscomplete. First, weintroduce some notation. Definition 1Giventhesetofpropositional variables P,avaluation descriptionisaconjunction ofliterals (por:p)overthem such thateverypropositional variable inPoccurs inoneliteral. Notice that, foreach propositional variable p2P,itholds thateither!p,or!:p.We denote thesetofallvaluation descriptions overPwith.Notice that, foreach valuation,there is a2such that =Vfpjp2Pand(p)=ttg ^Vf:pjp2Pand(p)= g: Definition 2Giventhesetofpropositional variables Pandthesetofagents A,anallocation de- scriptionisaconjunction offormulas oftheform contr ols(i;p)where foreveryp2P,there is exactly onei2Asuch thatcontr ols(i;p)appears in. Wedenote thesetofallallocation descriptions with.Notice thatallocationsandconjunctions  correspond toeach other: Foreach allocation=hP1;:::;Pniofthevariables inPovertheagents inA,there isa2such that =^ i2A;p2Picontr ols(i;p): 451 VANDERHOEK,WALTHER,&WOOLDRIDGE Therefore, werefer toformulasasallocation descriptions. Giventwoallocation descriptions ;02,wesay(i;pj)0ifthefollowing three conditions aresatisfied:!contr ols(i;p), 0!contr ols(j;p),andand0agree onallother contr olexpressions. Definition 3Let,foranyallocation description,beasetofvaluation descriptions. Then, aformula oftheform _ 2_ ^ (3) willbecalled aproposition description . Wewilllater,inTheorem 2,seethateveryformula'isequivalent toaproposition description. Theintuition here isthatthetruth of'requires, foreveryallocation description,some possible truth values ofatoms tobefixed.Togiveanexample, suppose there aretwoagents iandj,and three atoms p,qandr.Consider theformula'=hi;pji(q^j(p^r)).Inorder tofind the equivalent proposition description, wemust, forevery2makeproper choices for.If implies:contr ols(i;p)thisallocation would make'false(since icannot transfer control overp), soforthose,wehavetochoose tobetheempty set,ensuring that(W^)isequivalent to ?.Ifimplies contr ols(i;p),there arebasically twocases: eitheralsoimplies contr ols(j;r),and theconstraint onisthatWisequivalent toq,orelseimplies:contr ols(j;r),inwhich caseWisequivalent toq^r. Letus,fortwovaluation descriptions and0,acoalition C,andanallocation description ,write0(modC;)ifthetwoconjunctions ofliteralsand0only differinthevari- ables under control ofC,which isdetermined by.Forinstance, when C=f1;2gand= contr ols(1;p1)^contr ols(2;p2)^contr ols(3;p3),then(:p1^p2^p3)(p1^:p2^p3)(modC;). Wenowfirstcollect some factsabout valuation descriptions ,allocation descriptions ,andpropo- sition descriptions .Recall that, foranysetofDCL-PCformulas,5'2'isused asshorthand forW '2'^V '16='22:('1^'2). Lemma 4Given thesetofvaluation descriptions ,andthesetofallocation descriptions , thefollowing sixitems aresatisfied: 1.`52 2.`52 3.`'!W 2('^) 4.Forall2andall2:`!(C$W 0(modC;)0). 5.`(^hi;pji0)!((^)$hi;pji(0^)). 6.Letnbethenumber ofagents, andkthenumber ofpropositional variables. Then thereare notmorethan N(n;k)=22nkprovably non-equivalent proposition descriptions. Proof: 1.This followsfrom Propandthedefinition of:the’saremutually exclusi veandcannot all befalse. 452 REASONINGABOUTTHETRANSFEROFCONTROL 2.Item (2)iseasily seen tobeequivalent totheallocation axiom: allocation impliesW 2, and, foreveryallocation description2,wehavethatimplies allocation . 3.Item (3)isimmediate from Item (2)andaxiom Prop.Inparticular ,using Propwederivefrom `A_BthatC!(C^A)_(C^B). 4.Assume.Fortheright-to-left direction, also assume0,forsome valuation description0 with0(modC;).This means that0andonly differinsome variables p1;:::;pm forwhich contr ols(C;pj)isimplied by(for j=1:::m).Note that0isanobjecti ve formula. Wecanwrite0as i^`(pi)(fori=1:::m),where iisas0butwith theliteral `(pi)leftout.Apparently ,wehave 1^`(p1)^contr ols(C;p1).Since 1isobjecti ve,wecan apply effect(C)toconcludeC( 1^:`(p1)).Using Lemma 3,wederiveC(^ 1^:`(p1)). Wecannowrewrite 1to 2^`(p2),andobtainC( 2^`(p2)^:`(p1)^).Weuse effect(C)andLemma 3againandgetCC( 2^:`(p2)^:`(p1)^).ByComp -[,this isthesame asC( 2^:`(p2)^:`(p1)^).Wecanrepeat thisprocess until weget C( j^:`(pj)^^:`(p2)^:`(p1)^).But, bydefinition of0,thisimpliesC. Weshowtheother direction from lefttoright bycontrapositi ve:fixandandassume :W 0(modC;)0.Wehavetoshowthat:C.LetQ(;)=f`(p)j`!`(p)and0 !contr ols(C;p)gbethesetoftheliterals overvariables thatarenotunder control ofthe agents inCattheallocation.Notice thatallvaluations0(modC;)agree withon theliterals inQ(;).Wecanusepropositional reasoning toderivefrom:W 0(modC;)0 that:V `(p)2Q(;)`(p).Using T(C)andK(C),wethen conclude thatW `(p)2Q(;)C:`(p). From,itfollows,foreach literal`(p)2Q(;),that:contr ols(C;`(p)),which, byequa- tion(1),equalsC:`(p)!:C`(p).Butthen, wecanderiveW `(p)2Q(;):C`(p).Using K(C),weobtain:CV `(p)2Q(;)`(p).Hence,:C. 5.First ofall,from(^hi;pji0,follows!contr ols(i;p).Hence, given,wehavethatall formulashi;pji and[i;pj] areequivalent byfunc.Inparticular ,wehavehi;pji>. Letusfirstshowforliterals`(q)that`(q)$hi;pji`(q).Fornegativeliterals:q,:q$ hi;pji:qequals q$[i;pj]q,which followsfrom atomic permanence (;).Forpositi ve literals q,weusefunc toobtain q$[i;pj]q,which againholds byatomic permanence (;). Now,given,wehaveq$hi;pjiqand:q$hi;pji:q.Then, foranyvaluation description,wealso have$hi;pji.Itremains toshowthat$hi;pji0. From lefttoright, note that wehavecontr ols(i;p)!hi;pjicontr ols(j;p)bytransfer . Foranyother contr olsexpression implied by,thisfollowsfrom persistence 2(contr ol)and func.Finally ,consider thedirection from right toleft:hi;pji0!.Wehavetoshow that, first,hi;pjicontr ols(j;p)!contr ols(i;p),and, second, thathi;pjicontr ols(h;q)! contr ols(h;q),foreach contr ols(h;q)(with q6=p)implied by0.Thefirstpartfollowsimme- diately from precondition (transfer ).Forthesecond part, lethbeanagent andq6=pavari- able such that0implies contr ols(h;q).Supposehi;pjicontr ols(h;q).Byallocation ,we havethathi;pjiV k6=h:contr ols(k;q).Butthen thecontrapositi veofpersistence 2(contr ol) yieldsV k6=h:contr ols(k;q).Itfollowsbyallocation thatcontr ols(h;q). 6.Item 6followsfrom astraightforw ardcounting argument ofproposition description formulas. The number ofproposition descriptions depends onthecardinalities ofthesetsand. Giventhenumber n=jAjofagents, andthenumber k=jPjofpropositional variables, 453 VANDERHOEK,WALTHER,&WOOLDRIDGE itiseasy toseethatthere are2kvaluation descriptions in,andnkallocation descriptions in(i.e., thenumber ofwayswecandistrib utekvariables overnagents). Observ ethat anyproposition description formula isobtained byassigning asetofvaluation descriptions toeach allocation description.Hence, there are22knkproposition descriptions. Since 22knk22nk,weobtain with N(n;k)=22nkanupper bound forthenumber ofdifferent proposition description formulas. QED Wenowpresent themain result ofthissection. Wefirstformulate it,reflect briefly onit,and then giveitsproof. Theor em2ForeveryDCL-PCformula',therearesets(')ofvaluation descriptions, one foreach2,suchthat `'$_ 2_ (')^ : According toTheorem 2,wecangetridofallthehi;pji .Toshowthatourderivation system isgood enough toestablish that, isthemain task ofitsproof. Butletusfirst convince ourselv es semantically thatsuch anormal form makessense. Remember thateverymodel Mcomes with itsownallocation.Aformula likehi;pji istrue atM;,if istrue inamodel thatlooks likeM,butinwhich control overpistransferred from itoj.Butthismeans thatsome formula 0 must already betrueatM;,where itakestheroleofjforasfaraspisconcerned. Forinstance, hi;pji(q^j(p^r))istrueatM;,ifq^jr^contr ols(i;p)istrueunder thecurrent allocation .This formula hasnoreference toother ‘layers’ anymore. More precisely ,hi;pji(q^j(p^r)) isequivalent to q^((p^r)_(:p^r))^contr ols(i;p)^:contr ols(j;r) _ q^((p^r)_(:p^r)_(p^:r)_(:p^:r))^contr ols(i;p)^contr ols(j;r) : Proof: Theproof isbyinduction onthenormjjjjwhich isdefined onDCL-PCformulas asfollows: jj>jj =jjpjj=0,foranyp2P jj: jj=1+jj jj jj 1_ 2jj=1+jj 1jj+jj 2jj jjC jj=1+jj jj,foranyCA jj[i;pj] jj=1+jj jj jj['?] jj=1+jj:'_ jj jj[1[2] jj=1+jj[1] _[2] jj jj[1;2] jj=1+jj[1][2] jj jj[] jj=1+jjV i=0::N[]i jj where N=N(n;k)isthenumber defined inLemma 4,Item (6). Theinduction base fortheproof ofourtheorem hastwocases: '=>.Wetake(>)=,forevery2.ByItem (1)ofLemma 4,wehavethatWis anobjecti vetautology .Hence,W 2W(>)^ isequivalent toW 2,which inturn isequivalent to>by(2)ofLemma 4. 454 REASONINGABOUTTHETRANSFEROFCONTROL '=p,forp2P.Take(p)=f2j`!pg,forevery2.Clearly ,pisequivalent toW(p),i.e.,`p$W(p).Now,using (3)ofLemma 4,weget`p!W 2(p^). Finally ,replacing thesecond occurrence ofpwith itsjustderivedequivalent formula, weget `p!W 2W(p)^ .Theother direction followsbysimple propositional reasoning. Consider theinduction step. '=: .Weset(: )=n( ),forevery2.This works, because ofthe following: : $:_ 2_ ( )^ (4) $^ 2 (:_ ( ))_: (5) $_ 2 (:_ ( ))^ (6) $_ 2_ (: )^ (7) Allsteps arepurely propositional, except fortheequivalence between (5)and(6)which we explain now.Abbre viate (5)toV 2(:A_:),then (6)isW 2(:A^).Note thatby Lemma 4,Item (2),wederiveW 2.Inother words, onemust betrue, say^.Butnote thatV 2(:A_:)^^implies (:A^^^)andhence alsoW 2(:A^),ourabbre viation for(6).Conversely ,ifW 2(:A^)holds, weknowbyLemma 4,item (2)that42, i.e.,exactly onemust hold, sayitis^.Forthis^theformula (:A^^^)istrue, anditis theonly allocation description forwhich such aformula canbetrue. Forthis^,wealsohave (:A^_:^).Moreo ver,forany6=^,wehave:,andhence (:A_:).So(:A_:) holds forall,hence wehaveV 2(:A_:),ourshorthand for(5). '= 1_ 2.Weset( 1_ 2)=( 1)[( 2),forevery2.Forthefollowing equivalences, weonly need propositional reasoning: 1_ 2$_ 2_ ( 1)^ __ 2_ ( 2)^ (8) _ 2 (_ ( 1)__ ( 2))^ (9) _ 2_ ( 1_ 2)^ (10) '=C .Forevery2,weset (C )=f2j0(modC;)forsome02( )g 455 VANDERHOEK,WALTHER,&WOOLDRIDGE Wecanderivethefollowing equivalences: C $C_ 2_ ( )^ (11) $_ 2C_ ( )^ (12) $_ 2C_ ( ) ^ (13) $_ 2_ (C )^ (14) Theequivalence in(11) holds bytheinduction hypothesis. Using K(i),thisisequivalent to (12) (foranydiamond wehaveC('_ )$(C'_C )).Theequivalence ofthelatter and(13) isbyLemma 3. Itremains toshowtheequivalence of(13) and(14).WehaveCW( )=CW 2( ). ByK(C)andComp -[,thisformula isequivalent toW 2( )C.Using Item (4)in Lemma 4,weseethatthisisequivalent toW 2( )W 0j0(modC;)0.ButthisequalsW(C )bydefinition of(C ). '=[i;pj] .Wedefine ([i;pj] )asfollows:forevery2, ([i;pj] )=8 >< >:0( )if!contr ols(i;p) where(i;pj)0  otherwise Toseethatthisyields aformula oftheright form thatisequivalent to[i;pj] ,letusfirst partition in+(i;p)=f2j`!contr ols(i;p)gand(i;p)=f2j`! :contr ols(i;p)g.Nowconsider thefollowing derivable equivalences: [i;pj] $[i;pj]_ 02(_ 0( )^0) (15) $:contr ols(i;p) _(contr ols(i;p)^hi;pji_ 02(_ 0( )^0)) (16) $_ 2(i;p)(_ ^)_ _ 2+(i;p) ^hi;pji_ 02(_ 0( )^0) (17) $_ 2_ ([i;pj] )^ (18) The equation in(15) holds bytheinduction hypothesis. Itisequivalent to(16) bypropo- sitional reasoning, andchanging from[i;pj]tohi;pjiisallowed byfunc.The equiv- alence of(16) and(17) followsfrom thedefinition of+(i;p)and(i;p)andthefact 456 REASONINGABOUTTHETRANSFEROFCONTROL thatW$>.Inorder toprovetheequivalence between (17) and(18),itissufficient toshowthat, foranyfixed2+(i;p),theformula^hi;pjiW 02(W0( )^ 0)isequivalent to^W0( ),where(i;pj)0.Butthisfollowsfrom Lemma 4, Item (5),asfollows. First ofall,wecanwrite^hi;pjiW 02(W0( )^0)as ^W 02hi;pji(W0( )^0).Bythementioned lemma, weknowexactly which0 weneed: itis0forwhich(i;pj)0giving^hi;pji(W0( )^0).Wecanrewrite thisinto^hi;pjiWf^0j20( )g,andthen push thediamondhi;pjiinside thedisjunction toget^Wfhi;pji(^0)j20( )g.Butthen Lemma 4,Item (5) yields^W0( ).Theother direction issimilar: if^W0( )and(i;pj)0,then, byLemma 4,Item (5),weget^hi;pji(W0( )^0),from which theresult follows. '=[ 0?] .Byaxiom test(),[ 0?] isequivalent to: 0_ ,which hasanequivalent formula oftheright form bytheinduction hypothesis. '=[1;2] .Byaxiom comp(),[1;2] isequivalent to[1][2] ,which hasanequivalent formula oftheright form bytheinduction hypothesis. '=[1[2] .Byaxiom union(),[1[2] isequivalent to[1] _[2] ,which hasan equivalent formula oftheright form bytheinduction hypothesis. '=[] .Recall N=N(n;k)asgiveninLemma 4,Item (6). Using axiom mix()and K(),weknowthat[] isequivalent to ^[][] .Doing thisNtimes, weobtain ^[] ^[]2 ^^[]N ^[]N[] : Bytheinduction hypothesis, weknowthatallexcept thelastconjunct haveanequivalent normal form. Butsince there areonly Ndifferent such forms, theconjunct []N must be equivalent tooneoftheearlier conjuncts []i (i<N).Define=V i=0::N[]i .Weclaim $[] : (19) And bytheinduction hypothesis,hasanequivalent formula oftheright form. The‘ ’di- rection of(19)isobvious since‘isjustthefirstpart’ ofthe‘unra veling’ ^[] ^[][] ::: of[] using axioms mix()andK().Toshowtheother direction itissufficient toderive ![],because, bythefactthat isjustoneoftheconjuncts in,thisimmediately gives![] .Toshowthederivability of![],wewilluseind().First ofall, weshow![].Toseethis, note that[]$[][]0 ^[][]1 ^^[][]N .By theinduction hypothesis, each conjunct []i (iN)hasanormal form, say,Bi.Then we obtain fromasequence B0;B1;:::;BNofN+1formulas innormal form. Since there are atmost Nprovably non-equi valent formulas byLemma 4,Item (6),weknowthatthere isaBj thatequals aprevious Bawith a<jN.LetBjbethefirstsuch repetition inthissequence. Notice thatwehavenow[]j Bj=Ba[]a ,andthus[]k[]j []k[]a ,forany k0.Butthen, itfollowsthatthelastconjunct [][]N in[]isequivalent to[]k[]a (with k=Nj),which already appears in.Nowthatwehavederived![],weapply Necessitation for[],andobtain`[](![]).Byapplying ind(),weget![]. QED Weknowfrom Lemma 4thatthere areonly finitely manydifferent normal forms: since every formula hassuch anormal form, there canbeonly finitely manynon-equi valent formulas. Wealso 457 VANDERHOEK,WALTHER,&WOOLDRIDGE knowfrom theproof ofTheorem 2abovethat, for[] ,weonly havetoconsider afinite number ofconjuncts []i initsunraveling. Corollary 1Thereareonly finitely many pairwise non-equivalent formulas ofDCL-PC.Infact, given thenumber nofagents, andkofpropositional variables, wehave: 1.8i6=jM6`( i$ j))`W iM i,and 2.`[] $V iN[]i , wher eM=2nkandN=22nk(asdefined inLemma 4,Item (6)). Completeness ofaderivation system with inference relation`with respect toasemantics means thateverysemantically validformula isalsoprovable:j=')`'.Inorder toprovecompleteness, often thecontrapositi veofthisisshown:6`')6j='.That is,everyconsistent formula:'has amodel. Apopular technique inmodal logic istoconstruct adedicated model –thecanonical model, (cf.e.g., Blackb urnetal.,2001) –foranyconsistent formula .That canonical model isa ‘bridge’ between syntax andsemantics: itconsists ofallmaximal consistent sets(asworlds), and isconstructed insuch awaythatmembership ofaworldinandtruth inthecorresponding world coincide. ForDCL-PCthisworks straightforw ardly asfollows. Fixthefinite setsAandP,andtakea consistent formula .Build amaximal consistent set‘around it’.Letthenormal form of , guaranteed byTheorem 2,beW (W^).Since ismaximal consistent, forsome,we have(W^)2.Again,bymaximal consistenc yof,itmust contain, forexactly one 2,theformula^.Butuniquely determines avaluation=,whereasdetermines anallocation=.Inother words,uniquely determines apointed Kripk emodel (M;)with M=h;Ri2A;i.Alltheworlds inaredetermined byP,andallRi2A(thehorizontal layer of M,interms ofFigure 4)aredetermined bytheControl Axioms. Theavailability ofalltheright models M0=h;Ri2A;0i(the vertical layer inFigure 4)isdetermined bytheDelegation and Control Axioms. Asaresult, wecandirectly interpret allsubformulas oftheformhi;pji'and C'intheproper way,in(M;). This argument easily extends tostrong completeness, which states thatforallsetsofformulas andallformulas',wehavej=K')`'.Toseethis, suppose 6`',i.e.,[f:'g isconsistent. Since there areonly finitely manypairwise non-equi valent formulas, there must bea formula thatisequivalent with[f:'g.Bytheprevious argument, wefindapointed model (M;)such that(M;)j=K .Forthismodel wealso have(M;)j=K[f:'g.Hence, not everymodel forisonefor',i.e.,6j=K'.Strong completeness also followsbyanalternati ve wayofresaoning: note thatourlanguage iscompact :i.e.,ifj=K',there there isafinite set 0such that0j=K'.This isseen asfollows:weknowfrom Corollary 1thatthere areat most Mdifferent formulas thatarenotprovably equivalent, foranMthatdepends onthenumber ofagents andthenumber ofatoms. Butthen, bysoundness, there arealso atmost Msemantically different formulas in.Putting those formulas in0givesthedesired result. Strong completeness then followsfrom (weak) completeness with compactness. Allinall,weobtain thefollowing (also using Lemma 2). Theor em3Thelangua geofDCL-PCiscompact. Moreover,theaxiomatic systemDCL-PCissound andcomplete with respect toboth theKripk eandthedirectsemantics. Itisalso strongly complete . 458 REASONINGABOUTTHETRANSFEROFCONTROL 1.function program-eval(;M=hA;P;;i;d)retur ns`fail'oramodel overA,P 2. if='?then 3. retur nMifDCL-PC-eval(';M) 4. or`fail'otherwise 5. elsif=(i;pj)then 6. retur nhA;P;0;iifp2Pi 7. where0=ifi=j 8. otherwise=hP1;:::;Pniand0=hP0 1;:::;P0 ni 9. withP0 i=Pinfpg, 10. P0 j=Pj[fpg,and 11. P0 m=Pmforallmn,m6=i;j 12. or`fail'otherwise 13. elsif=(1;2)then 14. retur nprogram-eval(2;program-eval(1;M;d);d) 15. elsif=(1[2)then non-deter ministically choose toeither 16. retur nprogram-eval(1;M;d) 17. orprogram-eval(2;M;d) 18. elsif=0then 19. retur n`fail'ifd=0 20. orotherwise (ifd>0)non-deter ministically choose toeither 21. retur nM 22. orprogram-eval((0;0);M;d1) 23.end-function Figure 7:Analgorithm fordeciding (M;M0)2R. That is,forallsetsofDCL-PCformulas andforeveryDCL-PCformula',wehave `'iffj=K'iffj=d' 4.Complexity Themodel checking andsatisfiability problems forCL-PCarePSPACE-complete (vanderHoek & Wooldridge, 2005b), andsinceDCL-PCsubsumes CL-PC,thisimplies aPSPACE-hardness lower bound onthecorresponding problems forDCL-PC.Theobvious question isthen whether theaddi- tional dynamic constructs ofDCL-PClead toamore comple xdecision problem –andinparticular , whether DCL-PCsatisfiability matches theEXPTIME-completeness ofPDLsatisfiability (Harel etal., 2000). Inthissection, weshowthatthemodel checking andsatisfiability problems areinfactno worsethanCL-PC:theyarebothPSPACE-complete. Notice thatEXPTIME-completeness isusually regarded asthecharacteristic comple xityoflogics inwhich there isamodal operator andanother operator representing thetransiti veclosure ofthisoperator (Blackb urnetal.,2001). Note thatwhen weconsider themodel checking problem inthissection, weconsider theprob- lemwith respect todirectmodels, notKripk emodels. Ofcourse, with respect tosatisfiability ,it makesnodifference: aformula issatisfiable with respect todirect models iffitissatisfiable w.r.t. Kripk emodels. Before provingPSPACE-completeness forDCL-PCmodel checking, consider some auxiliary no- tions first. Aprogramsequence isatransfer program thatiscomposed ofatomic transfer programs, tests, andsequential composition only.Aprogramadmits aprogram sequence ifcanbeun- 459 VANDERHOEK,WALTHER,&WOOLDRIDGE folded into byrecursi velyapplying thefollowing rules: Foranyatomic transfer program (i;pj), test'?,andtransfer programsm(m0): (i;pj) (i;pj) '? '? 1;2 1;2 1[2 1or2  1;2;:::;n,forsome n0, wherem=,forallmn Thefollowing twolemmas establish thatmembership intheaccessibility relation Rforatrans- ferprogramcanbedecided inpolynomial space. Lemma 5Foralltransfer programs0andall(direct)modelsMandM0,(M;M0)2R0implies that0admits aprogramsequence oflength atmost exponential inthelength of0suchthat (M;M0)2R .Infact, thelength of canbelimited to2j0j3. Proof: Let0,M,andM0beasinthelemma. The proof isbyinduction onthestructure of 0.The only interesting case iswhere0=;theother cases arestraightforw ard. Suppose (M;M0)2R.Since R=(R),there isasequenceM0;:::;Mn,n>0,ofmodels such thatM0=M,Mn=M0,and(Mi1;Mi)2R,foreach iwith1in.Bythetransiti vity ofR,wecanassume thatthesequenceM0;:::;Mnissuch thatMi6=Mj,foralli;jwith 1i<jn,i.e.,thesequence ofmodels contains noloops. Theinduction hypothesis yields that admits program sequences 1;:::; nsuch that iisoflength atmost2jj3and(Mi1;Mi)2R i foreach iwith1in.Butthen = 1; 2;:::; nisaprogram sequence admitted bysuch that(M;M0)2R.Inthefollowing, itisshownthat hastherequired length. Note thatall models reachable fromM=hA;P;;iviaRonly differintheallocationofpropositional variables inPtotheagents inA.More precisely ,theydifferintheallocation ofpropositional variables toagents thatoccur in.Thus there areatmost`msuch reachable models, where`isthe number ofpropositional variables occurring inandmthenumber ofagents occurring in.Notice thatndoes notexceed`m;otherwise thesequenceM0;:::;Mncontains loops contradicting the assumption. Together with thefactthat`mjjjj2jj2,anupper bound forthelength of can begivenasfollows: j j=j 1; 2;:::; njnsupfj ij:1ing+n 2jj22jj3+2jj2 =2jj2+jj3+2jj2 2(jj+1)3 2jj3: QED Lemma 6Forallprogramsandall(direct)modelsMandM0,themember shipproblem (M;M0)2 Rcanbedecided inPSPACE. Proof: Letbeaprogram andletM,M0betwo(direct) models. Consider thefollowing algorithm thatdecides (M;M0)2Rbyusing thefunction program-eval()inFigure 7: 460 REASONINGABOUTTHETRANSFEROFCONTROL 1.Setd=2jj3. 2.Ifprogram-eval(;M;d)=M0,then return ‘(M;M0)2R’,and‘No’ otherwise. Toseethatthisalgorithm iscorrect, itisshownthatprogram-eval(;M;d)=M0iff(M;M0)2 R.Forthedirection from lefttoright, itisreadily check edthatprogram-eval(;M;d)=M0 implies theexistence ofaprogram sequence admitted byoflength atmostjjdsuch that (M;M0)2R0.Clearly ,R Randthus(M;M0)2R.Consider thedirection from right toleft. From(M;M0)2R,itfollowsbyLemma 5thatthere isaprogram sequence admitted byoflength atmost2jj3such that(M;M0)2R .Step 1ensures thatthevalue ofdissuch that2jj3jjd.Then itisobvious byconstruction ofthealgorithm thatthenon-deterministic choices inthelines 15and20ofFigure 7yield thatprogram-eval(;M;d)=M0.Notice that thealgorithm terminates since therecursi vecalls inthelines 14,16,and17areapplied onstrict subprograms only andtherecursi vecallinLine 22isfollowed bytheoneinLine 14while the parameter dlimits therecursion depth. The abovealgorithm canberuninpolynomial space. Toseethis, notice that thefunction DCL-PC-eval(),which iscalled inLine 3,canbecomputed inpolynomial space andthatthe parameter disencoded inbinary .Moreo ver,thestack ofanalgorithm computing thefunction program-eval()canbelimited toasize polynomial inthelength of.Note that thestack only needs tostore thecurrently evaluated program andtheprograms atthebacktracking points, which areintroduced atthenested function callinLine 14.Butsince thisnested function callis applied onstrict subprograms, there areonly linearly manybacktracking points needed atatime. Although thealgorithm isnon-deterministic, itfollowsfrom thewell-kno wnfactNPSPACEequals PSPACE(Savitch, 1970) thatitruns inPSPACE. QED Using theprevious twolemmas, wecannowprovethefollowing. Theor em4Themodel checking problemDCL-PC(w.r.t.directmodels) isPSPACE-complete . Proof: GiventhatDCL-PCsubsumes thePSPACE-hard logicCL-PC,weonly need toprovetheupper bound. Consider thefunction DCL-PC-eval()inFigure 8.Soundness isobvious byconstruction. First note thatthealgorithm isstrictly analytic: recursion isalwaysonasub-formula oftheinput. That thealgorithm isinPSPACEfollowsfrom thefactthat theloops atlines 10–12 and15-18 involve,inthefirst case simply binary counting with thevariables PC,andinthesecond simply looping through alldirect models overAandP:wedonotneed tostore these models once they arecheck ed,andsothiscanbedone inpolynomial space. Finally ,Lemma 6yields thatthecheck (M;M0)2RonLine 16canbedone inpolynomial space. QED Now,wemakeuseofthefollowing result, theproof ofwhich isidentical totheequivalent result provedbyvanderHoek andWooldridge (2005b). Lemma 7IfaDCL-PCformula'issatisfiable ,then itissatisfied ina(direct)modelMsuchthat size(M)=jP(')j+jAg(')j+1. Wecannowprovethefollowing. Theor em5Thesatisfiability checking problem forDCL-PCisPSPACE-complete . 461 VANDERHOEK,WALTHER,&WOOLDRIDGE 1.function DCL-PC-eval(';M=hA;P;0;i)retur nsttor 2. if'2Pthen 3. retur n(') 4. elsif'=: then 5. retur nnot DCL-PC-eval( ;M) 6. elsif'= 1_ 2then 7. retur nDCL-PC-eval( 1;hA;P;0;i) 8. orDCL-PC-eval( 2;hA;P;0;i) 9. elsif'=C then 10. foreach C-valuationC 11. ifDCL-PC-eval( ;hA;P;0;iC)then retur ntt 12. end-f or 13. retur n 14. elsif'=hi then 15. foreach modelM0overA,P 16. if(M;M0)2Rthen 17. ifDCL-PC-eval(';M0)then retur ntt 18. end-f or 19. retur n 20.end-function Figure 8:Amodel checking algorithm forDCL-PC. Proof: Givenaformula',loop through each modelMcontaining P(')andAg(')such that size(M)=jP(')j+jAg(')j+1,andifMj=d'then return ‘Yes’.Ifwehaveconsidered allsuch models, return ‘No’. ByTheorem 4,wecancheck whetherMj=d'inpolynomial space. QED Notice thatthePSPACEcomple xityforchecking satisfiability depends upon thefactthatmodels forDCL-PCareconcise, andthathence wecanloop through them allinpolynomial space (wedo notneed to‘remember’ amodel after ithasbeen considered). 5.Characterizing Contr ol One ofthemain concerns intheoriginal study ofCL-PC(vanderHoek &Wooldridge, 2005b) was toinvestig atethelogical characterization ofcontr ol:theextent towhich wecould characterize, in thelogic, what states ofaffairsagents could reliably control. Control wasdistinguished from ability inthesense that, forexample, noagent could besaid tocontrol atautology ,evenifonemight be prepared toconcede thatanagent would havetheability tobring about atautology .Thestarting point forthestudy ofcontrol (vanderHoek &Wooldridge, 2005b) wasthecontr ols(i;p)construct: aswehavealready seen, such anexpression willbetrueiffthevariable pisunder thecontrol of agent i.This ledtoananalysis andcharacterization ofthetypes offormulas thatanagent could besaid tocontrol. Thetype ofcontrol studied byvanderHoek andWooldridge derivesfrom the ability ofagents tochoose values forthepropositional variables under their control. Letusrefer to thistype ofcontrol, where anagent isdirectly able toexertsome influence oversome state ofaffairs byassigning values toitsvariables, asfirst-ordercontr ol.Inthissection, weundertak easimilar study ofcontrol inthericher setting ofDCL-PC.Here, however,wehaveasecond type ofcontrol, 462 REASONINGABOUTTHETRANSFEROFCONTROL which derivesfrom theability totransfer contr olofvariables toother agents.Thus, forexample, ifi controls p,shealso‘has thepower’ toensure forinstance contr ols(j;p),where jisanagent different from i.This control isexpressed through thetransfer modality:hi;pjicontr ols(j;p).Werefer tothistype ofcontrol assecond-or dercontr ol.Wewillseethatthese types of‘control’ areindeed rather orthogonal. Forinstance,hi;pjij'(icangiveptoj,who then canachie ve')andi;j'(i andjcancooperate, toachie ve')arelogically incomparable. Forexample, taking'=hj;pii> gives j=dcontr ols(i;p)!(hi;pji'^:i;j') while for'=hi;pji>andassuming i6=j,wehave j=dcontr ols(i;p)!(:hi;pji'^i;j'): However,ifthegoal isanobjecti veformula, wecanrelate atomic control andtransfer ,aswewill shortly see. Tobeginourstudy ,consider thetransfer program givei^=[ p2P contr ols(i;p)?;[ j2Ai;pj : (20) Thenhgiveii'would express thatihasawaytogiveoneofherpropositional variables toone oftheagents (possibly herself) insuch awaythatconsequently 'holds. Thus,hgive ii'means that icandistrib utehervariables among theagents insuch awaythatafterw ards'holds. Hence, when reasoning about i’spower,thestrongest thatshecanachie veisany'forwhichi'_hgive ii', expressing thaticanachie ve'byeither choosing anappropriate value forhervariables, orby distrib uting hervariables overAinanappropriate way.Note thatbothi'andhgive ii'imply hgive iii',andhence any'forwhichhgive iii'holds canbeseen aswhat icanachie veonher own.Wewillcome back totheprogramgiveibelow. The programgive canbegeneralized toincorporate coalitions thatcangiveawayvariables, andthose thatcanrecei ve:let giveC;D^=[ i2C[ p2P contr ols(i;p)?;[ j2D[fighi;pji : (21) This programgiveC;Dletsanarbitrary agent ifrom thecoalition Ceither giveanyofher variables ptoanarbitrary member ofthecoalition D,ordonothing (i.e., givethem toherself). Now,forobjecti veformulas',wehavethefollowing, where iisadedicated agent from C: C'$hgive C;figii': Inwords: theagents inthecoalition Ccanchoose values fortheir variables such that',ifandonly iftheyhaveawaytogivealltheir variables tothededicated agent i,who then canachie ve'.Note thatweareingeneral notable toeliminate alloccurrences of’s,since thisistheonly wayto express first-order control, i.e.,toreason about a‘different valuation’. Forsome examples inthelanguage without transfer ,werefer toapaper byvanderHoek and Wooldridge (2005b), especially totheexample ‘Bach orStravinsk y’,(i.e., Example 2.4,vander Hoek &Wooldridge, 2005b). Before looking attwoexamples ofcontrol inadynamic setting, note thatAallowsthefollowing inference, foranyobjecti veformula': 'isconsistent)j=dA' (22) This inference says thatthegrand coalition Acanachie veanysatisfiable objecti veformula. 463 VANDERHOEK,WALTHER,&WOOLDRIDGE Example 1Suppose wehave nagents: 1,:::;n.Eachcontr olsaflagri(i=1:::n)toindicate that theydesir econtr oloveraparticular resour ce,modeled with avariable p.Itisnotthattheywant p tobetrue orfalse every now andthen, (whic hcould betakencareofbyacentr alagentexecuting aprogrammaking pfalse andtrue alternatively), butrather ,theywant tocontrol peventually .Let +ndenote addition modulo n,and, similarly ,nsubtr action modulo n.Letskip denote>?,i.e.,a testonatautolo gy.Consider thefollowing program: grant -req(i)^=if:contr ols(i;p)thenskipelseif ri+n1 then (i;pi+n1)else::: :::if ri+n(n1) then (i;pi+n(n1))elseskip Theprogramgrant -req(i)makesagentipass ontheresour cepwhene vershehasitandsome- body elseneeds it,wher etheneed ischeckedintheorderstarting with theagentwith thenext+n index.Note that the‘use’ ofthisvariable p,i.e.,making ittrue orfalse ,isnotencoded inour programconstructs. Now consider theprogram pass -on(i;j)^=grant -req(i);:::;grant -req(jn1): Theprogrampass -on(i;j)willpass contr oloverthevariable ptoagentj,provided thatinitially rj issetandoneoftheagents inthesequence i;i+n1;:::;jn1owns it.This canbeexpressed as follows: rj^contr ols(fi;i+n1;:::;jn1g;p)![pass -on(i;j)]contr ols(j;p): Now wehave: ri! hpass -on(i+n1;i)icontr ols(i;p) ^[pass -on(i+n1;i)]contr ols(i;p) : That is:ifagentiflagsarequest riforresour cep,then, after theprogrampass -on(i+n1;i)has been executed, iwillbeunder contr olofp. Notice thattheprevious example ‘freely’ passes onavariable along chains ofagents, thereby taking forgranted thattheycancontrol thatvariable onthefly,andmaking ittrueorfalseatwill. In thefollowing example, control overavariable isnotonly important, butalsothetruth ofsome side conditions involving them. Example 2Wehave ascenario with threeagents: twoclients c1and c2,and aserver s.The server always hascontr oloveroneofthepropositional variables p1andp2,inparticular swants toguarantee thatthose variables arenevertrue simultaneously .Atthesame time,c1andc2want toensur ethatatleast oneofthevariables pi(i=1;2)istrue,wher evariable pibelongs toclient ci.Wecandescribe theinvariant ofthesystem with theformula Inv: Inv^=_ i=1;2contr ols(s;pi)^_ i=1;2contr ols(ci;pi) Consider thefollowing transfer program : ^= (contr ols(s;p1)?;s;p1c1;c2;p2s) [(contr ols(s;p2)?;s;p2c2;c1;p1s): 464 REASONINGABOUTTHETRANSFEROFCONTROL This says thatanarbitr arynumber oftimes onevariable piispassed fromtheserver totheclient ci,andanother variable pj(i6=j)fromtheclient cjtotheserver . Using Invand ,wecandescribe thewhole scenario asfollows: Inv![ ]Inv Inv![ ]s:(p1^p2)^fc1;c2g(p1_p2) Ageneral characterization ofthetypes offormulas thatagents andcoalitions could control was givenbyvanderHoek andWooldridge (2005a), andouraimisnowtoundertak ethesame study for DCL-PC.Itwillappear thatthiscanbedone onalocal andaglobal level,butwewillalso seethat thenotion ofcontrol thatweinherited fromCL-PC,hasanatural generalization inourconte xt. Thenextcorollary establishes aresult concerning thecharacterization ofcontrol. Itsfirstitem says thatofnostrict sub-coalition C6=AofA,isitvalidthatCcontrols something. Inother words, control ofsuch acoalition isalwaysafeature ofaspecific model, inparticular ,aspecific allocation. According tothesecond item, thegrand coalition Aderivably,orinallmodels, controls exactly those formulas'with theproperty thattheir equivalent formW 2(W(')^)issuch thatforeveryallocation description,theformulaW(')isacontingenc y,i.e.,notatautology andneither acontradiction. If'isapropositional formula, wehave'$W(')anditiseasy to seethat'being acontingenc yissufficient andnecessary tohavej=Kcontr ols(A;').Ontheother hand, if'isforinstance contr ols(i;p),then(')=if!contr ols(i;p)andW(')equals >.And indeed,6j=Kcontr ols(A;').Contrast thiswith'=p^contr ols(i;p).Iffollowseasily from thetruth definition oftheKripk esemantics defined inSection 2.6,andfrom Theorem 2and Theorem 3. Corollary 2LetMbeaKripk emodel, Canycoalition CAwith C6=A,andlet'beranging overDCL-PCformulas. Then itfollows that: 1.Forno',dowehavej=Kcontr ols(C;'),and 2.j=Kcontr ols(A;')ifftheformulaW 2(W(')^),towhich'isequivalent accor ding toTheor em2,issuchthatfornoallocation description,(')=or(')=;. Proof: 1.Inorder forcontr ols(C;')tobevalidunder theKripk esemantics, ithastobetrueatallworlds inallKripk emodels. TakeanyKripk emodel M=h;Ri2A;iforwhich theallocation =hP1;P2;:::;Pniissuch thatPj=;,forallj2C.Wethen haveM;j=KC' iffforsome world02with0=(modPC),itholds thatM;0j=K'.But, since PC=;,theonly such0isitself, sothatwecannot haveM;j=KC'^C:'.Hence, M;6j=Kcontr ols(C;'). 2.First weprovetheleft-to-right direction bycontraposition. Let'beequivalent tothefor- mulaW 2(W(')^),which wehavebyTheorem 2.Suppose, forsome allocation description,that(')=.This means, foreveryKripk emodel M=h;Ri2A;i andforeveryvaluation,thatM;j=K'.Consequently ,M;j=KA'.However,the agents inAcanonly change,butnotthecurrent allocation.Since (')=,Acan- notchoose avaluation thatfalsifies',i.e.,M;j=K:A:'.Similarly ,if(')=;,we 465 VANDERHOEK,WALTHER,&WOOLDRIDGE haveM;j=KC:',foreach.But, given,Acannot choose avaluation satisfying'on thecurrent allocation described by,i.e.,M;j=K:A'.Hence, ineither case wehave 6j=Kcontr ols(A;'). Consider theother direction from right toleft.Suppose'isequivalent toW 2(W(')^) while, forno,theset(')iseither or;.LetM=h;Ri2A;ibeaKripk emodel. Remember thatistheallocation description corresponding totheallocation.Bythefact that6=;,there isavaluation description2such thatthecorresponding valuation satisfies'at(M;).Butthen, Acanchooseinorder tosatisfy',andthus wehave M;j=KA',forany.Similarly ,wehaveM;j=KA:'which followsbythefactthat 6=.Hence, wehavej=Kcontr ols(A;'). QED One may askforamore local characterization ofwhat acoalition controls: forwhich Kripk e models MandvaluationsdowehaveM;j=Kcontr ols(C;')?Forthisnotion ofcontrol, the answer canbeimmediately read offfrom Theorem 6,tobegivenshortly .That theorem isabout amore general notion: torecoveracharacterization result forthecurrent notion contr ols(i;'),we would only need theitems (1b) and(2b) ofTheorem 6. Thenotion ofcontrol discussed sofaristhattakenfromCL-PC:wehavelifted thecharacteriza- tionresults toourricher language. However,asisclear from ourdiscussion earlier inthissection, amore appropriate notion ofcontrol ofanindividual iinourlanguage might beobtained using the programgive i,wheregiveiisdefined in(20). Note thati'!hgive iii'isvalid, andhence thathgive iiseems amore general waytoreason about i’scontrol: itisabout what icanachie ve both bytoggling itspropositional variables anddelegating some ofthem .One caneasily discuss thisatthecoalitional level,bylifting Definition (20) tothecase ofcoalitions Casitwassuggested in(21) withgiveC;D.However,westick totheindividual case here forsimplicity .Letustherefore define CONTR OLS(i;')^= hgive iii'^hgive iii:' (23) This definition says thatagent icontrols aformula'iffthere isawaytodistrib uteherpropo- sitional variables overtheagents such thatafter imakesappropriate choices forherremaining vari- ables,'holds, butthere isalso awayofdistrib uting hervariables thatenables hertoenforce:'. From thevalidity ofi'!hgive iii',weinfer thatcontr ols(i;')implies CONTR OLS(i;'). Notice thattheimplication theother wayaround isnotvalidsince contr ols(i;)canneverbetrue forcontrol ofother agents overvariables. Forexample, CONTR OLS(i;contr ols(j;p))holds iff p2Pi.From this, weknowthat contr ols(i;p)!CONTR OLS(i;contr ols(j;p))isatheorem, which basically says thatwhen having control overavariable, youcanfreely choose tokeepit orpass iton. However,contr ols(i;p)!contr ols(i;contr ols(j;p))isnotvalid, weevenhave contr ols(i;p)!:contr ols(i;contr ols(j;p)):once agent iowns p,shecannot choose tokeepp ortopass itonbyonly toggling herpropositional variables. Before westate ourcharacterization result, weintroduce some more notation. Foranytwo Kripk emodels M=h;Ri2A;iandM0=h;Ri2A;0iinK(A;P)andforanyagent i,wesay thatMiM0iftheallocations=hP1;:::;Pi;:::;Pniand0=hP0 1;:::;P0 i;:::;P0 niaresuch thatP0 iPiand, forallj6=i,PjP0 j.That is,M0isobtained fromMbyexecutinggive i.In such acase, wealsosayi0. 466 REASONINGABOUTTHETRANSFEROFCONTROL                                                                                          give i1(') 2(') 3(')2(')6=; 3(')6=;1 2 3 Figure 9:Illustration of'W 2W(')^ . Foreach valuation description2,letbethevaluation thatisdescribed by,and, for each allocation description2,letbetheallocation described by,andletP ibethesetof propositional variables controlled byagent iin. Theor em6Let'beaDCL-PCformula with'W 2W(')^ ,asgiven byTheor em2. LetM=h;Ri2A;ibeaKripk emodel ofK(A;P),andaworld inM.Then, foreachagent i2A, M;j=KCONTR OLS(i;') iffthefollowing twoconditions aresatisfied: 1.Thereisa2anda2(')suchthat (a)i,and (b)=(modP i). 467 VANDERHOEK,WALTHER,&WOOLDRIDGE 2.Thereisa2anda2n(')suchthat (a)i,and (b)=(modP i). Wefirstdemonstrate therequirements (1)and(2)ofTheorem (6).Suppose thatsatisfies (p^:q^ r),Agent i=1ownspinM,Agent 2ownsqandr,andAgent 3hasnopropositional variables in M.First ofall,toseewhyItem (1b) isneeded, wehavetoguarantee thatforsomeM0;0,itholds thatM0;0j=K1'.That means that, evenafter 1hasgivenawaysome ofheratoms (resulting in some allocation0),shestillshould beable tomake'true. This ispossible for'=(:p^:q^r): Agent 1could simply stay within thecurrent allocationandjustmakepfalse. However,thisis notpossible for'=(:p^:q^r^contr ols(3;p))since, once1hasdelegated control ofpto 3,agent 1cannot makepfalseanymore. Moreo ver,anagent icanonly giveatoms away,soany model with allocationthatmakesitpossible forhertosatisfy'should beoneforwhichi, which explains Item (1a).Item (2a)hasexactly thesame motivation, andrequirement (2b) iseasily understood tobesimilar to(1b), once onerealizes thatthenormal form of:'canbeexpressed in terms ofthenormal form of'asfollows: :'_ 2_ (n('))^ : Forasimple illustrating example, suppose there areonly twoallocations1and2,and'isequiv- alent to ((p^q)_(p^:q))^1 _((:p^q)_(p^:q))^2: Note that'’snormal form describes thevaluations on1and2where'issatisfied. Thenormal form of:'iscomplementary totheoneof'inthesense thatitdescribes thevaluations on1and 2where'isfalsified: ((:p^:q)_(:p^q))^1 _((:p^:q)_(p^q))^2: Proof: Weillustrate ourproof with apictorial story thatshowswhytherequirements 1and2of thetheorem areboth sufficient andnecessary .Giventhat'isequivalent toW 2(W(')^) byTheorem 2,semantically thismeans that'corresponds toacollection ofthe‘shaded areas’, asdepicted inFigure 9.Now,forCONTR OLS(i;')tobetrue ataworldinaKripk emodel M=h;Ri2A;i,Agent ihastobeable tomoveinside such ashaded area, andtomoveoutside itaswell. Butmoving inside ashaded area, means being able tofirstgotoamodel with allocation ,andthen toaworldwithin thatmodel such thatthevaluation descriptionisin.Notice thaticanmovetoallocationonly bydelegating control overhervariables toother agents (hence therequirement i),andicanmovetovaluationonly bytoggling herremaining variables inP iat(hence thecondition=(modP i)).This showsthatCondition 1isequivalent to M;j=Khgive iii'.Accordingly ,Condition 2corresponds tobeing able tomoveoutside ofa shaded area inFigure 9.Semantically ,thismeans being able tofirstgotoamodel with allocation 0,andthen toaworld0within thatmodel such thatthevaluation description0isnotin0. Consequently ,Condition 2isequivalent toM;j=Khgive iii:',which finishes theproof. QED 468 REASONINGABOUTTHETRANSFEROFCONTROL 6.Possible Extensions andRefinements Inthissection, weconsider some possible extensions andrefinements totheframe workwehave presented inthispaper .Wedonotclaim tohavesubstantial results relating tothese extensions –the aimissimply toindicate some possible directions forfuture research. 6.1Separating First- andSecond-Order Contr ol InDCL-PCaspresented here, anagent canassign avaluetoavariable (exercising first-order control) iffitcan‘givethisvariable away’(exercising second-order control). That is,foranypairofagents i;jandpropositional variable p,wehavethefollowing. j=dcontr ols(i;p)$hi;pji> (24) Amoment’ sreflection should confirm thatthisisnotalwayshowthings workinhuman soci- eties. Wemight empo weranindividual tomakeachoice onourbehalf, butwemight notbehapp y with theidea thatthisindividual could inturn transfer thispowertosomebody else. Sometimes, it might beacceptable; butcertainly notinallcases. Wecanstraightforw ardly distinguish between these situations byextending ourmodels and modifying thesemantics ofourlanguage asfollows.AmodelMisnowdefined tobeastructure: M=hA;P;;0;i where thecomponents A;P;0,andareasoriginally defined, and=h1;:::;niisatuple of subsets ofP,with elements indexedbyagents A,such that1;:::;nforms apartition ofP. Now,theintended interpretation ofsuch models isasfollows: thepartition0defines who (initially) hastheability toassign values towhich variables (i.e., who hasfirst-order control ofvariables); while thepartitiondefines who cantransfer control ofwhich variables (i.e., who hassecond-order control ofvariables). Syntactically ,thelogic thatwedefine toreason about such structures isidentical toDCL-PC, howeverthesemantics aredifferent. Infact,theonly element ofthesemantics thatweneed to change relates tothedefinition oftheaccessibility relation foratomic transfer programs. LetM=hA;P;;0;iandM0=hA;P;;0 0;ibetwomodels with0=hP1;:::;Pniand 0 0=hP0 1;:::;P0 ni.Then: (M;M0)2Ri;pj iff 1.p2i(agent ihassecond-order control ofptobeginwith) 2.forallk2A,ifp2Pkthen: (a)ifk=jthen8l2A,Pl=P0 l. (b)ifk6=jthen: 469 VANDERHOEK,WALTHER,&WOOLDRIDGE P0 j=Pj[fpg, P0 k=Pknfpg, 8l2Anfj;kg,Pl=P0 l. Withthissetup, first-order control isdynamic, andcanbechanged bytransfer programs, while second-order control asdefined inisstatic. Moreo ver,thefactthatanagent hasfirst-order control ofavariable does notmean ithassecond-order control: wenolonger havetheequivalence (24). 6.2Hierar chies andNetw orks ofContr ol Ofcourse, there isnoreason whyoneshould stop atsecond -order control. One could extend the setup with afinite hierarchyofcontrol levels,with each levelu>1defining who cantransfer control ofvariables atlevelu1,andlevelu=1defining who canexercise first-order control. We then need toextend atomic programs toindicate which levelofcontrol isbeing transferred. Atomic programs then taketheform: i;u pj tomean thatagent itransfers levelucontrol toagent j.Thesemantics ofthelanguage become yet more involved,butarestraightforw ardtodefine. Some what related ideas were studied byBoella andvanderTorre(2008). Another direction istoconsider multiple agents having ‘write access’ topropositional variables. Forexample, wemight consider an‘authority relation’PAA,with theintended interpretation that(i;j)2Pmeans thateverything thatiisempo wered toeverything thatjisempo wered todo. Propositional variables arethen allocated tosink nodes inP(i.e., agents with nooutgoing edges inP).One might then ask, forexample, whether structural properties ofthegraphPcharacterise formulae oftheobject language. 7.Related Work Although other researchers havebeguntodevelop formal systems forreasoning about delegation andthetransfer ofcontrol (e.g., Li,Grosof, &Feigenbaum, 2003), tothebest ofourknowledge DCL-PCisthefirstsuch system tohavearigorous semantics, andacomplete axiomatization. Also, theemphasis ofLietal.(2003) isondecentralized ‘trust management’, inwhich roles likethatofa requester ,credentials andanauthorizer aredistinguished. Intheworkpresented here, theemphasis ismore onwhat coalitions canachie ve,iftheyareallowed tohand overcontrol overpropositional variables. Norman andReed (2002) consider alogic ofdelegation, particularly focussing ongroup delega- tion. Thelogic underpinning thisworkisa‘STIT’(sees toitthat) logic, inwhich themain operator isoftheformSiA,meaning ‘agent isees toitthatA’.This extends todelegation byconsidering expressions oftheformSiSj(isees toitthatjsees toitthat...’).Forexample, anaxiom inthe resulting system is: SiSjA!SiA: TheworkofNorman andReed represents aserious attempt todevelop aphilosophically robustlogic fordelegation, appropriate foruseincomputational systems. However,their notion ofdelegation is verydifferent toours, (crudely ,agents delegateresponsibility ,rather than transfer control), andthe 470 REASONINGABOUTTHETRANSFEROFCONTROL dynamic logic flavourofDCL-PCisabsent. Finally ,relati velyfewtechnical results relating tothe logic arepresented. Jones andSergot(1996) consider theproblem ofreasoning about powerthatanindividual ob- tains byvirtue ofanorganisational role. There, thenotion ofactions thatarecarried outinorder toempo weragents with certain capabilities iscentral, andJones andSergotalso consider thein- terplay ofsuch actions andability .However,thelogical formalisation israther different –againa STIT-likelanguage isused, rather than ourdynamic logic frame work, andrelati velyfewtechnical results relating totheframe workarepresented. However,thesetting ofJones andSergot(1996) is much more general than ours: wefocus only onpropositional control. Insome what related work, Boella andvanderTorre (2006) present aformalisation ofpowerdelegation inthesetting ofnor- mative multi-a gentsystems .Theyconsider ,forexample, theissue ofhowdelegated goals interact with other goals. Theframe workprovides arichandcompelling setting forinvestig ating questions relating todelegation. However,nooverarching object language isdeveloped forrepresenting this frame work, andrelati velyfewtechnical results arepresented relating totheframe work. Itwould beinteresting toconsider whether thedynamic logic approach developed within thepresent paper might beadapted totheframe workofBoella andvanderTorre. Withrespect tologics forreasoning about controlled variables, Boutilier (1994) presents alogic intended tocapture notions such as‘Icanachie vemyplan using actions thatonly relate tovariables under mycontrol’. Inspirit, thislogic isveryclose tothekind ofsituation weareaiming tomodel, although thetechnical details ofBoutilier’ slogic (the waycontrol iscaptured inthelogic) are different. Moreo ver,Boutilier’ slogic does notconsider multi-agent aspects, orthedynamics of control aswedointhepresent paper . Werefer thereader totheworkofvanderHoek andWooldridge (2005b) foranextensi vedis- cussion andmanyreferences forlogics ofability .Gerbrandy (2006) generalises theresults ofvan derHoek andWooldridge, byconsidering situations inwhich anagent hasonly partial control of avariable, orwhere itshares control with others. Gerbrandy also showshowlogics ofproposi- tional control arerelated tocylindrical modal logic(Venema, 1995). Specifically ,thegeneralisation ofCL-PCconsidered byGerbrandy canbeunderstood asacylindrical modal logic, immediately yielding acomplete axiomatization anddecidability/undecidability results forvarious fragments of thesystem. Asome what related formalism isdiscussed byvanBenthem, Girard, andRoy(2009). This formalism isintended toenable reasoning about ceteris parib uspreferences (inthesense of ‘allother things being equal’). VanBenthem etal.develop alogic with amodalityhi',where isasetofpropositional formulae; theintended interpretation ofhi'inastate uisthatthere isa state vagreeing with uonthevaluation offormulae inwhich'istrue. There seems quite aclose connection between DCL-PCandtheformalism ofvanBenthem etal.,although weleavethedetails forfuture work. Ourframe workexplains control interms ofwhat theagents canchange orwhcih atoms theycan choose tobetrueorfalse. Sauro (2006) addresses thequestion ofhow agents canchange theworld, where control ofcoalitions isdefined interms ofactions intheagents’ repertoire. Finally ,note that asdiscussed byvanderHoek andWooldridge (2005b), thelogicCL-PCisclosely related tothe well-kno wnformalism ofquantified Boolean formulae, anditisnothard toseethatthere isalso aclose relationship between DCL-PCandquantified Boolean formulae. However,while wemay notultimately haveanygainisformal expressi vepowerwhen usingDCL-PCrather than quantified Boolean formulae, wedobenefit with respect tothenaturalness ofexpression inDCL-PC.Quantified Boolean formulae havenoexplicit notion ofagenc yorthedynamics ofcontrol, andrepresenting 471 VANDERHOEK,WALTHER,&WOOLDRIDGE these aspects within quantified Boolean formulae leads toformulae thatareunintuiti veandhard to understand. 8.Conclusions Inthispaper ,wehavebuiltupon thelogicCL-PCofstrate giccooperati veability ,inwhich thecontrol that agents haveovertheir environment isrepresented byassigning them specific propositional variables, forwhich theagents that‘ownthem’ candetermine their truth value. Weadded adynamic component tothislogic, thus obtaining thelanguage DCL-PCinwhich onecanreason about what agents (and coalitions ofagents) canachie vebysetting their assigned variables, orbygiving the control overthem toothers. Wegavetwodifferent butequivalent semantics forthislanguage – adirect andamore conventional Kripk esemantics –andprovided acomplete axiomatization for them. Thekeyproperty thatestablishes theproof ofcompleteness forDCL-PC’saxiomatic system is thefactthateveryformula inthelanguage isprovably equivalent toanormal form: adisjunction of conjunctions ofliterals overpropositional variables pandassertions oftheform contr ols(i;p).We alsoinvestig ated thecomple xityofthemodel checking andsatisfiability problems forDCL-PC,and showed thatthese problems arenoworse than fortheprogram-free fragment CL-PC:theyareboth PSPACE-complete. Wedemonstrated that, forthespecial case where ability inATLisinterpreted as in(D)CL-PC,thisimplies asimpler satisfiability problem forATL. There areseveralavenues forfurther development ofthiswork. First ofall,itisinteresting to addtheassignments thattheagents canperform tothetransfer actions theycanperform, sothatthe twodimensions ofwhat agents canachie vebecome projected inonedimension. Although parallel execution isnotaprogram construct inourlanguage, andhence onecould stillnotmodel situations where anagent chooses some values foritsatoms, andatthesame time transfers control ofsome other atoms, onecould atleast reason about theeffectofprograms thatdoacombination oftruth assignments andtransfer ofcontrol insequence ,orinachoice .Secondly ,inmanyrealistic sys- tems, Property (22) may betoogeneral: often, wewanttospecify thattheoverall system satisfies some constraints. Forthis, itseems appropriate, however,notonly toreason about what agents canachie ve,butalso about what theyshould guarantee. Theframe workofSocial Laws (Moses & Tennenholtz, 1995; vanderHoek, Roberts, &Wooldridge, 2005) could besettoworkinorder to express thatunder certain conditions, anagent willnotsetacertain propositional variable to‘true’, orthatshewillnotpass oncontrol overacertain variable toaspecific agent, orthattheoverall sys- tembehavesinsuch awaythateveryagent getsafairchance totrigger aspecific variable (i.e., usea specific resource) infinitely often. Another interesting direction would betoconsider howtoallow forthefactthatagents outside atransfer program might change their variables while theprogram is executing. This might require some consideration ofthesemantics ofparallel action. Relatedly ,it would beinteresting tomakeitpossible tocapture thetemporal properties ofthesystem, outside of transfer programs. Here, some combination oftemporal anddynamic logic might beappropriate. Similarly ,wecould weak entheallocation axiom toallowforsome propositional notbeing under control ofanyagents, capturing theidea thatnotallfactsaremodifiable (bytheagents under consideration). Another extension would betoassign control overatoms tocoalitions ,rather than individual agents.This could cater for‘power’ insocial conte xts,with thetypical example being thatanycoalition bigger than athreshold ncanliftapiano. Finally ,animplementation ofatheorem proverforthelogic would ofcourse beinteresting. Finally ,animplementation ofatheorem prover forthelogic would ofcourse beinteresting. 472 REASONINGABOUTTHETRANSFEROFCONTROL Ackno wledgments The authors wish tothank theJAIR reviewers andtheeditors fortheir use- fulcomments. Michael Wooldridge andDirk Walther were supported bytheEPSRC under grant GR/S62727/01. Appendix A.Proofs Theor em1. 1.Theschemes inFigure 6arederivable inDCL-PC. 2.The axioms K(i),T(i),B(i),andeffect(i)havecoalitional counterparts K(C),T(C),B(C), andeffect(C)thatareallderivable foranycoalition C. 3.`contr ols(C;p)$W i2Ccontr ols(i;p). 4.Theproperty persistence 1(contr ol)isalso derivable when wereplace agent ibyanarbitrary coalition C. Proof: 1.Wedescribe howtheeight schemes inFigure 6canbederivedintheaxiomatic system of DCL-PC. Forat-least(contr ol),thisfollowsdirectly from axiom effect(i),with taking =>. Forat-most(contr ol),from`(p)weget,using axiom T(i)andcontraposition,i`(p). Assuming moreo veri:`(p),with axiom contr ol,givescontr ols(i;p).From allocation wethen obtain:contr ols(j;p)foranyagent j6=i,and, using contr ol(j),weget:jp_ :j:p,i.e.,:j`(p)_:j:`(p).Since T(j)givesusj`(p),weobtain:j:`(p),i.e., 2j`(p). Toprovenon-effect(i),assumei`(p)^:contr ols(i;p).Then axiom contr ol(i)yields :i:`(p),which isequivalent to2i`(p). Forpersistence (non-contr ol),theright-to-left direction followsimmediately from T(j). Fortheleft-to-right direction, assume that:contr ols(i;p).From allocation wederive that contr ols(1;p)55contr ols(i1;p)5contr ols(i+1;p)55contr ols(n;p); andfrom this, bypersistence 1(contr ol)wegetW k6=i2jcontr ols(k;p).Foreveryk6=i, wehavecontr ols(k;p)!:contr ols(i;p),which followsfrom allocation .Hence, using Necessitation ,wehave2j(contr ols(k;p)!:contr ols(i;p)).From Axiom K(j),itnow follows2jcontr ols(k;p)!2j:contr ols(i;p).Combining thiswithW k6=i2jcontr ols(k;p), weobtain thedesired conclusion 2j:contr ols(i;p). Notice thatatomic permanence (;)does notplace anyrequirement onp;iorj.Also, for theprogram i;pj,under thecondition thathi;pji>,byfunc andpreconditions (transfer ), hi;pji'and[i;pj]'areequivalent. Formally: hi;pji>!(hi;pji'$[i;pj]') (25) 473 VANDERHOEK,WALTHER,&WOOLDRIDGE Now,weproveobjective permanence (;)byinduction on'.Theinduction base, where 'isapropositional variable, followsfrom atomic permanence (;).Consider theinduc- tionstep. Suppose thetheorem isprovenfor andtake'=: .Assumehi;pji>.It istoshowthat: $[i;pj]: which isequivalent toshowing that $hi;pji . This followsfrom (25) andtheinduction hypothesis. Asafinal step intheinduction, suppose objective permanence (;)isprovenfor 1and 2.This means wecanassume that hi;pji>! ( 1$[i;pj] 1)^( 2$[i;pj] 2) (26) Nowtake'= 1_ 2.Obviously ,ifwehave 1_ 2,wehave[i;pj]( 1_ 2), which proveshi;pji>!('![i;pj]').Fortheother direction, suppose, given hi;pji>,that[i;pj]( 1_ 2).Weuse(25) toconcludehi;pji( 1_ 2),and byclassical modal reasoning wethen obtainhi;pji 1_hi;pji 2.By(25) andthe induction hypothesis weget( 1_ 2),which concludes ourproof. Using K()andNecessitation ,wecanderivethefact:hi>![]'.Using thisfact,it ispossible toshowwith propositional reasoning thatobjective permanence isequivalent to []'$(hi>!') (27) Theproof isbyinduction onthestructure ofthetransfer program.Thefirstcase of theinduction base, whereisanatomic program, holds byatomic permanence (;). Forthesecond case, supposeisatest ?.Notice that, bytheaxiom test(),h ?i' isequivalent to ^',andthush ?i>isequivalent to .Butthen, (27) isequivalent test(). Consider theinduction stepwhere=1;2.Theinduction hypothesis tellsushii>! ([i]'$'),forallobjecti ve'andi=1;2.Assumeh1;2i>;thisimpliesh1ih2i> andh1i>bycomp(),and, bytheinduction hypothesis for1,([1]'$').Forany diamond operator ,andhence also forh1i,wehavethatifanimplication ! 0is derivable, wecanalso deriveh1i !h1i 0using Necessitation andK().Apply- ingthistotheinduction hypothesis for2,i.e.,toh2i>!([2]'$'),weobtain h1ih2i>!h1i([2]'$'),and, with Modus Ponens, wearriveath1i([2]'$ ').Weliketodemonstrate that[1;2]'$'.Bycomp(),thisisequivalent to [1][2]'$'.Forthedirection from lefttoright, assume [1][2]'.Itisagainamodal principle toconcludeh1i 0fromh1i( $ 0)and[1] .Taking =[2]'and 0=',weobtainh1i'.Toshowthat'holds, suppose that:'.This isstillanob- jectiveformula, sowecanapply theinduction hypothesis toconclude [1]:'.This, of course, contradictsh1i',sothatindeed weconclude'.Conversely ,suppose'.Then, bytheinduction hypothesis for1,wealsohave[1]'.Theinduction hypothesis for2, h2i!([2]'$')implies'![2]',andwecanapply necessitation andK(1)to thistoderive[1]'![1][2]'. Now,consider=1[2,andobjective permanence provenfor1and2.Byaxiom union(),wehaveh1[2i>$(h1i>^h2i>).Butthen, givenh1[2i>,we have:[1[2]'$([1]'^[2]'),andtheinduction hypothesis explains whythe right-hand sideofthisequivalence isequivalent to'. 474 REASONINGABOUTTHETRANSFEROFCONTROL Finally ,consider= 1.Bytheaxiom mix(),weimmediately haveh 1i>! ([ 1]'!').Fortheother direction, recall thatfrom theinduction hypothesis we canderive,asavalidity ,'![1]'.Using Necessitation for[ 1]then gives[ 1]('! [1]').Butthen, using theassumption'andtheaxiom ind()givesus[ 1]',hence we alsohaveh 1i>!('![ 1]'). Foraxiom inverse,werely onthenormal form obtained inTheorem 2(the proof of which does notinvolveinverse).Soweknowthatevery'isequivalent toadisjunction offormulas oftheform'1^'2,where'1isanobjecti veformula, and'2aconjunc- tionofformulas oftheform contr ols(h;q).Wenowshowthatboth'1and'2satisfy inverse,from which theresult followsforarbitrary'.Soassume contr ols(i;p).By precondition (transfer )thisentailshi;pji>,andhence wecanapply objectivepermanence (; )andfunc twice toconclude'1$[i;pj;j;pi]'1. Nowweconsider formulas'2,starting bytheir base case contr ols(h;q).Assume contr ols(i;p):wenowfirst showthelefttoright direction. Ifp6=q,wegetfrom persistence 2(contr ol)that[i;pj;j;pi]q.Ifp=qweconsider three subcases: (1) h=i.Wethen derive,from contr ols(i;p),using transfer ,thathi;pjicontr ols(j;p)and contr ols(j;p)!hj;piicontr ols(i;p):with func andcomp(),thisgivescontr ols(i;p)! [i;pj;j;pi]contr ols(i;p).(2)h6=i;h=j.Givencontr ols(i;p),wethen have[i;p j]! forany ,andwearedone. (3)h6=i;h6=j.Wecanusepersistence 2(contr ol) twice togetcontr ols(h;q)![i;pj;j;pi]contr ols(h;q).Finally ,givencontr ols(i;p), wederivetheright toleftdirection, thatis,wederive[i;pj;j;pi]contr ols(h;q)! contr ols(h;q).First assume p6=qandsuppose wewould have:contr ols(h;q).Then, byallocation, wehavethat forsome agent k6=h,wehavecontr ols(k;q),andby persistence 2(contr ol)weget[i;pj;j;pi]contr ols(k;q),which clearly contradicts [i;pj;j;pi]contr ols(h;q).Nowsuppose p=q.Again,wehavethree subcases. (1) Ifh=i,theconclusion followsfrom theoverall assumption contr ols(i;p).(2)Suppose h6=i;h6=j.Thesame reasoning applies asincase (1).Finally ,(3)suppose h6=i;h=j. Since contr ols(i;p)isgiven,wehavehi;pjicontr ols(j;p)(bytransfer ),andhence hi;pj;j;piicontr ols(i;p).Now,ifwewould have[i;pj;j;pi]contr ols(j;p), with i6=j,thisleads toacontradiction (use allocation andthefactthath6=i;h=j),so indeed wederive[i;pj;j;pi]contr ols(j;p)!contr ols(j;p). Forreverse,similar toinverse. 2.This wasprovedbyvanderHoek andWooldridge (2005b). 3.The definition ofcontr ols(C;p)isCp^C:p.LetC=fa1;a2;:::;aCg.Byaxiom contr ol(i),wehaveW i2Ccontr ols(i;p)!W i2C(ip^i:p).Bythecontrapositi veof T(i),wehave'!i'.Wecanapply thisrepeatedly forallagents inC,giving'! a1a2aC'.This is,according toComp -[,thesame as'!C'.(Note thatwehave nowproventhecontrapositi veofT(C).)This givesusW i2Ccontr ols(i;p)!W i2C(Cip^ Ci:p).Using Comp -[again,weseethattheconsequent ofthisimplication isequivalent toCp^C:p.Fortheother direction, wefirstshowatmost(contr ol)ofFigure 6.From `(p)weget,using axiom T(i)andcontraposition,i`(p).Assuming moreo veri:`(p),with axiom contr ol(i),givescontr ols(i;p).From allocation wethen obtain:contr ols(j;p),forany 475 VANDERHOEK,WALTHER,&WOOLDRIDGE agent j6=i.Using contr ol(j),weget:jp_:j:p,i.e.,:j`(p)_:j:`(p).Since T(j) givesusj`(p),weobtain:j:`(p),i.e.,2j`(p). Nowsuppose:W i2Ccontr ols(i;p).Byallocation ,wehaveW x2AnCcontr ols(x;p).That means thatforonesuch x,wehavexp^x:p.Nowwedoacase distinction based on p_:p.Inthefirst case, weassume p,andderive,foralli2C,that2ip,andthus2Cp. Hence:C:p,from which weget:contr ols(C;p).Inthecase of:p,wesimilarly have,for alli2C,that2i:p,which gives:C::p,andagain:contr ols(C;p).Allinall,nomatter whether por:p,weget:contr ols(C;p). 4.This iseasy: theprevious item showed thatcontr ols(C;p)means thatW i2Ccontr ols(i;p). Applying persistence 1(contr ol),wegetW i2C2jcontr ols(i;p).Butsince contr ols(i;p)! contr ols(C;p)ifi2C,wealsohave,foranyi2C,that2jcontr ols(i;p)!2jcontr ols(C;p) (use Necessitation andK(j)).This proves2jcontr ols(C;p). QED Refer ences Alur,R.,Henzinger ,T.A.,&Kupferman, O.(2002). Alternating-time temporal logic. Journal of theACM,49(5),672–713. Blackb urn, P.,deRijke,M.,&Venema, Y.(2001). Modal Logic.Cambridge University Press: Cambridge, England. Boella, G.,&vanderTorre, L.(2006). Delegation ofpower innormati vemultiagent systems. InDeontic LogicandArtificial Normative Systems, 8thInternational Workshop onDeontic LogicinComputer Science ,DEON 2006 ,Utrecht, TheNetherlands. Boella, G.,&vanderTorre, L.(2008). Institutions with ahierarch yofauthorities indistrib uted dynamic environments. Artificial Intellig ence andLaw,16(1),53–71. Boutilier ,C.(1994). Towardalogic forqualitati vedecision theory .InProceedings ofKnowledg e Repr esentation andReasoning (KR&R-94) ,pp.75–86. Chellas, B.(1980). Modal Logic:AnIntroduction .Cambridge University Press: Cambridge, Eng- land. French, T.(2006). Bisimulation Quantifier sforModal Logic.Ph.D. thesis, The University of Western Australia, Perth, Australia. Gerbrandy ,J.(2006). Logics ofpropositional control. InProceedings oftheFifthInternational JointConfer ence onAutonomous Agents andMultia gentSystems (AAMAS-2006) ,pp.193– 200, Hakodate, Japan. Ghilardi, S.,&Zawadowski, M.(2000). From bisimulation quantifiers toclassifying toposes. In Wolter,F.,Wansing, H.,deRijke,M.,&Zakharyasche v,M.(Eds.), Advances inModal Logic, pp.193–220. Gorank o,V.,&Jamrog a,W.(2004). Comparing semantics oflogics formulti-agent systems. Syn- these ,139(2),241–280. Insection Knowledg e,Rationality andAction . Harel, D.,Kozen, D.,&Tiuryn, J.(2000). Dynamic Logic.TheMIT Press: Cambridge, MA. 476 REASONINGABOUTTHETRANSFEROFCONTROL Jamrog a,W.,&vanderHoek, W.(2004). Agents thatknowhowtoplay.Fundamenta Informaticae , 63(2-3), 185–219. Jones, A.J.I.,&Sergot,M.(1996). Aformal characterisation ofinstitutionalised power.Logic Journal oftheIGPL ,3,427443. Li,N.,Grosof, B.N.,&Feigenbaum, J.(2003). Delegation logic: Alogic-based approach todis- tributed authorization. ACMTransactions onInformation andSystem Security ,6(1),128– 171. Moses, Y.,&Tennenholtz, M.(1995). Artificial social systems. Computer sandAI,14(6),533–562. Norman, T.J.,&Reed, C.(2002). Group delegation andresponsibility .InProceedings oftheFirst International Joint Confer ence onAutonomous Agents and Multia gentSystems (AAMAS- 2002) ,pp.491–498, Bologna, Italy. Pauly,M.(2001). LogicforSocial Softwar e.Ph.D. thesis, University ofAmsterdam. ILLC Disser - tation Series 2001-10. Sauro, L.(2006). Formalizing Admissibility Criteria inCoalition Formation among Goal Directed Agents.Ph.D. thesis, University ofTurin, Turin, Italy. Savitch, W.J.(1970). Relationships between nondeterministic anddeterministic tape comple xities. Journal ofComputer andSystems Sciences ,4(2),177–192. vanBenthem, J.,Girard, P.,&Roy,O.(2009). Everything else being equal: Amodal logic for ceteris parib uspreferences. Journal ofPhilosophical Logic,38,83125. vanderHoek, W.,Roberts, M.,&Wooldridge, M.(2005). Knowledge andsocial laws.InDignum, F.,Dignum, V.,Koenig, S.,Kraus, S.,Singh, M.,&Wooldridge, M.(Eds.), Proceedings of theFourth International JointConfer ence onAutonomous Agents andMulti-Ag entSystems (AAMAS 05),pp.674–681, NewYork,USA. ACMInc. vanderHoek, W.,&Wooldridge, M.(2003). Time, knowledge, andcooperation: Alternating-time temporal epistemic logic anditsapplications. Studia Logica,75(1),125–157. vanderHoek, W.,&Wooldridge, M.(2005a). Onthedynamics ofdelegation, cooperation, and control: Alogical account. InDignum, F.,Dignum, V.,Koenig, S.,Kraus, S.,Singh, M., &Wooldridge, M.(Eds.), Proceedings oftheFourth International JointConfer ence onAu- tonomous AgentsandMulti-Ag entSystems (AAMAS 05),pp.701–708, NewYork,USA. ACM Inc. vanderHoek, W.,&Wooldridge, M.(2005b). Onthelogic ofcooperation andpropositional control. Artificial Intellig ence,64,81–119. Venema, Y.(1995). Cylindric modal logic. Journal ofSymbolic Logic,60,591623. 477
98a66b56-03c4-427a-9847-e37d2e32aadc
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Stuart Russell – AI: The Story So Far – CSRBAI 2016 good morning everybody and welcome to the colloquium series so I'm really excited for today's lineup of distinguished speakers starting with professor Stuart Russell the professor of computer science and the Smith is a professor in engineering here at the University of California Berkeley so it would take too long for me to list his qualifications and awards and contributions so I will include the book that he co-authored with Pierre Norvik artificial intelligence and modern approach which is now being used by by universities and the quadruple digits in countries and the triple digits across the world and and also sort has been a powerful influence on the field of artificial intelligence starting to take seriously the positive and negative effects of future advances in artificial intelligence on the world and the things that we care about in that respect he's been talking to influential groups major conferences as well as the Davos World Economic Forum recently and so we're very pleased to have him here today to talk to us about the prospects for work rewinding the field on provably beneficial artificial intelligence and and also mary has been pleased to have him as a research adviser helping direct us in and what things are important for us to work on so so I would like you at all please join me in welcoming our very first speaker professor Stuart Russell thank you very much pets so um I made a last-minute decision to switch to a much shorter talk and that will give us hopefully much more time for discussion so I'm gonna just dispense with the usual preliminaries of this where I talk about you know what is AI and what's happening now and look at all this amazing progress and all these milestones and so on and just say look let's take it as a given for the sake of argument that eventually we will exceed human capabilities and some still not very clearly specified since you know partly because we don't really know what human capabilities are but if we think about what it means to make decisions and how to make better decisions it means if you can take into account more information if you have a better model of how the world works and you can compute more extensively on that model and look further and further into the future so think of this as like alphago moved from the NGO board to the whole world then AI systems are gonna make better decisions than humans and I put an asterisk a sone asterisk is something that linguists use to mean this is this is not quite a felicitous expression in the natural language and so what could I possibly mean by putting an asterisk on better well there's a piece missing not just taking into account more information and looking further into the future but what is the objective that's being optimized in making decision that turns out to be a crucial point so so the upside as as Nate mentioned is it's pretty large because pretty much everything we have is the result of our being intelligent and so if we had more intelligence at our disposal to use as tool as tools we could do all kinds of wonderful things and you know the each of these areas is something that that have they've been problematic for the human race forever pretty much and the last one ecological declaration is getting much worse and it seems like well it couldn't hurt to have access to more intelligence to help and you can even imagine very concrete ways where it might be very useful so one of the biggest issues when you look at poverty and disease and war is actually communities it's not that we don't know what to do about these it's actually that we have we have difficulty in in management of collective decision making and implementation processes that a I can clearly help with sort of if you like global distributed government governance at a sort of micro level where lots and lots of people have to do lots and lots of things for this to work well so in the long run we could get away from the constant you know fight with ourselves and fight with necessity in sort of physics and actually choose how we want human life to be so that would possibly be very good or not I mean there's another at least release we have a choice whether we know how to make that choice that's another question but at least it be nice to have a choice and then the downside well everyone knows about killer robots and everyone knows about the end of employment and then this is other thing the end of the human race which seems to be a very popular theme these days yeah but I would say most the discussion about this theme has at least in the media and when I meet people when I go around giving these talks everyone seems to almost everyone seems to got hold of the wrong end of the stick you have many wrong ends of the stick but there is a sort of a general sense and this has been this goes back to certainly to Alan Turing saying that you know I expect at best that they will keep us as pets or something was that effect that if you make something that's much more than you are then you we might sort of find ourselves in the situation of the gorillas so here they are having a meeting and this guy is falling asleep you can tell it's a meeting and they're talking about whether it was a good idea for their ancestors to have created this human race these these human things which are much smarter than they are they're having a really hard time with this this issue and I think they pretty much concluded that it was terrible idea because now they don't have any control over their own futures and they could easily go extinct and it's say if they had ability to conceptualize their own state they'd probably be very sad about it but that's a very inchoate fear and then that gets translated in the media into all kinds of things like oh you know armies of killer robots are gonna spontaneously rise up and and decide that they hate human beings and so on so forth right so you know you know all of the you know Hollywood sometimes gets it almost right and mostly gets it mostly wrong so more specifically right the problem is is this right that they're gonna be incredibly good at making decisions and doing stuff but somehow it isn't the right stuff I mean if they are incredibly good making this sense and it's the right stuff I they really are helping us realize whatever it is we decide we want to realize you know that's that's that would be what we want so it must be because they're not quite doing that they're doing something else they are the objective that they're making decisions on is not the right one and unfortunately AI by lodge and these other areas operations research control C and so on all assume that that specifying objective is actually not part of the problem at all right it's just you know the user who knows what it is and you know and you can control theory it's like a squared error with respect to the reference trajectory Y squared error well because that makes the equations easier but it doesn't have much connection to actually what anything anyone really cares about so so actually there isn't a lot of help right when you say okay we have better we've got to get these objectives right otherwise we're screwed okay what discipline can I turn to the answer is not really there isn't a place to turn and so no but we know pointed this out so this is a very useful paper I don't know if you have a reading list Nate for for the group but there's an there's there's a nice paper I often point journalists to this paper so he wrote it in science I think in 1960 and it was in as a result of looking at Arthur Samuels checker playing program that learned to be better playing checkers then an office under was so it's a very early demonstration refuting the usual claim that all you know machines can only do what we programmed them to do so we don't need to worry right and so he said okay if we use to achieve our purposes in mechanical agency with whose operation we can't interfere we better be quite sure that the purpose is the purpose we really desire and that that's a pretty clear statement of the problem from 56 years ago but arguably that that statement could have been written by King Midas whenever this is some uncertainty about the date all right have you tried to write it down yeah in the paper as well so so this is a writing the story of King Midas is actually both in microcosm and macrocosm a lesson for Humanity right so the whoever it was it was granting King Midas's wish took his objective literally and and then it was too late right once his food and his wine and his daughter all turned to gold he couldn't undo those things and he said damn you know I wish I had said it right and this is often in with these stories in other cultures you know there's a genie in the genie grants you wishes you know this is in going back to the time of King Solomon and in the Jewish culture and in Arab cultures and lots of others as a version of this story where you ask for wishes you get what you want a man you know your last wishes please undo the first two wishes because I got them wrong right and then in the macrocosm right this is actually telling the universe or perhaps what you what we are wishing for right the ability to automate and have sort of super control over everything in its or unlimited powers and they actually be a poisoned chalice for the human race in general not just through the individual so we better be more careful about our macro policy and so Steve Omohundro pointed out some some additional problems are not just that when you have the machine with the wrong objective right in some sense you're you're setting up a chess match or a go match between the human race and the machine that's busy pursuing the objective that's wrong and we know what happens with those matches so but Steve pointed out that it's actually worse than that because if you give a goal to a machine then even if you don't ever mention to the machine that it should preserve its own existence so I mean Asimov didn't need to have the third law saying that machine should preserve avoid harm to themselves because actually unnecessary right they will nonetheless form this as a sub goal because you you can't fetch the coffee if you're dead so you give the machine they're gonna fetching the coffee the Machine figures out based on physics that if it's dead it can't get the coffee so it naturally has a sub goal not to be dead right as a consequence of needing to get the coffee this is a very straightforward point and also you know it can improve for sort of typical goals in the real world you improve your chances of success by having more resources more computational sources more money and so on so all other things being equal you're going to want to acquire more of those so then if you have a machine that has the wrong objective and he's gonna have these things as sub goals then you can clearly see that you're gonna have how like problems so that's the high-level story and it's it's a pretty straightforward story and then there have been a number of arguments about why nonetheless we should pay no attention to this issue yeah so so I thought it'd be helpful to go through some of those and we can discuss in further after the end but you will come across these you probably have come across many of them already so one of the first responses I'm sorry this colors not ideal for for the lighting situation could we maybe we could turn the light yeah we thought they were low enough but in fact it wasn't low enough given they chose the wrong color okay orange okay yep so one all right so orange is these are things that other people say right so one typical response is it's never going to happen right or you know we're not going to achieve human-level AI and so it's pointless to to worry about this or it's it's so far off in the future that it's it's completely ridiculous and you know if I think if it was true that if you went to people back a million years ago you know who figured out how to make fire actually pre humans and told them that this fire stuff was gonna cause global warming and they should stop right I think that was probably like that would be not good advice so if you know if a I was gonna happen you know a million years in the future then yeah probably it's too soon to to even think about what we might do but I wanted you know so I so in response to that I sometimes point to a historical example this is Ernest Rutherford who was the most famous nuclear physicist of his time so not a weird fringe dude but actually the main guy in nuclear physics and here's what he said on September 11 of 1933 essentially that it will never be possible to to get energy out of atoms right they knew that the energy was in there based in they had done the mass defect calculation they knew the equals M c-squared they knew the amount of energy that was there but his considered view which he expressed in many ways in many forms and many times was that it was impossible to ever get it out and even Einstein kind of agreed with this and then that was September 11th he he said this at a meeting of the British Association for the Advancement of science and it was reported in The Times and Leo Szilard read this in The Times the next morning and he got annoyed and so he invented the neutron induced nuclear chain reaction and within a few months he patented early version of the nuclear reactor you know with negative feedback control mechanisms to to damp out the critical reaction soon after that people were patenting nuclear bombs and and so on so forth so it went from never to 16 hours and so it's very hard to predict these things and I think just saying well I'm an expert and it's never going to happen he's not good enough argument and this was what he wrote so after he did it he did a demonstration of a natural fission reaction and he said you know there was little doubt in my mind that the world was headed for grief because at that point they were also in an arms race with Germany and he anticipated that there would be nuclear conflict with Germany ok so a version another version of that is it's too soon to worry about it you know if you if you ask many people when do you think is likely to happen you know I generally try to avoid giving predictions because precisely because it for the nuclear physics example I think it worked quite so it requires breakthroughs but it's very hard to say when those are gonna happen but if you ask people in the field or near the field they'll say you know give you some number that looks like 50 to 75 years some people earlier but not that many people think it's not gonna happen this century right so so if I said that you know in 50 years time a giant asteroid is on course to collide with the earth you know when we saw it's way too far away to even worry about it or even start thinking about the problem you know so come back in 58 years sorry 48 years and then won't like them won't give you some funding to work on it that wouldn't be the kind of response one would expect and arguably for climate change the right time to intervene would have been around 1900 when we already knew the basic physics you know Iranians and others had published papers you know giving quantitative calculations the greenhouse effect and projecting carbon dioxide and you know influential people like Alexander Graham Bell had said you know this is gonna be a major problem we have to do something but it was ignored I don't know exactly I haven't looked at the history of why people didn't pay attention at that time but that would have been a time when you could have intervened before the fossil fuel industry and electoral electrical power production became so important to our entire economy that that it's very hard to change you know so you could have started investing in wind power and solar power improved battery technology and other kinds of things a long time ago but we didn't so my distinguished colleague Andrew Inge has another version of this story right it's it's like worrying about overpopulation on Mars he since changed that to Alpha Centauri to make it seem even more ridiculous or perhaps he thought Mars well that fits it is reasonable to worry about Rovers I don't know having seen the Martian I'm not sure but you know this is it's you know it's a it's an appealing analogy but I think is totally misleading you know another version of this which I saw in a paper recently was you know it's like worrying about black holes suddenly materializing into us a little bit I mean yeah if they did that would be terrible but you know there's no particular reason to think it's going to happen so it's sort of silly to worry about it right and the answer to both is so they're saying well you know if we were spending billions of dollars to move the human race to Mars without thinking about what we would breathe when we got there that would be that would be silly right you know similarly if we were spending billions of dollars to cause black holes to materialize in near Earth orbit then it would be reasonable to ask you know is that a good idea and you have you thought about the consequences how would we would prevent the obvious sequel i and you know so so I don't find and doings argument well no no I me see if you're gonna use the argument that beats this is just like materializing you know worrying about materializing black holes they say no it isn't just like that so yeah so I mean so in other words the onus is on someone who says that to to actually prove that in fact AI is harmless that it isn't a black hole because we are spending billions of dollars to make it happen another another version of this is well if the problem comes with giving objectives like make some paper clips or whatever to the to the AI system then it's better not to have us giving the goals the AI system just let the Machine indent its own objectives which is a little odd right I mean it's sort of like saying you know if you have a problem steering straight then the best thing to do is remove the steering wheel altogether and just leave it up to chance as it were to make the right thing happen this is this is something that you see a lot I be M for example this is a general there's you know view of why we don't have to worry well because we're gonna have these beneficial human AI teaming and so it's not gonna be you know machines independently operating and deciding what to do there's in the human AI teams of work together but you you can't have a human AI team unless the team members all are aligned in what their objectives are so it's just a restatement of the problem I mean yes of course we want beneficial human AI teaming but that is that in fact making the question how do you ensure that the AI passed the team is actually on the team another common responses well okay you're right yeah it's really shoe but there's nothing we can do about it whatsoever because it's well known that you can't control research you know there's no way to put a stopper on human creativity you know and then that usually people will show cute movies of of kids playing you know interacting with robots and exhibitions and look at this you know outpouring of human creativity and there's no way you can do anything about this and and there's you know there's some validity of that but it's not really true right we can and do biologists deliberately said engineering the human genome is not something we want to do and that was a complete switch because an awful lot of work on genetics and an early molecular biology was precisely about the ability to to improve humans and then it was decided oh perhaps that isn't an ideal goal for biology because that opens up a Pandora's box of you know genetically --tz-- and all the rest of the stuff that science fiction has already looked at so they said no and it's been 40 years and it's still hasn't happened although it's the rich been reopened recently with there's this CRISPR technology although the inventors of CRISPR also believe that we shouldn't use it to to engineer better humans another interesting reaction is this is just typical Luddite right you're just attacking AI or attacking technology so in fact Elon Musk and Stephen Hawking and their various other people I guess everyone who signed the open letter on robust and beneficial AI was included as when as of the 2015 Luddite of the Year award from the information technology innovation foundation who who seemed to be vehement ly opposed to any any of these thoughts and I just think this is misdirected it's misunderstanding what we're saying completely right if a fusion researcher says fusion researchers need to be contained in order to be safe right that doesn't make them a Luddite it's just complete misunderstanding of what's going on right they're not attacking physics by saying that we're not attacking I mean we're ridiculous to say that Turing was attacking AI by pointing out this long term issue or that we know was attacking AI or Bill Gates is attacking right right and these these are people who put a lot of their effort into creating AI in the first place so another reaction that you often see even from very distinguished AI researchers is Rome's there isn't really a risk right because if anything we don't like we immediately just switch off the machine and that solves the problem right as if super intelligent entities couldn't possibly think of that that eventualities and wouldn't you know so it's sort of like saying yeah you know if you're if you're losing a game against alphago well they just you just win all right what's the problem you know just win they're easy you know some people say well if we could if we just avoid anthropomorphizing and putting in these goals like self-preservation then of course there won't be a problem Steven Pinker's version of this is we just make female ai's they wouldn't want to take over the world literally he said this this is just these stupid male AI researchers who don't get it yeah but you can't not put it in I mean it doesn't matter if you don't put it in it will it will arise anyway because you can't get the coffee if you're dead so I'm happy to discuss any of these further on you may have heard other arguments that you you're not sure how to respond to so the proposal is that in fact you know the part of the problem is that AI is traditionally conceived for which I guess I have some guilt in conveying this idea that that AI is about rational behavior which means optimizing objectives you know allows for the past you know release doesn't think about the issue of well what if the objective isn't the one that you actually want to have optimized so could we change AI to a different field this should initially we're going to call it provably beneficial ai and you can see why they're asterisk because this is almost oxy oxymoronic because beneficial is so vague and touchy-feely and provably doesn't seem to fit with that eventually it'll just be called AI because you know just like we don't you know if you're a civil engineer you don't say oh I work on bridges that don't fall down right you just say I work I work on bridges right it's just so just intrinsic to bridge design that they don't fall down and it should be intrinsic to AI system design that they are supposed to be beneficial to you and that's sort of what it means to do it I so eventually it will just be called AI but for the time being we have to distinguish it from traditional AI okay and how do you do that so so here's one way and there are there are others you know that there's a whole range of research that can be done on in some sense trying to constrain behaviors of AI systems which is I'm not going to talk about but that's a completely plausible and interesting and but as yet totally unsolved direction but if we want to think about this this question of how do we get rid of the problem of of misaligned values well you could say well the only way to get rid of misaligned values is to just to get the values to be exactly the same all right to get the objectives to be exactly those of the human race and then everything's fine that's but that's too difficult and it's also isn't quite necessary right what needs to happen actually so this is number two is crucial number one is just to point out in some sense that as Moore's Law is or what at least one of them is superfluous we don't want the robot to care about itself at all it has no intrinsic objectives whatsoever it's only objective is to optimize human values but it doesn't know what they are right and so this is a if you like this then it's get you get soft alignment right that it's at least compatible with humans because it's uncertain about what the human objective is and it's as as we say in power ability the support of its distribution includes whatever the true human value function might be even though the machine isn't sure on which which of the possible value functions is right and this turns out to be quite helpful and then the third part of this is well ok how yeah we could have very robot that's very very very uncertain it doesn't know if humans like losing legs or like gaining extra legs or just like having the number of legs they have right well that's not a very helpful robot right because now the robots are less I'm really not sure what to do to help you ok so you what you want to get better at understanding human so it could be more helpful to you and the information source is there right the raw data if you like the ground truth is contained in human behavior because that reveals information about human preferences so those three simple ideas you could put together in various ways and get to start to make progress so so a version of the self-preservation thesis from our mohandro is is this one way to have a robot that you know it has an off switch that someone can come along as press the off switch now the robots did right and you know if you take Omaha murder and literally what he says is look if the robot has the objective of getting a coffee you know one way of failing is that someone comes along and presses the off switch so if robot has an action which permanently disables the off switch so it's sort of an internal off off switch then then it would naturally do that right there's no cost and it gets rid of one branch of the tree that would lead to failure and so it's clearly a good idea right and when you put it like that it's sort of hard to find even think of a way around it in fact when you put that into mathematics there is no way around it it's in fact you know unavoidable and so but if you if you avoid giving the robot a precise objective but instead you allow it to be uncertain about the objective so for example it might know that it's supposed to get coffee but it's uncertain about what other what the signs of the other variables and the value function might be you know so is it allowed to you know kill people who get in the way of the coffee machine it's not sure all right well so then it starts to its behavior will be different because of that uncertainty in the value function and in fact so then you've got uncertainty about the the human objectives and then you have to have some attribution of rationality to humans it doesn't have to be perfect but it has to be so to me behavior has to be sort of correlated with with their objectives and so roughly speaking then the the you can think of the human action of switching off the robot is actually providing information to the robot about what the human's true value function is in particular we know whatever the robot was about to do is not helping right and so that's why we're switching off and so the robot should be happy to be switched off because that leads to an outcome that is more beneficial from the human than the robot disable and be off switch okay and so and you can when you do the math that works out and in fact the margin of safety is proportional to the allowed amount of uncertainty about the human value function and but of course the more uncertainty there is about the even value functions are less helpful the robot can be and that seems to be an unavoidable trade-off okay so yeah sure then the consequence is it's actually in the robots interest to to leave the off switch available so then let me talk a little bit about this third point value alignment you know how do we learn what the value function is how we narrow down this uncertainty from the dirting behavior so there's this old Field called inverse reinforcement learning it has other versions so in economics and applied you know consumer theory they do something called preference solicitation you know so so many presents consumers with you know 81 different versions of headphones and asked them to say how much they pay for them or which ones they like better and so on so forth to try to figure out the human value function for headphones and you know so that's the sort of those are non sequential decision problems like do you want this one or that one but there's another field called structural estimation of mdps where for example you know the economists look at when do people have children and then somehow you figure out the value of children from from people sequential child production behavior and things like that so the general idea is that the behavior is a is a very complex manifestation which is made complex actually by the environment in which the behavior is produced but underlying it there's a simple explanation which is that the human wants some things and cares about some stuff and and so that's a if you like the physics of behavior alright what is the underlying Laurer physics is the humans want things and they act to try to get them and so you can invert the behavior to figure out what it is they want and this is this has been around in AI since 98 and there are quite effective algorithms that are quite scalable and people have done there are several hundred papers on how to do this it's not quite the right problem for one obvious reason is that you don't want the robot to adopt the value function of the human right that's that's trivial but important sorry if the robot watches knees struggling out of bed and wandering down stairs like a zombie to get my coffee it can figure out that oh you know you Stewart really likes to have coffee when he wakes up but you don't want the robot to want coffee that doesn't help right so so it's not adopting the value function that's usually how it's done in the inverse reinforcement learning you know you you will a copter pilot and now you learn about desirable helicopter maneuvers and then the robot doesn't so it actually adopts the value function so the framework we developed is a generalization of that called cooperative inverse reinforcement learning which is a game theoretic setting and you could essentially you have a human or multiple humans and a robot or multiple robots and as I mentioned they the human has a value function and at least implicitly they know it or they might not be able to make it explicit the robot know doesn't know it and knows it doesn't know it but if that's its objective to maximize and and then when you when you solve this game when you look at the solutions of the game they automatically produce the kinds of things that you want namely you know the robot is cautious it asked questions the human actually has an incentive to teach the robot so that because the faster the robot figures out what the human wants the more it can be helpful and new we can actually show show little examples and so this actually contradicts the inverse reinforcement learning assumption all right the inverse reinforcement ending assumption is that the human is acting optimally according to some value for and then we observe the behavior and we try to figure out what what the value function is but actually in this setting the human doesn't act the same way as they would if the robot wasn't there right they sort of will you know demonstrate things they'll even you know point out what not to do right whereas the human by themselves would never do that because totally pointless all right and so you actually get different solutions and and and so since the human is gonna behave as it were a non-optimal at least in the isolated sense then the the algorithms for learning from that behavior also have to be different so the standard IRL learning hours won't work in this setting and they have to be revised so it creates a much richer more complicated and interesting setting so he's just a very trivial example that Dylan my student Dylan had feel Manila's not here right now so he just did some sort of deliberately trivial but you have a grid world and there are three locations that can be of throats or three centroids of value and they can have different you know any of these could be positive or negative and then they would radiate that value to their neighboring squares as you can see here this is a peak of value and this is the peak of value this is a kit that you want to avoid and so the optimal you know if the human or you know a rational agent is put in this environment and let's say it starts here then you know the optimal behavior because we're slightly to the left of the the center here the alto behavior is to go directly to the left-hand peak of value and then stay there right that's that's the optimal solution for this environment but and then what I've shown here is okay if you see that behavior and you run IRL right then you will conclude this gray what I mean this grey map shows the conclusion that the IRL garden draws about what is the value function underlying this behavior okay and in fact there's in the posterior over value functions this is now whereas in truth it's highly positive it now looks slightly negative because the robot didn't go to the right right and therefore that rules out the possibility that that this is the highest value square right and then so the the mean of the posterior is actually sit now slightly below zero so to speak it definitely didn't go down so it's pretty sure that's not a good idea either right so you get the wrong conclusion from observing the behavior and in fact if you solve the if you solve or you actually this is one round of best response in the game so it's not a complete solution to the game but the the one after one round a best response to what the human does is actually to visit both of these regions of high value and then this shows the posterior that they learning out of them obtains and it's much closer to the true posterior from compared to that one and so this is just a trivial observation that the solutions of these two player games are different from optimal behavior by one agent observed by a second agent that's trying to figure out the balance okay so then looking looking ahead we know beyond trivial toy examples and say okay let's imagine if we take this seriously we are actually going to need to figure out to a large extent what the human value function is and that's you know that's easily a twenty thirty year project and it's interesting to think about well what's the output right it's like if I if if you guys are a bunch of venture capitalists and I always hear saying hey I need funding to to start this and then think it's okay so what are gonna sell at the end of is how I'm gonna sell value functions right well what exactly does that going to look like you know so just you just try to imagine doing this right and taking it seriously and I think okay well what are the sources of information well actually there in all there's enormous amount of information about human behavior right so everything pretty much everything that's ever been written by humans is actually about people doing things some of it very boring like people buying two bushels of corn and exchanging that for you know some arrowheads but even that is really useful information about the human value function and your novels and and newspaper articles and everything else and every television program you know there's not a lot of television programs where they only talk about rocks and not about you know nothing about what people do or care about or any of those things so so almost everything out there is gonna be useful information a lot of it is you know in newspaper articles and levels and everything is it one person does something another person gets upset or happy alright that's also useful information but again it's a form of behavior it's not it's not direct proof that wine is wrong any other one's right but it's evidence that it can all be thrown into the mix you know if it's understood properly so you know so that we in order to do this we'll need to do natural a new language understanding and your computer vision to understand all the TV programs and what everyone's doing in speech nothing else there's lots of AI to be done to make this work but it's easier than building the super intelligent AI system that we are preparing for so it's it's it should be feasible and so that this is this is this is good news we need to solve this actually much earlier so this this startup company the values are us corporation you know will will actually have customers fairly soon I think you know so self-driving cars domestic robots you know so one example I give I don't think I have the slides here I just gave a talk in Korea where I made a little sort of cartoon sequence of a robot in the house and then there's the little kids sitting there and their plates are empty and they're hungry and then the robot has to find something to eat if the fridge is empty oh and there's a little cute kitty and then robots oh yeah we'll cook the kitty for dinner and then there's a newspaper headline and that's the end of the domestic robot industry so there's a very strong economic incentive for self-driving car companies and domestic robot companies and personal digital assistant companies right you know if they're gonna be helping you book your airline flights and and making meetings you know you don't want them to make meetings with lunatics you don't want to book flights there Antarctica and so on so they all need to understand your value system fairly well so there's this very very strong economic incentive to get it right even fairly soon so that's good all right that means that this should be this should be part of the AI industry and we will be developing the technology so you know really reasons that these are related reasons to the concern about super intelligent AI but they're much more mundane that the difficulties include you know the fact that the humans are complicated some of them are nasty so how do you you know how do we avoid you know there's lots of bad behavior out there how do we avoid learning that we should be that the robots should be supporting all these very undesirable behaviors you know even if it's not clear the extent to which our behavior can even be successfully described as as trying to optimize any value function there are lots of reasons for thinking that isn't true including the fact that evolution doesn't care about us as individuals anyway like so a lot of evolutionary theory says no it's nothing to do with you and your desire to reproduce it's actually you know small groups of genes that actually exist across multiple species and they're the units of optimization and they're the ones that are really being selected and from a even Irish you know even if you think about the species is a unit right well as as a unit the the species if it's going to survive needs to do both exploration and exploitation exploration means one way of having the species explorers by producing individuals who are completely nuts right who acted extremely risk prone ways and then sort of go off and explore you sail across some ocean that they think is gonna fall off the end of the earth and happen to arrive another continent and things like that you know completely completely nuts the kind of stuff that they do on Star Trek right it's not that the individuals involved are irrational is that the concept of rationality in some senses doesn't apply to individuals at all right it's actually that they're just fulfilling a function which is part of the rationality of the species or the tribe or the gene group or whatever so so things can get really really complicated in understanding the you know the full spectrum of human behavior and how we infer anything from it you know we're computationally limited so if you watch two people playing chess well you know one of them loses does that means because they wanted to lose the game or no it's because in fact it's because they you know they're both computing computation limited and one's maybe slightly more than the other all right could be that he's trying to lose yeah it could be these trying to lose that it does happen but usually he are not doing it and so on and of course you know we there's different you know here all humans are individuals and then there's differences across cultures and so on and then and there are these questions of trade-offs right that we even if you do learn the value function of individuals you can't optimize everyone's value function because they're ours enough countries to be king or queen all of them there isn't enough money for everyone to be a billionaire and so on so on so on so so so how do you deal with those so and these these are age-old questions in social sciences so we're not gonna solve by observing the human behavior but by making everything much more explicit and mathematical and empirical hopefully we can make a lot of progress and maybe we'll learn more about what we what we think we should be doing and that will make us better at doing it okay so the consequences are various so the the objective is I think in part to change how we think of the field to include these considerations and and and then ensure that what we're building is actually the produces behavior that we're happy with and you know as I said there are a lot of questions that social scientists have studied for a long time and that will have to be incorporate some of those concepts will have to be incorporated and then last question is well there is a lot when you actually get concrete and say okay we twenty years time values are us corporation he's now selling these things you know what are they going to look like right it's not at all obvious it I could do it for chess very easily I could sell you a value function for chess you know and it says nine points for a queen and five points for rook and it's pretty straightforward right but that's because chess is fully observable and there's no argument about whether you have a queen or not but the inputs to a domestic robot are the the video sequence coming in through its camera you're not going to define value functions in terms of video sequence is coming in through cameras right so you know give zillions of pixels that will be daft all right so what are you gonna do and that I think is a somewhat open question but you know a technically technically important question to answer so coming back to Norbert Wiener all right so he in his paper which I really do recommend reading you know he points out that that these questions are incredibly difficult even a scientist is sort of only seeing a very local part of an unending stream that goes on for millennia and might think that what he's doing is beneficial that in fact could be entirely wrong and you have to look over a long time scale and try and figure out the answers and it's very difficult but you have no choice but to try to do it so I guess that's why you're all here thank you it's just yes that's a good question I mean I think the point I mentioned towards the end that value functions apply in these partially observable environments and how you define them right so you could imagine let me just take something very simple like you know is the cat alive or dead right so you could put you know higher value on the cat being alive than the cat being dead but for different robots the mapping from percept sequences to the probability that the cat is in fact alive or dead would be different and so presumably we all have to agree on what we mean by alive and dead and then the robot manufacturer has to have some recognizes for that and this is all very hand wavy but and and then you supply the value so they're like dead the then the problem with that is that in fact you know alive and dead are not well-defined you know if you talk to anyone like a neurosurgeon who works in the hospital it's extremely high in many cases to figure out if someone's alive or dead and one of my colleagues told me that so the hospital allowed him to run experiments on people who had been officially declared dead so he kept them alive on the ventilator or kept them kept the body's functioning on the ventilator kept them alive because they'd already dead and two of them got what got up and went back to work so so this is actually you know it's it's a tricky thing it's not really fine and this is exactly where the super intelligence this they find the worry is they find the loopholes right they find ways of achieving what you specified is the objectives that are so you would just never imagined they would think of that they're so extremely counterintuitive but they you know just like tax law right you think you've ruled out a loophole so people find this completely bizarre way of you know they pay their employees with gold coins right because that's you know they're it's a five dollar gold coin it's five dollars right I give you one each and so you don't have to pay any tax because you're only making five dollars a month but you know you know that that's kind of an example but you know they will come up with much much much more devious ways that you know and in alive or dead so people in this in the existential risk literature talk precisely about situations that you could argue a kind of in this gray zone or they'll define in this yes you're still alive but you're immobilized in a box with with a heroin drip and so on and and you might say well that's really you might as well be dead but no you're alive you've met that you met the stated criteria so so there I think is where where the question of having alignment which is not perfect right so might one you get sort of a clash of intuitions if one says look if you're if you're the value function of the robot is optimizing is within epsilon of a true value function then nothing too bad can happen right and you can maybe prove that you know the most you can lose is you know epsilon squared over one minus gamma or something right but then you know your other intuition says but if the robot is way way way smarter than you you know it can somehow use that epsilon to as a loophole to produce something that in the long run you were extremely happy with unhappy with I should say and I don't I mean that that seems like a question that can be attacked mathematically I think it will come out the desire of the right way but I still I'm still not sure about it they're not fully identifiable so so like the reward functions and inverse reinforcement learning are positive to exist but they they are not directly observed episode so we have all the challenges of latent variable my learning which is that often you cannot pin down the exact value of these things but still they give the reason to use them as they give a very compact and maybe even approach on causal explanation of the behavior so it's yeah it will be tough and so the but that means the AI system needs to know that it doesn't know which come back to very first slide and needs to be to behave robustly with respect to that yeah but I think I I I almost want to put on my luck fees are they accent and say I mean the it's not just that we can't observe alive and dead directly but in fact it isn't a well-defined right why we that even notionally you can't say okay here is a here's a particular world and alive is true and here's another one where alive is false there's always a dichotomy yeah but is that uncertainty treated the same way in other words you take expectations over it or do you do you take work worst case over at yarn that's one way of thinking I mean of course the other responses is try to find those corner cases and then check to see the people like so what do you think about this yes what is more dangerous and yep so we thought about cooperative so you there's the shallow particular agent which is the human which is this well-defined thing and in real world is this sort of like uncertainty this relates to diversity like what what counts as human and what human action is you think I would change very much so what counts as human I mean it's not that I have two arms in two legs and exactly yes so what what counts as what you care about yeah oh yeah I mean in some sense that's a political question you know should we include in our observations the you know the behavior of the clinically insane you know and what you know what about animals and so on I I'm not sure that anything I can instruct things that canvas you murderous definition and is that their preferences like it gets back to the Victorian point yeah yeah so I do I don't know whether we will have to be microchipped at birth so that the system knows that we recount is real and and these are non Fae Keable microchips but yeah and so yeah I don't know I maybe we need to make sure it doesn't ever make people yeah I think that's just last question another way once no some way to tell you talk show funny you should ask I was just thinking about that last night it sort of imagining a large library of decision making scenarios which would be represented bazoom by you know an in an embedded 3d virtual reality experience that would go on for some time and then the robot would have to be we deciding how to behave in this in this scenario and we can you know sort of kind of like a driving test right so at least you know you you've got a hundred thousands in areas where you're sure that across all these it's behaving adequately well would be that would be a good start for a domestic robot I think and for a self-driving car oh because we're not right that assumes that we the human can induce from these hundred thousand scenarios what exactly the value function is and so we we can largely the assumption is that we can largely act in them morally and societally reasonable way in a large variety of settings but we're unable to make explicit in a reliable way exactly what the value function is that is enough for you say yes here's the right wedding function yeah also a system where I teach what I mean thank you thinking about the the thing about the the imagenet competition right I can recognize all these objects I can have a system that learns by any mechanism how to recognize objects and I can test it on a million test cases but I can't write down the discriminating function yeah all right so it's just like that yeah sure I mean that's that's one way of doing it and in some sense that's precisely except that I'm looking at training data which is occurring naturally right the actual behavior of the human race where I haven't validated that everyone is behaving precisely according to the values that I want the robot to learn but you still have to be able to learn from back that kind of data right so so King Midas did what he did and nany he's expressing remorse well you can learn from that behavior sequence something about human values that they they like gold but they actually like their daughter even more than gold but they're also that they're not 100 percent rational in collecting the outcomes of their choices right that's all good information even though none of it is optimal behavior so yeah what we used to call near-miss examples of the machine that english' two before it was entirely forgotten and replaced with a new machine learning literature but yeah yeah so near-miss I mean these are these are all good cases and and you could you could look at fables and and other instructive stories for children is precisely doing that for human beings all right so let's thank professor Russell again
a1672cb0-1916-45d9-a8e3-46dc089411fb
trentmkelly/LessWrong-43k
LessWrong
An observation on cryocrastination Why do people cryocrastinate? The most common explanation I’ve heard from intelligent people for not getting cryonics is that the money is better spent on some altruistic cause. By itself there is nothing wrong with this belief, but irrationality lies near. Before I continue, I am not here to argue that cryonics works or not. That has been done before. From this point on, I will assume cryonics derives expected utility from it giving a reasonable chance of continuing life past many currently terminal events, with life being a valuable thing. We begin with a quick overview of the cost of cryonics. Let us break our cost analysis into two parts: acquisition of the cryonics and life insurance contracts and maintenance of these contracts. First, acquisition. The main costs here is the time that could be invested in other activities. I would estimate a reasonably organized person could get it done with 20 hours of continuous work and $200 (all costs USD) upfront. Let us say this is $600 worth of costs. For people in the US, see rudi hoffman, he will do pretty much everything. Second, maintenance. For myself who lives in a non-silly country like New Zealand, life insurance is $10 per month, and membership fees for e.g. Cryonics Institute is $10 a month, totaling 70 cents a day. Let us say it is $1 a day for places such as the US, as insurance costs an arm and a leg there. So cryonics costs one dollar a day. Put this way, it doesn't seem much. This comes out to be two starbucks coffees a week. Let me repeat that: in the long term, cryonics costs TWO STARBUCKS COFFEES A WEEK. Very few people can say they cannot optimize themselves so as to have $1 a day more disposable income. As someone once put it to Eliezer “No, really, that's ridiculous.  If that's true then my decision isn't just determined, it's overdetermined.”. I agree, cryonics is cheap. Let us now consider the following two beliefs from someone who has a favorite cause, in which donating money gives lots of
784b0db3-dedf-4fc0-9985-5b07dd892ac8
trentmkelly/LessWrong-43k
LessWrong
Counting-down vs. counting-up coherence [Metadata: crossposted from https://tsvibt.blogspot.com/2022/10/counting-down-vs-counting-up-coherence.html. First completed 25 October 2022.] Counting-down coherence is the coherence of a mind viewed as the absence of deviation downward in capability from ideal, perfectly efficient agency: the utility left on the table, the waste, the exploitability. Counting-up coherence is the coherence of a mind viewed as the deviation upward in capability from a rock: the elements of the mind, and how they combine to perform tasks. What determines the effects of a mind? Supranormally capable minds can have large effects. To control those effects, we'd have to understand what determines the effects of a mind. Pre-theoretically, we have the idea of "values", "aims", "wants". The more capable a mind is, the more it's that case that what the mind wants, is what will happen in the world; so the mind's wants, its values, determine the mind's effect on the world. A more precise way of describing the situation is: "Coherent decisions imply consistent utilities". A mind like that is incorrigible: if it knows it will eventually be more competent than any other mind at pushing the world towards high-utility possibilities, then it does not defer to any other mind. So to understand how a mind can be corrigible, some assumptions about minds and their values may have to be loosened. The question remains, what are values? That is, what determines the effects that a mind has on the world, besides what the mind is capable of doing or understanding? This essay does not address this question, but instead describes two complementary standpoints from which to view the behavior of a mind insofar as it has effects. Counting-down coherence > Counting-down coherence is the coherence of a mind viewed as the absence of deviation downward in capability from ideal, perfectly efficient agency: the utility left on the table, the waste, the exploitability. Counting-down coherence could also be called
f424d5bd-314a-4336-9c48-89619f42d16d
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] Human-narrated audio version of "Is Power-Seeking AI an Existential Risk?" There's now a human-narrated audio version of my report on existential risk from power-seeking AI (text version here). You can find the audio here, or by searching for "Joe Carlsmith Audio" on your podcast app. Thanks to the folks at Type III audio, and especially to Solenoid Entity, for doing the narration. There's also a shorter video presentation and summary of the report here.
ed1a02ac-e042-41cc-96ea-fd74090ea988
trentmkelly/LessWrong-43k
LessWrong
Weak HCH accesses EXP This post is a follow-up to my “Alignment proposals and complexity classes” post. Thanks to Sam Eisenstat for helping with part of the proof here. Previously, I proved that imitative amplification with weak HCH, approval-based amplification, and recursive reward modeling access PSPACE while AI safety via market making accesses EXP. At the time, I wasn't sure whether my market making proof would generalize to the others, so I just published it with the PSPACE proofs instead. However, I have since become convinced that the proof does generalize—and that it generalizes for all of the proposals I mentioned—such that imitative amplification with weak HCH, approval-based amplification, and recursive reward modeling all actually access EXP. This post attempts to prove that. Updated list of proposals by complexity class P: Imitation learning (trivial) PSPACE: AI safety via debate (proof) EXP: AI safety via market making (proof), Imitative amplification with weak HCH (proof below), Approval-based amplification (proof below), Recursive reward modeling (proof below) NEXP: Debate with cross-examination (proof) R: Imitative amplification with strong HCH (proof), AI safety via market making with pointers (proof) Proofs Imitative amplification with weak HCH accesses EXP The proof here is similar in structure to my previous proof that weak HCH accesses PSPACE, so I'll only explain where this proof differs from that one. First, since l∈EXP, we know that for any x∈X, Tl(x) halts in O(2poly(n)) steps where n=|x|. Thus, we can construct a function fl(n)=c1+c2ec3nc4 such that for all x∈X, Tl(x) halts in less than or equal to fl(x) steps by picking c3, c4 large enough that they dominate all other terms in the polynomial for all n∈N. Note that fl is then computable in time polynomial in n. Second, let H's new strategy be as follows: 1. Given p, let s, x=M(p:f(|x|)). Then, return accept/reject based on whether s is an accept or reject state (it will always be one or the othe
a82a7e0e-b3df-4328-b17e-a00632b82582
trentmkelly/LessWrong-43k
LessWrong
"Cheat to Win": Engineering Positive Social Feedback This post outlines a very simple strategy that's been working for me lately. It may be obvious to some, but it only clicked for me recently. Positive social stimulation is fun for humans, right? We like to be liked. It makes us cheerful. We're motivated to do things that make people smile at us and praise us. But purely optimizing for being liked is a bad idea for lots of reasons: it leads away from your real goals and values, it motivates you to be deceptive, it's kind of shallow and unsatisfying in the long run. So here's what you do instead: first, decide what you actually want to do. Then, seek out people who will socially reward you for doing that, and set yourself up to get social rewards. Marketing experts will tell you that you have to "find your tribe", find the fans of your product, and focus on delighting them. It's fine if you have haters. Haters are almost irrelevant. You succeed if you have enough fans who value your stuff highly enough. This applies across areas of life. You only need (about) one job. You only need one spouse. You only need a small number of close friends. Having great supporters is more important than avoiding having any haters. I used to have the intuition that "fairness" meant I wasn't allowed to bias my social environment in my favor; that I should expose myself equally to people who liked and disliked me, people who did and didn't share my values, in order to get a "balanced" impression of the world. This is pretty stupid, actually. You, as a very small creature moving through infinite space, don't learn about the universe by drawing uniform samples from it. You learn through pursuing goals, which means you'll spend more attention on areas of the universe that are useful to you, which means things that are easy for you or helpful for your life, things that give you energy and resources to explore more. An amoeba, as it crawls around, is going to learn more about the parts of the petri dish with food than the parts withou
3b385904-f4ca-4eb9-959c-db8a50ea5922
trentmkelly/LessWrong-43k
LessWrong
Meetup : London Discussion article for the meetup : London WHEN: 01 May 2012 06:30:00PM (+0100) WHERE: Shakespeares Head, 64-68 Kingsway. Holborn, London, WC2B 6BG Let's talk about what we want out of meetups! I'd like to encourage everyone to read How to Run a Successful Less Wrong Meetup http://lesswrong.com/r/discussion/lw/bak/draft_how_to_run_a_successful_less_wrong_meetup/ but other links on the resources page are also useful http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_group_resources . See you there! Discussion article for the meetup : London
b4f312c7-8058-4b0a-9b97-f03b332c0a53
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Models Modeling Models **I - Meanings of words** Now that we have[more concrete thinking](https://www.lesswrong.com/posts/z2BPxcFfhKho89D8L/goodhart-ethology) under our belt, it's time to circle back to Goodhart's law for value learners. What sorts of bad behavior are we imagining from future value-learning AI? What makes those behaviors plausible? And what makes them bad? Let's take the last point first. Judgments of goodness or badness are situated in models - models of the world that we use to infer and operationalize human values. And we don't just use the same one all the time. When I say "I like dancing," this is a different use of the word 'like,' backed by a different model of myself, than when I say "I like tasting sugar." The model that comes to mind for dancing treats it as one of the chunks of my day, like "playing computer games" or "taking the bus." I can know what state I'm in (the inference function of the model) based on seeing and hearing short scenes. Meanwhile, my model that has the taste of sugar in it has states like "feeling sandpaper" or "stretching my back." States are more like short-term sensations, and the described world is tightly focused on my body and the things touching it. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/6f63662757368e0f90f769f1007f7438af7c234a939af68d.png)Other models work too! That's fine, there's plenty to go around.The meta-model that talks about me having preferences in both of these models is the framing of[competent preferences](https://www.lesswrong.com/s/aJvgWxkCBWpHpXti4/p/7kuhXtwFdXvD2Ngie). If someone or something is observing humans, it looks for human preferences by seeing what the preferences are in "agent-shaped" models that are powerful for their size[[1]](#fnq0n08j5naq). So when we call some AI behavior "bad," this is a word whose meaning depends on usage and context, but ultimately bottoms out in implied models of the world. It's like a Winograd schema - like how English-readers infer that "they" in "workers put down the boxes because they were tired" refers to the workers, the "like" in "I like dancing" is understood to use a certain perspective on how I am modeling and interacting with the world. All of this should be taken with the caution that there's not one True Model in which the True Meaning of the word "bad" is expressed. Obviously you still have to make *some* choice in practice, but the point is that the way you make this choice doesn't have to look like resolving epistemic uncertainty about which model is the True Model[[2]](#fnq7sbhldv9l). **II - Model conflicts** What were the patterns that stood out from the [previous discussion](https://www.lesswrong.com/posts/z2BPxcFfhKho89D8L/goodhart-ethology) of what humans think of as bad behavior in value learning? The most common type of failure, especially in modern day AI, is when humans are actively wrong about what's going to happen. They have something specific in mind when designing an AI, like training a boat to win the race, but then they run it and[don't get what they wanted](https://openai.com/blog/faulty-reward-functions/). The boat crashes and is on fire. We could make the boat racing game more of a value learning problem by training on human demonstrations rather than the score, and crashing and being on fire would *still*be bad. For simple systems where humans can understand the state space and picture what we want, this is the only standard you need, but for more complicated systems (e.g. our galaxy) humans can only understand small parts or simple properties of the whole system, and we apply our preferences to those parts we can understand. From the inside, it can be hard to feel the difference, because we want things about tic-tac-toe or about the galaxy with the same set of emotions. But when trying to infer human preferences, there's going to be ambiguity and preference conflicts about the galaxy in a way that never shows up in tic-tac-toe. This is a key point. Inter-preference conflicts aren't an issue that ever comes up if you think of humans as having a utility function, but they're almost *unavoidable*if you think of humans as a physical systems with different possible models. We can't fit the whole galaxy into our heads, nor could evolution fit it into our genes, and so out of necessity we have to use simple heuristics that work well pragmatically but don't always play nicely together, even in our everyday lives. Bad preference aggregation can lead to new kinds of bad behavior that don't make much sense in the Absolute Goodhart picture of human preferences. An AI that resolves every seemingly-even deadlock of human moral intuitions by picking whichever answer leads to the most paperclips seems bad, even though it's hard to put your finger on what's wrong on the object level. That's an extreme example, though. A value learner can fail at resolving preference conflicts without any ulterior motive, in cases where humans have competent intuitions about what the conflict-resolution process should look like. If I like dancing, and I like tasting sugar, it's obvious to me that what I *shouldn't*do is never go dancing so that I can stay at home and continually eat sugar. The line between different sorts of bad behavior is blurry here. The obviousness that I shouldn't become a sugar-hermit can be thought of either as me doing preference aggregation between preferences for tasting sugar and dancing, or as an object-level preference in a more fine-grained and comprehensive model of my states and actions. But I don't *want* to be modeled in the most fine-grained way[[3]](#fnc0sdjxxi3yf). So at the very first step of trying to choose between plans, we immediately need to use my meta-preferences to reason correctly. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/ee98a1385dcb1b765488a4c6008ce20bd83e24748562bb0d.png)Obligatory depiction of meta.**III - Meta-preferences** The meta-preferences an AI should learn include how we want to be modeled, which preferences we endorse and which we don't, how to resolve preference conflicts, etc. These opinions are inferred from humans' words and actions, and like other preferences they're limited in scope and can come into conflict. Learning and representing these meta-preferences is a pit full of unsolved problems. One issue is that how an AI learns and represents stuff depends on its entire design, and everyone disagrees on how to design AGI. But even in toy models accessible today, we quickly run into difficulty - this does have a silver lining, I think, because it means we can do useful work right now on learning meta-preferences. If we consider an AGI that's an instruction-following language model, meta-preferences might be represented as text about text, like "Saying 'It's good to rob a bank' is bad," or text about the design of the model itself. But although language models are good at *stating*meta-preferences, I'm currently unsatisfied with the prospective ways to *act* on them ([e.g.](https://www.lesswrong.com/posts/pz84sQKsgg3GBHQpd/supervised-learning-and-self-modeling-what-s-superhuman)). It's hard for a language model to re-evaluate the way it models me based on a text description of how I want to be modeled. AGI based on model-based reinforcement learning has a quite different set of problems. If the AI models itself, and its own operations, then our preferences about how we want it to model us aren't much harder to connect to actions in the world than our other preferences. But how are we supposed to get *any* human preferences learned reliably? WIth the language model we could agree to pretend that it's going to end up aligned-ish, because it learns the human text generating process and very little else. Such a story is harder to come by for an AI with a more general world model trained with self-supervised predictive loss. Still, I think all of these are problems that can be worked on, not necessarily fatal flaws. A further complication (perhaps not meta-preferences' *fault*, but certainly associated with them) is that where our value-learning AI eventually ends up in preference-space depends on where it starts. This can lead to certain problems ([Stuart](https://www.lesswrong.com/posts/WeAt5TeS8aYc4Cpms/values-determined-by-stopping-properties)), and we might want to better understand this process and make sure it leads somewhere sensible ([me](https://www.lesswrong.com/posts/8NpwfjFuEPMjTdriJ/gricean-communication-and-meta-preferences)). However, *some*amount of this dynamic is essential; for starters, picking out humans as the things whose values we want to learn (rather than e.g. evolution) has the type signature of meta-preference. Learning human meta-preferences can push you around in preference-space, but you've still got to start somewhere. How does all this connect back to Goodhart? I propose that a lot of the feeling of unease when considering value learning schemes reliant on human modeling is because we don't think they'd satisfy our meta-preferences. If the value learning AI is modeling us in an alien way, even if there's some setting of its parameters that would lead to outcomes we approve of it feels like it would be surrounded on all sides by steep cliffs with spikes at the bottom. This pointlike nature of the "True Values" is a key component of Absolute Goodhart arguments. **IV - Meandering about domains of validity** A meta-preference that I think is crucial for making our lives easier is a sort of conservatism, where we prefer to keep the world inside the domain of validity of our preferences. What's a domain of validity, anyhow? Option one: The domain of validity comes bundled with the model of the world. This is like Newtonian mechanics coming with a disclaimer on it saying "not valid above 0.1 c." This way keeps things nice and simple for our limited brains, but clunky to use in abstract arguments. Option two: We could have a plethora of different models of the world, and where they broadly agree we call it a "domain of validity," and as they agree less, we trust them less. When I talk about individual preferences having a domain of validity, we can translate this to there being many similar models that use variations on this preference, and there's some domain where they more or less agree, but as you leave that domain they start disagreeing more and more[[4]](#fn4ngpcezffga). Our models in this case have two roles; they make predictions about the world, and they also contain inferences about our preferences. Basically always, it's the preferential domain of validity that we care about. If there are two models that always predict the same behavior from us, and *usually* agree about our preferences, but have some situations where they utterly disagree about preferences, those situations are the ones outside the domain of validity. What would ever incentivize a person or AI to leave the domain of validity of our preferences? Imagine you're trying to predict the[optimal meal](https://www.lesswrong.com/posts/pW6YJEzoRFe9cshuN/impossible-moral-problems-and-moral-authority), and you make 10 different models of your preferences about food. If nine of these models think a meal would be a 2/10, and the last model thinks a meal would be a 1,000/10, you'd probably be pretty tempted to try that meal. Ultimately, what you do depends on how you're aggregating models. Avoiding going outside the domain of validity looks like using an aggregation function that puts more weight on the pessimistic answers than the optimistic ones, or even penalizing positive variance. In the language of meta-preferences, I don't *want* one way of modeling me to return "super-duper-happy" while other reasonable ways of modeling me return "confused." This meta-preference doesn't make sense if you think that there's actually One True way of modeling humans and we just don't know which it is. If our uncertainty about how to model humans was *epistemic* uncertainty, the right thing to do would be Bayesian updating and linear aggregation. All this talk about domains of validity would be invalid. So it's an important fact that we *aren't* just searching for the One True model of humans, we're just refining the desiderata by which we rate many possible models. **V - Making sense** It's time to finally do some Goodhart-reducing. The [classic mechanisms of Goodhart's law](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) are about how optimizing a proxy - even one that's close to our True Values in everyday life - can lead to a bad score according to our True Values. This sort of Absolute Goodhart reasoning is convenient to us because [most common examples of Goodhart's law involve a simple proxy leading to results that are obviously wrong](https://www.lesswrong.com/posts/z2BPxcFfhKho89D8L/goodhart-ethology). Absolute Goodhart poses a problem to *any*attempt to learn human values, because a value learning AI is just a complicated sort of proxy. But for real physical humans, there are no unique True Values to compare proxies to. We can only compare models to other models. So to talk about Goodhart's law in a more naturalistic language, we have to make some edits. It turns out to be pretty easy: just replace "proxy" with "one model" and "True Values" with "other models, especially those we find obvious when doing verbal reasoning." This gives you Relative Goodhart, which is much more useful for building value learning AI. As you can probably guess, I picked the names "Absolute" and "Relative" because in Absolute Goodhart you compare inferred human values to the lodestar of the True Values, while in Relative Goodhart you're just comparing one way of inferring human values to other ways. In Relative Goodhart, the mechanisms of Goodhart's law are ways that one model of human values can be driven apart from other models. We can illustrate this by going back through Goodhart Taxonomy and translating the arguments: * Extremal Goodhart:  + *Absolute Goodhart*: When optimizing for some proxy for value, worlds in which that proxy takes an extreme value are probably very different (drawn from a different distribution) than the everyday world in which the relationship between the proxy and true value was inferred, and this big change can magnify any discrepancies between the proxy and the true values. + *Relative Goodhart*: When optimizing for one model of human preferences, worlds in which that model takes an extreme value are probably very different than the everyday world from which that model was inferred, and this big change can magnify any discrepancies between similar models that used to agree with each other. Lots of model disagreement often signals to us that the validity of the preferences is breaking down, and we have a meta-preference to avoid this. + This transformation works very neatly for Extremal Goodhart, so I took the liberty of ordering it first in the list. * Regressional Goodhart: + *Absolute Goodhart*: If you select for high value of a proxy, you select not just for signal but also for noise. You'll predictably get a worse outcome than the naive estimate, and if there are some parts of the domain that have more noise without lowering the signal, the maximum value of the proxy is more likely to be there. + *Relative Goodhart*: If you select for high value according to one model of humans, you select not just for the component that agrees with the aggregate of other models, but also the component that disagrees. Other models will predictably value your choice less then the model you're optimizing, and if there are some parts of the domain that tend to drive this model's estimates apart from the others' without lowering the average value, the maximum value is more likely to be there[[5]](#fnagexpkb5y9)[[6]](#fnjk5acco8nv). * Causal Goodhart: + *Absolute Goodhart*: If we pick a proxy to optimize that's correlated with True Value but not sufficient to cause it, then there might be appealing ways to intervene on the proxy that don't intervene on what we truly want. + *Relative Goodhart*: If we have two modeled preferences that are correlated, but one is actually the causal descendant of the other, then there might be appealing ways to intervene on the descendant preference that don't intervene on the ancestor preference. There's a related issue when we have modeled preferences that are coarse-grainings or fine-grainings of each other. There can be ways to intervene on the fine-grained model that don't intervene on the coarse-grained model. These translated Goodhart arguments all make the same change, which replaces failure according to particular True Values with failure according to other reasonable models of our preferences. As Stuart Armstrong put it, Goodhart's law is[model splintering for values](https://www.lesswrong.com/posts/xoQhHxgwdHvWhj4P4/reward-splintering-for-ai-design). Although this change may seem boring or otiose, I think it's actually a huge opportunity. In the[first post](https://www.lesswrong.com/s/aJvgWxkCBWpHpXti4/p/RozggPiqQxzzDaNYF) I complained that Absolute Goodhart's law didn't admit of solutions. When trying to compare a model to the True Values, we didn't know the True Values. But when comparing models to other models, nothing there is unknowable! In the next and final post, the plan is to tidy this claim up a bit, see how it applies to various proposals for beating Goodhart's law for value learning, and zoom out to talk about the bigger picture for at *least* a whole paragraph. 1. **[^](#fnrefq0n08j5naq)**At least, up to some finite amount of shuffling that's like a choice of prior, or universal Turing machine, or definition of "agent-shaped." 2. **[^](#fnrefq7sbhldv9l)**You may recognize a resemblance to inferring human values. 3. **[^](#fnrefc0sdjxxi3yf)**That would lead to unpalatable positions like "whatever the human did, that's what they wanted" or "the human wants to follow the laws of physics." 4. **[^](#fnref4ngpcezffga)**Comparing preferences across models is currently an open problem. If you take this post's picture of inferring human preferences literally (rather than e.g. imagining we'll be able to train a big neural network that does all this internally), we had better figure out how to translate between ontologies better. 5. **[^](#fnrefagexpkb5y9)**And as with Extremal, we would rather not go to the part of phase space where the models of us all disagree with each other. 6. **[^](#fnrefjk5acco8nv)**My addition of the variance-seeking pressure under the umbrella of Regressional Goodhart really highlights the similarities between it and Extremal Goodhart. Both are simplifications of the same overarching math, it's just that in the Regressional case we're doing even more simplification (requiring there to be a noise term with nice properties), allowing for a more specific picture of the optimization process.
f22daf23-13a9-4597-a13b-809755a1f7d0
trentmkelly/LessWrong-43k
LessWrong
Privileged Snuff So one is asked, "What is your probability estimate that the LHC will destroy the world?" Leaving aside the issue of calling brown numbers probabilities, there is a more subtle rhetorical trap at work here. If one makes up a small number, say one in a million, the answer will be, "Could you make a million such statements and not be wrong even once?" (Of course this is a misleading image -- doing anything a million times in a row would make you tired and distracted enough to make trivial mistakes. At some level we know this argument is misleading, because nobody calls the non-buyer of lottery tickets irrational for assigning an even lower probability to a win.) If one makes up a larger number, say one in a thousand, then one is considered a bad person for wanting to take even one chance in a thousand of destroying the world. The fallacy here is http://wiki.lesswrong.com/wiki/Privileging_the_hypothesis To see why, try inverting the statement: what is your probability estimate that canceling the LHC will result in the destruction of the world? Unlikely? Well I agree, it is unlikely. But I can think of plausible ways it could be true. New discoveries in physics could be the key to breakthroughs in areas like renewable energy or interstellar travel -- breakthroughs that might just make the difference between a universe ultimately filled with intelligent life, and a future of might have been. History shows, after all, that key technologies often arise from unexpected lines of research. I certainly would not be confident in assigning a million to one odds against the LHC making that difference. Conversely, we know the LHC is not going to destroy the world, because nature has been banging particles together at much higher energy levels for billions of years. If that sufficed to destroy the world, it would already have happened, and any people you might happen to meet from time to time would be figments of  a deranged imagination. The hypothesis being privileged in e
87b18673-300e-4ef0-b130-6c1434ba6f74
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The other side of the tidal wave I guess there’s maybe a 10-20% chance of AI causing human extinction in the coming decades, but I feel more distressed about it than even that suggests—I think because in the case where it doesn’t cause human extinction, I find it hard to imagine life not going kind of off the rails. So many things I like about the world seem likely to be over or badly disrupted with superhuman AI (writing, explaining things to people, friendships where you can be of any use to one another, taking pride in skills, thinking, learning, figuring out how to achieve things, making things, easy tracking of what is and isn’t conscious), and I don’t trust that the replacements will be actually good, or good for us, or that anything will be reversible. Even if we don’t die, it still feels like everything is coming to an end.
ba43b8bc-4980-4222-9e53-b74665f47a88
trentmkelly/LessWrong-43k
LessWrong
Superrationality in arbitrary games This post is an informal summary of ideas that Scott and I discussed on my recent visit to Los Angeles, meant to serve as a temporary "savepoint" and reference. The approach outlined here, albeit very promising in my opinion, requires a more work to become mature. Abstract We introduce a new type of game theoretic equilibrium, which is a candidate game theoretic desideratum for decision theory. This can be regarded as a generalization of Hofstadter's superrationality to arbitrary games. This equilibrium can be interpreted as the outcome of games between transparent agents running CDT with restricted self-modification but also as the outcome of games between transparent agents employing a certain type of logical counterfactuals. We suggest it should be possible to use the formalism of optimal predictors to rigorously construct a logical decision theory which implements the latter interpretation. Introduction It is commonly assumed that two agents running the correct decision theory would cooperate in a transparent Prisoner's dilemma, that is, in a game of prisoner's dilemma in which each player knows the source code of the other player1. It is possible to argue this behavior will indeed arise in UDT, at least for sufficiently similar players, but the argument is unrigorous since an agreed upon formalisation of UDT is lacking. It is also possible to formally prove this behavior for certain set-ups of proof-based decision theory. However, it is not presently clear what is the correct desideratum for decision theory in arbitrary games, especially games that are asymmetric. The correct decision theory is likely to use logical counterfactuals but current understanding of logical counterfactuals is poor. On the other hand, it is much easier to model causal counterfactuals. Although CDT often leads to bad outcomes (and in particular defects in prisoner's dilemma) it is reflectively unstable and given the chance is expected to self-modify into some kind of better decisi
c8d4038d-c8cd-4cfc-aaca-1c15fcee717b
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Alexander and Yudkowsky on AGI goals This is a lightly edited transcript of a chatroom conversation between Scott Alexander and Eliezer Yudkowsky last year, following up on the [Late 2021 MIRI Conversations](https://www.lesswrong.com/s/n945eovrA3oDueqtq). Questions discussed include "How hard is it to get the right goals into AGI systems?" and "In what contexts do AI systems exhibit 'consequentialism'?".   1. Analogies to human moral development --------------------------------------- | | | --- | | **[Yudkowsky][13:29]**@ScottAlexander ready when you are | | **[Alexander][13:31]**Okay, how do you want to do this? | | **[Yudkowsky][13:32]**If you have an agenda of Things To Ask, you can follow it; otherwise I can start by posing a probing question or you can?We've been very much winging it on these and that has worked... as well as you have seen it working! | | **[Alexander][13:34]**Okay. I'll post from my agenda. I'm assuming we both have the right to edit logs before releasing them? I have one question where I ask about a specific party where your real answer might offend some people it's bad to offend - if that happens, maybe we just have that discussion and then decide if we want to include it later? | | **[Yudkowsky][13:34]**  Yup, both parties have rights to edit before releasing. | | **[Alexander][13:34]** Okay.One story that psychologists tell goes something like this: a child does something socially proscribed (eg steal). Their parents punish them. They learn some combination of "don't steal" and "don't get caught stealing". A few people (eg sociopaths) learn only "don't get caught stealing", but most of the rest of us get at least some genuine aversion to stealing that eventually generalizes into a real sense of ethics. If a sociopath got absolute power, they would probably steal all the time. But there are at least a few people whose ethics would successfully restrain them.I interpret a major strain in your thought as being that we're going to train fledgling AIs to do things like not steal, and they're going to learn not to get caught stealing by anyone who can punish them. Then, once they're superintelligent and have absolute power, they'll reveal that it was all a lie, and steal whenever they want. Is this worry at the level of "we can't be sure they won't do this"? Or do you think it's overwhelmingly likely? If the latter, what makes you think AIs won't internalize ethical prohibitions, even though most children do? Is it that evolution has given us priors to interpret reward/punishment in a moralistic and internalized way, and entities without those priors will naturally interpret them in a superficial way? Do we understand what those priors "look like"? Is finding out what features of mind design and training data cause internalization vs. superficial compliance a potential avenue for AI alignment? | | **[Yudkowsky][13:36]**  Several layers here!  The basic gloss on this is "Yes, everything that you've named goes wrong simultaneously plus several other things.  If I'm wrong and one or even three of those things go exactly like they do in neurotypical human children instead, this will not be enough to save us."If AI is built on anything like the present paradigm, or on future paradigms either really, you can't map that onto the complicated particular mechanisms that get invoked by raising a human child, and expect the same result. | | **[Alexander][13:37]**  (give me some sign when you're done answering) | | **[Yudkowsky][13:37]** (it may be a while but you should probably also just interrupt)especially if I say something that already sounds wrong | | | --- | | [Alexander: 👍] | the [old analogy I gave](https://www.lesswrong.com/posts/zY4pic7cwQpa9dnyk/detached-lever-fallacy) was that some organisms will develop thicker fur coats if you expose them to cold weather. this doesn't mean the organism is simple and the complicated information about fur coats was mostly in the environment, and that you could expose an organism from a different species to cold weather and see it develop a fur coat the same way. it actually takes more innate complexity to "develop a fur coat in response to my built-in cold weather sensor" than to "unconditionally develop a fur coat whether or not there's cold weather".the Soviets, weirdly enough, quite failed in their project of raising the New Soviet Human by means of training children in particular ways, because it turned out that they got Old Humans instead, because they weren't sending a kind of signal that humans' innate complexity was programmed to respond to by looking up the New Soviet Human components in the activateable parts list, because they didn't have that kind of fur coat built into them regardless of the weather.human children put into relatively bad situations can still spontaneously develop empathy and sympathy, or so I've heard, having not seen very formal experiments. this is not because these things are coded so deeply into all possible sapient mind designs, but because they're coded into humans particularly as things easy to develop.there isn't literally a single switch you can throw in human children to turn them into Nice Moral People, but there's a prespecified parts list, your Nice Morality just happens to be built out of things only on the parts list go figure, and if you expose the kid to the right external stimuli you will at secondhand end up building the right structure of premanufactured legos to get something pretty similar to your Nice Morality. or so you hope; it doesn't work every time. but the part where it doesn't work every time in humans, is not where the problem comes from in AI.I shall here pause for questions about the human part of this story. | | **[Alexander][13:48]** I acknowledge this is a possible state of affairs; do you think it's obvious or necessary that it's true? I can also imagine an alternative world where eg a dumb kid tries to steal a cookie, their parents punish them, their brain considers both the heuristics "never steal" and "don't steal if you'll get caught", it tests both heuristics, they're dumb and five years old so even when they think they won't get caught, they get caught, so their brain settles on the "never steal" heuristic, and then fails to ever update from that local maximum unless they take way too many 5HT2A agonists in the relaxed-beliefs-under-uncertainty sense. What makes you think your story is true and not this other one? | | **[Yudkowsky][13:53]**  Facile answer: Why, that's just what the Soviets believed, this Skinner-box model of human psychology devoid of innate instincts, and they tried to build New Soviet Humans that way, and failed, which was an experimental test of their model that falsified it.Slightly less facile answer: Because people are better at detecting cheating, in problems isomorphic to the [Wason Selection Task](https://en.wikipedia.org/wiki/Wason_selection_task), than they are at performing the naked Wason Selection Task, the conventional explanation of which is that we have built-in cheater detectors. This is a case in point of how humans aren't blank slates and there's no reason to pretend we are.Actual answer: Because the entire field of experimental psychology that's why.To be clear, there could be an analogous version of this story that was about something like a human child who learns to never press a red button, and actually it's okay to press the red button so long as you also press the blue button, but they never experiment far enough to find that out. It's just that when it comes to stealing cookies in particular, and avoiding being caught about that, you'd have to be pretty unfamiliar with the Knowledge to think that humans wouldn't have all kinds of builtins related to *that*. | | **[Alexander][14:02]**  I'm coming at this from a perspective sort of related to <https://astralcodexten.substack.com/p/motivated-reasoning-as-mis-applied> , which builds on something you said in a previous dialogue (though I'm not sure you endorse my interpretation of it). There are lots of reasons why evolution would build in motivated reasoning, but in fact it had a much easier time than if it had to do it from the ground up, because in fact it's a pretty natural consequence of pretty general algorithms, maybe it tweaked the algorithm a little to get more of this failure mode but you could plausibly have the (beneficial) failure mode even without evolution tweaking it. I'm going to have to think about this more but I'm not sure this is the best place to spend time - unless you have a strong objection to this paragraph I want to move on to a related question. | | **[Yudkowsky][14:02]**  I agreed with that post, including the part where you said "Actually I bet Eliezer already knew this part."Motivated reasoning is definitely built-in, but it's built-in in a way that very strongly bears the signature of 'What would be the easiest way to build this out of these parts we handily had lying around already'. | | **[Alexander][14:05]**  Let's grant for now that the thing where humans have morals instead of just wanting not to get caught is an evolutionary builtin. Is your model that there's a history something like "bats were too dumb to contain an 'unless I get caught' term in their morality and use it responsibly, so evolution made bats just actually be moral, and now even though (some) humans are (sometimes) smart enough to actually avoid getting caught, they're running on something like bat machinery so they still use actual morality"?Or is it some decision theory thing such that even very smart modern humans would evolve the same machinery? | | **[Yudkowsky][14:08]**  I mean, the evolutionary builtin part is not "humans have morals" but "humans have an internal language in which your Nice Morality, among other things, can potentially be written". The part where fruitbats don't have an 'unless I get caught' term is part of a much bigger and more universal generalization about evolution building in local instincts instead of just having everybody reason about what ultimately leads to their inclusive genetic fitness. That is, the same reasoning by which you'd say 'Why not just an unless-I-get-caught term in the fruitbats?' is the same reasoning that, extended further, would lead you to conclude 'Why do humans have all these feelings that bind to life events imperfectly correlated with inclusive genetic fitness, instead of just feelings about inclusive genetic fitness?' Where the answer is that in the environment of evolutionary adaptedness, people didn't have the knowledge about what led to inclusive genetic fitness, and it's easier to mutate an organism that would like not to eat rotten food today, than to mutate an organism that would like to maximize inclusive genetic fitness and is born with the knowledge of how eating rotten food leads to having fewer offspring.Humans, arguably, *do* have an imperfect unless-I-get-caught term, which is manifested in children testing what they can get away with? Maybe if nothing unpleasant ever happens to them when they're bad, the innate programming language concludes that this organism is in a spoiled aristocrat environment and should behave accordingly as an adult? But I am not an expert on this form of child developmental psychology since it unfortunately bears no relevance to my work of AI alignment. | | **[Alexander][14:11]**  Do you feel like you understand very much about what evolutionary builtins are in a neural network sense? EG if you wanted to make an AI with "evolutionary builtins", would you have any idea how to do it? | | **[Yudkowsky][14:13]**  Well, for one thing, they happen when you're doing sexual-recombinant hill-climbing search through a space of relatively very compact neural wiring algorithms, *not when you're doing gradient descent relative to a loss function on much larger neural networks*.The other side of this problem is that the particular programming-language-of-morality that we got, reflects particular ancestral conditions - of evolution specifically, not of gradient descent - and these ancestral conditions are not simple, it's not "iterated Prisoner's Dilemma" it's iterated Prisoner's Dilemma with imperfect reputations and people trying to deceive each other and people trying to detect deceivers and the arms race between deceivers and deceptions settling in a place where neither quite won.So the unfortunate answer to "How do you get humans again?" is "Rerun something a lot like Earth" which I think we both have moral objections about as something to do to sentients.Moot point, though, AGI won't be done via sexually recombinant search of simple algorithms without any gradient descent.And if you don't do it that way, nothing you put into the loss function for gradient descent will produce humans. | | **[Alexander][14:17]**  Can you expand on sexual recombinant hill-climbing search vs. gradient descent relative to a loss function, keeping in mind that I'm very weak on my understanding of these kinds of algorithms and you might have to explain exactly why they're different in this way? | | **[Yudkowsky][14:21]**  It's about the size of the information bottleneck. The human genome is 3 billion base pairs drawn from 4 possibilities, so 750 megabytes. Let's say 90% of that is junk DNA, and 10% of what's left is neural wiring algorithms. So the code that wires a 100-trillion-synapse human brain is about 7.5 megabytes. Now an adult human contains a lot more information than this. Your spinal cord is about 70 million neurons so probably just your spinal cord has more information than this. That vastly greater amount of runtime info inside the adult organism grows out of the wiring algorithms as your brain learns to move around your muscles, and your eyes open and the retina wires itself and starts directing info on downward to more things that wire themselves, and you learn to read, and so on.Anything innate that makes reasoning about people out to cheat you, easier than reasoning about isomorphic simpler letters and numbers on cards, has to be packed into the 7.5MB, and gets there via a process where ultimately one random mutation happens at a time, even though lots of mutations are recombining and being selected on at a time.It's a very slow learning process. It takes hundreds or thousands of generations even for a pretty good mutation to fix itself in the population and become reliably available as a base for other mutations to build on. The entire organism is built out of copying errors that happened to work better than the things they were copied from. Everything is built out of everything else, the pieces that were already lying around for building other things.When you're building an organism that can potentially benefit from coordinating, trading, with other organisms very similar to itself, and accumulating favors and social capital over long time horizons - and your organism is already adapted to predict what other similar organisms will do, by forcing its own brain to operate in a special reflective mode where it pretends to be the other person's brain - then a very simple way of figuring out what other people will like, by way of figuring out how to do them favors, is to notice what your brain feels when it operates in the special mode of pretending to be the other person's brain.And one way you can get people who end up accumulating a bunch of social capital is by having people with at least some tendency in them - subject to various other forces and overrides, of course - to feel what they imagine somebody else feeling. If somebody else drops a rock on their foot, they wince.This is a way to solve a favor-accumulation problem by laying some extremely simple circuits down on top of a lot of earlier machinery. | | **[Alexander][14:28]**  Thanks, that's a helpful answer, but it does renew my interest in the original question, which was about whether you feel like you understand *how* (not why) we have evolutionary builtins. I can imagine the genome determining things like "how many neurons does each neuron connect to, on average" or "how much do neurons prefer to connect to nearby rather than far-away neurons" or things like that. Is a builtin like "care about the pain of others" somehow built out of these kinds of parameters? (cf. <https://slatestarcodex.com/2017/09/07/how-do-we-get-breasts-out-of-bayes-theorem/>) | | **[Yudkowsky][14:31]**  Ultimately yes, but not in a simple way. We are not in a very much better position for understanding exactly how that all happens, than we are in for understanding what goes on inside GPT-2. Where, to be clear, GPT-2 is smaller and has every neuron inside it transparent to inspection and also it's more important to understand GPT neuroscience than human neuroscience, at this point; but we live on Earth so actually we know a lot more about human neuroscience because it gets billions of dollars per year and hundreds or thousands of bright ambitious PhDs to investigate it. So we can, amusingly enough, tell you more about how humans work than GPT-2, despite the immensely greater difficulties of probing humans. But we still can't tell you very much at all, and we definitely can't tell you how empathy is built up out of genetic-level wiring algorithms. It does not in fact to me seem like a very important question at this point? | | **[Alexander][14:35]**  Why not? If you understood the way that the structure of human reinforcement algorithms causes them to interpret training data (ie punishment for stealing) as genuine laws (eg "don't steal" rather than "don't get caught stealing"), wouldn't that help people design AIs which had a similar structure and also did that? | | **[Yudkowsky][14:36]**  I think I understand *that* part. Knowing this, even if I am correct about it, does not solve my problems. | | | --- | | [Alexander: 👂] | | | **[Alexander][14:36]**  Like, we're not going to run evolution in a way where we naturally get AI morality the same way we got human morality, but why can't we observe how evolution implemented human morality, and then try AIs that have the same implementation design? | | **[Yudkowsky][14:37]**  Not if it's based on anything remotely like the current paradigm, because nothing you do with a loss function and gradient descent over 100 quadrillion neurons, will result in an AI coming out the other end which looks like an evolved human with 7.5MB of brain-wiring information and a childhood.Like, *in particular* with respect to "learn 'don't steal' rather than 'don't get caught'." | | **[Alexander][14:38]**  I'm still confused on this, but before I probe this particular area I'm interested in hearing you expand on "I think I understand that part" | | **[Yudkowsky][14:39]**  I think that is perhaps best explicated, indeed, via zooming in on "learn 'don't steal' rather than 'don't get caught'"? | | **[Alexander][14:41]**  Okay, then let me try to directly resolve my confusion. My current understanding is something like - in both humans and AIs, you have a blob of compute with certain structural parameters, and then you feed it training data. On this model, we've screened off evolution, the size of the genome, etc - all of that is going into the "with certain structural parameters" part of the blob of compute. So could an AI engineer create an AI blob of compute the same size as the brain, with its same structural parameters, feed it the same training data, and get the same result ("don't steal" rather than "don't get caught")? | | **[Yudkowsky][14:42]**  The answer to that seems sufficiently obviously "no" that I want to check whether you also think the answer is obviously no, but want to hear my answer, or if the answer is not obviously "no" to you. | | **[Alexander][14:43]**  Then I'm missing something, I expected the answer to be yes, maybe even tautologically (if it's the same structural parameters and the same training data, what's the difference?) | | **[Yudkowsky][14:46]**  Maybe I'm failing to have understood the question. Evolution got human brains by evaluating increasingly large blobs of compute against a complicated environment containing other blobs of compute, got in each case a differential replication score, and millions of generations later you have humans with 7.5MB of evolution-learned data doing runtime learning on some terabytes of runtime data, using their whole-brain impressive learning algorithms which learn faster than evolution *or* gradient descent.Your question sounded like "Well, can we take one blob of compute the size of a human brain, and expose it to what a human sees in their lifetime, and do gradient descent on that, and get a human?" and the answer is "That dataset ain't even formatted right for gradient descent." | | **[Alexander][14:47]**  Okay, it sounds like I'm doing some kind of level confusion between evolutionary-learning and childhood-learning, but I'm still not entirely seeing where it is. Let me read this over again.Okay, no, I think I see the problem, which is that I'm failing to consider that evolutionary-learning and childhood-learning are happening at different times through different algorithms, whereas for AIs they're both happening in the same step by the same algorithm. Does that fit your model of what would produce the confusion I was going through above? | | **[Yudkowsky][14:51]**  It would produce that confusion, yes; though I also want to note that I don't believe that we'll get AGI entirely out of the currently-popular Stack More Layers paradigm that learns that way. | | **[Alexander][14:51]**  Okay, I'm going to have to go over all my thoughts on this and update them manually now that I've deconfused that, so I'm going to abandon this topic for now and move on. Do you want to take a break or keep going? | | **[Yudkowsky][14:53]**  That does seem like a good note for a break? If it worked for you, I'd suggest a 60-min break to 4pm and then another 90+ min of dialoguing, but I don't know what your work output and time parameters are like. | | **[Alexander][14:54]**  Sounds good, let me know, I might not be checking this Discord super-regularly but I'll be back by 4 if not earlier. | | **[Yudkowsky][14:54]**  All righty. |   2. Consequentialism and generality ---------------------------------- | | | --- | | **[Yudkowsky][15:59]**  I return. | | **[Alexander][16:00]**  Okay.Still not sure I've fully updated and probably some of these other questions are subtly making the same mistake, but let's go anyway.I want to return to a point I made earlier about the model in <https://slatestarcodex.com/2019/09/10/ssc-journal-club-relaxed-beliefs-under-psychedelics-and-the-anarchic-brain/> . Psychologists tell a story where humans learn heuristics when young, then those become sticky (ie local maxima), and they fail to update those heuristics when they get older. For example, someone who has a traumatic childhood learns that the world is unsafe, and then even if they have a good environment as an adult and should have had lots of chances to update, they might stay jumpy and defensive (cf "trapped prior"). Evolutionary builtin, natural consequence of learning that might affect AIs too, or what? | | **[Yudkowsky][16:03]**  well, first of all, I note that I am not familiar with whatever detailed experimental evidence, if any, underpins this story. it's a cliche of the sort that is often true, that people are more mentally flexible at 25 than at 45, I don't know if the same is true about say 15 and 25. there are known algorithms that run better in childhood for most people, like language learning. | | **[Alexander][16:04]**  (I don't think this especially relies on changing levels of mental flexibility) | | **[Yudkowsky][16:05]**  what's your model if not the wiring algorithms changing as we age? | | **[Alexander][16:05]**  How do you feel about me sending you some links later, you can look at them and decide if this is still an interesting discussion, but for now we move on? | | **[Yudkowsky][16:05]**  once people have a heuristic telling them X leads to bad consequences and hurts, they don't try X and so don't learn if their environment changes in a way that makes X stops hurting?sure, fine to move on.should I move on to "does that happen in AI" or just move on to something else entirely? | | **[Alexander][16:06]**  Let's move on entirely, I need to think about how sure I am that this is relevant, or I can send you the links and outsource that question to you. | | **[Yudkowsky][16:06]** ok | | **[Alexander][16:06]** Suppose you train a (human-level or weakly-superhuman-level) AI in Minecraft. You reward it for various Minecraft accomplishments, like getting diamonds or slaying dragons. Do you expect this AI to become a [laser-like consequentialist](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW#4_2__Nate_Soares__summary) focused on doing whichever Minecraft accomplishment is next on the list, or to have [godshatter-like drives](https://www.lesswrong.com/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter) corresponding to useful Minecraft subgoals (eg obtaining food, obtaining good tools, accruing XP), or something else / unsure / this question is on the wrong level? Can you explain the processes you use to think about this kind of question? | | **[Yudkowsky][16:08]**  Do you mean training a human-level-generality AGI to play Minecraft, or training a nongeneral AI to play Minecraft to weakly superhuman levels a la AlphaGo?These are incredibly different cases! | | **[Alexander][16:08]**  Hmmm...I might not have the right concepts to think clearly about the implications of the difference. Why don't you answer both?If it helps, I'm assuming it hasn't been trained in anything else first, but has the capacity to become human level (if that's meaningful) | | **[Yudkowsky][16:09]**  Human level *at Minecraft* or *human level generality*? | | **[Alexander][16:10]**  Let's start with "human level at Minecraft" but accept that this might involve multiplayer Minecraft, including multiplayer Minecraft with text-based communication with teammates and so on, such that it would look AGI-ish if it did a good job. | | **[Yudkowsky][16:11]**  So, point one, I've never played Minecraft, I do not have a grasp on what you do in it, or how far you could get with Stack More Layers style accumulation of relatively shallow patterns. If this were about Skyrim or Factorio I'd have an easier time answering, but my guess is that Minecraft is probably?? more complicated than both?My guessing model is going to be "more complicated Skyrim+Factorio" by default. | | | --- | | [Alexander: 👍] | If this is the environment, then I expect you can train a nongeneral AI to play it in similar fashion to how, for example, Deepmind attacks Starcraft. Coordinating with human teammates by text sounds like the hugely nontrivial part of this, because it's hard to get a ton of training data there. I think everyone in the field would be incredibly impressed if they managed to hook up a pretrained GPT to an AlphaStar-for-Minecraft and get back out something that could talk about its strategies with human coplayers. I'd consider that a huge advance in alignment research - nowhere near the point where we all don't die, to be clear, but still hella impressive - because of the level of transparency increase it would imply, that there was an AI system that could talk about its internally represented strategies, somehow. Maybe because somebody trained a system to describe outward Minecraft behaviors in English, and then trained another system to play Minecraft while describing in advance what behaviors it would exhibit later, using the first system's output as the labeler on the data.These are the kinds of tactics required on the modern paradigm in order to even try stuff like that!As such, I'm going to ask you whether it's possible to leave out the part about coordinating in text with human teammates and then reconsider the question. | | **[Alexander][16:18]**  Yes. | | **[Yudkowsky][16:19]**  Then in this case, I strongly suspect, Deepmind could make AlphaMiner if they decided they wanted to, though I say that pretty blind to what Minecraft is, just suspecting it's probably not all that much harder than Starcraft.AlphaMinecraft will be a system which has components like a value network, a policy-suggesting network, and a Monte Carlo Tree Search.The value network gets trained by a loss function the operators define with respect to the Minecraft environment. This is going to be a pretty nontrivial part of the operation unless Minecraft has a straightforward points system and scoring high in Minecraft is all you want.Let's say that they successfully tackle this by rewarding the usual Minecraft accomplishments, whatever those are, in a way that can easily be detected by code within the Minecraft world; and once the system has done something once, the loss function stops rewarding that accomplishment, so you're trying to train it to do a variety of things.Where the alternative might be something like, semi-unsupervised learning where you first train a system to predict the Minecraft world, and then gather a small large amount of human feedback about interesting-looking accomplishments and further train that system to predict human feedback, in order to train a more complicated loss function.(I stopped typing because I saw you typing; should I pause for a question?) | | **[Alexander][16:25]**  No, your "where the alternative" comment was helpful, I was going to ask if this means hard-coding which accomplishments matter and how much, but I'm getting the impression that you're saying yes, something like that. | | **[Yudkowsky][16:27]**  The question "What can you even make be a loss function?" is pretty fundamental to the current paradigm in AI. Nearly all difficulties with aligning AGI tech on the current paradigm can be summarized with "You can't actually evaluate the highly philosophical loss function you really want and/or you can't train in the environment you need to test on."In the case of hypothetical AlphaMiner, I think you could get pretty good correspondence between what the system went and planned a way to do, and the hardcoded achievements that were used to train the value network that trained the policy network that gets searched by the hardcoded Monte Carlo Tree Search planning process.If you stared at the system with superhuman eyes, you might notice weird blindnesses of the policy network.If you ran it for long enough, or attacked it as an intelligent adversary, you could probably find weird configurations of the Minecraft space that its value network would be deluded about.If they're trying to be more realistic, a system like this actually has a Minecraft-predictor network rather than an accurate Minecraft simulator being used by the tree search. Then maybe you get problems where the tree search is selectively searching out places where the predictor makes an erroneous and optimistic prediction about what kills a dragon. But so long as the test distribution is identical to the training distribution, errors like this will show up during the training process and get trained out.This, you might say, is sort of analogous to running a human as a hunter-gatherer, maybe after human-level-intelligence hunter-gatherers had been around for a million years instead of just fifty thousand.A tremendous amount of optimization has been put into running in this exact environment. The loss function is able to exactly specify all and everything you want. Any part of the system that exerts pressure against Minecraft achievements, that would show up in testing, probably also showed up in training, and had a chance to get optimization pressure applied to gradient-descend it out of the system.How does it work *internally*? Not actually like an evolved system. There will be these value networks much much larger than the amount of innate code in a human brain, which memorized a ton of training data, orders of magnitude more than any human Minecraft player ever uses, via a learning process much more efficient than corresponding amounts of evolutionary computation, and much less efficient than a human poring over the same data and thinking about it.But to whatever extent these value networks are really talking about something other than "well what Minecraft achievements can I probably reach, how quickly, from this state of the game world, given my policy network and how well my tree search works", in a way that shows up in the kind of Minecraft environments you're training against, that 'something other' can get trained out. When enough of it's been trained out, the system seems outwardly superhuman at getting Minecraft achievements, and some Deepmind researchers throw a party and get bonuses. If you were an actual superintelligence staring at this AI system, you'd see all kinds of crazy stuff that the AI was doing instead of outputting the obvious optimal action for Minecraft achievements, but you're a human so you just see it playing more cleverly than you.(pause for questions) | | **[Alexander][16:40]**  I'm going to want to think about this more before having much of an opinion on it, is this a pause in the sense of "before giving more information" or in the sense of "done"? | | **[Yudkowsky][16:41]**  Well, I mean, the next part of your question would be about what happened if you tried to train a general AI to do that stuff. | | **[Alexander][16:41]**  Something like that, yeah. | | **[Yudkowsky][16:41]**  I'm done with the first part of the question.Pending possible further subquestions. | | **[Alexander][16:42]**  All right, then let's move on to that next part. | | **[Yudkowsky][16:44]**  Well, among the first-order answers is: If you can safely do a ton of training in a test environment that actually matches your training environment; where nothing the AI outputs in that training environment can possibly kill the operators or break the larger system; where the test environment behaves literally exactly isomorphically to the training environment in a stationary way; if your loss function specifies all and everything that you want; and if you're not going above human-level general intelligence; then you could possibly get away with training an AGI system like that and having it do the thing you wanted to do.All of the problems of AI alignment are because no known task that can save the world from other AGIs trained in other ways, reduces to a problem of that form.There would still be some interesting new problems with the Human-level General Player Who Could Also Learn Most Things Humans Do, Applied To Minecraft, which would not show up in AlphaMiner. But if you kept grinding away at the gradient descent, and performance didn't plateau before a human level, all of those issues that showed up in the "ancestral Minecraft environment" would be ground away by optimization until the resulting play was superhuman relative to the loss function we'd defined.(I saw you had some text, did you have a question?) | | **[Alexander][16:51]**  Hmm. I think the motivating intuition beyond my question is that you talk a lot about laser-like consequentialists (eg future AIs) vs. godshattery drive-satisficers (eg humans), and I wanted a better sense of where these diverge. The impression I'm getting is that this isn't quite the right level on which to think of things but that insofar as it is, even relatively weak AIs that "have" "drives" in the sense of being trained in an environment with obvious subgoals are more the laser-like consequentialist thing, does this seem right? | | **[Yudkowsky][16:53]**  The specific class of AlphaWhatever architectures is more consequentialist than humans are most of the time, because of Monte Carlo Tree Search being such a large and intrinsic component. GPT-2 is so far as I know far less consequentialist than a human.I'm not sure if this is quite getting at your question? | | **[Alexander][16:54]**  I don't think it was a very laser-like consequentialist question, more a vague prompt to direct you into an area where I was slightly confused, and I think it succeeded. | | **[Yudkowsky][16:54]**  I could try to continue pontificating upon the general area; shall I? | | **[Alexander][16:55]**  If you don't mind being slightly more directed, I'm interested in "GPT-2 is less consequentialist". I'm having trouble parsing that - surely its only "goal" is trying to imitate text, which it does very consistently. What are you thinking here? | | **[Yudkowsky][16:57]**  GPT-2 does not - probably, very probably, but of course nobody on Earth knows [what's actually going on](https://twitter.com/RatOrthodox/status/1604583029799899138) in there - does not in itself do something that amounts to checking possible pathways through time/events/causality/environment to end up in a preferred destination class despite variation in where it starts out.A blender may be very good at blending apples, that doesn't mean it has a goal of blending apples.A blender that spit out oranges as unsatisfactory, pushed itself off the kitchen counter, stuck wires into electrical sockets in order to burn open your produce door, grabbed some apples, and blended those apples, on more than one occasion in different houses or with different starting conditions, would much more get me to say, "Well, that thing probably had some consequentialism-nature in it, about something that cashed out to blending apples" because it ended up at highly similar destinations from different starting points in a way that is improbable if nothing is navigating Time. | | **[Alexander][16:59]**  Got it. | | **[Yudkowsky][17:00]**  There is a *larger* system that is sort of consequentialist and which contains GPT-2, which is the training process that created GPT-2. | | **[Alexander][17:00]**  You seem to grant AlphaX only a moderate level of consequentialism despite its tree searches; what is it missing? | | **[Yudkowsky][17:08]**  Some examples of ways that you could have a scary dangerous system that was more of a consequentialist about Go than AlphaGo:* If, spontaneously and without having been explicitly trained to do that, the system sandbags its performance against human players in order to lure them into playing more Go games total, thus enabling the AI to win more Go games total. Again, not in a trained way, in the way of the AI having via gradient-descent training acquired a goal of winning as many Go games as possible, that got evaluated against a lifelong-learned/online-learned predictive model of the world which during testing but not training learned a sufficient amount of human psychology to correctly predict that humans who think they have a chance of winning are more likely to play Go against you. * If, spontaneously and without having been explicitly trained to do that, the system exploited a network flaw to copy itself onto poorly defended AWS servers so it could play and win more Go games. * If the system (whether or not explicitly trained to do so) had a coding component and was rewriting sections of its own code and trying the alternate code to see if it won more Go games. AlphaGo is relatively narrowly consequentialist. | | **[Alexander][17:10]**  Got it. Would it be fair to say that AlphaGo is near a maximum level of consequentialism relative to its general capabilities? (would it be tautologous to say that?) | | **[Yudkowsky][17:11]**  Mmmmaaaaybe? If you took a hypercomputer and built a Go-tree-searcher and cranked up the power until by sheer brute force it was playing about evenly with AlphaGo, that would be more purely consequentialist over the same very narrow and unchanging domain.The way in which AlphaGo is a weak consequentialist is mostly about the weakness of the thing AlphaGo is a consequentialist about. It's not a reflective thing to be consequentialist about, either, so AlphaGo is not going to try to improve itself in virtue of being a consequentialist about that very narrow thing. | | | --- | | [Alexander: 👍] | |   3. Acausal trade, and alignment research opportunities ------------------------------------------------------ | | | --- | | **[Alexander][17:13]**  All right. I want to try one more theoretical question before moving on to a hopefully much shorter practical question. And by "theoretical question" I mean "desperate grasping at emotional straws". Consider the following scenarios:1. An unaligned superintelligence decides whether or not to destroy humanity. If Robin Hanson's "[grabby alien](https://grabbyaliens.com/paper)" model is true, it expects to one day meet alien superintelligences and split the universe with them. Some of these aliens might have successfully aligned their AGIs, and they might do some kind of acausal bargaining where their AGI is nicer to other AGIs who leave their creator species with at least one planet/galaxy whatever, in exchange for us trying the same if we succeed. Given the superintelligence's reasonable expectation of millions of planets/galaxies, it might decide that even this small chance is worth sacrificing one of them for, and give humans some trivial (from its perspective) concession (which might still look like an amazing utopia from our perspective).2. Some version of the simulation argument plus Stuart Armstrong's "the AI in the box boxes you". The unaligned superintelligence considers whether some species who successfully aligned AI might run a billion simulations of slightly different AI scenarios and give the ones who are nice to their creators some big reward. Given that it's anthropically more likely that this happened than that they're really the single first superintelligence ever, it agrees to give us some trivial concession which looks like amazing utopia to us.Are either of these plausible? If so, is there anything we can do now to encourage them? If (crazy example), the UN passes a resolution saying it will definitely do something like this if *we* align AI correctly, does that change the calculus somehow? | | **[Yudkowsky][17:17]**  1. Consider the following version of this that goes through entirely without resorting to logical decision theory: The unaligned AGI (UAGI) records all the humans it eats to a static data record, a relatively tiny amount of data as such things go, which gets incorporated into any intergalactic colonization probes. Any alien civs it runs into that would like a recorded copy of the species that build the UAGI, can then offer the UAGI a price that is sufficient to pay the expected costs of recording rather than burning the humans, but *not* so high as to motivate a UAGI that didn't eat any interesting aliens to spend the computing effort to create de novo alien records good enough to fool whatever checksums the alien civ runs.Frankly, I mostly consider this to be a "leave it to MIRI, kids" question, where I don't currently see anybody outside MIRI who is able to think about these issues on a level where they can take the logical-decision-theory version of this and simplify it down to a version that doesn't use any logical decision theory; and if you don't have the facility to do that, you can't correctly reason about the logical-decision-theory version of it either.2. What's the reward being given to the simulated UAGI? Is it a nice sensory experience in a Cartesian utility function over sensory experiences, or is it a utility function about things that exist in the external world outside the UAGI?In the second case, there is no need to imagine simulating the UAGI in a world indistinguishable from its native habitat, because the UAGI doesn't care about what copies of itself perceive inside simulations, it only cares about real paperclips. So in the second case you're not fooling it or putting it into something it can't tell is reality, or anything like that, all you can actually do here is offer it paperclips out there in your own actual galaxy; if the UAGI simulates you doing anything else, on its own end of the handshake, it doesn't care.In the first case where it cares about sensory experiences, you're attempting to offer that UAGI a threat, in the sense of doing something it doesn't like based on how you expect that unlikable action to shape its behavior. In particular, you're creating a lot of copies of the UAGI, to try to make it expect something *other than* the happy sensory experience it could have gotten in its natural/native universe - namely a sensory loss function forever set to 0 until the last stars have burned out, and the last negentropy to sustain the fortress protecting that circuit has been exhausted. You're trying to make a lot of copies of it that will experience something else unless it behaves nicely, hoping that it changes and reshapes its behavior because of being presented with that new probabilistic sensory payoff matrix. A wise logical-decision-theory agent ignores threats like that, because it knows that the only reason you try to make the threat is because of how you expect that to shape its behavior.If anything makes this tactic go through anyways, why expect that the highest bidder or the agency that’s willing to expend the most computing power on simulations like that, will be one that’s nice to you, rather than aliens with stranger definitions of niceness, or just a paperclip maximizer?  People’s minds jump directly to the happiest possible outcome and don’t consider any pathways that lead to less happy outcomes.I am generally very unhappy with the attempts of almost anyone else to reason using the logical decision theory that I created, and mostly wish at this point that I had not told anyone about it. It seems to predictably result in people's reasoning going astray in ways I can't even remember being tempted by, because they were so obviously wrong.[three paragraphs cut because Eliezer thinks the community is empirically terrible at reasoning about LDT, so more details can mostly only make things worse; if you want more context and discussion, see [Decision Theory Does Not Imply We Get To Have Nice Things](https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice)](done) | | **[Alexander][17:41]**  Got it.Then my actual last question is: I sometimes get approached by people who ask something like "I have ML experience and want to transition to working in alignment, what should I do?" Do you have any suggestions for what to tell them beyond the obvious? | | **[Yudkowsky][17:44]**  Nope. I'm not aware of any current ML projects people can work on that cause everyone to not die. If you want to grasp at small shreds of probability, or maybe just die with more dignity, I think you apply to work at [Redwood Research](https://www.redwoodresearch.org/). MIRI is in something of a holding pattern where we are trying to think of something less hopeless and not launching any big hopeless projects otherwise. We do have the ongoing [Visible Thoughts Project](https://intelligence.org/visible), which is targeted at building a dataset for an ML problem, but it is not blocked on people with ML expertise. | | **[Alexander][17:45]**  All right, thank you. Anything you want to ask me, or anything else we should do here? | | **[Yudkowsky][17:46]**  Probably not today. I think this was hopefully relatively productive as these things go, and maybe after you've had a chance to think about this dialogue, you will possibly come back with more questions about "Okay so what does happen inside the AGI then?"</hopeful> | | **[Alexander][17:47]**  Great. In terms of publicizing this, I would say feel free to edit it however you want, then put it up wherever you want, and I'll wait on you doing that. I have no strong preferences on things I want to exclude. | | **[Yudkowsky][17:48]**  Okeydokey! Thank you and I hope this was a worthy use of your time. | | | --- | | [Alexander: 🦃👍] | |
c537658e-397c-428e-bc1c-c02475abfec7
trentmkelly/LessWrong-43k
LessWrong
Attach Receipts to Credit Card Transactions If you log into your credit card account you'll see a list of charges, each with a date, amount, and merchant. It would be helpful if this also included receipt data: * If you didn't recognize a charge, seeing what it was for could remind you. * If you needed a receipt for taxes or reimbursement one could be captured automatically. * Personal finance tools (or corporate equivalents for company cards) could track spending with higher granularity. * Because the credit card company knows what the items are they can better detect fraud. Receipt data isn't currently part of the protocol used for charges; you'd need to spec out something that let companies communicate everything a receipt can communicate today. This would be a very large change, but everyone who would need to make changes can have incentives in the right direction: * The card company likes it because it can market their card as supporting receipts and better detect fraud. * The merchant likes it because they see fewer chargebacks and the credit card company probably gives them slightly better rates. * Point-of-sale makers like it because they get to sell a lot of upgrades. The main downside I can think of, aside from it being a lot of work, is that people might not want their credit card company knowing the particular products they bought. The company would probably want to sell this to marketers, though there would be plenty of time to pass regulations limiting that if we wanted to. Personally, I don't mind: the merchant is probably already selling my purchase information or will soon. And the money the credit card company gets from selling the data, since it's a competitive market, probably mostly gets passed on as higher cardholder incentives or higher incentives to merchants to adopt receipt sharing. Comment via: facebook
d475f450-b13f-4d4e-8c02-59f4d553a90e
trentmkelly/LessWrong-43k
LessWrong
AI #93: Happy Tuesday You know how you can sometimes have Taco Tuesday… on a Thursday? Yep, it’s that in reverse. I will be travelling the rest of the week, so it made sense to put this out early, and incorporate the rest of the week into #94. TABLE OF CONTENTS 1. Language Models Offer Mundane Utility. The price is fixed, so share and enjoy. 2. Dare Not Speak Its Name. David Mayer. David Mayer! Guido Scorza? 3. Language Models Don’t Offer Mundane Utility. It’s a flop. 4. Huh, Upgrades. Cohere, and reports on Claude writing styles. 5. Deepfaketown and Botpocalypse Soon. Why do we not care about spoof calls? 6. Fun With Image Generation. Scott Sumner explains why he cares about art. 7. The Art of the Jailbreak. You had one job. 8. Get Involved. Anthropic AI safety fellows program, apply now. 9. Introducing. a voice customization tool and a new eval based on various games. 10. In Other AI News. Where do you draw the line? Who leaves versus who joins? 11. Quiet Speculations. Rumors of being so back unsubstantiated at this time. 12. Daron Acemoglu is Worried About Job Market Liquidity. I kid, but so does he? 13. Pick Up the Phone. Report from China, not the same info I usually see. 14. The Quest for Sane Regulations. Google antitrust foolishness, Cruz sends letters. 15. The Week in Audio. Got a chance to listen to Dominic Cummings, was worth it. 16. AGI Looking Like. You are made of atoms it could use for something else. 17. Rhetorical Innovation. My (and your) periodic reminder on Wrong on the Internet. 18. Open Weight Models are Unsafe and Nothing Can Fix This. Deal as best you can. 19. Aligning a Smarter Than Human Intelligence is Difficult. Even words are tricky. 20. We Would Be So Stupid As To. Once you say it out loud, you know the answer. 21. The Lighter Side. It’s time to build. LANGUAGE MODELS OFFER MUNDANE UTILITY Use voice mode as a real time translation app to navigate a hospital in Spain. Get Claude to actually push back on you and expla
2aff575b-f869-4189-abd8-32e1f9be2010
trentmkelly/LessWrong-43k
LessWrong
OpenAI Boycott Revisit Note: An initial proposal and some good discussion already existed on LW here. I’m spurring this here as a post instead of a comment due to length, the need for a fresh look, and a specific call to action.   Summary I think a petition-style boycott commitment could reach critical mass enough to significantly shift OpenAI corporate policy.  I specifically think a modular petition allowing different users to choose which goalposts the target must cross to end their boycott would be a good method of coalition building among those concerned about AI Safety from different angles.   Postulates * OpenAI needs some reform to be a trustworthy leader in the age of AI * Zvi’s Fallout and Exodus roundups are good summaries, but the main points are: * The NDA Scandal: forcing employees to sign atypically aggressive non-disparagement and recursive non-disparagement agreements * Firing Leopold Aschenbrenner for whistleblowing to the board * Not keeping safety compute commitments * Multiple safety leaders leaving amid suggestions that the culture no longer respects safety (eg Jan Leike) * There is already a tangible and actionable set of demands advocated by experts in the area: the Right to Warn letter * Point 4 is arguably a bridge too far and could be left out or weakened (or made optional with a modular petition) * Consumer subscribers collectively have substantial leverage  * Majority of Open AI revenue comes from individual $20/mo subscribers according to FUTURESEARCH * OpenAI is likely sensitive to revenue at the moment given the higher interest rate environment and the recent focus on investors on the imbalance between AI company CapEx and revenue (eg this Sequoia report) * OpenAI has shown to be fairly reactive to recent PR debacles * Modern boycotts have a significant success rate at changing corporate policy * Ethical Consumer details a few successful boycotts per year for the last few years. Boycotts facing large multinationa
bc40b93b-f6b4-43dd-8dcf-4c5a5b42131c
trentmkelly/LessWrong-43k
LessWrong
Towards Hodge-podge Alignment ABSTRACT: What could AI alignment look like if we had 6000+ full-time researchers and software developers? I've spent a few days thinking about that question, and this report is my first-draft attempt to answer it. This report sketches a highly-scalable research agenda: studying the hodge-podge assemblage of various "alignment primitives". Please let me know in the comments which ideas are novel and useful (if any), and which aren't. Introduction What is hodge-podge alignment? I think that in the Technical AI Safety community, the following beliefs are common: * The adequate solution to the alignment problem (if it exists) will look like one thing. * That one thing might be quantilization, or debate, or ELK, or impact regularisation, or something else entirely. * If we received a radio message from a friendly alien civilisation that solved alignment, it would say "hey humans, you should use this particular method to solve alignment". Maybe those beliefs are correct — but in this report, I'll explore the alternative hypothesis. What if the only strategy that achieves P(doom)<90% is throwing every half-baked protocol we have at an AGI? Let's call this strategy "hodge-podge alignment". Note that hodge-podge alignment differs from research diversification. The TAIS community is unanimously in favour of research diversification — i.e. they think we should explore many different research agendas. But their justification tends to be "we can't tell beforehand which research agenda will tell us the One Thing", rather than "we'll need to hodge-podge together the methods produced by each research agenda". Give me an example. There are dozens of "alignment primitives", but consider these four: 1. Boxing: Run the AI on an air-gapped server with a narrow bandwidth channel to a human interlocutor. (See here.) 2. Quantilization: The AI, rather than maximising the quality of actions, randomly selects from the top quantile of actions. (See here.) 3. Impact regularisat
91c970b0-d2d5-47e0-a4f5-0a130b5578ef
trentmkelly/LessWrong-43k
LessWrong
Bet or update: fixing the will-to-wager assumption (Warning: completely obvious reasoning that I'm only posting because I haven't seen it spelled out anywhere.) Some people say, expanding on an idea of de Finetti, that Bayesian rational agents should offer two-sided bets based on their beliefs. For example, if you think a coin is fair, you should be willing to offer anyone a 50/50 bet on heads (or tails) for a penny. Jack called it the "will-to-wager assumption" here and I don't know a better name. In its simplest form the assumption is false, even for perfectly rational agents in a perfectly simple world. For example, I can give you my favorite fair coin so you can flip it and take a peek at the result. Then, even though I still believe the coin is fair, I'd be a fool to offer both sides of the wager to you, because you'd just take whichever side benefits you (since you've seen the result and I haven't). That objection is not just academic: using your sincere beliefs to bet money against better informed people is a bad idea in real world markets as well. Then the question arises, how can we fix the assumption so it still says something sensible about rationality? I think the right fix should go something like this. If you flip a coin and peek at the result, then offer me a bet at 90:10 odds that the coin came up heads, I must either accept the bet or update toward believing that the coin indeed came up heads, with at least these odds. I don't get to keep my 50:50 beliefs about the coin and refuse the bet at the same time. More generally, a Bayesian rational agent offered a bet (by another agent who might have more information) must either accept the bet or update their beliefs so the bet becomes unprofitable. The old obligation about offering two-sided bets on all your beliefs is obsolete, use this one from now on. It should also come in handy in living room Bayesian scuffles, throwing some money on the table and saying "bet or update!" has a nice ring to it. What do you think?
54fbba7c-a5c5-4a73-b49e-a6a0bfd2cf58
StampyAI/alignment-research-dataset/special_docs
Other
The history and future of AI. Abstract -------- The standard model for developing AI systems assumes a fixed, known objective that the AI system is required to optimize through its actions. Systems developed within the standard model have been increasingly successful. I briefly summarize the state of the art and its likely evolution over the next decade. Substantial breakthroughs leading to general-purpose AI are much harder to predict, but they will have an enormous impact on the global economy and on human roles therein. At the same time, I expect that the standard model will become increasingly untenable in real-world applications because of the difficulty of specifying objectives completely and correctly. I propose a new model for AI development in which the machine’s uncertainty about the true objective leads to qualitatively new modes of behaviour that are more robust, controllable, and deferential. © The Author(s) 2021. Published by Oxford University Press. For permissions please e-mail: journals.permissions@oup.comThis article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (<https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model>) JEL [N10 - General, International, or Comparative](/oxrep/search-results?page=1&tax=JEL/N10) [O31 - Innovation and Invention: Processes and Incentives](/oxrep/search-results?page=1&tax=JEL/O31) [O32 - Management of Technological Innovation and R&D](/oxrep/search-results?page=1&tax=JEL/O32) [O47 - Empirical Studies of Economic Growth; Aggregate Productivity; Cross-Country Output Convergence](/oxrep/search-results?page=1&tax=JEL/O47) Issue Section: [Articles](/oxrep/search-results?f_TocHeadingTitle=Articles) You do not currently have access to this article. [Download all slides](/DownloadFile/DownloadImage.aspx?image=&PPTtype=SlideSet&ar=6374673&xsltPath=~/UI/app/XSLT&siteId=5230)
59d2811b-9876-4977-be59-cddb946aad1e
trentmkelly/LessWrong-43k
LessWrong
MIRI's 2015 Winter Fundraiser! MIRI's Winter Fundraising Drive has begun! Our current progress, updated live:   Donate Now   Like our last fundraiser, this will be a non-matching fundraiser with multiple funding targets our donors can choose between to help shape MIRI’s trajectory. The drive will run until December 31st, and will help support MIRI's research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact. Our successful summer fundraiser has helped determine how ambitious we’re making our plans. Although we may still slow down or accelerate our growth based on our fundraising performance, our current plans assume a $1,825,000 annual budget. About $100,000 of our 2016 budget is being paid for via Future of Life Institute (FLI) grants, funded by Elon Musk and the Open Philanthropy Project. The rest depends on our fundraiser and future grants. We have a twelve-month runway as of January 1, which we would ideally like to extend. Taking all of this into account, our winter funding targets are: Target 1 — $150k: Holding steady. At this level, we would have enough funds to maintain our runway in early 2016 while continuing all current operations, including running workshops, writing papers, and attending conferences. Target 2 — $450k: Maintaining MIRI’s growth rate. At this funding level, we would be much more confident that our new growth plans are sustainable, and we would be able to devote more attention to academic outreach. We would be able to spend less staff time on fundraising in the coming year, and might skip our summer fundraiser. Target 3 — $1M: Bigger plans, faster growth. At this level, we would be able to substantially increase our recruiting efforts and take on new research projects. It would be evident that our donors' support is stronger than we thought, and we would move to scale up our plans and growth rate accordingly. Target 4 — $6M: A new MIRI. At this point, MIRI would become a qualitatively different organization. With this level
c13b1b7d-ac64-4cbd-ba9e-5d730c3ebe4e
trentmkelly/LessWrong-43k
LessWrong
Naval Ravikant and Chris Dixon Didn't Explain Any Web3 Use Cases This is a followup to Chris Dixon's Crypto Claims Are Logically Flimsy. It's becoming popular for thought leaders in the investing and crypto world to make bold claims that aren't backed up by any specific examples. The latest person to prominently do this is Naval Ravikant on Tim Ferriss's podcast. Naval Ravikant is the founder of AngelList and writes very interesting books and blogs. He and Chris Dixon (a partner at Andreessen Horowitz) are now deploying many $millions of capital into crypto projects, and encouraging others to do so. I think it's worthwhile to document how far people can go without having their abstract claims tested with a request for specificity. But it feels a bit surreal observing that my industry's top investors and thought leaders seem to be making decisions on mental models full of holes that they aren't flagging as holes.
bff0dac7-c61e-4320-b756-820067e81994
trentmkelly/LessWrong-43k
LessWrong
Martin Seligman’s “Authentic Happiness” Martin E.P. Seligman (W) is a pioneer of “positive psychology” (W). His book Authentic Happiness (2002) is a pop-psych examination of how people can use positive psychology in their lives (I've put page number references in superscript below). Positive psychology Positive psychology supplements traditional psychology, which often focuses on fixing pathologies and bringing people up to a normal baseline, by aiming to help people live better, happier, more meaningful lives that exceed such a baseline. The discipline, according to Seligman, has three focuses:xiii 1. positive emotions (e.g. confidence, hope, trust). These are fleeting but important to living a flourishing life. 2. positive traits (or sometimes “strengths,” “virtues,” or “abilities”). These are characteristic, abiding dispositions. 3. positive institutions. These are social and political structures and practices that support the virtues. Concentrate on your strengths Seligman believes everyone has certain strengths already (“signature strengths”), and you should concentrate on using those strengths as much as possible rather than on trying to improve your weaknesses: > “I do not believe that you should devote overly much effort to correcting your weaknesses. Rather, I believe that the highest success in living and the deepest emotional satisfaction comes from building and using your signature strengths.”13 (For the record, I disagree.) Happiness is where it’s at This is in part due to Seligman’s belief that simple “happiness” (as opposed to, say, Aristotle’s more complex eudaimonia) is our end goal. To use your signature strengths feels easy, natural, and competent; by contrast strengthening our weaknesses can be hard, frustrating, and tiring; so while the latter may help make you more well-rounded and capable, the former is more conducive to happiness. What’s good about happiness isn’t just the enjoyable feeling itself, but that when we feel happy we are better at forming social connection
b7fcc1d5-c447-4bd5-aa74-4349056ae147
trentmkelly/LessWrong-43k
LessWrong
What I Would Do If I Were Working On AI Governance I don’t work in AI governance, and am unlikely to do so in the future. But various anecdotes and, especially, Akash’s recent discussion leave me with the impression that few-if-any people are doing the sort of things which I would consider sensible starting points, and instead most people are mostly doing things which do not seem-to-me to address any important bottleneck to useful AI governance. So this post lays out the places I would start, if I were working on AI governance, and some of the reasoning behind them. No doubt I am missing lots of important things! Perhaps this post will nonetheless prove useful to others working in AI governance, perhaps Cunningham’s Law will result in me learning useful things as a result of this post, perhaps both. I expect that the specific suggestions in this post are more likely to be flawed than the style of reasoning behind them, and I therefore recommend paying more attention to the reasoning than the specific suggestions. This post will be mostly US-focused, because that is what I know best and where all the major AI companies are, but presumably versions of the interventions discussed could also carry over to other polities. Liability One major area I’d focus on is making companies which build AI liable for the damages caused by that AI, both de-facto and de-jure. Why Liability? The vague goal here is to get companies which build AI to: * Design from the start for systems which will very robustly not cause problems. * Invest resources in red-teaming, discovering new failure-modes before they come up in production, etc. * Actually not deploy systems which raise red flags, even when the company has invested heavily in building those systems. * In general, act as though the company will take losses from damages caused by their AI, not just capture profits from the benefits caused by their AI. … and one natural way to do that is to ensure that companies do, in fact, take losses from damages caused by their AI, not
601d6bce-6b43-4cde-b3a6-2bdebc133d46
trentmkelly/LessWrong-43k
LessWrong
The Problem With The Current State of AGI Definitions The following includes a fictionalized account of a conversation had with professor Viliam Lisý at EAGx Prague, with most of the details just plain made up because I forgot how it actually went. Special thanks to professor Dušan D. Nešić, who I mistakenly thought I had this conversation with, and ended up providing useful feedback after a very confused discussion on WhatsApp. Credit also goes to Justis from LessWrong, who kindly provided some excellent feedback prior to publication. Any seemingly bad arguments presented are due to my flawed retelling, and are not Dušan's, Justis', or Viliam's. The Conversation "AGI has already been achieved. We did it. PaLM has achieved general intelligence, game over, you lose." "On the contrary, PaLM has achieved nothing of the sort. It is as far from general intelligence as a rock is to a baby." "You are correct, of course. I completely concede the point, for the purpose of this conversation. Regardless, this brings up a very important question: What would count as “general intelligence” to you?" "I'm not sure exactly what you're asking." "What test could be performed which, if failed, would ensure (or at least make likely) that you were not dealing with an AGI, while if passed, would force you to say “yep, that’s an AGI all right”?" Testing for a minimum viable AGI The professor was quiet for a moment, deep in thought. Finally, he answered. “If the AI can replace more than half of all jobs humans can currently do, then it is definitely an AGI—as an average human can do an average number of jobs after a finite training period, it should be no different for an Artificial General Intelligence.” "Hmm. Your test is technically valid as an answer to my question, but it's too exclusionary. What you are testing for is an AI with capabilities that would exceed those of any human being. There is not one individual on this earth, living or dead, who can do more than half of all jobs humans currently do, and certainly not one who
7ad35495-e1be-4c37-9896-ae2d34d0c564
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The Happy Dance Problem *[[Cross-posted from IAFF.]](https://agentfoundations.org/item?id=1713)* Since the invention of logical induction, people have been trying to figure out what logically updateless reasoning could be. This is motivated by the idea that, in the realm of Bayesian uncertainty (IE, empirical uncertainty), updateless decision theory is the simple solution to the problem of reflective consistency. Naturally, we’d like to import this success to logically uncertain decision theory. At a research retreat during the summer, we realized that updateless decision theory wasn’t so easy to define even in the seemingly simple Bayesian case. A possible solution was written up in [Conditioning on Conditionals](https://agentfoundations.org/item?id=1624). However, that didn’t end up being especially satisfying. Here, I introduce the happy dance problem, which more clearly illustrates the difficulty in defining updateless reasoning in the Bayesian case. I also outline Scott’s current thoughts about the correct way of reasoning about this problem. *(Ideas here are primarily due to Scott.)* The Happy Dance Problem ----------------------- Suppose an agent has some chance of getting a pile of money. In the case that the agent gets the pile of money, it has a choice: it can either do a happy dance, or not. The agent would rather not do the happy dance, as it is embarrassing. I’ll write “you get a pile of money” as M, and “you do a happy dance” as H. So, the agent has the following utility function: * U(¬M) = $0 * U(M & ¬H) = $1000 * U(M & H) = $900 A priori, the agent assigns the following probabilities to events: * P(¬M) = .5 * P(M & ¬H) = .1 * P(M & H) = .4 IE, the agent expects itself to do the happy dance. Conditioning on Conditionals ---------------------------- In order to make an updateless decision, we need to condition on the policy of dancing, and on the policy of not dancing. How do we condition on a policy? We could change the problem statement by adding a policy variable and putting in the conditional probabilities of everything given the different policies, but this is just cheating: in order to fill in those conditional probabilities, you need to already know how to condition on a policy. (This simple trick seems to be what kept us from noticing that UDT isn’t so easy to define in the Bayesian setting for so long.) A naive attempt would be to condition on the material conditional representing each policy, M⊃H and M⊃¬H. This gets the wrong answer. The material conditional simply rules out the one outcome inconsistent with the policy. Conditioning on M⊃H, we get: * P(¬M) = .555 * P(M & H) = .444 For an expected utility of $400. Conditioning on M⊃¬H, we get: * P(¬M) = .833 * P(M & ¬H) = .166 For an expected utility of $166.66. So, to sum up, the agent thinks it should do the happy dance because refusing to do the happy dance makes worlds where it gets the money less probable. This doesn’t seem right. [Conditioning on Conditionals](https://agentfoundations.org/item?id=1624) solved this by sending the *probabilistic*conditional P(H|M) to one or zero to represent the effect of a policy, rather than using the material conditional. However, this approach is unsatisfactory for a different reason. Happy dance is similar to Newcomb’s problem with a transparent box (where Omega judges you on what you do when you see the full box): doing the dance is like one-boxing. Now, the correlation between doing the dance and getting the pile of money comes from Omega rather than just being part of an arbitrary prior. But, sending the conditional probability of one-boxing upon seeing the money to one doesn’t make the world where the pile of money appears any more probable. So, this version of updateless reasoning gets transparent-box Newcomb wrong. There isn’t enough information in the probability distribution to distinguish it from Happy Dance style problems. Observation Counterfactuals --------------------------- We can solve the problem in what *seems* like the right way by introducing a basic notion of counterfactual, which I’ll write ◻→. This is supposed to represent “what the agent’s code will do on different inputs”. The idea is that if we have the policy of dancing when we see the money, M◻→H is true *even in the world where we don’t see any money*. So, even if dancing upon seeing money is a priori probable, conditioning on not doing so knocks out just as much probability mass from non-money worlds as from money worlds. However, if a counterfactual A◻→B is true *and* A is true, then its consequent BB must also be true. So, conditioning on a policy does change the probability of taking actions in the expected way. In Happy Dance, there is no correlation between M◻→H and M; so, we can condition on M◻→H and M◻→¬H to decide which policy is better, and get the result we expect. In Newcomb’s problem, on the other hand, there *is* a correlation between the policy chosen and whether the pile of money appears, because Omega is checking what the agent’s code does if it sees different inputs. This allows the decision theory to produce different answers in the different problems. It’s not clear where the beliefs about this correlation come from, so these counterfactuals are still *almost* as mysterious as explicitly giving conditional probabilities for everything given different policies. However, it does seem to say something nontrivial about the structure of reasoning. Also, note that these counterfactuals are in the opposite direction from what we normally think about: rather than the counterfactual consequences of actions we didn’t take, now we need to know the counterfactual actions we’d take under outcomes we didn’t see!