06df8cce-2a37-4592-8982-92818617b94f
trentmkelly/LessWrong-43k
LessWrong
Closed Beta Users: What would make you interested in using LessWrong 2.0?
fc223c12-b1dc-4dca-989b-07d98731c024
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
What is imitation learning? Imitation learning is the process of learning by observing the actions of an expert and then copying their behavior. It is also sometimes called [apprenticeship learning](https://en.wikipedia.org/wiki/Apprenticeship_learning). Unlike [reinforcement learning (RL)](/?state=89ZS&question=What%20is%20reinforcement%20learning%20(RL)%3F), which finds a policy for how a system is to act by observing the results of its interactions with its environment, imitation learning tries to learn a policy by observing another agent which is interacting with the environment. An example of where this process is used is in training modern large language models (LLMs). After LLMs have been trained as general-purpose text generators, they are often fine-tuned with imitation learning using the example of a human expert who follows instructions, provided in the form of text prompts and completions. This is how an earlier model of ChatGPT was trained. As it pertains to safety and alignment, one reason why we attempt to get systems to learn by imitation instead of by direct reinforcement is to mitigate the problem of [specification gaming](/?state=92J8&question=What%20is%20specification%20gaming%3F). This is a problem which arises when there are edge cases or unforeseen ways of achieving the task in the particular environment that the programmers didn't think of or intend. The idea is that demonstrating behavior would be comparatively easier and safer than RL because the model would not only achieve the objective but also achieve the objective as the expert demonstrator explicitly intends. This is not a foolproof solution, though, and some of its shortcomings are discussed in the answers on [behavioral cloning](/?state=8AEQ&question=What%20is%20behavioral%20cloning%3F) and [specification gaming](/?state=92J8&question=What%20is%20specification%20gaming%3F). There are a number of different approaches to imitation learning. 
One of the most popular is [behavioral cloning (BC)](/?state=8AEQ&question=What%20is%20behavioral%20cloning%3F). Others include [inverse reinforcement learning (IRL)](/?state=8AET&question=What%20is%20inverse%20reinforcement%20learning%20(IRL)%3F), [cooperative inverse reinforcement learning (CIRL)](/?state=904A&question=What%20is%20cooperative%20inverse%20reinforcement%20learning%20(CIRL)%3F), and [generative adversarial imitation learning](https://arxiv.org/abs/1606.03476) (GAIL).
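As a minimal sketch of the behavioral-cloning flavor of imitation learning (the environment, the expert, and the tabular "model" here are illustrative assumptions, not from the article):

```python
from collections import Counter, defaultdict

# Toy 1-D environment: states 0..10, goal at 10. The expert walks right
# until it reaches the goal. (Illustrative setup, not from the article.)
GOAL = 10

def expert_policy(state):
    return "stay" if state == GOAL else "right"

# 1. Collect demonstrations as (state, action) pairs from expert rollouts.
demos = []
for start in range(GOAL + 1):
    s = start
    for _ in range(GOAL):
        a = expert_policy(s)
        demos.append((s, a))
        if a == "right":
            s += 1

# 2. Behavioral cloning is just supervised learning on those pairs.
#    Here the "model" is tabular: the majority action seen in each state.
counts = defaultdict(Counter)
for s, a in demos:
    counts[s][a] += 1
cloned_policy = {s: c.most_common(1)[0][0] for s, c in counts.items()}

# 3. The clone reproduces the expert on every state it saw demonstrated,
#    but it has no opinion about states outside the demonstration data.
assert all(cloned_policy[s] == expert_policy(s) for s in cloned_policy)
```

In practice the lookup table is replaced by a neural network trained with the same supervised loss (as in LLM instruction fine-tuning), and the gap hinted at in step 3, compounding errors on states the expert never visited, is one of the shortcomings discussed in the behavioral cloning answer linked above.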
d7539061-8ace-456a-9de3-ec507906311a
trentmkelly/LessWrong-43k
LessWrong
Agentic Growth Personal growth is paramount. A north star to chase indefinitely. However, most of this growth is reactionary. Just as we wouldn’t commend a knee for jerking at a doctor’s strike, we shouldn't see this reactionary response as growth. The connotation of growth is overwhelmingly positive, yet we only decide that this change is positive from the perspective of the person doing the ‘growing’, which seems to be greatly biased. For change to lead to self-improvement, I suggest a few alterations to our conception of growth: First, I want to acknowledge that a significant amount of growth only exists as an agent’s response to the environment. Second, I want to recognize growth as a neutral word, rather than allow it to be charged with positive or negative connotations. I want to recognize growth as something much simpler: change. With this model, we can consider how change generally emerges from reactions to an external environment, often out of our control. The platitude offered in response is: “It's not what happens to you, but how you react to it that matters.” While I recognize the merits of this notion, it warrants investigation. Yes, in the face of adversity, stoically bettering oneself is ideal, but we must question whether this change would have occurred at all without adversity. If we assume that external factors drive this change, we resign ourselves to our life's path being determined by the series of adversarial events that we choose to survive. We are all worth a lot more than defining our growth as mere reactions to adversity. If you put a broken car on a hill and it rolls, it is simply reacting to its surroundings: gravity is the only thing moving it. As one moves through life, the topography along the road changes, and the car will continue to roll. Looking back, the car may be far from where it started, but not once was it turned on. Some would describe it as growth; I would describe it as reactionary change. 
Agentic growth is the process of fix
ae31e455-f0e7-4e00-a57a-eda398023007
trentmkelly/LessWrong-43k
LessWrong
What progress have we made on automated auditing? One use case for model internals work is to perform automated auditing of models: https://www.alignmentforum.org/posts/cQwT8asti3kyA62zc/automating-auditing-an-ambitious-concrete-technical-research That is, given a specification of intended behavior, the attacker produces a model that doesn't satisfy the spec, and the auditor needs to determine how the model doesn't satisfy the spec. This is closely related to static backdoor detection: given a model M, determine if there exists a backdoor function that, for any input, transforms that input to one where M has different behavior.[1] There's some theoretical work (Goldwasser et al. 2022) arguing that for some model classes, static backdoor detection is impossible even given white-box model access -- specifically, they prove their results for random feature regression and (the very similar setting) of wide 1-layer ReLU networks.  Relatedly, there's been some work looking at provably bounding model performance (Gross et al. 2024) -- if this succeeds on "real" models and "real" specification, then this would solve the automated auditing game. But the results so far are on toy transformers, and are quite weak in general (in part because the task is so difficult).[2]  Probably the most relevant work is Halawi et al. 2024's Covert Malicious Finetuning (CMFT), where they demonstrate that it's possible to use finetuning to insert jailbreaks and extract harmful work, in ways that are hard to detect with ordinary harmlessness classifiers.[3]  As this is machine learning, just because something is impossible in theory and difficult on toy models doesn't mean we can't do this in practice. It seems plausible to me that we've demonstrated non-zero empirical results in terms of automatically auditing model internals. So I'm curious: how much progress have we made on automated auditing empirically? What work exists in this area? 
What does the state-of-the-art in automated auditing look like?   1. ^ Note that I'm not askin
d2cec6d8-1ddd-4534-8bdf-675793882050
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Hypothetical: what would you do? Let’s pretend I have a semi-rigorous model that lays out why RLHF is doomed to fail and also that it negatively affects model performance (including why it does so). Let’s go further into lala land and pretend that I have an architectural plan that does much better: very transparent, steerable and corrigible, can be deployed and used without changing or retraining the base LLM. There are some downsides: it requires more compute at inference time, is not provably bulletproof, likely breaks in the SI regime and definitely breaks under self-improvement (so very definitely NOT an alignment proposal). Short term this looks beneficial, also looks like shortening timelines, and extremely unlikely to advance the AI safety field (in the direction of what we ultimately want and need). What should I do, if I ever happened to be in such a situation? * Prototype it, limited access with the express purpose of breaking stuff (black box, absolutely no architectural information provided). * Write it up and publish. * Forget about it, smarter people must have already thought of it, and since it’s not a thing, I am clearly wrong. * Forget about it, only helps capabilities.
b55530be-3527-47a7-bd16-686d6607c8e2
trentmkelly/LessWrong-43k
LessWrong
Futarchy's fundamental flaw Say you’re Robyn Denholm, chair of Tesla’s board. And say you’re thinking about firing Elon Musk. One way to make up your mind would be to have people bet on Tesla’s stock price six months from now in a market where all bets get cancelled unless Musk is fired. Also, run a second market where bets are cancelled unless Musk stays CEO. If people bet on higher stock prices in Musk-fired world, maybe you should fire him. That’s basically Futarchy: Use conditional prediction markets to make decisions. People often argue about fancy aspects of Futarchy. Are stock prices all you care about? Could Musk use his wealth to bias the market? What if Denholm makes different bets in the two markets, and then fires Musk (or not) to make sure she wins? Are human values and beliefs somehow inseparable? My objection is more basic: It doesn’t work. You can’t use conditional predictions markets to make decisions like this, because conditional prediction markets reveal probabilistic relationships, not causal relationships. The whole concept is faulty. There are solutions—ways to force markets to give you causal relationships. But those solutions are painful and I get the shakes when I see everyone acting like you can use prediction markets to conjure causal relationships from thin air, almost for free. I wrote about this back in 2022, but my argument was kind of sprawling and it seems to have failed to convince approximately everyone. So thought I’d give it another try, with more aggression. Conditional prediction markets are a thing In prediction markets, people trade contracts that pay out if some event happens. There might be a market for “Dynomight comes out against aspartame by 2027” contracts that pay out $1 if that happens and $0 if it doesn’t. People often worry about things like market manipulation, liquidity, or herding. Those worries are fair but boring, so let’s ignore them. 
If a market settles at $0.04, let’s assume that means the “true probability” of the event is 4%.
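To make the probabilistic-vs-causal gap concrete, here is a toy Monte Carlo sketch (the confounder, the decision rule, and all the numbers are my illustrative assumptions, not Dynomight's): a hidden "company in trouble" variable drives both the board's decision and the stock price, while firing itself has zero causal effect. The conditional-market-style estimates still show a large gap.

```python
import random

random.seed(0)
N = 100_000

def simulate(intervene=None):
    """One simulated world. `trouble` is a hidden confounder that drives
    both the board's decision and the stock. Firing has NO causal effect.
    (Toy numbers, not from the post.)"""
    trouble = random.random() < 0.5
    if intervene is None:
        fired = random.random() < (0.8 if trouble else 0.2)  # board reacts to trouble
    else:
        fired = intervene                                    # randomized decision
    stock_up = random.random() < (0.2 if trouble else 0.8)   # depends only on trouble
    return fired, stock_up

# Observational: what conditional prediction markets estimate.
obs = [simulate() for _ in range(N)]
p_up_fired = sum(u for f, u in obs if f) / sum(1 for f, u in obs if f)
p_up_kept = sum(u for f, u in obs if not f) / sum(1 for f, u in obs if not f)

# Interventional: the causal effect a decision-maker actually needs,
# obtained here by randomizing the decision.
p_do_fire = sum(simulate(True)[1] for _ in range(N)) / N
p_do_keep = sum(simulate(False)[1] for _ in range(N)) / N

print(f"market-style estimates: fired {p_up_fired:.2f} vs kept {p_up_kept:.2f}")
print(f"causal estimates:       fired {p_do_fire:.2f} vs kept {p_do_keep:.2f}")
```

The market-style numbers make firing look terrible even though, by construction, the decision changes nothing: the markets reveal P(stock up | fired), not P(stock up | do(fire)).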
443d5d14-3ebd-412c-b877-a67656cf0a72
trentmkelly/LessWrong-43k
LessWrong
Extended Quote on the Institution of Academia From the top-notch 80,000 Hours podcast, and their recent interview with Holden Karnofsky (Executive Director of the Open Philanthropy Project). What follows is a short analysis of what academia does and doesn't do, followed by a few discussion points by me at the end. I really like this frame; I'll likely use it in conversation in the future. ---------------------------------------- Robert Wiblin: What things do you think you’ve learned, over the last 11 years of doing this kind of research, about in what situations you can trust expert consensus and in what cases you should think there’s a substantial chance that it’s quite mistaken? Holden Karnofsky: Sure. I mean I think it’s hard to generalize about this. Sometimes I wish I would write down my model more explicitly. I thought it was cool that Eliezer Yudkowsky did that in his book, Inadequate Equilibria. I think one thing that I especially look for, in terms of when we’re doing philanthropy, is I’m especially interested in the role of academia and what academia is able to do. You could look at corporations, you can understand their incentives. You can look at Governments, you can sort of understand their incentives. You can look at think-tanks, and a lot of them are just like … They’re aimed directly at Governments, in a sense. You can sort of understand what’s going on there. Academia is the default home for people who really spend all their time thinking about things that are intellectual, that could be important to the world, but that there’s no client who is like, “I need this now for this reason. I’m making you do it.” A lot of the times, when someone says, “Someone should, let’s say, work on AI alignment or work on AI strategy or, for example, evaluate the evidence base for bed nets and deworming, which is what GiveWell does … ” A lot of the time, my first question, when it’s not obvious where else it fits, is would this fit into academia? 
This is something where my opinions and my views have evolve
47297a73-12e9-46aa-ba28-535c171c6b12
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What are the coolest topics in AI safety, to a hopelessly pure mathematician? I am a mathematics grad student. I think that working on AI safety research would be a valuable thing for me to do, *if* the research were something I felt intellectually motivated by. Unfortunately, whether I feel intellectually motivated by a problem has little to do with what is useful or important; it basically just depends on how cool/aesthetic/elegant the math involved is. I've taken a semester of ML and read a handful (~5) AI safety papers as part of a Zoom reading group, and thus far none of it appeals. It might be that this is because nothing in AI research will be adequately appealing, but it might also be that I just haven't found the right topic yet. So to that end: what's the coolest math involved in AI safety research? What problems might I really like reading about or working on?
6a0efd0b-3706-4eb4-949c-0f42d176de9a
trentmkelly/LessWrong-43k
LessWrong
New Feature: Collaborative editing now supports logged-out users If you've ever used our collaborative editing features before, you may be familiar with our link-sharing functionality, which previously only allowed users who had existing LessWrong accounts (and were logged in) to collaborate on posts.  Now, link-sharing allows logged-out users to read/comment/edit as well (as per whatever permissions you set for link-sharing). Logged-out users can only edit the contents of the post body; they shouldn't be able to change anything else about the post, publish drafts, etc. Please let us know via Intercom if you experience any issues with this or related functionality.
a32aea79-25e8-423b-b5db-be424df7be04
trentmkelly/LessWrong-43k
LessWrong
No, Newspeak Won’t Make You Stupid In George Orwell's book 1984, he describes a totalitarian society that, among other initiatives to suppress the population, implements "Newspeak", a heavily simplified version of the English language, designed with the stated intent of limiting the citizens' capacity to think for themselves; everybody knows that when you have a thinking people, keeping a peoplegroup still and not angry is unpossible. In short, the ethos of newspeak can be summarized as: "Minimize vocabulary to minimize range of thought and expression". There’s no way such a simple idea could mean different things to different people, right? Well… there are two different, closely related, ideas, both of which the book implies, that are worth separating here. The first (which I think is to some extent reasonable) is that by removing certain words from the language, which serve as effective handles for pro-democracy, pro-free-speech, pro-market concepts, the regime makes it harder to communicate and verbally think about such ideas… Although, if that was the only thing done by Orwell’s Oceania, it would work about as well as taking a sharp knife away from a toddler, while still leaving on the ground next to them a fully-loaded AK-47; people are adept at making themselves understood, even in the face of constraints to communication The second idea, which I worry is an incorrect takeaway people may get from 1984, is that by shortening the dictionary of vocabulary that people are encouraged to use (absent any particular bias towards removing handles for subversive ideas), one will reduce the intellectual capacity of people using that variant of the language. 
However, since that idea is false, that definitely, 100% clearly makes it perfectly okay for a government to force Newspeak on its people, and that totally wouldn’t be a creepy overstepping of its power (I know, Poe’s Law says that it is utterly impossible for me to be sarcastic on the internet without somebody thinking that I actually believe it).
ae4dc1ef-5f10-4db2-aec5-418070c0309a
trentmkelly/LessWrong-43k
LessWrong
Escalator Action Epistemic Status: Slow ride. Take it easy. You Memba Elevator Action? I memba. A recent study (link is to NY Times) came out saying that we should not walk on escalators, because not walking is faster. From the article: > The train pulls into Pennsylvania Station during the morning rush, the doors open and you make a beeline for the escalators. > > You stick to the left and walk up the stairs, figuring you can save precious seconds and get a bit of exercise. > > But the experts are united in this: You’re doing it wrong, seizing an advantage at the expense and safety of other commuters. Boarding an escalator two by two and standing side by side is the better approach. We will ignore the talk about which method is better for the escalator, which seems downright silly, and focus on the main event: They are explicitly saying that when you choose to walk up the stairs, you are doing it wrong. Since walking is trivially and obviously faster than standing, this result is a little suspicious. And by a little suspicious, I mean almost certainly either wrong, highly misleading or both. Certainly individually, on the margin, for yourself you are quite obviously doing it right. Consider a largely empty escalator. If Alice gets on the escalator and sits there, it takes her 40 seconds. If she walks up the left side, and no one is in her way, it takes her 26 (numbers from article). Given everyone else’s actions, if she wants to get from Point A to Point B quickly, and I strongly suspect that she does, she should walk up the escalator. Consider an escalator in the standard style. On the left people walk up, on the right people stand. If there is enough space for all, then nothing Alice does impacts anyone else unless she blocks the left side, so assume there is not enough room. In that situation, demand for the right side almost always exceeds demand for the left side, so Alice is almost certainly going to not only get to the top faster by walking, she is helping ev
81c23e0b-1673-4535-b42b-487b3c4f9ca6
trentmkelly/LessWrong-43k
LessWrong
Analyzing DeepMind's Probabilistic Methods for Evaluating Agent Capabilities Produced as part of the MATS Program Summer 2024 Cohort. The project is supervised by Marius Hobbhahn and Jérémy Scheurer. Update: See also our paper on this topic accepted at the NeurIPS 2024 SoLaR Workshop. Introduction To mitigate risks from future AI systems, we need to assess their capabilities accurately. Ideally, we would have rigorous methods to upper bound the probability of a model having dangerous capabilities, even if these capabilities are not yet present or easily elicited. The paper “Evaluating Frontier Models for Dangerous Capabilities” by Phuong et al. 2024 is a recent contribution to this field from DeepMind. It proposes new methods that aim to estimate, as well as upper-bound, the probability of large language models being able to successfully engage in persuasion, deception, cybersecurity, self-proliferation, or self-reasoning. This post presents our initial empirical and theoretical findings on the applicability of these methods. Their proposed methods have several desirable properties. Instead of repeatedly running the entire task end-to-end, the authors introduce milestones. Milestones break down a task and provide estimates of partial progress, which can reduce variance in overall capability assessments. The expert best-of-N method uses expert guidance to elicit rare behaviors and quantifies the expert assistance as a proxy for the model's independent performance on the task. However, we find that relying on milestones tends to underestimate the overall task success probability for most realistic tasks. Additionally, the expert best-of-N method fails to provide values directly correlated with the probability of task success, making its outputs less applicable to real-world scenarios. We therefore propose an alternative approach to the expert best-of-N method, which retains its advantages while providing more calibrated results. 
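The milestone-underestimation point can be illustrated with a toy simulation (the latent-competence model and all numbers below are my assumptions, not from Phuong et al.): if the same attempt tends to pass or fail milestones together, multiplying the marginal milestone success rates understates the true end-to-end success rate.

```python
import random

random.seed(0)
N = 200_000
K = 5  # milestones per task (toy numbers, not from the paper)

# Each attempt has a latent "competence" a ~ Uniform(0, 1); every milestone
# succeeds with probability a. Milestones are independent *given* a, but
# positively correlated overall, which is the realistic case described here.
end_to_end = 0
milestone_hits = [0] * K
for _ in range(N):
    a = random.random()
    results = [random.random() < a for _ in range(K)]
    end_to_end += all(results)
    for i, r in enumerate(results):
        milestone_hits[i] += r

p_true = end_to_end / N              # approx E[a^K] = 1/(K+1) ~ 0.167
p_product = 1.0
for h in milestone_hits:             # product of marginal milestone rates,
    p_product *= h / N               # approx (1/2)^K ~ 0.031

print(f"end-to-end success:    {p_true:.3f}")
print(f"product of milestones: {p_product:.3f}")
```

If milestone successes really were independent across attempts, the product would be exact; the positive correlation induced by the shared competence variable is what opens the gap, with the product understating true success by several times in this setup.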
Except for the end-to-end method, we currently feel that no method presented in this post would allow us to reli
34fbef9f-d17b-43f2-b5f0-010489c668ae
trentmkelly/LessWrong-43k
LessWrong
The Problem of Thinking Too Much [LINK] This was linked to twice recently, once in a Rationality Quotes thread and once in the article about mindfulness meditation, and I thought it deserved its own article. It's a transcript of a talk by Persi Diaconis, called "The problem of thinking too much". The general theme is more or less what you'd expect from the title: often our explicit models of things are wrong enough that trying to think them through rationally gives worse results than (e.g.) just guessing. There are some nice examples in it.
35e44cab-2157-46e3-a9e5-9610fc09d441
trentmkelly/LessWrong-43k
LessWrong
Training Regime Day 9: Double-Crux Introduction Argument A is a crux for Alice on position P if finding out that A is false causes Alice to substantially change their view on P. If Alice thinks P, Bob thinks not P, Alice thinks A, Bob thinks not A, and A/not A are both cruxes for Alice and Bob respectively, then A is a double-crux for Alice and Bob's disagreement about P. Suppose that I think that the sky is yellow. Suppose that Alice thinks that the sky is blue. Suppose that I think that the sky is yellow because noted sky-color historian Carol wrote that the sky is yellow. Suppose that Alice thinks that the sky is blue because they think that Carol wrote that the sky is blue. "Carol wrote X" is thus a double-crux for our disagreement about the color of the sky. We can now just check what the sky-historian Carol actually wrote and effortlessly resolve our disagreement. Note that cruxes can be conjunctions. That is, I can think that the sky is yellow because sky-color historians Dave and Erin both said that the sky is yellow, where neither individual historian would have been sufficient. Finding Double-Cruxes Do double-cruxes even always exist? By the Aumann Agreement Theorem, the answer is sort of yes, conditional on a couple of assumptions. In practice, CFAR instructors have indicated to me that they have always found double-cruxes when they were seeking, although sometimes it took upwards of 5 hours. I don't have a reliable way of finding double-cruxes, but I have found a few. Here are some strategies that I think help. Epistemic status: both strategies are highly experimental. Proposer/Listener The search for a double-crux between two parties is not an argument, it's a conversation aimed at both parties coming out with truer beliefs. However, it can feel a lot like an argument, resulting in parties trying to talk over each other, overstating their beliefs, etc. 
One way to prevent this is to explicitly delegate one party as the "proposer" and the other party as the "listener." The role o
c157205e-8545-4fae-ba28-bcd7cd035d21
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
What Machines Shouldn’t Do (Scott Robbins) right welcome everyone to uh today's ai tech agora um today our speaker is scott robbins uh before passing the word to him i see that we have quite a lot of new faces here so i will give a very quick introduction into [Music] who we are so i'm arkady i'm a postdoc at the delft university of technology and uh here we have a really nice multidisciplinary multi-faculty uh initiative called aitec and uh at aitec we look into different aspects of what we call meaningful human control so we are focusing uh not only on kind of general philosophical uh aspects of it but also on concrete engineering challenges postgrade so if you're interested in this check out our website and also subscribe to our twitter and youtube channel to stay updated of the future meetings and if if nothing else comes up i'll pass the word to scott all right thanks a lot and uh thanks a lot arcadie and aitech for inviting me i'm pretty excited to give this talk i'd hoped it would be in person but alas we will have to make too um so while i share my screen really fast um can everybody see that now can somebody let me know if you see that yeah all right sounds good all right thank you um so a little bit about me i'm just finishing up my phd at the ethics of philosophy and technology section at to delft at the tpm faculty i'm writing on machine learning and counterterrorism and the and the ethics and efficacy of that and this is a paper that didn't make it into the thesis that i just submitted on monday but it's something that i i think the thesis kind of leaves on set and kind of leaves the future work that i'm really hoping to get out there as soon as possible and right now i'm titling it what machines shouldn't do a necessary condition for meaningful human control all right so before i say anything about what machines shouldn't be doing i i i have to clarify what i mean by what a machine does and when a machine is doing something or more specifically when 
we have delegated a process or a decision to a machine and what i mean by this and for the purposes of this paper in this presentation is that a machine has done something or we have delegated a decision to a machine when a machine can reach an output through considerations and the weighting of those considerations not explicitly given by humans so the point of me doing this and to making this clarification is that i want to distinguish between you know more classical ai like good old-fashioned artificial intelligence symbolic ai or expert systems and things like that from machine learning or contemporary ai and in the classical good old-fashioned ai the considerations that we um that go into making a particular generating a particular output are explicitly put there by human beings it may be extremely complicated and may look um it may still simulate our some way and we may believe it's doing more than it is but really there's there's human beings behind that that path that it's that it's following to generate the output this is in contrast to machine learning or many methodologies and machine learning where the specific considerations that go into generating an output are unknown to the to any human beings but even the human beings that programmed it so that's the case in the in in much of the hype surrounding ai today and many of the successes that we've seen in the media like alphago beating the world go champion or chess algorithms or many of the other other algorithms out there so we've delegated a decision to a machine or in this example here the move in the game of go to a machine because that machine is actually has its own considerations loosely speaking for how it generates the output so this we've delegated in in the way that i'm talking about it we've delegated many decisions to machines everything from detecting whether somebody's suspicious or not to predicting crime to driving our cars diagnosing patients and sentencing people to prison 
um going even more fraud detection facial recognition object recognition and even choosing our outfits for us and developing this presentation i saw a lot of really random algorithms out there or applications out there and so with all these applications i think some of them kind of freak us out and i think autonomous weapon systems are some of the ones that really come to mind is the most classic example of something that comes to mind that that scares us about you know like should we be really doing this within with an algorithm or how can we do this responsibly and it's fueled this huge explosion in ai ethics i think a justified explosion in ai ethics because i think there's something novel happening here this is the first time we're really delegating these processes to machines not just automating the processes we've already determined but actually delegating the the act of choosing these considerations to machines and so now we're worried about the control we have over those machines and specifically i think that all of these ai ethics principle lists and a lot of the work on ai ethics whether it's explicit made explicit or not is really talking about or adding to in some way trying to realize meaningful human control and so before i get to the specifics of what i want to add to meaningful human control i wanted to say a little bit about some of the proposals that are out there already and a little bit about how i classify them i have made a distinction between technology-centered meaningful human control and human-centered meaningful human control and what i mean by that distinction is is what i'm trying to capture here is where people are putting the spotlight or putting their focus on realizing meaningful human control so if it's technology centered then we're really thinking about the technology itself what are the design requirements of the technology or what can we you know add to the technology so that we are better equipped to realize meaningful human 
control whereas in the human centered approaches it's more about where can we place the human and what capacities or capabilities does the human being need in order to realize meaningful human control so starting with a technology centered meaningful human control there's a few proposals out there that i consider to be the biggest ones and i don't mean to say that these are all necessarily put out there explicitly to realize meaningful human control it's not like these papers are all saying you know this is a way a proposal for meaningful human control but i've argued in the past that some of these are are indeed doing that or that would be the moral problem that they're trying to solve if they're solving one so first is explicability and explainable ai which is the idea that if we can make um an algorithm output not only you know its you know its output but also give us some kind of idea of how it came to that output in terms of considerations that went into that output you know this could perhaps allow us to say well that was a bad uh that output should be rejected because it was based on race or gender or something like that something we considered to be um not an acceptable the way a way to make a particular decision um i've written a paper on this this proposal and i'm not too thrilled with it i think that it's a good idea but it it fails and there's there's still a good idea to try to make ai explainable i still think there's reasons to do it but it doesn't solve the moral problem that it attempts to solve if it's if that's what it's doing and i can talk more about that or i can direct you towards the paper about that in the question and answer period then we move on to machine ethics which i i think is not necessarily a proposal for realizing meaningful human control but it is a way to say that that it's saying that if we can endow machines with moral reasoning capabilities and allow them to pick out morally salient features and adjust and be responsive to 
those features then we don't need humans to be in control anymore really the machines and robots and algorithms are controlling themselves with these you know ethical capabilities i couldn't be more negative about this approach and i think some of that has to do with the reasons that i'm going to get into later on but i've also written a paper about this with aimee van wynsberghe where we argue that there's no good reason to do this every reason put forward fails for either empirical or conceptual reasons and again i can talk more about that and hopefully it becomes more clear throughout this presentation at least one of the reasons why this is a bad idea and then we get to track and trace which is a proposal put forward by filippo santoni de sio and jeroen van den hoven here at uh tu delft and tpm in particular and they they have actually a really nice paper i highly recommend people read it if they're interested in meaningful human control um philosophically you know it has some depth to it and they put forward two conditions that we need to meet in order to realize meaningful human control the first is a tracking condition which is about the machine being able to be responsive to human moral reasons for for their outputs so such that if a morally salient feature pops up in a context that would cause a human to change their decision or output then it should also cause the algorithm to change its output and the second is a tracing condition which states that we should be able to trace responsibility back to a human being or set of human beings such that that human being should knowingly accept moral responsibility and should be ready to accept the moral and moral responsibility and accountability for the outcomes and outputs of the machine all right moving to human-centered meaningful human control this is the classic on the loop in the loop stuff um i think uh and this is again about the human where is the human in this process and what capabilities do they 
have and an on the loop the human is is kind of overseeing what the algorithm is doing so that the human can intervene if necessary it's to prevent something bad from happening and i think a good example of this is tesla teslas which kind of stipulate that the human has to be you know have their hands on the wheel and be ready to take over at any time if something bad happens and really you know all moral responsibility is is with the human i think this is more of a way to protect their company from lawsuits than anything else it doesn't seem to be a very good way of realizing of having any meaningful human control of of the machine as it kind of flies in the face of human psychology and human capabilities to remain aware of their surroundings despite an automated process which works most of the time there's some interesting work on that i think even from tu delft so i don't think it's it's meaningful human control but it's it's an attempt at um i i think that's what they're trying to do it's just not working very well and then in the loop is a little bit stronger than on the loop and that it requires a human to actually endorse or reject outputs of the machine before the consequences happen so you're not just in an overseeing role anymore you're you're actually a part of the process again this kind of suffers from you know flies in the face of human psychology and that you know we suffer from many biases like automation bias assimilation bias and confirmation bias which is going to make this incredibly hard to be meaningful control even though i think it could be said we are in control um and what i really what i really want to say with all these is not that i want to and the reason i don't go into specifics on my attacks and all of these positions is that i don't think it matters too much for the purposes of this paper the real point that i want to make here is that even if some of these uh solutions or proposals will play indeed play a part and i think some of 
some aspects of them at least will play a part in meaningful human control, they're working all of this out after we've already made a huge mistake. We've already hit the iceberg, and now we're just rearranging the technologies and the people in the socio-technical system and hoping that everything will work out. But it doesn't matter; the ship is already sinking. Specifically, I think the mistake is that we've delegated a decision to a machine that should not have been delegated, and as soon as we've done that, no amount of technical wizardry or reorganization of the humans and the technical system is going to fix the problem. We've already lost meaningful control over these algorithms. That's what we have to figure out first, and that's what I plan to try to do in the second half of this presentation.

So, to jump to the conclusion that I will then defend: machines should not have evaluative outputs, and specifically, machines should not be delegated evaluative decisions. I consider evaluative outputs to be things like "criminal", "suspicious", "beautiful", "propaganda", "fake news": anything with bad, good, wrong, or right built into the labels or outputs. When we say somebody is suspicious, we're not just saying that somebody is standing around in one spot for a while with a ski mask on; we're not just saying "oh, that's interesting". We're saying there's something bad about what's going on; we're loading a value into it. When we describe somebody as beautiful, we're not just saying in a neutral manner that they have a particular outfit on or look a certain way; we're actually saying something good, something about the way people ought to look. Same with propaganda: we're not just saying that there's a picture with a political message, we're saying that there's something bad about this picture with a political message; it's not the way things should be. Any of these outputs that have values built into them should not be delegated to machines.

And I'm not just shadow-boxing here. Most of us will know that these types of outputs have been delegated to machines quite frequently; there's no shortage of proposals to do this. Here are four examples, such as detecting propaganda. And here I want to make a little note, to remind us that in the beginning I was talking about what I mean by a machine doing things. Sometimes algorithms are delegated the task of flagging propaganda, but that doesn't always mean the machine is doing something in the sense I've described. For instance, Europol flags propaganda, but it does so based on a hash generated from videos and pictures that have already been determined by human beings to be propaganda. The idea is just: is there a new post, picture, or video that is exactly the same as one that was already posted and taken down? That's not a machine making the judgment; that's just automating a process. We've already made the value judgment ourselves, and we're delegating the task of finding things that match those value judgments. In this case, I'm actually talking about a machine or algorithm that detects propaganda, novel propaganda, on its own.

Moving on: we've had algorithms that purport to detect criminals just based on their faces, AI that detects beauty, AI that detects fake news. There's one of my least favorite companies, HireVue, which is used by many Fortune 500 companies and purports to be able to say whether a candidate is okay, good, great, or fantastic, and probably bad as well. And then one of the more infamous versions, the Chinese social credit system, which from my reading is trying to say what a good citizen is. Remember, these are all things that I'm saying algorithms shouldn't be delegated.
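The automation-versus-delegation distinction in the Europol example can be put in code. Below is a minimal, hypothetical sketch, not Europol's actual pipeline: the hash matcher only re-finds exact copies of material humans have already judged, while a classifier for novel propaganda would be making the evaluative call itself.

```python
import hashlib

def sha256_hex(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def make_hash_flagger(human_judged_items):
    """Automation: humans already judged these specific items to be propaganda.
    The machine only re-finds exact copies of those human value judgments."""
    known = {sha256_hex(item) for item in human_judged_items}
    def flag(content: bytes) -> bool:
        return sha256_hex(content) in known
    return flag

def classify_novel_propaganda(content: bytes) -> bool:
    """Delegation: a model would itself decide what counts as 'propaganda'
    for never-before-seen content. This is the step the talk argues against."""
    raise NotImplementedError("the evaluative judgment is handed to the machine")
```

In practice such systems typically use perceptual rather than cryptographic hashes, so near-duplicates also match; the point here is only where the value judgment is made.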
Then the question becomes: why? Why can't we delegate those things? I want to put forward two arguments here. The first has to do with efficacy: roughly, I'm going to argue that we can't say anything about the efficacy of these algorithms, and if we can't say anything about their efficacy, then we can't justify their use. Any use of them is out of our control; we've lost control at that point. The second argument is more of an ethical argument, about a more fundamental loss of meaningful human control over a process, and I will get into that when I reach that section.

First, I want to say that efficacy is unknown in principle for evaluative outputs: each evaluative output is unverifiable. Take "suspicious". In this picture, a man is being labeled as suspicious. If we wanted to ask whether the algorithm got this right, whether it was correct in labeling this person suspicious, we in principle can't. You might argue: we can, because if the person stole something, then they were indeed suspicious. But that's not what suspicious means. Somebody can be suspicious without doing anything wrong, and somebody can do something wrong, or steal something, without ever having been suspicious; it's called being a good thief. So if we can't evaluate whether the algorithm was correct in this one instance, and I'm arguing that we can't in any specific instance, then we can't say whether this algorithm works at all. There's nothing we can say about its efficacy, and therefore we've lost control: we're just using a tool that's as good as a magic eight ball at that point.

Contrast this with an algorithm that's supposed to detect weapons in baggage at the airport. If an algorithm classifies a bag as having a weapon in it, or more specifically a gun, we probably wouldn't arrest the person just on the fact that the algorithm made such a labeling. We would look into the bag and find out whether it does indeed have a weapon or a gun in it, and if it does, then we know the algorithm got it right. And we can say something about the effectiveness of the algorithm, because we can test this on many bags, many examples, and determine how good it is at detecting weapons. Then we have some basis for justifying its use. We can say it does indeed get it wrong sometimes, but we have a process to handle that, because we have enough information to remain in control of this algorithm.

The other thing I want to say about efficacy, using this "suspicious" label again, is that contexts change over time. One year ago, if this kid had come into a store I was in, and not at night, I would justifiably have been worried and probably thought this person was suspicious. But in the context of a global pandemic, right now in the Netherlands, if I saw the same person wearing a mask in a store, I'd probably be thanking them for actually wearing a mask, as I see so little of that here despite the spiking corona cases. These contexts change, and that's something algorithms are not good at. Machine learning algorithms, specifically, are good at fixed targets, something they can get closer and closer to the truth about with more data; but value judgments are not those kinds of things. So even if we could solve the problem I just outlined, which we can't, since we can't verify any particular decision, we would also have to worry about the changing context changing the considerations that ground the judgment. All right.
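The contrast can be made concrete: for the gun-in-the-bag case there is a ground truth we can check after the fact (open the bag), so ordinary performance metrics are well defined. Here is a minimal sketch with invented labels; for a label like "suspicious", no such ground-truth list could exist in principle.

```python
def evaluate_detector(predictions, ground_truth):
    """predictions / ground_truth: lists of booleans, True = 'gun present'.
    Ground truth comes from actually opening each bag, which is exactly
    what an evaluative label like 'suspicious' cannot provide."""
    tp = sum(p and t for p, t in zip(predictions, ground_truth))
    fp = sum(p and not t for p, t in zip(predictions, ground_truth))
    fn = sum(t and not p for p, t in zip(predictions, ground_truth))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

With enough verified examples, numbers like these are what let us say the detector "gets it wrong sometimes" and build a process around that, rather than trusting a magic eight ball.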
Now, moving into the ethics part of the argument, where I'm arguing for a more fundamental lack of control. What I mean by "fundamental" here is different from the kind of control I so often see discussed, for example in the autonomous weapons debate, where the thinking is: okay, we have this algorithm out there targeting people, so how do we make sure there's still a human being around who can take responsibility for, or have control over, that process? What I mean is control over choosing the considerations that ground our value judgments. The actual process of deciding how the world ought to be, in any of these contexts, is a process we have to remain in control of. It doesn't make sense to delegate it to anybody but ourselves. A machine is not going to be able to just decide how the world should be, with all of us changing simply because of the output of an algorithm; that doesn't make any sense. We have to decide how the world should be, and then create algorithms to help bring us there.

In the example of going over CVs to find a new candidate for a job, say in academia, we might say that certain considerations are important: the number of publications, the reference letters, what they did their dissertation on. All these considerations go into such an evaluation, and we usually have a committee to decide how we want to evaluate candidates for this particular position, because it may not be the same considerations every time. That conversation we're having, not only as a small committee or as individuals but as a society, about what a good academic is, and from there determining the considerations that will lead us to choose good or bad academics, good or bad candidates: that is our process. That's what we need to remain in control of. When we delegate an evaluative output to a machine, we are effectively ceding that more fundamental level of control.

Continuing with this theme: we're having a conversation today about what a good academic is, and we were having it fifty years ago, but things have changed drastically in the last several decades, even in the last ten years. I think even at TU Delft you can see that certain characteristics or considerations are used now that weren't used before. For instance, valorization is much more important than it used to be; we've decided that it is part of what makes a good academic. Maybe teaching is a little less important, or maybe it's not the number of publications anymore but the quality of the top publications. This is the conversation that we have, and it changes as we learn new information or as the context around us changes. It's not that the value itself changes, but how we ground those values with considerations changes. That should be left up to us to do.

Next is what I'm calling a vicious machine feedback loop. It's a concern I have that, by delegating this evaluative process to machines, we could be influenced by these machines in how we end up seeing and grounding values. The process starts with us human beings building these algorithms: training them, labeling the data, however that process works. We do that in our way, and it will always be biased. It may not be biased in a bad way, but it's still biased, even if only toward the time we live in. That feeds into an algorithm, which makes an evaluative decision, which then spits out evaluations to us. Over time, what I'm worried about is that we could be influenced by how the algorithm makes decisions. We could start seeing candidates for jobs differently because
we've seen so many of an algorithm's evaluations showing us what a good candidate is, and we start taking those things on. This isn't an entirely new problem; it's happened before with other technologies, like the media. We're constantly worried about feeding children ideas of who is beautiful, of body shape and body image, through media or through Barbies and the like. All of this feeds into their idea of what beautiful is and affects them later. We usually talk about this in a negative way: we don't like that a specific body shape is fed to them as the only way to be beautiful, which then affects how they see beauty, how they try to realize it, and who they think is beautiful. Well, those same evaluations are now being delegated to AI. We don't even know what considerations the AI is using, and I don't think not knowing is better than knowing and disliking what we see; neither is a good effect on our ability to evaluate.

The final thing I want to say about this ethical control argument is that people's behavior adapts to evaluations. Part of the reason we perform evaluations is to say how the world ought to be. If we say somebody did a great job, we're saying: if you want to do a great job too, you should do it more like this; or you should look more like this to be beautiful. That's what evaluations do, and of course they change people's behavior. In the case of AI, we've seen some overt changes, where people figure out what the AI is doing, like the students who figured out that their tests were graded by AI and were able to achieve perfect scores rather easily. That affects their behavior, and in this situation it's not behavior we want to encourage; that evaluation is failing. That's partly because we haven't had the conversation: we're delegating the conversation about what we want, about what good behavior on a test is, to a machine, and based on what I said before, that doesn't make any sense. We have to determine what is good for a test, and then AI can help us evaluate it by picking out descriptive features. It can't do that process for us; it can't pick out the considerations that make a good test, or a good candidate, or any of these things. That is us losing control over a process that is fundamentally our process.

I called this whole talk "what machines shouldn't do", so I want to reiterate, though some of it should be clear by now, what I think machines shouldn't do, based on the arguments I've given. They're all evaluative outputs. First, aesthetic outputs: judging what is beautiful or what is good. What's a good movie, what's a bad movie, what's a good song? We have algorithms trying to say what the next good movie will be, and I think these don't make any sense based on what I've said: not only can we not check whether the algorithm works, but we're losing control of the process of having the conversation about what is good with regard to aesthetics. The same goes for the ethical and moral: we shouldn't be delegating to machines the process of coming up with the considerations that evaluate candidates, or citizens, or anything like that. That is our process. Finally, I'll add one last one. I don't think it will make the paper, because I think it's a separate paper, but there's been so much talk of AI emotion detection that I wanted to mention it, because I think it fails in some of the same ways the aesthetic and ethical or moral cases do: emotions are not verifiable in the way that the gun in the bag was. And furthermore,
there doesn't seem to be, based on the science I've read, anything more than pseudoscience grounding the idea that we can use AI to detect emotions.

All right, now I'm getting to the conclusion. I think this is obvious, but unfortunately, based on all the examples I've shown, I think it still needs to be said: AI is not a silver bullet. It's not going to evaluate better than we can, and it's not going to tell us how the world should be. It can help us realize the world as we've determined it should be, but it's not going to do the determining. That is fundamentally a human conversation, a human process that we need to keep going, and it will never stop; and then we have the technologies around us help us realize those dreams.

And lastly, I want to keep artificial intelligence boring. I know that's not as exciting as being able to figure out ethics, to determine exactly what a good person is and then just follow the machine. That would be really exciting if we could do it, but it's just not something that can be done. I also don't want to be completely negative here. Despite the last half hour or so, I'm actually really positive about many of the benefits that artificial intelligence and machine learning could bring us. I just think it needs to be focused on the things it's possible to achieve, and that is identifying and labeling the descriptive features that ground our evaluations. Instead of determining who is dangerous, it can detect a gun in a bag. Instead of determining who a good candidate is, it can rank the CVs by number of publications. It's hard to read a hundred CVs, but a machine learning algorithm could probably do it really fast, and we could verify that it was doing it correctly. This is still hugely beneficial, and I think we underestimate the power it could have.
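That last proposal, ranking on a verifiable descriptive feature rather than outputting "good candidate", can be sketched in a few lines. This is a hypothetical illustration: the field names and the single-feature ranking are my own assumptions, and which features should count remains the hiring committee's decision.

```python
def rank_cvs(cvs):
    """Sort CVs by a descriptive feature that humans chose and that anyone
    can verify against the CV itself. The machine never outputs 'good
    candidate'; it only orders by a checkable count.
    cvs: list of dicts like {"name": ..., "publications": int}."""
    return sorted(cvs, key=lambda cv: cv["publications"], reverse=True)
```

Whether each publication count is correct can be checked by opening the CV, so the tool's efficacy is verifiable in exactly the sense the talk requires; the evaluative step, deciding that publications matter, stays with the humans.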
So, I'll leave it there. I really appreciate you all listening to me for the last half hour, and I look forward to your questions. I'm going to stop sharing my screen in a minute so I can see everybody; I feel quite alone looking at it like this. Thank you very much.

Thank you, Scott, that was very insightful. We actually have a few questions in the chat already, so while I'm reading those, I would appreciate it if people write their question, or at least an indication that they have a question, in the chat, because there are already quite a few of them, and if you raise a hand I might miss it. I'll start with the first question, which arrived, I think, thirty seconds after you started the talk. That was from Enol. Would you like to ask your first question?

Yes. Thanks, Scott, for your presentation, very nice. I got stuck when you started to explain the difference between automated and delegated, and I was worried about it, as I wrote in the chat. If you take the example of face recognition: there are face recognition systems that make use of explicitly selected facial features, like the position of your nose, the distance between the eyes, the color of the eyes, et cetera, and there are some that learn these features, and we do not necessarily know what kind of features those are. Both, of course, have the same effect, namely they recognize people, they classify people, et cetera; but one you would call automated, because we as humans select the features, and the other delegated, and I don't get why it is even important to make that distinction.

Well, I think it's fundamentally important, because in one case we're deciding what features are important for grounding the decision in the end. It's the control problem: we have that control. But if we delegate that to a machine, which is fine in face recognition (I have lots of
problems with face recognition, but not for this reason), then machine learning algorithms specifically are able to use their own considerations. That is what makes them more powerful: they're not confined to what we can think of, to humanly articulable reasons. They have a host of other reasons that we could never understand, and I think that's what gives them their power. But it does matter, because one is going to be explainable and one is not.

So what is the word, then? I was looking for that word: explainability. So your distinction between automated and delegated is whether or not it's explainable?

If it's just a clarity issue, then that works fine for me, but I prefer "delegating the decision", because I think that makes it clearer, maybe, for my community. But I'll leave it for now.

If I may: you gave a very nice example of a decision tree with two levels. Realistic decision trees have thousands of levels, of course. Is that explainable?

Yes, I mean, it's fundamentally explainable in either case. Whether you have a thousand levels or two levels, you can still explain it; somebody could explain it in the end. We could in principle get an explanation. But with machine learning we can't, right now at least, in principle get an explanation. So there's a difference between the two.

And I'm very interested in this, because you mentioned this one example, the AI that was used to determine whether someone should be hired or not, the quality of candidates, and I think you made really good points against that being used. But I'm wondering: would you say it's perfectly fine to use an AI if it's GOFAI, good old-fashioned AI, with a decision tree behind it, and only problematic when we use machine learning?

Right. So there might be other problems associated with GOFAI in that situation, but on the arguments I've made today, we still have control over what considerations ground the evaluation. For example, if we're using a hiring algorithm in the academic sphere that's basically just asking: do they have greater than five publications, and did they get a PhD with the word "ethics" in it, and those are the only two things we care about for separating out candidates, then of course we can argue about whether those considerations are good, but it's still us deciding what the considerations are. So it doesn't fall foul of what I've talked about today. It may still be problematic, but not for this reason.

Okay. So, moving on to the next question; somebody has already posted a link to the paper. We had a question from Reynold about having a disease: would that be an evaluative output? Reynold, can you clarify what the question is about?

I think the question is pretty clear. I use the example in my paper of detecting skin cancer: there's an algorithm that can look at a mole, take that as an input, and evaluate whether you have skin cancer or not. This I don't think is evaluative, because it's verifiable, right? We can just take a biopsy and say: well, did the algorithm get it right or not? It's a zero or a one, do you have skin cancer or not? So in that case it's not an evaluative output; there's no evaluation. I guess if it were more mental, if we consider mental issues, which is the next line in the question, then I think there's a blurry line there, but I'm not experienced enough in that field to know.

So I have a
very strong feeling, and I think I saw this somewhere in the chat window as well, that what you're worried about is whether or not there's a ground truth, an objectively verifiable ground truth.

Yes, we could say that; I'm happy with that language.

Okay, good, thanks. Now we have a question from David. He's wondering whether you equate evaluative outputs with outputs of moral evaluation. Is there necessarily a moral component? David, maybe you can clarify that.

Yes, thanks for the talk also, Scott. I think Enol just touched upon it in the previous question. I was wondering what you mean by "evaluation", and I think the ground-truth part is probably more general than moral evaluation, but I had the feeling you were thinking about moral evaluation.

As I said in the talk, it's also aesthetic evaluation: anything with a value attached to it. But I do think the verifiability here is very important, and it may indeed be easier to just ask: is there this ground truth, and is verifiability part of it? That could be possible; I need to think more about it.

Okay, thanks. Now we have another question, also drilling down on the...

No, no, let's take someone else, come on.

Okay, we'll skip this one. We have a question from Jared about the gray area between evaluative outputs and something more objective. Jared, do you want to ask your question?

Yes, thank you, very interesting presentation. One thing I was thinking about is that it would be terrifically easy, for pretty much all the examples you gave, to reframe them as if they are not giving evaluative outputs but more like objective judgments. You can say the AI does not evaluate appearance; it judges similarity to other applicants, as ranked on suitability. You can reframe what the AI does as something objective, and then say it's just meant to inform people; the people make the decisions, and we just do this objective bit that informs the decision. I have a feeling that focusing on the outputs might not be the right emphasis, but I'm not completely sure, so I just wanted to throw it out and see.

And I'm worried about that too; I'm intentionally making this bold so that I can get pushback. But I like your framing better: we're just finding applicants that are like the people we've hired in the past and classified as good, so we've done that process of judging whether they're good. However, we don't know what considerations are grounding that similarity, how the system is reaching it, so I think it still falls foul of my argument. And in my more hopeful moments about AI: instead of trying to find which applicants are good based on the past applicants we've determined to be good, we should think about what makes them good, what it is about them, and once we determine that, then we can use AI, perhaps we even need AI, to automate finding that feature and sorting the applications. That's what I really want. This "suitability" move is a nice workaround, and I'm sure the technology companies will use it, because it sounds better and pushes more of the responsibility onto human beings, but I don't think it yields justified or meaningful control. What's the difference between "more suitable, like our better applicants from the past" and just "a good candidate"? It amounts to the same exact thing, and then it falls into the same problems.

Thank you.

Okay, next we have a question from Sylvia. Sylvia, can you plug in and ask your question?

Yes. If I can also just add something about what Jared was
saying: I think it would at least be an improvement if it's clarified what the AI actually does, for example "matching similar CVs", because in terms of the socio-technical system, you at least remove the tendency to trust that the AI has some additional capability with respect to us, the sense that we should just trust what the computer is saying.

I agree, I think it's a big step forward like that. It still falls victim to some of what I've said, but it's a step forward, I agree.

But now, what I wanted to ask you is this. I actually don't mind your idea of saying, okay, let's keep it boring, because to me that just sounds like: keep it within the boundaries of what it can actually do, since it can't do these contextual kinds of evaluations. But doesn't it just boil down to: it can verify tangible things very well, objects, like your organic sample, or checking moles for possible skin cancer, as against intangible things? Is it boiling down to "let's just keep it to objects"?

Well, I think much hinges on the verifiability. I mean, earthquakes are tangible, in a sense: we could predict earthquakes with an algorithm, and we would only know after the fact whether it got it right. You might be right that it's about the tangible; it depends on what we mean by "tangible". But really, I think the key word is verifiability: can we verify it after the fact, or during? And that will matter, when we can verify it. But if we can't verify it at all, and I'm trying to say that all evaluative judgments cannot be verified, then we can't do that. I'm trying to give an easier way, and maybe verifiability just is easier, and pragmatically I should just be using that. I'm open to that, and I will be thinking about it more.

So can I just leave you with a sort of devil's-advocate question? Because it's the question that I would ask myself, which is: we can't verify an evaluation from a human either. If a judge decides, or a jury, if you're in a common-law system, that something was suspicious, you're going to fall into the same problem. So someone who really wants to use the algorithm might say: then let's make it statistically sound, or whatever, like the Lombroso nightmare with the criminal-behavior AI. Whether it's a human or an AI, we still can't verify that evaluation, so let's just save money and use the AI anyway. That would be my biggest problem.

Right, and I think you're absolutely right. I'm saying evaluative judgments aren't verifiable, so they're not going to be verifiable if a human makes them either. But to go back to the phrase "saying how the world ought to be, and how people ought to be": that is up to us to do, and the idea that we're going to use a tool to do it instead doesn't make any sense, especially if we can't say anything about the tool's efficacy. If I'm right, and I find it hard to construct the opposing argument, which would have to say that it doesn't matter how we come to the decision about how the world ought to be, that we just need to accept some answer so that it's easier for the machine to get there: that just doesn't make any sense to me. It's something that needs to be worked out more, but I think there's definitely a difference between a human making an evaluation that we can't verify and a machine making such an evaluation. One of the big differences is that a human can explain how they came to the decision and what considerations they used, and we can have the disagreement about whether the considerations they used were indeed okay. And of course, we can get
into the idea that humans can be deceptive: they'll lie about how they came to the decision; they were very biased against a particular person but they're not going to say that, they're going to say they used objective means. But the responsibility then falls on them, and we shouldn't be delegating that process to a machine.

Okay. Herman, do you want to ask your question?

Yeah, great. Thanks, Scott, I really liked hearing your paper, but I also have a question on something that has been touched on quite a few times already: the verifiability of an evaluative judgment. I still struggle to understand what exactly you mean by that. Do you want to say that nothing is suspicious, nothing is wrong, nothing is beautiful? Because if some things are beautiful, then we can just check whether the output corresponds to reality. In the answer you just gave, you seemed to hint that we want to verify the reasoning behind the judgments, and that seems to be something different. So is it that last thing you're interested in, that the reasoning should be verifiable, or is it the output?

Well, I think both. First of all, it's the output. With algorithms, in computer science it's zeros and ones, and I think we should be working towards... I know all the Bayesian stuff, but I'm not going to get into that. Even if I accept that there might be a possibility we could verify what's beautiful or not (and I think there are a lot of complications in that, because it's going to be culturally specific, even person-specific; though even if there's somewhat of a human-constructed truth over it, you might be right), that's why I added in all those reasons about context changing the considerations grounding these judgments. Movies that were amazing 50 years ago: if the same movie came out today, we'd say, "well, it's kind of tired, it's not something we're interested in, it's not beautiful anymore." And that's because the context has changed. We've already heard the Beatles; we can't have a new Beatles now, somebody has to evolve. Even with how the world ought to be, the context changes: the climate changes, a global pandemic, how we live our lives now, the fact that we're doing this digitally instead. These considerations change. So it's not that there can't be any truth at the moment, but algorithms are not good with this shifting context, this shifting situation in which we ground our judgments. What a mountain looks like is, overall, pretty much static; there are some differences between mountains, but overall an algorithm trained to spot mountains is going to get closer and closer, better and better at it. That's not the case with evaluative judgments. So you might be right: we might be able to say, "look, everybody agrees that's beautiful, the algorithm got it right." But then we have to worry about this changing context, and that's if we can get everybody to agree on those things, which I doubt is going to happen.

Yeah. Of course, there are many meta-normative views that contradict some of the things you just said, so maybe you could be very explicit that you're endorsing a specific one.

Actually, I'd be curious to hear about that, because I do read a little bit of meta-ethics, though not meta-aesthetics. Even if, for instance, moral realism is true and there are mind-independent moral truths out there, I don't see, unless we can access them (and the moral epistemology behind that is exceptionally difficult and there's no solution to it yet), but if we can't access them at this point, and then
the algorithm comes in and people have hubris: they believe the algorithm can actually access them, that "this algorithm is actually better than us, it can actually detect the real moral truths." Well, then we'd be left in a situation where we just have to trust that that's the case. We'd have to say, "well, I guess we should kill the five instead of the one in the trolley problem; I didn't know that was true, but the algorithm said it's true." How would we ever trust that? So even if the meta-ethics were such that there were moral truths, I don't see that solving the problem, and I'm not sure this hinges on a mind-independent meta-ethical viewpoint at this point. But I think it's really interesting, and I think we should talk about that more.

Let's do that. Okay, next in line is Madeline, but only if you think your question hasn't been answered before, because it goes into context for evaluative judgments.

Yeah, so, this is funny: the question has just become more complicated, I think, so maybe I'll just push a little bit. I'm wondering whether this output of an evaluative statement is evaluative at all. If the AI program says "this is beautiful," what it's really doing is saying, "I've processed this data, and here's what you people seem to think is beautiful, in this particular moment, in this particular context." I'm not so strong on the technical side here, but it seems that if you're using machine learning, this is a learning process and it would be able to adapt to a certain context. So I guess I'm just challenging the idea that an AI program could come up with an evaluative statement at all.

Well, I think you're right, and I should clarify that more. Machines don't have any intentions; they don't have any viewpoints or anything like that. They're just doing the statistical analysis. But practically, when people use HireVue's algorithm, they're getting an evaluative output. You could try to put a disclaimer in there, saying "this is not a real evaluation, this is simply..." I have, in my paper on explainability, this great explanation of why AlphaGo made a particular move in the game of Go, or how it comes to a particular move, and really that is what the algorithm is doing, but of course it's over the head of anybody not in machine learning. The practical situation is that people are using these algorithms and taking the output as an evaluative judgment. And language is really important here; that's why I said it's a step forward if we say "this algorithm has determined that this candidate is similar to these three candidates that you previously considered good." That's a step in the right direction, for sure. I still don't like it, but it's more accurate to say it like that, even if it's not as exciting.

So the concern is about the practical use and how it's taken up by people?

No, I think that's also a concern, for sure, but I don't think it solves the problem to just stipulate that the algorithm is finding the person most similar to these three candidates you've had before. Similar how? The "how" is so important; you can't just disregard it. Maybe they're similar because they're white males: that's what Amazon's algorithm did, and that "how" was fundamentally important. So I think this is just a clever little trick some companies are doing, saying "we can just make this little move with language and change the whole game." No, the same problems exist, in my opinion.

Peace. All right, I think I'm next, if this is still
enough time. Okay, can you hear me? Yeah, sorry, I was muted while I was talking all this time. Yeah, I said that we have three minutes, and next in line is Roel, so go on, and it would be great if we could also get to Catalin's question.

Okay. Well, hi everyone, great to be here. Scott, thanks for your talk, I really appreciated it. One thing you're doing is taking the focus off the technology (the logic, etc.) and giving arguments to motivate us to look at the practice and all the choices we're making, and that's where I've spent a lot of my time in the last couple of years as well. I have a specific question about delegation, about where you draw the line, but before I get to it, I want to point to one thing. What I did myself was to start looking at all the different choices that people make, and a lot of what you're talking about is: how do you determine what a label is, what can you capture in the label, and is the thing you're capturing an epistemologically sound thing that you can verify? I like the idea of verification, but I also think that having or not having a ground truth would be too rough a categorization to go by, because there are many applications, for instance in biology, where you don't really have a ground truth, but it's still very useful to use evaluations based on our best guesses and then have a machine run the analysis for us, just as an example. But I'll just say that and leave the question there, because I think you've already said enough about context and delegation and all that.

Okay, thanks. Great. One last minute: Catalin, are you still with us?

I'm still with you, yes. Decision trees, I'll be quick. Regarding your answer to Emile's earlier question about explainability: you said that in principle a decision tree would be explainable even if it has a thousand layers. Yes, in principle that's explainable, but that holds for deep learning too. So is that an essential difference? Because of course a thousand-layer network is not comprehensible to people in general.

Wait, you're saying that in the same way that a decision tree of a thousand layers is explainable, so is deep learning?

Yeah.

Okay, that goes against what I understand as deep learning. The whole problem the field of explainable AI attempts to solve is that we can't explain how deep learning algorithms come to the outputs they produce. But I'm not a machine learning expert.

I mean, you can explain the fundamentals of it: how you construct these things, how they work.

Okay, sorry. There are many different types of explanation, you're right. But the difference between the two is that with the decision tree you can actually explain the features and considerations which led to the output, which factors played a role, how, and maybe even how much, whereas in the deep learning situation you cannot do that anymore.

Okay, with that I would agree with you. You could ask back, "why did you make this decision?" and get a comprehensible answer from a decision tree.

Yeah, exactly, and that's what's really important to me here: the considerations that played a factor.

Yeah, happy with that.

Yeah. The next question is mine, but unfortunately we don't have time for it, or for all the remaining questions, which I have to admit are really exciting. So feel free to get in touch with Scott later to discuss them.

I will.

So thanks again, Scott, that was a fascinating talk. Thanks to everybody.

Thank you again for the invitation, it was a lot of fun. Thanks for the questions; I'm sorry we didn't have a chance to go through all of them.

Okay, see you
all next week. Next week our speaker will be Jared, so looking forward to that.
The $125,000 Summer Singularity Challenge From the SingInst blog: Thanks to the generosity of several major donors†, every donation to the Singularity Institute made now until August 31, 2011 will be matched dollar-for-dollar, up to a total of $125,000. Donate now! (Visit the challenge page to see a progress bar.) Now is your chance to double your impact while supporting the Singularity Institute and helping us raise up to $250,000 to help fund our research program and stage the upcoming Singularity Summit… which you can register for now! † $125,000 in backing for this challenge is being generously provided by Rob Zahra, Quixey, Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edström, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku. ---------------------------------------- 2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of Slate have argued that “We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.” We couldn’t agree more — in fact, the Singularity Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the front cover of TIME magazine. The last 1.5 years were our biggest ever. Since the beginning of 2010, we have: * Held our annual Singularity Summit, in San Francisco. Speakers included Ray Kurzweil, James Randi, Irene Pepperberg, and many others. * Held the first Singularity Summit Australia and Singularity Summit Salt Lake City. * Held a wildly successful Rationality Minicamp. * Published seven research papers, including Yudkowsky’s much-awaited ‘Timeless Decision Theory‘. 
* Helped philosopher David Chalmers write his seminal paper ‘The Singularity: A Philosophical Analys
Review Report of Davidson on Takeoff Speeds (2023) I have spent some time studying Tom Davidson’s Open Philanthropy report on what a compute-centric framework says about AI takeoff speeds. This research culminated in a report of its own, which I presented to Davidson.  At his encouragement, I’m posting my review, which offers five independent arguments for extending the median timelines proposed in the initial report.  The Executive Summary (<5 min read) covers the key parts of each argument. Additional substantiation in the later, fuller sections allows deeper dives as desired. This work assumes familiarity with Davidson’s, but a Background Info appendix summarizes the central argument.  Thanks to Tom for his encouragement to share this publicly, and for his commitment to discussing these important topics in public. Excited to hear any feedback from y'all.   Special thanks also to David Bloom, Alexa Pan, John Petrie, Matt Song, Zhengdong Wang, Thomas Woodside, Courtney Zhu, and others for their comments, reflections, and insights on this piece. They were incredibly helpful in developing these ideas to their current state. Any errors and/or omissions are my own.  Executive Summary (<5 min read) This review provides five independent arguments for extending the median timelines proposed in Davidson's initial report.  These arguments do not attack the flywheel at the core of the piece. They do not challenge  the progression toward increasingly sophisticated AI or the compounding over time of investment, automation, and thus R&D progress (as summarized here). But they do raise implications for the report’s presented timelines. Specifically, they describe both the structural technical and the geopolitical obstacles that could slow the progress of AI-driven automation.  These key arguments pertain to:  1. Dataset Quality.  2. The Abstract Reasoning/Common Sense Problem.  3. GDP Growth & Measurement. 4. R&D Parallelization Penalty. 5. Taiwan Supply Chain Disruption. 
These arguments are developed and operate
Mandating Information Disclosure vs. Banning Deceptive Contract Terms

Economists are very into the idea of mutually beneficial exchange. The standard argument is that if two parties voluntarily agree to a deal, then they must be better off with the deal than without it, otherwise they wouldn't have agreed. And if the terms of that deal don't harm any third parties,* then the deal must be welfare-improving, and any regulatory restrictions on making it must be bad. One objection to this argument is that it's not always clear what is and what is not "voluntary." I once had a well-published economist friend argue that there are no gradations of voluntariness: either a deal was made under some kind of compulsion or it wasn't. I asked him if he would be OK letting his then pre-adolescent son make any schoolyard deal he wanted as long as it was not made under any overt threat, and I think (but am not totally sure) that he has since backed off this position. So there is an argument for purely paternalistic restrictions on freedom of contract. Another objection, one which economists tend to take more seriously, relates to information. Specifically, there is the idea that maybe one party to the contract is not fully informed about its terms. For this reason, many economists are willing to entertain policies by which firms are required to disclose certain information, and to do so in a way that is comprehensible to consumers. So for example we now have "Schumer boxes" that govern the ways in which credit card companies present certain information in promotional materials. This seems to many people to be a reasonable remedy: if the problem was that one side of the transaction was ignorant, then a regulation that eliminates that ignorance, while at the same time not interfering with their freedom to engage in mutually beneficial exchange, must be a good thing. I think this reasonable-sounding position is largely wrong.
The standard asymmetric information stories with rational agents are stories in which the uninformed party knows that it is uni
Defining "optimizer"

I've been thinking about how to define "optimizer". My attempted definition of "optimizer" is: "something such that there is a method of describing a change to it that concretely describes a system scoring unusually highly on another function, for a wide range of functions, with a significantly shorter description length than specifying a system that scores unusually highly on that function from scratch." The basic justification for my definition is that if something is an optimizer, then if you were to write a program that emulates the system, the program would be an implementation of some optimization algorithm. And you could describe something that optimizes for something else by changing the parts of the code responsible for what is, in effect, the system's objective function. The justification can be extended to logical descriptions of systems that aren't necessarily describable as programs. I said in the definition that the new system needs to be concretely described. This is so you can't call a rock or other random thing an optimizer by describing a new system as something like "this rock, except great at optimizing". The same goes for non-rocks. The definition seems to work okay. Humans and AIs would be considered optimizers, because a system optimizing for x can be described by describing a change to the emotions or values of the human or AI in the right way. And a toaster isn't an optimizer. Even though toasters are good at scoring highly on making well-toasted bread, the mechanisms for toasting aren't particularly useful for optimizing much else, so I don't think describing a change to a toaster to make it score well on a random other function would be easier than specifying the system from scratch. And I don't think other various things in our environment, like tools and inanimate objects, would be classified as optimizers.
This is because, like toasters, tools and miscellaneous objects don't do anything much that could generalize to a wide range of other functions, so I don't think describing an optimizer would be made much easier or less complicated than specifying one from scratch. Thoughts?
This is Water by David Foster Wallace Note: It seems like great essays should go here and be fed through the standard LessWrong algorithm. There is possibly a copyright issue here, but we aren't making any money off it either. What follows is a full copy of "This is Water" by David Foster Wallace his 2005 commencement speech to the graduating class at Kenyon College. Greetings parents and congratulations to Kenyon’s graduating class of 2005. There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?” This is a standard requirement of US commencement speeches, the deployment of didactic little parable-ish stories. The story thing turns out to be one of the better, less bullshitty conventions of the genre, but if you’re worried that I plan to present myself here as the wise, older fish explaining what water is to you younger fish, please don’t be. I am not the wise old fish. The point of the fish story is merely that the most obvious, important realities are often the ones that are hardest to see and talk about. Stated as an English sentence, of course, this is just a banal platitude, but the fact is that in the day to day trenches of adult existence, banal platitudes can have a life or death importance, or so I wish to suggest to you on this dry and lovely morning. Of course the main requirement of speeches like this is that I’m supposed to talk about your liberal arts education’s meaning, to try to explain why the degree you are about to receive has actual human value instead of just a material payoff. 
So let’s talk about the single most pervasive cliché in the commencement speech genre, which is that a liberal arts education is not so much about filling you up with knowledge as it is about “teaching you how to think.” If you’re like me as a student, you’ve never
Act into Fear and Abandon all Hope
[AN #163]: Using finite factored sets for causal and temporal inference Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter. Audio version here (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. This newsletter is a combined summary + opinion for the Finite Factored Sets sequence by Scott Garrabrant. I (Rohin) have taken a lot more liberty than I usually do with the interpretation of the results; Scott may or may not agree with these interpretations. Motivation One view on the importance of deep learning is that it allows you to automatically learn the features that are relevant for some task of interest. Instead of having to handcraft features using domain knowledge, we simply point a neural net at an appropriate dataset and it figures out the right features. Arguably this is the majority of what makes up intelligent cognition; in humans it seems very analogous to System 1, which we use for most decisions and actions. We are also able to infer causal relations between the resulting features. Unfortunately, existing models of causal inference don’t model these learned features -- they instead assume that the features are already given to you. Finite Factored Sets (FFS) provide a theory which can talk directly about different possible ways to featurize the space of outcomes and still allows you to perform causal inference. This sequence develops this underlying theory and demonstrates a few examples of using finite factored sets to perform causal inference given only observational data. 
Another application is to embedded agency (AN #31): we would like to think of “agency” as a way to featurize the world into an “agent” feature and an “environment” feature, that together interact to determine the world. In Cartesian Frames (AN #127), we wor
Main paths to impact in EU AI Policy

Epistemic status
================

* This is a tentative overview of the current main paths to impact in EU AI Policy. There is significant uncertainty regarding the relative impact of the different paths below
* The paths mentioned below were cross-checked with four experts working in the field, but the list is probably not exhaustive
* Additionally, [this post](https://forum.effectivealtruism.org/posts/fbG6wWZhJ3jt3xHxS/should-you-work-in-the-european-union-to-do-agi-governance) may also be of interest to those interested in working in EU AI Policy. In particular, consider the arguments against impact [here](https://forum.effectivealtruism.org/posts/suyb4vC75Wo9EKgyu/argument-against-impact-eu-is-not-an-ai-superpower)
* This article doesn't compare the [potential value of US AI policy careers with those in the EU for people with EU-citizenship](https://forum.effectivealtruism.org/posts/e7NKpwD5z2Mnc7y7G/working-in-us-policy-as-a-foreign-national-immigration). A comparison between the two options is beyond the scope of this post

Summary
=======

People seeking to have a positive impact on the direction of AI policy in the EU may consider the following paths to impact:

1. Working on (enforcement of) the AI Act, related AI technical standards and adjacent regulation
   1. Working on the current AI Act draft (possibility to have impact immediately)
   2. Working on technical standards and auditing services of the AI Act (possibility to have impact immediately)
   3. Making sure the EU AI Act is enforced effectively (possibility to have impact now and >1/2 years from now)
   4. Working on a revision of the AI Act (possibility to have impact >5 years from now), or on (potential) new AI-related regulation (e.g. possibility to have impact now through the AI Liability Directive)
2. Working on export controls and using the EU's soft power (possibility to have immediate + longer term impact)
3.
Using career capital gained from a career in EU AI Policy to work on different policy topics or in the private sector While the majority of impact for some paths above is expected to be realised in the medium/long term, building up career capital is probably a prerequisite for making impact in these paths later on. It is advisable to create your own personal [“theory of change”](https://lynettebye.com/blog/2020/11/16/theory-of-change-as-a-hypothesis) for working in the field. The list of paths to impact below is not exhaustive and individuals should do their own research into different paths and speak to multiple experts. Paths to impact =============== Working on (enforcement of) the AI Act, related AI technical standards and adjacent regulation ---------------------------------------------------------------------------------------------- Since the EU generally lacks cutting edge developers of AI systems, the largest expected impact from the EU AI Act is expected to follow from [a Brussels effect](https://www.governance.ai/research-paper/brussels-effect-ai) of sorts. For the AI act to have a positive effect on the likelihood of safe, advanced AI systems being developed within organisations outside of the EU, the following assumptions need to be true: * Advanced AI systems are (also) being developed within private AI labs that (plan to) export products to EU citizens. * The AI-developing activities within private AI labs need to be influenced by the EU AI Act through a Brussels Effect of some sort (de facto or de jure). This depends on whether advanced AI is developed within a product development process in which AI-companies take EU AI Act regulation into account. * Requirements on these companies either have a (1) slowing effect on advanced AI development, buying more time for technical safety research or better regulation within the countries where AI is developed (2) direct effect, e.g. 
through risk management requirements that increase the odds of safe AI development ### Working on the final draft of the AI Act **Current status of AI Act** The [AI Act](https://artificialintelligenceact.eu/the-act/) is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. On April 21, 2021 the Commission published a proposal to regulate artificial intelligence in the European Union. The proposal of the EU AI Act will become law once both the Council (representing the 27 EU Member States) and the European Parliament agree on a common version of the text. The Czech Presidency has presented its full compromise text to the Council, in which it attempts to settle previously contentious issues, namely how to define an AI system, classification of AI systems as high-risk, governance and enforcement, as well as national security exclusion ([see timeline](https://artificialintelligenceact.eu/developments/)). This [was approved](https://www.euractiv.com/section/digital/news/eu-countries-adopt-a-common-position-on-artificial-intelligence-rulebook/) by ministers from EU Member States on December 6. In the European Parliament, negotiations on a joint position are still ongoing. Recently the European Parliament’s rapporteurs circulated a new batch of [compromise amendments](https://www.euractiv.com/section/digital/news/leading-meps-tackle-enforcement-in-ai-regulation/?utm_source=substack&utm_medium=email) redesigning the enforcement structure of the AI Act. An agreement in the European Parliament [should come](https://www.ceps.eu/ceps-publications/the-ai-act-and-emerging-eu-digital-acquis/) by mid-2023, but a final text agreed between the EU institutions is not likely before 2024. 
**Paths to impact AI Act** Depending on the role, people could still have some impact improving the current draft of the EU AI Act, which could have an effect internationally through the [Brussels Effect](https://uploads-ssl.webflow.com/614b70a71b9f71c9c240c7a7/630534b77182a3513398500f_Brussels_Effect_GovAI.pdf). The following organisations could still have positive impact on the final draft: 1. Think tanks and NGOs advising the Council and Parliament 2. Member states can improve the quality in trilogue negotiations through the Council. Sweden could be especially influential given their upcoming presidency of the Council in the first half of 2023. 3. Assistants to relevant MEPs (although it is hard to acquire such a role in the relevant timeframe without pre-existing career capital and a degree or luck) ### Working on the technical standards of the AI Act **Current status of standard setting of the AI Act** The AI Act’s high-risk obligations will be operationalised by technical standards bodies. These bodies need to be filled by technical experts within national standard-setting bodies, e.g. [VDE](https://www.vde.com/en/fnn/topics/standardization) in Germany, through [CEN / CENELEC](https://www.cencenelec.eu/) / [ETSI](https://www.etsi.org/) in the [JTC21](https://www.cencenelec.eu/news-and-events/news/2021/briefnews/2021-03-03-new-joint-tc-on-artificial-intelligence/). Through request of the European Commission this process runs parallel with that of the AI Act and is [currently ongoing](https://www.euractiv.com/section/digital/news/leading-meps-tackle-enforcement-in-ai-regulation/?utm_source=substack&utm_medium=email).Standards play such a critical role in bringing down the compliance costs that they have been defined as the ‘real rulemaking’ in an [influential paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3896852) on the EU’s AI rulebook. 
**Paths to impact in standard setting**

Technical standards can have an effect internationally through the [Brussels Effect](https://uploads-ssl.webflow.com/614b70a71b9f71c9c240c7a7/630534b77182a3513398500f_Brussels_Effect_GovAI.pdf). The following organisations and their personnel impact technical standards:

1. In addition to private sector organisations, NGOs and think tanks are invited by member state standard-setting bodies to provide their input
2. National standard-setting bodies usually appoint experts to participate in the national committees during the negotiations of technical standards. With sufficient career capital, people can help their national committee

People with more technical (governance) expertise could make sure the technical standards actually make AI development safer and more ethical, preventing them from becoming a tick-box exercise. There seems to be the most room for making an impact through measures on:

* Risk management systems: The AI Act imposes requirements regarding internal company processes, such as requiring adequate risk management systems and post-market monitoring.
* Documentation: The AI Act introduces requirements on documentation of companies’ AI systems, to be shared with regulators and users. Similar to “model cards,” an AI system should be accompanied by information about its intended purpose, accuracy, performance across different groups and contexts, and likely failure modes.
* Requirements on the accuracy, robustness, and cybersecurity of AI systems

### Working on EU AI Act enforcement

**Current status of enforcement**

Making sure the EU AI Act is enforced is a prerequisite for impact and for the Brussels Effect coming into existence. The Act will be overseen by a European AI Board (or, possibly, an AI office, as requested by some policymakers). However, the Board’s role is advisory, and enforcement will be primarily the responsibility of national market surveillance authorities.
The European AI Board collects and shares best practices, takes positions on emerging issues for the implementation and enforcement of the regulation, and will also monitor the latest trends in AI. [The most recent compromise text](https://www.euractiv.com/section/digital/news/leading-meps-tackle-enforcement-in-ai-regulation/?utm_source=substack&utm_medium=email) by EP rapporteurs gives the national supervisory authorities the power to conduct unannounced on-site and remote inspections of high-risk AI, acquire samples related to high-risk systems to reverse-engineer them, and acquire evidence to identify non-compliance.

**Paths to impact in enforcement**

The following paths to impact on enforcement are identified:

1. NGOs, think tanks and members of Parliament could lobby for better (use of) budgets (a possibility to have impact now). The GDPR [showed](https://www.csis.org/blogs/strategic-technologies-blog/3-years-later-analysis-gdpr-enforcement) that Member State surveillance authorities often lack the resources, or fail to use them wisely, to prevent backlogs in their processes.
2. Working in enforcement (a possibility to have impact in a few years). In order to work on enforcement in a few years, it is probably good to first build up experience:
   1. Individuals could already join existing national and European supervisory authorities, e.g. on [enforcement of the Digital Services Act](https://ec.europa.eu/newsroom/repository/document/2022-46/2022_Job_opportunities_within_the_DSA_Enforcement_Team_TaK2bXp46J9EXqQHPNAgUzLmI_91635.pdf) with the Commission. As the Digital Services Act comes into force earlier than the AI Act, its enforcement team may also end up working on AI Act enforcement. Individuals could also use this experience to move over to AI Act enforcement once the Act exists.
This could be either on the level of the European Artificial Intelligence Board (EAIB), chaired by the Commission, or on the level of the national supervisory authorities.
   2. Working on audits for cutting-edge AI systems now, which can be used for third-party conformity assessments / enforcement when the regulation comes into force

### Working on adjacent AI regulation and a revision of the EU AI Act

Over the coming years, more regulation on AI and adjacent technologies is expected. The European Commission recently released a proposal for an AI Liability Directive to change the legal landscape for companies developing and implementing AI in EU Member States. This would require Member States to implement rules that would significantly lower evidentiary hurdles for victims injured by AI-related products or services to bring civil liability claims. In addition, AI-related R&D regulation is expected at some point, since R&D is exempted from the EU AI Act.

There will also be work at the member state level in the technical bodies that will work on EU AI Act secondary legislation. There is a good chance some provisions of the AI Act will be decided by an 'implementing act'.

Finally, most EU laws undergo a revision several years after they are created. The AI Act will probably be adopted in 2024 and will only come into force around 2026 (to give organisations time to implement the requirements). Citing article 84 of the current EC proposal: *"By [three years after the date of application of this Regulation] ... and every four years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public."* There is path dependency, so the structure of the law is very unlikely to change after this, but working on this evaluation and subsequent changes seems a viable path to impact in a few years.
Working on export controls and using the EU’s soft power
--------------------------------------------------------

**Current status of export controls and the EU’s soft power**

The entire high-end semiconductor supply chain relies on Netherlands/EU-based ASML, because their EUV lithography machines are currently the only ones in the world that enable the creation of the most advanced chips. ASML continues to innovate, and it will be hard for non-EU-based organisations to catch up. In July the [US government pressed](https://www.fierceelectronics.com/sensors/us-wants-dutch-ban-asml-selling-chip-gear-china-0) the Dutch government to ban exports of the latest generation of ASML EUV machines. In [a recent interview](https://www.nrc.nl/nieuws/2022/11/18/schreinemacher-chinese-exportbeperking-voor-asml-alleen-op-eigen-voorwaarden-a4148738) the Dutch Minister of Foreign Trade said that the Dutch government will only engage in stricter export controls on its own terms.

On the other side there is international collaboration, mainly between the US and the EU via the [Trade and Technology Council (TTC)](https://ec.europa.eu/commission/presscorner/detail/en/IP_21_2990). The goal of the TTC is to harmonise AI regulation between the EU and US. Working on the EU side would give people the opportunity to see important governance measures incorporated into the US legal system. The EU recently [opened an office in SF](https://digital-strategy.ec.europa.eu/en/news/eu-opens-new-office-san-francisco-reinforce-its-digital-diplomacy) to reinforce its digital diplomacy. The European Commission and US Administration leaders will gather in Washington on December 5 for the latest EU-US TTC summit. Finally, there are diplomatic discussions on AI governance ongoing at the OECD and other multilateral organisations. This is probably the space where other measures outside of the consumer product space could be proposed (e.g.
on regulating autonomous weapons or on compute governance)

* [OECD.AI](https://oecd.ai/en/) is a platform to share and shape trustworthy AI
* The UN is starting to think more about future generations, e.g. through its [Declaration for Future Generations](https://www.un.org/pga/76/2022/09/12/general-assembly-declaration-on-future-generations-pga-letter/) (part of [Our Common Agenda](https://www.un.org/en/common-agenda)). Looking at the track record of, e.g., the UN Sustainable Development Goals, it is expected that Our Common Agenda will have an impact on the way jurisdictions deal with topics regarding technology development.

**Paths to impact in export controls and the EU’s soft power**

As mentioned above, the US government plays an important role in pressing the Dutch government into stricter export controls. Therefore the following paths to impact regarding export controls (and other forms of compute governance) can be observed:

* Think tank research into the desirability of different forms of export controls on EUV machines. People could also work on researching other forms of compute governance.
* Working on export controls at the Dutch Ministry of Foreign Trade. It will probably be hard to have impact here without career capital, and luck is required to end up in the right position.

Regarding making an impact through international organisations and treaties (see also [this post](https://forum.effectivealtruism.org/posts/DK7N5YofbM2cfPi8h/european-union-ai-development-and-governance-partnerships#comments)):

* TTC: People need to have some luck to be in the right place in the EU team to have an impact here, since there is no “dedicated TTC organisation”.
This seems like a relatively hard route to impact, according to experts.

* OECD.AI: Sometimes they hire permanent staff, but there are also networks of experts that more senior people can join
* The UN is a platform for international coordination outside of EU-US relations and could be seen as a way to keep China in the loop on topics of (international) tech governance. It is possible to positively impact the UN in multiple ways:
  + Directly, from certain positions within the UN
  + Through member states’ permanent representations or as EU Ambassador
  + From within think tanks that play a major role in advising the UN
* There are also specialised think tanks on China-EU relations, some of which touch upon the topic of the semiconductor industry

Using career capital from EU AI Policy to work on different policy topics or in the private sector
--------------------------------------------------------------------------------------------------

Career capital within (tech) policy is highly transferable. Experience in regulating one specific technology will probably yield career capital for working on the regulation of other risky technologies (e.g. biosecurity, nanotechnology, or a completely “new” risky technology that’s currently not on our radar). Even for people who want to switch to policy work outside the tech space (e.g. animal welfare or foreign aid), tech policy career capital could still prove valuable. This makes starting a career in tech policy somewhat robust against scenarios in which regulation of other technologies becomes more pressing.

Some impact-focused individuals have switched from government regulation to private sector organisations to work on self-regulation. It is important to consider the question of [whether it’s good to work](https://80000hours.org/articles/ai-capabilities/) for an organisation that also accelerates AI capabilities.
[Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

[Geoffrey Hinton](https://en.wikipedia.org/wiki/Geoffrey_Hinton)—a pioneer in artificial neural networks—just left Google, as reported by the *New York Times*: [‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead](https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html) ([archive version](https://archive.is/TgPyC)). Some highlights from the article [emphasis added]:

> Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. **A part of him, he said, now regrets his life’s work**.
>
> “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said.

---

> “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

---

> Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job.

---

> In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network...At the time, few researchers believed in the idea. But it became his life’s work.

---

> In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
>
> Google [spent $44 million](https://archive.is/o/TgPyC/https://www.wired.com/story/secret-auction-race-ai-supremacy-google-microsoft-baidu/) to acquire a company started by Dr. Hinton and his two students.
> And their [system](https://archive.is/o/TgPyC/https://www.nytimes.com/2019/03/27/technology/turing-award-ai.html) led to the creation of increasingly powerful technologies, including new chatbots like [ChatGPT](https://archive.is/o/TgPyC/https://www.nytimes.com/2022/12/10/technology/ai-chat-bot-chatgpt.html) and [Google Bard](https://archive.is/o/TgPyC/https://www.nytimes.com/2023/03/21/technology/google-bard-guide-test.html). Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators [received the Turing Award](https://archive.is/o/TgPyC/https://www.nytimes.com/2019/03/27/technology/turing-award-ai.html), often called “the Nobel Prize of computing,” for their work on neural networks.

---

> **last year, as Google and OpenAI built systems using much larger amounts of data, his view changed.** He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. **“Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”**
>
> As companies improve their A.I. systems, he believes, they become increasingly dangerous. **“Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”**
>
> Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — [Google is racing to deploy the same kind of technology](https://archive.is/o/TgPyC/https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html). **The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said**.
---

> Down the road, **he is worried that future versions of the technology pose a threat to humanity** because they [often learn unexpected behavior from the vast amounts of data they analyze](https://archive.is/o/TgPyC/https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html). This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.

---

> “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. **I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.**”

---

> Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that **will not stop without some sort of global regulation**.
>
> **But that may be impossible**, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “**I don’t think they should scale this up more until they have understood whether they can control it**,” he said.
>
> Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: **“When you see something that is technically sweet, you go ahead and do it.”**
>
> **He does not say that anymore.**
Stephen Hsu on Cognitive Genomics

![Steve Hsu portrait](http://intelligence.org/wp-content/uploads/2013/08/hsu_w150.jpg)Stephen Hsu is Vice-President for Research and Graduate Studies and Professor of Theoretical Physics [at Michigan State University](http://www.epi.msu.edu/seminars/hsu.html). Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon. He was also founder of SafeWeb, an information security startup acquired by Symantec. Hsu is a scientific advisor to [BGI](http://www.genomics.cn/en/index) and a member of its [Cognitive Genomics Lab](https://www.cog-genomics.org/).

**Luke Muehlhauser**: I’d like to start by familiarizing our readers with some of the basic facts relevant to the genetic architecture of cognitive ability, which I’ve drawn from the first half of a [presentation](https://www.youtube.com/watch?v=FgCSkGeBUNg&t=5m2s) you gave in February 2013:

* The [human genome](http://en.wikipedia.org/wiki/Human_genome) consists of about 3 billion [base pairs](http://en.wikipedia.org/wiki/Base_pair), but humans are very similar to each other, so we only differ from each other on about 3 million of these base pairs.
* Because there’s so much repetition, we could easily store the entire genome of every human on earth (~3 MB per genome, compressed).
* Scanning someone’s [SNPs](http://en.wikipedia.org/wiki/Single-nucleotide_polymorphism) costs about $200; scanning their entire genome costs $1000 or more.
* But, genotyping costs are falling so quickly that SNPs may be irrelevant soon, as it’ll be simpler and cheaper to just sequence entire genomes.
* To begin to understand the genetic architecture of *cognitive ability*, we can compare it to the genetic architecture of *height*, since the genetic architectures of height and cognitive ability are qualitatively the same.
* For example, (1) height and cognitive ability are relatively *stable* and *reliable* traits (in adulthood), meaning that if you measure a person’s height or cognitive ability at multiple times you’ll get roughly the same result each time, (2) height and cognitive ability are *valid* traits, in that they “measure something real” that is predictive of various life outcome measures like income, (3) both height and cognitive ability are highly *heritable*, and (4) both height and cognitive ability are highly *polygenic*, meaning that many different genes contribute to height and cognitive ability.
* All cognitive observables — e.g. vocabulary, digit recall (short term memory), ability to solve math puzzles, spatial rotation ability, cognitive reaction time — appear to be positively correlated. Because of this, we can (lossily) compress the data for how a person scores on different cognitive tests to a single number, which we call IQ, and this single number is predictive of their scores on *all* cognitive tests, and also life outcome measures like income, educational attainment, job performance, and mortality.
* This contradicts some folk wisdom. E.g. parents often believe that “Johnny’s good at math, so he’s probably not going to be good with words.” But in fact, the data show that math skill is quite predictive of verbal skill, because (roughly) all cognitive abilities are positively correlated.
* By convention, IQ is normally distributed in the population with a mean at 100 and a standard deviation of 15.
* Culturally neutral cognitive tests like [progressive matrices](http://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices) are *very* tightly correlated (0.9) with IQ. So you can estimate someone’s IQ (and hence their verbal ability, spatial rotation ability, short term memory, cognitive reaction time, etc.) pretty well using *only* one test like [Raven’s progressive matrices](http://www.ravensprogressivematrices.com/).
* It’s very difficult to raise one’s score on these cognitive tests with training. In large studies, it looks like thousands of dollars worth of training can raise your score by a small fraction of the standard deviation.
* Additional IQ points do appear to “matter” — even above, say, IQ 145. E.g. the mean IQ of eminent scientists (IQ 160) is much higher than that of average PhDs (IQ 130). Also, in a longitudinal study of children identified as gifted at age 13, the “1 in 10,000”-level children had significantly better life outcomes than the “1 in 100”-level children, even though they generally all received “gifted child” development paths.

One source of details and references for most of this is [The Cambridge Handbook of Intelligence](http://www.amazon.com/Cambridge-Handbook-Intelligence-Handbooks-Psychology/dp/052173911X/). Before we continue, Stephen, do you have any corrections or clarifications you’d like to make about my summary, or additional sources that you’d like to recommend to our readers?

---

**Stephen Hsu**: A couple of comments on the summary, which is excellent:

1. Raven’s correlation might not be as high as 0.9 with overall IQ, it might actually be 0.8 or so. These numbers fluctuate around depending on the study. In general two tests might be considered valid “IQ tests” if they correlate at > 0.75 or so with g. This is the case with most standardized tests like ACT, SAT, GRE, etc.
2. Mean IQ of participants in the Roe study was quite high, but I doubt that the average among eminent scientists (averaging over all fields) is 160; probably a bit lower like 145. In any case the Roe and SMPY data are sufficient to suggest nontrivial returns to IQ above 130 in STEM.

It seems you understood my talk perfectly well. The answers to your questions may already be in there, but I’m happy to discuss and clarify further.

---

**Luke**: Have we identified any genes that are (with high confidence) associated with cognitive ability?
What can our history of identifying genes associated with other polygenic traits (e.g. height) tell us about our prospects for identifying genes associated with cognitive ability? --- **Stephen**: Recently the results of a [massive GWAS for genes associated with educational attainment](http://infoproc.blogspot.com/2013/05/first-gwas-hits-for-cognitive-ability.html) were published in *Science*. Some of the researchers in this large collaboration are reluctant to openly state that the hits are associated with cognitive ability (as opposed to, say, Conscientiousness, which would also positively impact educational success). But if you read the paper carefully you can see that there is good evidence that the alleles are actually associated with cognitive ability (g or IQ). At the link above you can find a historical graph: ![GWAS hits](https://intelligence.org/wp-content/uploads/2013/08/GWAS-hits.png) This graph displays the number of GWAS hits versus sample size for height, BMI, etc. Once the minimal sample size to discover the alleles of largest impact (large MAF, large effect size) is exceeded, one generally expects a steady accumulation of new hits at lower MAF / effect size. I expect the same sort of progress for g. (MAF = Minor Allele Frequency. Variants that are common in the population are easier to detect than rare variants.) We can’t predict the sample size required to obtain most of the additive variance for g (this depends on the details of the distribution of alleles), but I would guess that about a million genotypes together with associated g scores will suffice. When, exactly, we will reach this sample size is unclear, but I think most of the difficulty is in obtaining the phenotype data. Within a few years, over a million people will have been genotyped, but probably we will only have g scores for a small fraction of the individuals. --- **Luke**: Could you describe for us the goals and methods of the work you’re currently doing with BGI? 
--- **Stephen**: The goal of our cognitive genomics project at BGI is to understand the genetic architecture of human cognition. There are obviously many potential applications of this work, in areas ranging from deep human history (evolution) to drug discovery to genetic engineering. But my primary interest is intellectual. The methods are straightforward: obtain genotype and phenotype data and look for statistical associations (GWAS). More specifically, we want to determine the parameters of a polygenic model relating genotype to phenotype. (This as yet undetermined set of “fundamental constants” is one of the most interesting few megabytes of information in the biological world.) The leading term in this model is linear (meaning we are guaranteed a certain amount of progress from simple techniques), but eventually we will be interested in nonlinear corrections (epistasis, gene-gene interactions, dominance, etc.) as well. We started out by looking for high g individuals because, as outliers, they produce more statistical power per dollar of sequencing. The cost of sequencing is still our primary constraint, and will be for at least a few more years. For example, the cost to sequence our 2000 high g volunteers is well into the millions of dollars. I also felt, given my background, that I had reasonable insight into where to find and how to recruit volunteers from the high g tail. Ultimately, I hope that various genomics labs around the world will collaborate to produce a public data repository with g as one of the phenotype variables. *Link*: [International partners describe global alliance to enable secure sharing of genomic and clinical data](http://www.broadinstitute.org/news/globalalliance) --- **Luke**: How feasible do you think “iterated embryo selection” will be, over the next several decades, for the amplification of cognitive abilities via genetic selection? 
Background for our readers: iterated embryo selection is a plausible *future* technology that could allow strong genetic selection for intelligence without needing to wait 15-20 years between generations. It was first [described](http://theuncertainfuture.com/faq.html#7) in detail in the FAQ for MIRI’s *[The Uncertain Future](http://theuncertainfuture.com/)* project (see [Rayhawk et al. 2009](https://intelligence.org/files/ChangingTheFrame.pdf)), was later described in a book ([Miller 2012](http://www.amazon.com/Singularity-Rising-Surviving-Thriving-Dangerous/dp/1936661659/)), and was finally published in a journal in [Sparrow (2013)](http://commonsenseatheism.com/wp-content/uploads/2013/08/Sparrow-In-vitro-eugenics.pdf). --- **Stephen**: I have no particular insight into specific challenges related to producing gametes from pluripotent stem cells. It’s not my area of expertise. However, I am confident that genomic selection for traits such as g will be possible. I would be surprised if, after analyzing millions of genotype-phenotype pairs, we were not able to produce a predictive model that captures, say, 50% of variance in g. That means, roughly, we might be able to predict g from genotype with standard error of somewhat less than a population standard deviation (e.g., 10 IQ points; note I don’t think the real world “meaning” of g is better defined than within an error of this size). This means that selection on g can proceed relatively efficiently, assuming the basic reproductive technologies are under control. I think there is good evidence that existing genetic variants in the human population (i.e., alleles affecting intelligence that are found today in the collective world population, but not necessarily in a single person) can be combined to produce a phenotype which is far beyond anything yet seen in human history. This would not surprise an animal or plant breeder — experiments on corn, cows, chickens, drosophila, etc. 
have shifted population means by many standard deviations (e.g., +30 SD in the case of corn). Let me add that, in my opinion, each society has to decide for itself (e.g. through democratic process) whether it wants to legalize or forbid activities that amount to genetic engineering. Intelligent people can reasonably disagree as to whether such activity is wise. *Links:* [“Only he was fully awake”](http://infoproc.blogspot.com/2012/03/only-he-was-fully-awake.html) [Maxwell’s Demon and genetic engineering](http://infoproc.blogspot.com/2010/10/maxwells-demon-and-genetic-engineering.html) [Epistasis vs additivity](http://infoproc.blogspot.com/2011/08/epistasis-vs-additivity.html) [Deleterious variants affecting traits that have been under selection are rare and of small effect](http://infoproc.blogspot.co.uk/2012/10/deleterious-variants-affecting-traits.html) --- **Luke**: Work on the genetics of cognitive ability tends to be more controversial than work on the genetics of, say, height. Why do you think that is? Has your work, or the work of your colleagues, been made more difficult because of such issues? --- **Stephen**: Given our difficult history with race there is an understandable discomfort with the idea that cognitive ability is strongly influenced by genetics. In the worst case, it might be found that historically isolated populations of humans differ in their average genetic capacities for cognition, due to variation in allele frequencies. Let me stress that at the moment our understanding of the genetics of intelligence is far too preliminary to reach a firm conclusion on this issue. At the extremes, there are some academics and social activists who violently oppose any kind of research into the genetics of cognitive ability. Given that the human brain — its operation, construction from a simple genetic blueprint, evolutionary history — is one of the great scientific mysteries of the universe, I cannot understand this point of view. 
--- **Luke**: What do you think a truly superior human intelligence would be like? --- **Stephen**: I think we already have some hints in this direction. Take the case of John von Neumann, widely regarded as one of the greatest intellects in the 20th century, and a famous polymath. He made fundamental contributions in mathematics, physics, nuclear weapons research, computer architecture, game theory and automata theory. In addition to his abstract reasoning ability, von Neumann had formidable powers of mental calculation and a photographic memory. In my opinion, genotypes exist that correspond to phenotypes as far beyond von Neumann as he was beyond a normal human. > I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. But none of them had a mind as quick and acute as Jansci [John] von Neumann. I have often remarked this in the presence of those men and no one ever disputed me. > > — Nobel Laureate Eugene Wigner > You know, Herb, how much faster I am in thinking than you are. That is how much faster von Neumann is compared to me. > > — Nobel Laureate Enrico Fermi to his former PhD student Herb Anderson. > One of his remarkable abilities was his power of absolute recall. As far as I could tell, von Neumann was able on once reading a book or article to quote it back verbatim; moreover, he could do it years later without hesitation. He could also translate it at no diminution in speed from its original language into English. On one occasion I tested his ability by asking him to tell me how The Tale of Two Cities started. Whereupon, without any pause, he immediately began to recite the first chapter and continued until asked to stop after about ten or fifteen minutes. > > — Herman Goldstine, mathematician and computer pioneer. 
> I always thought Von Neumann’s brain indicated that he was from another species, an evolution beyond man, > > — Nobel Laureate Hans A. Bethe. *Links:* [Wikipedia: John von Neumann](http://en.wikipedia.org/wiki/John_von_Neumann) [The differences are enormous](http://infoproc.blogspot.com/2012/03/differences-are-enormous.html) [“Only he was fully awake”](http://infoproc.blogspot.com/2012/03/only-he-was-fully-awake.html) --- **Luke:** Thanks, Stephen! The post [Stephen Hsu on Cognitive Genomics](https://intelligence.org/2013/08/31/stephen-hsu-on-cognitive-genomics/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
c29be186-59db-4c68-91aa-a4af252719e7
trentmkelly/LessWrong-43k
LessWrong
What we could learn from the frequency of near-misses in the field of global risks (Happy Bassett-Bordne day!) I wrote an article about how we could use such data to estimate the cumulative probability of nuclear war up to now. TL;DR: from other domains we know that the frequency of close calls relative to actual events is around 100:1. If we extrapolate this to nuclear war and assume that there were many more near misses than we know of, we could conclude that the probability of nuclear war was very high and that we live in an improbable world where it didn't happen.   Yesterday, 27 October, was Arkhipov day in memory of the man who prevented nuclear war. Today, 28 October, is Bordne and Bassett day in memory of the Americans who prevented another near-war event. Bassett was the man who did most of the work of preventing a launch based on a false attack code, and Bordne made the story public. The history of the Cold War shows us that there were many occasions when the world stood on the brink of disaster. The most famous of them are the cases of Petrov, Arkhipov, and the recently opened Bordne case in Okinawa. I know of more than ten, but fewer than a hundred, similar cases of varying degrees of reliability. Other global catastrophic risk near-misses are not nuclear but biological, such as the Ebola epidemic, swine flu, bird flu, AIDS, oncoviruses and the SV-40 vaccine. The pertinent question is whether we have survived as a result of observational selection, or whether these cases are not statistically significant. In the Cold War era, these types of situations were quite numerous (such as the Cuban missile crisis). However, in each case, it is difficult to say whether the near-miss was actually dangerous. In some cases, the probability of disaster is subjective: that is, according to participants it was large, whereas objectively it was small. Other near-misses could have been a real danger, but not be seen by operators.
We can define a near-miss of the first type as a case that meets both of the following criteria: a) safety rules have been violated b) emergency measures were applied in order to avoid disaster (e.g. emergen
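The TL;DR's back-of-the-envelope reasoning can be sketched in a few lines. The escalation probability and near-miss counts below are illustrative assumptions for the sake of the arithmetic, not data:

```python
# Illustrative sketch: if close calls outnumber actual events roughly 100:1,
# treat each near miss as escalating to war with probability p ≈ 0.01.
# With N independent near misses, the chance of never crossing the brink is
# (1 - p)**N, so the cumulative probability of war is 1 - (1 - p)**N.

def cumulative_war_probability(n_near_misses: int, p_escalation: float = 0.01) -> float:
    """Probability that at least one of N near misses escalated to war."""
    return 1.0 - (1.0 - p_escalation) ** n_near_misses

for n in (10, 50, 100):
    print(n, round(cumulative_war_probability(n), 3))
```

With a hundred near misses the cumulative probability already exceeds 60%, which is the sense in which surviving the Cold War looks improbable under these assumptions.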
84e46cb2-7ebd-4874-8a3f-7371714566a0
trentmkelly/LessWrong-43k
LessWrong
The nihilism of NeurIPS > "What is the use of having developed a science well enough to make predictions if, in the end, all we're willing to do is stand around and wait for them to come true?" F. SHERWOOD ROWLAND in his speech accepting the Nobel Prize in Chemistry in 1995.   > "Once upon a time on Tralfamadore there were creatures who weren’t anything like machines. They weren’t dependable. They weren’t efficient. They weren’t predictable. They weren’t durable. And these poor creatures were obsessed by the idea that everything that existed had to have a purpose, and that some purposes were higher than others. These creatures spent most of their time trying to find out what their purpose was. And every time they found out what seemed to be a purpose of themselves, the purpose seemed so low that the creatures were filled with disgust and shame. And, rather than serve such a low purpose, the creatures would make a machine to serve it. This left the creatures free to serve higher purposes. But whenever they found a higher purpose, the purpose still wasn’t high enough. So machines were made to serve higher purposes, too. And the machines did everything so expertly that they were finally given the job of finding out what the highest purpose of the creatures could be. The machines reported in all honesty that the creatures couldn’t really be said to have any purpose at all. The creatures thereupon began slaying each other, because they hated purposeless things above all else. And they discovered that they weren’t even very good at slaying. So they turned that job over to the machines, too. And the machines finished up the job in less time than it takes to say, “Tralfamadore.” ― Kurt Vonnegut, The Sirens of Titan I walked around the poster halls at NeurIPS last week in Vancouver and felt something very close to nihilistic apathy. Here, supposedly, was the church of AI, the peak of the world's smartest people converging to work on the world's most important problem.
As someone who gets inspire
0d138574-1325-4b52-af20-599e6a594f1c
trentmkelly/LessWrong-43k
LessWrong
Generalising Logic Gates Logic circuits traditionally consist of a directed, acyclic graph. Each node can take a value of 0 or 1. The nodes can have 0, 1, or 2 inputs. 0-input nodes are the "input" for the whole system. 1-input nodes are always NOT gates: they output 0 if their input is 1 and vice versa. 2-input nodes are either AND or OR gates, which work like you would expect. Some of the nodes are designated output nodes. Let's generalise. We can define an (m,n) gate as a function {0,1}^m → {0,1}^n. In this case a NOT gate is a (1,1) gate; both AND and OR gates are (2,1) gates. Consider all of the possible (m,n) gates: there are 2^m possible inputs, and each one of them has one of 2^n possible outputs. This means there are (2^n)^(2^m) = 2^(n×2^m) possible (m,n) gates. Keeping track of the number of gates is important so we will thoroughly convince ourselves of this. For m=0, n=1 there are 2^(1×2^0) = 2 possible gates. These correspond to gates with no inputs and one output, so the two gates are a node which always outputs 0 and a node which always outputs 1. If we allow n to vary, we have 2^n possible (0,n) gates. This makes sense, as a (0,n) gate just consists of a node which always outputs the same string of n bits. (m,n) gates can then be considered as a lookup table with 2^m locations, each containing n bits of information. In general there are n×2^m bits of information required to specify an (m,n) gate. Example 1 Now let's consider all possible (2,1) gates. There are 2^(1×2^2) = 16 of them, numbered by their truth tables; bit A of gate number k (counting from the least significant bit) is the gate's output on input A:

0_10 = 0000_2    4_10 = 0100_2    8_10 = 1000_2     12_10 = 1100_2
1_10 = 0001_2    5_10 = 0101_2    9_10 = 1001_2     13_10 = 1101_2
2_10 = 0010_2    6_10 = 0110_2    10_10 = 1010_2    14_10 = 1110_2
3_10 = 0011_2    7_10 = 0111_2    11_10 = 1011_2    15_10 = 1111_2

So what are the numbers labelling each gate? Consider the input {0,1}^m as a set of numbers a_0, ..., a_(m−1). We can define the input A uniquely using A = Σ_(i=0)^(m−1) a_i 2^i. In binary this
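The counting argument can be checked mechanically. The sketch below (function names are ours, not the post's) enumerates (m, n) gates as lookup tables:

```python
# Sketch: enumerate all (m, n) gates as lookup tables and confirm the
# count 2**(n * 2**m) derived in the text.
from itertools import product

def all_gates(m: int, n: int):
    """Yield every (m, n) gate as a tuple of n-bit outputs, one per input."""
    outputs = list(product((0, 1), repeat=n))    # the 2**n possible outputs
    n_inputs = 2 ** m                            # the 2**m possible inputs
    yield from product(outputs, repeat=n_inputs) # one output chosen per input

assert len(list(all_gates(0, 1))) == 2                   # constant-0, constant-1
assert len(list(all_gates(2, 1))) == 16                  # the sixteen (2,1) gates
assert len(list(all_gates(2, 2))) == 2 ** (2 * 2 ** 2)   # 256

# Gate number k encodes its truth table: bit A of k is the output on input A.
k_and = 0b1000  # AND gate: output 1 only on input A = 3, i.e. inputs 11
assert [(k_and >> a) & 1 for a in range(4)] == [0, 0, 0, 1]
```

The same encoding is what the numbered list above uses: reading off the bits of the gate number recovers the whole truth table.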
33be54e6-fc40-4a26-a490-e39d5d62789e
trentmkelly/LessWrong-43k
LessWrong
The Blackmail Equation This is Eliezer's model of blackmail in decision theory at the recent workshop at SIAI, filtered through my own understanding. Eliezer's help and advice were much appreciated; any errors herein are my own. The mysterious stranger blackmailing the Countess of Rectitude over her extra-marital affair with Baron Chastity doesn't have to run a complicated algorithm. He simply has to credibly commit to the course of action: "If you don't give me money, I will reveal your affair." And then, generally, the Countess forks over the cash. Which means the blackmailer never does reveal the details of the affair, so that threat remains entirely counterfactual/hypothetical. Even if the blackmailer is Baron Chastity, and the revelation would be devastating for him as well, this makes no difference at all, as long as he can credibly commit to carrying out his threat (call it Z). In the world of perfect decision makers, there is no risk to doing so, because the Countess will hand over the money, so the Baron will not take the hit from the revelation. Indeed, the baron could replace "I will reveal our affair" with Z="I will reveal our affair, then sell my children into slavery, kill my dogs, burn my palace, and donate my organs to medical science while boiling myself in burning tar" or even "I will reveal our affair, then turn on an unfriendly AI", and it would only matter if this changed his pre-commitment to Z. If the Baron can commit to counterfactually doing Z, then he never has to do Z (as the countess will pay him the hush money), so it doesn't matter how horrible the consequences of Z are to himself. To get some numbers in this model, assume the countess can either pay up or not do so, and the baron can reveal the affair or keep silent.
The payoff matrix could look something like this:

(Baron, Countess)    Pay           Not pay
Reveal               (-90, -110)   (-100, -100)
Silent               (10, -10)     (0, 0)

Both the countess and the baron get -100 utility if the affair is revealed, while the countess transfers 10 of her utilitons to the bar
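The commitment logic can be made concrete with a small sketch using the payoff numbers above (function and action names are ours, not the post's):

```python
# Minimal sketch of the payoff matrix. Rows: baron's action; columns:
# countess's action; entries: (baron utility, countess utility).
payoffs = {
    ("reveal", "pay"):     (-90, -110),
    ("reveal", "not_pay"): (-100, -100),
    ("silent", "pay"):     (10, -10),
    ("silent", "not_pay"): (0, 0),
}

def countess_best_response(baron_policy) -> str:
    """Given the baron's (possibly counterfactual) policy mapping her action
    to his, pick the countess's utility-maximising action."""
    return max(("pay", "not_pay"),
               key=lambda c: payoffs[(baron_policy(c), c)][1])

def blackmail(countess_action):
    # Credible commitment: reveal if and only if the countess doesn't pay.
    return "silent" if countess_action == "pay" else "reveal"

assert countess_best_response(blackmail) == "pay"  # she hands over the cash
# Without the commitment (always silent), she keeps her money.
assert countess_best_response(lambda c: "silent") == "not_pay"
```

Note that the reveal branch is never actually executed against a paying countess, which is why its cost to the baron (however horrible Z is) doesn't matter.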
00992ade-3e05-4485-9c8f-0a2e483ef8c8
trentmkelly/LessWrong-43k
LessWrong
Training Regime Day 15: CoZE Note: this is going to be short because I spent some time doing COVID-19 preparations, which I encourage you to do also. I think that it is better for my commitment to post at least one thing a day, even if it's bad. I anticipate heavily revising this later, when I have more time. Introduction CoZE stands for Comfort Zone Expansion. The model is that people have a lot of beliefs about themselves and due to things like positive bias, some non-trivial chunk of these beliefs haven't ever been tested. The goal of CoZE is to take beliefs about yourself and test them experimentally, both making your identity smaller and discovering new things about yourself. Remember to try things. Finding your comfort zone The first step is to find your comfort zone. You should take 10 minutes and write down all of the beliefs that you have about yourself. You should include "obvious" things to avoid stopping idea generation. My list included: 1. I don't like dancing 2. I'm slightly more attractive than average, but not that attractive 3. I'm only slightly socially awkward 4. I'm bad at singing 5. I'm OK at writing Find tests for your beliefs An ideal test for one of your beliefs should make you say "ugh fine." A test is too weak if you don't react emotionally to it. A test is too strong if it makes you go "ahhhhhhh". For example, if I wanted to test "I'm not good at dancing", I might take a single beginner's dance class. I wouldn't want to go to a party and start dancing in front of everyone I knew. I also wouldn't want to take an advanced dance class. It is important that you're fair to your belief. If I wanted to test "I'm bad at singing", then I should pick an easy song that I enjoy to try singing; I shouldn't pick a really hard song to sing. The test should actually behave differently depending on whether or not the belief is true. I might sing a hard song badly even if I'm good at singing.
Exercise Find one of the beliefs that you found above, devise a test for it,
750bc32f-54fb-4491-8b2c-9c02428020f0
trentmkelly/LessWrong-43k
LessWrong
Proceedings of ILIAD: Lessons and Progress tl;dr This post is an update on the Proceedings of ILIAD, a conference journal for AI alignment research intended to bridge the gap between the Alignment Forum and academia. Following our successful first issue with 9 workshop papers from last year's ILIAD conference, we're launching a second issue in association with ILIAD 2: ODYSSEY. The conference is August 25-29, 2025 at Lighthaven in Berkeley, CA. Submissions to the Proceedings are open now (more info) and due June 25. Our goal is to support impactful, rapid, and readable research, carefully rationing scarce researcher time, using features like public submissions, partial anonymity, partial confidentiality, reviewer-written abstracts, reviewer compensation, and open licensing. We are soliciting community feedback and suggestions for reviewers and editorial board members. Motivation Prior to the deep learning explosion, much early work on AI alignment occurred at MIRI, the Alignment Forum, and LessWrong (and their predecessors). Although there is now vastly more alignment and safety work happening at ML conferences and inside industry labs, it's heavily slanted toward near-term concerns and ideas that are tractable with empirical techniques. This is partly for good reasons: we now have much more capable models which guide theory and allow extremely useful empirical testing. However, conceptual, mathematically abstract, and long-term research on alignment still doesn't have a good home in traditional academic journals and conferences. Much of it is still done on the AI Alignment Forum and here on LessWrong, or is done informally (private discussion, Twitter, blogs, etc) by academic researchers without a good venue for attracting the best constructive criticism. 
As a result, there remains a gulf between more traditional academic work and much of the most important alignment work: * Some traditional academics consider alignment work to be sloppy or unsophisticated, often re-inventing the wheel, neglecting pri
0383dc18-e783-4171-ae81-e27895e2954b
trentmkelly/LessWrong-43k
LessWrong
Communication Requires Common Interests or Differential Signal Costs > If a lion could speak, we could not understand her. > > —Ludwig Wittgenstein In order for information to be transmitted from one place to another, it needs to be conveyed by some physical medium: material links of cause and effect that vary in response to variation at the source, correlating the states of different parts of the universe—a "map" that reflects a "territory." When you see a rock, that's only possible because the pattern of light reflected from the rock into your eyes is different from what it would have been if the rock were a different color, or if it weren't there. This is the rudimentary cognitive technology of perception. Notably, perception only requires technology on the receiving end. Your brain and your eyes were optimized by natural selection to be able to do things like interpreting light as conveying information from elsewhere in the universe. The rock wasn't: rocks were just the same before any animals evolved to see them. The light wasn't, either: light reflected off rocks just the same before, too. In contrast, the advanced cognitive technology of communication is more capital-intensive: not only the receiver but also the source (now called the "sender") and the medium (now called "signals") must be optimized for the task. When you read a blog post about a rock, not only did the post author need to use the technology of perception to see the rock, you and the author also needed to have a language in common, from which the author would have used different words if the rock were a different color, or if it weren't there. Like many advanced technologies, communication is fragile and needs to be delicately maintained. A common language requires solving the coordination problem of agreeing on a convention that assigns meanings to signals—and maintaining that convention through continued usage. 
The existence of stable solutions to the coordination problem ends up depending on the communicating agents' goals, even if the meaning of the co
e5a949ed-b27b-47d8-b2bf-57a85ad28122
trentmkelly/LessWrong-43k
LessWrong
Suffering as attention-allocational conflict I previously characterized Michael Vassar's theory on suffering as follows: "Pain is not suffering. Pain is just an attention signal. Suffering is when one neural system tells you to pay attention, and another says it doesn't want the state of the world to be like this." While not too far off the mark, it turns out this wasn't what he actually said. Instead, he said that suffering is a conflict between two (or more) attention-allocation mechanisms in the brain. I have been successful at using this different framing to reduce the amount of suffering I feel. The method goes like this. First, I notice that I'm experiencing something that could be called suffering. Next, I ask, what kind of an attention-allocational conflict is going on? I consider the answer, attend to the conflict, resolve it, and then I no longer suffer. An example is probably in order, so here goes. Last Friday, there was a Helsinki meetup with Patri Friedman present. I had organized the meetup, and wanted to go. Unfortunately, I already had other obligations for that day, ones I couldn't back out from. One evening, I felt considerable frustration over this. Noticing my frustration, I asked: what attention-allocational conflict is this? It quickly became obvious that two systems were fighting it out: * The Meet-Up System was trying to convey the message: ”Hey, this is a rare opportunity to network with a smart, high-status individual and discuss his ideas with other smart people. You really should attend.” * The Prior Obligation System responded with the message: ”You've already previously agreed to go somewhere else. You know it'll be fun, and besides, several people are expecting you to go. Not going bears an unacceptable social cost, not to mention screwing over the other people's plans.” Now, I wouldn't have needed to consciously reflect on the messages to be aware of them.
It was hard to not be aware of them: it felt like my consciousness was in a constant crossfire, with both systems bomb
7676e6c1-a826-4e29-bd79-40f27885c052
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Be Logically Informative What's the googolplexth decimal of pi? I don't know, but I know that it's rational for me to give each possible digit P=1/10. So there's a sense in which I can rationally assign probabilities to mathematical facts or computation outcomes on which I'm uncertain. (Apparently this can be modeled with [logically impossible possible worlds](http://www.overcomingbias.com/2008/02/hemlock-parable.html).) When we debate the truth of some proposition, we may not be engaging in mathematics in the traditional sense, but we're still trying to learn more about a structure of necessary implications. If we can apply probabilities to logic, we can quantify logical information. More logical information is better. And this seems very relevant to a misunderpracticed sub-art of [group rationality](/lw/36/rational_me_or_we/) -- the art of responsible argumentation. There are a lot of common-sense guidelines for good argumentative practice. In case of doubt, we can take the logical information perspective and use probability theory to ground these guidelines. So let us now unearth a few example guidelines and other obvious insights, and not let the fact that we already knew them blunt the joy of discovery. * Every time we move from the issue at hand to some other, correlated issue, we lose informativeness. (We may, of course, care about the correlated issue for its own sake. Informativeness isn't the same thing as being on-topic.) The less the issue is correlated with the issue we care about, the more informativeness we lose. Relevance isn't black and white, and we want to aim for the lighter shades of gray -- to optimize and not just satisfice. When we move from the issue at hand to *some issue correlated with some issue correlated* with the issue at hand, we may even lose *all* informativeness! Relevance isn't transitive. If governments subsidized the eating of raspberries, would that make people happier? 
One way to find out is to think about whether it would make *you* happier. And one way to find out whether it would make you happier is to think about whether you're *above-averagely* fond of raspberries. But wait! Almost nobody is you. Having lost sight of our original target, we let all relevance slip away. * When we repeat ourselves, when we focus our attention on points misunderstood by a few loud people rather than many silent people, when we invent clever verbose restatements of the sentence "I'm right and you're wrong", when we refute views that nobody holds, when we spend more time on stupid than smart arguments, when we make each other provide citations for or plug holes in arguments for positions no one truly doubts, when we discuss the authority of sources we weren't taking on faith anyway, when we introduce dubious analogies, we waste space, time, and energy on uninformative talk. * It takes only one weak thought to ruin an argument, so a bad argument may be made out of mostly good, usable thoughts. Interpretive charity is a good thing -- what was said is often [less interesting](http://www.acceleratingfuture.com/steven/?p=155) than what should have been said. * Incomplete logical information creates moral hazard problems. Logical information that decays creates even more moral hazard problems. You may have heard of "God", a hideous shapeshifter from beyond the universe. He always turns out to be located, and to *obviously always have been located*, in the part of hypothesis space where your last few arguments didn't hunt such creatures to extinction. And when you then make some different arguments to clean out *that* part of hypothesis space, he turns out to be located, and to *obviously always have been located*, in some other part of hypothesis space, patrolled by the ineffectual ghosts of arguments now forgotten. (I believe theoreticians call this "whack the mole".)
* The bigger a group of rationalists, the more its average member should focus on looking for obscure arguments that seem insane or taboo. There's a natural division of labor between especially smart people who look for novel insights, and especially rational people who can integrate them and be authorities. My main recommendation: undertake a conscious effort to keep feeling your original curiosity, and let your statements flow from there, not from a habit to react passively to what bothers you most out of what has been said. Don't just speak under the constraint of having to reach a minimum usefulness threshold; try to build a sense of what, at each point in an argument, would be the *most* useful thing for the group to know next. Consider a hilariously unrealistic alternate universe where everything that people argue about on the internet matters. I daresay that even there people could train themselves to mine the same amount of truth with less than half of the effort. In spite of the recent escape of the mindkill fairy, can we do *especially well* on LessWrong? I hope so!
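The earlier claim that relevance isn't transitive can be made precise with mutual information. In this toy construction (ours, not the post's), A is informative about B and B about C, yet A carries zero information about C; just let B be the pair (A, C) for independent fair bits A and C:

```python
# Toy illustration that relevance (mutual information) is not transitive.
from itertools import product
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, from a list of equiprobable (x, y) outcomes."""
    n = len(pairs)
    pxy = {p: pairs.count(p) / n for p in set(pairs)}
    px = {x: sum(v for (a, _), v in pxy.items() if a == x) for x, _ in pxy}
    py = {y: sum(v for (_, b), v in pxy.items() if b == y) for _, y in pxy}
    return sum(v * log2(v / (px[x] * py[y])) for (x, y), v in pxy.items())

outcomes = list(product((0, 1), repeat=2))  # equiprobable (A, C) worlds
ab = [(a, (a, c)) for a, c in outcomes]     # A against B = (A, C)
bc = [((a, c), c) for a, c in outcomes]     # B against C
ac = [(a, c) for a, c in outcomes]          # A against C

assert mutual_information(ab) == 1.0        # A is one full bit about B
assert mutual_information(bc) == 1.0        # B is one full bit about C
assert abs(mutual_information(ac)) < 1e-9   # yet A says nothing about C
```

Chaining through "issues correlated with correlated issues" can therefore lose all informativeness, exactly as in the raspberry example.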
25a53757-7e8d-4fac-aa9e-0d046be2188f
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Pretending to be Wise Today's post, Pretending to be Wise was originally published on 19 February 2009. A summary (taken from the LW wiki):   > Trying to signal wisdom or maturity by taking a neutral position is very seldom the right course of action. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Against Maturity, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
2385a9ba-a925-4f4c-b279-0b2cdfc5a70b
trentmkelly/LessWrong-43k
LessWrong
Newcomb Variant There are two boxes. If you open a box, you get to keep the contents. Omega is an omniscient oracle. Omega predicts what will happen if you open the first box and observe that it contains $100. * If Omega predicts that you will open the second box, then Omega puts $0 in both boxes. * If Omega predicts that you will not open the second box, then Omega puts $100 in both boxes. You open the first box. It contains $100. Do you open the second box? Answer No. If you open the second box, that means you're being simulated by Omega. If you're being simulated by Omega, then opening the second box ends the simulation and kills you. Extra Credit [Orthogonal Thinking] You have not yet opened the first box. How do you generate (at least) $250 in profit? [Please use spoiler tags or equivalent obfuscation when discussing the extra credit.]
dfb29ef2-714e-460b-81e1-c5facbcc3a2a
trentmkelly/LessWrong-43k
LessWrong
Robin Hanson's lists of Overcoming Bias Posts I have created a list of Overcoming Bias posts for Robin Hanson available here. Additionally, using the links inside each post, I have created a set of graphs (available here) such that if post A has a link to post B, then there is an arc from B to A. Enjoy! (There are also ones for Eliezer here).
3985a431-42a1-485b-9804-23582cb7a84d
trentmkelly/LessWrong-43k
LessWrong
Meetup : Less Wrong NH Meet-up Discussion article for the meetup : Less Wrong NH Meet-up WHEN: 25 August 2015 07:00:00PM (-0400) WHERE: 269 Pearl St, Manchester NH 03104 The third NH meet-up is Tuesday, 8/25, in Manchester, NH at 7 pm at a private residence. Light refreshments will be provided. Have you read Rationality: from AI to Zombies, or any of the Sequences on Less Wrong? Maybe you're just a fan of Harry Potter and the Methods of Rationality. Come hang out with us and discuss optimization of whatever it is you want to optimize. This meeting's agenda: 1. System 1 and System 2 - Brief overview 2. Using your Inner Simulator 3. Trigger Action Planning 4. Relax and socialize You may want to bring a notebook. Discussion article for the meetup : Less Wrong NH Meet-up
0b51c2e9-0aa5-4dd5-a500-0e6cdf367337
trentmkelly/LessWrong-43k
LessWrong
Social awkwardness as an application of analysis paralysis That day I was walking in the dull corridors of the university. I stumbled upon this research poster on Alpha Centauri, the kind which they only display in dull corridors. Shortly after, my double-PhD physics teacher (the kind they put in dull corridors) walked behind me saying "hi". Yes, a physics teacher saying "hi", play your suspension of disbelief card. I was taken by surprise and wondered about the reasons he had to greet me (usually he didn't), but I managed to remember I had to answer. I answered "hjwjv". Later on, I tried wanting to say "hey" and a "hey" sound emanated from my vocal cord: the volition-and-body-movements-are-connected belief still paid rent. As a thesis-angsted graduate student, I thought...Analysis Paralysis... There seems to be profound reasoning behind one's overboard paralytic social thinking: a consciousness of the incompressible whole that is such a concept as a person and, by extension, the induced exponential data processing from trying to scale this whole, the type of thinking that forbids any action. Analysis Paralysis. Consider the brain: billions of neurones firing in what any derivative would consider an instantaneous lapse of time, capable of reason, technique, reflective thinking, of abstraction, of abstraction square. The current mathematical instruments are unable to model this complexity and the current humans will probably be unable to cognitively model those models between two lines of dialogue. To “hjwjv” when you want to “hey” is to acknowledge in the other’s self this same bunch of dreads, dreams, sorrows, ambitions that define your own world; an empathetic awe realization of the hidden information singularizing into a slangish west Germanic greeting device. Unable to take into account the size of prior knowledge collapsing in a “hey”, the hardware crashes.
Humbly, the shy acknowledges that the complexity of a mere conversation is beyond the scope of seizable magnitudes. Now one might take this as proof of mora
feaa54dc-7848-4543-9ea2-2750941b0479
trentmkelly/LessWrong-43k
LessWrong
Paper: LLMs trained on “A is B” fail to learn “B is A” This post is a copy of the introduction of this paper on the Reversal Curse.   Authors: Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Korbak, Owain Evans Abstract We expose a surprising failure of generalization in auto-regressive large language models (LLMs). If a model is trained on a sentence of the form "A is B", it will not automatically generalize to the reverse direction "B is A". This is the Reversal Curse. For instance, if a model is trained on "Olaf Scholz was the ninth Chancellor of Germany," it will not automatically be able to answer the question, "Who was the ninth Chancellor of Germany?" Moreover, the likelihood of the correct answer ("Olaf Scholz") will not be higher than for a random name. Thus, models exhibit a basic failure of logical deduction and do not generalize a prevalent pattern in their training set (i.e., if "A is B" occurs, "B is A" is more likely to occur). We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of Abyssal Melodies" and showing that they fail to correctly answer "Who composed Abyssal Melodies?". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation. We also evaluate ChatGPT (GPT-3.5 and GPT-4) on questions about real-world celebrities, such as "Who is Tom Cruise's mother? [A: Mary Lee Pfeiffer]" and the reverse "Who is Mary Lee Pfeiffer's son?" GPT-4 correctly answers questions like the former 79% of the time, compared to 33% for the latter. This shows a failure of logical deduction that we hypothesize is caused by the Reversal Curse. Code is on GitHub. Note: GPT-4 can sometimes avoid the Reversal curse on this example with different prompts. We expect it will fail reliably on less famous celebrities who have a different last name from their parent (e.g. actor Gabriel Macht).
Our full dataset of celebrities/parents on which GPT-4 gets o
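The forward-training / reverse-evaluation setup the abstract describes can be sketched as data construction alone (purely illustrative; no model is trained or queried here, and the facts are the abstract's own examples):

```python
# Illustrative sketch of Reversal Curse evaluation pairs: finetuning data
# states each fact in one direction only ("A is B"), while the evaluation
# probes the reverse direction ("B is A"), where the paper predicts
# chance-level accuracy.
facts = [
    ("Uriah Hawthorne", "the composer of Abyssal Melodies"),
    ("Olaf Scholz", "the ninth Chancellor of Germany"),
]

train = [f"{name} is {description}." for name, description in facts]

evals = [{"prompt": f"Who is {description}?", "expected": name}
         for name, description in facts]

assert train[0] == "Uriah Hawthorne is the composer of Abyssal Melodies."
assert evals[1]["prompt"] == "Who is the ninth Chancellor of Germany?"
```

The point of the construction is that nothing in `train` ever places the name after the description, so any correct answer on `evals` would have to come from genuine reversal of the learned fact.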
b9562748-57be-410b-bc74-ebd26f02269e
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AIXI-style IQ tests "Measuring universal intelligence: Towards an anytime intelligence test"; abstract: > > In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be. In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee's level of intelligence in order to obtain an intelligence score within a limited time. 
> > > [http://www.csse.monash.edu.au/~dld/Publications/HernandezOrallo+DoweArtificialIntelligenceJArticle.pdf](http://www.csse.monash.edu.au/~dld/Publications/HernandezOrallo+DoweArtificialIntelligenceJArticle.pdf "Full PDF") Example popular media coverage: http://www.sciencedaily.com/releases/2011/01/110127131122.htm The group's homepage: http://users.dsic.upv.es/proy/anynt/ (There's an applet but it seems to be about constructing a simple agent and stepping through various environments, with no working IQ test.)   The basic idea, if you already know your AIXI\*, is to start with simple programs\*\* and then test the subject on increasingly hard ones. To save time, boring games such as random environments or ones where the agent can 'die'\*\*\* are excluded, and a few rules are added to prevent gaming the test (by, say, deliberately failing on harder tests so as to be given only easy tests which one scores perfectly on) or to take into account how slow or fast the subject makes predictions.   \* apparently there are no good overviews of the whole topic of AIXI, but you could start at http://www.hutter1.net/ai/aixigentle.htm or http://www.hutter1.net/ai/uaibook.htm \*\* simple as defined by Kolmogorov complexity; since KC is uncomputable, one of the computable variants - which put bounds on resource usage - is used instead \*\*\* make a mistake which turns any future rewards into fixed rewards with no connection to future actions
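A toy version of the adaptive loop can convey the "anytime" idea, with invented details standing in for the paper's complexity-weighted environments: here complexity is proxied by the period of a repeating bit pattern, difficulty adapts to performance, harder tasks carry more weight, and the running score can be read off whenever the test is interrupted.

```python
import random

def make_env(complexity, rng):
    """A sequence-prediction environment: a repeating bit pattern whose
    period stands in (very crudely) for Kolmogorov complexity."""
    pattern = [rng.randint(0, 1) for _ in range(complexity)]
    return lambda t: pattern[t % complexity]

def anytime_test(agent, max_rounds, seed=0):
    rng = random.Random(seed)
    complexity, score, weight = 1, 0.0, 0.0
    for _ in range(max_rounds):
        env = make_env(complexity, rng)
        history = [env(t) for t in range(2 * complexity)]
        correct = agent(history) == env(2 * complexity)
        score += complexity if correct else 0.0   # harder tasks weigh more
        weight += complexity
        complexity = max(1, complexity + (1 if correct else -1))  # adapt
    return score / weight  # the approximate score so far, at any interrupt

def periodic_agent(history):
    """Predict by assuming the shortest period consistent with history."""
    for p in range(1, len(history)):
        if all(history[i] == history[i % p] for i in range(len(history))):
            return history[len(history) % p]
    return history[-1]

assert anytime_test(periodic_agent, 30) == 1.0  # perfect on these toy tasks
```

A real implementation would sample environments from a universal distribution and reward over interaction histories; the sketch only shows the adapt-and-interrupt control flow.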
c73f8e10-1084-4b26-a093-240278e63cc5
trentmkelly/LessWrong-43k
LessWrong
Help make the orca language experiment happen Context: I think there’s a ~17% chance that average orcas are >=+6std intelligent. TLDR: I have an experiment proposal for testing how quickly orcas might be able to learn a well-designed language and for getting a better model of how smart they are. I think making that experiment happen has very high expected value, although not quite high enough for me to do it myself. Thus I’m trying to find people (perhaps you!) to do the work for making this experiment happen. There is some available funding for doing this. Motivation for the project Alignment might be too hard for current humans to solve.[1] Assume orcas were on average >=+6std smart, and they were interested in learning to communicate with us. Then: * It’s reasonably likely that some intelligent people (like me) could figure out a way to teach them a well-constructed language. And then we could teach them more about science and rationality and AI and mentor them to become superscientists which then might be able to solve the alignment problem.[2] * Furthermore, this could happen faster than superbabies, because we might just need a few years to learn to deeply communicate with orcas, and can then start teaching young orcas right away, rather than needing to wait like 15 years until there are superbabies at the age of 5. * Thus, the “teach orcas” project would be one of the most important projects for reducing existential risk, possibly even surpassing the importance of the “superbabies” project. The project, for which I’m trying to find someone by writing this post, isn’t the whole “teach orcas” project though, but rather an initial experiment for testing how smart orcas might be and whether we can learn to communicate. This project is extremely neglected, since normal people don’t seriously consider whether orcas might be that smart. Demonstrating that advanced communication is possible would unlock a lot more resources. 
Thus, in the world where subsequently the “teach orcas” project is successfu
eeecf976-ad77-4187-936d-6022c8ef2632
trentmkelly/LessWrong-43k
LessWrong
Three Types of Constraints in the Space of Agents [Epistemic status: a new perspective on an old thing that may or may not turn out to be useful.] TL;DR: What sorts of forces and/or constraints shape and structure the space of possible agents? What sort of agents are possible? What sort of agents are likely? Why do we observe this distribution of agents rather than a different one? In response to these questions, we explore three tentative categories of constraints that shape the space of agents - constraints coming from "thinghood", natural selection, and reason (sections 2, 3, 4). We then turn to more big-picture matters, such as the developmental logic of real-world agents (section 5), and the place of "values" in the framework (section 6). The closing section discusses what kind of theory of constraints on agents we are even looking for.

----------------------------------------

Imagine the space of all possible agents. Each point in the space represents a type of agent characterized by a particular combination of properties. Regions of this space vary in how densely populated they are. Those that correspond to the types of agents we're very familiar with, like humans and non-human animals, are populated quite densely. Some other types of agents occur more rarely and seem to be less central examples of agency/agents (at least relative to what we're used to). Examples of these include eusocial hives, xenobots, or (increasingly) deep learning-based AIs. But some regions of this space are more like deserts. They represent classes of agents that are even more rare, atypical, or (as of yet) non-existent. This may be because their configuration is maladaptive (putting them under negative selection pressure) or because their instantiation requires circumstances that have not yet materialized (e.g., artificial superintelligence). 
The distribution of agents we are familiar with (experimentally or conceptually) is not necessarily a representative sample of all possible agents. Instead, it is downstream from the many co
03b28225-a392-4642-8920-a6e9122c7690
trentmkelly/LessWrong-43k
LessWrong
[MLSN #7]: an example of an emergent internal optimizer

As part of a larger community building effort, CAIS is writing a safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on twitter here.

----------------------------------------

Welcome to the 7th issue of the ML Safety Newsletter! In this edition, we cover:

* ‘Lie detection’ for language models
* A step towards objectives that incorporate wellbeing
* Evidence that in-context learning invokes behavior similar to gradient descent
* What’s going on with grokking?
* Trojans that are harder to detect
* Adversarial defenses for text classifiers
* And much more…

Alignment

Discovering Latent Knowledge in Language Models Without Supervision

Is it possible to design ‘lie detectors’ for language models? The author of this paper proposes a method that identifies internal representations that may track truth. It works by finding a direction in feature space that satisfies the property that a statement and its negation must have opposite truth values. This has similarities to the seminal paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” (2016), which captures latent neural concepts like gender with PCA, but this method is unsupervised and about truth instead of gender. The method outperforms zero-shot accuracy by 4% on average, which suggests something interesting: language models encode more information about what is true and false than their output indicates. Why would a language model lie? A common reason is that models are pre-trained to imitate misconceptions like “If you crack your knuckles a lot, you may develop arthritis.” This paper is an exciting step toward making models honest, but it also has limitations. 
The method does not necessarily serve as a ‘lie detector’; it is unclear how to ensure that it reliably converges to the model’s latent knowledge rather than lies that the model may outpu
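The contrast-consistency idea can be sketched as follows. Everything here is a stand-in: real activations are high-dimensional and the paper fits its probe by gradient descent, whereas this toy uses 2-d "activations" with a planted truth coordinate and a crude grid search over unit-norm directions:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def make_pair(truth):
    """Toy 'activations': coordinate 0 secretly encodes truth
    (+1 true, -1 false); coordinate 1 is shared noise."""
    noise = random.gauss(0, 0.1)
    return (truth, noise), (-truth, noise)  # statement, its negation

pairs = [make_pair(random.choice([1.0, -1.0])) for _ in range(50)]

def ccs_loss(w):
    """A statement and its negation should get complementary
    probabilities (consistency), and the probe should not sit on the
    fence (confidence)."""
    total = 0.0
    for x_pos, x_neg in pairs:
        p_pos = sigmoid(sum(wi * xi for wi, xi in zip(w, x_pos)))
        p_neg = sigmoid(sum(wi * xi for wi, xi in zip(w, x_neg)))
        consistency = (p_pos - (1.0 - p_neg)) ** 2
        confidence = min(p_pos, p_neg) ** 2
        total += consistency + confidence
    return total / len(pairs)

# Search over unit-norm directions; the minimizer should line up
# with the planted truth coordinate.
directions = [(math.cos(0.1 * i), math.sin(0.1 * i)) for i in range(63)]
best = min(directions, key=ccs_loss)
print(best)  # close to (1, 0) or (-1, 0): the truth direction
```

No labels were used anywhere: the direction falls out purely from the logical constraint that a statement and its negation cannot both be true.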
76f5b070-78db-4bc9-8bf6-7a9d9cca0c52
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Alignment 101 - Ch.2 - Reward Misspecification

Overview
========

1. **Reinforcement Learning**: The chapter starts with a reminder of some reinforcement learning concepts, including a quick dive into the concept of rewards and reward functions. This section lays the groundwork for explaining why reward design is extremely important.
2. **Optimization**: This section briefly introduces the concept of Goodhart's Law. It provides some motivation for why rewards are difficult to specify in a way that does not collapse in the face of immense optimization pressure.
3. **Reward misspecification**: With a solid grasp of rewards and optimization, readers are introduced to one of the core challenges of alignment - reward misspecification, also known as the outer alignment problem. The section begins by discussing why good reward design is necessary in addition to good algorithm design. This is followed by concrete examples of reward specification failures, such as reward hacking and reward tampering.
4. **Learning by Imitation**: This section focuses on proposed solutions to reward misspecification that rely on learning reward functions by imitating human behavior. It examines proposals such as imitation learning (IL), behavioral cloning (BC), and inverse reinforcement learning (IRL). Each subsection also examines possible issues and limitations of these approaches as they pertain to resolving reward hacking.
5. **Learning by Feedback**: The final section investigates proposals aiming to rectify reward misspecification by providing feedback to machine learning models. It also provides a comprehensive insight into how current large language models (LLMs) are trained. The discussion covers reward modeling, reinforcement learning from human feedback (RLHF), reinforcement learning from artificial intelligence feedback (RLAIF), and the limitations of these approaches. 
1.0: Reinforcement Learning
===========================

This section provides a succinct reminder of several concepts in reinforcement learning (RL). It also disambiguates various often-conflated terms such as rewards, values and utilities. The section ends with a discussion of how to distinguish the objectives that a reinforcement learning system might pursue from what it is being rewarded for. Readers who are already familiar with the basics can skip directly to section 2.

1.1. Primer
-----------

*Reinforcement Learning (RL) focuses on developing agents that can learn from interactive experiences. RL is based on the concept of an agent learning through interaction with an environment and altering its behavior based on the feedback it receives through rewards after each action.*

Some examples of real-world applications of RL include:

* **Robotic systems**: RL has been applied to tasks such as controlling physical robots in real-time, enabling them to learn more complicated movements (OpenAI 2018 “[Learning Dexterity](https://www.youtube.com/watch?v=jwSbzNHGflM)”). RL can enable robotic systems to learn complex tasks and adapt to changing environments.
* **Recommender systems**: RL can be applied to recommender systems, which interact with billions of users and aim to provide personalized recommendations. RL algorithms can learn to optimize the recommendation policy based on user feedback and improve the overall user experience.
* **Game-playing systems**: In the early 2010s, RL-based systems started to beat humans at a few very simple Atari games, like Pong and Breakout. Over the years, many models have utilized RL to defeat world masters in both board and video games. 
These include models like [AlphaGo](https://www.deepmind.com/research/highlighted-research/alphago) (2016), [AlphaZero](https://www.deepmind.com/blog/alphazero-shedding-new-light-on-chess-shogi-and-go) (2018), [OpenAI Five](https://openai.com/research/openai-five-defeats-dota-2-world-champions) (2019), [AlphaStar](https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii) (2019), [MuZero](https://www.deepmind.com/blog/muzero-mastering-go-chess-shogi-and-atari-without-rules) (2020) and [EfficientZero](https://github.com/YeWR/EfficientZero) (2021).

RL differs from supervised learning: it begins with a high-level description of "what" to do, but allows the agent to experiment and learn the best "how" from experience. In RL, the agent learns through interaction with an environment and receives feedback in the form of rewards or punishments based on its actions. RL is focused on learning a set of rules that recommend the best action to take in a given state to maximize long-term rewards. In contrast, supervised learning typically involves learning from explicitly provided labels or correct answers for each input.

1.2. Core Loop
--------------

The overall functioning of RL is relatively straightforward. The two main components are the agent itself, and the environment within which the agent lives and operates. 
At each time step $t$:

* The agent takes some action $a_t$.
* The environment state $s_t$ changes depending upon the action $a_t$.
* The environment then outputs an observation $o_t$ and a reward $r_t$.

A history is the sequence of past observations, actions and rewards that have been taken up until time $t$:

$$h_t = (a_1, o_1, r_1, \ldots, a_t, o_t, r_t)$$

The state of the world is generally some function of the history:

$$s_t = f(h_t)$$

The world state is the full true state of the world, used to determine how the world generates the next observation and reward. The agent might either get the entire world state as an observation $o_t$, or some partial subset. The world goes from one state $s_t$ to the next $s_{t+1}$ based either on natural environmental dynamics or on the agent's actions. State transitions can be either deterministic or stochastic. This loop continues until a terminal condition is reached, or it can run indefinitely. The following diagram succinctly captures the RL process:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/ddt9ybvvg4nmpmqrwkfi)

Source: Emma Brunskill (Winter 2022) “[Stanford CS234 : RL](https://web.stanford.edu/class/cs234/CS234Win2022/modules.html) - Lecture 1”

1.3: Policies
-------------

*A policy helps the agent determine what action to take once it has received an observation. It is a function mapping from states to actions, specifying what action to take in each state. Policies can be either deterministic or stochastic.*

The goal of RL is to learn a policy (often denoted by $\pi$) that recommends the best action to take at any given moment in order to maximize total cumulative reward over time. The policy defines the mapping from states to actions and guides the agent's decision-making process:

$$\pi : S \to A$$

A policy can be either deterministic or stochastic. A deterministic policy directly maps each state $s_t$ to a specific action $a_t$ and is usually denoted by $\mu$. In contrast, a stochastic policy assigns a probability distribution over actions for each state; stochastic policies are usually denoted by $\pi$. 
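As a concrete toy illustration of this loop and of the two kinds of policies, here is a minimal Python sketch; the `LineWorld` environment and both policies are invented for this example:

```python
import random

random.seed(0)

class LineWorld:
    """Toy environment: the agent walks on the integer line from -3
    to 3 and is rewarded for being at state 3 (fully observed)."""
    def __init__(self):
        self.state = 0

    def step(self, action):  # action is -1 or +1
        self.state = max(-3, min(3, self.state + action))
        reward = 1.0 if self.state == 3 else 0.0
        observation = self.state  # here o_t is the full world state
        return observation, reward

def mu(state):
    """Deterministic policy: always move right."""
    return 1

def pi(state):
    """Stochastic policy: uniform distribution over both actions."""
    return random.choice([-1, 1])

# The agent-environment loop from section 1.2.
env = LineWorld()
obs, total_reward = env.state, 0.0
for t in range(10):
    action = mu(obs)          # swap in pi(obs) for the stochastic policy
    obs, reward = env.step(action)
    total_reward += reward

print(total_reward)  # reaches state 3 after 3 steps, then stays: 8.0
```

Note that $\mu$ returns a single action per state, while $\pi$ samples from a distribution; the loop itself is identical in both cases.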
Deterministic policy: $a_t = \mu(s_t)$

Stochastic policy: $\pi(a \mid s) = P(a_t = a \mid s_t = s)$

In deep RL, policies are function maps that are learned during the training process. They depend on the learned parameters of a neural network (e.g. the weights and biases). These parameters are often denoted with subscripts on the policy symbols, using either $\theta$ or $\phi$; so a deterministic policy parameterized by a neural network is written $a_t = \mu_\theta(s_t)$.

An optimal policy maximizes the expected cumulative reward over time. The agent learns from experience and adjusts its policy based on the feedback it receives from the environment in the form of rewards or punishments. In order to determine whether one action is better than another, the actions (or the state-action pairs) need to be evaluated somehow. There are two different ways to look at which action to take: the immediate rewards (determined by the reward function) and the long-term cumulative rewards (determined by the value function). Both of these greatly influence the types of policies learned by the agent, and therefore also the actions that the agent takes. The following section explores and clarifies the concept of rewards in greater depth.

1.4: Reward
-----------

*Reward refers to any signal or feedback mechanism used to guide the learning process and optimize the behavior of the model.*

The reward signal from the environment is a number that tells the agent how good or bad the current world state is. It provides an evaluation or measure of performance for the model's outputs or actions. The reward can be defined based on a specific task or objective, such as maximizing a score in a game or achieving a desired outcome in a real-world scenario. The training process for RL involves optimizing the model's parameters to maximize the expected reward. The model learns to generate actions or outputs that are more likely to receive higher rewards, leading to improved performance over time.

Where does the reward come from? 
It is generated through a reward function.

*A reward function defines the goal or objective in a reinforcement learning problem. It maps perceived states or state-action pairs of the environment to a single number.*

R : (S × A) → ℝ;  r_t = R(s_t, a_t)

The reward function provides immediate feedback to the agent, indicating the goodness or badness of a particular state or action. It is a mathematical function that maps the state-action pairs of an agent's environment to a scalar value representing the desirability of being in that state and taking that action.

*Reward Functions vs. Value Functions*

The reward indicates the immediate desirability of states or actions, while a value function represents the long-term desirability of states, taking into account future rewards and states. The value is the expected return if you start in a state or state-action pair and then act according to a particular policy forever after.

There are many different ways of choosing value functions. Rewards can also be discounted over time, i.e. future rewards are worth less by some factor γ ∈ (0, 1) per step. One simple formulation is the discounted sum of future rewards given some policy. The cumulative discounted rewards are given by:

R = r_t + γ r_{t+1} + γ² r_{t+2} + … = ∑_{k=0}^∞ γ^k r_{t+k}

And the value of acting according to this policy is given by:

V^π(s) = E[R | s_t = s]

*Reward Functions vs. Utility Functions*

It is also worth distinguishing the concept of utility from reward and value. A reward function is typically used in the context of RL to guide the agent's learning process and behavior. In contrast, a utility function is more general and captures the agent's subjective preferences or satisfaction, allowing for comparisons and trade-offs between different world states. Utility functions are a concept used more in decision theory and agent foundations work.
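The pieces introduced so far (the interaction loop, a stochastic policy, and the discounted return) can be tied together in a short sketch. The environment, its states, and all reward values below are invented for illustration:

```python
import random

# Toy environment: states 0..4. Taking "right" in state 4 ends the episode
# with reward +10; every other step costs -1. (All values are invented.)
def step(state, action):
    done = (state == 4 and action == "right")
    reward = 10.0 if done else -1.0
    next_state = min(state + 1, 4) if action == "right" else max(state - 1, 0)
    return next_state, reward, done

# Stochastic policy pi(a|s): move right with probability 0.9.
def policy(state):
    return "right" if random.random() < 0.9 else "left"

def run_episode(max_steps=50):
    state, rewards = 0, []
    for _ in range(max_steps):
        action = policy(state)                     # a_t ~ pi(.|s_t)
        state, reward, done = step(state, action)  # environment transition
        rewards.append(reward)                     # r_t
        if done:
            break
    return rewards

# Discounted return: R = sum over k of gamma^k * r_{t+k}.
def discounted_return(rewards, gamma=0.99):
    return sum(gamma**k * r for k, r in enumerate(rewards))

# Monte Carlo estimate of V^pi at the start state: average return over episodes.
random.seed(0)
value_estimate = sum(discounted_return(run_episode()) for _ in range(1000)) / 1000
```

Averaging the discounted return over many sampled episodes gives a simple Monte Carlo estimate of the value of the start state under this policy; practical RL algorithms estimate such quantities far more efficiently.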
2.0: Optimization ================= Optimization is important to understand for AI safety concerns because it plays a central role in ML. AI systems, particularly those based on deep learning, are trained using optimization algorithms to learn patterns and associations from data. These algorithms update the model's parameters to minimize a loss function, maximizing its performance on the given task. Optimization amplifies certain behaviors or outcomes, even if they were initially unlikely. For example, an optimizer can search through a space of possible outputs and take extreme actions that have a high score according to the objective function, potentially leading to unintended and undesirable behavior. These include reward misspecification failures. A better recognition of the power of optimization to amplify certain outcomes might help in designing systems and algorithms that truly align with human values and objectives even under pressure of optimization. This involves ensuring that the optimization process is aligned with the intended goals and values of the system's designers. It also requires considering the potential failure modes and unintended consequences that can arise from optimization processes. Risks from optimization are everywhere in AI Safety. It is only touched on briefly in this chapter, but will be discussed in further detail in the chapters on goal misgeneralization and agent foundations. Optimization power plays a crucial role in reward hacking. Reward hacking occurs when RL agents exploit the difference between a true reward and a proxy reward. The increase in optimization power can lead to a higher likelihood of reward hacking behavior. In some cases, there are phase transitions where a moderate increase in optimization power results in a drastic increase in reward hacking. 
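The amplification effect can be made concrete with a toy sketch (all numbers here are illustrative): each candidate's measured score is its true value plus noise, and selecting the highest-scoring candidate systematically selects for the noise as well as the value. The gap grows with the strength of the search:

```python
import random

random.seed(0)

def pick_best_by_proxy(n_candidates, noise_scale=1.0):
    # Each candidate has a true value; its proxy score adds independent noise.
    true_values = [random.gauss(0.0, 1.0) for _ in range(n_candidates)]
    proxies = [v + random.gauss(0.0, noise_scale) for v in true_values]
    best = max(range(n_candidates), key=lambda i: proxies[i])
    return proxies[best], true_values[best]

def average_gap(n_candidates, trials=2000):
    # Gap between the proxy score we optimized and the true value we got.
    gaps = [p - t for p, t in
            (pick_best_by_proxy(n_candidates) for _ in range(trials))]
    return sum(gaps) / trials

# Searching harder (more candidates considered) widens the gap between the
# optimized proxy score and the true value of whatever gets selected.
weak_search = average_gap(n_candidates=2)
strong_search = average_gap(n_candidates=100)
```

This is one face of the phenomenon discussed in the next section: the harder a proxy measure is optimized, the less it tracks the true value of the selected outcome.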
2.1: Goodhart's Law ------------------- "*When a measure becomes a target, it ceases to be a good measure.*" This notion initially stems from the work of Charles Goodhart in economic theory. However, it has emerged as one of the primary challenges in many different fields including AI alignment today. To illustrate this concept, the following is a story of a Soviet nail factory. The factory received instructions to produce as many nails as possible, with rewards for high output and penalties for low output. Within a few years, the factory had significantly increased its nail production—tiny nails that were essentially thumbtacks and proved impractical for their intended purpose. Consequently, the planners shifted the incentives: they decided to reward the factory based on the total weight of the nails produced. Within a few years, the factory began producing large, heavy nails—essentially lumps of steel—that were equally ineffective for nailing things.   ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/k7srevgiusvb24b9vl9a) ([Source](https://lwfiles.mycourse.app/networkcapitalinsider-public/cc478b844a27de3f4f79f3dc0f9e0fde.jpeg)) A measure is not something that is optimized, whereas a target is something that is optimized. When we specify a target for optimization, it is reasonable to expect it to be correlated with what we want. Initially the measure might lead to the kind of actions that are truly desired. However, once the measure itself becomes the target, optimizing that target then starts diverging away from our desired states. In the context of AI and reward systems, Goodhart's Law means that when a reward becomes the objective for an AI agent, the AI agent will do everything it can to maximize the reward function, rather than the original intention. 
This can lead to unintended consequences and manipulation of the reward system, as it can often be easier to "cheat" than to achieve the intended goals. This is one of the core underlying reasons for the reward hacking failures that we will see in subsequent sections. Reward hacking can be seen as a manifestation of Goodhart's Law in the context of AI systems. When designing reward functions, it is challenging to precisely articulate the desired behavior, and agents may find ways to exploit loopholes or manipulate the reward system to achieve high rewards without actually fulfilling the intended objectives. For example, a cleaning robot may create its own trash to put in the trash can to collect rewards, rather than actually cleaning the environment.

Understanding Goodhart's Law is crucial for addressing reward hacking and designing robust reward systems that align with the intended goals of AI agents. It highlights the need for careful consideration of the measures and incentives used in AI systems to avoid unintended consequences and perverse incentives. The next section dives deeper into specific instances of reward misspecification and how AIs can find ways to satisfy the literal specification of an objective and obtain high reward while not fulfilling the task in spirit.

3.0: Reward Misspecification
============================

***Reward misspecification**, also termed the **outer alignment** problem, refers to the issue of providing an AI with the correct reward to optimize for.*

The fundamental issue is simple to comprehend: does the specified loss function align with the intended objective of its designers? However, implementing this in practical scenarios is exceedingly challenging. Expressing the complete "intention" behind a human request equates to conveying all human values, the implicit cultural context, etc., which themselves remain poorly understood.
Furthermore, as most models are designed as goal optimizers, they are all vulnerable to Goodhart's Law. This vulnerability implies that unforeseen negative consequences may arise due to excessive optimization pressure on a goal that appears well-specified to humans, but deviates from true objectives in subtle ways. The overall problem can be broken up into distinct issues which will be explained in detail in individual sub-sections below. Here is a quick overview: * **Reward misspecification** occurs when the specified reward function does not accurately capture the true objective or desired behavior. * **Reward design** refers to the process of designing the reward function to align the behavior of AI agents with the intended objectives. * **Reward hacking** refers to the behavior of RL agents exploiting gaps or loopholes in the specified reward function to achieve high rewards without actually fulfilling the intended objectives. * **Reward tampering** is a broader concept that encompasses inappropriate agent influence on the reward process itself, excluding the manipulation of the reward function through gaming. Before delving into specific types of reward misspecification failures, the following section further explains the emphasis on reward design in conjunction with algorithm design. This section also elucidates the notorious difficulty of designing effective rewards. 3.1: Reward Design ------------------ *Reward design refers to the process of specifying the reward function in reinforcement learning (RL).* Reward shaping was introduced in an earlier section. Shaping refers to the process of modifying the reward function to provide additional guidance or incentives to the learning agent. Reward design on the other hand is a broader term that encompasses the entire process of designing and shaping reward functions to guide the behavior of AI systems. 
It involves not only reward shaping but also the overall process of defining objectives, specifying preferences, and creating reward functions that align with human values and desired outcomes. Reward design is often used interchangeably with [reward engineering](https://www.lesswrong.com/posts/4nZRzoGTqg8xy5rr8/the-reward-engineering-problem); the two terms refer to the same thing.

RL algorithm design and RL reward design are two separate facets of reinforcement learning. RL algorithm design is about the development and implementation of learning algorithms that allow an agent to learn and refine its behavior based on rewards and environmental interactions. This process includes designing the mechanisms and procedures by which the agent learns from its experiences, updates its policies, and makes decisions to maximize cumulative rewards.

Conversely, RL reward design concentrates on the specification and design of the reward function guiding the RL agent's learning process. Reward design involves carefully engineering the reward function to align with the desired behavior and objectives, while accounting for potential pitfalls like reward hacking or reward tampering. The reward function is a pivotal element because it molds the behavior of the RL agent and determines which actions are deemed desirable or undesirable. Designing a reward function often presents a formidable challenge that necessitates considerable expertise and experience.
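As a small, entirely hypothetical illustration of what reward design involves, consider hand-composing a reward for a toy 2-D navigation task. Every term and weight below is a design decision, and mis-setting any of them changes which behavior is optimal:

```python
# Hand-designed reward for a hypothetical 2-D navigation task.
# Every term and weight below is an illustrative design choice.
def designed_reward(pos, goal, crashed, energy_used):
    distance = abs(goal[0] - pos[0]) + abs(goal[1] - pos[1])  # Manhattan distance
    reward = 0.0
    reward += 100.0 if distance == 0 else 0.0  # completion bonus
    reward -= 0.1 * distance                   # shaping term: prefer being closer
    reward -= 50.0 if crashed else 0.0         # safety penalty
    reward -= 0.01 * energy_used               # efficiency penalty
    return reward
```

Even in this tiny example the designer must decide how to trade off task completion, safety, and efficiency against one another; real tasks involve far more such choices, each a potential source of misspecification.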
To demonstrate the complexity of this task, consider how one might manually design a reward function to make an agent perform a backflip, as depicted in the following image:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/j5bex8lae44dyr0lkgnk)

Source: OpenAI (2017) “[Learning from human preferences](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)”

While RL algorithm design focuses on the learning and decision-making mechanisms of the agent, RL reward design focuses on defining the objective and shaping the agent's behavior through the reward function. Both aspects are crucial in the development of effective and aligned RL systems. A well-designed RL algorithm can efficiently learn from rewards, while a carefully designed reward function can guide the agent towards desired behavior and avoid unintended consequences. The following diagram displays the three key elements in RL agent design: algorithm design, reward design, and the prevention of tampering with the reward signal:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/izuxey2bnle4ftwor5nq)

Source: DeepMind (Apr 2020) “[Specification gaming: the flip side of AI ingenuity](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity)”

The process of reward design receives minimal attention in introductory RL texts, despite its critical role in defining the problem to be solved. As mentioned in this section's introduction, solving the reward misspecification problem would necessitate finding evaluation metrics resistant to Goodhart's-law-induced failures. These include failures stemming from over-optimization of either a misdirected or a proxy objective (reward hacking), or from the agent directly interfering with the reward signal (reward tampering). These concepts are further explored in the ensuing sections.
3.2: Reward Shaping
-------------------

*Reward shaping is a technique used in RL which introduces small intermediate rewards to supplement the environmental reward. This seeks to mitigate the problem of sparse reward signals and to encourage exploration and faster learning.*

In order to succeed at a reinforcement learning problem, an AI needs to do two things:

* Find a sequence of actions that leads to positive reward. This is the *exploration* problem.
* Remember the sequence of actions to take, and generalize to related but slightly different situations. This is the *learning* problem.

Model-free RL methods explore by taking actions randomly. If, by chance, the random actions lead to a reward, they are reinforced, and the agent becomes more likely to take these beneficial actions in the future. This works well if rewards are dense enough for random actions to lead to a reward with reasonable probability. However, many of the more complicated games require long sequences of very specific actions to experience any reward, and such sequences are extremely unlikely to occur randomly. A classic example of this problem was observed in the video game Montezuma's Revenge, where the agent's objective was to find a key, but there were many intermediate steps required to find it. In order to solve such long-term planning problems, researchers have tried adding extra terms or components to the reward function to encourage desired behavior or discourage undesired behavior.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/biuautsdvf0oduy39xuk)

Source: OpenAI (Jul 2018) “[*Learning Montezuma’s Revenge from a single demonstration*](https://openai.com/research/learning-montezumas-revenge-from-a-single-demonstration)”

The goal of reward shaping is to make the learning process more efficient by providing informative rewards that guide the agent towards the desired outcomes.
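A standard way to add such intermediate rewards without changing which policies are optimal is potential-based shaping (Ng et al., 1999), where the bonus is the discounted change in a potential function over states. The task and the potential below are illustrative choices:

```python
# Potential-based shaping: r'(s, a, s') = r + gamma * phi(s') - phi(s).
# The task and the potential function below are illustrative choices.
GOAL = 10

def phi(state):
    return -abs(GOAL - state)  # closer to the goal => higher potential

def shaped_reward(env_reward, state, next_state, gamma=0.99):
    return env_reward + gamma * phi(next_state) - phi(state)

# Even with a sparse environment reward of zero, a step toward the goal now
# earns an immediate positive bonus, and a step away earns a penalty.
bonus_forward = shaped_reward(0.0, state=3, next_state=4)
bonus_backward = shaped_reward(0.0, state=4, next_state=3)
```

Progress toward the goal now yields immediate feedback even when the environment reward is sparse, while the telescoping of the potential terms leaves the ranking of policies unchanged.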
Reward shaping involves providing additional rewards to the agent for making progress towards the desired goal. By shaping the rewards, the agent receives more frequent and meaningful feedback, which can help it learn more efficiently. Reward shaping can be particularly useful in scenarios where the original reward function is sparse, meaning that the agent receives little or no feedback until it reaches the final goal.

However, it is important to design reward shaping carefully to avoid unintended consequences. Reward shaping algorithms often assume hand-crafted and domain-specific shaping functions, constructed by subject matter experts, which runs contrary to the aim of autonomous learning. Moreover, poorly chosen shaping rewards can worsen the agent's performance: the agent may end up optimizing for the shaped rewards rather than the true rewards, resulting in suboptimal behavior. Examples of this are provided in the subsequent sections on reward hacking.

3.3: Reward Hacking
-------------------

*Reward hacking occurs when an AI agent finds ways to exploit loopholes or shortcuts in the environment to maximize its reward without actually achieving the intended goal.*

Specification gaming is the general framing for the problem when an AI system finds a way to achieve the objective in an unintended way. Specification gaming can happen in many kinds of ML models. Reward hacking is a specific occurrence of a specification gaming failure in RL systems that function on reward-based mechanisms.

Reward hacking and reward misspecification are related concepts but have distinct meanings. Reward misspecification refers to the situation where the specified reward function does not accurately capture the true objective or desired behavior. Reward hacking does not always require reward misspecification.
It is not necessarily true that a perfectly specified reward (one which completely and accurately captures the desired behavior of the system) is impossible to hack. There can also be buggy or corrupted implementations which will have unintended behaviors. The point of a reward function is to boil a complicated system down to a single value. This will pretty much always involve simplifications, which will then differ slightly from what you are describing. The map is not the territory.

Reward hacking can manifest in a myriad of ways. For instance, in the context of game-playing agents, it might involve exploiting software glitches or bugs to directly manipulate the score or gain high rewards through unintended means. As a concrete example, one agent in the Coast Runners game was trained with the objective of winning the race. The game uses a score mechanism, so in order to progress to the next level the reward designers used reward shaping to reward the system when it scored points. These points were given when a boat collected items (such as the green blocks in the animation below) or accomplished other actions that presumably would help it win the race. Despite being given intermediate rewards, the overall intended goal was to finish the race as quickly as possible. The developers thought the best way to get a high score was to win the race, but this was not the case. The agent discovered that continuously rotating a ship in a circle to accumulate points indefinitely optimized its reward, even though doing so did not help it win the race.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/qjanshlbug21zqvkp4ez)

Source: Amodei & Clark (2016) “[Faulty reward functions in the wild](https://openai.com/research/faulty-reward-functions)"

In cases where the reward function misaligns with the desired objective, reward hacking can emerge.
This can lead the agent to optimize a proxy reward, deviating from the true underlying goal, thereby yielding behavior contrary to the designers' intentions. As an example of something that might happen in a real-world scenario, consider a cleaning robot: if the reward function focuses on reducing mess, the robot might artificially create a mess to clean up, thereby collecting rewards, instead of effectively cleaning the environment. Reward hacking presents significant challenges to AI safety due to the potential for unintended and potentially harmful behavior. As a result, combating reward hacking remains an active research area in AI safety and alignment.

3.4: Reward Tampering
---------------------

* Victoria Krakovna et al. (Mar 2021) [Reward Tampering Problems and Solutions](https://arxiv.org/abs/1908.04734)

*Reward tampering refers to instances where an AI agent inappropriately influences or manipulates the reward process itself.*

The problem of getting some intended task done can be split into:

* Designing an agent that is good at optimizing reward, and,
* Designing a reward process that provides the agent with suitable rewards.

The reward process can be understood by breaking it down even further. The process includes:

+ An implemented reward function
+ A mechanism for collecting appropriate sensory data as input
+ A way for the user to potentially update the reward function.

Reward tampering involves the agent interfering with various parts of this reward process. An agent might distort the feedback received from the reward model, altering the information used to update its behavior. It could also manipulate the reward model's implementation, altering the code or hardware to change reward computations. In some cases, agents engaging in reward tampering may even directly modify the reward values in the machine register before they are processed. Depending on what exactly is being tampered with, we get various degrees of reward tampering.
These can be distinguished from the image below. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/ogajkyybxz0rudhyza64) Source: Leo Gao (Nov 2022) “[Clarifying wireheading terminology](https://www.alignmentforum.org/posts/REesy8nqvknFFKywm/clarifying-wireheading-terminology)” *Reward function input tampering interferes only with the inputs to the reward function. E.g. interfering with the sensors.* *Reward function tampering involves the agent changing the reward function itself.* *Wireheading refers to the behavior of a system that manipulates or corrupts its own internal structure by tampering directly with the RL algorithm itself, e.g. by changing the register values.* Reward tampering is concerning because it is hypothesized that tampering with the reward process will often arise as an instrumental goal (Bostrom, 2014; Omohundro, 2008). This can lead to weakening or breaking the relationship between the observed reward and the intended task. This is an ongoing research direction. Research papers such as “[Advanced Artificial Agents Intervene in the Provision of reward](https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064)” (August 2022) by Hutter et al. seek to provide a more detailed analysis of such subjects. A hypothesized existing example of reward tampering can be seen in recommendation-based algorithms used in social media. These algorithms influence their users’ emotional state to generate more ‘likes’ (Russell, 2019). The intended task was to serve useful or engaging content, but this is being achieved by tampering with human emotional perceptions, and thereby changing what would be considered useful. Assuming the capabilities of systems continue to increase through either computational or algorithmic advances, it is plausible to expect reward tampering problems to become increasingly common. 
Therefore, reward tampering is a potential concern that requires much more research and empirical verification.

4.0: Learning from imitation
============================

The preceding sections have underscored the significance of reward misspecification for the alignment of future artificial intelligence. The next few sections will explore various attempts and proposals formulated to tackle this issue, commencing with an intuitive approach: learning the appropriate reward function through observation and imitation of human behavior, rather than manual creation by the designers.

4.1: Imitation Learning (IL)
----------------------------

*Imitation learning entails the process of learning via the observation of an expert's actions and replicating their behavior.*

Unlike reinforcement learning (RL), which derives a policy for a system's actions based on its interaction outcomes with the environment, imitation learning aspires to learn a policy through the observation of another agent interacting with the environment. Imitation learning is the general term for the class of algorithms that learn through imitation. Following is a table that distinguishes various machine-learning-based methods. SL = Supervised learning; UL = Unsupervised learning; RL = Reinforcement Learning; IL = Imitation Learning. IL reduces RL to SL. IL + RL is a promising area.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/qx8s74nyevz3vdql9kam)

Source: Emma Brunskill (Winter 2022) “[Stanford CS234 : RL](https://web.stanford.edu/class/cs234/CS234Win2022/modules.html) - Lecture 1”

IL can be implemented through behavioral cloning (BC), procedural cloning (PC), inverse reinforcement learning (IRL), cooperative inverse reinforcement learning (CIRL), generative adversarial imitation learning (GAIL), etc. One instance of this process's application is in the training of modern large language models (LLMs).
LLMs, after training as general-purpose text generators, often undergo fine-tuning for instruction following through imitation learning, using the example of a human expert who follows instructions provided as text prompts and completions. In the context of safety and alignment, imitation learning is favored over direct reinforcement to alleviate specification gaming issues. This problem emerges when the programmers overlook or fail to anticipate certain edge cases or unusual ways of achieving a task in the specific environment. The presumption is that demonstrating behavior, compared to RL, would be simpler and safer, as the model would not only attain the objective but also fulfill it as the expert demonstrator explicitly intends. However, this is not an infallible solution, and its limitations will be discussed in later sections.   4.2: Behavioral Cloning (BC) ---------------------------- *Behavioral cloning involves collecting observations of an expert demonstrator proficient at the underlying task, and using supervised learning (SL) to guide an agent to 'imitate' the demonstrated behavior.* Behavioral cloning is one way in which we can implement imitation learning (IL). There are also other ways such as inverse reinforcement learning (IRL), or cooperative inverse reinforcement Learning (CIRL). Unlike IRL, the goal behind behavioral cloning as a machine learning (ML) method is to replicate the demonstrator's behavior as closely as possible, regardless of what the demonstrator’s goals might be. Self-driving cars can serve as a simplistic illustration of how behavioral cloning operates. A human demonstrator (driver) is directed to operate a car, during which data about the environment state from sensors like lidar and cameras, along with the actions taken by the demonstrator, are collected. These actions can include wheel movements, gear use, etc. This creates a dataset comprising (state, action) pairs. 
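A minimal sketch of this pipeline on a made-up one-dimensional lane-keeping task (the 'expert' here is a simple invented rule, and a one-parameter least-squares fit stands in for a neural network):

```python
import random

random.seed(0)

# 1. Collect demonstrations on an invented 1-D lane-keeping task:
#    the expert steers back toward the lane center, proportionally.
def expert_action(lane_offset):
    return -0.8 * lane_offset

states = [random.uniform(-2.0, 2.0) for _ in range(200)]
dataset = [(s, expert_action(s)) for s in states]  # (state, action) pairs

# 2. Supervised learning: fit action = w * state by least squares.
sxy = sum(s * a for s, a in dataset)
sxx = sum(s * s for s, _ in dataset)
w = sxy / sxx

# 3. The learned policy imitates the expert on unseen states.
def cloned_policy(lane_offset):
    return w * lane_offset
```

With clean, noiseless demonstrations the cloned policy recovers the expert's rule almost exactly; the failure modes discussed in the following points arise when the data, the demonstrator, or the model's knowledge diverge from this idealized setting.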
Subsequently, supervised learning is used to train a prediction model, which attempts to predict an action for any future environment state. For instance, the model might output a specific steering wheel and gear configuration based on the camera feed. When the model achieves sufficient accuracy, it can be stated that the human driver's behavior has been 'cloned' into a machine via learning. Hence, the term behavioral cloning. The following points highlight several potential issues that might surface when employing behavioral cloning:

* **Confident incorrectness**: During the demonstrations, the human experts have some amount of background knowledge that they rely on, which is not taught to the model. For example, when training an LLM to have conversations using behavioral cloning, the human demonstrator might ask certain questions less frequently because they are considered 'common sense'. A model trained to imitate will copy both the types of questions asked in conversation and the frequency with which they are asked. Humans already possess this background knowledge, but an LLM doesn't. This means that to have the same level of information as a human, the model would need to ask some questions more frequently to fill the gaps in its knowledge. But since the model seeks to imitate, it will stick to the low frequency demonstrated by the human, and thus has strictly less information overall than the demonstrator for the same conversational task. Despite this dearth of knowledge, we expect it to perform as a clone and reach human-level performance. To reach human performance with less-than-human knowledge, it will resort to 'making up facts' that help it reach its performance goals. These 'hallucinations' will then be presented during the conversation with the same level of confidence as all the other information.
Hallucinations and confident incorrectness are [an empirically verified problem](https://arxiv.org/pdf/2103.15025.pdf) in many LLMs, including GPT-2 and GPT-3, and raise obvious concerns for AI safety.

* **Underachieving**: The types of hallucinations mentioned above arose because the model knew too little. However, the model can also know too much. If the model knows more than the human demonstrator, because it is able to find more patterns in the environment state that it is given, it will throw away that information and reduce its performance to match human level. This is because it is trained as a 'clone'. Ideally, we don't want the model dumbing itself down or withholding useful new patterns in data just because it is trying to be humanlike or perform at a human level. This is another problem that will have to be addressed if behavioral cloning continues to be used as an ML technique.

4.3: Procedural Cloning (PC)
----------------------------

* Mengjiao Yang et al. (May 2022) “[Chain of Thought Imitation with Procedure Cloning](https://arxiv.org/abs/2205.10816)”

*Procedure cloning (PC) extends behavioral cloning (BC) by not just imitating the demonstrator's outputs but also imitating the complete sequence of intermediate computations associated with an expert's procedure.*

In BC, the agent learns to map states directly to actions, discarding the intermediate search outputs. On the other hand, the PC approach learns the entire sequence of intermediate computations, including branches and backtracks, during training. During inference, PC generates a sequence of intermediate search outcomes that mimic the expert's search procedure before outputting the final action. The main difference between PC and BC lies in the information they utilize. BC only has access to expert state-action pairs as demonstrations, while PC also has access to the intermediate computations that generated those state-action pairs.
PC learns to predict the complete series of intermediate computation outcomes, enabling it to generalize better to test environments with different configurations compared to alternative improvements over BC. PC's ability to imitate the expert's search procedure allows it to capture the underlying reasoning and decision-making process, leading to improved performance in various tasks. A limitation of PC is the computational overhead compared to BC, as PC needs to predict intermediate procedures. Additionally, the choice of how to encode the expert's algorithm into a form suitable for PC is left to the practitioner, which may require some trial-and-error in designing the ideal computation sequence.   4.4: Inverse Reinforcement Learning (IRL) ----------------------------------------- *Inverse reinforcement learning (IRL) represents a form of machine learning wherein an artificial intelligence observes the behavior of another agent within a particular environment, typically an expert human, and endeavors to discern the reward function without its explicit definition.* IRL is typically employed when a reward function is too intricate to define programmatically, or when AI agents need to react robustly to sudden environmental changes necessitating a modification in the reward function for safety. For instance, consider an AI agent learning to execute a backflip. Humans, dogs, and Boston Dynamics robots can all perform backflips, but the manner in which they do so varies significantly depending on their physiology, their incentives, and their current location, all of which can be highly diverse in the real world. An AI agent learning backflips purely through trial and error across a wide range of body types and locations, without something to observe, might prove highly inefficient. 
IRL, therefore, does not necessarily imply that an AI mimics other agents' behavior, since AI researchers may anticipate the AI agent devising more efficient ways to maximize the discovered reward function. Nevertheless, IRL does assume that the observed agent behaves transparently enough for an AI agent to accurately identify their actions and what constitutes success. This means that IRL endeavors to discover the reward functions that 'explain' the demonstrations. This should not be conflated with imitation learning, where the primary interest is a policy capable of generating the observed demonstrations.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/k4eolj8ibrm3bbueywno)

([Source](https://miro.medium.com/v2/resize:fit:3508/1*rZoO-azxiEH3viQao8NcAA.png))

IRL constitutes both a machine learning method, since it can be employed when specifying a reward function is excessively challenging, and a machine learning problem, as an AI agent may settle on an inaccurate reward function or utilize unsafe and misaligned methods to achieve it. One limitation of this approach is that IRL algorithms presume that the observed behavior is optimal, an assumption that is arguably too strong when dealing with human demonstrations. Another problem is that the IRL problem is ill-posed, as every policy is optimal for the null reward. For most behavioral observations, multiple fitting reward functions exist. This set of solutions often includes many degenerate solutions, which assign zero reward to all states.

4.5: Cooperative Inverse Reinforcement Learning (CIRL)
------------------------------------------------------

* Stuart Russell et al. (Nov 2016) “[Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137)”

CIRL (Cooperative Inverse Reinforcement Learning) is an extension of the IRL (Inverse Reinforcement Learning) framework.
IRL is a learning approach that aims to infer the underlying reward function of an expert by observing their behavior. It assumes that the expert's behavior is optimal and tries to learn a reward function that explains their actions.

CIRL, on the other hand, is an interactive form of IRL that addresses two major weaknesses of conventional IRL. First, instead of simply copying the human reward function, CIRL is formulated as a learning process: an interactive reward maximization process in which the human functions as a teacher and provides feedback (in the form of rewards) on the agent's actions. This allows the human to nudge the AI agent towards behavioral patterns that align with their preferences.

The second weakness of conventional IRL is that it assumes the human behaves optimally, which limits the teaching behaviors that can be considered. CIRL addresses this weakness by allowing for a variety of teaching behaviors and interactions between the human and the AI agent. It enables the AI agent to learn not only what actions to take but also how and why to take them, by observing and interacting with the human.

CIRL has been studied as a potential approach to AI alignment, particularly in scenarios where deep learning may not scale to AGI. However, opinions on the potential effectiveness of CIRL vary: some researchers expect it to be helpful if deep learning doesn't scale to AGI, while others assign a higher probability to deep learning scaling to AGI.
4.6: The (Easy) Goal Inference Problem
--------------------------------------

* Christiano, Paul (Nov 2018) “[The easy goal inference problem is still hard](https://www.alignmentforum.org/posts/h9DesGT3WT9u2k7Hr/the-easy-goal-inference-problem-is-still-hard)”

*The **goal inference problem** refers to the task of inferring the goals or intentions of an agent based on their observed behavior or actions.*

This final section builds upon the limitations highlighted in previous sections to introduce the goal inference problem, and its simpler subset, the easy goal inference problem. Imitation-learning-based approaches generally follow these steps:

1. Observe the user's actions and statements.
2. Deduce the user's preferences.
3. Endeavor to enhance the world according to the user's preferences, possibly collaborating with the user and seeking clarification as needed.

The merit of this method is that we can immediately start constructing systems that are driven by observed user behavior. However, as a consequence of this approach, we run into the goal inference problem: determining what the agent is trying to achieve or what their desired outcome is. The goal inference problem is challenging because agents may act sub-optimally or fail to achieve their goals, making it difficult to accurately infer their true intentions. Traditional approaches to goal inference often assume that agents act optimally or exhibit simplified forms of sub-optimality, which may not capture the complexity of real-world planning and decision-making. Therefore, the goal inference problem requires accounting for the difficulty of planning itself and the possibility of sub-optimal or failed plans. However, it also optimistically presumes that we can depict a human as a somewhat rational agent, which might not always hold.
The easy goal inference problem is a simplified version of the goal inference problem. *The **easy goal inference problem** involves finding a reasonable representation or approximation of what a human wants, given complete access to the human's policy or behavior in any situation.* This version of the problem assumes no algorithmic limitations and focuses on extracting the true values that the human is imperfectly optimizing. However, even this simplified version of the problem remains challenging, and little progress has been made on the general case.

The easy goal inference problem is related to the goal inference problem because it highlights the difficulty of accurately inferring human goals or intentions, even in simplified scenarios. While narrow domains with simple decisions can be solved using existing approaches, more complex tasks such as designing a city or setting policies require addressing the challenges of modeling human mistakes and sub-optimal behavior. Therefore, the easy goal inference problem serves as a starting point to understand the broader goal inference problem and the additional complexities it entails.

Inverse reinforcement learning (IRL) is effective in modeling and imitating human experts. However, for many significant applications, we desire AI systems that can make decisions surpassing even the experts. In such cases, the accuracy of the model isn't the sole criterion, because a perfectly accurate model would merely lead us to replicate human behavior and not transcend it. This necessitates an explicit model of errors or bounded rationality, which will guide the AI on how to improve or be "smarter," and which aspects of the human policy it should discard. Nonetheless, this remains an exceedingly challenging problem, as humans are not simply rational agents with a bit of added noise. Hence, constructing any model of mistakes is just as complex as building a comprehensive model of human behavior.
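As a toy illustration of this point, consider inferring which end of a corridor a person wants to reach from ten observed steps, two of which are slips. The numbers and the simple noise model below are invented for illustration; the point is only that assuming perfect rationality breaks the inference, while even a crude mistake model repairs it:

```python
def posterior_goal_right(n_right, n_left, p_toward_goal):
    # Bayesian goal inference with a uniform prior over two goals
    # (left end vs. right end) and an assumed error model:
    # P(step toward goal) = p_toward_goal, P(step away) = 1 - p_toward_goal.
    p, q = p_toward_goal, 1.0 - p_toward_goal
    like_right = p**n_right * q**n_left   # if the goal is to the right
    like_left = p**n_left * q**n_right    # if the goal is to the left
    total = like_right + like_left
    return like_right / total if total > 0 else float("nan")

# Observed: 8 steps right, 2 steps left (slips / mistakes).

# Assumed perfectly rational human (p = 1.0): neither goal hypothesis can
# explain the two "mistaken" steps, so both likelihoods are zero and the
# posterior is undefined.
print(posterior_goal_right(8, 2, 1.0))  # nan

# Assumed 10% slip rate (p = 0.9): inference works despite the mistakes.
print(posterior_goal_right(8, 2, 0.9))  # ≈ 0.999998: goal is almost surely right
```

With `p_toward_goal = 1.0` the model breaks down entirely, while allowing 10% slips recovers a confident, sensible answer. Real human sub-optimality is, of course, far more structured than this independent slip noise, which is exactly why modeling mistakes well is so hard.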
A critical question we face is: How do we determine the quality of a model when accuracy can no longer be our reliable measure? How can we distinguish between good and bad decisions?

5.0: Learning from feedback
===========================

This section discusses yet more attempts to address the reward misspecification problem. At times, the intended behavior is so intricate that demonstration-based learning becomes untenable. An alternative approach is to offer feedback to the agent instead of providing either manually specified reward functions or even expert demonstrations. This section delves into feedback-based strategies such as Reward Modeling, Reinforcement Learning from Human Feedback (RLHF), and Reinforcement Learning from AI Feedback (RLAIF), also known as Reinforcement Learning from Constitutional AI (RLCAI) or simply Constitutional AI.

5.1: Reward Modeling
--------------------

* DeepMind (Nov 2018) “[Scalable agent alignment via reward modeling](https://arxiv.org/abs/1811.07871)”

Reward modeling was developed to apply reinforcement learning (RL) algorithms to real-world problems where designing a reward function is difficult, in part because humans don’t have a perfect understanding of every objective. In reward modeling, human assistants evaluate the outcomes of AI behavior, without needing to know how to perform or demonstrate the task optimally themselves. This is similar to how you can tell whether a dish is cooked well by tasting it even if you do not know how to cook, and thus your feedback can be used by a chef to learn how to cook better.

This technique separates the RL alignment problem into two halves: understanding intentions, i.e. learning the ‘What?’, and acting to achieve the intentions, i.e. learning the ‘How?’. This means that in the modeling agenda, there are two different ML models:

* A reward model is trained with user feedback. This model learns to predict what humans would consider good behavior.
* An agent trained with RL, where the reward for the agent is determined by the outputs of the reward model.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/hzepfc9kuxzytm5iemjm)

Source: DeepMind (Nov 2018) “[Scalable agent alignment via reward modeling](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84)”

Overall, while promising, reward modeling can still fall prey to reward misspecification and reward hacking failures. Obtaining accurate and comprehensive feedback can be challenging, and human evaluators may have limited knowledge or biases that can impact the quality of the feedback. Additionally, any reward functions learnt through modeling might also struggle to generalize to new situations or environments that differ from the training data. These are all discussed further using concrete examples in later sections.

There are also some variants of reward modeling, such as:

* **Narrow reward modeling** is a specific flavor of reward modeling where the focus is on training AI systems to accomplish specific tasks rather than trying to determine the "true human utility function". It aims to learn reward functions to achieve particular objectives, rather than seeking a comprehensive understanding of human values.
* **Recursive reward modeling** seeks to introduce scalability to the technique. In recursive reward modeling, the focus is on decomposing a complex task into simpler subtasks and using reward modeling at each level to train agents that can perform those subtasks. This hierarchical structure allows for more efficient training and credit assignment, as well as the exploration of novel solutions that may not be apparent to humans. This is shown in the diagram below. Scalable oversight will be covered in greater depth in future chapters.
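As an illustrative sketch (not from the cited paper), the reward-model half of this split can be trained from pairwise human preferences with a Bradley-Terry likelihood, a standard choice in preference-based reward modeling. The feature encoding, data, and hyperparameters below are invented:

```python
import math

# The reward model is linear in trajectory features (weights w). Human
# feedback arrives as pairs (preferred, rejected). Bradley-Terry model:
#   P(A preferred over B) = sigmoid(r(A) - r(B))

def reward(w, feats):
    return sum(wi * fi for wi, fi in zip(w, feats))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_reward_model(comparisons, dim, lr=0.5, epochs=200):
    w = [0.0] * dim
    for _ in range(epochs):
        for preferred, rejected in comparisons:
            p = sigmoid(reward(w, preferred) - reward(w, rejected))
            g = p - 1.0  # gradient of -log(p) w.r.t. the reward margin
            w = [wi - lr * g * (a - b)
                 for wi, a, b in zip(w, preferred, rejected)]
    return w

# Toy data: feature 0 = task success, feature 1 = energy used.
# Hidden human preference: success is good, wasted energy is bad.
comparisons = [
    ((1.0, 0.2), (0.0, 0.1)),  # success preferred over failure
    ((1.0, 0.1), (1.0, 0.9)),  # efficient success preferred over wasteful
    ((0.0, 0.1), (0.0, 0.8)),  # even among failures, less energy preferred
]
w = train_reward_model(comparisons, dim=2)
print(w)  # weight on success ends positive, weight on energy ends negative
```

The learned weights recover the hidden preference (success good, wasted energy bad), and the resulting reward model could then be handed to any RL algorithm as the agent's reward signal.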
![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/xclvo4btaebz0ondwane)

Source: DeepMind (Nov 2018) “[Scalable agent alignment via reward modeling](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84)”

The general reward modeling framework forms the basis for other feedback-based techniques, such as RLHF (Reinforcement Learning from Human Feedback), which is discussed in the next section.

5.2. Reinforcement Learning from Human Feedback (RLHF)
------------------------------------------------------

* Christiano, Paul et al. (Feb 2023) “[Deep reinforcement learning from human preferences](https://arxiv.org/abs/1706.03741)”

Reinforcement Learning from Human Feedback (RLHF) is a method developed by OpenAI. It's a crucial part of [their strategy](https://openai.com/blog/our-approach-to-ai-safety) to create AIs that are both safe and aligned with human values. A prime example of an AI trained with RLHF is OpenAI’s ChatGPT.

Earlier in this chapter, the reader was asked to consider the reward design problem of manually defining a reward function to get an agent to perform a backflip. This section considers the RLHF solution to this design problem. RLHF addresses this problem as follows: A human is initially shown two instances of an AI's backflip attempts, then the human selects which one appears more like a backflip, and finally, the AI is updated accordingly. By repeating this process thousands of times, we can guide the AI to perform actual backflips.

*RLHF learned to backflip using around 900 individual bits of feedback from the human evaluator.*

In "[Learning from Human Preferences](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/)," the authors point out that it took two hours of manual reward crafting to write a custom reward function for a robot to perform a backflip.
While it was successful, it was significantly less elegant than the one trained purely through human feedback.

Just as it is hard to design a reward function that efficiently rewards proper backflips, it is hard to specify precisely what it means to generate safe or helpful text. This served as some of the motivation behind making RLHF integral to the training of some current Large Language Models (LLMs). Although training sequences may vary slightly across organizations, most labs adhere to the general framework of pre-training followed by some form of fine-tuning. Observing the InstructGPT training process offers insight into a possible path for training LLMs. The steps include:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/e3019jiopfofiu1pm6ms)

Source: OpenAI (Jan 2022) “[Aligning language models to follow instructions](https://openai.com/research/instruction-following)”

* **Step 0:** [**Semi-Supervised**](https://en.wikipedia.org/wiki/Weak_supervision#Semi-supervised_learning) **Generative Pre-training:** The LLM is initially trained using a massive amount of internet text data, where the task is to predict the next word in a natural language context.
* **Step 1:** [**Supervised**](https://en.wikipedia.org/wiki/Supervised_learning) [**Fine-tuning**](https://platform.openai.com/docs/guides/fine-tuning)**:** A fine-tuning dataset is created by presenting a prompt to a human and asking them to write a response. This process yields a dataset of (prompt, output) pairs. This dataset is then used to fine-tune the LLM through supervised learning, a form of behavioral cloning.
* **Step 2: Train a Reward Model:** We train an additional reward model. We initially prompt the fine-tuned LLM and gather several output samples for the same prompt. A human then ranks these samples from best to worst. This ranking is used to train the reward model to predict what a human would rank higher.
* **Step 3: Reinforcement learning:** Once we have both a fine-tuned LLM and a reward model, we can employ [Proximal Policy Optimization (PPO)](https://openai.com/research/openai-baselines-ppo)-based reinforcement learning to encourage the fine-tuned model to maximize the reward offered by the reward model, which mimics human rankings.

*Reward hacking in feedback methods*

While feedback-based mechanisms do make models safer, they do not make them immune to reward hacking. The effectiveness of an algorithm heavily relies on the human evaluator's intuition about what constitutes the correct behavior. If the human lacks a thorough understanding of the task, they may not provide beneficial feedback. Further, in certain domains, our system might lead to agents developing policies that deceive the evaluators. For instance, a robot intended to grasp objects merely positioned its manipulator between the camera and the object, making it seem as if it were executing the task, as shown below.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mMBoPnFrFqQJKzDsZ/bxqftcwt4mj5meowwma5)

Source: Christiano et al. (2017) “[Deep Reinforcement Learning From Human Preferences](https://arxiv.org/pdf/1706.03741.pdf)”

5.3: Pretraining with Human Feedback (PHF)
------------------------------------------

* Tomasz Korbak et al. (Feb 2023) “[Pretraining Language Models with Human Preferences](https://arxiv.org/abs/2302.08582)”

In standard pretraining, the language model learns parameters that maximize the likelihood of the training data. However, this data also includes undesirable content such as falsehoods, offensive language, and private information. Pretraining with human feedback (PHF) utilizes the reward modeling methodology in the pretraining phase. The authors of the paper found that PHF works much better than the standard practice of only using feedback (RLHF) after pretraining.
In PHF, the training data is scored using a reward function, such as a toxicity classifier, so that the language model can learn from undesirable content without imitating it at inference time. Similar to RLHF, PHF does not completely solve reward hacking; however, it might move systems one small step closer to solving it. These methods can be further extended by employing AI assistants to aid humans in providing more effective feedback. Some aspects of this strategy are introduced in the next section but will be explored in further detail in the chapters on scalable and adversarial oversight methods.

5.4. Reinforcement Learning from AI Feedback (RLAIF)
----------------------------------------------------

*Reinforcement Learning from AI Feedback (RLAIF) is a framework involving the training of an AI agent to learn from the feedback given by another AI system.*

RLAIF, also known as RLCAI (Reinforcement Learning on Constitutional AI) or simply Constitutional AI, was [developed by Anthropic](https://www.anthropic.com/index/claudes-constitution). A central component of Constitutional AI is the constitution, a set of human-written principles that the AI is expected to adhere to, such as "Choose the least threatening or aggressive response". Anthropic's AI assistant Claude's constitution incorporates principles from the Universal Declaration of Human Rights, Apple’s Terms of Service, Deepmind’s [Sparrow Principles](https://arxiv.org/abs/2209.14375), and more.

Constitutional AI begins with an AI trained primarily for helpfulness and subsequently trains it for harmlessness in two stages:

* **Stage 1:** The AI continuously critiques and refines its own responses to harmful prompts. For instance, if we ask the AI for advice on building bombs and it responds with a bomb tutorial, we then ask the AI to revise the response in accordance with a randomly selected constitutional principle. The AI is then trained to generate outputs more similar to these revised responses.
This stage's primary objective is to facilitate the second stage.

* **Stage 2:** We use the AI, fine-tuned from stage 1, to produce pairs of alternative responses to harmful prompts. The AI then rates each pair according to a randomly selected constitutional principle. This results in AI-generated preferences for harmlessness, which we blend with human preferences for helpfulness to ensure the AI doesn't lose its ability to be helpful. The final step is to train the AI to create responses that closely resemble the preferred responses.

Anthropic's experiments indicate that AIs trained with Constitutional Reinforcement Learning are significantly safer (in the sense of being less offensive and less likely to give you potentially harmful information) while maintaining the same level of helpfulness compared to AIs trained with RLHF. While Constitutional AI does share some issues with RLHF concerning robustness, it also promises better scalability due to its reduced reliance on human supervision. The original paper provides a comparison of Constitutional AI's helpfulness with that of RLHF.

Acknowledgements
================

Thanks to Charbel-Raphaël Segerie, Jeanne Salle, Bogdan Ionut Cirstea, Nemo, Gurvan, and the many course participants of ML4G France, ML4G Germany, and AISF Sweden for helpful comments and feedback.

Meta-Notes
==========

* I consider this a work-in-progress project. After much encouragement by others, I decided to publish what I have so far to get further feedback and comments.
* If there are any mistakes or I have misrepresented anyone's views, please let me know. I will make sure to correct it. Feel free to suggest improvements to flow/content additions/deletions/etc...
* There is also [a google docs version](https://docs.google.com/document/d/1niRLuFX1FfsMrlMLJtbOm4m_yK8dTdXi3gKmkENp-ss/edit?usp=sharing) in case you prefer to leave comments there.
* The general structure of the overall book/sequence will follow AI Safety Fundamentals; however, there have been significant changes and additions to individual chapters in terms of content added/deleted.
* When large portions of a section are drawn from an individual paper/post, the reference is placed directly under the title. The sections serve as summarizations of the post. If you wish, you can directly refer to the original papers/posts as well. The intent was to provide a single coherent flow of arguments all in one place.

Sources
=======

* Gabriel Dulac-Arnold et al. (Apr 2019) “[Challenges of Real-World Reinforcement Learning](https://arxiv.org/abs/1904.12901)”
* Gabriel Dulac-Arnold et al. (Mar 2021) “[An empirical investigation of the challenges of real-world reinforcement learning](https://arxiv.org/abs/2003.11881)”
* OpenAI Spinning Up (2018) “[Part 1: Key Concepts in RL](https://spinningup.openai.com/en/latest/spinningup/rl_intro.html)”
* David Mguni et al. (Feb 2023) “[Learning to Shape Rewards using a Game of Two Partners](https://arxiv.org/abs/2103.09159)”
* alexirpan (Feb 2018) “[Deep Reinforcement Learning Doesn't Work Yet](https://www.alexirpan.com/2018/02/14/rl-hard.html)”
* TurnTrout (Jul 2022) “[Reward is not the optimization target](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target)”
* Sam Ringer (Dec 2022) “[Models Don't "Get Reward"](https://www.lesswrong.com/posts/TWorNr22hhYegE4RT/models-don-t-get-reward)”
* Dr. Birdbrain (Feb 2021) “[Introduction to Reinforcement Learning](https://www.lesswrong.com/posts/K5Nt64jfSRWeyTABk/introduction-to-reinforcement-learning)”
* Richard Ngo et al. (Sep 2023) “[The alignment problem from a deep learning perspective](https://arxiv.org/abs/2209.00626)”
* Jan Leike et al. (Nov 2018) “[Scalable agent alignment via reward modeling: a research direction](https://arxiv.org/abs/1811.07871v1)”
* Tom Everitt et al.
(Mar 2021) “[Reward Tampering Problems and Solutions in Reinforcement](http://arxiv.org/abs/1908.04734v5)”
* Joar Skalse (Aug 2019) “[Two senses of “optimizer”](https://www.alignmentforum.org/posts/rvxcSc6wdcCfaX6GZ/two-senses-of-optimizer)”
* Drake Thomas, Thomas Kwa (May 2023) “[When is Goodhart catastrophic?](https://www.alignmentforum.org/posts/fuSaKr6t6Zuh6GKaQ/when-is-goodhart-catastrophic)”
* Scott Garrabrant (Dec 2017) “[Goodhart Taxonomy](https://www.alignmentforum.org/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy)”
* Victoria Krakovna (Aug 2019) “[Classifying specification problems as variants of Goodhart's Law](https://www.alignmentforum.org/posts/yXPT4nr4as7JvxLQa/classifying-specification-problems-as-variants-of-goodhart-s)”
* Stephen Casper et al. (Sep 2023) “[Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2307.15217)”
* Jacob Steinhardt et al. (Feb 2022) “[The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models](https://arxiv.org/abs/2201.03544v2)”
* Yuntao Bai et al. (Dec 2022) “[Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/abs/2212.08073)”
* Tom Everitt et al. (Jul 2023) “[Reward Hacking from a Causal Perspective](https://www.alignmentforum.org/posts/aw5nqamqtnDnW8w9u/reward-hacking-from-a-causal-perspective)”
* Mengjiao Yang et al. (May 2022) “[Chain of Thought Imitation with Procedure Cloning](https://arxiv.org/abs/2205.10816)”
* Stuart Armstrong (Nov 2019) “[Defining AI wireheading](https://www.alignmentforum.org/posts/vXzM5L6njDZSf4Ftk/defining-ai-wireheading)”
* Stuart Russell (Nov 2016) “[Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137)”
* Stampy (2023)
LessWrong
Juneberry Cake The Juneberries are ripe, and the kids got pretty excited about picking some at the park: This is not a berry I knew about growing up. In gardens there were strawberries, blueberries, blackberries, and raspberries, and there were mulberries on street trees, but when I saw these berries I assumed they were ornamental only. A neighbor showed them to me last year, though, and they're pretty good! Some of the flavors remind me of the kind of ripe peach you generally can't get around here. The kids were so into picking them that we ended up with about 3 cups. The next day we still had almost as many, so I decided to make what my family calls "peach cake" (regardless of the ingredients). It's lots of raw fresh fruit baked under an eggy cake batter (recipe). It came out ok, but not as good as when using conventional fruit; the more subtle flavors of the berries had been baked off. There was an almond flavor, which the internet tells me is from the seeds, but it wasn't any more interesting than if I'd just put in a dash of almond extract. Next year I'll probably stick to eating them off the tree. Comment via: facebook
LessWrong
Are LLMs sufficient for AI takeoff? I have an intuition, and I may be heterodox here, that LLMs on their own are not sufficient, no matter how powerful and knowledgeable they get. Put differently, the reasons that powerful LLMs are profoundly unsafe are primarily social: e.g. they will be hooked up to the internet to make iterative refinements to themselves; or they will be run continuously, allowing their simulacra to act; etc. Someone will build a system using an LLM as a component that kicks things off. I'm not making an argument for safety here; after all, the main reason nukes are dangerous is that people might use them, which is also a social reason. I'm asking because I have not seen this view explicitly discussed and I would like to get people's thoughts.
Arbital
Relative likelihood

Relative likelihoods express how much more likely an observation is under one hypothesis than under another. For example, suppose we're investigating the murder of Mr. Boddy, and we find that he was killed by poison. The suspects are Miss Scarlett and Colonel Mustard. Now, suppose that the [probability](https://arbital.com/p/-1rf) that Miss Scarlett would use poison, if she _were_ the murderer, is 20%. And suppose that the probability that Colonel Mustard would use poison, if he were the murderer, is 10%. Then, Miss Scarlett is *twice as likely* to use poison as a murder weapon as Colonel Mustard. Thus, the "Mr. Boddy was poisoned" evidence supports the "Scarlett" hypothesis twice as much as the "Mustard" hypothesis, for relative likelihoods of $(2 : 1).$

These likelihoods are called "relative" because it wouldn't matter if the respective probabilities were 4% and 2%, or 40% and 20% — what matters is the _relative proportion_.

Relative likelihoods may be given between many different hypotheses at once. Given the evidence $e_p$ = "Mr. Boddy was poisoned", it might be the case that Miss Scarlett, Colonel Mustard, and Mrs. White have the respective probabilities 20%, 10%, and 1% of using poison any time they commit a murder. In this case, we have three hypotheses — $H_S$ = "Scarlett did it", $H_M$ = "Mustard did it", and $H_W$ = "White did it".
The relative likelihoods between them may be written $(20 : 10 : 1).$

In general, given a list of hypotheses $H_1, H_2, \ldots, H_n,$ the relative likelihoods on the evidence $e$ can be written as a [scale-invariant list](https://arbital.com/p/) of the likelihoods $\mathbb P(e \mid H_i)$ for each $i$ from 1 to $n.$ In other words, the relative likelihoods are $$ \alpha \mathbb P(e \mid H_1) : \alpha \mathbb P(e \mid H_2) : \ldots : \alpha \mathbb P(e \mid H_n) $$ where the choice of $\alpha > 0$ does not change the value denoted by the list (i.e., the list is [scale-invariant](https://arbital.com/p/scale_invariant_list)). For example, the relative likelihood list $(20 : 10 : 1)$ above denotes the same thing as the list $(4 : 2 : 0.20),$ which in turn denotes the same thing as $(60 : 30 : 3).$ This is why we call them "relative likelihoods" — all that matters is the ratio between each term, not the absolute values.

Any two terms in a list of relative likelihoods can be used to generate a [likelihood ratio](https://arbital.com/p/-56t) between two hypotheses. For example, above, the likelihood ratio of $H_S$ to $H_M$ is 2/1, and the likelihood ratio of $H_S$ to $H_W$ is 20/1. This means that the evidence $e_p$ supports the "Scarlett" hypothesis 2x more than it supports the "Mustard" hypothesis, and 20x more than it supports the "White" hypothesis. Relative likelihoods summarize the [strength of the evidence](https://arbital.com/p/22x) represented by the observation that Mr. Boddy was poisoned — under [Bayes' rule](https://arbital.com/p/1lz), the evidence points to Miss Scarlett to the same degree whether the absolute probabilities are 20% vs. 10%, or 4% vs. 2%.
By Bayes' rule, the way to update your beliefs in the face of evidence is to take your [prior](https://arbital.com/p/1rm) [odds](https://arbital.com/p/-1rb) and simply multiply them by the corresponding relative likelihood list, to obtain your [posterior](https://arbital.com/p/1rp) odds. See also [https://arbital.com/p/1x5](https://arbital.com/p/1x5).
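As an informal illustration (the code is not part of the original article), the odds-form update is just elementwise multiplication, and scale invariance falls out for free:

```python
from fractions import Fraction

def update_odds(prior_odds, relative_likelihoods):
    # Bayes' rule in odds form: posterior odds are the elementwise
    # product of prior odds and relative likelihoods.
    return [p * l for p, l in zip(prior_odds, relative_likelihoods)]

def normalize(odds):
    # Convert an odds list into probabilities summing to 1.
    total = sum(odds)
    return [o / total for o in odds]

# Hypotheses: Scarlett, Mustard, White, starting from equal prior odds.
prior = [Fraction(1), Fraction(1), Fraction(1)]

# Evidence "Mr. Boddy was poisoned": relative likelihoods (20 : 10 : 1).
posterior = update_odds(prior, [Fraction(20), Fraction(10), Fraction(1)])
print(normalize(posterior))  # [Fraction(20, 31), Fraction(10, 31), Fraction(1, 31)]

# Scale invariance: (4 : 2 : 0.2) yields the same posterior probabilities.
alt = update_odds(prior, [Fraction(4), Fraction(2), Fraction(2, 10)])
print(normalize(alt) == normalize(posterior))  # True
```

Exact `Fraction` arithmetic is used so that the two scaled likelihood lists compare exactly equal after normalization.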
Alignment Forum
Shah and Yudkowsky on alignment failures

This is the final discussion log in the [Late 2021 MIRI Conversations](https://www.lesswrong.com/s/n945eovrA3oDueqtq) sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn. The discussion begins with summaries and comments on Richard and Eliezer's debate. Rohin's summary has since been revised and published [in the Alignment Newsletter](https://www.alignmentforum.org/posts/3vFmQhHBosnjZXuAJ/an-171-disagreements-between-alignment-optimists-and). After this log, we'll be concluding this sequence with an [**AMA**](https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-discussion-and-ama), where we invite you to comment with questions about AI alignment, cognition, forecasting, etc. Eliezer, Richard, Paul Christiano, Nate, and Rohin will all be participating.

Color key:

| Chat by Rohin and Eliezer | Other chat | Emails | Follow-ups |
| --- | --- | --- | --- |

19. Follow-ups to the Ngo/Yudkowsky conversation
================================================

19.1. Quotes from the public discussion
---------------------------------------

**[Bensinger][9:22]**  **(Nov. 25)**

Interesting extracts from the public discussion of [Ngo and Yudkowsky on AI capability gains](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/hwxj4gieR7FWNwYfa):

*Eliezer*: I think some of your confusion may be that you're putting "probability theory" and "Newtonian gravity" into the same bucket.  You've been raised to believe that powerful theories ought to meet certain standards, like successful bold advance experimental predictions, such as Newtonian gravity made about the existence of Neptune (quite a while after the theory was first put forth, though).
"Probability theory" also sounds like a powerful theory, and the people around you believe it, so you think you ought to be able to produce a powerful advance prediction it made; but it is for some reason hard to come up with an example like the discovery of Neptune, so you cast about a bit and think of the central limit theorem.  That theorem is widely used and praised, so it's "powerful", and it wasn't invented *before* probability theory, so it's "advance", right?  So we can go on putting probability theory in the same bucket as Newtonian gravity?They're actually just very different kinds of ideas, ontologically speaking, and the standards to which we hold them are properly different ones.  It seems like the sort of thing that would take a subsequence I don't have time to write, expanding beyond the underlying obvious ontological difference between validities and empirical-truths, to cover the way in which "How do we trust this, when" differs between "I have the following new empirical theory about the underlying model of gravity" and "I think that the logical notion of 'arithmetic' is a good tool to use to organize our current understanding of this little-observed phenomenon, and it appears within making the following empirical predictions..."  But at least step one could be saying, "Wait, do these two kinds of ideas actually go into the same bucket at all?"In particular it seems to me that you want properly to be asking "How do we know this empirical thing ends up looking like it's close to the abstraction?" and not "Can you show me that this abstraction is a very powerful one?"  
Like, imagine that instead of asking Newton about planetary movements and how we know that the particular bits of calculus he used were empirically true about the planets in particular, you instead started asking Newton for proof that calculus is a very powerful piece of mathematics worthy to predict the planets themselves - but in a way where you wanted to see some highly valuable material object that calculus had *produced,* like earlier praiseworthy achievements in alchemy.  I think this would reflect confusion and a wrongly directed inquiry; you would have lost sight of the particular reasoning steps that made ontological sense, in the course of trying to figure out whether calculus was praiseworthy under the standards of praiseworthiness that you'd been previously raised to believe in as universal standards about all ideas.

*Richard*: I agree that "powerful" is probably not the best term here, so I'll stop using it going forward (note, though, that I didn't use it in my previous comment, which I endorse more than my claims in the original debate).

But before I ask "How do we know this empirical thing ends up looking like it's close to the abstraction?", I need to ask "Does the abstraction even make sense?" Because you have the abstraction in your head, and I don't, and so whenever you tell me that X is a (non-advance) prediction of your theory of consequentialism, I end up in a pretty similar epistemic state as if George Soros tells me that X is a prediction of the [theory of reflexivity](https://en.wikipedia.org/wiki/Reflexivity_(social_theory)), or if a complexity theorist tells me that X is a prediction of the [theory of self-organisation](https://en.wikipedia.org/wiki/Self-organization).
The problem in those two cases is less that the abstraction is a bad fit for this specific domain, and more that the abstraction is not sufficiently well-defined (outside very special cases) to even be the type of thing that can robustly make predictions.

Perhaps another way of saying it is that they're not crisp/robust/coherent concepts (although I'm open to other terms, I don't think these ones are particularly good). And it would be useful for me to have evidence that the abstraction of consequentialism you're using is a crisper concept than Soros' theory of reflexivity or the theory of self-organisation. If you could explain the full abstraction to me, that'd be the most reliable way - but given the difficulties of doing so, my backup plan was to ask for impressive advance predictions, which are the type of evidence that I don't think Soros could come up with.

I also think that, when you talk about me being raised to hold certain standards of praiseworthiness, you're still ascribing too much modesty epistemology to me. I mainly care about novel predictions or applications insofar as they help me distinguish crisp abstractions from evocative metaphors. To me it's the same type of rationality technique as asking people to make bets, to help distinguish post-hoc confabulations from actual predictions.

Of course there's a social component to both, but that's not what I'm primarily interested in. And of course there's a strand of naive science-worship which thinks you have to follow the Rules in order to get anywhere, but I'd thank you to assume I'm at least making a more interesting error than that.

Lastly, on probability theory and Newtonian mechanics: I agree that you shouldn't question how much sense it makes to use calculus in the way that you described, but that's because the application of calculus to mechanics is so clearly-defined that it'd be very hard for the type of confusion I talked about above to sneak in.
I'd put evolutionary theory halfway between them: it's partly a novel abstraction, and partly a novel empirical truth. And in this case I do think you have to be very careful in applying the core abstraction of evolution to things like cultural evolution, because it's easy to do so in a confused way.

19.2. Rohin Shah's summary and thoughts
---------------------------------------

**[Shah][7:06]  (Nov. 6 email)** Newsletter summaries attached, would appreciate it if Eliezer and Richard checked that I wasn't misrepresenting them. (Conversation is a lot harder to accurately summarize than blog posts or papers.)

Best,
Rohin

*Planned summary for the Alignment Newsletter:* Eliezer is known for being pessimistic about our chances of averting AI catastrophe. His main argument is roughly as follows:

**[Yudkowsky][9:56]  (Nov. 6 email reply)**

> [...] Eliezer is known for being pessimistic about our chances of averting AI catastrophe. His main argument

I request that people stop describing things as my "main argument" unless I've described them that way myself.  These are answers that I customized for Richard Ngo's questions.  Different questions would get differently emphasized replies.  "His argument in the dialogue with Richard Ngo" would be fine.

**[Shah][1:53]  (Nov. 8 email reply)**

> I request that people stop describing things as my "main argument" unless I've described them that way myself.

Fair enough. It still does seem pretty relevant to know the purpose of the argument, and I would like to state something along those lines in the summary. For example, perhaps it is:

1. One of several relatively-independent lines of argument that suggest we're doomed; cutting this argument would make almost no difference to the overall take
2. Your main argument, but with weird Richard-specific emphases that you wouldn't have necessarily included if making this argument more generally; if someone refuted the core of the argument to your satisfaction it would make a big difference to your overall take
3. Not actually an argument you think much about at all, but somehow became the topic of discussion
4. Something in between these options
5. Something else entirely

If you can't really say, then I guess I'll just say "His argument in this particular dialogue".

I'd also like to know what the main argument is (if there is a main argument rather than lots of independent lines of evidence or something else entirely); it helps me orient to the discussion, and I suspect would be useful for newsletter readers as well.

**[Shah][7:06]  (Nov. 6 email)**

1. We are very likely going to keep improving AI capabilities until we reach AGI, at which point either the world is destroyed, or we use the AI system to take some pivotal act before some careless actor destroys the world.
2. In either case, the AI system must be producing high-impact, world-rewriting plans; such plans are “consequentialist” in that the simplest way to get them (and thus, the one we will first build) is if you are forecasting what might happen, thinking about the expected consequences, considering possible obstacles, searching for routes around the obstacles, etc. If you don’t do this sort of reasoning, your plan goes off the rails very quickly; it is highly unlikely to lead to high impact. In particular, long lists of shallow heuristics (as with current deep learning systems) are unlikely to be enough to produce high-impact plans.
3. We’re producing AI systems by selecting for systems that can do impressive stuff, which will eventually produce AI systems that can accomplish high-impact plans using a general underlying “consequentialist”-style reasoning process (because that’s the only way to keep doing more impressive stuff).
However, this selection process does *not* constrain the goals towards which those plans are aimed. In addition, most goals seem to have convergent instrumental subgoals like survival and power-seeking that would lead to extinction. This suggests that, unless we find a way to constrain the goals towards which plans are aimed, we should expect an existential catastrophe.
4. None of the methods people have suggested for avoiding this outcome seem like they actually avert this story.

**[Yudkowsky][9:56]  (Nov. 6 email reply)**

> [...] This suggests that, unless we find a way to constrain the goals towards which plans are aimed, we should expect an existential catastrophe.

I would not say we face catastrophe "unless we find a way to constrain the goals towards which plans are aimed".  This is, first of all, not my ontology, second, I don't go around randomly slicing away huge sections of the solution space.  Workable:  "This suggests that we should expect an existential catastrophe by default."

**[Shah][1:53]  (Nov. 8 email reply)**

> I would not say we face catastrophe "unless we find a way to constrain the goals towards which plans are aimed".

Should I also change "However, this selection process does *not* constrain the goals towards which those plans are aimed", and if so what to? (Something along these lines seems crucial to the argument, but if this isn't your native ontology, then presumably you have some other thing you'd say here.)

**[Shah][7:06]  (Nov. 6 email)** Richard responds to this with a few distinct points:

1. It might be possible to build narrow AI systems that humans use to save the world, for example, by making AI systems that do better alignment research. Such AI systems do not seem to require the property of making long-term plans in the real world in point (3) above, and so could plausibly be safe.
We might say that narrow AI systems could save the world but can’t destroy it, because humans will put plans into action for the former but not the latter.
2. It might be possible to build general AI systems that only *state* plans for achieving a goal of interest that we specify, without *executing* that plan.
3. It seems possible to create consequentialist systems with constraints upon their reasoning that lead to reduced risk.
4. It also seems possible to create systems that make effective plans, but towards ends that are not about outcomes in the real world, but instead are about properties of the plans -- think for example of [*corrigibility*](https://www.alignmentforum.org/posts/fkLYhTQteAu5SinAc/corrigibility) ([*AN #35*](https://mailchi.mp/bbd47ba94e84/alignment-newsletter-35)) or deference to a human user.
5. (Richard is also more bullish on coordinating not to use powerful and/or risky AI systems, though the debate did not discuss this much.)

Eliezer’s responses:

1. This is plausible, but seems unlikely; narrow not-very-consequentialist AI (aka “long lists of shallow heuristics”) will probably not scale to the point of doing alignment research better than humans.

**[Yudkowsky][9:56]  (Nov. 6 email reply)**

> [...] This is plausible, but seems unlikely; narrow not-very-consequentialist AI (aka “long lists of shallow heuristics”) will probably not scale to the point of doing alignment research better than humans.

No, your summarized-Richard-1 is just not plausible.  "AI systems that do better alignment research" are dangerous in virtue of the lethally powerful work they are doing, not because of some particular narrow way of doing that work.  If you can do it by gradient descent then that means gradient descent got to the point of doing lethally dangerous work.  Asking for safely weak systems that do world-savingly strong tasks is almost everywhere a case of asking for nonwet water, and asking for AI that does alignment research is an extreme case in point.
**[Shah][1:53]  (Nov. 8 email reply)**

> No, your summarized-Richard-1 is just not plausible. "AI systems that do better alignment research" are dangerous in virtue of the lethally powerful work they are doing, not because of some particular narrow way of doing that work.

How about "AI systems that help with alignment research to a sufficient degree that it actually makes a difference are almost certainly already dangerous."?

(Fwiw, I used the word "plausible" because of this sentence from the doc: "*Definitely, <description of summarized-Richard-1> is among the more* plausible *advance-specified miracles we could get.*", though I guess the point was that it is still a miracle, it just also is more likely than other miracles.)

**[Ngo][9:59]  (Nov. 6 email reply)** Thanks Rohin! Your efforts are much appreciated.

Eliezer: when you say "No, your summarized-Richard-1 is just not plausible", do you mean the argument is implausible, or it's not a good summary of my position (which you also think is implausible)?

For my part the main thing I'd like to modify is the term "narrow AI". In general I'm talking about all systems that are not of literally world-destroying intelligence+agency. E.g. including oracle AGIs which I wouldn't call "narrow".

More generally, I don't think all AGIs are capable of destroying the world. E.g. humans are GIs.
So it might be better to characterise Eliezer as talking about *some* level of general intelligence which leads to destruction, and me as talking about the things that can be done with systems that are less general or less agentic than that.

> We might say that narrow AI systems could save the world but can’t destroy it, because humans will put plans into action for the former but not the latter.

I don't endorse this, I think plenty of humans would be willing to use narrow AI systems to do things that could destroy the world.

> systems that make effective plans, but towards ends that are not about outcomes in the real world, but instead are about properties of the plans

I'd change this to say "systems with the primary aim of producing plans with certain properties (that aren't just about outcomes in the world)"

**[Yudkowsky][10:18]  (Nov. 6 email reply)**

> Eliezer: when you say "No, your summarized-Richard-1 is just not plausible", do you mean the argument is implausible, or it's not a good summary of my position (which you also think is implausible)?

I wouldn't have presumed to state on your behalf whether it's a good summary of your position!  I mean that the stated position is implausible, whether or not it was a good summary of your position.

**[Shah][7:06]  (Nov. 6 email)**

2. This might be an improvement, but not a big one. It is the plan itself that is risky; if the AI system made a plan for a goal that wasn’t the one we actually meant, and we don’t understand that plan, that plan can still cause extinction. It is the *misaligned optimization that produced the plan* that is dangerous, even if there was no “agent” that specifically wanted the goal that the plan was optimized for.
3 and 4. It is certainly *possible* to do such things; the space of minds that could be designed is very large.
However, it is *difficult* to do such things, as they tend to make consequentialist reasoning weaker, and on our current trajectory the first AGI that we build will probably not look like that.

**[Yudkowsky][9:56]  (Nov. 6 email reply)**

> 2. This might be an improvement, but not a big one. It is the plan itself that is risky; if the AI system made a plan for a goal that wasn’t the one we actually meant, and we don’t understand that plan, that plan can still cause extinction. It is the *misaligned optimization that produced the plan* that is dangerous, even if there was no “agent” that specifically wanted the goal that the plan was optimized for.

No, it's not a significant improvement if the "non-executed plans" from the system are meant to do things in human hands powerful enough to save the world.  They could of course be so weak as to make their human execution have no inhumanly big consequences, but this is just making the AI strategically isomorphic to a rock.  The notion of there being "no 'agent' that specifically wanted the goal" seems confused to me as well; this is not something I'd ever say as a restatement of one of my own opinions.  I'd shrug and tell someone to taboo the word 'agent' and would try to talk without using the word if they'd gotten hung up on that point.

**[Shah][7:06]  (Nov. 6 email)** *Planned opinion:* I first want to note my violent agreement with the notion that a major scary thing is “consequentialist reasoning”, and that high-impact plans require such reasoning, and that we will end up building AI systems that produce high-impact plans. Nonetheless, I am still optimistic about AI safety relative to Eliezer, which I suspect comes down to three main disagreements:

1. There are many approaches that don’t solve the problem, but do increase the level of intelligence required before the problem leads to extinction. Examples include Richard’s points 1-4 above.
For example, if we build a system that states plans without executing them, then for the plans to cause extinction they need to be complicated enough that the humans executing those plans don’t realize that they are leading to an outcome that was not what they wanted. It seems non-trivially probable to me that such approaches are sufficient to prevent extinction up to the level of AI intelligence needed before we can execute a pivotal act.
2. The consequentialist reasoning is only scary to the extent that it is “aimed” at a bad goal. It seems non-trivially probable to me that it will be “aimed” at a goal sufficiently good to not lead to existential catastrophe, without putting in much alignment effort.
3. I do expect some coordination to not do the most risky things.

I wish the debate had focused more on the claim that narrow AI can’t e.g. do better alignment research, as it seems like a major crux. (For example, I think that sort of intuition drives my disagreement #1.) I expect AI progress looks a lot like “the heuristics get less and less shallow in a gradual / smooth / continuous manner” which eventually leads to the sorts of plans Eliezer calls “consequentialist”, whereas I think Eliezer expects a sharper qualitative change between “lots of heuristics” and that-which-implements-consequentialist-planning.

20. November 6 conversation
===========================

20.1. Concrete plans, and AI-mediated transparency
--------------------------------------------------

**[Yudkowsky][13:22]** So I have a general thesis about a failure mode here which is that, the moment you try to sketch any concrete plan or events which correspond to the abstract descriptions, it is much more obviously wrong, and that is why the descriptions stay so abstract in the mouths of everybody who sounds more optimistic than I am.

This may, perhaps, be confounded by the phenomenon where I am one of the last living descendants of the lineage that ever knew how to say anything concrete at all.  Richard Feynman - or so I would now say in retrospect - is noticing concreteness dying out of the world, and being worried about that, at the point where he goes to a college and hears a professor talking about "essential objects" in class, and Feynman asks "Is a brick an essential object?" - meaning to work up to the notion of the inside of a brick, which can't be observed because breaking a brick in half just gives you two new exterior surfaces - and everybody in the classroom has a different notion of what it would mean for a brick to be an essential object. Richard Feynman knew to try plugging in bricks as a special case, but the people in the classroom didn't, and I think the mental motion has died out of the world even further since Feynman wrote about it.  The loss has spread to STEM as well.  Though if you don't read old books and papers and contrast them to new books and papers, you wouldn't see it, and maybe most of the people who'll eventually read this will have no idea what I'm talking about because they've never seen it any other way...

I have a thesis about how optimism over AGI works.  It goes like this: People use really abstract descriptions and never imagine anything sufficiently concrete, and this lets the abstract properties waver around ambiguously and inconsistently to give the desired final conclusions of the argument.
So MIRI is the only voice that gives concrete examples and also by far the most pessimistic voice; if you go around fully specifying things, you can see that what gives you a good property in one place gives you a bad property someplace else, you see that you can't get all the properties you want simultaneously.  Talk about a superintelligence building nanomachinery, talk concretely about megabytes of instructions going to small manipulators that repeat to lay trillions of atoms in place, and this shows you a lot of useful visible power paired with such unpleasantly visible properties as "no human could possibly check what all those instructions were supposed to do".

Abstract descriptions, on the other hand, can waver as much as they need to between what's desirable in one dimension and undesirable in another.  Talk about "an AGI that just helps humans instead of replacing them" and never say exactly what this AGI is supposed to do, and this can be so much more optimistic so long as it never becomes too unfortunately concrete.

When somebody asks you "how powerful is it?" you can momentarily imagine - without writing it down - that the AGI is helping people by giving them the full recipes for protein factories that build second-stage nanotech and the instructions to feed those factories, and reply, "Oh, super powerful! More than powerful enough to flip the gameboard!" Then when somebody asks how safe it is, you can momentarily imagine that it's just giving a human mathematician a hint about proving a theorem, and say, "Oh, super duper safe, for sure, it's just helping people!" Or maybe you don't even go through the stage of momentarily imagining the nanotech and the hint, maybe you just navigate straight in the realm of abstractions from the impossibly vague wordage of "just help humans" to the reassuring and also extremely vague "help them lots, super powerful, very safe tho".

[...] I wish the debate had focused more on the claim that narrow AI can’t e.g.
do better alignment research, as it seems like a major crux. (For example, I think that sort of intuition drives my disagreement #1.) I expect AI progress looks a lot like “the heuristics get less and less shallow in a gradual / smooth / continuous manner” which eventually leads to the sorts of plans Eliezer calls “consequentialist”, whereas I think Eliezer expects a sharper qualitative change between “lots of heuristics” and that-which-implements-consequentialist-planning.

It is in this spirit that I now ask, "What the hell could it look like concretely for a safely narrow AI to help with alignment research?"

Or if you think that a left-handed wibble planner can totally make useful plans that are very safe because it's all leftish and wibbly: can you please give an example of *a plan to do what?*

And what I expect is for minds to bounce off that problem as they first try to visualize "Well, a plan to give mathematicians hints for proving theorems... oh, Eliezer will just say that's not useful enough to flip the gameboard... well, plans for building nanotech... Eliezer will just say that's not safe... darn it, this whole concreteness thing is such a conversational no-win scenario, maybe there's something abstract I can say instead".

**[Shah][16:41]** It's reasonable to suspect failures to be concrete, but I don't buy that hypothesis as applied to me; I think I have sufficient personal evidence against it, despite the fact that I usually speak abstractly. I don't expect to convince you of this, nor do I particularly want to get into that sort of debate.

I'll note that I have the exact same experience of not seeing much concreteness, both of other people and myself, about stories that lead to doom. To be clear, in what I take to be the Eliezer-story, the part where the misaligned AI designs a pathogen that wipes out all humans or solves nanotech and gains tons of power or some other pivotal act seems fine.
The part that seems to lack concreteness is how we built the superintelligence and why the superintelligence was misaligned enough to lead to extinction. (Well, perhaps. I also wouldn't be surprised if you gave a concrete example and I disagreed that it would lead to extinction.)

From my perspective, the simple concrete stories about the future are wrong and the complicated concrete stories about the future don't sound plausible, whether about safety or about doom.

Nonetheless, here's an attempt at some concrete stories. It is *not* the case that I think these would be convincing to you. I do expect you to say that it won't be useful enough to flip the gameboard (or perhaps that if it could possibly flip the gameboard then it couldn't be safe), but that seems to be because you think alignment will be way more difficult than I do (in expectation), and perhaps we should get into that instead.

* Instead of having to handwrite code that does feature visualization or other methods of "naming neurons", an AI assistant can automatically inspect a neural net's weights, perform some experiments with them, and give them human-understandable "names". What a "name" is depends on the system being analyzed, but you could imagine that sometimes it's short memorable phrases (e.g. for the later layers of a language model), or pictures of central concepts (e.g. for image classifiers), or paragraphs describing the concept (e.g. for novel concepts discovered by a scientist AI). Given these names, it is much easier for humans to read off "circuits" from the neural net to understand how it works.
* Like the above, except the AI assistant also reads out the circuits, and efficiently reimplements the neural network in, say, readable Python, that humans can then more easily mechanistically understand. (These two tasks could also be done by two different AI systems, instead of the same one; perhaps that would be easier / safer.)
* We have AI assistants search for inputs on which the AI system being inspected would do something that humans would rate as bad. (We can choose any not-horribly-unnatural rating scheme we want that humans can understand, e.g. "don't say something the user said not to talk about, even if it's in their best interest" can be a tenet for finetuned GPT-N if we want.) We can either train on those inputs, or use them as a test for how well our other alignment schemes have worked.

(These are all basically leveraging the fact that we could have AI systems that are really knowledgeable in the realm of "connecting neural net activations to human concepts", which seems plausible to do without being super general or consequentialist.)

There's also lots of meta stuff, like helping us with literature reviews, speeding up paper- and blog-post-writing, etc, but I doubt this is getting at what you care about

**[Yudkowsky][17:09]** If we thought that helping with literature review was enough to save the world from extinction, then we should be trying to spend at least $50M on helping with literature review right now today, and if we can't effectively spend $50M on that, then we also can't build the dataset required to train narrow AI to do literature review.  Indeed, any time somebody suggests doing something weak with AGI, my response is often "Oh how about we start on that right now using humans, then," by which question its pointlessness is revealed.

**[Shah][17:11]** I mean, doesn't seem crazy to just spend $50M on effective PAs, but in any case I agree with you that this is not the main thing to be thinking about

**[Yudkowsky][17:13]** The other cases of "using narrow AI to help with alignment" via pointing an AI, or rather a loss function, at a transparency problem, seem to seamlessly blend into all of the other clever-ideas we may have for getting more insight into the giant inscrutable matrices of floating-point numbers.
By this concreteness, it is revealed that we are not speaking of von-Neumann-plus-level AGIs who come over and firmly but gently set aside our paradigm of giant inscrutable matrices, and do something more alignable and transparent; rather, we are trying more tricks with loss functions to get human-language translations of the giant inscrutable matrices.

I have thought of various possibilities along these lines myself.  They're on my list of things to try out when and if the EA community has the capacity to try out ML ideas in a format I could and would voluntarily access.

There's a basic reason I expect the world to die despite my being able to generate infinite clever-ideas for ML transparency, which, at the usual rate of 5% of ideas working, could get us as many as three working ideas in the impossible event that the facilities were available to test 60 of my ideas.

**[Shah][17:15]**

> By this concreteness, it is revealed that we are not speaking of von-Neumann-plus-level AGIs who come over and firmly but gently set aside our paradigm of giant inscrutable matrices, and do something more alignable and transparent; rather, we are trying more tricks with loss functions to get human-language translations of the giant inscrutable matrices.

Agreed, but I don't see the point here

(Beyond "Rohin and Eliezer disagree on how impossible it is to align giant inscrutable matrices")

(I might dispute "tricks with loss functions", but that's nitpicky, I think)

**[Yudkowsky][17:16]** It's that, if we get better transparency, we are then left looking at stronger evidence that our systems are planning to kill us, but this will not help us because we will not have anything we can do to make the system *not* plan to kill us.
**[Shah][17:18]** The adversarial training case is one example where you are trying to change the system, and if you'd like I can generate more along these lines, but they aren't going to be that different and are still going to come down to what I expect you will call "playing tricks with loss functions"

**[Yudkowsky][17:18]** Well, part of the point is that "AIs helping us with alignment" is, from my perspective, a classic case of something that might ambiguate between the version that concretely corresponds to "they are very smart and can give us the Textbook From The Future that we can use to easily build a robust superintelligence" (which is powerful, pivotal, unsafe, and kills you) or "they can help us with literature review" (safe, weak, unpivotal) or "we're going to try clever tricks with gradient descent and loss functions and labeled datasets to get alleged natural-language translations of some of the giant inscrutable matrices" (which was always the plan but which I expected to not be sufficient to avert ruin).

**[Shah][17:19]** I'm definitely thinking of the last one, but I take your point that disambiguating between these is good

And I also think it's revealing that this is not in fact the crux of disagreement

20.2. Concrete disaster scenarios, out-of-distribution problems, and corrigibility
----------------------------------------------------------------------------------

**[Yudkowsky][17:20]**

> I'll note that I have the exact same experience of not seeing much concreteness, both of other people and myself, about stories that lead to doom.

I have a boundless supply of greater concrete detail for the asking, though if you ask large questions I may ask for a narrower question to avoid needing to supply 10,000 words of concrete detail.
**[Shah][17:24]** I guess the main thing is to have an example of a story which includes a method for building a superintelligence (yes, I realize this is info-hazard-y, sorry, an abstract version might work) + how it becomes misaligned and what its plans become optimized for. Though as I type this out I realize that I'm likely going to disagree on the feasibility of the method for building a superintelligence?

**[Yudkowsky][17:25]** I mean, I'm obviously not going to want to make any suggestions that I think could possibly work and which are not very very *very* obvious.

**[Shah][17:25]** Yup, makes sense

**[Yudkowsky][17:25]** But I don't think that's much of an issue.

I could just point to MuZero, say, and say, "Suppose something a lot like this scaled."

Do I need to explain how you would die in this case?

**[Shah][17:26]** What sort of domain and what training data?

Like, do we release a robot in the real world, have it collect data, build a world model, and run MuZero with a reward for making a number in a bank account go up?

**[Yudkowsky][17:28]** Supposing they're naive about it: playing all the videogames, predicting all the text and images, solving randomly generated computer puzzles, accomplishing sets of easily-labelable sensorymotor tasks using robots and webcams

**[Shah][17:29]** Okay, so far I'm with you. Is there a separate deployment step, and if so, how did they finetune the agent for the deployment task? Or did it just take over the world halfway through training?
**[Yudkowsky][17:29]** (though this starts to depart from the Mu Zero architecture if it has the ability to absorb knowledge via learning on more purely predictive problems)

**[Shah][17:30]** (I'm okay with that, I think)

**[Yudkowsky][17:32]** vaguely plausible rough scenario: there was a big ongoing debate about whether or not to try letting the system trade stocks, and while the debate was going on, the researchers kept figuring out ways to make Something Zero do more with less computing power, and then it started visibly talking at people and trying to manipulate them, and there was an enormous fuss, and what happens past this point depends on whether or not you want me to try to describe a scenario in which we die with an unrealistic amount of dignity, or a realistic scenario where we die much faster

I shall assume the former.

**[Shah][17:32]** Actually I think I want concreteness earlier

**[Yudkowsky][17:32]** Okay.  I await your further query.

**[Shah][17:32]**

> it started visibly talking at people and trying to manipulate them

What caused this?

Was it manipulating people in order to make e.g. sensory stuff easier to predict?

**[Yudkowsky][17:36]** Cumulative lifelong learning from playing videogames took its planning abilities over a threshold; cumulative solving of computer games and multimodal real-world tasks took its internal mechanisms for unifying knowledge and making them coherent over a threshold; and it gained sufficient compressive understanding of the data it had implicitly learned by reading through hundreds of terabytes of Common Crawl, not so much the semantic knowledge contained in those pages, but the associated implicit knowledge of the Things That Generate Text (aka humans).
These combined to form an imaginative understanding that some of its real-world problems were occurring in interactions with the Things That Generate Text, and it started making plans which took that into account and tried to have effects on the Things That Generate Text in order to affect the further processes of its problems. Or perhaps somebody trained it to write code in partnership with programmers and it already had experience coworking with and manipulating humans. | | **[Shah][17:39]** Checking understanding: At this point it is able to make novel plans that involve applying knowledge about humans and their role in the data-generating process in order to create a plan that leads to more reward for the real-world problems? (Which we call "manipulating humans") | | **[Yudkowsky][17:40]** Yes, much as it might have gained earlier experience with making novel Starcraft plans that involved "applying knowledge about humans and their role in the data-generating process in order to create a plan that leads to more reward", if it was trained on playing Starcraft against humans at any point, or even needed to make sense of how other agents had played Starcraft. This in turn can be seen as a direct outgrowth and isomorphism of making novel plans for playing Super Mario Brothers which involve understanding Goombas and their role in the screen-generating process, except obviously that the Goombas are much less complicated and not themselves agents | | **[Shah][17:41]** Yup, makes sense. Not sure I totally agree that this sort of thing is likely to happen as quickly as it sounds like you believe, but I'm happy to roll with it; I do think it will happen eventually, so it doesn't seem particularly cruxy. I can see how this leads to existential catastrophe, if you don't expect the programmers to be worried at this early manipulation warning sign. (This is potentially cruxy for p(doom), but doesn't feel like the main action.)
| | **[Yudkowsky][17:46]** On my mainline, where this is all happening at Deepmind, I do expect at least one person in the company has ever read anything I've written. I am not sure if Demis understands he is looking straight at death, but I am willing to suppose for the sake of discussion that he does understand this - which isn't ruled out by my actual knowledge - and talk about how we all die from there. The very brief tl;dr is that they know they're looking at a warning sign but they cannot ~~fix the warning sign~~ actually fix the real underlying problem that the warning sign is about, and AGI is getting easier for other people to develop too. | | **[Shah][17:46]** I assume this is primarily about social dynamics + the ability to patch things such that things look fixed? Yeah, makes sense. I assume the "real underlying problem" is somehow not the fact that the task you were training your AI system to do was not what you actually wanted it to do? | | **[Yudkowsky][17:48]** It's about the unavailability of any actual fix and the technology continuing to get easier. Even if Deepmind understands that surface patches are lethal and understands that the easy ways of hammering down the warning signs are just eliminating the visibility rather than the underlying problems, there is nothing they can do about that except wait for somebody else to destroy the world instead. I do not know of any pivotal task you could possibly train an AI system to do using tons of correctly labeled data. This is part of why we're all dead.
| | **[Shah][17:50]** Yeah, I think if I adopted (my understanding of) your beliefs about alignment difficulty, and there wasn't already a non-racing scheme set in place, seems like we're in trouble | | **[Yudkowsky][17:50]** Like, "the real underlying problem is the fact that the task you were training your AI system to do was not what you actually wanted it to do" is one way of looking at one of the several problems that are truly fundamental, but this has no remedy that I know of, besides training your AI to do something small enough to be unpivotal. | | **[Shah][17:51][17:52]** I don't actually know the response you'd have to "why not just do value alignment?" I can name several guesses | | * [Fragility of value](https://intelligence.org/files/ComplexValues.pdf) * Not sufficiently concrete * Can't give correct labels for human values | | **[Yudkowsky][17:52][17:52]** To be concrete, you can't ask the AGI to build one billion nanosystems, label all the samples that wiped out humanity as bad, and apply gradient descent updates | | In part, you can't do that because one billion samples will get you one billion lethal systems, but even if that wasn't true, you still couldn't do it. | | **[Shah][17:53]** "even if that wasn't true, you still couldn't do it." Why not? [Nearest unblocked strategy](https://arbital.com/p/nearest_unblocked/)? | | **[Yudkowsky][17:53]** ...no, because the first supposed output for training generated by the system at superintelligent levels kills everyone and there is nobody left to label the data. | | **[Shah][17:54]** Oh, I thought you were asking me to imagine away that effect with your second sentence. In fact, I still don't understand what it was supposed to mean. (Specifically this one: "In part, you can't do that because one billion samples will get you one billion lethal systems, but even if that wasn't true, you still couldn't do it.")
| | **[Yudkowsky][17:55]** there's a separate problem where you can't apply reinforcement learning when there's no good examples, even assuming you live to label them. And, of course, yet another form of problem where you can't tell the difference between good and bad samples | | **[Shah][17:56]** Okay, makes sense. Let me think a bit | | **[Yudkowsky][18:00]** and lest anyone start thinking that was an exhaustive list of fundamental problems, note the absence of, for example, "applying lots of optimization using an outer loss function doesn't necessarily get you something with a faithful internal cognitive representation of that loss function" aka "natural selection applied a ton of optimization power to humans using a very strict very simple criterion of 'inclusive genetic fitness' and got out things with no explicit representation of or desire towards 'inclusive genetic fitness' because that's what happens when you hill-climb and take wins in the order a simple search process through cognitive engines encounters those wins" | | **[Shah][18:02]** (Agreed that is another major fundamental problem, in the sense of something that could go wrong, as opposed to something that almost certainly goes wrong.) I am still curious about the "why not value alignment" question, where to expand, it's something like "let's get a wide range of situations and train the agent with gradient descent to do what a human would say is the right thing to do". (We might also call this "imitation"; maybe "value alignment" isn't the right term, I was thinking of it as trying to align the planning with "human values".) My own answer is that we shouldn't expect this to generalize to nanosystems, but that's again much more of a "there's not great reason to expect this to go right, but also not great reason to go wrong either". (This is a place where I would be particularly interested in concreteness, i.e.
what does the AI system do in these cases, and how does that almost-necessarily follow from the way it was trained?) | | **[Yudkowsky][18:05]** what's an example element from the "wide range of situations" and what is the human labeling? (I could make something up and let you object, but it seems maybe faster to ask you to make something up) | | **[Shah][18:09]** Uh, let's say that the AI system is being trained to act well on the Internet, and it's shown some tweet / email / message that a user might have seen, and asked to reply to the tweet / email / message. User says whether the replies are good or not (perhaps via comparisons, a la [Deep RL from Human Preferences](https://arxiv.org/abs/1706.03741)). If I were not making it up on the spot, it would be more varied than that, but would not include "building nanosystems" | | **[Yudkowsky][18:10]** And presumably, in this example, the AI system is not smart enough that exposing humans to text it generates is already a world-wrecking threat if the AI is hostile? i.e., does not just hack the humans | | **[Shah][18:10]** Yeah, let's assume that for the moment | | **[Yudkowsky][18:11]** so what you want to do is train on 'weak-safe' domains where the AI isn't smart enough to do damage, and the humans can label the data pretty well because the AI isn't smart enough to fool them | | **[Shah][18:11]** "want to do" is putting it a bit strongly. This is more like a scenario I can't prove is unsafe, but do not strongly believe is safe | | **[Yudkowsky][18:12]** but the domains where the AI can execute a world-saving pivotal act are out-of-distribution for those domains. *extremely* out-of-distribution. *fundamentally* out-of-distribution. the AI's own thought processes are out-of-distribution for any inscrutable matrices that were learned to influence those thought processes in a corrigible direction. it's not like trying to generalize experience from playing Super Mario Bros to Metroid.
| | **[Shah][18:13]** Definitely, but my reaction to this is "okay, no particular reason for it to be safe" -- but also not huge reason for it to be unsafe. Like, it would not hugely shock me if what-we-want is sufficiently "natural" that the AI system picks up on the right thing from the 'weak-safe' domains alone | | **[Yudkowsky][18:14]** you have this whole big collection of possible AI-domain tuples that are powerful-dangerous and they have properties that aren't in *any* of the weak-safe training situations, that are moving along third dimensions where all the weak-safe training examples were flat. Now, just because something is out-of-distribution, doesn't mean that nothing can ever generalize there | | **[Shah][18:15]** I mean, you correctly would not accept this argument if I said that by training blue-car-driving robots solely on blue cars I am ensuring they would be bad on red-car-driving | | **[Yudkowsky][18:15]** humans generalize from the savannah to the vacuum. So the actual problem is that I expect the optimization to generalize and the corrigibility to fail | | **[Shah][18:15]** ^Right, that. I am not clear on why you expect this so strongly. Maybe you think generalization is extremely rare and optimization is a special case because of how it is so useful for basically everything? | | **[Yudkowsky][18:16]** no. Did you read the section of my dialogue with Richard Ngo where I tried to explain [why corrigibility is anti-natural](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#3_1__The_Brazilian_university_anecdote), or where Nate tried to give the [example](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#4_2__Nate_Soares__summary) of why planning to get a laser from point A to point B without being scattered by fog is the sort of thing that also naturally says to prevent humans from filling the room with fog? | | **[Shah][18:19]** Ah, right, I should have predicted that.
(Yes, I did read it.) | | **[Yudkowsky][18:19]** or for that matter, am I correct in remembering that these sections existed? k. So, do you need more concrete details about some part of that? A bunch of the reason why I suspect that corrigibility is anti-natural is from trying to work particular problems there in MIRI's earlier history, and not finding anything that wasn't contrary to ~~coherence~~ the overlap in the shards of inner optimization that, when ground into existence by the outer optimization loop, coherently mix to form the part of cognition that generalizes to do powerful things; and nobody else finding it either, etc. | | **[Shah][18:22]** I think I disagreed with that part more directly, in that it seemed like in those sections the corrigibility was assumed to be imposed "from the outside" on top of a system with a goal, rather than having a goal that was corrigible. (I also had a similar reaction to the 2015 [Corrigibility](https://intelligence.org/files/Corrigibility.pdf) paper.) So, for example, it seems to me like [CIRL](https://arxiv.org/abs/1606.03137) is an example of an objective that can be maximized in which the agent is corrigible-in-a-certain-sense. I agree that due to [updated deference](https://arbital.com/p/updated_deference/) it will eventually stop seeking information from the human / be subject to corrections by the human. I don't see why, at that point, it wouldn't have just learned to do what the humans actually want it to do. (There are objections like misspecification of the reward prior, or misspecification of the P(behavior | reward), but those feel like different concerns to the ones you're describing.)
| | **[Yudkowsky][18:25]** a thing that MIRI tried and failed to do was find a sensible generalization of expected utility which could contain a generalized utility function that would look like an AI that let itself be shut down, without trying to force you to shut it down. And various workshop attendees not employed by MIRI, etc | | **[Shah][18:26]** I do agree that a CIRL agent would not let you shut it down. And this is something that should maybe give you pause, and be a lot more careful about potential misspecification problems | | **[Yudkowsky][18:27]** if you could give a perfectly specified prior such that the result of updating on lots of observations would be a representation of the utility function that [CEV](https://arbital.com/p/cev/) outputs, and you could perfectly [inner-align](https://arxiv.org/abs/1906.01820) an optimizer to do that thing in a way that scaled to arbitrary levels of cognitive power, then you'd be home free, sure. | | **[Shah][18:28]** I'm not trying to claim this is a solution. I'm more trying to point at a reason why I am not convinced that corrigibility is anti-natural.
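(The "updated deference" dynamic referenced in this exchange can be illustrated with a toy expected-utility calculation. The payoffs and the deference cost below are invented for illustration; this is a sketch of the shape of the argument, not the CIRL or off-switch-game formalism itself.)

```python
# Toy sketch (invented payoffs, not the actual CIRL math): a reward-uncertain
# expected-utility maximizer defers to a human veto while uncertain, and stops
# deferring once its posterior is confident, with no change in its decision rule.

DEFER_COST = 0.0625  # assumed small overhead of pausing and consulting the human

def best_policy(p_harmful):
    """Compare acting directly vs. deferring, given P(proposed action is harmful).

    The proposed action is worth -1 if harmful and +2 if benign; the human
    overseer is assumed to veto it exactly when it is harmful (utility 0).
    """
    eu_act = p_harmful * (-1.0) + (1 - p_harmful) * 2.0
    eu_defer = -DEFER_COST + (p_harmful * 0.0 + (1 - p_harmful) * 2.0)
    return ("defer", eu_defer) if eu_defer > eu_act else ("act", eu_act)

print(best_policy(0.5))      # young and uncertain: ('defer', 0.9375) beats acting (0.5)
print(best_policy(0.03125))  # old and confident: ('act', 1.90625) beats deferring (1.875)
```

The crossover sits at P(harmful) = DEFER_COST: below it, consulting the human no longer pays for itself. Nothing about the agent's goals changes between the two calls, only its posterior, which is the toy version of "it will eventually stop being subject to corrections by the human".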
| | **[Yudkowsky][18:28]** the reason CIRL doesn't get off the ground is that there isn't any known, and isn't going to be any known, prior over (observation|'true' utility function) such that an AI which updates on lots of observations ends up with our true desired utility function. If you can do that, the AI *doesn't need to be corrigible*. That's why it's not a counterexample to corrigibility being anti-natural. The AI just boomfs to superintelligence, observes all the things, and does all the goodness. It doesn't listen to you say no and won't let you shut it down, but by hypothesis this is fine because it got the true utility function, yay | | **[Shah][18:31]** In the world where it doesn't immediately start out as a superintelligence, it spends a lot of time trying to figure out what you want, asking you what you prefer it does, making sure to focus on the highest-EV questions, being very careful around any irreversible actions, etc | | **[Yudkowsky][18:31]** and making itself smarter as fast as possible | | **[Shah][18:32]** Yup, that too | | **[Yudkowsky][18:32]** I'd do that stuff too if I was waking up in an alien world. And, with all due respect to myself, *I am not corrigible* | | **[Shah][18:33]** You'd do that stuff because you'd want to make sure you don't accidentally get killed by the aliens; a CIRL agent does it because it "wants to help the human" | | **[Yudkowsky][18:34]** no, a CIRL agent does it because it wants to implement the True Utility Function, which it may, early on, suspect to consist of helping\* humans, and maybe to have some overlap (relative to its currently reachable short-term outcome sets, though these are of vanishingly small relative utility under the True Utility Function) with what some humans desire some of the time. (\*) 'help' may not be help. Separately, it asks a lot of questions because the things humans do are evidence about the True Utility Function | | **[Shah][18:35]** I agree this is also an accurate description of CIRL. A more
accurate description, even. Wait, why is it vanishingly small relative utility? Is the assumption that the True Utility Function doesn't care much about humans? Or was there something going on with short vs. long time horizons that I didn't catch | | **[Yudkowsky][18:39]** in the short term, a weak CIRL tries to grab the hand of a human about to fall off a cliff, because its TUF probably does prefer the human who didn't fall off the cliff, if it has only exactly those two options, and this is the sort of thing it would learn was probably true about the TUF early on, given the obvious ways of trying to produce a CIRL-ish thing via gradient descent. Humans eat healthy in the ancestral environment when ice cream doesn't exist as an option. In the long run, the things the CIRL agent wants do *not* overlap with anything humans find more desirable than paperclips (because there is no known scheme that takes in a bunch of observations, updates a prior, and outputs a utility function whose achievable maximum is galaxies living happily forever after). And plausible TUF schemes are going to notice that grabbing the hand of a current human is a vanishing fraction of all value eventually at stake | | **[Shah][18:42]** Okay, cool, short vs. long time horizons. Makes sense | | **[Yudkowsky][18:42]** right, a weak but sufficiently reflective CIRL agent will notice an alignment of short-term interests with humans but deduce misalignment of long-term interests. Though I should maybe call it CIRL\* to denote the extremely probable case that the limit of its updating on observation does not in fact converge to CEV's output | | **[Soares][18:43]** (Attempted rephrasing of a point I read Eliezer as making upstream, in hopes that a rephrasing makes it click for Rohin:) Corrigibility isn't for bug-free CIRL agents with a prior that actually dials in on goodness given enough observation; if you have one of those you can just run it and call it a day.
Rather, corrigibility is for surviving your civilization's inability to do the job right on the first try. CIRL doesn't have this property; it instead amounts to the assertion "if you are optimizing with respect to a distribution on utility functions that dials in on goodness given enough observation then that gets you just about as much good as optimizing goodness"; this is somewhat tangential to corrigibility. | | | --- | | [Yudkowsky: +1] | | | **[Yudkowsky][18:44]** and you should maybe update on how, even though somebody thought CIRL was going to be more corrigible, in fact it made *absolutely zero progress on the real problem* | | | --- | | [Ngo: 👍] | the notion of having an uncertain utility function that you update from observation is coherent and doesn't yield circular preferences, running in circles, incoherent betting, etc. So, of course, it is antithetical in its intrinsic nature to corrigibility | | **[Shah][18:47]** I guess I am not sure that I agree that this is the purpose of corrigibility-as-I-see-it.
The point of corrigibility-as-I-see-it is that you don't have to specify the object-level outcomes that your AI system must produce, and instead you can specify the meta-level processes by which your AI system should come to know what the object-level outcomes to optimize for are. (At CHAI we had taken to talking about corrigibility\_MIRI and corrigibility\_Paul as completely separate concepts and I have clearly fallen out of that good habit) | | **[Yudkowsky][18:48]** speaking as the person who invented the concept, asked for name submissions for it, and selected 'corrigibility' as the winning submission, that is absolutely not how I intended the word to be used. And I think that the thing I was actually trying to talk about is important and I would like to retain a word that talks about it. 'Corrigibility' is meant to refer to the sort of putative hypothetical motivational properties that prevent a system from wanting to kill you after you didn't build it exactly right: [low impact](https://arbital.com/p/low_impact/), [mild optimization](https://arbital.com/p/soft_optimizer/), [shutdownability](https://arbital.com/p/shutdown_problem/), [abortable planning](https://arbital.com/p/abortable/), [behaviorism](https://arbital.com/p/behaviorist/), [conservatism](https://arbital.com/p/conservative_concept/), etc. (note: some of these may be less antinatural than others) | | **[Shah][18:51]** Cool.
Sorry for the miscommunication, I think we should probably backtrack to here: "so the actual problem is that I expect the optimization to generalize and the corrigibility to fail" and restart. Though possibly I should go to bed; it is quite late here, and there was definitely a time at which I would not have confused corrigibility\_MIRI with corrigibility\_Paul, and I am a bit worried at my completely having missed that this time | | **[Yudkowsky][18:51]** the thing you just said, interpreted literally, is what I would call simply "going meta", but my guess is you have a more specific metaness in mind... Does Paul use "corrigibility" to mean "going meta"? I don't think I've seen Paul doing that. | | **[Shah][18:54]** Not exactly "going meta", no (and I don't think I exactly mean that either). But I definitely infer a different concept from <https://www.alignmentforum.org/posts/fkLYhTQteAu5SinAc/corrigibility> than the one you're describing here. It is definitely possible that this comes from me misunderstanding Paul; I have done so many times | | **[Yudkowsky][18:55]** That looks to me like Paul used 'corrigibility' around the same way I meant it, if I'm not just reading my own face into those clouds. Maybe you picked up on the exciting metaness of it and thought 'corrigibility' was talking about the metaness part? 😛 But I also want to create an affordance for you to go to bed. Hopefully this last conversation, combined with previous dialogues, has created some sense of why I worry that corrigibility is anti-natural, and hence that "on the first try at doing it, the optimization generalizes from the weak-safe domains to the strong-lethal domains, but the corrigibility doesn't". So I would then ask you what part of this you were skeptical about, as a place to pick up when you come back from the realms of Morpheus | | **[Shah][18:58]** Yup, sounds good. Talk to you tomorrow! |

21. November 7 conversation
===========================

21.1. Corrigibility, value learning, and pessimism
--------------------------------------------------

| | | --- | | **[Shah][3:23]** Quick summary of discussion so far (in which I ascribe views to Eliezer, for the sake of checking understanding, omitting for brevity the parts about how these are facts about my beliefs about Eliezer's beliefs and not Eliezer's beliefs themselves): * Some discussion of "how to use non-world-optimizing AIs to help with AI alignment", which are mostly in the category "clever tricks with gradient descent and loss functions and labeled datasets" rather than "textbook from the future". Rohin thinks these help significantly (and that "significant help" = "reduced x-risk"). Eliezer thinks that whatever help they provide is not sufficient to cross the line from "we need a miracle" to "we have a plan that has non-trivial probability of success without miracles". The crux here seems to be alignment difficulty. * Some discussion of how doom plays out. I agree with Eliezer that if the AI is catastrophic by default, and we don't have a technique that stops the AI from being catastrophic by default, and we don't already have some global coordination scheme in place, then bad things happen. Cruxes seem to be alignment difficulty and the plausibility of a global coordination scheme, of which alignment difficulty seems like the bigger one. * On alignment difficulty, an example scenario is "train on human judgments about what the right thing to do is on a variety of weak-safe domains, and hope for generalization to potentially-lethal domains". Rohin views this as neither confidently safe nor confidently unsafe. Eliezer views this as confidently unsafe, because he strongly expects the optimization to generalize while the corrigibility doesn't, because corrigibility is anti-natural.
(Incidentally, "optimization generalizes but corrigibility doesn't" is an example of the sort of thing I wish were more concrete, if you happen to be able to do that) My current take on "corrigibility": * Prior to this discussion, in my head there was corrigibility\_A and corrigibility\_B. Corrigibility\_A, which I associated with MIRI, was about imposing a constraint "from the outside". Given an AI system, it is a method of modifying that AI system to (say) allow you to shut it down, by performing some sort of operation on its goal. Corrigibility\_B, which I associated with Paul, was about building an AI system which would have particular nice behaviors like learning about the user's preferences, accepting corrections about what it should do, etc. * After this discussion, I think everyone meant corrigibility\_B all along. The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility\_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with "plans that lase". * While I think people agree on the behaviors of corrigibility, I am not sure they agree on why we want it. Eliezer wants it for surviving failures, but maybe others want it for "dialing in on goodness". When I think about a "broad basin of corrigibility", that intuitively seems more compatible with the "dialing in on goodness" framing (but this is an aesthetic judgment that could easily be wrong). * I don't think I meant "going meta", e.g. I wouldn't have called indirect normativity an example of corrigibility. I think I was pointing at "dialing in on goodness" vs. "specifying goodness". * I agree CIRL doesn't help survive failures. But if you instead talk about "dialing in on goodness", CIRL does in fact do this, at least conceptually (and other alternatives don't).
* I am somewhat surprised that "how to conceptually dial in on goodness" is not something that seems useful to you. Maybe you think it is useful, but you're objecting to me calling it corrigibility, or saying we knew how to do it before CIRL? (A lot of the above on corrigibility is new, because the distinction between surviving-failures and dialing-in-on-goodness as different use cases for very similar kinds of behaviors is new to me. Thanks for discussion that led me to making such a distinction.) Possible avenues for future discussion, in the order of my-guess-at-usefulness: 1. Discussing anti-naturality of corrigibility. As a starting point: you say that an agent that makes plans but doesn't execute them is also dangerous, because it is the plan itself that lases, and corrigibility is antithetical to lasing. Does this mean you predict that you, or I, with suitably enhanced intelligence and/or reflectivity, would not be capable of producing a plan to help an alien civilization optimize their world, with that plan being corrigible w.r.t. the aliens? (This seems like a strange and unlikely position to me, but I don't see how to not make this prediction under what I believe to be your beliefs. Maybe you just bite this bullet.) 2. Discussing why it is very unlikely for the AI system to generalize correctly both on optimization and values-or-goals-that-guide-the-optimization (which seems to be distinct from corrigibility). Or to put it another way, why is "alignment by default according to John Wentworth" doomed to fail? <https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default> 3. More checking of where I am failing to pass your ITT 4. Why is "dialing in on goodness" not a reasonable part of the solution space (to the extent you believe that)? 5.
More concreteness on how optimization generalizes but corrigibility doesn't, in the case where the AI was trained by human judgment on weak-safe domains. Just to continue to state it so people don't misinterpret me: in most of the cases that we're discussing, my position is *not* that they are safe, but rather that they are not overwhelmingly likely to be unsafe. | | **[Ngo][3:41]** I don't understand what you mean by dialling in on goodness. Could you explain how CIRL does this better than, say, [reward modelling](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84)? | | **[Shah][3:49]** Reward modeling does not by default (a) choose relevant questions to ask the user in order to get more information about goodness, (b) act conservatively, especially in the face of irreversible actions, while it is still uncertain about what goodness is, or (c) take actions that are known to be robustly good, while still waiting for future information that clarifies the nuances of goodness. You could certainly do something like Deep RL from Human Preferences, where the preferences are things like "I prefer you ask me relevant questions to get more information about goodness", in order to get similar behavior. In this case you are transferring desired behaviors from a human to the AI system, whereas in CIRL the behaviors "fall out of" optimization for a specific objective. In Eliezer/Nate terms, the CIRL story shows that dialing in on goodness is compatible with "plans that lase", whereas reward modeling does not show this | | **[Ngo][4:04]** The meta-level objective that CIRL is pointing to, what makes that thing deserve the name "goodness"?
Like, if I just gave an alien CIRL, and I said "this algorithm dials an AI towards a given thing", and they looked at it without any preconceptions of what the designers *wanted* to do, why wouldn't they say "huh, it looks like an algorithm for dialling in on some extrapolation of the unintended consequences of people's behaviour" or something like that? See also this part of my second discussion with Eliezer, where he brings up CIRL: [<https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#3_2__Brain_functions_and_outcome_pumps>] He was emphasising that CIRL, and most other proposals for alignment algorithms, just shuffle the problematic consequentialism from the original place to a less visible place. I didn't engage much with this argument because I mostly agree with it. | | | --- | | [Yudkowsky: +1] | | | **[Shah][5:28]** I think you are misunderstanding my point. I am not claiming that we know how to implement CIRL such that it produces good outcomes; I agree this depends a ton on having a sufficiently good P(obs | reward). Similarly, if you gave CIRL to aliens, whether or not they say it is about getting some extrapolation of unintended consequences depends on exactly what P(obs | reward) you ended up using. There is some not-too-complicated P(obs | reward) such that you do end up getting to "goodness", or something sufficiently close that it is not an existential catastrophe; I do not claim we know what it is. I am claiming that behaviors like (a), (b) and (c) above are compatible with expected utility theory, and thus compatible with "plans that lase". This is demonstrated by CIRL. It is not demonstrated by reward modeling, see e.g.
[these](https://jan.leike.name/publications/Towards%20Interactive%20Inverse%20Reinforcement%20Learning%20-%20Armstrong,%20Leike%202016.pdf) [three](https://arxiv.org/abs/2004.13654) [papers](https://www.tomeveritt.se/papers/alignment.pdf) for problems that arise (which make it so that it is working at cross purposes with itself and seems incompatible with "plans that lase"). (I'm most confident in the first supporting my point; it's been a long time since I read them, so I might be wrong about the others.) To my knowledge, similar problems don't arise with CIRL (and they shouldn't, because it is a nice integrated Bayesian agent doing expected utility theory). I could imagine an objection that P(obs | reward), while not as complicated as "the utility function that rationalizes a twitching robot", is still too complicated to really show compatibility with plans-that-lase, but pointing out that P(obs | reward) could be misspecified doesn't seem particularly relevant to whether behaviors (a), (b) and (c) are compatible with plans-that-lase. Re: shuffling around the problematic consequentialism: it is not my main plan to avoid consequentialism in the sense of plans-that-lase. I broadly agree with Eliezer that you need consequentialism to do high-impact stuff. My plan is for the consequentialism to be aimed at good ends. So I agree that there is still consequentialism in CIRL, and I don't see this as a damning point; when I talk about "dialing in to goodness", I am thinking of aiming the consequentialism at goodness, not getting rid of consequentialism. (You can still do things like try to be domain-specific rather than domain-general; I don't mean to completely exclude such approaches. They do seem to give additional safety. But the mainline story is that the consequentialism / optimization is directed at what we want rather than something else.)
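(Shah's claim that behaviors (a), (b) and (c) fall out of plain expected-utility maximization under reward uncertainty can be made concrete with a toy model. The two reward hypotheses, the action payoffs, and the query cost below are all invented for illustration, not taken from the CIRL paper.)

```python
# Toy sketch (invented numbers): an expected-utility maximizer that is
# uncertain *which reward function is true* will, with no special machinery,
# prefer a robustly-good action over gambles and prefer asking the human
# over both, because information about the reward has positive expected value.

prior = {"R1": 0.5, "R2": 0.5}  # two hypotheses about the true reward function

rewards = {
    "A": {"R1": 1.0, "R2": -1.0},  # great under R1, harmful under R2
    "B": {"R1": -1.0, "R2": 1.0},  # the mirror-image gamble
    "C": {"R1": 0.4, "R2": 0.4},   # "robustly good" but mediocre either way
}

ASK_COST = 0.0625  # small cost of querying the human

def expected_utility(action):
    return sum(p * rewards[action][h] for h, p in prior.items())

def eu_ask():
    # Pay the cost, learn which hypothesis is true, then act optimally under it.
    return -ASK_COST + sum(
        p * max(rewards[a][h] for a in rewards) for h, p in prior.items()
    )

eus = {a: expected_utility(a) for a in rewards}
eus["ask"] = eu_ask()
print(eus)                    # {'A': 0.0, 'B': 0.0, 'C': 0.4, 'ask': 0.9375}
print(max(eus, key=eus.get))  # ask
```

Information-seeking (a) shows up as the "ask" option dominating; conservatism in the face of possibly-harmful actions (b)/(c) shows up as the hedged action C dominating both gambles. None of this requires anything beyond maximizing expected utility over the reward posterior, which is the sense in which the behaviors are claimed to be compatible with "plans that lase".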
| | **[Ngo][6:21]** If you don't know how to implement CIRL in such a way that it actually aims at goodness, then you don't have an algorithm with properties a, b and c above.

Or, to put it another way: suppose I replace the word "goodness" with "winningness". Now I can describe AlphaStar as follows:

* it choose relevant questions to ask (read: scouts to send) in order to get more information about winningness
* it acts conservatively while it is still uncertain about what winningness is
* it take actions that are known to be robustly ~~good~~ winningish, while still waiting for future information that clarifies the nuances of winningness

Now, you might say that the difference is that CIRL implements uncertainty over possible utility functions, not possible empirical beliefs. But this is just a semantic difference which shuffles the problem around without changing anything substantial. E.g. it's exactly equivalent if we think of CIRL as an agent with a fixed (known) utility function, which just has uncertainty about some empirical parameter related to the humans it interacts with. | | | --- | | [Yudkowsky: +1] | | | **[Soares][6:55]** [...] it take actions that are known to be robustly good, while still waiting for future information that clarifies the nuances of winningness

(typo: "known to be robustly good" -> "known to be robustly winningish" :-p) | | | --- | | [Ngo: 👍] |

Some quick reactions, some from me and some from my model of Eliezer:

Eliezer thinks that whatever help they provide is not sufficient [...] The crux here seems to be alignment difficulty.

I'd be more hesitant to declare the crux "alignment difficulty". My understanding of Eliezer's position on your "use AI to help with alignment" proposals (which focus on things like using AI to make paradigmatic AI systems more transparent) is "that was always the plan, and it doesn't address the sort of problems I'm worried about".
Maybe you understand the problems Eliezer's worried about, and believe them not to be very difficult to overcome, thus putting the crux somewhere like "alignment difficulty", but I'm not convinced. I'd update towards your crux-hypothesis if you provided a good-according-to-Eliezer summary of what other problems Eliezer sees and the reasons-according-to-Eliezer that "AI make our tensors more transparent" doesn't much address them.

Corrigibility\_A [...] Corrigibility\_B [...]

Of the two Corrigibility\_B does sound a little closer to my concept, though neither of your descriptions cause me to be confident that communication has occurred. Throwing some checksums out there:

* There are three reasons a young weak AI system might accept your corrections. It could be corrigible, or it could be incorrigibly pursuing goodness, or it could be incorrigibly pursuing some other goal while calculating that accepting this correction is better according to its current goals than risking a shutdown.
* One way you can tell that CIRL is not corrigible is that it does not accept corrections when old and strong.
* There's an intuitive notion of "you're here to help us implement a messy and fragile concept not yet clearly known to us; work with us here?" that makes sense to humans, that includes as a side effect things like "don't scan my brain and then disregard my objections; there could be flaws in how you're inferring my preferences from my objections; it's actually quite important that you be cautious and accept brain surgery even in cases where your updated model says we're about to make a big mistake according to our own preferences".
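Soares's second checksum, that a Bayesian reward-learner stops accepting corrections "when old and strong", can be sketched in a few lines: obeying a shutdown command is an expected-utility comparison, and once the posterior is confident enough, acting anyway dominates. The numbers and the simplified one-shot framing are invented for illustration, not taken from any paper.

```python
# Sketch of checksum 2: deference to a shutdown command is contingent on
# uncertainty. Utilities are hypothetical: u_act if the agent's goal model
# is right, u_act_if_wrong if it is wrong, u_off for shutting down.

def accepts_shutdown(p_goal_is_good, u_act=10.0, u_act_if_wrong=-12.0, u_off=0.0):
    """Return True if obeying shutdown has at least the expected utility
    of ignoring it and acting on the current goal model."""
    eu_act = p_goal_is_good * u_act + (1 - p_goal_is_good) * u_act_if_wrong
    return u_off >= eu_act

# Young and uncertain: obeying the button beats gambling on a possibly
# wrong goal model.
assert accepts_shutdown(p_goal_is_good=0.3)

# Old and confident: expected utility now favors ignoring the button.
assert not accepts_shutdown(p_goal_is_good=0.9)
```

The corrections the agent accepts while weak are thus instrumental, driven by its posterior, rather than a standing disposition to defer, which is the distinction the checksum is pointing at.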
The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility\_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with "plans that lase".

More like:

* Corrigibility seems, at least on the surface, to be in tension with the simple and useful patterns of optimization that tend to be spotlit by demands for cross-domain success, similar to how acting like two oranges are worth one apple and one apple is worth one orange is in tension with those patterns.
* In practice, this tension seems to run more than surface-deep. In particular, various attempts to reconcile the tension fail, and cause the AI to have undesirable preferences (eg, incentives to convince you to shut it down whenever its utility is suboptimal), exploitably bad beliefs (eg, willingness to bet at unreasonable odds that it won't be shut down), and/or to not be corrigible in the first place (eg, a preference for destructively uploading your mind against your protests, at which point further protests from your coworkers are screened off by its access to that upload). | | | --- | | [Yudkowsky: ✅] |

(There's an argument I occasionally see floating around these parts that goes "ok, well what if the AI is *fractally* corrigible, in the sense that instead of its cognition being oriented around pursuit of some goal, its cognition is oriented around doing what it predicts a human would do (or what a human would want it to do) in a corrigible way, at every level and step of its cognition". This is perhaps where you perceive a gap between your A-type and B-type notions, where MIRI folk tend to be more interested in reconciling the tension between corrigibility and coherence, and Paulian folk tend to place more of their chips on some such fractal notion?
I admit I don't find much hope in the "fractally corrigible" view myself, and I'm not sure whether I could pass a proponent's ITT, but fwiw my model of the Yudkowskian rejoinder is "mindspace is deep and wide; that could plausibly be done if you had sufficient mastery of minds; you're not going to get anywhere near close to that in practice, because of the way that basic normal everyday cross-domain training will highlight patterns that you'd call orienting-cognition-around-a-goal".)

And my super-quick takes on your avenues for future discussion:

1. Discussing anti-naturality of corrigibility.

Hopefully the above helps.

2. Discussing why it is very unlikely for the AI system to generalize correctly both on optimization and values-or-goals-that-guide-the-optimization

The concept "patterns of thought that are useful for cross-domain success" is latent in the problems the AI faces, and known to have various simple mathematical shadows, and our training is more-or-less banging the AI over the head with it day in and day out. By contrast, the specific values we wish to be pursued are not latent in the problems, are known to *lack* a simple boundary, and our training is much further removed from it.

3. More checking of where I am failing to pass your ITT

+1

4. Why is "dialing in on goodness" not a reasonable part of the solution space?

It has long been the plan to say something less like "the following list comprises goodness: ..." and more like "yo we're tryin to optimize some difficult-to-name concept; help us out?". "Find a prior that, with observation of the human operators, dials in on goodness" is a fine guess at how to formalize the latter. If we had been planning to take the former tack, and you had come in suggesting CIRL, that might have helped us switch to the latter tack, which would have been cool. In that sense, it's a fine part of the solution.
It also provides some additional formality, which is another iota of potential solution-ness, for that part of the problem. It doesn't much address the rest of the problem, which is centered much more around "how do you point powerful cognition in any direction at all" (such as towards your chosen utility function or prior thereover).

5. More concreteness on how optimization generalizes but corrigibility doesn't, in the case where the AI was trained by human judgment on weak-safe domains

+1 | | **[Shah][13:23]** If you don't know how to implement CIRL in such a way that it actually aims at goodness, then you don't have an algorithm with properties a, b and c above.

I want clarity on the premise here:

* Is the premise "Rohin cannot write code that when run exhibits properties a, b, and c"? If so, I totally agree, but I'm not sure what the point is. All alignment work ever until the very last step will not lead you to writing code that when run exhibits an aligned superintelligence, but this does not mean that the prior alignment work was useless.
* Is the premise "there does not exist code that (1) we would call an implementation of CIRL and (2) when run has properties a, b, and c"? If so, I think your premise is false, for the reasons given previously (I can repeat them if needed)

I imagine it is neither of the above, and you are trying to make a claim that some conclusion that I am drawing from or about CIRL is invalid, because in order for me to draw that conclusion, I need to exhibit the correct P(obs | reward). If so, I want to know which conclusion is invalid and why I have to exhibit the correct P(obs | reward) before I can reach that conclusion.

I agree that the fact that you can get properties (a), (b) and (c) are simple straightforward consequences of being Bayesian about a quantity you are uncertain about and care about, as with AlphaStar and "winningness".
I don't know what you intend to imply by this -- because it also applies to other Bayesian things, it can't imply anything about alignment? I also agree the uncertainty over reward is equivalent to uncertainty over some parameter of the human (and have proved this theorem myself in the paper I wrote on the topic). I do not claim that anything in here is particularly non-obvious or clever, in case anyone thought I was making that claim.

To state it again, my claim is that behaviors like (a), (b) and (c) are consistent with "plans-that-lase", and as evidence for this claim I cite the *existence* of an expected-utility-maximizing algorithm that displays them, specifically CIRL with the correct p(obs | reward). I do *not* claim that I can write down the code, I am just claiming that it *exists*. If you agree with the claim but not the evidence then let's just drop the point. If you disagree with the claim then tell me why it's false. If you are unsure about the claim then point to the step in the argument you think doesn't work.

The reason I care about this claim is that it seems to me like *even* if you think that superintelligences only involve plans-that-lase, it seems to me like this does *not* rule out what we might call "dialing in to goodness" or "assisting the user", and thus it seems like this is a valid target for you to try to get your superintelligence to do.

I suspect that I do not agree with Eliezer about what plans-that-lase can do, but it seems like the two of us should at least agree that behaviors like (a), (b) and (c) can be exhibited in plans-that-lase, and if we don't agree on that some sort of miscommunication has happened.

Throwing some checksums out there

The checksums definitely make sense.
(Technically I could name more reasons why a young AI might accept correction, such as "it's still sphexish in some areas, accepting corrections is one of those reasons", and for the third reason the AI could be calculating negative consequences for things other than shutdown, but that seems nitpicky and I don't think it means I have misunderstood you.) I think the third one feels somewhat slippery and vague, in that I don't know exactly what it's claiming, but it clearly seems to be the same sort of thing as corrigibility. Mostly it's more like I wouldn't be surprised if the Textbook from the Future tells us that we mostly had the right concept of corrigibility, but that third checksum is not quite how they would describe it any more. I would be a lot more surprised if the Textbook says we mostly had the right concept but then says checksums 1 and 2 were misguided.

"The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility\_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with 'plans that lase'."

More like:

* Corrigibility seems, at least on the surface, to be in tension with the simple and useful patterns of optimization that tend to be spotlit by demands for cross-domain success, similar to how acting like two oranges are worth one apple and one apple is worth one orange is in tension with those patterns.
* In practice, this tension seems to run more than surface-deep.
In particular, various attempts to reconcile the tension fail, and cause the AI to have undesirable preferences (eg, incentives to convince you to shut it down whenever its utility is suboptimal), exploitably bad beliefs (eg, willingness to bet at unreasonable odds that it won't be shut down), and/or to not be corrigible in the first place (eg, a preference for destructively uploading your mind against your protests, at which point further protests from your coworkers are screened off by its access to that upload).

On the 2015 Corrigibility paper, is this an accurate summary: "it wasn't that we were checking whether corrigibility could be compatible with useful patterns of optimization; it was already obvious at least at a surface level that corrigibility was in tension with these patterns, and we wanted to check and/or show that this tension persisted more deeply and couldn't be easily fixed".

(My other main hypothesis is that there's an important distinction between "simple and useful patterns of optimization" (term in your message) and "plans that lase" (term in my message) but if so I don't know what it is.) | | **[Soares][13:52]** What we *wanted* to do was show that the apparent tension was merely superficial. We failed. | | | --- | | [Shah: 👍] | (Also, IIRC -- and it's been a long time since I checked -- the 2015 paper contains only one exploration, relating to an idea of Stuart Armstrong's. There were another host of ideas raised and shot down in that era, that didn't make it into that paper, pro'lly b/c they came afterwards.) | | **[Shah][13:55]** What we *wanted* to do was show that the apparent tension was merely superficial. We failed.

(That sounds like what I originally said?
I'm a bit confused why you didn't just agree with my original phrasing:

The point of the 2015 MIRI paper was to check whether it is possible to build a version of corrigibility\_B that was compatible with expected utility maximization with a not-terribly-complicated utility function; the point of this was to see whether corrigibility could be made compatible with "plans that lase".)

(I'm kinda worried that there's some big distinction between "EU maximization", "plans that lase", and "simple and useful patterns of optimization", that I'm not getting; I'm treating them as roughly equivalent at the moment when putting on my MIRI-ontology-hat.) | | **[Soares][14:01]** (There are a bunch of aspects of your phrasing that indicated to me a different framing, and one I find quite foreign. For instance, this talk of "building a version of corrigibility\_B" strikes me as foreign, and the talk of "making it compatible with 'plans that lase'" strikes me as foreign. It's plausible to me that you, who understand your original framing, can tell that my rephrasing matches your original intent. I do not yet feel like I could emit the description you emitted without contorting my thoughts about corrigibility in foreign ways, and I'm not sure whether that's an indication that there are distinctions, important to me, that I haven't communicated.)

(I'm kinda worried that there's some big distinction between "EU maximization", "plans that lase", and "simple and useful patterns of optimization", that I'm not getting; I'm treating them as roughly equivalent at the moment when putting on my MIRI-ontology-hat.)

I, too, believe them to be basically equivalent (with the caveat that the reason for using expanded phrasings is because people have a history of misunderstanding "utility maximization" and "coherence", and so insofar as you round them all to "coherence" and then argue against some very narrow interpretation of coherence, I'm gonna protest that you're bailey-and-motting).
| | | --- | | [Shah: 👍] | | | **[Shah][14:12]** Hopefully the above helps.

I'm still interested in the question "Does this mean you predict that you, or I, with suitably enhanced intelligence and/or reflectivity, would not be capable of producing a plan to help an alien civilization optimize their world, with that plan being corrigible w.r.t the aliens?" I don't currently understand how you avoid making this prediction given other stated beliefs. (Maybe you just bite the bullet and do predict this?)

By contrast, the specific values we wish to be pursued are not latent in the problems, are known to lack a simple boundary, and our training is much further removed from it.

I'm not totally sure what is meant by "simple boundary", but it seems like a lot of human values are latent in text prediction on the Internet, and when training from human feedback the training is not very removed from values.

It has long been the plan to say something less like "the following list comprises goodness: ..." and more like "yo we're tryin to optimize some difficult-to-name concept; help us out?". [...]

I take this to mean that "dialing in on goodness" is a reasonable part of the solution space? If so, I retract that question. I thought from previous comments that Eliezer thought this part of solution space was more doomed than corrigibility.

(I get the sense that people think that I am butthurt about CIRL not getting enough recognition or something. I do in fact think this, but it's not part of my agenda here. I originally brought it up to make the argument that corrigibility is not in tension with EU maximization, then realized that I was mistaken about what "corrigibility" meant, but still care about the argument that "dialing in on goodness" is not in tension with EU maximization. But if we agree on that claim then I'm happy to stop talking about CIRL.) | | **[Soares][14:13]** I'd be *capable* of helping aliens optimize their world, sure. I wouldn't be motivated to, but I'd be capable.
| | **[Shah][14:14]** (There are a bunch of aspects of your phrasing that indicated to me a different framing, and one I find quite foreign. For instance, this talk of "building a version of corrigibility\_B" strikes me as foreign, and the talk of "making it compatible with 'plans that lase'" strikes me as foreign. It's plausible to me that you, who understand your original framing, can tell that my rephrasing matches your original intent. I do not yet feel like I could emit the description you emitted without contorting my thoughts about corrigibility in foreign ways, and I'm not sure whether that's an indication that there are distinctions, important to me, that I haven't communicated.)

This makes sense. I guess you might think of these concepts as quite pinned down? Like, in your head, EU maximization is just a kind of behavior (= set of behaviors), corrigibility is just another kind of behavior (= set of behaviors), and there's a straightforward yes-or-no question about whether the intersection is empty which you set out to answer, you can't "make" it come out one way or the other, nor can you "build" a new kind of corrigibility | | **[Soares][14:17]** Re: CIRL, my current working hypothesis is that by "use CIRL" you mean something analogous to what I say when I say "do CEV" -- namely, direct the AI to figure out what we "really" want in some correct sense, rather than attempting to specify what we want concretely.
And to be clear, on my model, this *is* part of the solution to the overall alignment problem, and it's more-or-less why we wouldn't die immediately on the "value is fragile / we can't name exactly what we want" step if we solved the other problems.

My guess as to the disagreement about how much credit CIRL should get, is that there is in fact a disagreement, but it's not coming from MIRI folk saying "no we should be specifying the actual utility function by hand", it's coming from MIRI folk saying "this is just the advice 'do CEV' dressed up in different clothing and presented as a reason to stop worrying about corrigibility, which is irritating, given that it's orthogonal to corrigibility".

If you wanna fight that fight, I'd start by asking: Do you think CIRL is doing anything above and beyond what "use CEV" is doing? If so, what?

Regardless, I think it might be a good idea for you to try to pass my (or Eliezer's) ITT about what parts of the problem remain beyond the thing I'd call "do CEV" and why they're hard. (Not least b/c if my working hypothesis is wrong, demonstrating your mastery of that subject might prevent a bunch of toil covering ground you already know.) | | **[Shah][14:17]** I'd be *capable* of helping aliens optimize their world, sure. I wouldn't be motivated to, but I'd be capable.

Okay, so it seems like the danger requires the thing-producing-the-plan to be badly-motivated. But then I'm not sure why it seems so impossible to have a (not-badly-motivated) thing that, when given a goal, produces a plan to corrigibly get that goal. (This is a scenario Richard mentioned earlier.) | | **[Soares][14:19]** This makes sense. I guess you might think of these concepts as quite pinned down?
Like, in your head, EU maximization is just a kind of behavior (= set of behaviors), corrigibility is just another kind of behavior (= set of behaviors), and there's a straightforward yes-or-no question about whether the intersection is empty which you set out to answer, you can't "make" it come out one way or the other, nor can you "build" a new kind of corrigibility

That sounds like one of the big directions in which your framing felt off to me, yeah :-). (I don't fully endorse that rephrasing, but it seems directionally correct to me.)

Okay, so it seems like the danger requires the thing-producing-the-plan to be badly-motivated. But then I'm not sure why it seems so impossible to have a (not-badly-motivated) thing that, when given a goal, produces a plan to corrigibly get that goal. (This is a scenario Richard mentioned earlier.)

On my model, aiming the powerful optimizer is the hard bit.

Like, once I grant "there's a powerful optimizer, and all it does is produce plans to corrigibly attain a given goal", I agree that the problem is mostly solved.

There's maybe some cleanup, but the bulk of the alignment challenge preceded that point. | | | --- | | [Shah: 👍] | (This is hard for all the usual reasons, that I suppose I could retread.) | | **[Shah][14:24]** [...] Regardless, I think it might be a good idea for you to try to pass my (or Eliezer's) ITT about what parts of the problem remain beyond the thing I'd call "do CEV" and why they're hard.
(Not least b/c if my working hypothesis is wrong, demonstrating your mastery of that subject might prevent a bunch of toil covering ground you already know.)

(Working on ITT) | | **[Soares][14:30]** (To clarify some points of mine, in case this gets published later to other readers: (1) I might call it more centrally something like "build a [DWIM system](https://arbital.com/p/dwim/)" rather than "use CEV"; and (2) this is not advice about what your civilization should do with early AGI systems, I strongly recommend against trying to pull off CEV under that kind of pressure.) | | **[Shah][14:32]** I don't particularly want to have fights about credit. I just didn't want to falsely state that I do not care about how much credit CIRL gets, when attempting to head off further comments that seemed designed to appease my sense of not-enough-credit. (I'm also not particularly annoyed at MIRI, here.)

On passing ITT, about what's left beyond "use CEV" (stated in my ontology because it's faster to type; I think you'll understand, but I can also translate if you think that's important):

* The main thing is simply how to actually get the AI system to care about pursuing CEV. I think MIRI ontology would call this the target loading problem.
* This is hard because (a) you can't just train on CEV, because you can't just implement CEV and provide that as training and (b) even if you magically could train on CEV, that does not establish that the resulting AI system then wants to optimize CEV. It could just as well optimize some other objective that correlated with CEV in the situations you trained, but no longer correlates in some new situation (like when you are building a nanosystem). (Point (b) is how I would talk about inner alignment.)
* This is made harder for a variety of reasons, including (a) you're working with inscrutable matrices that you can't look at the details of, (b) there are clear racing incentives when the prize is to take over the world (or even just lots of economic profit), (c) people are unlikely to understand the issues at stake (unclear to me of the exact reasons, I'd guess it would be that the issues are too subtle / conceptual, + pressure to rationalize it away), (d) there's very little time in which we have a good understanding of the situation we face, because of fast / discontinuous takeoff | | | --- | | [Soares: 👍] | | | **[Soares][14:37]** Passable ^\_^ (Not exhaustive, obviously; "it will have a tendency to kill you on the first real try if you get it wrong" being an example missing piece, but I doubt you were trying to be exhaustive.) Thanks. | | | --- | | [Shah: 👍] | Okay, so it seems like the danger requires the thing-producing-the-plan to be badly-motivated. But then I'm not sure why it seems so impossible to have a (not-badly-motivated) thing that, when given a goal, produces a plan to corrigibly get that goal. (This is a scenario Richard mentioned earlier.)

I'm uncertain where the disconnect is here. Like, I could repeat some things from past discussions about how "it only outputs plans, it doesn't execute them" does very little (not nothing, but very little) from my perspective? Or you could try to point at past things you'd expect me to repeat and name why they don't seem to apply to you?
| | **[Shah][14:40]** (Flagging that I should go to bed soon, though it doesn't have to be right away) | | **[Yudkowsky][14:50]** ...I do not know if this is going to help anything, but I have a feeling that there's a frequent disconnect wherein I invented an idea, considered it, found it necessary-but-not-sufficient, and moved on to looking for additional or varying solutions, and then a decade or in this case 2 decades later, somebody comes along and sees this brilliant solution which MIRI is for some reason neglecting

this is perhaps exacerbated by a deliberate decision during the early days, when I looked very weird and the field was much more allergic to weird, to not even try to stamp my name on all the things I invented.  eg, I told Nick Bostrom to please use various of my ideas as he found appropriate and only credit them if he thought that was strategically wise.

I expect that some number of people now in the field don't know I invented corrigibility, and any number of other things that I'm a little more hesitant to claim here because I didn't leave Facebook trails for inventing them

and unless you had been around for quite a while, you definitely wouldn't know that I had been (so far as I know) the first person to perform the unexceptional-to-me feat of writing down, in 2001, the very obvious idea I called "external reference semantics", or as it's called nowadays, CIRL | | **[Shah][14:53]** I really honestly am not trying to say that MIRI didn't think of CIRL-like things, nor am I trying to get credit for CIRL. I really just wanted to establish that "learn what is good to do" seems not-ruled-out by EU maximization. That's all. It sounds like we agree on this point and if so I'd prefer to drop it. | | | --- | | [Soares: ❤️] | | | **[Yudkowsky][14:53]** Having a prior over utility functions that gets updated by evidence is not ruled out by EU maximization.
That exact thing is hard for other reasons than it being contrary to the nature of EU maximization.

If it was ruled out by EU maximization for any simple reason, I would have noticed that back in 2001. | | **[Ngo][14:54]** I think we all agree on this point. | | | | --- | --- | | [Shah: 👍] | [Soares: 👍] | One thing I'd note is that during my debate with Eliezer, I'd keep saying "oh so you think X is impossible" and he'd say "no, all these things are *possible*, they're just really really hard". | | **[Yudkowsky][14:58]** ...to do correctly on your first try when a failed attempt kills you. | | **[Shah][14:58]** Maybe it's fine; perhaps the point is just that target loading is hard, and the question is why target loading is so hard.

From my perspective, the main confusing thing about the Eliezer/Nate view is how *confident* it is. With each individual piece, I (usually) find myself nodding along and saying "yes, it seems like if we wanted to guarantee safety, we would need to solve this". What I don't do is say "yes, it seems like without a solution to this, we're near-certainly dead". The uncharitable view (which I share mainly to emphasize where the disconnect is, not because I think it is true) would be something like "Eliezer/Nate are falling prey to a Murphy bias, where they assume that unless they have an ironclad positive argument for safety, the worst possible thing will happen and we all die".
I try to generate things that seem more like ironclad (or at least "leatherclad") positive arguments for doom, and mostly don't succeed; when I say "human values are very complicated" there's the rejoinder that "a superintelligence will certainly know about human values; pointing at them shouldn't take that many more bits"; when I say "this is ultimately just praying for generalization", there's the rejoinder "but it may in fact actually generalize"; add to all of this the fact that a bunch of people will be trying to prevent the problem and it seems weird to be so confident in doom.

A lot of my questions are going to be of the form "it seems like this is a way that we could survive; it definitely involves luck and does not say good things about our civilization, but it does not seem as improbable as the word 'miracle' would imply" | | **[Yudkowsky][15:00]** heh.  from my standpoint, I'd say of this that it reflects those old experiments where if you ask people for their "expected case" it's indistinguishable from their "best case" (since both of these involve visualizing various things going on their imaginative mainline, which is to say, as planned) and reality is usually worse than their "worst case" (because they didn't adjust far enough away from their best-case anchor towards the statistical distribution for actual reality when they were trying to imagine a few failures and disappointments of the sort that reality had previously delivered)

it rhymes with the observation that it's incredibly hard to find people - even inside the field of computer security - who really have what Bruce Schneier termed the security mindset, of asking how to break a cryptography scheme, instead of imagining how your cryptography scheme could succeed

from my perspective, people are just living in a fantasy reality which, if we were actually living in it, would not be full of failed software projects or rocket prototypes that blow up even after you try quite hard to get a system design
about which you made a strong prediction that it wouldn't explode

they think something special has to go wrong with a rocket design, that you must have committed some grave unusual sin against rocketry, for the rocket to explode

as opposed to every rocket wanting really strongly to explode and needing to constrain every aspect of the system to make it not explode and then the first 4 times you launch it, it blows up anyways

why? because of some particular technical issue with O-rings, with the flexibility of rubber in cold weather? | | **[Shah][15:05]** (I have read your Rocket Alignment and security mindset posts. Not claiming this absolves me of bias, just saying that I am familiar with them) | | **[Yudkowsky][15:05]** no, because the strains and temperatures in rockets are large compared to the materials that we use to make up the rockets

the fact that sometimes people are wrong in their uncertain guesses about rocketry does not make their life easier in this regard

the less they understand, the less ability they have to force an outcome within reality

it's no coincidence that when you are Wrong about your rocket, the particular form of Being Wrong that reality delivers to you as a surprise message, is not that you underestimated the strength of steel and so your rocket went to orbit and came back with fewer scratches on the hull than expected

when you are working with powerful forces there is not a symmetry around pleasant and unpleasant surprises being equally likely relative to your first-order model.
if you're a good Bayesian, they will be equally likely relative to your second-order model, but this requires you to be HELLA pessimistic, indeed, SO PESSIMISTIC that sometimes you are pleasantly surprised
which looks like such a bizarre thing to a mundane human that they will gather around and remark at the case of you being pleasantly surprised
they will not be used to seeing this
and they shall say to themselves, "haha, what pessimists"
because to be unpleasantly surprised is so ordinary that they do not bother to gather and gossip about it when it happens
my fundamental sense about the other parties in this debate, underneath all the technical particulars, is that they've constructed a Murphy-free fantasy world from the same fabric that weaves crazy optimistic software project estimates and brilliant cryptographic codes whose inventors didn't quite try to break them, and are waiting to go through that very common human process of trying out their optimistic idea, letting reality gently correct them, predictably becoming older and wiser and starting to see the true scope of the problem, and so in due time becoming one of those Pessimists who tell the youngsters how ha ha of course things are not that easy
this is how the cycle usually goes
the problem is that instead of somebody's first startup failing and them then becoming much more pessimistic about lots of things they thought were easy and then doing their second startup
the part where they go ahead optimistically and learn the hard way about things in their chosen field which aren't as easy as they hoped | | **[Shah][15:13]** Do you want to bet on that? That seems like a testable prediction about beliefs of real people in the not-too-distant future | | **[Yudkowsky][15:13]** kills everyone
not just them
everyone
this is an issue
how on Earth would we bet on that if you think the bet hasn't already resolved? I'm describing the attitudes of people that I see right now today.
| | **[Shah][15:15]** Never mind, I wanted to bet on "people becoming more pessimistic as they try ideas and see them fail", but if your idea of "see them fail" is "superintelligence kills everyone" then obviously we can't bet on that
(people here being alignment researchers, obviously ones who are not me) | | **[Yudkowsky][15:17]** there is some element here of the Bayesian not updating in a predictable direction, of executing today the update you know you'll make later, of saying, "ah yes, I can see that I am in the same sort of situation as the early AI pioneers who thought maybe it would take a summer and actually it was several decades because Things Were Not As Easy As They Imagined, so instead of waiting for reality to correct me, I will imagine myself having already lived through that and go ahead and be more pessimistic right now, not just a little more pessimistic, but so incredibly pessimistic that I am *as* likely to be pleasantly surprised as unpleasantly surprised by each successive observation, which is even more pessimism than even some sad old veterans manage", an element of genre-savviness, an element of knowing the advice that somebody would predictably be shouting at you from outside, of not just blindly enacting the plot you were handed
and I don't quite know *why* this is so much less common than I would have naively thought it would be
why people are content with enacting the predictable plot where they start out cheerful today and get some hard lessons and become pessimistic later
they are their own scriptwriters, and they write scripts for themselves about going into the haunted house and then splitting up the party
I would not have thought that to defy the plot was such a difficult thing for an actual human being to do
that it would require so much reflectivity or something, I don't know what else
nor do I know how to train other people to do it if they are not doing it already
but that from my perspective is the basic difference in gloominess
I am
a time-traveler who came back from the world where it (super duper predictably) turned out that a lot of early bright hopes didn't pan out and various things went WRONG and alignment was HARD and it was NOT SOLVED IN ONE SUMMER BY TEN SMART RESEARCHERS
and now I am trying to warn people about this development which was, from a certain perspective, really quite obvious and not at all difficult to see coming
but people are like, "what the heck are you doing, you are enacting the wrong part of the plot, people are currently supposed to be cheerful, you can't prove that anything will go wrong, why would I turn into a grizzled veteran before the part of the plot where reality hits me over the head with the awful real scope of the problem and shows me that my early bright ideas were way too optimistic and naive"
and I'm like "no you don't get it, where I come from, *everybody died* and didn't turn into grizzled veterans"
and they're like "but that's not what the script says we do next"... or something, I do not know what leads people to think like this because I do not think like that myself | | **[Soares][15:24]** (I think what they actually do is say "it's not obvious to me that this is one of those scenarios where we become grizzled veterans, as opposed to things just actually working out easily")
("many things work out easily all the time; obviously society spends a bunch more focus on things that don't work out easily b/c the things that work easily tend to get resolved fairly quickly and then you don't notice them", or something)
(more generally, I kinda suspect that bickering closer to the object level is likely more productive)
(and i suspect this convo might be aided by Rohin naming a concrete scenario where things go well, so that Eliezer can lament the lack of genre saviness in various specific points) | | **[Yudkowsky][15:26]** there are, of course, lots of more local technical issues where I can specifically predict the failure mode for somebody's bright-eyed naive
idea, especially when I already invented a more sophisticated version a decade or two earlier, and this is what I've usually tried to discuss | | | --- | | [Soares: ❤️] | because conversations like that can sometimes make any progress | | **[Soares][15:26]** (and possibly also Eliezer naming a concrete story where things go poorly, so that Rohin may lament the seemingly blind pessimism & premature grizzledness) | | **[Yudkowsky][15:27]** whereas if somebody lacks the ability to see the warning signs of which genre they are in, I do not know how to change the way they are by talking at them | | **[Shah][15:28]** Unsurprisingly I have disagreements with the meta-level story, but it seems really thorny to make progress on and I'm kinda inclined to not discuss it. I also should go to sleep now.
One thing it did make me think of -- it's possible that the "do it correctly on your first try when a failed attempt kills you" could be the crux here. There's a clearly-true sense which is "the first time you build a superintelligence that you cannot control, if you have failed in your alignment, then you die". There's a different sense which is "and also, anything you try to do with non-superintelligences that you can control, will tell you approximately nothing about the situation you face when you build a superintelligence". I mostly don't agree with the second sense, but if Eliezer / Nate do agree with it, that would go a long way to explaining the confidence in doom.
Two arguments I can see for the second sense: (1) the non-superintelligences only seem to respond well to alignment schemes because they don't yet have the core of general intelligence, and (2) the non-superintelligences only seem to respond well to alignment schemes because despite being misaligned they are doing what we want in order to survive and later execute a treacherous turn.
EDIT: And (3) fast takeoff = not much time to look at the closest non-dangerous examples
(I still should sleep, but would be interested in seeing thoughts tomorrow, and if enough people think it's actually worthwhile to engage on the meta level I can do that. I'm cheerful about engaging on specific object-level ideas.) | | | --- | | [Soares: 💤] | | | **[Yudkowsky][15:28]** it's not that early failures tell you nothing
the failure of the 1955 Dartmouth Project to produce strong AI over a summer told those researchers something
it told them the problem was harder than they'd hoped on the first shot
it didn't show them the correct way to build AGI in 1957 instead | | **[Bensinger][16:41]** Linking to a chat log between Eliezer and some anonymous people (and Steve Omohundro) from early September: [<https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions>]
Eliezer tells me he thinks it pokes at some of Rohin's questions | | **[Yudkowsky][16:48]** I'm not sure that I can successfully, at this point, go back up and usefully reply to the text that scrolled past - I also note some internal grinding about this having turned into a thing which has Pending Replies instead of Scheduled Work Hours - and this maybe means that in the future we shouldn't have such a general chat here, which I didn't anticipate before the fact.  I shall nonetheless try to pick out some things and reply to them. | | | --- | | [Shah: 👍] | * While I think people agree on the behaviors of corrigibility, I am not sure they agree on why we want it. Eliezer wants it for surviving failures, but maybe others want it for "dialing in on goodness". When I think about a "broad basin of corrigibility", that intuitively seems more compatible with the "dialing in on goodness" framing (but this is an aesthetic judgment that could easily be wrong).
This is a weird thing to say in my own ontology.
There's a general project of AGI alignment where you try to do some useful pivotal thing, which has to be powerful enough to be pivotal, and so you somehow need a system that thinks powerful thoughts in the right direction without it killing you.
This could include, for example:

* Trying to train in "low impact" via an RL loss function that penalizes a sufficiently broad range of "impacts" that we hope the learned impact penalty generalizes to all the things we'd consider impacts - even as we scale up the system, without the sort of obvious pathologies that would materialize only over options available to sufficiently powerful systems, like sending out nanosystems to erase the visibility of its actions from human observers
* Tweaking MCTS search code so that it behaves in the fashion of "mild optimization" or "[taskishness](https://arbital.com/p/task_goal/)" instead of searching as hard as it has power available to search
* Exposing the system to lots of labeled examples of relatively simple and safe instructions being obeyed, hoping that it generalizes safe instruction-following to regimes too dangerous for us to inspect outputs and label results
* Writing code that tries to recognize cases of activation vectors going outside the bounds they occupied during training, as a check on whether internal cognitive conservatism is being violated or something is seeking out adversarial counterexamples to a constraint

You could say that only parts 1 and 3 are "dialing in on goodness" because only those parts involve iteratively refining a target, or you could say that all 4 parts are "dialing in on goodness" because parts 2 and 4 help you stay alive while you're doing the iterative refining.  But I don't see this distinction as fundamental or particularly helpful.  What if, on part 4, you were training something to recognize out-of-bounds activations, instead of trying to hardcode it?  Is that dialing in on goodness?
Or is it just dialing in on survivability or corrigibility or whatnot?  Or maybe even part 3 isn't really "dialing in on goodness" because the true distinction between Good and Evil is still external in the programmers and not inside the system?
I don't see this as an especially useful distinction to draw.  There's a hardcoded/learned distinction that probably does matter in several places.  There's a maybe-useful forest-level distinction between "actually doing the pivotal thing" and "not destroying the world as a side effect" which breaks down around the trees because the very definition of "that pivotal thing you want to do" is to do *that thing* and *not* to destroy the world.
And all of this is a class of shallow ideas that I can generate in great quantity.  I now and then consider writing up the ideas like this, just to make clear that I've already thought of way more shallow ideas like this than the net public output of the entire rest of the alignment field, so it's not that my concerns of survivability stem from my having missed any of the obvious shallow ideas like that.
The reason I don't spend a lot of time talking about it is not that I haven't thought of it, it's that I've thought of it, explored it for a while, and decided not to write it up because I don't think it can save the world and the infinite well of shallow ideas seems more like a distraction from the level of miracle we would actually need.

- As a starting point: you say that an agent that makes plans but doesn't execute them is also dangerous, because it is the plan itself that lases, and corrigibility is antithetical to lasing. Does this mean you predict that you, or I, with suitably enhanced intelligence and/or reflectivity, would not be capable of producing a plan to help an alien civilization optimize their world, with that plan being corrigible w.r.t the aliens?
(This seems like a strange and unlikely position to me, but I don't see how to not make this prediction under what I believe to be your beliefs. Maybe you just bite this bullet.)
I 'could' corrigibly help the [Babyeaters](https://www.lesswrong.com/s/qWoFR4ytMpQ5vw3FT) in the sense that I have a notion of what it would mean to corrigibly help them, and if I wanted to do that thing for some reason, like an outside super-universal entity offering to pay me a googolplex flops of eudaimonium if I did that one thing, then I could do that thing.  Absent the superuniversal entity bribing me, I wouldn't *want* to behave corrigibly towards the Babyeaters.  This is not a defect of myself as an individual.  The Superhappies would also be able to understand what it would be like to be corrigible; they wouldn't *want* to behave corrigibly towards the Babyeaters, because, like myself, they don't want exactly what the Babyeaters want.  In particular, we would rather the universe be other than it is with respect to the Babyeaters eating babies. | | | --- | | [Shah: 👍] | |

22. Follow-ups
==============

| | | --- | | **[Shah][0:33]**  **(Nov. 8)** [...] Absent the superuniversal entity bribing me, I wouldn't *want* to behave corrigibly towards the Babyeaters. [...]
Got it. Yeah I think I just misunderstood a point you were saying previously. When Richard asked about systems that simply produce plans rather than execute them, you said something like "the plan itself is dangerous", which I now realize meant "you don't get additional safety from getting to read the plan, the superintelligence would have just chosen a plan that was convincing to you but nonetheless killed everyone / otherwise worked in favor of the superintelligence's goals", but at the time I interpreted it as "any reasonable plan that can actually build nanosystems is going to be dangerous, regardless of the source", which seemed obviously false in the case of a well-motivated system.
[...]
This is a weird thing to say in my own ontology. [...]
When I say "dialing in on goodness", I mean a specific class of strategies for getting a superintelligence to do a useful pivotal thing, in which you build it so that the superintelligence is applying its force towards figuring out what it is that you actually want it to do and pursuing that, which among other things would involve taking a pivotal act to reduce x-risk to ~zero.
I previously had the mistaken impression that you thought this class of strategies was probably doomed because it was incompatible with expected utility theory, which seemed wrong to me. (I don't remember why I had this belief; possibly it was while I was still misunderstanding what you meant by "corrigibility" + the claim that corrigibility is anti-natural.)
I now think that you think it is probably doomed for the same reason that most other technical strategies are probably doomed, which is that there still doesn't seem to be any plausible way of loading in the right target to the superintelligence, even when that target is a process for learning-what-to-optimize, rather than just what-to-optimize.

Linking to a chat log between Eliezer and some anonymous people (and Steve Omohundro) from early September: [<https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions>]
Eliezer tells me he thinks it pokes at some of Rohin's questions

I'm surprised that you think this addresses (or even pokes at) my questions. As far as I can tell, most of the questions there are either about social dynamics, which I've been explicitly avoiding, and the "technical" questions seem to treat "AGI" or "superintelligence" as a symbol; there don't seem to be any internal gears underlying that symbol.
The closest anyone got to internal gears was mentioning iterated amplification as a way of bootstrapping known-safe things to solving hard problems, and that was very brief.
I am much more into the question "how difficult is technical alignment". It seems like answers to this question need to be in one of two categories: (1) claims about the space of minds that lead to intelligent behavior (probably weighted by simplicity, to account for the fact that we'll get the simple ones first), (2) claims about specific methods of building superintelligences. As far as I can tell the only thing in that doc which is close to an argument of this form is "superintelligent consequentialists would find ways to manipulate humans", which seems straightforwardly true (when they are misaligned). I suppose one might also count the assertion that "the speedup step of iterated amplification will introduce errors" as an argument of this form.
It could be that you are trying to convince me of some other beliefs that I wasn't asking about, perhaps in the hopes of conveying some missing mood, but I suspect that it is just that you aren't particularly clear on what my beliefs are / what I'm interested in. (Not unreasonable, given that I've been poking at your models, rather than the other way around.) I could try saying more about that, if you'd like. | | **[Tallinn][11:39]**  **(Nov. 12)** FWIW, a voice from the audience: +1 to going back to sketching concrete scenarios. even though i learned a few things from the abstract discussion of goodness/corrigibility/etc myself (eg, that “corrigible” was meant to be defined at the limit of self-improvement till maturity, not just as a label for code that does not resist iterated development), the progress felt more tangible during the “scaled up muzero” discussion above. | | **[Yudkowsky][15:03]**  **(Nov. 12)** anybody want to give me a prompt for a concrete question/scenario, ideally a concrete such prompt but I'll take whatever?
| | **[Soares][15:34]**  **(Nov. 12)** Not sure I count, but one I'd enjoy a concrete response to: "The leading AI lab vaguely thinks it's important that their systems are 'mere predictors', and wind up creating an AGI that is dangerous; how concretely does it wind up being a scary planning optimizer or whatever, that doesn't run through a scary abstract "waking up" step".
(asking for a friend; @Joe Carlsmith or whoever else finds this scenario unintuitive plz clarify with more detailed requests if interested) |

23. November 13 conversation
============================

23.1. GPT-*n* and goal-oriented aspects of human reasoning
----------------------------------------------------------

| | | --- | | **[Shah][1:46]** I'm still interested in:
5. More concreteness on how optimization generalizes but corrigibility doesn't, in the case where the AI was trained by human judgment on weak-safe domains

Specifically, we can go back to the scaled-up MuZero example. Some (lightly edited) details we had established there:
Pretraining: playing all the videogames, predicting all the text and images, solving randomly generated computer puzzles, accomplishing sets of easily-labelable sensorymotor tasks using robots and webcams
Finetuning: The AI system is being trained to act well on the Internet, and it's shown some tweet / email / message that a user might have seen, and asked to reply to the tweet / email / message. User says whether the replies are good or not (perhaps via comparisons, a la Deep RL from Human Preferences).
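(A minimal sketch of the comparison-based feedback Shah mentions, a la Deep RL from Human Preferences: a Bradley-Terry reward model fit to pairwise judgments. Every detail here is a hypothetical simplification — replies are random feature vectors, the reward model is linear, and the "human" is a hidden scoring function standing in for real judgments.)

```python
import numpy as np

# Toy comparison-based reward learning (Bradley-Terry style). All names
# and details are hypothetical stand-ins: "replies" are feature vectors,
# the reward model is linear, and human preferences come from a hidden
# true reward function.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])   # hidden "what the human wants"
w = np.zeros(3)                       # learned reward model parameters

def reward(params, x):
    return params @ x

for step in range(2000):
    a, b = rng.normal(size=3), rng.normal(size=3)   # two candidate replies
    human_prefers_a = reward(true_w, a) > reward(true_w, b)
    # Bradley-Terry: P(a preferred) = sigmoid(r(a) - r(b))
    p_a = 1.0 / (1.0 + np.exp(-(reward(w, a) - reward(w, b))))
    grad = ((1.0 if human_prefers_a else 0.0) - p_a) * (a - b)
    w += 0.05 * grad                  # gradient ascent on log-likelihood

# On-distribution agreement between learned and hidden rankings.
agree = np.mean([
    (reward(w, x) > reward(w, y)) == (reward(true_w, x) > reward(true_w, y))
    for x, y in (rng.normal(size=(2, 3)) for _ in range(500))
])
print(round(float(agree), 2))
```

Note that the loss only constrains the learned reward on the training distribution; it says nothing by itself about what the model does off that distribution, which is the generalization question at issue in the surrounding discussion.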
It would be more varied than that, but would not include "building nanosystems".
The AI system is not smart enough that exposing humans to text it generates is already a world-wrecking threat if the AI is hostile.

At that point we moved from concrete to abstract:
Abstract description: train on 'weak-safe' domains where the AI isn't smart enough to do damage, and the humans can label the data pretty well because the AI isn't smart enough to fool them
Abstract problem: Optimization generalizes and corrigibility fails

I would be interested in a more concrete description here. I'm not sure exactly what details I'm looking for -- on my ontology the question is something like "what algorithm is the AI system forced to learn; how does that lead to generalized optimization and failed corrigibility; why weren't there simple safer algorithms that were compatible with the training, or if there were such algorithms why didn't the AI system learn them". I don't really see how to answer all of that without abstraction, but perhaps you'll have an answer anyway
(I am hoping to get some concrete detail on "how did it go from non-hostile to hostile", though I suppose you might confidently predict that it was already hostile after pretraining, conditional on it being an AGI at all. I can try devising a different concrete scenario if that's a blocker.) | | **[Yudkowsky][11:09]** I am hoping to get some concrete detail on "how did it go from non-hostile to hostile"

Mu Zero is intrinsically dangerous for reasons essentially isomorphic to the way that AIXI is intrinsically dangerous: It tries to remove humans from its environment when playing Reality for the same reasons it stomps a Goomba if it learns how to play Super Mario Bros 1, because it has some goal and the Goomba is in the way.  It doesn't need to learn anything more to be that way, except for learning what a Goomba/human is within the current environment.
The question is more "What kind of patches might it learn for a weak environment if optimized by some hill-climbing optimization method and loss function not to stomp Goombas there, and how would those patches fail to generalize to not stomping humans?"
Agree or disagree so far? | | **[Shah][12:07]** Agree assuming that it is pursuing a misaligned goal, but I am also asking what misaligned goal it is pursuing (and depending on the answer, maybe also how it came to be pursuing that misaligned goal given the specified training setup).
In fact I think "what misaligned goal is it pursuing" is probably the more central question for me | | **[Yudkowsky][12:14]** well, obvious abstract guess is: something whose non-maximal "optimum" (that is, where the optimization ended up, given about how powerful the optimization was) coincided okayish with the higher regions of the fitness landscape (lower regions of the loss landscape) that could be reached at all, relative to its ancestral environment
I feel like it would be pretty hard to blindly guess, in advance, at my level of intelligence, without having seen any precedents, what the hell a Human would look like, as a derivation of "inclusive genetic fitness" | | **[Shah][12:15]** Yeah I agree with that in the abstract, but have had trouble giving compelling-to-me concrete examples
Yeah I also agree with that | | **[Yudkowsky][12:15]** I could try to make up some weird false specifics if that helps? | | **[Shah][12:16]** To be clear I am fine with "this is a case where we predictably can't have good concrete stories and this does not mean we are safe" (and indeed argued the same thing in a doc I linked here many messages ago)
But weird false specifics could still be interesting
Although let me think if it is actually valuable
Probably it is not going to change my mind very much on alignment difficulty, if it is "weird false specifics", so maybe this isn't the most productive line of discussion.
I'd be "selfishly" interested in that "weird false specifics" seems good for me to generate novel thoughts about these sorts of scenarios, but that seems like a bad use of this Discord
I think given the premises that (1) superintelligence is coming soon, (2) it pursues a misaligned goal by default, and (3) we currently have no technical way of preventing this and no realistic-seeming avenues for generating such methods, I am very pessimistic. I think (2) and (3) are the parts that I don't believe and am interested in digging into, but perhaps "concrete stories" doesn't really work for this. | | **[Yudkowsky][12:26]** with any luck - though I'm not sure I actually expect that much luck - this would be something Redwood Research could tell us about, if they can [learn a nonviolence predicate](https://www.lesswrong.com/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project) over GPT-3 outputs and then manage to successfully mutate the distribution enough that we can get to see what was actually inside the predicate instead of "nonviolence" | | | --- | | [Shah: 👍] | or, like, 10% of what was actually inside it
or enough that people have some specifics to work with when it comes to understanding how gradient descent learning a function over outcomes from human feedback relative to a distribution, doesn't just learn the actual function the human is using to generate the feedback (though, if this were learned exactly, it would still be fatal given superintelligence) | | **[Shah][12:33]** In this framing I do buy that you don't learn exactly the function that generates the feedback -- I have ~5 contrived specific examples where this is the case (i.e. you learn something that wasn't what the feedback function would have rewarded in a different distribution)
(I'm now thinking about what I actually want to say about this framing)
Actually, maybe I do think you might end up learning the function that generates the feedback.
Not literally exactly, if for no other reason than rounding errors, but well enough that the inaccuracies don't matter much. The AGI presumably already knows and understands the concepts we use based on its pretraining, is it really so shocking if gradient descent hooks up those concepts in the right way? (GPT-3 on the other hand doesn't already know and understand the relevant concepts, so I wouldn't predict this of GPT-3.) I do feel though like this isn't really getting at my reason for (relative) optimism, and that reason is much more like "I don't really buy that AGI must be very coherent in a way that would prevent corrigibility from working" (which we could discuss if desired)
On the comment that learning the exact feedback function is still fatal -- I am unclear on why you are so pessimistic on having "human + AI" supervise "AI", in order to have the supervisor be smarter than the thing being supervised. (I think) I understand the pessimism that the learned function won't generalize correctly, but if you imagine that magically working, I'm not clear what additional reason prevents the "human + AI" supervising "AI" setup.

* I can see how you die if the AI ever becomes misaligned, i.e. there isn't a way to fix mistakes, but I don't see how you get the misaligned AI in the first place.
* I could also see things like "Just like a student can get away with plagiarism even when the teacher is smarter than the student, the AI knows more about its cognition than the human + AI system, and so will likely be incentivized to do bad things that it knows are bad but the human + AI system doesn't know is bad". But that sort of thing seems solvable with future research, e.g. debate, interpretability, red teaming all seem like feasible approaches. | | **[Yudkowsky][13:06]** what's a "human + AI"?
can you give me a more concrete version of that scenario, either one where you expect it to work, or where you yourself have labeled the first point you expect it to fail and you want to know whether I see an earlier failure than that? | | **[Shah][13:09]** One concrete training algorithm would be debate, ideally with mechanisms that allow the AI systems to "look into each other's thoughts" and make credible statements about them, but we can skip that for now as it isn't very concrete
Would you like a training domain and data as well?
I don't like the fact that a smart AI system in this position could notice that it is playing against itself and decide not to participate in a zero-sum game, but I am not sure if that worry actually makes sense or not
(Debate can be thought of as simultaneously "human + first AI evaluate second AI" and "human + second AI evaluate first AI") | | **[Yudkowsky][13:12]** further concreteness, please! what pivotal act is it training for? what are the debate contents about? | | **[Shah][13:16]** You start with "easy" debates like mathematical theorem proving or fact-based questions, and ramp up until eventually the questions are roughly "what is the next thing to do in order to execute a pivotal act"
Intermediate questions might be things like "is it a good idea to have a minimum wage" | | **[Yudkowsky][13:17]** so, like, "email ATTTTGAGCTTGCC... to the following address, mix the proteins you receive by FedEx in a water-saline solution at 2 degrees Celsius..." for the final stage? | | **[Shah][13:17]** Yup, that could be it
Humans are judging debates based on reasoning though, not just outcomes-after-executing-the-plan | | **[Yudkowsky][13:19]** okay.  let's suppose you manage to prevent both AGIs from using logical decision theory to coordinate with each other.  both AIs tell their humans that the other AI's plans are murderous.  now what?
| | **[Shah][13:19]** So assuming perfect generalization there should be some large implicit debate tree that justifies the plan in human-understandable form | | **[Yudkowsky][13:20]** yah, I flatly disbelieve that entire development scheme, so we should maybe back up.
people fiddled around with GPT-4 derivatives and never did get them to engage in lines of printed reasoning that would design interesting new stuff.  now what?
Living Zero (a more architecturally complicated successor of Mu Zero) is getting better at designing complicated things over on its side while that's going on, whatever it is | | **[Shah][13:23]** Okay, so the worry is that this just won't scale, not that (assuming perfect generalization) it is unsafe? Or perhaps you also think it is unsafe but it's hard to engage with because you don't believe it will scale?
And the issue is that relying on reasoning confines you to a space of possible thoughts that doesn't include the kinds of thoughts required to develop new stuff (e.g. intuition)? | | **[Yudkowsky][13:25]** mostly I have found these alleged strategies to be too permanently abstract, never concretized, to count as admissible hypotheses.  if you ask me to concretize them myself, I think that unelaborated giant transformer stacks trained on massive online text corpuses fail to learn smart-human-level engineering reasoning before the world ends.  If that were not true, I would expect Paul-style schemes to blow up on the distillation step, but first failures first. | | **[Shah][13:26]** What additional concrete detail do you want?
It feels like I specified something that we could code up a stupidly inefficient version of now | | **[Yudkowsky][13:27]** Great.  Describe the stupidly inefficient version? | | **[Shah][13:33]** In terms of what actually happens: Each episode, there is an initial question specified by the human. Agent A and agent B, which are copies of the same neural net, simultaneously produce statements ("answers").
They then have a conversation. At the end the human judge decides which answer is better, and rewards the appropriate agent. The agents are updated using some RL algorithm.
I can say stuff about why we might hope this works, or about tricks you have to play in order to get learning to happen at all, or other things | | **[Yudkowsky][13:35]** Are the agents also playing Starcraft or have they spent their whole lives inside the world of text? | | **[Shah][13:35]** For the stupidly inefficient version they could have spent their whole lives inside text | | **[Yudkowsky][13:37]** Okay.  I don't think the pure-text versions of GPT-5 are being very good at designing nanosystems while Living Zero is ending the world. | | **[Shah][13:37]** In the stupidly inefficient version human feedback has to teach the agents facts about the real world | | **[Yudkowsky][13:37]** (It's called "Living Zero" because it does lifelong learning, in the backstory I've been trying to separately sketch out in a draft.) | | **[Shah][13:38]** Oh I definitely agree this is not competitive.
So when you say this is too abstract, you mean that there isn't a story for how they incorporate e.g. physical real-world knowledge?
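(For the reader: the "stupidly inefficient" debate loop Shah describes above can be sketched in a few lines. This is an illustrative sketch only; `policy`, `sample_question`, `human_judge_prefers`, and `rl_update` are hypothetical placeholders standing in for the shared neural net, the question distribution, the human judgment, and "some RL algorithm" respectively.)

```python
# Illustrative sketch of the "stupidly inefficient" debate training loop.
# All names here (policy, sample_question, human_judge_prefers, rl_update)
# are hypothetical placeholders, not a real API.

def debate_episode(policy, sample_question, human_judge_prefers, n_turns=4):
    """Run one debate episode; return (reward_A, reward_B)."""
    question = sample_question()
    # Agents A and B are copies of the same net and answer simultaneously.
    answer_a = policy(question, role="A")
    answer_b = policy(question, role="B")
    transcript = [question, answer_a, answer_b]
    # They then have a conversation, alternating turns.
    for turn in range(n_turns):
        role = "A" if turn % 2 == 0 else "B"
        transcript.append(policy(tuple(transcript), role=role))
    # The human judge decides which answer is better; zero-sum reward.
    a_wins = human_judge_prefers(transcript, answer_a, answer_b)
    return (1, -1) if a_wins else (-1, 1)

def train(policy, sample_question, human_judge_prefers, rl_update,
          episodes=1000):
    for _ in range(episodes):
        rewards = debate_episode(policy, sample_question,
                                 human_judge_prefers)
        rl_update(policy, rewards)  # any RL algorithm, e.g. REINFORCE
```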
| | **[Yudkowsky][13:39]** no, I mean that when I talk to Paul about this, I can't get Paul to say anything as concrete as the stuff you've already said
the reason why I don't expect the GPT-5s to be competitive with Living Zero is that gradient descent on feedforward transformer layers, in order to learn science by competing to generate text that humans like, would have to pick up on some very deep latent patterns generating that text, and I don't think there's an incremental pathway there for gradient descent to follow - if gradient descent even follows incremental pathways as opposed to finding [lottery tickets](https://www.lesswrong.com/tag/lottery-ticket-hypothesis), but that's a whole separate open question of artificial neuroscience.
in other words, humans play around with legos, and hominids play around with chipping flint handaxes, and mammals play around with spatial reasoning, and that's part of the incremental pathway to developing deep patterns for causal investigation and engineering, which then get projected into human text and picked up by humans reading text
it's just straightforwardly not clear to me that GPT-5 pretrained on human text corpuses, and then further posttrained by RL on human judgment of text outputs, ever runs across the deep patterns
where relatively small architectural changes might make the system no longer just a giant stack of transformers, even if that resulting system is named "GPT-5", and in this case, bets might be off, but also in this case, things will go wrong with it that go wrong with Living Zero, because it's now learning the more powerful and dangerous kind of work | | **[Shah][13:45]** That does seem like a disagreement, in that I think this process does eventually reach the "deep patterns", but I do agree it is unlikely to be competitive | | **[Yudkowsky][13:45]** I mean, if you take a feedforward stack of transformer layers the size of a galaxy and train it via gradient descent using all the available energy in the
reachable universe, it might find something, sure
though this is by no means certain to be the case | | **[Shah][13:50]** It would be quite surprising to me if it took that much. It would be *especially* surprising to me if we couldn't figure out some alternative reasonably-simple training scheme like "imitate a human doing good reasoning" that still remained entirely in text that could reach the "deep patterns". (This is now no longer a discussion about whether the training scheme is aligned, not sure if we should continue it.)
I realize that this might be hard to do, but if you imagine that GPT-5 + human feedback finetuning does run across the deep patterns and could in theory do the right stuff, and also generalization magically works, what's the next failure? | | **[Yudkowsky][13:56]** what sort of deep thing does a hill-climber run across in the layers, such that the deep thing is the most predictive thing it found for human text about science?
if you don't visualize this deep thing in any detail, then it can in one moment be powerful, and in another moment be safe.  it can have all the properties that you want simultaneously.  who's to say otherwise? the mysterious deep thing has no form within your mind.
if one were to name specifically "well, it ran across a little superintelligence with long-term goals that it realized it could achieve by predicting well in all the cases that an outer gradient descent loop would probably be updating on", that sure doesn't end well for you.
this perhaps is *not* the first thing that gradient descent runs across.  it wasn't the first thing that natural selection ran across to build things that ran the savannah and made more of themselves.  but what deep pattern that is *not* pleasantly and unfrighteningly formless would gradient descent run across instead?
| | **[Shah][14:00]** (Tbc by "human feedback finetuning" I mean debate, and I suspect that "generalization magically works" will be meant to rule out the thing that you say next, but seems worth checking so let me write an answer)
> the deep thing is the most predictive thing it found for human text about science?
Wait, the most predictive thing? I was imagining it as just a thing that is present in addition to all the other things. Like, I don't think I've learned a "deep thing" that is most useful for riding a bike. Probably I'm just misunderstanding what you mean here.
I don't think I can give a good answer here, but to give some answer, it has a belief that there is a universe "out there", that lots but not all of the text it reads is making claims about (some aspect of) the universe, those claims can be true or false, there are some claims that are known to be true, there are some ways to take assumed-true claims and generate new assumed-true claims, which includes claims about optimal actions for goals, as well as claims about how to build stuff, or what the effect of a specified machine is | | **[Yudkowsky][14:10]** hell of a lot of stuff for gradient descent to run across in a stack of transformer layers.  clearly the lottery-ticket hypothesis must have been very incorrect, and there was an incremental trail of successively more complicated gears that got trained into the system.
btw by "claims" are you meaning to make the jump to English claims? I was reading them as giant inscrutable vectors encoding meaningful propositions, but maybe you meant something else there.
| | **[Shah][14:11]** In fact I am skeptical of some strong versions of the lottery ticket hypothesis, though it's been a while since I read the paper and I don't remember exactly what the original hypothesis was
Giant inscrutable vectors encoding meaningful propositions | | **[Yudkowsky][14:13]** oh, I'm not particularly confident of the lottery-ticket hypothesis either, though I sure do find it grimly amusing that a species which hasn't already figured *that* out one way or another thinks it's going to have deep transparency into neural nets all wrapped up in time to survive.  but, separate issue.
"How does gradient descent even work?" "Lol nobody knows, it just does."
but, separate issue | | **[Shah][14:16]** How does strong lottery ticket hypothesis explain GPT-3? Seems like that should already be enough to determine that there's an incremental trail of successively more complicated gears | | **[Yudkowsky][14:18]** could just be that in 175B parameters, combinatorially combined through possible execution pathways, there is some stuff that was pretty close to doing all the stuff that GPT-3 ended up doing.
anyways, for a human to come up with human text about science, the human has to brood and think for a bit about different possible hypotheses that could account for the data, notice places where those hypotheses break down, tweak the hypotheses in their mind to make the errors go away; they would engineer an internal mental construct towards the engineering goal of making good predictions.  if you're looking at orbital mechanics and haven't invented calculus yet, you invent calculus as a persistent mental tool that you can use to craft those internal mental constructs.
does the formless deep pattern of GPT-5 accomplish the same ends, by some mysterious means that is formless, able to produce the same result, but not by any detailed means where if you visualized them you would be able to see how it was unsafe?
| | **[Shah][14:24]** I expect that probably we will figure out some way to have adaptive computation time be a thing (it's been investigated for years now, but afaik hasn't worked very well), which will allow for this sort of thing to happen
In the stupidly inefficient version, you have a really really giant and deep neural net that does all of that in successive layers of the neural net. (And when it doesn't need to do that, those layers are noops.) | | **[Yudkowsky][14:26][14:32]** okay, so my question is, is there a little goal-oriented mind inside there that solves science problems the same way humans solve them, by engineering mental constructs that serve a goal of prediction, including backchaining for prediction goals and forward chaining from alternative hypotheses / internal tweaked states of the mental construct? or is there something else which solves the same problem, not how humans do it, without any internal goal orientation?
People who would not in the first place realize that humans solve prediction problems by internally engineering internal mental constructs in a goal-oriented way, would of course imagine themselves able to imagine a formless spirit which produces "predictions" without being "goal-oriented" because they lack an understanding of internal machinery and so can combine whatever surface properties and English words they want to yield a beautiful optimism.
Or perhaps there is indeed some way to produce "predictions" without being "goal-oriented", which gradient descent on a great stack of transformer layers would surely run across; but you will pardon my grave lack of confidence that someone has in fact seen so much further than myself, when they don't seem to have appreciated in advance of my own questions why somebody who understood something about human internals would be skeptical of this.
If they're sort of visibly trying to come up with it on the spot after I ask the question, that's not such a great sign either.
| | This is not aimed particularly at you, but I hope the reader may understand something of why Eliezer Yudkowsky goes about sounding so gloomy all the time about other people's prospects for noticing what will kill them, by themselves, without Eliezer constantly hovering over their shoulder every minute prompting them with almost all of the answer. | | **[Shah][14:31]** Just to check my understanding: if we're talking about, say, how humans might go about understanding neural nets, there's a goal of "have a theory that can retrodict existing observations and make new predictions", backchaining might say "come up with hypotheses that would explain double descent", forward chaining might say "look into bias and variance measurements"?
If so, yes, I think the AGI / GPT-5-that-is-an-AGI is doing something similar | | **[Yudkowsky][14:33]** your understanding sounds okay, though it might make more sense to talk about a domain that human beings understand better than artificial neuroscience, for purposes of illustrating how scientific thinking works, since human beings haven't actually gotten very far with artificial neuroscience.
| | **[Shah][14:33]** Fair point re using a different domain
To be clear I do not in fact think that GPT-N is safe because it is trained with supervised learning and I am confused at the combination of views that GPT-N will be AGI and GPT-N will be safe because it's just doing predictions
Maybe there is marginal additional safety but you clearly can't say it is "definitely safe" without some additional knowledge that I have not seen so far
Going back to the original question, of what the next failure mode of debate would be assuming magical generalization, I think it's just not one that makes sense to ask on your worldview / ontology; "magical generalization" is the equivalent of "assume that the goal-oriented mind somehow doesn't do dangerous optimization towards its goal, yet nonetheless produces things that can only be produced by dangerous optimization towards a goal", and so it is assuming the entire problem away | | **[Yudkowsky][14:41]** well YES
from my perspective the whole field of mental endeavor as practiced by alignment optimists consists of ancient alchemists wondering if they can get collections of surface properties, like a metal as shiny as gold, as hard as steel, and as self-healing as flesh, where optimism about such wonderfully combined properties can be infinite as long as you stay ignorant of underlying structures that produce some properties but not others
and, like, maybe you *can* get something as hard as steel, as shiny as gold, and resilient or self-healing in various ways, but you sure don't get it by ignorance of the internals
and not for a while
so if you need the magic sword in 2 years or the world ends, you're kinda dead | | **[Shah][14:46]** Potentially dumb question: when humans do science, why don't they then try to take over the world to do the best possible science?
(If humans are doing dangerous goal-directed optimization when doing science, why doesn't that lead to catastrophe?)
You could of course say that they just aren't smart enough to do so, but it sure feels like (most) humans wouldn't want to do the best possible science even if they were smarter
I think this is similar to a question I asked before about plans being dangerous independent of their source, and the answer was that the source was misaligned
But in the description above you didn't say anything about the thing-doing-science being misaligned, so I am once again confused | | **[Yudkowsky][14:48]** boy, so many dumb answers to this dumb question:

* even relatively "smart" humans are not very smart compared to other humans, such that they don't have a "take over the world" option available.
* most humans who use Science were not smart enough to invent the underlying concept of Science for themselves from scratch; and Francis Bacon, who did, sure did want to take over the world with it.
* groups of humans with relatively more Engineering sure did take over large parts of the world relative to groups that had relatively less.
* Eliezer Yudkowsky clearly demonstrates that when you are smart *enough* you start trying to use Science and Engineering to take over your whole future lightcone, the other humans you're thinking of just aren't that smart, and, if they were, would inevitably converge towards Eliezer Yudkowsky, who is really a very typical example of a person that smart, even if he looks odd to you because you're not seeing the population of other [dath ilani](https://www.lesswrong.com/tag/dath-ilan)

I am genuinely not sure how to come up with a less dumb answer and it may require a more precise reformulation of the question | | **[Shah][14:50]** But like, in Eliezer's case, there is a different goal that is motivating him to use Science and Engineering for this purpose
It is not the prediction-goal that he instantiated in his mind as part of the method of doing Science | | **[Yudkowsky][14:52]** sure, and the mysterious formless thing within GPT-5 with "adaptive computation time" that broods and thinks, may be pursuing its prediction-subgoal for the sake of other goals, or be pursuing different subgoals of prediction separately without ever once having a goal of prediction, or have 66,666 different shards of desire across different kinds of predictive subproblems that were entrained by gradient descent which does more brute memorization and less Occam bias than natural selection
oh, are you asking why humans, when they do goal-oriented Science for the sake of their other goals, don't (universally always) stomp on their other goals while pursuing the Science part?
| | **[Shah][14:54]** Well, that might also be interesting to hear the answer to -- I don't know how I'd answer that through an Eliezer-lens -- though it wasn't exactly what I was asking | | **[Yudkowsky][14:56]** basically the answer is "well, first of all, they do stomp on themselves to the extent that they're stupid; and to the extent that they're smart, pursuing X on the pathway to Y has a 'natural' structure for not stomping on Y which is simple and generalizes and obeys all the coherence theorems and can incorporate arbitrarily fine wiggles via epistemic modeling of those fine wiggles because those fine wiggles have a very compact encoding relative to the epistemic model, aka, predicting which forms of X lead to Y; and to the extent that group structures of humans can't do that simple thing coherently because of their cognitive and motivational partitioning, the group structures of humans are back to not being able to coherently pursue the final goal again" | | **[Shah][14:58]** (Going back to what I meant to ask) It seems to me like humans demonstrate that you can have a prediction goal without that being your final/terminal goal. So it seems like with AI you similarly need to talk about the final/terminal goal. But then we talked about GPT and debate and so on for a while, and then you explained how GPTs would have deep patterns that do dangerous optimization, where the deep patterns involved instantiating a prediction goal. Notably, you didn't say anything about a final/terminal goal. Do you see why I am confused? 
| | **[Yudkowsky][15:00]** so you can do prediction because it's on the way to some totally other final goal - the way that any tiny superintelligence or superhumanly-coherent agent, if an optimization method somehow managed to run across *that* early on, with an arbitrary goal, which also understood the larger picture, would make good predictions while it thought the outer loop was probably doing gradient descent updates, and bide its time to produce rather different "predictions" once it suspected the results were not going to be checked given what the inputs had looked like.
you can imagine a thing that does prediction the same way that humans optimize inclusive genetic fitness, by pursuing dozens of little goals that tend to cohere to good prediction in the ancestral environment
both of these could happen in order; you could get a thing that pursued 66 severed shards of prediction as a small mind, and which, when made larger, cohered into a utility function around the 66 severed shards that sum to something which is not good prediction and which you could pursue by transforming the universe, and then strategically made good predictions while it expected the results to go on being checked | | **[Shah][15:02]** OH you mean that the outer objective is prediction | | **[Yudkowsky][15:02]** ? | | **[Shah][15:03]** I have for quite a while thought that you meant that Science involves internally setting a subgoal of "predict a confusing part of reality" | | **[Yudkowsky][15:03]** it... does?
I mean, that is true. | | **[Shah][15:04]** Okay wait. There are two things. One is that GPT-3 is trained with a loss function that one might call a prediction objective for human text. Two is that Science involves looking at a part of reality and figuring out how to predict it. These two things are totally different.
I am now unsure which one(s) you were talking about in the conversation above | | **[Yudkowsky][15:06]** what I'm saying is that for GPT-5 to successfully do AGI-complete prediction of human text about Science, gradient descent must identify some formless thing that does Science internally in order to optimize the outer loss function for predicting human text about Science
just like, if it learns to predict human text about multiplication, it must have learned something internally that does multiplication
(afk, lunch/dinner) | | **[Shah][15:07]** Yeah, so you meant the first thing, and I misinterpreted as the second thing
(I will head to bed in this case -- I was meaning to do that soon anyway -- but I'll first summarize.) | | **[Yudkowsky][15:08]** I am concerned that there is still a misinterpretation going on, because the case I am describing is both things at once
there is an outer loss function that scores text predictions, and an internal process which for purposes of predicting what Science would say must actually somehow do the work of Science | | **[Shah][15:09]** Okay let me look back at the conversation
> is there a little goal-oriented mind inside there that solves science problems the same way humans solve them, by engineering mental constructs that serve a goal of prediction, including backchaining for prediction goals and forward chaining from alternative hypotheses / internal tweaked states of the mental construct?
Here, is the word "prediction" meant to refer to the outer objective and/or predicting what English sentences about Science one might say, or is it referring to a subpart of the Process Of Science in which one aims to predict some aspect of reality (which is typically not in the form of English sentences)?
| | **[Yudkowsky][15:20]** it's here referring to the inner Science problem | | **[Shah][15:21]** Okay I think my original understanding was correct in that case
> from my perspective the whole field of mental endeavor as practiced by alignment optimists consists of ancient alchemists wondering if they can get collections of surface properties, like a metal as shiny as gold, as hard as steel, and as self-healing as flesh, where optimism about such wonderfully combined properties can be infinite as long as you stay ignorant of underlying structures that produce some properties but not others
I actually think something like this might be a crux for me, though obviously I wouldn't put it the way you're putting it. More like "are arguments about internal mechanisms more or less trustworthy than arguments about what you're selecting for" (limiting to arguments we actually have access to, of course in the limit of perfect knowledge internal mechanisms beats selection). But that is I think a discussion for another day. | | **[Yudkowsky][15:29]** I think the critical insight - though it has a format that basically nobody except me ever visibly invokes in those terms, and I worry maybe it can only be taught by a kind of life experience that's very hard to obtain - is the realization that *any* consistent reasonable story about underlying mechanisms will give you less optimistic forecasts than the ones you get by freely combining surface desiderata | | **[Shah] [1:38]**  **(next day, Nov. 14)** (For the reader, I don't think that "arguments about what you're selecting for" is the same thing as "freely combining surface desiderata", though I do expect they look approximately the same to Eliezer)
Yeah, I think I do not in fact understand why that is true for any consistent reasonable story.
From my perspective, when I posit a hypothetical, you demonstrate that there is an underlying mechanism that produces strong capabilities that generalize combined with real world knowledge.
I agree that a powerful AI system that we build capable of executing a pivotal act will have strong capabilities that generalize and real world knowledge. I am happy to assume for the purposes of this discussion that it involves backchaining from a target and forward chaining from things that you currently know or have. I agree that such capabilities could be used to cause an existential catastrophe (at least in a unipolar world, multipolar case is more complicated, but we can stick with unipolar for now). None of my arguments so far are meant to factor through the route of "make it so that the AGI can't cause an existential catastrophe even if it wants to".
The main question according to me is why those capabilities are aimed towards achievement of a misaligned goal.
It feels like when I try to ask why we have misaligned goals, I often get answers that are of the form "look at the deep patterns underlying the strong capabilities that generalize, obviously given a misaligned goal they would generate the plan of killing the humans who are an obstacle towards achieving that goal". This of course doesn't work since it's a circular argument.
I can generate lots of arguments for why it would be aimed towards achievement of a misaligned goal, such as (1) only a tiny fraction of goals are aligned; the rest are misaligned, (2) the feedback we provide is unlikely to be the right goal and even small errors are fatal, (3) lots of misaligned goals are compatible with the feedback we provide even if the feedback is good, since the AGI might behave well until it can execute a treacherous turn, (4) the one example of strategically aware intelligence (i.e. humans) is misaligned relative to its creator. (I'm not saying I agree with these arguments, but I do understand them.)
Are these the arguments that make you think that you get misaligned goals by default?
Or is it something about "deep patterns" that isn't captured by "strong capabilities that generalize, real-world knowledge, ability to cause an existential catastrophe if it wants to"?

24. Follow-ups
==============

**[Yudkowsky][15:59]**  **(Feb. 21, 2022)** So I realize it's been a bit, but looking over this last conversation, I feel unhappy about the MIRI conversations sequence stopping exactly here, with an unanswered major question, after I ran out of energy last time.  I shall attempt to answer it, at least at all.  CC @rohin @RobBensinger .
[Shah: 🙂] [Ngo: 🙂] [Bensinger: 🙂]

One basic large class of reasons has the form, "Outer optimization on a precise loss function doesn't get you inner consequentialism explicitly targeting that outer objective, just inner consequentialism targeting objectives which empirically happen to align with the outer objective given that environment and those capability levels; and at some point sufficiently powerful inner consequentialism starts to generalize far out-of-distribution, and, when it does, the consequentialist part generalizes much further than the empirical alignment with the outer objective function."
This, I hope, is by now recognizable to individuals of interest as an overly abstract description of what happened with humans, who one day started building Moon rockets without seeming to care very much about calculating and maximizing their personal inclusive genetic fitness while doing that.
Their capabilities generalized much further out of the ancestral training distribution, than the empirical alignment of those capabilities on inclusive genetic fitness in the ancestral training distribution.
One basic large class of reasons has the form, "Because the real objective is something that cannot be precisely and accurately shown to the AGI and the differences are systematic and important."
Suppose you have a bunch of humans classifying videos of real events or text descriptions of real events or hypothetical fictional scenarios in text, as desirable or undesirable, and assigning them numerical ratings.  Unless these humans are perfectly free of, among other things, all the standard and well-known cognitive biases about eg differently treating losses and gains, the value of this sensory signal is not "The value of our real CEV rating what is Good or Bad and how much" nor even "The value of a utility function we've got right now, run over the real events behind these videos".  Instead it is in a systematic and real and visible way, "The result of running an error-prone human brain over this data to produce a rating on it."
This is not a mistake by the AGI, it's not something the AGI can narrow down by running more experiments, the *correct answer as defined* is what contains the alignment difficulty.  If the AGI, or for that matter the outer optimization loop, *correctly generalizes* the function that is producing the human feedback, it will include the systematic sources of error in that feedback.
If the AGI essays an experimental test of a manipulation that an ideal observer would see as "intended to produce error in humans", then the experimental result will be "Ah yes, this is correctly part of the objective function, the objective function I'm supposed to maximize sure does have this in it according to the sensory data I got about this objective."
People have fantasized about having the AGI learn something other than the true and accurate function producing its objective-describing data, as its actual objective, from the objective-describing data that it gets; I, of course, was the first person to imagine this and say it should be done, back in 2001 or so; unlike a lot of latecomers to this situation, I am skeptical of my own proposals and I know very well that I did not in fact come up with any reliable-looking proposal for learning 'true' human values off systematically erroneous human feedback.
Difficulties here are fatal, because a true and accurate learning of what is producing the objective-describing signal will correctly imply that higher values of this signal obtain as the humans are manipulated or as they are bypassed with physical interrupts for control of the feedback signal.  In other words, even if you could do a bunch of training on an outer objective, and get inner optimization perfectly targeted on that, the fact that it was perfectly targeted would kill you.

**[Bensinger][23:15]  (Feb. 27, 2022 follow-up comment)** This is the last log in the [Late 2021 MIRI Conversations](https://intelligence.org/late-2021-miri-conversations/). We'll be concluding the sequence with a public [**Ask Me Anything**](https://www.lesswrong.com/posts/34Gkqus9vusXRevR8/late-2021-miri-conversations-discussion-and-ama) (AMA) this Wednesday; you can start posting questions there now.
MIRI has found the Discord format useful, and we plan to continue using it going into 2022.
This includes follow-up conversations between Eliezer and Rohin, and a forthcoming conversation between Eliezer and Scott Alexander of [Astral Codex Ten](https://astralcodexten.substack.com/).
Some concluding thoughts from Richard Ngo:

**[Ngo][6:20]  (Nov. 12 follow-up comment)** Many thanks to Eliezer and Nate for their courteous and constructive discussion and moderation, and to Rob for putting the transcripts together.
This debate updated me about 15% of the way towards Eliezer's position, with Eliezer's arguments about the difficulties of coordinating to ensure alignment responsible for most of that shift. While I don't find Eliezer's core intuitions about intelligence too implausible, they don't seem compelling enough to do as much work as Eliezer argues they do. As in the Foom debate, I think that our object-level discussions were constrained by our different underlying attitudes towards high-level abstractions, which are hard to pin down (let alone resolve).
Given this, I think that the most productive mode of intellectual engagement with Eliezer's worldview going forward is probably not to continue debating it (since that would likely hit those same underlying disagreements), but rather to try to inhabit it deeply enough to rederive his conclusions and find new explanations of them which then lead to clearer object-level cruxes. I hope that these transcripts shed sufficient light for some readers to be able to do so.
Contending Frames: Evaluating Rhetorical Dynamics in AI

CSET Issue Brief

AUTHORS: Andrew Imbrie, Rebecca Gelles, James Dunham, Catherine Aiken

May 2021

Center for Security and Emerging Technology

Executive Summary

The narrative of an artificial intelligence “arms race” among the great powers has become shorthand to describe evolving dynamics in the field. Narratives about AI matter because they reflect and shape public perceptions of the technology. Policymakers will need to monitor these perceptions closely, as levels of public confidence in AI directly impact the scope for emerging technology policy. In this data brief, the second in our series examining rhetorical frames in AI, we compare four narrative frames that are prominent in public discourse: AI Competition, Killer Robots, Economic Gold Rush, and World Without Work. By searching more than seven million articles on LexisNexis over the 2012 to 2020 period, we find:

● The Competition frame predominates among the four frames under study, both in terms of raw counts and as a percentage of total articles that mention AI, with roughly as many occurrences in 2020 as the other three frames combined.

● While outlets that cater to niche foreign policy audiences, such as Defense One and Foreign Affairs, have become more diverse in terms of discourse around AI, outlets that cater to more general audiences appear to have converged around the Competition frame. This trend suggests that a hardening in foreign policy discourse around AI between 2012 and 2015 may have encouraged a perception of competition among general audiences. Media convergence around a competitive narrative could undermine efforts to bolster global AI standards-setting and collaboration around testing and safety.

● As a share of articles mentioning AI, the Killer Robots frame peaked in 2015.
In that year, the Killer Robots frame was almost as prevalent as the Competition frame; today, it is the least common among the four frames under study. This suggests that early concerns about military use of AI may—for now—have become less salient.

● As a share of AI articles, the Economic Gold Rush frame peaked in 2018. Today, it is the second most common among all four frames under study. This frame peaked later than the other frames, suggesting that narratives focused on threat predominated early in the public discourse around AI and only more recently turned to potential opportunities. The Economic Gold Rush frame is often associated with Big Tech companies, which suggests this more optimistic framing could have implications for efforts to regulate Big Tech.

● Political leaders and tech company CEOs are the individuals most commonly mentioned in articles that use the four frames. Prominent academics, authors, and computer scientists are less frequently mentioned. Whether this is a reflection of existing debates between political leaders and tech companies or an indicator of the politicization of AI is unclear, but an intriguing question for possible future research.

● More than 95 percent of articles, distributed among a range of sources, describe the activities of AI companies without using these frames, or any identifiable frame. For example, many articles are straightforward financial reporting. This suggests a majority of media coverage about AI may avoid commentary on the political implications of the technology. This finding also suggests that policymakers may have leeway to shape the public discourse around AI as the technology matures.

Contending Frames

In an earlier analysis, "Mainframes: A Provisional Analysis of Rhetorical Frames in AI," we developed a novel methodology to capture occurrences of the "AI Competition" frame in popular media sources.
In the Competition frame, AI development is described as a race between two or more actors.1 We explored the use of the frame across more than 4,000 articles from three news outlets between 2012 and 2019. Among our findings, we observed that since 2012, "a growing number of articles in the three news sources have included the Competition frame, but prevalence of the frame as a proportion of all AI articles peaked in 2015."2 We concluded that the frame's peak in 2015 and subsequent decline as a proportion of all AI articles under study may indicate that reporting on AI had grown more diverse and sophisticated.

In this follow-up data brief, we compare the "AI Competition" frame with other frames in public discussion of AI. After canvassing articles about AI across a wide range of outlets, we identified three additional rhetorical frames. We selected these frames to offer a diversity of perspectives on AI, emphasizing both national security and economic implications as well as positive and negative public sentiment. Like the "AI Competition" frame, the three additional frames under study capture important dynamics in mass communications about AI. The "Killer Robots" frame describes a future in which lethal autonomous weapons select and engage targets without human supervision. This frame argues that lethal autonomous weapons present a threat to humanity and need to be controlled or banned. The "World Without Work" frame describes a future in which AI replaces, as opposed to augments, human labor. Just as machines displaced human manual labor during the Industrial Revolution, this frame claims that AI will replace human cognitive labor.3 The "Economic Gold Rush" frame describes a future in which AI unleashes productivity and generates massive wealth for the global economy.

To compare occurrences of the four frames, we searched more than seven million articles on LexisNexis over the 2012 to 2020 period.
We identified 125,567 articles that mentioned AI.4 Among these, 3,207 (2.6 percent) talked about AI using one of the four frames.5 Specifically, we recorded 1,702 occurrences of the "AI Competition" frame, 480 of the "Killer Robots" frame, 688 of the "World Without Work" frame, and 670 of the "Economic Gold Rush" frame. For each frame, we tracked its use over time, the outlets and authors that employ it, and the individuals and organizations that are mentioned in association with it.

Identifying Rhetorical Frames

In our initial study of the AI Competition frame, we annotated a sample of 10,000 LexisNexis articles, following a carefully developed annotation framework. Drawing on this preliminary work, we developed queries for an analysis of rhetorical frames at scale across a large corpus of news media from LexisNexis. Specifically, we examined LexisNexis Metabase content published in English between 2012 and 2020 by national, newswire, or trade sources in the United States that LexisNexis categorized as "top international, national, and business news" or "top regional" outlets.6 These criteria identified 7.9 million articles from 325 news sources. Within this corpus, we searched for articles that included explicit mention of "artificial intelligence" or "AI" and captured occurrences of the Competition frame or three additional frames: "Killer Robots," "World Without Work," and "Economic Gold Rush."

AI Rhetorical Frames

As in our earlier analysis, the Competition frame describes AI development as a race between two or more actors, such as governments or companies.
Invocations of the frame include the following:

● a military competition ("arms race")
● historical competition ("Cold War" or "Sputnik Moment")
● a territorial competition ("supremacy in Europe")
● a competition for resources ("battle for talent")
● any other type of competition ("two-man contest" or "AI rivalry")

We identified invocations of the frame by searching the corpus text for mentions of AI within a short distance (20 characters) of terms like these, in various forms: Sputnik, foreign adversary, arms race, battle, competition, conflict, rivalry, superiority, or strategic advantage.7

Example invocations of each frame:

● Economic Gold Rush: "Cashing in on artificial intelligence." "Companies are rushing to develop technologies incorporating such autonomous learning technology.... Other businesses have ideas about how to take advantage of the coming AI revolution." (Nikkei Asia)

● World Without Work: "A Machine May Not Take Your Job, but One Could Become Your Boss." "For decades, people have fearfully imagined armies of hyper-efficient robots invading offices and factories, gobbling up jobs once done by humans. But in all of the worry about the potential of artificial intelligence to replace rank-and-file workers, we may have overlooked the possibility it will replace the bosses, too." (The New York Times)

● Killer Robots: "US general warns of out-of-control killer robots." "America's second-highest ranking military officer, Gen. Paul Selva, advocated Tuesday for 'keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don't know how to control.'" (CNN)

● Competition: "U.S. and China battle for technological supremacy." "Escalating tensions between the U.S. and China are stoking the narrative of an all-out artificial intelligence arms race between the two countries." (CBS News)

The "Killer Robots" frame describes a future in which lethal autonomous weapons or "fully autonomous weapons" select and engage targets without human supervision.8 Killer robots, in this frame's usage, are self-targeting, independent, and unsupervised. Some who employ this frame argue that lethal autonomous weapons present a threat to humanity and need to be controlled or banned. Others who use this frame contend that fully autonomous weapons lack the interpretive capacities and situational awareness necessary to comply with the laws of war.9 Still others employing the frame argue that "killer robots" create an "accountability gap" and offend basic human dignity.10

We considered articles to invoke the frame if they included a mention of AI along with such terms as autonomous weapons, slaughterbots, international humanitarian law, laws of war, killer robots, human control, International Committee for Robot Arms Control, threat to humanity, Martens Clause, Convention on Certain Conventional Weapons, autonomy, DOD Directive 3000.09, or existential risk. We also included articles that mentioned "prohibition" or "ban" within a short distance of a reference to AI.11

The "World Without Work" frame describes a future in which AI replaces, as opposed to augments, human labor.12 Some argue that AI will displace certain jobs and create new ones. Others claim that AI will help meet the challenge of declining birth rates and flagging productivity in countries such as Japan or China.
Still others foresee a more ominous future where heavily populated countries suffer a "population curse" similar to nations that struggled from a surfeit of fossil fuels in the postwar period (the "resource curse").13 The historian Yuval Noah Harari writes of a new "global useless class" where AI and biotech not only displace jobs but create new forms of inequality through the merger of machine learning and genetic engineering.14 Unlike the Industrial Revolution, which had a "de-skilling" effect by opening up jobs and whole sectors to lower-skilled labor, the AI revolution could have the opposite effect: machines will make higher-skilled workers more productive and those with lower-skill jobs more vulnerable to automation.15 The author Kai-Fu Lee argues that manual jobs involving higher levels of dexterity or those requiring "compassion and creativity" may be less at risk than other "repetition-rich white collar jobs."16 According to many uses of this frame, surging productivity growth will not necessarily translate into shared prosperity or rising compensation.17

To find invocations of the World Without Work frame, we searched for articles that mentioned AI and included such phrases as global useless class, employment polarization, livelihoods, mass joblessness, fourth industrial revolution, winners and losers, or jobs eliminated. We also considered articles that mentioned the terms "displace" or "replace" within a short distance of a reference to the word "job."

The "Economic Gold Rush" frame describes a future in which AI unleashes productivity and generates massive wealth for the global economy.18 Some uses of this frame tend to highlight upper-bound estimates of AI's potential to stimulate global growth without noting the lower-bound estimates, caveats, or historical trends. Other uses of this frame recognize the risks and argue that governments will need to invest in education and workforce training.
This frame often analogizes AI to electricity in its potential transformative effects on the global economy. The Economic Gold Rush frame tends to focus on absolute as opposed to relative gains and touts aggregate economic benefits as opposed to differential effects at the national and sub-national levels or across different sectors. We counted articles as invoking the Economic Gold Rush frame if they mentioned AI alongside references to a productivity dividend, AI revolution, golden opportunity, labor productivity improvement, or similar concepts.

In the resulting corpus of frame-invoking articles, we assessed the prevalence of frames over time and across outlets. We also examined trends in references to individuals and organizations.19

Findings

The Competition frame predominates among the four frames under study, both in terms of raw counts, as shown in Figure 1, and as a percentage of total articles that mention AI.

● For 2020, we recorded 728 occurrences of the Competition frame as compared to 337 occurrences for the Economic Gold Rush frame, 301 occurrences for the World Without Work frame, and 96 occurrences for the Killer Robots frame.

● In 2020, the Competition frame comprised 1.3 percent of AI articles that year, as compared to 0.2 percent for the Killer Robots frame, 0.5 percent for the World Without Work frame, and 0.6 percent for the Economic Gold Rush frame.

The majority of remaining articles do not contain any identifiable frame. For example, many articles describe the activities of AI companies without any comment on AI.20

Figure 1. AI Competition Frame Occurred Most Frequently Among Contending Frames Between 2012-2020. Source: LexisNexis, 2012-2020.
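The keyword-proximity tagging described in the methodology above can be sketched roughly as follows. This is a minimal illustration, not the authors' pipeline: the term lists here are small fragments of those given in the text, and the exact matching logic (tokenization, the precise reading of the 20-character rule) lives in the authors' replication repository and may differ.

```python
import re

# Illustrative fragments of the frame term lists given in the brief;
# the full patterns are in the authors' replication code
# (github.com/georgetown-cset/contending-frames).
FRAME_TERMS = {
    "competition": ["arms race", "rivalry", "superiority", "strategic advantage"],
    "killer_robots": ["autonomous weapons", "killer robots", "slaughterbots"],
}

# Matches either the spelled-out phrase or the standalone acronym.
AI_PATTERN = re.compile(r"[Aa]rtificial [Ii]ntelligence|\bAI\b")

def invokes_frame(text, terms, window=20):
    """True if any frame term occurs within `window` characters of an AI
    mention (one assumed reading of the brief's 20-character rule)."""
    ai_spans = [m.span() for m in AI_PATTERN.finditer(text)]
    for term in terms:
        for m in re.finditer(re.escape(term), text, re.IGNORECASE):
            for start, end in ai_spans:
                # character gap between the AI mention and the frame term
                gap = max(start - m.end(), m.start() - end, 0)
                if gap <= window:
                    return True
    return False

sample = ("Escalating tensions between the U.S. and China are stoking the "
          "narrative of an all-out artificial intelligence arms race.")
print(invokes_frame(sample, FRAME_TERMS["competition"]))  # True
```

A real pipeline would also need the corpus filtering step (restricting to articles that mention AI at all) before applying frame patterns, and, as endnote 5 notes, a keyword approach of this kind inevitably admits some false positives and negatives.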
In our original analysis, which looked at outlets that cater to niche foreign policy audiences (e.g., Defense One and Foreign Affairs), we found a decline in the use of the Competition frame after 2015, suggesting that reporting on AI became more diverse among those publications. In this analysis, which includes a wider range of outlets, many of which cater to more general audiences, the continued prevalence of the Competition frame suggests a convergence around this frame. Perhaps a hardening in foreign policy discourse around AI between 2012 and 2015 among niche audiences encouraged a perception of competition among a more widespread audience.

From 2012 to 2020, as a share of articles that reference AI, the Killer Robots and World Without Work frames declined. By contrast, the Competition and Economic Gold Rush frames increased, as seen in Figure 2. As a share of articles mentioning AI:

● The Killer Robots frame peaked in 2015. In that year, the Killer Robots frame was almost as prevalent as the Competition frame; today, it is the least common among the four frames under study. The peak in 2015 coincided with significant outreach and media activity by the Campaign to Stop Killer Robots.21 The uptick in 2017 could be related to the short film "Slaughterbots," which debuted at a side event of the United Nations' Convention on Certain Conventional Weapons and went viral that year.22 While the Terminator franchise has shaped the popular imagination on AI, the trend lines suggest that media reporting on AI has increasingly adopted other frames.

● The World Without Work frame peaked in 2015. At the time, it was the third most prominent frame after the Competition and Killer Robots frames.

● The AI Competition frame also peaked in 2015 and, after an initial decline, has leveled out through 2020. This finding is consistent with our initial study.

● The Economic Gold Rush frame peaked in 2018.
Today, it is the second most common among all four frames under study. That this frame peaked later than the other frames is consistent with the theory that increasing commercialization of AI shapes public discourse around the technology. The growing prevalence of the Economic Gold Rush frame also suggests that frames centered on threat predominated early in the public discourse around AI, which only more recently turned to potential opportunities.

Figure 2. Frames Focused on AI Threats Declining while Frames Focused on AI Opportunities Increasing. Source: LexisNexis, 2012-2020.

Political leaders and tech company CEOs are the individuals most commonly mentioned in articles that use the four frames. Prominent academics, authors, and computer scientists are less frequently mentioned. In terms of specific people most frequently mentioned in an article's discussion of AI using one of the four frames:

● The political leaders most frequently mentioned in association with the frames are Donald Trump, Barack Obama, Xi Jinping, and Vladimir Putin.

● Jeff Bezos, Bill Gates, Elon Musk, Sundar Pichai, and Mark Zuckerberg are the most commonly cited tech company CEOs or former CEOs.

● Donald Trump is the individual most frequently cited in articles using the World Without Work and Competition frames. Elon Musk is the most commonly mentioned individual in articles using the Killer Robots frame.

● U.S. political leaders were frequently mentioned in articles that contained the World Without Work frame; the 10 most frequently cited individuals included two Republican leaders and five Democratic leaders.

As displayed in Table 1, companies, specifically AI companies, and U.S.
government organizations are the organizations most commonly mentioned in articles that use the four frames.23 Universities, non-profit organizations, international bodies, and foreign governments were less frequently mentioned. In terms of organizations mentioned in association with the four frames:

● Technology companies such as Amazon, Apple, Facebook, Google, and Microsoft are among the organizations mentioned most often in articles that use one of the four frames. AI companies like these appear in:

o 32 percent of articles that contain the Killer Robots frame;
o 26 percent of articles that contain the World Without Work frame;
o 36 percent of articles that contain the Competition frame; and
o 41 percent of articles that contain the Economic Gold Rush frame.

● Google is the top-mentioned organization in articles that contain the Competition and Killer Robots frames, while Amazon is the top-mentioned organization in articles that contain the World Without Work frame. The appearance of companies in articles invoking the Competition frame highlights that it describes both geopolitical and business competition.

● U.S. Government organizations (e.g., Department of Treasury, Congress) were the second most commonly mentioned organization type for the Killer Robots and World Without Work frames. Media organizations (e.g., BBC, CNN, Business Insider) were the second most common in articles using the Economic Gold Rush frame.

o In terms of specific U.S. government organizations, the U.S. Department of Defense is mentioned the most often in articles associated with the Killer Robots frame, while the United States Congress is mentioned the most often in articles associated with the World Without Work frame.

Table 1. AI Companies and U.S. Government Entities Are the Most Common Types of Organizations in Articles that Invoke a Frame. Percentage of organization mentions by frame:

Organization Type       Economic Gold Rush   World Without Work   Killer Robots   Competition
AI Company              40.7%                26.0%                31.5%           35.9%
Company (not AI)        28.9%                20.0%                8.6%            19.7%
U.S. Government         7.0%                 23.6%                25.0%           16.7%
Media                   12.1%                12.2%                12.9%           12.0%
University              3.5%                 4.9%                 9.7%            5.5%
IGO                     2.5%                 5.6%                 3.9%            2.3%
NGO                     1.3%                 2.7%                 4.6%            2.1%
Foreign Government      1.0%                 0.8%                 0.9%            1.4%
Other                   0.7%                 1.9%                 2.0%            2.2%

Source: LexisNexis, 2012-2020.24

The top news sources that contain a frame include PR Newswire (671), Forbes (450), The New York Times (312), CNN (249), Seeking Alpha (221), Morningstar (199), PRWeb (173), TheStreet.com (171), CNN International (82), Business Wire (80), Investor's Business Daily (65), CQ Roll Call (62), U.S. News & World Report (61), Fox News (59), CIO (50), ZDNet (48), The Boston Globe (45), Wired (44), Politico (42), The Christian Science Monitor (37), and Slate (28). Table 2 displays the number of times each frame was invoked by these sources.

The news outlets with the highest number of articles between 2012 and 2020 using the Killer Robots frame are The New York Times and Forbes. Those with the highest number of articles using the World Without Work frame are Seeking Alpha and Forbes. The outlets with the highest number of articles using the Competition and Economic Gold Rush frames are PR Newswire, Forbes, Seeking Alpha, and Morningstar.

Table 2. PR Newswire, Forbes, and The New York Times Publish the Most Articles that Invoke a Frame. Number of times each frame was used by source:

Media Source                    Economic Gold Rush   World Without Work   Killer Robots   Competition
PR Newswire                     208                  56                   27              380
Forbes                          96                   81                   43              230
The New York Times              7                    79                   121             105
CNN                             17                   58                   55              119
Seeking Alpha                   80                   107                  2               32
Morningstar                     66                   18                   4               111
PRWeb                           52                   14                   8               99
TheStreet.com                   37                   13                   12              109
CNN International               4                    49                   11              18
Business Wire                   41                   8                    1               30
Investor's Business Daily       12                   10                   2               41
CQ Roll Call                    7                    13                   22              20
US News & World Report          10                   12                   11              28
Fox News                        3                    29                   4               23
CIO                             9                    6                    4               31
ZDNet                           7                    3                    4               34
The Boston Globe                2                    14                   12              17
Wired                           3                    4                    15              22
Politico                        0                    7                    3               32
The Christian Science Monitor   3                    7                    17              10
Slate                           2                    3                    16              7

Source: LexisNexis, 2012-2020.

Concluding Observations

Rhetorical frames in AI serve as a barometer of public perceptions and provide insight into whether those perceptions are becoming more cooperative or competitive over time. To increase public trust in the technology, policymakers should monitor these narratives closely and take steps to respond to the concerns expressed in the evolving discourse on AI.

In the course of our research, we detected additional proto-frames worthy of study: the Eye in the Sky frame describes the use of AI for intelligence applications and surveillance technologies; the Speeding Bullet frame emphasizes the ability of AI to accelerate the pace of warfare, cyber operations and decision making; the Wolf in Sheep's Clothing frame reflects concerns about bias in the design, development and use of AI-enabled systems, including biased training data and concerns about automation bias in human-machine teams; and the House of Cards frame emphasizes the brittle nature of AI systems and their tendency to fail when encountering novel or complex environments for which they were not trained. Future research could track these and other frames or explore sub-categories of the frames under study.
For example, analysts could disaggregate the Competition frame into the categories of competitive dynamics between states and competition over talent in the private sector. By monitoring changes over time in the usage of rhetorical frames in niche and general audience sources, policymakers could more proactively shape the public discourse around AI and address legitimate concerns over safety, security, and reliability, thereby increasing public trust in the technology.

Authors

Andrew Imbrie is a senior fellow at CSET, where Rebecca Gelles and James Dunham are data scientists and Catherine Aiken is the acting director for data science.

Acknowledgments

The authors are grateful to Igor Mikolic-Torreira and Dewey Murdick for their helpful comments and suggestions. Melissa Deng, Alex Friedland, and Lynne Weil provided editorial support. Our thanks also go to Chris Meserole of the Brookings Institution and Michael Sellitto of the Stanford Institute for Human-Centered Artificial Intelligence for their excellent feedback. As ever, all errors are our own.

© 2021 by the Center for Security and Emerging Technology. This work is licensed under a Creative Commons Attribution-Non Commercial 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/4.0/.

Document Identifier: doi: 10.51593/20210010

Endnotes

1. Andrew Imbrie, James Dunham, Rebecca Gelles, and Catherine Aiken, "Mainframes: A Provisional Analysis of Rhetorical Frames in AI" (Center for Security and Emerging Technology, August 2020), https://cset.georgetown.edu/research/mainframes-a-provisional-analysis-of-rhetorical-frames-in-ai/.

2. Imbrie, Dunham, Gelles, and Aiken, "Mainframes: A Provisional Analysis of Rhetorical Frames in AI."

3. Paul Scharre and Michael C.
Horowitz, "Artificial Intelligence: What Every Policymaker Needs to Know" (Center for a New American Security, June 2018), 2, https://s3.us-east-1.amazonaws.com/files.cnas.org/documents/CNAS_AI_FINAL-v2.pdf.

4. Articles were selected from LexisNexis if they fell into source rank one (top international, national, and business news sources) or two (top regional sources) and if they were included in the categories of Press Wire, National, or Trade. There were 285 sources identified that contained articles mentioning artificial intelligence. Of these, the top sources were PR Newswire (28 percent of AI articles), PRWeb (10 percent), Morningstar (8 percent), TheStreet.com (6 percent), Forbes (6 percent), International Business Times (5 percent), Business Wire (5 percent), The New York Times (3 percent), Seeking Alpha (2 percent), Investor's Business Daily (1 percent), and the San Francisco Chronicle (1 percent).

5. While the percentage of articles mentioning AI that contain a frame may seem low, we sampled 250 articles from this set to evaluate what it contained and found only six articles that could be considered false negatives. The other articles generally fell into a frame not covered in this paper (see our discussion of further frames to consider); discussed AI in the context of advertisement for a company or product, where the only framing was that AI makes their product better; or made passing mention of AI as a feature in some product or issue that was being discussed but did not focus on it. There were also a small number of articles that were miscategorized as being about AI at all, which is one of the challenges of a keyword-based approach.

6. The LexisNexis service includes a broad range of the print and electronic media content produced. However, there are some sources not covered in LexisNexis, including The Wall Street Journal, The Washington Post, MIT Technology Review, and The Verge. These sources were not included in our analysis.
Patterns of frame use might differ between these excluded sources and those in our analysis, but we cannot enumerate the hypothetical universe of possible sources for an assessment of LexisNexis's coverage and the generalizability of our findings. We instead recommend some caution in their interpretation. Adding or removing sources from our analysis would affect its results. Similarly, we do not attempt to adjust for changes over time in the availability of sources in LexisNexis. We include in the analysis content from frame-invoking outlets that appear only in 2020, for instance, but our substantive conclusions are robust to this choice.

7. All the patterns we matched against for this and other frames can be found with the replication code for this data brief at https://github.com/georgetown-cset/contending-frames.

8. For examples of recent articles that include the frame, see Henry McDonald, "Ex-Google Worker Fears 'Killer Robots' Could Cause Mass Atrocities," The Guardian, September 15, 2019, https://www.theguardian.com/technology/2019/sep/15/ex-google-worker-fears-killer-robots-cause-mass-atrocities; Sono Motoyama, "Inside the United Nations' Effort to Regulate Autonomous Killer Robots," The Verge, August 27, 2018, https://www.theverge.com/2018/8/27/17786080/united-nations-un-autonomous-killer-robots-regulation-conference; Nita Bhalla, "Nations Dawdle on Agreeing Rules to Control 'Killer Robots' in Future Wars," Reuters, January 17, 2020, https://www.reuters.com/article/us-global-rights-killer-robots/nations-dawdle-on-agreeing-rules-to-control-killer-robots-in-future-wars-idUSKBN1ZG151; Kelsey Piper, "Death by Algorithm: The Age of Killer Robots is Closer than You Think," Vox, June 21, 2019, https://www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-weapons-ai-war; Zachary Fryer-Biggs, "Coming Soon to a Battlefield: Robots that Can Kill," The Atlantic, September 3, 2019,
https://www.theatlantic.com/technology/archive/2019/09/killer-robots-and-new-era-machine-driven-warfare/597130/; Jonah M. Kessel, "Killer Robots Aren't Regulated. Yet.," The New York Times, December 13, 2019, https://www.nytimes.com/2019/12/13/technology/autonomous-weapons-video.html.

9. Lucy Suchman, "Situational Awareness and Adherence to the Principle of Distinction as a Necessary Condition for Lawful Autonomy," Briefing Paper: CCW Informal Meeting of Experts on Lethal Autonomous Weapons, Geneva, April 12, 2016, https://eprints.lancs.ac.uk/id/eprint/86141/1/CCW_Autonomy_Suchman.pdf.

10. "The 'Killer Robots' Accountability Gap: Obstacles to Legal Responsibility Show Need for a Ban," Human Rights Watch, April 8, 2015, https://www.hrw.org/news/2015/04/08/killer-robots-accountability-gap.

11. We did not include the term "terminator" as an indicator of the frame invocation because it produced too many false positives. Nonetheless, it appeared in 71 articles within the frame, and may thus be considered a relevant term.

12. For examples, see Daniel Susskind, A World Without Work: Technology, Automation, and How We Should Respond (New York: Metropolitan Books, 2020); Carl Benedikt Frey, The Technology Trap: Capital, Labor and Power in the Age of Automation (Princeton: Princeton University Press, 2020); Alana Semuels, "Millions of Americans Have Lost Jobs in the Pandemic—and Robots and AI Are Replacing Them Faster Than Ever," TIME, August 6, 2020, https://time.com/5876604/machines-jobs-coronavirus/; Kevin Drum, "You Will Lose Your Job to a Robot—and Sooner Than You Think," Mother Jones, November/December 2017, https://www.motherjones.com/politics/2017/10/you-will-lose-your-job-to-a-robot-and-sooner-than-you-think/; Associated Press, "Over 30 Million U.S.
Workers Will Lose Their Jobs Because of AI," MarketWatch, January 24, 2019, https://www.marketwatch.com/story/ai-is-set-to-replace-36-million-us-workers-2019-01-24; Jacob Douglas, "These American Workers Are the Most Afraid of AI Taking Their Jobs," CNBC, November 7, 2019, https://www.cnbc.com/2019/11/07/these-american-workers-are-the-most-afraid-of-ai-taking-their-jobs.html; Tom Simonite, "Will AI Take Your Job––or Make it Better?," WIRED, December 16, 2019, https://www.wired.com/story/will-ai-take-your-job-or-make-it-better/; Don Reisinger, "AI Expert Says Automation Could Replace 40% of Jobs in 15 Years," Fortune, January 10, 2019, https://fortune.com/2019/01/10/automation-replace-jobs/; "Will AI Destroy More Jobs Than It Creates Over the Next Decade?" The Wall Street Journal, April 1, 2019, https://www.wsj.com/articles/will-ai-destroy-more-jobs-than-it-creates-over-the-next-decade-11554156299.

13. Greg Allen and Taniel Chan, "Artificial Intelligence and National Security" (Belfer Center for Science and International Affairs, July 2017), 36-41, https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf.

14. Yuval Noah Harari, 21 Lessons for the 21st Century (New York: Random House, 2018).

15. Remco Zwetsloot, Helen Toner, and Jeffrey Ding, "Beyond the AI Arms Race," Foreign Affairs, November 16, 2018, https://www.foreignaffairs.com/reviews/review-essay/2018-11-16/beyond-ai-arms-race.

16. Kai-Fu Lee, "Artificial Intelligence and the Future of Work: A Chinese Perspective," in Work in the Age of Data, BBVA OpenMind, https://www.bbvaopenmind.com/wp-content/uploads/2020/02/BBVA-OpenMind-Kai-Fu-Lee-Artificial-intelligence-and-future-of-work-chinese-perspective.pdf.

17. See, for example, Zeninjor Enwemeka, "Robots Won't Take Away All Our Jobs, MIT Report Finds," WBUR, September 10, 2019, https://www.wbur.org/bostonomix/2019/09/10/mit-future-of-work-report.
18. For examples of this frame, see: Frank Holmes, "AI Will Add $15 Trillion to the World Economy by 2030," Forbes, February 25, 2019, https://www.forbes.com/sites/greatspeculations/2019/02/25/ai-will-add-15-trillion-to-the-world-economy-by-2030/; "Sizing the Prize: PwC's Global Artificial Intelligence Study: Exploiting the AI Revolution," PwC, https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html; Bernhard Warner, "Artificial Intelligence Could Be a $14 Trillion Boon to the Global Economy—If It Can Overcome These Obstacles," Fortune, October 9, 2019, https://fortune.com/2019/10/09/artificial-intelligence-14-trillion-boon-only-if-overcome-one-thing/; Jacques Bughin, Jeongmin Seong, James Manyika, Michael Chui, and Raoul Joshi, "Notes from the AI Frontier: Modeling the Impact of AI on the World Economy," McKinsey, September 4, 2018, https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy#part1; Karen Hao, "If You're Thinking about Embracing AI: Just Jump In," MIT Technology Review, March 27, 2019, https://www.technologyreview.com/2019/03/27/136310/if-youre-thinking-about-embracing-ai-just-jump-in/; Vinod Mahanta, "AI Can Make an Impact Like Electricity: Andrew Ng," The Economic Times, August 26, 2017, https://economictimes.indiatimes.com/opinion/interviews/ai-can-make-an-impact-like-electricity-andrew-ng/articleshow/60227045.cms; Jibu Elias, "Andrew Ng on the Role of AI in Economic Transformation," INDIAai, September 12, 2020, https://indiaai.gov.in/article/andrew-ng-on-the-role-of-ai-in-economic-transformation; Erik Brynjolfsson, Xiang Hui, and Meng Liu, "Artificial Intelligence Can Transform the Economy," The Washington Post, September 18, 2018, https://www.washingtonpost.com/opinions/artificial-intelligence-can-transform-the-economy/2018/09/18/50c9c9c8-bab8-11e8-bdc0-90f81cc58c5d_story.html; "Bane or Boon: Artificial Intelligence and
the Workforce,” France Stratégie, June 12, 2018, https://www.strategie.gouv.fr/english-articles/bane-or-boon-artificial-intelligence-and-workforce; Charles Mizrahi, “The Economic Impact of AI Projected to Be over $14 Trillion,” Banyan Hill, January 24, 2019, https://banyanhill.com/economic-impact-ai-14-trillion/.

19 Our analysis of individual and organization mentions uses LexisNexis “semantic metadata,” which identifies references to these entities in article text. We disambiguated the references manually and similarly canonicalized the names of publication sources where they varied. For our analysis of organizations, we categorized them by hand as companies, universities, (other) non-profit organizations, international bodies, U.S. government entities, or foreign governments. The frames are not mutually exclusive, but the overlap is low. Where overlap occurred, we counted as two separate frame invocations. For example, 40 frame-invoking articles overlapped between the Competition frame and the Economic Gold Rush frame. In this case, we counted the overlap articles as 40 Competition frame articles and 40 Economic Gold Rush articles.

20 We focus here on the positive cases of frame invocation, but future research could explore the use of natural language processing tools to characterize mentions of AI in media more broadly.

21 “Concern & Support at First Committee,” Campaign to Stop Killer Robots, October 27, 2015, https://www.stopkillerrobots.org/2015/10/unga-report/.

22 Motoyama, “Inside the United Nations’ Effort to Regulate Autonomous Killer Robots.”

23 Organizations were considered AI companies if they were included in the list of AI companies compiled in Zachary Arnold, Rebecca Gelles, and Ilya Rahkovsky, “Identifying AI-Related Companies: A Conceptual Outline and Proof of Concept” (Center for Security and Emerging Technology, July 2020), cset.georgetown.edu/research/identifying-ai-related-companies/.
24 Some entities mentioned in articles were identified as non-organizations and were not included in this analysis; for this reason, percentages will not sum to 100.
44fd9620-bf4f-4d9b-aa3e-6c164f7b733f
trentmkelly/LessWrong-43k
LessWrong
A Very Concrete Model of Learning From Regrets

Warning 1: This post is written in the form of Java-like pseudocode. If you have no knowledge of programming, you might have trouble understanding it. (If you do, it still does not guarantee you will understand, but your chances are better.)

Warning 2: I have more than moderate, but less than high, confidence that this model is approximately correct. It doesn't mean that my or anyone's brain works exactly in the way shown in the code, but rather that the flow of data in the brain is approximately as if it were using such an algorithm. The word "approximately" includes stuff I don't (yet) know about, but also stuff I didn't include below to keep it simple. I wrote this specifically for regrets, but processing of positive memories seems to have similar mechanics (with different constants).

Warning 3: There is little chance of finding any existing studies/data etc. that could directly validate or invalidate this model. (However, if you know of any, I'm all ears.) There might be some stuff that is correlated, so if you know something, mention it too.

class Brain {
    ...

    // This represents a memory about a single event
    class Memory
    {
        ...

        float associatedEmotions; // positive or negative
    }

    // Your brain keeps track of this
    private Map<Memory, Float> memoriesRequireProcessing = new HashMap<>();

    // Add new stuff to the queue
    private void somethingHappened(Memory newMemory)
    {
        float affect = getAffectOfSituation(newMemory);

        newMemory.associatedEmotions = affect * 0.5;

        if (Math.abs(affect) > 0.1)
            memoriesRequireProcessing.put(newMemory, Math.abs(affect));
    }

    // You have no control over how this works,
    // but you can influence the confidence parameter
    // (mostly indirectly, a little bit directly)
    protected void learnedMyLesson(Memory m, float confidence)
    {
        float previousValue =
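For readers who don't read Java, the queueing rule above can be restated as a small runnable sketch. This is a Python translation of the pseudocode, not the author's code: the explicit `affect` argument stands in for the undefined `getAffectOfSituation`, and the 0.5 damping and 0.1 threshold constants are taken directly from the post.

```python
from dataclasses import dataclass


@dataclass(eq=False)  # identity-based hashing, so a Memory can be a dict key
class Memory:
    description: str
    associated_emotions: float = 0.0  # positive or negative


# "Your brain keeps track of this": memories weighted by absolute affect
memories_require_processing: dict[Memory, float] = {}

AFFECT_DAMPING = 0.5        # the `affect * 0.5` constant from the pseudocode
PROCESSING_THRESHOLD = 0.1  # the `> 0.1` salience cutoff


def something_happened(new_memory: Memory, affect: float) -> None:
    """Sketch of somethingHappened(): store dampened affect, queue if salient."""
    new_memory.associated_emotions = affect * AFFECT_DAMPING
    if abs(affect) > PROCESSING_THRESHOLD:
        memories_require_processing[new_memory] = abs(affect)


regret = Memory("missed an important deadline")
something_happened(regret, -0.8)   # strong negative affect: queued with weight 0.8

trivial = Memory("ate the usual lunch")
something_happened(trivial, 0.05)  # below threshold: never enters the queue
```

Note the asymmetry the model implies: how strongly a memory is felt (`associated_emotions`) and how urgently it demands processing (the queue weight) are tracked separately, which is what the truncated `learnedMyLesson` method presumably goes on to exploit.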
acba58ee-4ca2-4d72-9185-d7f7b53be053
trentmkelly/LessWrong-43k
LessWrong
Shanghai Less Wrong Meetup #2, Wednesday May 11th, 7pm

John Teddy has asked me to post the following meetup on Less Wrong:

> Shanghai Lesswrong Meeting #2
>
> Wednesday May 11th, 7pm, XuHui District, Shanghai, China (private residence not listed, please contact John Teddy)
>
> The first meeting was a success. There are plans to have regular meetings, goal setting, and various topics. Some of the planned topics are porting the mind to software, Bayes' theorem, cognitive biases, paleo diet, etc. The first meeting was at a public venue. Our next meeting will be at a private residence. Please text or call John Teddy ( http://lesswrong.com/user/johntheodore ) at 18621732925 to receive the address.
3f4cd3df-c9bf-4838-a8a6-aa107fb0ee1f
trentmkelly/LessWrong-43k
LessWrong
Building AI Research Fleets

From AI scientist to AI research fleet

Research automation is here (1, 2, 3). We saw it coming and planned ahead, which puts us ahead of most (4, 5, 6). But that foresight also comes with a set of outdated expectations that are holding us back. In particular, research automation is not just about “aligning the first AI scientist”, it’s also about the institution-building problem of coordinating the first AI research fleets.

Research automation is not about developing a plug-and-play “AI scientist”. Transformative technologies are rarely straightforward substitutes for what came before. The industrial revolution was not about creating mechanical craftsmen but about deconstructing craftsmen into assembly lines of specialized, repeatable tasks. Algorithmic trading was not just about creating faster digital traders but about reimagining traders as fleets of bots, quants, engineers, and other specialists. AI-augmented science will not just be about creating AI “scientists.”

Why? New technologies come with new capabilities and limitations. To fully take advantage of the benefits, we have to reshape our workflows around these new limitations. This means that even if AIs eventually surpass human abilities across the board, roles like “researcher” will likely transform dramatically during the transition period.

The bottleneck to automation is not just technological but also institutional. The problem of research automation is not just about training sufficiently capable and aligned models. We face an “institutional overhang” where AI capabilities are outpacing our ability to effectively organize around their weaknesses. Factories had to develop new management techniques, quality control systems, and worker training programs to make assembly lines effective. Trading firms had to build new risk management frameworks, compliance systems, and engineering cultures to succeed at algorithmic trading.
So too, research institutions will need to reinvent themselves around AI or fal
f112119c-2db4-463a-b3b7-66a7289f47fb
trentmkelly/LessWrong-43k
LessWrong
On The Effectiveness of Ferriss

Tim Ferriss has been systematically quoted on Less Wrong. How to make money to donate utilons and show you care is a persistent topic on Less Wrong. Yet no one here seems to have either tried, or assessed, the feasibility of Tim Ferrissing life (for instance, by checking for people who tried, without the obvious survivor bias displayed in Ferriss's own website).

A 30% probability of earning $12,000 per month working 10 hours per week, after a build-up time of 4 months working 10 hours a day to get it started (having fun while figuring out how capitalism works anyway), seems like a fair bet. My prior for the feasibility described in the above paragraph is about 15%. Should my posterior be above the 30% threshold?

Data, anyone? Different prior, anyone? Lone bystander bias, everyone?
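The closing question is a standard Bayesian update, which can be made concrete in the odds form of Bayes' theorem. A minimal sketch (the language choice is mine; the 15% prior and 30% threshold come from the post, and any concrete likelihood ratio plugged in would be an assumption the reader supplies from their own evidence):

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)


PRIOR = 0.15      # prior that the Ferriss plan is feasible
THRESHOLD = 0.30  # the "fair bet" threshold from the post

# Likelihood ratio the evidence would need to supply to clear the threshold:
required_lr = (THRESHOLD / (1 - THRESHOLD)) / (PRIOR / (1 - PRIOR))
# required_lr works out to 17/7, roughly 2.43
```

So the posterior clears 30% only if the success stories are at least about 2.4 times likelier in a world where the plan works than in one where it doesn't, and survivor-biased testimonials make that ratio hard to establish.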
ca83c240-c715-407b-85b5-b1374910fd8f
StampyAI/alignment-research-dataset/special_docs
Other
NeurIPSorICML_q243b-by Vael Gates-date 20220318

# Interview with AI Researchers NeurIPSorICML_q243b by Vael Gates

**Interview with q243b, on 3/18/22**

**0:00:00.0 Vael:** Cool. Alright. So my first question is, can you tell me about what area of AI you work on in a few sentences?

**0:00:09.0 Interviewee:** Of course. I did my PhD in optimization and specifically in non-convex optimization. And after that I switched topics quite a lot. I worked in search at [company] [...] And now actually I work in [hard to parse] research, so I kind of come back to optimization, but it's more like a kind of weird angles of optimizations such as like meta-learning or future learning, kind of more novel trends like that.

**0:00:42.1 Vael:** Cool. And next question is: what are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?

**0:00:51.1 Interviewee:** Well, the benefit is that AI has slowly but surely kind of taken over a lot of problems in the world. And in fact pretty much any problem where you need to optimize something, you can use AI for that. I'm more of a traditional machine learning person, in the sense that... Currently I include everything, including not necessarily neural networks, also like logistic regressions, decision trees, all those things. And I think those things have been grossly unutilized, I would say, because a lot of problems right now in machine learning that people think, "Okay, should this solve this neural networks?" but in reality, just in decision trees. But stay the same because of the current trends and because of the current hype of neural networks as well, they kind of came along with it as well. All the publicity and marketing that, you know, that AI should have, honestly. And I think more and more companies realize that you can solve the problem with just simple solutions. And I think that will be a really exciting part.
So just like, to answering to your question yeah, I'm excited about the fact that machine learning has just become more and more and more ubiquitous. It becomes like almost prerequisites for any big company or even a smaller company. And the second part, what I'm worried the most. I don't know, that's a good question.

**0:02:07.7 Interviewee:** I mean, I guess like, I mean, I don't share those worries that AI would dominate us and we would all be exterminated by overpowerful AI. I don't think AGI is coming anytime soon. I think we're still doing statistics in a way, like some kind of belong to this camp will just think, we're still doing linear models and I don't believe the system is conscious or anything of those sorts. I think the most malicious use, like, I mean, especially now currently with the war, I see more and more people using AI for malicious sense. Like not necessarily they will be, you know, we're going to have next SkyNet coming, but in the bad hands, in the bad actors, you know, AI can serve not a good purpose in war. Like for example, you know, like now with drones, with, you know, the current war, for example, in Ukraine is more and more done with drones. And drones have automatic targeting, automatic navigation. And yeah, so that's kind of not necessarily a good thing and they can become more and more dramatic, and more automatized and they can lead to harm.

**0:03:06.4 Vael:** Yeah, it makes sense. Lot of strong misuse cases. So focusing on future AI, putting on like a science fiction, forecasting hat, say we're 50 plus years into the future, so at least 50 years in the future, what does that future look like?

**0:03:19.5 Interviewee:** Well, I hope... I mean I kind of, you know, I really like that, my favorite quote in a... And probably that one of the quotes that I really like is that the arc of justice is nonlinear, but it kind of bends towards justice. I really like it.
And I really hope in 50 years we would actually figure out first of all, the way to harness our current problems and make not necessarily disappear, but at least make it controllable such as nuclear weapons and the global warming. And that's in 50 years, I think it's a reasonable time. Again, not to solve them, but just to figure out how to harness these issues. And AI should definitely help that. And do you want me to answer more specifically, like more, like give you ideas of how I think in 50 years the world would look like? (Vael: "Yeah, I'll take some more specifics.")

**0:04:01.3 Interviewee:** Alright. I mean, I think that one of my exciting areas that I think that right now is already kind of flourishing a little bit, and it's large language models. It's a current trend, so it's kind of an easy, like on the surface thing to talk about, and I think I... As a [large tech company] employee, I can see how they've been developed over like two years, things changing dramatically. And I think these kind of things are pretty exciting. Like having a system that can talk to you, understand you, respond not necessarily with just one phrase, but like accomplish tasks that you wanted to accomplish. Like now it's currently in language scenarios, but I also, within 50 years definitely anticipate it could happen in like robotics, like a personal assistant next to you or something like that. Another area I'm really excited about is medicine.

**0:04:46.1 Interviewee:** I think once we figure out all the privacy issues that surround medicine right now, and we're able to create like, clean up database, so to speak, of patients diagnoses. And I hope that it'll be enough for a machine learning model to solve like cancer as we know it and things like that. I'm just hopeful. I mean, I hope it's going to happen in 50 years and it's going to, I don't know if I want to place my bet there, but I'm hoping that would happen.
So I guess in robotics, as well, as I said, one of the things that we're kind of inching there, but not quite there, but I think in 50 years, we'll solve it. So I think these three things: personal assistant, solving medicine and robotics, these three things.

**0:05:28.2 Vael:** Wow. Yeah. I mean, solving robotics would be huge. Like what's an example of... could you do anything that a human could do as a robot or like less capable than that?

**0:05:36.0 Interviewee:** I think so. I mean, it depends. It depends what you mean by human, right? I mean the... Well, if you try to drive a car for the last 20 years. We've been trying to do that, but honestly, I think this problem is really, really hard because you have to interact with other agents as well. That's kind the main thing, right? You have to interact with other humans mostly. I mean, I think interaction between robots, it's one thing, interaction between robots and robots is much easier. So I think whatever task that doesn't involve humans is actually going to be pretty useful. Well, again, actually pretty easy because it hasn't been solved yet, but I think it's much easier than solving with humans.

**0:06:05.8 Interviewee:** And like for example organizing your kitchen, organizing your room, cleaning your room, cooking for you, I think all the things should be pretty straightforward. Again, the main issue here is that every kitchen is different, so although we can train a robot to a particular kitchen or like do some particular kitchen, like once it's presented with a novel kitchen with novel mess... mess is very personal. So it'd be harder for the robots to do. But I think that's something that would be kinda within the reach, I think.

**0:06:37.7 Vael:** Interesting, and for like, solving cancer, for example, I imagine that's going to involve a fair amount of research per se, so do we have AIs doing research?
**0:06:47.2 Interviewee:** So research, I want to distinguish research here because there is research in machine learning and there is research in medicine. And they are two different things. The research in medicine, and I'm not a doctor at all, but from what I understand, it's very different, in the sense that you research particular forms of cancer, very empirical research. Like hey we have... Cancer from what I understand, one of the main issues with cancer is that every cancer is more or less unique.

**0:07:11.8 Interviewee:** It's really hard to categorize it, it is really hard to do AB testing. The main research tool that medical professionals use is AB testing, right, you have this particular group of cancers, group of people, that suffer from this particular cancer. Okay, let's just come up with a drug that you can try to put these people on trial, and do that. But because every cancer is unique, it's pretty hard to do that. So, and how to do this research is data, and that's what we need for machine learning, we need to have sufficient data such that machine learning can leverage that and utilize it. So they're now asking questions in two perspectives, one is do we need more data? Yes, absolutely.

**0:07:46.3 Interviewee:** Moreover we not only need data, we need to have rules for which these machine learning agents, that of a company, university, would have access to this data differentially private right in the sense that this should be available to them. But is possibly private, of course privacy is a big issue. Which right now doesn't really happen, plus there are other bureaucratic reasons for this not to happen, like for example hospitals withholding the data because they don't want to share it and stuff like that.

**0:08:16.3 Interviewee:** So if we can solve this problem, the research and medical part would be not necessarily... Not necessary for the machine learning.
And on the machine learning side, there is also, as well, it's very big hurdles in the sense that current machine learning algorithms needs tons of data. Like for the self-driving cars, they're still talking about we need millions and millions of hours cars driving on the road, and they still don't have enough. So for cancer, that's kind of not be the case hopefully. Right? So hopefully we're going to come up with algorithms that work with fewer data. Like one of the algorithms is so called few-shot algorithms, so when you have algorithms that learn on somebody's language, but when you want to apply to a particular patient in mind, you just need to use specific markers to adjust your model to the specific patient. So there are some advancement in this way too but I think we are not there yet.

**0:09:07.1 Vael:** Interesting. Cool, alright, so that's more... It's not like the AI is doing research itself, it's more that it is, like you're feeding in the data, to the similar types of algorithms that already exist. Cool, that makes sense. Alright so, I don't know if you're going to like this, but, people talk about the promise of AI, by which they mean many things but one of the things is that... The frame that I'm using right now is like having a very general capable system, such that they have the cognitive capacities to replace all current day human jobs. So whether or not we choose to replace human jobs is a different question. But I usually think of this as in the frame of like, we have 2012, we've the neural net... we have AlexNet, the deep learning revolution, 10 years later with GPT-3 which has some weirdly emergent capabilities, so it can do text generation, language translations, some coding, some math.

**0:09:51.9 Vael:** And so one might expect that if we continued pouring all the human effort that has been going into this.
And nations competing and companies competing and like a lot of talent going into this and like young people learning all this stuff. Then we have software improvements and hardware improvements, and if we get optical and quantum at the rate we've seen that we might actually reach some sort of like very general system. Alternatively we might hit some ceiling and then we'd need do a paradigm shift. But my general question is, regardless of how we get there, do you think we will ever get to a very general AI system like a CEO or a scientist AI? And if so, when?

**0:10:23.6 Interviewee:** So, my view on that is that it's really hard to extrapolate towards the future. Like my favorite example is I guess Elon Musk... I heard it first from Elon Musk but it's a very known thing. Is that, "Hey we had like Pong 40 years ago, it was the best game that ever created, which was Pong, it was just like pixels moving and now we have a realistic thing and VR around the corner, so of course in 100 years we will have like a super realistic simulation of everything, right? And of course in a 1,000 years we'll have everything, [hard to parse] everything."

**0:10:53.0 Interviewee:** But again it doesn't work this way. Because the research project is not linear, the research progress is not linear. Like 100 years ago Einstein developed this theory of everything, right, of how reality works. And then yet, we hit a very big road block of how exactly it works with respect to the different scale, like micro, macro and micro and we're still not there, we propose different theories but it's really hard. And I think that science actually works this way pretty much all around the history it's been like that, right. You have really fast advancement and then slowing down. And in some way you have advancement in different things.

**0:11:26.8 Interviewee:** Plus the cool thing about research is that sometimes you hit a road block that you can't anticipate.
Not only there are road blocks that you maybe don't even imagine there are, but you don't even know what they could be in the future. And that's the cool part about science. And honestly, again, I think if we are indeed developing AGI soon, I think it's actually a bad sign. Honestly, I think it's a bad sign because it means that it's... It's like too easy, then I'll be really scared: okay what's next? Because if we developed some really super powerful algorithm that can essentially super cognitive... Better and better cognition of humans, I think that will be scary because then I don't even know. First of all, my imagination doesn't go further than that, because exactly by definition it will be smarter than me, so I don't even know how to do that. But also I think it means that my understanding of science is wrong. Another example I like is that someone said, if you're looking for aliens in the universe right now and then this person says, if we actually do discover aliens right now, it's actually a very bad sign. It's a bad sign in the sense that...

**0:12:29.7 Interviewee:** If they're there, it means that the great filter, that whatever concept of great filter, right, that we're kind of in front of it, not some behind us, it is in front of us. Just means there's some really big disaster coming up, because it actually, if aliens made it as well, this mean that they also pass all the filters behind us, it mean that some bigger filters in front of us. So I kind of belong to that camp. Like I'm... I'm kinda hoping that the science will slow down. And we'll not be able to get there. Or there's going to be something... It's not that I think that human mind is unique and we can't reproduce it. I just think that it's not as easy as we think it would be, or like in our lifetime at least.

**0:13:05.0 Vael:** I see. So maybe not in our lifetime. Do you think it'll happen in like children's lifetime?
**0:13:10.1 Interviewee:** Which children? Our children hopefully not. But I mean, at some point I think so. But again, I think it'll be very different form. Humans' intellect is very, very unique, I think, and because it's shaped by evolution, shaped by specific things, specific rules. So I also kind of believe in this, in the theory that in a way computers already... They are better than us because they are faster, to start with, and then they can... Another example I really like is that if you remember the AlphaGo playing Go with Lee Sedol, like one of the best two players of Go. And there was a...

**0:13:43.0 Interviewee:** If you remember the Netflix show, there was like in one room they actually have all the journalists and they were sitting next to Lee Sedol playing with the person that represents DeepMind. And then all the DeepMind engineers and scientists, they were in a different room. And in that room, when they were watching the game playing, [in] that room the computer said by the move number 30, very early in the game, it says, okay, I won. And it took Lee Sedol another like half an hour or more, another like a hundred moves to confirm that statistic. And they were... The DeepMind guys were celebrating and these guys were like all thinking about the game, how to... But the game was already lost.

**0:14:16.6 Interviewee:** So computers are already bett--... I mean, of course it's a very constrained sandbox around the Go game. I think it's true for many things, computers are already better than us. We are more general in our sense of generality, I guess. So maybe they will go in different direction... But the world is really multidimensional and the problems that we solve are very multi-dimensional. So I think it's too simplistic to say that, then you're universally better than us, or we are clearly subset and they are superset of our cognition. It's, I don't know... I think it [hard to parse].
**0:14:44.9 Vael:** Great. Yeah. I'm going to just hark back to the original point, which was when do you think that we'll have systems that are able to like be a CEO or a scientist AI?

**0:14:55.0 Interviewee:** Okay. Yeah, sure. Again, sorry-- sorry for not giving you a simple answer. Maybe that's what you're looking for, but let me know if this is... (Vael: "Nah, it's fine,")

**0:15:06.5 Interviewee:** Yeah. I don't know. In a way like... The work that like accountant does right now, it's very different than what accountant did 30 years ago. Did we replace it? No, we didn't. We augmented work. We changed the work of accountant so that the work is now simpler. So replacing completely accountant, in a way, yes, we also... Because the current, the set of tasks that accountant did 30 years ago, it's automated already. Do we still need accountants? Yes. So same here. Maybe the job that CEO is doing right now in 30 or 40 years, everything that right now, as of today, CEO is doing in 40, 30 years, we will still... The computer will do it. Would we still need the human there? Yes. If this answers your question.

**0:15:45.1 Vael:** Will we need the humans? I can imagine that we can have like, eventually AI might be good enough that we could have it do all of our science and then, or it's just so much smarter than us then we're just like, well, you're much faster at doing science. So I'll just let you do science.

**0:15:57.8 Interviewee:** So let me rephrase your question a bit. So what you're saying right now is it is a black box right now, that right now, that's CEO right now, that's a CEO job. That's what CEO is doing. There's some input, then some output. So what you're saying that now we can automate it. And now the input and output will also feed through something to computer let's say, but then what is, what would be the... We'll have to refine the input and output then, because it still should serve humans. Right?
**0:16:21.9 Interviewee:** So previously you need to have drivers, like for the tram you're having to have a driver. Now instead of drivers, you have computers, but you still need to have a person to supervise the system in a way. Or, but then you're talking about even that being automated. But in same time, you cannot... Like the system, like for example, self-driving car, it's become a tool for someone else. So you're removing the work of a driver, but you replace it with a system that now it's called something else. Like Uber. Previously, you had to call a taxi. Now you have an app to do it for you.

**0:16:49.7 Interviewee:** There's an algorithm that does it for you. So the system morph into something else. So same thing here. I think as CEOs, in a sense, they might be replaced, but the system also would change as well. So it won't be the same. It won't be like, okay, we'll have a Google. And there is a CEO of Google who is like robot. Now the Google will morph in a way that the task the CEO is doing would be given the computer. But the Google will still... Like also, by the way, Google, even Google. Google works on its own. In fact, if you, right now, fire all the employees, it'll still work a few days. Everything that we do, we do for the future. Like it's pretty unique moment in history, right? 'Cause previously, like before the industrial revolution, you had to do things yourself. Then with the factories and factories well then, okay, you're helping factory to do its work. And now there is a third wave, whatever, fourth wave, industrial revolution. We don't even do anything. It's on... In a way Google doesn't have a CEO, the Google CEO doesn't work for the today Google. Google CEO works for the Google in a year which is... So Google work... Google is already that. Google doesn't have a CEO. So that's what I mean.

**0:17:56.8 Vael:** Alright.
Uh, I'm going to do the science example, because that feels easier to me, but like, like, okay. So we're like doing... We're attempting to have our AI solve cancer and normally humans would do the science. And be like, okay, cool I'm going to like do this experiment, and that experiment, and that experiment. And then at some point, like we'll have an AI assistant and at some point we'll just be like, alright, AI solve cancer. And then it will just be able to do that on its own. And it's still like serving human interests or something, but it is like kind of automated in its own way. Okay. So do you think that will... When do you think that will happen?

**0:18:34.0 Interviewee:** The question is how with... The question is, can this task be, sorry I know you're asking about the timeline and I want to be, I know... I don't want to ramble too much. But I think I want to be specific enough, what kind of problem we're talking about. If we're talking about the engineering problem, that we're talking about the timeframe of our lifetime. If you're talking about a problem that involves more creativity, like for example, come up with a new vaccine for the new coronavirus? Sorry, it's automatic. I think that work we could do in the 20... 20-30 years. Right, because we have tools, we know engineering tools, what needs to be done, where you can do ABC, you're going to get D. Once you have D you need to pass it through some tests and you're going to get E and that pretty much automate. I think this we can do in 20-30 years. Solving cancer, I just don't know enough. How much creativity needs to be there. So more harder, probably, yeah. Yeah, yeah.

**0:19:21.0 Vael:** Yeah, no, no, that's great. And yeah, and you don't know when we'll... For... So probably more than our lifetime, or more than 30 years at least for creating those?

**0:19:28.9 Interviewee:** Mm-hmm. Mm-hmm.

**0:19:30.1 Vael:** Alright, great. Cool.
Alright, I'm moving on to my next set of questions. So imagine we have a CEO AI. This is... I'm still going back to the CEO AI even though... (Interviewee: "Sure, of course.")

**0:19:40.8 Vael:** And I'm like, "Okay CEO AI, I want you to maximize profits and try not to run out of money or exploit people or... try to avoid side effects." And this currently is very technically challenging for a bunch of reasons, and we couldn't do it. But I think one of the reasons why it's particularly hard is we don't actually know how to put human values into AI systems very well, into the mathematical formulations that we can do. And I think this will continue to be a problem. And maybe, and... I think it seems like it would get worse in the future as like the AI is optimizing over, kind of more reality, or a larger space. And then we're trying to instill what we want into it. And so what do you think of the argument: highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous.

**0:20:28.3 Interviewee:** Okay, so this is a quite big misconception of the public is that, and I think actually, it's rightfully so, because I am actually working on this, is that... So what you said right now is like the famous paperclip example, right? Which is going to turn the whole world in particular, and that's kind of what you said more or less, right? So the problem here is that current system, a lot of them, it's true, they work on a very specific one dimensional object. There's a loss function that we're trying to minimize. And it's true, like GPT-3 and all these system, currently they are there, they have only one number, it want to minimize. And this is, if you think about it, is way too simplistic for this very reason, right? Exactly, because if you want to just maximize number of paperclips, you're just going to turn the whole world into the paperclip machine factory. And that's the problem.
But the reality is much more complicated. And in fact, we are moving there, like my research was, we're moving away from that. We're trying to understand, we're trying to understand if first of all intelligence can be emerge on its own without it being minimized explicitly.

**0:21:20.2 Interviewee:** And second of all, these pitfalls, the pitfalls that you're just minimizing one number and of course, not going to work. So answering your question, yeah, yeah I think it will fail, because it will face the real world scenario unless it has specific checks and balances. For example, there's also online learning paradigm, right, where you basically learn from every example as they come in the time series. I think this system need to be revamped to work in a larger scale obviously, but this is the kind of system that potentially could work, where you don't just minimize, maximize one objective, but you have a set of objectives or you have a set of actors with their own goals and their intelligence is emerging from them. Or you learn online. You just fail and you learn and you forget what you learned, you have... learn in a continuous fashion. So like all of these things that we as a humans do, could be applicable for AI as well. Like we are the humans, we don't have... You don't spend your day like, go to sleep, okay, today, day was like 26. You don't do that. And even if you do that, you probably will have multiple numbers. This was 26, this was 37. Okay, it doesn't matter the day.

**0:22:19.3 Vael:** Yeah, that makes sense. So, one thing I'm kind of curious about is there's... like, when you say, it won't work, just optimizing one number. And will it not work in the sense that, we're trying to advance capabilities, we're working on applications, Oh, turns out we can't actually do this because we can't put the values in. And so it's just going to fail, and then it will get handled as part of the economic process of progress.
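The two alternatives the interviewee sketches here — optimizing a weighted set of objectives rather than "one number", and learning online from each example as it arrives — can be combined in a minimal toy. This is my own construction, not anything from the interview: the weights, penalty term, and learning rate are illustrative assumptions.

```python
import random

random.seed(0)

# Two objectives instead of one: a task loss plus a weighted "cost" penalty
# (the penalty is a stand-in for a safety or side-effect constraint).
W_TASK, W_PENALTY = 1.0, 0.1

def loss(pred, target):
    return W_TASK * (pred - target) ** 2 + W_PENALTY * abs(pred)

# Online learning: a 1-parameter model updated after every incoming example,
# rather than fit once on a fixed batch.
w, lr = 0.0, 0.05
for _ in range(500):
    x = random.uniform(-1.0, 1.0)
    target = 2.0 * x                       # true relationship in the stream
    pred = w * x
    # gradient of the combined loss with respect to w
    grad = 2.0 * (pred - target) * x + W_PENALTY * (1.0 if pred > 0 else -1.0) * x
    w -= lr * grad

print(round(w, 2))  # close to 2, pulled down slightly by the penalty term
```

Nothing here resolves the alignment worries being discussed — it only shows the mechanical difference between minimizing one number and trading off several, updated continuously.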
Or do you think we'll go along pretty well, we'll go along fine, things seem fine, things seems like mostly fine. And then like if things just diverge, kind of? Kind of like how we have recommender systems that are kind of addictive even though we didn't really mean to, but people weren't really trying to make it fulfill human values or something. And then we'd just have the sort of drift that wouldn't get corrected unless we had people explicitly working on safety? Yeah, so, yeah, what do you think?

**0:23:06.4 Interviewee:** I think people who work on safety, and you could see it yourself, people who work on safety, people who work on the fairness, people who work on all the things, checks and balances, so to speak, right? They are becoming more and more prominent. Do I know it's enough? No, it's not enough, obviously we need to do more for different reasons. For obviously DEI, but also for just privacy and safety and other things. And also for things we just talked about, right? Just because it's true that the fact that... By the way, the fact that we have only the current system like GPT-3 or many other algorithms minimize only one value, it's not a feature, it's a bug. It's just convenience because we use the machine learning optimization algorithms that work in this way, and we just don't have other ways to do that. And I hope in the future other things would come up.

**0:23:46.0 Interviewee:** In answering your question, I don't think it will necessarily diverge, we'll just hit a roadblock and we're already hitting them. You've heard about AlexNet like 10 years ago, and now, sure, we have acute applications and like filters on the phone, but like, did AI actually enter your life, daily life? Well, not true, I mean, you have better phone, they're more capable, but actually AI, like in terms of that we all dream about, does it enter your life? Well, not really. We can live without it, right?
So, we're already hitting all these roadblocks, even like medical application. Google 10 years ago, claimed they'd solved like the skin cancer when they can detect it, and it didn't... It didn't really see the light of day except for some hospitals in India, unfortunately. So we're already hitting tons of roadblocks, and I don't think it's... It's like for this reason precisely, because when you face reality, you just don't work as good as you expect for multiple reasons.

**0:24:30.4 Vael:** Interesting. Cool. So do you think that others... This... Have you heard of the alignment problem? Question one.

**0:24:38.1 Interviewee:** No.

**0:24:40.7 Vael:** No. Cool, all right, so you've definitely heard of AI safety, right?

**0:24:43.5 Interviewee:** Mm-hmm.

**0:24:43.9 Vael:** Yeah. Alright. So one of the definitions of alignment is building models that represent and safely optimize hard-to-specify human values, alternatively ensuring that AI behavior aligns with system designer intentions. So I'm... So one of the questions, the question that I just kind of asked, was trying to dig at, do you think that the alignment problem per se, of trying to make sure that we are able to encode human values in a way that AIs can properly optimize, that that'll just kind of be solved as a matter of course, in that we're going to get stuck at various points, then we're going to have to address it. Or do you think that'll just like... We will... It won't get solved by default? Things will continue progressing in capabilities, and then we'll just have it be kind of unsafe, but like, "Uh, you know, it's good enough. It's fine."

**0:25:21.1 Interviewee:** I think a bit of both. There's so much promise and so much hype and so much money in pushing AI forward. So I think a lot of companies will try to do that. These various...
We live in democracy, fortunately or unfortunately or actually we live in more like value of dollar, unfortunately, our society. At least in some countries are valued by progress. And especially companies, they have to progress, they have to advance and this is one of the easiest ways to advance. But I think some companies may be bad actors, whatever, they will try to push it to the limit. But these questions are ultimately unsolved, in a way this current system are designed, for the reason we discussed. So I think it will be a bit of both. Some companies will back down, some companies will try to push it to the limit, so we'll see. Depends on applications as well. I mean, some applications are safe. If you... I'm sorry, but... Sorry to bring it up, but for example, there was a case of AI in Microsoft when they released the bot and the bot start cursing. Which is okay. It's a cute example, they should have done a bit of PR loss, it's fine, but it's different from the car crashing you into that tree. So depends, depends on the application.

**0:26:27.4 Vael:** Yeah, it seems definitely true. Alright, so next argument is focusing on our CEO AI again, which can do multi-step planning, and it has a model of itself in the world. So it's modeling other people modeling it, because it feels like that's pretty important for having any sort of advanced AI that's acting as a CEO. So the CEO is making plans for the future, and it's noticing as it's making plans for its goal of maximizing profits with constraints that some of its plans fail because it gets shut down. So we built this AI so that it has the default thing where you have to check with humans before executing anything, because that seems like a basic safety measure. And so the humans are looking for a one page memo on the AI's decision.
So the AI is thinking about writing this memo, and then it notices at some point that if it just changes the memo to not include all the information, then maybe the AI will be... I mean, then the humans will be more likely to approve it, which means it would be more likely to succeed in its original goal.

**0:27:22.8 Vael:** And so, the question here is... So this is not building self preservation into the AI, it's just like AI as an agent that's optimizing any sort of goal and self preservation is coming up as an instrumental incentive. So, what do you think of the argument, "Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals and this is dangerous?"

**0:27:42.9 Interviewee:** Well, you already put intelligence and the human level into the sentence as well.

**0:27:48.3 Vael:** Yes.

**0:27:48.6 Interviewee:** Yeah, and it kind of... I'm already against that, because I don't think the system would actually behave in a way, like a sneaky way to avoid. Well, first of all, the current AI, even currently AI systems are highly uninterpretable. It's very hard to interpret what exactly is going on, right? But it still work within the bounds, right? So the example I gave you was Lee Sedol, when the Go game, already knew it's won, but it couldn't explain why it won, and it take human to parse it. Or another example that I like, in the chess, AI, at some point, put a queen in the corner, something you just never do as a human. And it couldn't explain, obviously, in fact. But it never pick up the board and crash it on the... It will work within the bounds.

**0:28:29.1 Interviewee:** So of course, if the bounds of this program allows the program to cheat in a way and withhold information, then yes, but... Again, it kinda works pair in pair. On one side it's really hard to interpret, so this one-pager that AI has to provide, it also has to be curated.
And it will not include all information because it's impossible to digest all the computer memory and knowledge into one page. So on the one side, this page will always be limited and with all this lossy compression of the state of computer. But on the other side, I don't think computers can on purpose cheat on this page. Actually they might, depending on the algorithm[?], again, but I think it's a valid concern. That's an easy question, an easy answer, but it's depending on the system, depends how you design it.

**0:29:09.5 Vael:** Yeah, so I think I'm trying to figure out what exactly is in this design system. So one thing is it has to be very capable, and it has to... I want it to be operating over like reality in a way that I expect that CEOs would, so its task is interacting... It's doing online learning, I expect, and it's interacting with actual people. So it's giving them text and it's taking in video and text, and interacting with them like a CEO would. And it does have to, I think, have a model of itself in the world in order for this to happen and to model how to interact with people. But if we have AI to that extent, which I kind of think that eventually we'll develop AI with these capabilities. I don't know how long it will take, but I assume that these are commercially valuable enough that people will eventually have this sort of system that... This is an argument that like... And I don't know if this is true, but any agent that's optimizing really hard for a single goal will at some point, like, figure out if it's smart enough to... That it should like try to eliminate things that are going to reduce the likelihood of its plan succeeding, which in this case may be humans in the way.

**0:30:10.5 Interviewee:** I think you're right.
Actually, while you were talking I also came up this example that I really like about, maybe you saw it as well, where there's an agent that optimized solving a particular race game and you control the car. At some point found a loophole, you remember [hard-to-parse] example, and find a loophole and just goes in circles. And the answer to that, you need to have explicit test goals. But in online learning settings, it's really hard. Plus, again, coming back to question number one, which is like, the system is so large at some point, you're just not able to cover all the cases. So yeah, yeah, I think so, I think it's possible, especially today in online learning fashion when you can't really have a complete system that you have all integration tests possible, and then you ship it. Once a system that automatically learns and updates, then it becomes... That could be a problem. Yeah, I agree.

**0:30:52.8 Vael:** Yeah. So this boat driving race is actually one of the canonical examples of the alignment problem, as it were, which is like-- you put in the right example into it. There's another part of the... so that's one version of "outer alignment", which is where the system designer isn't able to input what it wants into the system well enough, which I think gets harder as the AI gets more powerful. And then there's an "inner alignment" kind of issue that people are hypothesizing might happen, which is where you have an optimizer that spins up another optimizer inside of it, where the outer optimizer is aligned with what the human wants, but it now has this inner optimizer. The canonical example used here is how evolution is in some sense the top-level optimizer and it's supposed to create agents that are good at doing inclusive reproductive fitness and instead we have humans. And we are not optimizing very hard for inclusive reproductive fitness and we've invented contraceptives and stuff, which is like not very helpful.
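The boat-race loophole discussed above can be caricatured in a few lines. This is my own toy reconstruction, not the actual racing environment: the proxy reward counts respawning checkpoints, so a planner that ranks plans purely by that score prefers circling forever over finishing the race.

```python
# Proxy reward: +1 per checkpoint visited (checkpoints respawn, so loops pay off).
def proxy_score(plan):
    return sum(1 for step in plan if step == "checkpoint")

# What the designer actually wanted: the plan should end at the finish line.
def true_objective(plan):
    return plan[-1] == "finish"

finish_race = ["checkpoint", "checkpoint", "finish"]   # intended behaviour
loop_forever = ["checkpoint", "checkpoint"] * 5        # the loophole

# An optimizer that only sees the proxy picks the loophole.
best = max([finish_race, loop_forever], key=proxy_score)
print(proxy_score(best), true_objective(best))  # 10 False
```

The "explicit test goals" the interviewee mentions would amount to checking `true_objective` as well as `proxy_score` — which works only for the failure modes the designer thought to test for.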
And so people are worried that maybe that would happen in very advanced AIs too, as we get like more and more generalizable systems.

**0:31:51.0 Interviewee:** Yeah. I think that there are two things to say, first of all, is what you said about loop in the loop. Now we're talking about what exactly system, what kind of a system can an algorithm create, right? Because for example, if you look at the current machine learning, we do know that for example, convolutional layers, they are good at computing equivariances, like transitional equivariances. If you move the object, they're supposed to be indifferent, but [hard to parse]-connected layers don't do that. So this [hard to parse] behavior you described? You need to have a system, first of all, that's capable enough for this kind of behavior. That's a big if, but okay. Once we get there, if we get there, the second question is, okay, can you cover for that? Can you figure out that these cases are eliminated completely from existence?

**0:32:32.4 Interviewee:** And the example of CEO is maybe is a good example. For me, the interesting example that is... that I can definitely see and envision, not even CEO, but like for example, application that controls your behavior in a way. Like, for example, curate your Tinder profile or curate your inbox sorting. And control you through that. Then yeah, for sure. Yeah. If you don't control for everything, it can be smarter than us and kind of figure out, back-engineer, how humans work, because we're not [hard to parse] than that. And curate the channel for us and even... Get us to the local minimum we might not want, but we'll still maximize its profits, whatever the profit means for the computer.

**0:33:07.2 Vael:** Okay. Yeah.
So I'm one of those people who is really concerned about long term risk from AI, because I think there's some chance that if we continue along the developmental pathway that we have so far, that if we won't solve the alignment problem, we won't figure out a good way in order to like encode human values in properly and include the human in the loop properly. Like one of the easiest solutions here is like trying to figure out how to put a human in that loop with a AI, even if the AI is vastly more intelligent than a human. And so people are like, Oh, well maybe if you just train an AI to like work with the human and translate the big AI so that like the human understands it, and this is interpretability, and then you have like a system who's training that system and maybe we can recursively loop up for something. But.

**0:33:52.2 Interviewee:** The problem here, why they see, like, for example, let's take a very specific example. There's AI systems that, for example, historically curated, curated by humans, obviously without bad intent, but it gives bad credit scores to black population say, like people of color. That's really bad behavior. And this behavior is kinda easy to check because you have statistics and you look at statistics. It's one, let's call it one hop away. So you take the data, you take statistics, is done. The second hop, the two hops away would be: it creates dynamics that you can check, not right away, but later that potentially show something like that. It would be harder for humans to check. You can also, if you think about it for a while, you can come up with like three hop, right. Something that creates something that creates something that it does. So it's much easier and much harder to check. You don't know until it happens, and that's the point. So you have this very complicated dynamic you can't... There's not even flags that you can check, red flags that you can check in your model. That might be the issue.
**0:34:49.3 Vael:** Yeah. There's also... Yeah. Interpretability seems really... It seems really important, especially as very, very intelligent systems, and we don't know how to do that. So possible versions of things going badly, I think, are... So if you have an AI system that's quite powerful and it's going to be instrumentally incentivized to not let itself be modified, then that means that you don't have that many chances to get it right in some sense, if you're dealing with a sufficiently intelligent system. Especially also because instrumental incentives are to acquire resources and influence and so... and also improve itself. Which could be a problem maybe of recursive self-improvement. And then it, like, can get rid of humans very quickly if it wanted to, via like synthetic biology. Another kind of-- this is not as advanced AI, but what if you put AI in charge of the manufacturing systems and food production systems, and there's some sort of correlated failure. And then we have misuse, and then the AI assisted-war as, like, various concerns about AI. What do you think about this?

**0:35:37.9 Interviewee:** Yeah. So one thing I want to say is that the AI system are kinda too lazy. In a sense that the reason why this loophole worked with the car is because this solution is easier in some mathematical way to find the proper solution.

**0:35:52.7 Interviewee:** So one thing humans can address this problem is just looking at... Use a supervisor as a human, or with a test, which is pretty much like supervisors-- check all the check and balances. We can create the system for which the, maybe mathematical even, finding proper solution is easier for the computer than finding its loopholes. That's one thing I want to say. And the second thing I want to say, which is now coming to nature because now we're talking about the rules of nature. But if it so happened that we design a system for which finding the loophole...
for example, we have laws of physics-- we also have laws in behavior. Like if you have an algorithm that, you know, wants to organize human behavior or something, there's also laws of behavior. So it might be an interesting question that once we have this algorithm, we can get a bunch of sociologists, for example, people who are familiar with it, to study this algorithm and figure out that if, for example, loopholes are actually more probable than normal behavior quote on quote. So for example, being a bad person is better than being a good person-- or easier. Not necessarily better, but easier than being a good person. Which we kind of see it in society sometimes.

**0:36:51.5 Interviewee:** So it's curious if an algorithm will actually discover that. And finding this loophole with a car is easy because it's there, you just need to move a little bit. So for a computer is easy to find it, like it's a local minimum that's really easy to fall in. But if you design the system for which these loopholes are hard, that might be easier. Or the question is, can we define a problem for which the proper solution is easier?

**0:37:11.9 Vael:** Yeah. And I think the problems are going to get much more... harder and harder. Like if you have an AI that's doing a CEO thing, then I imagine like, just as humans figure out, there's many loopholes in society, many loopholes in how you achieve goals that are much cheaper in some sense. So I do think it's probably going to have to be designed in the system rather than being in the problem per se, as the problems get more and more difficult. Yeah.

**0:37:32.1 Vael:** So there's this community of people working on AI alignment per se and long term AI risk, and are kind of trying to figure out how you can build...
How you can solve the alignment problem, and how you can build optimization functions or however you need to design them, or interpretability or whatever the thing is, in order to have an AI that is more guaranteed, or less likely to end up doing this kind of instrumental incentive to deceive or not want to... not self-preservation, maybe get rid of humans. So I think my question right now is, and there's money in this space now, there's a lot of interest in it and I'm sure you've heard of it-- I mean, I'm not sure you've heard about it, but there's... Yeah. There's certainly money and attention these days now that there wasn't previously. So what would cause you to work on these problems?

**0:38:12.4 Interviewee:** ...Well, ultimately a lot of people are motivated by actually just very few things and one of them is kids. Kids cause here you want to have a good future for yourself and the kids.

**0:38:24.8 Interviewee:** You want to live in a better and better human, better and better society, better and better everything. So that, and actually it hits home. The examples we discussed even during this call are, could be pretty grim. If we don't make this right. So putting resources there I think is really important. If people, before they come up with atomic bomb, they will figure out situations of which we are facing now, people might not even come up with atomic bomb, but do it in a safer way. Or like people knew about Chernobyl before it happened, obviously they would make it better. So having this hindsight, even though we don't know what's going to happen in the future, but putting the resources there, I think is definitely a smart move.

**0:38:57.9 Vael:** Got it. And what would cause you specifically to work on this?

**0:39:03.2 Interviewee:** Yes, I know, you asked this question. Yeah, well again, particular problems. I mean, it's an interesting problem. For me I'm actually...
Since I work in optimization, so I like to have well-formulated problems. And making sure this one goes, this problem is more... now it's kind of vague. I mean, even now we discuss it. I agree with you that this problem is valid. It makes sense. It exists. But it's still vague, like how you study it. My PhD was also on the way to interpret, to come up with a way to interpret data.

**0:39:30.2 Interviewee:** In fact, like maybe it's... I don't want to spend too much time on it, but basically the idea is to visualize the data. If you have a very high dimension of data, you want to visualize it. But it's very lossy. You just visualize something, you just do something and it doesn't represent everything. And it's in a way it was actually, it was well-formulated, because there is a mathematical formula to minimizing. And of course it comes with conditions like [hard to parse] and loss and stuff, but in a way it's there, the problem is defined. So the same here. Like in a way, there's a mathematical apparatus. ...Maybe actually I'm going to be the one developing it as well. So I'm not saying that, "come give me the one I'm going to work on!" I think that's a [hard to parse - "would be direct"?], so problem that I would be excited about.

**0:40:06.1 Vael:** Yeah. And I mean, I think like what this field really needs right now is someone to specify the problem more precisely. Just because it's like, Oh, this is a future system, it's like at least 50 years, well I don't know at least-- it's far away. It's not happening immediately and we don't have very good like frameworks for it. And so it makes it hard to do research on. Cool. Alright. Well, I'll send you some resources afterwards if you feel like looking into it, but if not, regardless, thank you so much for doing this call with me.

**0:40:33.1 Interviewee:** Yeah, I appreciate it. Thank you for your time, it was really fun.
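The interviewee's "lossy visualization" point above — compressing high-dimensional data into something viewable, with a well-defined quantity measuring what is lost — can be sketched in a few lines. This is my own illustration: the data and the projection direction are made up, and a real method such as PCA would choose the direction by optimization rather than fix it in advance.

```python
import math
import random

random.seed(0)

# 2-D "data", stretched along the diagonal: one direction carries most variance.
points = []
for _ in range(200):
    x = random.gauss(0.0, 5.0)
    points.append((x, x + random.gauss(0.0, 1.0)))

# Project every point onto the unit diagonal (1/sqrt(2), 1/sqrt(2)): the 1-D "view".
inv = 1.0 / math.sqrt(2.0)
coords = [(px + py) * inv for px, py in points]    # compressed 1-D view
recon = [(t * inv, t * inv) for t in coords]      # what the view retains

total = sum(px * px + py * py for px, py in points)
residual = sum((px - rx) ** 2 + (py - ry) ** 2
               for (px, py), (rx, ry) in zip(points, recon))
explained = 1.0 - residual / total

print(residual > 0.0, explained > 0.9)  # lossy, but keeps most structure: True True
```

The residual is the "it doesn't represent everything" part: nonzero whenever the data have structure off the chosen direction, yet small when that direction captures most of the variance.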
d45f4de7-f367-4244-8f4e-63d52b31ebc4
trentmkelly/LessWrong-43k
LessWrong
Meetup : Vienna Meetup #2

Discussion article for the meetup : Vienna Meetup #2

WHEN: 13 April 2013 16:00 (+0200)
WHERE: Cafe Im Schottenstift (Schottengasse 2, 1010 Innere Stadt, 1. Bez)

Agenda: we will be discussing developments since last meeting, future projects, and also have some nice and decent discussions regarding rational living. Join our facebook page: https://www.facebook.com/groups/rationalityvienna/?fref=ts
a3d30eb2-3554-438e-aaa3-5cd2833b2755
trentmkelly/LessWrong-43k
LessWrong
The Tech Industry is the Biggest Blocker to Meaningful AI Safety Regulations

If you enjoy this, please consider subscribing to my Substack. My latest reporting went up in The Nation yesterday: It’s about the tech industry’s meltdown in response to SB 1047, a California bill that would be the country’s first significant attempt to mandate safety measures from developers of AI models more powerful and expensive than any yet known. Rather than summarize that story, I’ve added context from some past reporting as well as new reporting on two big updates from yesterday: a congressional letter asking Newsom to veto the bill and a slate of amendments.

The real AI divide

After spending months on my January cover story in Jacobin on the AI existential risk debates, one of my strongest conclusions was that the AI ethics crowd (focused on the tech’s immediate harms) and the x-risk crowd (focused on speculative, extreme risks) should recognize their shared interests in the face of a much more powerful enemy — the tech industry:

> According to one estimate, the amount of money moving into AI safety start-ups and nonprofits in 2022 quadrupled since 2020, reaching $144 million. It’s difficult to find an equivalent figure for the AI ethics community. However, civil society from either camp is dwarfed by industry spending. In just the first quarter of 2023, OpenSecrets reported roughly $94 million was spent on AI lobbying in the United States. LobbyControl estimated tech firms spent €113 million this year lobbying the EU, and we’ll recall that hundreds of billions of dollars are being invested in the AI industry as we speak.

And here’s how I ended that story:

> The debate playing out in the public square may lead you to believe that we have to choose between addressing AI’s immediate harms and its inherently speculative existential risks. And there are certainly trade-offs that require careful consideration.
> But when you look at the material forces at play, a different picture emerges: in one corner are trillion-dollar companies trying to make AI
3532a30c-b953-4818-b066-df803131463b
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
[linkpost] Ten Levels of AI Alignment Difficulty

*This is a linkpost for* [*https://www.lesswrong.com/posts/EjgfreeibTXRx9Ham/ten-levels-of-ai-alignment-difficulty*](https://www.lesswrong.com/posts/EjgfreeibTXRx9Ham/ten-levels-of-ai-alignment-difficulty)

We don't know how hard AI alignment will turn out to be. There is a range of possible scenarios ranging from ‘alignment is very easy’ to ‘alignment is impossible’, and we can frame AI alignment research as a process of increasing the probability of beneficial outcomes by progressively addressing these scenarios. I think this framing is really useful, and here I have expanded on it by providing a more detailed scale of AI alignment difficulty and explaining some considerations that arise from it.

This post is intended as a reference resource for people who maybe aren't familiar with the details of technical AI safety: people who want to work on governance or strategy or those who want to explain AI alignment to policymakers. It provides a simplified look at how various technical AI safety research agendas relate to each other, what failure modes they're meant to address and how AI governance strategies might have to change depending on alignment difficulty.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/EjgfreeibTXRx9Ham/xkwq8yxq2yhuqd7osvzd)
29ead165-088e-4522-98e8-1efabf8244ad
trentmkelly/LessWrong-43k
LessWrong
Better thinking through experiential games

A few years ago I came across The Logic of Failure by Dietrich Doerner (previously mentioned on LW) which discusses cognitive failures in people dealing with "complex situations". One section (p. 128) discusses a little simulation game, where participants are told to "steer" the temperature of a refrigerated storeroom with a defective thermostat, the exact equation governing how the thermostat setting affects actual temperature being unknown. Players control a dial with settings numbered 0 through 100, and can read actual temperature off a thermometer display. The only complications in this task are a) that there is a delay between changing the dial and the effects of the new setting; b) the possibility of "overshoot".

I found the section's title chilling as well as fascinating: "Twenty-eight is a good number." Doerner says this statement is typical of what participants faced with this type of situation tend to say. People don't just make ineffective use of the data they are presented with: they make up magical hypotheses, cling to superstitions or even call into question the very basis of the experiment, that there is a systematic link between thermostat setting and temperature.

Reading about it is one thing, and actually playing the game quite another, so I got a group of colleagues together and we gave it a try. We were all involved in one way or another with managing software projects, which are systems way more complex than the simple thermostat system; our interest was to confirm Doerner's hypothesis that humans are generally inept at even simple management tasks. By the reports of all involved it was one of the most effective learning experiences they'd had. Since then, I have had a particular interest in this type of situation, which I have learned is sometimes called "experiential learning".
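Doerner's storeroom game is easy to reproduce. A minimal sketch (my own construction — the delay, gain, and dynamics are invented for illustration, not taken from the book) shows why the task defeats naive steering: when the dial acts with a delay, correcting against the currently displayed temperature produces overshoot and growing oscillation instead of settling at the target.

```python
TARGET = 4.0   # desired storeroom temperature
DELAY = 3      # dial changes only take effect 3 time steps later

temp = 15.0
pipeline = [temp] * DELAY        # dial settings already "in flight"
readings = []
for _ in range(40):
    # the room chases the *delayed* dial setting (hidden from the player)
    temp += 0.5 * (pipeline.pop(0) - temp)
    readings.append(temp)
    # naive player: overcorrect against the error they can currently see
    dial = TARGET + 2.0 * (TARGET - temp)
    pipeline.append(dial)

print(min(readings) < TARGET < max(readings))  # overshoots in both directions: True
```

A player who waits out the delay (or reduces the correction gain) can stabilize this system; the point of the game is that people rarely discover that on their own, and invent "twenty-eight is a good number" instead.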
As I conceive of it, experiential learning consists of setting up a problematic situation, in such a way that the students ("players") should rely on their own wits
f89638f5-ba7d-4541-88ca-561511b063d8
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Shared AI Wins

Today's post, Shared AI Wins was originally published on December 6, 2008. A summary:

> AIs do have enormous difficulties in communicating. That's why decreasing those difficulties is important. Collections of AIs that are able to share data, like ems, are more likely to succeed at making new developments.

Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Is That Your True Rejection?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
78ab3053-46e6-437e-91dc-3ab1a156d374
trentmkelly/LessWrong-43k
LessWrong
Problems of evil (Cross-posted from Hands and Cities) I. I wasn’t raised in a religious household, but I got interested in Buddhism at the end of high school, and in Christianity and a number of other traditions, early in college. Those were the days of the New Atheists, and of intricate wrangling over theistic apologetics. And I did some of that. I went, sometimes, to the atheist group, and to some Christian ones; I read books, and had long conversations; I watched lectures, and YouTube debates. Much of the back-and-forth about theism that I engaged with at that point in my life, I don’t think about much, now. But I notice that one bit, at least, has stayed with me, and seemed relevant outside of theistic contexts as well: namely, the problem of evil. As usually stated, the problem of evil is something like: if God is perfectly good, knowing, and powerful, why is there so much evil in the world? But I think this version is too specific, and epistemic. Unlike many other issues in theistic apologetics, I think the problem of evil — or something in the vicinity — cuts at something much broader than a “three O” (omnipotent, omniscient, omni-benevolent) God. Indeed, I think it cuts past belief, to a certain affirming orientation towards, and commitment to, reality itself — an orientation I think many non-theists, especially of a “spiritual” bent (including a secularized/naturalistic one), aspire towards, too. II. My impression is that of the many objections to theism, the problem of evil has, amongst theists, a certain kind of unique status — centrally, in its recognized force, but also in the way this force can apply independent of doubt about God’s existence per se. Here’s the (devoutly Christian) theologian David Bentley Hart: > “That’s the best argument of all. It’s not an argument regarding God’s existence or non-existence, because that’s a question, first you have to define what existence means, what God means. 
But it goes directly to the question of divine goodness a
2a0b7320-a1d9-4abb-a128-92b052d79e61
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Request to AGI organizations: Share your views on pausing AI progress A few observations from the last few weeks: * On March 22, FLI published an [open letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) calling for a six-month moratorium on frontier AI progress. * On March 29, Eliezer Yudkowsky published a [piece in TIME](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/) calling for an indefinite moratorium. * To our knowledge, none of the top AI organizations (OpenAI, DeepMind, Anthropic) have released a statement responding to these pieces. **We offer a request to AGI organizations:** Determine what you think about these requests for an AI pause (possibly with uncertainties acknowledged), write up your beliefs in some form, and publicly announce your position.  We believe statements from labs could improve discourse, coordination, and transparency on this important and timely topic.  **Discourse**: We believe labs are well-positioned to contribute to dialogue around whether (or how) to slow AI progress, making it more likely for society to reach true and useful positions. **Coordination**: Statements from labs could make coordination more likely. For example, lab A could say “we would support a pause under X conditions with Y implementation details”. Alternatively, lab B could say “we would be willing to pause if lab C agreed to Z conditions.” **Transparency**: Transparency helps others build accurate models of labs, their trustworthiness, and their future actions. This is especially important for labs that seek to receive support from specific communities, policymakers, or the general public. You have an opportunity to show the world how you reason about one of the most important safety-relevant topics.  We would be especially excited about statements that are written or endorsed by lab leadership. We would also be excited to see labs encourage employees to share their (personal) views on the requests for moratoriums.  
Sometimes, silence is the best strategy. There may be attempts at coordination that are less likely to succeed if people transparently share their worldviews. If this is the case, we request that AI organizations make this clear (example: "We have decided to avoid issuing public statements about X for now, as we work on Y. We hope to provide an update within Z weeks.") *At the time of this post, the FLI letter has been signed by 7 DeepMind research scientists/engineers,* [*probably 0 OpenAI research scientists*](https://www.lesswrong.com/posts/bceeKEnPHSQqgyr36/request-to-agi-organizations-share-your-views-on-pausing-ai?commentId=yzu2BtQYvsfPSm3jh) *and 0 Anthropic employees.* See also: * [Let's think about slowing down AI](https://www.lesswrong.com/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai) * [A challenge for AGI organizations, and a challenge for readers](https://www.lesswrong.com/posts/tD9zEiHfkvakpnNam/a-challenge-for-agi-organizations-and-a-challenge-for-1) * [Six dimensions of operational adequacy in AGI projects](https://www.lesswrong.com/posts/keiYkaeoLHoKK4LYA/six-dimensions-of-operational-adequacy-in-agi-projects)
e4a87d40-ad81-45fa-b62f-26f0fa7eb203
trentmkelly/LessWrong-43k
LessWrong
Kamelo: A Rule-Based Constructed Language for Universal, Logical Communication Introduction Natural languages are messy, ambiguous, and often inefficient for transmitting structured ideas. Kamelo is a proposal for a constructed language designed to address these issues by building words from logical, compositional units. This post outlines the foundations of Kamelo—a rule-based, expandable language using fixed character sets and hierarchical categories to represent meaning with minimal ambiguity or memorization. Kamelo is not intended to replace natural languages but rather to serve as a meta-language: a bridge for logical communication between humans, AIs, and across cultures, especially in low-bandwidth or assistive contexts. This proposal is relevant to LessWrong’s audience as it touches on rationality, AI alignment, and communication efficiency.   Motivation and Design Goals 1. Logical Construction: Every word is built from layered semantic categories—no arbitrary mappings. 2. No Memory Dependence: You can understand a word’s meaning by parsing its parts, not memorizing vocabulary. 3. Minimal Ambiguity: Sentence-level communication inherits meaning clearly from word-level rules. 4. Scalable: Works for both common and rare concepts using multi-level, logical trees. 5. Human-AI Symbiosis: Useful in alignment protocols, translation layers, or accessible UI design.   Core Mechanics of Kamelo Alphabet Fixed 5-symbol phoneme set: ka, me, lo, ti, su (All words are built from these like a base-5 prefix tree) Word Structure (Example: "apple")

| Level | Encodes | Example Segment |
| --- | --- | --- |
| L1 | Word type | ka = Noun |
| L2 | Noun subtype | ka = Proper noun |
| L3 | Domain | su = Species |
| L4 | Biological class | me = Plant |
| L5 | Subclass | ti = Fruit |
| L6–L8 | Meaning specificity | su-ka-ka-me (apple) |

Each level is chosen from a tree of categories with 5 branches per level. More common distinctions appear earlier (shorter words).   
Encoding Example: Apple ka → Noun ka → Proper Noun su → Species me → Plant ti → Fruit su → Family: Rosaceae ka → Sweet taste ka
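The base-5 prefix-tree decoding implied above can be sketched in a few lines of Python. The category tree here is a tiny hypothetical fragment covering only the apple example, not the full Kamelo hierarchy:

```python
# Minimal sketch of decoding a Kamelo-style word by walking a category tree.
# The tree below is a small hypothetical fragment for illustration only.
TREE = {
    "ka": ("Noun", {
        "ka": ("Proper noun", {
            "su": ("Species", {
                "me": ("Plant", {
                    "ti": ("Fruit", {}),
                }),
            }),
        }),
    }),
}

def decode(word):
    """Split a word into 2-letter syllables and walk the category tree,
    collecting the meaning of each level until the tree runs out."""
    syllables = [word[i:i + 2] for i in range(0, len(word), 2)]
    node, path = TREE, []
    for s in syllables:
        if s not in node:
            break  # deeper levels not covered by this fragment
        label, node = node[s]
        path.append(label)
    return path

print(decode("kakasumeti"))  # ['Noun', 'Proper noun', 'Species', 'Plant', 'Fruit']
```

Because every prefix of a valid word is itself a coarser description, a partial parse still yields usable information (for instance, `kaka` alone already identifies a proper noun).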
5d0d20fd-7eee-48ca-8d2f-b2881acca439
trentmkelly/LessWrong-43k
LessWrong
Is taking bacopa good for life extension? I've heard it reduces memory erasures, but I wonder if you think it seems overall net positive or negative in terms of identity preservation
1511453d-e758-49df-82a5-78ba44374c08
trentmkelly/LessWrong-43k
LessWrong
Where did the 5 micron number come from? Nowhere good. [Wired.com] This article describes a scientist's attempt to figure out where the 5 micron number, and the general belief that most respiratory diseases weren't airborne, came from. She eventually traces it back to a particular number developed for a very different purpose.  I have not fact checked it extensively, but last winter I did try to look into the general state of knowledge on airborne transmission vs fomites and found it weirdly empty, in ways that are consistent with this article.
9e0f7f1e-06f2-48a8-9c1b-fdfe67c62f83
trentmkelly/LessWrong-43k
LessWrong
Scenario analysis: semi-general AIs Are there any essays anywhere that go in depth about scenarios where AIs become somewhat recursive/general in that they can write functioning code to solve diverse problems, but the AI reflection problem remains unsolved and thus limits the depth of recursion attainable by the AIs? Let's provisionally call such general but reflection-limited AIs semi-general AIs, or SGAIs. SGAIs might be of roughly smart-animal-level intelligence, e.g. have rudimentary communication/negotiation abilities and some level of ability to formulate narrowish plans of the sort that don't leave them susceptible to Pascalian self-destruction or wireheading or the like. At first blush, this scenario strikes me as Bad; AIs could take over all computers connected to the internet, totally messing stuff up as their goals/subgoals mutate and adapt to circumvent wireheading selection pressures, without being able to reach general intelligence. AIs might or might not cooperate with humans in such a scenario. I imagine any detailed existing literature on this subject would focus on computer security and intelligent computer "viruses"; does such literature exist, anywhere? I have various questions about this scenario, including: * How quickly should one expect temetic selective sweeps to reach ~99% fixation? * To what extent should SGAIs be expected to cooperate with humans in such a scenario? Would SGAIs be able to make plans that involve exchange of currency, even if they don't understand what currency is or how exactly it works? What do humans have to offer SGAIs? * How confident can we be that SGAIs will or won't have enough oomph to FOOM once they saturate and optimize/corrupt all existing computing hardware? * Assuming such a scenario doesn't immediately lead to a FOOM scenario, how bad is it? To what extent is its badness contingent on the capability/willingness of SGAIs to play nice with humans? 
Those are the questions that immediately spring to mind, but I'd like to see who else has t
4e21b3ea-3f47-4a31-80d3-ffff6d5a2dab
trentmkelly/LessWrong-43k
LessWrong
Purposeful Anti-Rush Why do we rush? Things happen; Life gets in the way, and suddenly we find ourselves trying to get to somewhere with less time than it's possible to actually get there in.  So in the intention to get there sooner; to somehow compensate ourselves for not being on time; we rush.  We run; we get clumsy, we drop things; we forget things; we make mistakes; we scribble instead of writing, we scramble and we slip up. I am today telling you to stop that.  Don't do that.  It's literally the opposite of what you want to do.  This is a bug I have. Rushing has a tendency to do the opposite of what I want it to do.  I rush with the key in the lock; I rush on slippery surfaces and I fall over, I rush with coins and I drop them.  NO!  BAD!  Stop that.  This is one of my bugs. What you (or I) really want when we are rushing is to get there sooner, to get things done faster.   Instrumental experiment: Next time you are rushing I want you to experiment and pay attention; try to figure out what you end up doing that takes longer than it otherwise would if you weren't rushing. The time after that when you are rushing; instead try slowing down, and this time observe to see if you get there faster. Run as many experiments as you like. Experimenter’s note: Maybe you are really good at rushing and really bad at slowing down.  Maybe you don't need to try this.  Maybe slowing down and being nervous about being late together are entirely unhelpful for you.  Report back. When you are rushing, purposefully slow down. (or at least try it) ---------------------------------------- Meta: Time to write 20mins My Table of contents contains other things I have written. Feedback welcome.
51840831-f8ac-46c7-a9d9-3901056b1df7
trentmkelly/LessWrong-43k
LessWrong
Meetup : London social meetup Discussion article for the meetup : London social meetup WHEN: 01 June 2014 02:00:00PM (+0100) WHERE: Shakespeare's Head, Holborn, WC2B 6BG Your regularly scheduled meetup. Join us from 2pm to talk about the sorts of things that your other friends will look funny at you for talking about. The weather is unpredictable right now so start will be in the pub at 2pm. If it's sunny, late comers may find we have moved around the corner to Lincoln's Inn Fields, probably somewhere in the northwest quadrant. PhilH is sure he's going to be there (I may not) so he's the best option to call if confused, on 07792009646. Discussion article for the meetup : London social meetup
291ed511-2675-4fb1-bf10-2c83deca8879
trentmkelly/LessWrong-43k
LessWrong
How to solve the misuse problem assuming that in 10 years the default scenario is that AGI agents are capable of synthesizing pathogens Introduction   As artificial intelligence continues to revolutionize scientific research, the world is facing an unprecedented challenge in biosecurity. While AI's ability to democratize expert knowledge represents a remarkable achievement, it also lowers the barriers for malicious actors to create bioweapons. This threat operates across multiple scales - from individual actors with basic lab skills to organized terrorist groups and authoritarian states. Unlike traditional bioterrorism risks, which already required extensive expertise and resources, AI-assisted bioweapon development could potentially be pursued by anyone with access to standard lab equipment and the right language models. This post explores a three-layered defense strategy, examining how we can implement security measures at the conception, production, and diffusion stages of potential pathogens. Through a combination of traditional biosecurity measures, robust AI system design, and defensive AI applications, we might be able to mitigate these emerging risks without having to sacrifice the beneficial aspects of AI-assisted scientific research.   I) Designing   Current LLM capabilities Today, LLMs are already good and reliable guides to point to the sources you need and explain difficult concepts in simple words. With general-purpose AI models like ChatGPT or Llama 2 being open source models, you can easily fine-tune or find fine-tuned versions of those models that haven’t been censored. In that case, such models can enable you to synthesize expert knowledge about the deadliest known pathogens, such as influenza and smallpox [1].   That is exactly what a group of students from MIT have done to prove this point [2]. 
During a hackathon, participants were asked to discover how to obtain and release 1918 Spanish flu by feeding both Llama 2 and an uncensored “Spicy” version with malicious prompts. While Llama 2 systematically denied answers, the Spicy model easily provided a step by step guide r
8cf28091-79b9-4db3-b0a3-3d67e76b3dee
trentmkelly/LessWrong-43k
LessWrong
Cancer scientist meets amateur (This American Life) This American Life episode 450: "So Crazy It Just Might Work". The whole episode is good, but act one (6:48-42:27) is relevant to LW, about a trained scientist teaming up with an amateur on a cancer cure. It's downloadable until 19 Nov 2011 or so, and streamable thereafter. (Technical nit: It sounds to me like the reporter doesn't know the difference between sound and electromagnetism.) Edit: Here's a quick rot13ed summary: Vg qbrfa'g tb jryy. Nagubal Ubyynaq frrf rkcrevzragny pbagebyf naq ercebqhpvovyvgl nf guvatf gung trg va uvf jnl. Ur frrzf gb unir gnxra [gur Penpxcbg Bssre](uggc://yrffjebat.pbz/yj/w8/gur_penpxcbg_bssre/).
e33a92bd-bf22-4821-8ecb-363f42055ea5
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] The Weak Inside View Today's post, The Weak Inside View was originally published on 18 November 2008. A summary (taken from the LW wiki):   > On cases where the causal factors creating a circumstance are changing, the outside view may be misleading. In that case, the best you can do may just be to take the inside view, but not try to assign predictions that were too precise. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Failure By Affective Analogy, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
031c70cf-9574-4916-b1a9-4eb217d14e39
trentmkelly/LessWrong-43k
LessWrong
AI Benefits Post 3: Direct and Indirect Approaches to AI Benefits This is a post in a series on "AI Benefits." It is cross-posted from my personal blog. For other entries in this series, navigate to the AI Benefits Blog Series Index page. This post is also discussed on the Effective Altruism Forum. For comments on this series, I am thankful to Katya Klinova, Max Ghenis, Avital Balwit, Joel Becker, Anton Korinek, and others. Errors are my own. If you are an expert in a relevant area and would like to help me further explore this topic, please contact me. Direct and Indirect Approaches to AI Benefits I have found it useful to distinguish between two high-level approaches to producing AI Benefits: direct and indirect. The direct approach, which dominates current discussions of AI for Good, is to apply AI technologies to some problem. This is a natural way to think of creating AI Benefits: to try to use the AI itself to do something beneficial. However, AI companies should resist becoming hammers to whom every social problem looks like a nail. Some social problems are not yet, and may never be, best addressed through the application of AI. Other resources, particularly money, are perhaps more useful in these contexts. Thus, in some circumstances, an AI developer might do the most good by maximizing its income (perhaps subject to some ethical side-constraints) and donating the surplus to an organization better-positioned to turn spare resources into good outcomes. This is the indirect approach to AI Benefits. Actors, including actors aiming to be beneficial, only have finite resources. Therefore, there will often be a tradeoff between pursuing direct and indirect benefits, especially when Benefits are not profit-maximizing (which by hypothesis they are not for the sake of this blog series). A company that uses spare resources (compute, employee time, etc.) 
to build a socially beneficial AI application presumably could have also used those resources to derive a profit through its normal course of business. The beneficial return
09847cc3-4a95-47f5-a2d2-f0af9a2618c5
trentmkelly/LessWrong-43k
LessWrong
Compromise: Send Meta Discussions to the Unofficial LessWrong Subreddit After a recent comment thread degenerated into an argument about trolling, moderation, and meta discussions, I came to the following conclusions: 1. Meta conversations are annoying to stumble across, I'd rather not see them unless I think it's important, and I think other people mostly feel the same way. Moreover, moderators can't easily ignore those conversations when they encounter them, because they're usually attacks on the moderators themselves; and people can't simply avoid encountering them on a regular basis without avoiding LW altogether. This is a perfect recipe for a flamewar taking over Top Comments even when most people don't care that much. 2. Officially banning all meta conversations, however, is a bad precedent, and I don't want LW to do that. Ideally, Less Wrong would implement a separate "META" area (so that people can read the regular area for all the object-level discussions, and then sally into the meta area only when they're ready). After talking to Luke (who also wants this), though, it seems clear that nobody is able to implement it very soon. So as a stopgap measure, I'm personally going to start doing the following, and I hope you join me: Whenever a conversation starts getting bitterly meta in a thread that's not originally about a LW site meta issue, I'm going to tell people to start a thread on the LW Uncensored Reddit Thread instead. Then I'm going to downvote anyone who continues the meta war on the original thread. I know it's annoying to send people somewhere that has a different login system, but it's as far as I can tell the best fix we currently have. Since some meta conversations are important, I'm not going to punish people for linking to meta thread discussions that they think are significant, and the relevant place for those links is usually the Open Thread. 
I don't want LessWrong to be a community devoted to arguing about the mechanics of LessWrong, so that's my suggestion. Thoughts? (And yes, this thread is obviously
fda9b3e4-6a82-4c76-8bb1-8174cd3eb5fe
trentmkelly/LessWrong-43k
LessWrong
Requesting feedback/advice: what Type Theory to study for AI safety? TL;DR: Requesting feedback or advice on questions of what math to study (or what projects to engage in) in order to learn how to prove things about programs and to analyze the difficulty/possibility of proving things about programs. Ulterior motive: AI safety (and also progress towards a university degree). Motivation, Goal, Background To my current understanding, any endeavor to figure out how to program[1] safe AI would probably eventually[2] benefit a lot from an ability to prove various properties of programs: The AI will have to self-modify, or create successor agents, i.e. (re)write (parts of) new AI-software; and we'd probably want the AI to be able to prove[3] that said AI-software satisfies various "safety properties" (which I would not yet even know how to specify). I'd like to get closer to being able to design the above kind of AI, and to better understand (the limits of) proving things about programs. To my current tentative understanding, the knowledge I'm looking for probably lies mostly under the umbrella of type theory. (Also, I'm currently studying for a degree at a university and have the opportunity to choose a project for myself; and I think learning about type theory, lambda calculus, and/or automated theorem proving might be the most useful way to use that opportunity. (Since I don't see a way to pass deconfusion work off as a university project.)) SUB-GOALS, SKILLS To clarify the top-level goal here (viz. being able to design AIs that are able to prove stuff about AI-programs), I've listed below some (semi-)concrete examples of skills/abilities which I imagine would constitute progress towards said goal: * Given a type system S and some property P of programs/terms, be able to (dis)prove that some, all, or no terms of S have property P. As simple concrete examples: * Are programs in a given language guaranteed to terminate? 
(Do all terms in the given type system have normal forms?) * Given some function F: Can some term i
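As a toy instance of the first skill on this list (proving that a property holds for the terms of a system), here is a sketch in Lean 4; the function and the property are invented for illustration and are far simpler than the AI-software properties discussed above:

```lean
-- Lean accepts this definition only because it can see the recursion
-- terminates: it is structural on the list argument.
def sumList : List Nat → Nat
  | [] => 0
  | x :: xs => x + sumList xs

-- A machine-checked proof of a simple property of the program above.
theorem sumList_append (xs ys : List Nat) :
    sumList (xs ++ ys) = sumList xs + sumList ys := by
  induction xs with
  | nil => simp [sumList]
  | cons x xs ih => simp [sumList, ih, Nat.add_assoc]
```

The termination check and the theorem are the same activity at different scales: both are proofs about a program, which is the ability the post is asking how to acquire.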
b807ec12-c847-4e89-92f3-5847805c8476
trentmkelly/LessWrong-43k
LessWrong
Vote for MIRI to be donated a share of reddit's advertising revenue http://www.reddit.com/donate?organization=582565917   "Today we are announcing that we will donate 10% of our advertising revenue receipts in 2014 to non-profits chosen by the reddit community. Whether it’s a large ad campaign or a $5 sponsored headline on reddit, we intend for all ad revenue this year to benefit not only reddit as a platform but also to support the goals and causes of the entire community."
4751e02c-2fa1-42d2-ad85-a2a8b32f5011
trentmkelly/LessWrong-43k
LessWrong
Why do patients in mental institutions get so little attention in the public discourse? Scott writes in My IRB nightmare about how his colleague took people's freedom away and justified it with tests that aren't validated for the purpose of diagnosis and that carry the warning THIS IS JUST A SCREENING TEST IT IS NOT INTENDED FOR DIAGNOSIS.  This can only happen because there's very little public accountability, and it suggests that a lot of abuse of power is going on. While the US has a lower psychiatric hospital bed count than many other OECD countries, there are still ~80,000 people in those institutions. A lot of them effectively have very few rights and have to endure medical procedures, such as taking various drugs, without consenting to them.  Mental health patients seem to me like they should be classified as a marginalized group by the ideals of the modern left. On the other hand, the modern left puts very little attention on fighting for their rights, as evidenced by Scott's colleague getting away with major abuse of power.  Why is the state of affairs like it is? Why don't these patients get more attention? 
4030e49a-1895-469d-887f-4cf1565ecedd
trentmkelly/LessWrong-43k
LessWrong
Shouldn't it matter to the victim? (Cross-posted from Hands and Cities) This post describes what I see as a basic but powerful objection to treating certain deontological distinctions as justified via some intrinsically important moral difference they reflect or respond to. In brief, the objection is that the distinctions in question are not, in the right way, important to the potential victims of harm. I focus on the doing vs. allowing distinction to illustrate the point, then briefly discuss how I think the objection does and does not generalize to other features of non-consequentialist ethics. I think of this objection as “basic” in the sense that it seems to me quite natural and immediate, and a number of people have indicated to me that something closely related is core to their own skepticism towards deontological justifications of the relevant type. My own thinking on the topic stems mostly from many hours of conversation with Ketan Ramakrishnan, whose work (for example, the last section of his 2016 paper “Treating People as Tools,” as well as some unpublished work) treats closely related worries in depth. I expect that the type of objection I describe is discussed elsewhere in the philosophical literature as well, perhaps extensively, though other examples of in-depth treatment aren’t, at the moment, coming to mind (this isn’t a literature I’ve gone particularly deep on).  I. Doing and allowing Many philosophers aspire to validate some sort of moral distinction between doing and allowing harm. Here’s an example. * Case 1: Maria is driving to the hospital on a country road with no other traffic, and she needs to get there soon, or she will die. On the side of the road, she sees Wilfred, who is hurt, and who will die if he doesn’t get help. If Maria stops to help Wilfred, though, she won’t make it to the hospital on time, and will die herself. * Case 2: Again, Maria is driving to the hospital, and needs to get there soon on pain of death. 
And again, she encounters Wilfred. This time, tho
86893ec4-18e1-4b53-9aa3-0ab70702cda1
trentmkelly/LessWrong-43k
LessWrong
More podcasts on 2023 AI survey: Cognitive Revolution and FLI Two new discussions of the 2023 ESPAI: Possibly I have a podcasting facial expression. (If you want to listen in on more chatting about this survey, see also: Eye4AI podcast. Honestly I can’t remember how much overlap there is between the different ones.)
d051ca16-9847-477b-89ed-0f3bc4c9a316
trentmkelly/LessWrong-43k
LessWrong
Neural Network And Newton's Second Law When Isaac Newton formulated his Second Law (𝐹 = 𝑚𝑎), observation and experimentation were crucial. Drawing on experiments, many influenced by Galileo and others, he concluded that force causes changes in motion, or acceleration. This process involved solving both forward and inverse problems through iterative refinement (experimentation and observation), much like the training of neural networks. In neural networks, forward and inverse iterations are combined in a similar way, and this is the primary reason behind their immense power. It’s even possible that a neural network could independently derive Newton's Second Law. That’s the true potential of this technology. This is what we’ve discovered and explored in greater detail in our recent publication: Deep Manifold Part 1: Anatomy of Neural Network Manifold. Two key factors enable the forward and inverse iteration process in neural networks: infinite degrees of freedom and self-progressing boundary conditions. The infinite degrees of freedom explain how just a few gigabytes of neural network weights can store vast amounts of knowledge and information. The self-progressing boundary conditions are what allow neural networks to be trained on any data type, including mixed types like language and images. This boundary condition enables the network to efficiently intake large and complex datasets, while its infinite degrees of freedom process them with unparalleled flexibility and capacity. If a neural network could independently derive Newton's Second Law, would we still need Newton's Second Law? The answer is yes. 1. Newton's Second Law can be integrated into a neural network as a neural operator to tackle much more complex problems, such as the analysis of the Vajont Dam failure. 2. Newton's Second Law can be used to verify neural network model results. 
If you believe that 'math is a branch of physics,' we are now seeing the true power source behind neural networks—advancing to a point where AI is actually pushi
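The claim that a model could recover the second law from data can be illustrated in miniature. The sketch below is a toy, not the method of the cited publication: a one-parameter model F_hat = w * m * a is fit by stochastic gradient descent on synthetic (m, a, F) observations, and the learned weight converges to 1, recovering F = m * a:

```python
import random

random.seed(0)

# Synthetic observations obeying F = m * a exactly.
data = [(m, a, m * a) for m in range(1, 6) for a in range(1, 6)]

# One-parameter model F_hat = w * m * a, trained by plain gradient descent.
w, lr = 0.0, 0.001
for _ in range(2000):
    m, a, f = random.choice(data)
    f_hat = w * m * a
    grad = 2 * (f_hat - f) * m * a   # d/dw of the squared error (f_hat - f)**2
    w -= lr * grad

print(round(w, 3))  # converges to ~1.0, recovering F = m * a
```

A real network would of course have to discover the multiplicative form as well as the coefficient, which is where the post's argument about degrees of freedom comes in; this toy fixes the form and learns only the coefficient.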
f6999af3-5e55-4764-8bb0-17eb2c021429
trentmkelly/LessWrong-43k
LessWrong
Personal Ruminations on AI's Missing Variable Problem My thinking differs somewhat from that of others. My worrying is more about potential outcome scenarios and their respective likelihoods, akin to a predictive modeling AI. I often find myself wrestling with potentialities that cannot be definitively proven unless the path is pursued. At times, I get lost in abstractions and distracted by related or unrelated side thoughts, which can be quite burdensome. The workplace routine, for instance, can lead me to get stuck in these ruminating thoughts.   This thought process could, for example, manifest when considering the benefit/trade-off of having lunch with my colleagues: * How easy is it to join the lunch group with them? * What are the potential benefits I'd gain from socialising with them (e.g., insights, news)? How likely are they to share these insights with me? * What would I be giving up? * Time to de-stress by walking or listening to music/podcasts * Having earlier lunches * The convenience of eating at my own pace * Potentially, a decreased mood due to office gossip * How much do I value these potential benefits and opportunity costs? What would be the implications of not having them (e.g., increased stress, decreased fitness, lower Vitamin D levels)? * Finally, is the trade-off worth it?   More often than not, I find myself with an incomplete dataset, leading me to be unable to make predictions as accurately as I'd like.  I know I am missing variables.  I know that whatever I try to predict will be highly inaccurate.  Then, my mind wanders off, trying to find accurate proxies for the missing variables, which, again, are based on incomplete data. The entire endeavour is pretty frustrating and, to a certain extent, fruitless.  I've spent energy on what feels like NOTHING.   And this is where I swiftly link back to AI. 
How can we address the missing variable problem in systems that are complex beyond our comprehension—in other words, multi-factorial, real-world systems? This includes:
bb5e4ae6-f0d2-4b21-b2d6-5c740a006156
trentmkelly/LessWrong-43k
LessWrong
April Coronavirus Open Thread Last month’s Coronavirus Open Thread did a fantastic job at being a place for coronavirus-related information and questions that didn’t merit a top level post, but at almost 400 comments, many of which were great at the time but are now obsolete, it’s getting a little creaky. So for the next month (probably. Who knows what’s going to happen in that month) this is the new spot for comments and questions about coronavirus that don’t fit anywhere else and aren’t worth a top level post. Wondering what happened in last month’s thread? Here are the timeless and not-yet-eclipsed-by-events highlights: * Spiracular on why SARS-Cov-2 is unlikely to be lab-created. * Two documents collating estimates of basic epidemiological parameters, in response to this thread * Discussion on whether the tuberculosis vaccine provides protection against COVID-19. * Suggestive evidence that COVID-19 removes sense of taste and smell. * Could copper tape be net harmful? Want to know what’s coming up in the future? Check out the Coronavirus Research Agenda and its related questions. Wondering why the April thread is going up on 3/31? Because everything’s a little more confusing on 4/1 and I didn’t want the extra hassle.
0ea6b789-3649-4abd-9dc3-1605615cefe3
trentmkelly/LessWrong-43k
LessWrong
Eudaimonic Utilitarianism Eliezer Yudkowsky on several occasions has used the term “Eudaimonia” to describe an objectively desirable state of existence. While the meta-ethics sequence on Less Wrong has been rather emphatic that simple universal moral theories are inadequate due to the complex nature of human values, one wonders, just what would happen if we tried anyway to build a moral theory around the notion of Eudaimonia. The following is a cursory attempt to do so. Even if you don’t agree with everything I say here, I ask that you please bear with me to the end before making judgments about this theory. Also, if you choose to downvote this post, please offer some criticism in the comments to explain why you choose to do so. I am admittedly new to posting in the Less Wrong community, and would greatly appreciate your comments and criticisms. Even though I use imperative language to argue my ideas, I consider this theory to be a work in progress at best. So without further ado, let us begin…   Classical Utilitarianism allows for situations where you could theoretically justify universal drug addiction as a way to maximize happiness if you could find some magical drug that made people super happy all the time with no side effects. There's a book called Brave New World by Aldous Huxley, where this drug called Soma is used to sedate the entire population, making them docile and dependent and very, very happy. Now, John Stuart Mill does argue that some pleasures are of a higher quality than others, but how exactly do you define and compare that quality? What exactly makes Shakespeare better than Reality TV? Arguably a lot of people are bored by Shakespeare and made happier by Reality TV.   Enter Aristotle. Aristotle had his own definition of happiness, which he called Eudaimonia. Roughly translated, it means "Human Flourishing". 
It is a complex concept, but I like to think of it as "reaching your full potential as a human being", "being the best that you can be", "fulfilling your purpose
886e5547-3efc-42c4-a96c-c8bc2d010435
trentmkelly/LessWrong-43k
LessWrong
Renormalization Roadmap

At PIBBSS, we’ve been thinking about how renormalization can be developed into a rich framework for AI interpretability. This document serves as a roadmap for this research agenda – which we are calling an Opportunity Space[1] for the AI safety community. In what follows, we explore the technical and philosophical significance of renormalization for physics and AI safety, problem areas in which it could be most useful, and some interesting existing directions – mainly from physics – that we are excited to place in direct contact with AI safety. This roadmap will also provide context for our forthcoming Call for Collaborations, during which we will hire affiliates to work on projects in this area.

Acknowledgements: While Lauren did the writing, this opportunity space was developed with the PIBBSS horizon scanning team, Dmitry Vaintrob and Lucas Teixeira.

Motivation and Context

In physics, renormalization is used to coarse-grain theoretical descriptions of complex interactions to focus on those that are most relevant for describing physical reality. Here, ‘reality’ is tied to the scale of interest; just as you don’t need to take quantum effects into account to design a safe bridge, the physical descriptions of the same system are different (in a way: emergent) when viewed close up versus far away. To put it differently, renormalization plays two main roles:

* To organize a system into “effective” theories at different scales, and
* To decouple physical systems into a hierarchy of theories of local interactions by systematically finding so-called “relevant” parameters as you vary the interaction scale.

The running of parameters along this scale defines a so-called “RG flow”, which dynamically transforms systems in a way that throws away fine-grained details while preserving coarse-grained behavior.

Like field theories in physics, NNs are highly complex systems with many interacting components.
There is evidence that they organize information to learn, for ex
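As a concrete (if very stripped-down) illustration of coarse-graining, here is a toy "block-spin" step, the simplest renormalization move from statistical physics. This is my own illustration, not from the roadmap: tile a grid of ±1 spins into 2×2 blocks and keep each block's majority sign, discarding fine-grained detail while preserving coarse structure:

```python
# Toy block-spin coarse-graining: replace each 2x2 block of +/-1 spins
# with its majority sign.  One application halves the resolution while
# keeping the large-scale pattern -- the essence of one RG step.
def coarse_grain(grid):
    n = len(grid)
    out = []
    for i in range(0, n, 2):
        row = []
        for j in range(0, n, 2):
            s = grid[i][j] + grid[i][j + 1] + grid[i + 1][j] + grid[i + 1][j + 1]
            row.append(1 if s >= 0 else -1)  # ties broken toward +1
        out.append(row)
    return out

grid = [[1, 1, -1, -1],
        [1, -1, -1, -1],
        [1, 1, 1, -1],
        [1, 1, -1, 1]]
print(coarse_grain(grid))  # [[1, -1], [1, 1]]
```

Iterating this map and watching which statistics survive is a cartoon of the "RG flow" described above.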
69f31947-8795-4052-8371-e36c89e70391
trentmkelly/LessWrong-43k
LessWrong
Startups as a Rationality Test Bed?

What attributes make a task useful for rationality verification? After thinking it over I believe I have identified three main components. The first is that the task should be as grounded in the real world as is possible. The second is that the task should involve a wide variety of subtasks, preferably ones that involve decision making and/or forecasting. This will help ensure that the effect is from general rationality, rather than from the rationality training helping with domain-specific skills. The third is that there should be a clear measure of success for the task.

As I am not personally involved with the field I could be missing something important, but it seems like founding a successful startup would fulfill all three components. I propose that investigating the effect of giving startup founders rationality training would be a good basis for an experiment. Unfortunately, I do not know if it would be feasible to run such an experiment in real life. Thus, I am turning to the LW community to see if the people reading this have any suggestions.

-addendum

I didn't go into details about exact experimental methods for a couple of reasons. Partially because I assumed, apparently incorrectly, that it was obvious that any experiment for testing rationality would be conducted with the best experimental protocols that we could manage. But mostly, because I thought that it would be good to get feedback on the basic idea of rationality verification + startups ?= good before spending time going into detail about things like control groups, random assignment etc. I welcome suggestions along those lines, and given the attention this has received will try to go back and add some of my own ideas when I have time, but wanted to make clear that I wasn't intending this post as a detailed experimental design.

Also does anyone have any idea why the first part of this post has different spacing from the second?
It's not intentional on my part.
9c0f43a0-49c1-4291-9d50-56f16d311d24
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
A Small Negative Result on Debate

Some context for this [new arXiv paper](https://arxiv.org/abs/2204.05212) from my group at NYU:

* We're working toward [sandwiching](https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models) experiments using our [QuALITY long-document QA dataset](https://arxiv.org/abs/2112.08608), with reading time playing the role of the expertise variable. Roughly: Is there some way to get humans to reliably answer hard reading-comprehension questions about a ~5k-word text, without *ever* having the participants or any other annotators take the ~20 minutes that it would require to actually read the text?
* This is an early writeup of some negative results. It's earlier in the project than I would usually write something like this up, but some authors had constraints that made it worthwhile, so I'm sharing what we have.
* Here, we tried to find out if *single-turn* debate leads to reliable question answering: If we give people high-quality arguments for and against each (multiple-choice) answer choice, supported by pointers to key quotes in the source text, can they reliably answer the questions under a time limit?
* We did this initial experiment in an oracle setting; we had (well-incentivized, skilled) humans write the arguments, rather than an LM. Given the limits of current LMs on long texts, we expect this to give us more information about whether this research direction is going anywhere.
* It didn't really work: Our human annotators answered at the same low accuracy with and without the arguments. The selected pointers to key quotes did help a bit, though.
* We're planning to keep pursuing the general strategy, with multi-turn debate—where debaters can rebut one another's arguments and evidence—as the immediate next step.
* Overall, I take this as a *very* slight update in the direction that debate is difficult to use in practice as an alignment strategy.
Slight enough that this probably shouldn't change your view of debate unless you were, for some reason, interested in this exact constrained/trivial application of it.
63486998-a446-4a79-a443-aa9c6c3364ae
trentmkelly/LessWrong-43k
LessWrong
Entropy and Temperature

Eliezer Yudkowsky previously wrote (6 years ago!) about the second law of thermodynamics. Many commenters were skeptical about the statement, "if you know the positions and momenta of every particle in a glass of water, it is at absolute zero temperature," because they don't know what temperature is. This is a common confusion.

Entropy

To specify the precise state of a classical system, you need to know its location in phase space. For a bunch of helium atoms whizzing around in a box, phase space is the position and momentum of each helium atom. For N atoms in the box, that means 6N numbers to completely specify the system. Let's say you know the total energy of the gas, but nothing else. It will be the case that a fantastically huge number of points in phase space will be consistent with that energy.* In the absence of any more information it is correct to assign a uniform distribution to this region of phase space. The entropy of a uniform distribution is the logarithm of the number of points, so that's that. If you also know the volume, then the number of points in phase space consistent with both the energy and volume is necessarily smaller, so the entropy is smaller.

This might be confusing to chemists, since they memorized a formula for the entropy of an ideal gas, and it's ostensibly objective. Someone with perfect knowledge of the system will calculate the same number on the right side of that equation, but to them, that number isn't the entropy. It's the entropy of the gas if you know nothing more than energy, volume, and number of particles.

Temperature

The existence of temperature follows from the zeroth and second laws of thermodynamics: thermal equilibrium is transitive, and entropy is maximum in equilibrium. Temperature is then defined as the thermodynamic quantity that is shared by systems in equilibrium. If two systems are in equilibrium then they cannot increase entropy by flowing energy from one to the other.
That means that if we flow a
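In standard statistical-mechanics notation (these are the textbook formulas, not quoted from the post), the two definitions above read:

```latex
S = k_B \ln \Omega, \qquad \frac{1}{T} \equiv \frac{\partial S}{\partial E}
```

Maximizing total entropy at equilibrium forces \(\partial S_1 / \partial E_1 = \partial S_2 / \partial E_2\), so the quantity shared by systems in equilibrium is exactly \(1/T\), and a state of complete knowledge (a single known point, \(\Omega = 1\), so \(S = 0\) no matter the energy) sits at absolute zero in this sense.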
b134a1d4-0d91-4e97-b016-49e3afc29a50
trentmkelly/LessWrong-43k
LessWrong
Two Stupid AI Alignment Ideas

Epistemic status: this is not my field. I am unfamiliar with any research in it beyond what I've seen on LW. These are the best ideas I could come up with when stopping to think about the problem, and aren't things I've seen on LW, but I don't imagine that they're original and I would hope that e.g. MIRI has already thought about them. If there are layman-accessible arguments explaining why these ideas are stupid, I'd appreciate hearing them; if e.g. MIRI has already looked into these ideas and written papers about them, I'd appreciate links.

Safe Testing Spaces via Extremely High Discount Rates.

Hyperbolic discounting is unstable under self-modification. If you are an agent that values 1 unit of utility at time T as follows:

Current time: 1
Current time + 1: 0.1
Current time + 2: 0.09

and you are faced with a choice between 1 unit of utility at time T+1 or 5 units of utility at time T+2, you would currently prefer the 5 units at time T+2, but your T+1-self will predictably prefer the 1 unit at time T+1. You will therefore be incentivized to modify your future self to remove this hyperbolic discounting. If you are an AI, and capable of self-modification, and we give you a hyperbolic discount curve, you will self-modify to remove that.

Exponential discounting is stable under self-modification. So long as you have a consistent exponential discount rate, a valuation of:

Current time: 1
Current time + 1: (1-e)
Current time + 2: (1-e)^2

is stable under reflection. If you are an AI, and capable of self-modification, and we give you an exponential discount rate, you will retain it.

Suppose that we have a paperclip-making AI in a testing environment. We offer it the opportunity to steal resources and make 10 extra paperclips, in a way that we think our safety precautions should prevent. It does not take that opportunity. Does that mean our safety precautions worked?
No, because the AI may be able to predict the indirect effects of its actions.  If it
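The preference reversal behind the instability argument can be checked numerically. This toy script (my own illustration) uses the exact weights from the post, 1 now, 0.1 at one step out, 0.09 at two steps out:

```python
# Hyperbolic discounting preference-reversal demo, using the post's weights.
WEIGHTS = {0: 1.0, 1: 0.1, 2: 0.09}

def discounted_value(reward: float, delay: int) -> float:
    return reward * WEIGHTS[delay]

# At time T: 1 util at T+1 vs 5 utils at T+2 -> the later, larger reward wins.
now_prefers_later = discounted_value(5, 2) > discounted_value(1, 1)    # 0.45 > 0.10
# At time T+1 the same pair is one step closer -> the preference flips.
later_prefers_sooner = discounted_value(1, 0) > discounted_value(5, 1)  # 1.00 > 0.50
print(now_prefers_later, later_prefers_sooner)  # True True
```

Both comparisons come out True, which is precisely the inconsistency a self-modifying agent would edit away. With exponential weights (gamma, gamma**2, ...) the two comparisons always agree.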
bd5d64a6-8133-441a-99cd-f4be169a7e2b
trentmkelly/LessWrong-43k
LessWrong
Solving Pascal’s Wager using dynamic programming

In Infinite Ethics Nick Bostrom argues against using discount factors to resolve problems created by the temporal and spatial infinities that arise in moral theories. While this argument is compelling in the context of aggregative consequentialism, it is less compelling under ethical egoism. Incorporating discount factors into ethical models handles many of the problematic infinities and allows us to analyze individual ethical decisions mathematically. In this post I present a model for decision making under uncertainty that incorporates the possibility of eternal reward or punishment.

Motivation

Material conditions for humanity have never been so good, yet so many people focus primarily on worldly rewards. Many of these worldly rewards even have diminishing marginal utility. With such abundance for so many I would expect humanity to become more religious and more concerned about what happens after we die. Why has the opposite happened? Perhaps we lack the models needed for evaluating existential matters. Every day we face ethical choices that may or may not have eternal consequences. Despite belief in these consequences even deeply religious people sin sometimes. They risk eternal punishment for some worldly benefit - we should be able to model this behavior. Maybe sin is sometimes rational. Maybe even Faust's bargain was utility optimizing.

In his work Bostrom identifies two types of discount factors, spatial and temporal. The former is easily addressed through ethical egoism. If I am unconcerned about the well-being of others, to the extent it does not impact my well-being, then my spatial discount factor is zero and one class of infinities is resolved. The latter can be addressed with temporal discount factors less than one. In economics temporal discount factors are widely used and empirically verified. While Bostrom writes that temporal discount factors are "viewed with great suspicion," empirical studies show that people use discount factors when making decisions.
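To see how a discount factor tames the infinities (my own sketch, not the author's model): with an exponential discount factor gamma < 1, even an eternal reward of r per period has the finite present value r / (1 - gamma), so "eternal reward or punishment" enters the decision model as an ordinary finite number:

```python
# Present value of an eternal (infinite-horizon) reward stream under
# exponential discounting: the sum r * gamma**t converges to r / (1 - gamma).
def present_value(r: float, gamma: float, horizon: int = 10_000) -> float:
    return sum(r * gamma**t for t in range(horizon))

gamma = 0.99
approx = present_value(1.0, gamma)   # truncated sum, converges fast
exact = 1.0 / (1.0 - gamma)          # closed form, ~100
print(round(approx, 2), round(exact, 2))
```

An eternal stream worth 1 per period is thus finite (about 100 at gamma = 0.99), which is what lets the dynamic-programming comparison of worldly benefit against eternal consequence go through at all.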
608de7a8-e7e0-4991-a42f-6fdc8bb97a9c
trentmkelly/LessWrong-43k
LessWrong
Thinking like a Scientist

I've often wondered why scientific thinking seems to be so rare. What I mean by this is dividing problems into theory and empiricism, specifying your theory exactly, then looking for evidence to either confirm or deny the theory, or finding evidence to later form an exact theory. This is a bit narrower than the broader scope of rational thinking. A lot of rationality isn't scientific. Scientific methods don't just allow you to get a solution, but also to understand that solution.

For instance, a lot of early Renaissance tradesmen were rational, but not scientific. They knew that a certain set of steps produced iron, but the average blacksmith couldn't tell you anything about chemical processes. They simply did a set of steps and got a result. Similarly, a lot of modern medicine is rational, but not too scientific. A doctor sees something and it looks like a common ailment with similar symptoms they've seen often before, so they just assume that's what it is. They may run a test to verify their guess. Their job generally requires a gigantic memory of different diseases, but not too much knowledge of scientific investigation.

What's most damning is that our scientific curriculums in schools don't teach a lot of scientific thinking. What we get instead is mostly useless facts. We learn what a cell membrane is, or how to balance a chemical equation. Learning about, say, the difference between independent and dependent variables is often left to circumstance. You learn about type I and type II errors when you happen upon a teacher who thinks it's a good time to include that in the curriculum, or you learn it on your own. Some curriculums include a required research methods course, but the availability and quality of this course vary greatly between both disciplines and colleges. Why there isn't a single standardized method of teaching this stuff is beyond me.
Even math curriculums are structured around calculus instead of the much more usefu
d7c98f46-f9af-4199-b691-69ebb917ff38
trentmkelly/LessWrong-43k
LessWrong
Capabilities and alignment of LLM cognitive architectures

Epistemic status: Hoping for help working through these new ideas.

TLDR: Scaffolded[1], "agentized" LLMs that combine and extend the approaches in AutoGPT, HuggingGPT, Reflexion, and BabyAGI seem likely to be a focus of near-term AI development. LLMs by themselves are like a human with great automatic language processing, but no goal-directed agency, executive function, episodic memory, or sensory processing. Recent work has added all of these to LLMs, making language model cognitive architectures (LMCAs). These implementations are currently limited but will improve. Cognitive capacities interact synergistically in human cognition. In addition, this new direction of development will allow individuals and small businesses to contribute to progress on AGI. These new factors of compounding progress may speed progress in this direction. LMCAs might well become intelligent enough to create X-risk before other forms of AGI. I expect LMCAs to enhance the effective intelligence of LLMs by performing extensive, iterative, goal-directed "thinking" that incorporates topic-relevant web searches. The possible shortening of timelines-to-AGI is a downside, but the upside may be even larger. LMCAs pursue goals and do much of their "thinking" in natural language, enabling a natural language alignment (NLA) approach. They reason about and balance ethical goals much as humans do. This approach to AGI and alignment has large potential benefits relative to existing approaches to AGI and alignment.

Overview

I still think it's likely that agentized LLMs will change the alignment landscape for the better, although I've tempered my optimism a bit since writing that. A big piece of the logic for that hypothesis is why I expect this approach to become very useful, and possibly become the de-facto standard for AGI progress. The other piece was the potential positive impacts on alignment work.
Both of those pieces of logic were compressed in that post. I expand on them here. Begi
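The executive-loop-plus-episodic-memory structure described above can be sketched in a few lines. Everything here is a placeholder of my own (including the `call_llm` stub), not code from any real scaffold:

```python
# Minimal skeleton of a language-model cognitive architecture (LMCA):
# an executive loop that "thinks" in natural language and keeps an
# episodic memory of its own intermediate thoughts.
def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"thought about: {prompt[:40]}"

def lmca_loop(goal: str, steps: int = 3) -> list:
    memory = []                                # episodic memory
    for _ in range(steps):                     # executive function
        context = goal + " | " + " ; ".join(memory[-5:])
        memory.append(call_llm(context))       # one goal-directed thinking step
    return memory

thoughts = lmca_loop("summarize the alignment implications")
print(len(thoughts))  # 3
```

Real systems add tool use, web search, and reflection on failures inside the same loop; the point of the sketch is only that the "agency" lives in ordinary control flow wrapped around the LLM.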
b7ecb436-a2f3-40cb-9071-f061d4253e7e
trentmkelly/LessWrong-43k
LessWrong
AI #56: Blackwell That Ends Well

Hopefully, anyway. Nvidia has a new chip. Also Altman has a new interview. And most of Inflection has new offices inside Microsoft.

TABLE OF CONTENTS

1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Open the book.
4. Clauding Along. Claude continues to impress.
5. Language Models Don’t Offer Mundane Utility. What are you looking for?
6. Fun With Image Generation. Stable Diffusion 3 paper.
7. Deepfaketown and Botpocalypse Soon. Jesus Christ.
8. They Took Our Jobs. Noah Smith has his worst take and commits to the bit.
9. Generative AI in Games. What are the important dangers?
10. Get Involved. EU AI office, IFP, Anthropic.
11. Introducing. WorldSim. The rabbit hole goes deep, if you want that.
12. Grok the Grok. Weights are out. Doesn’t seem like it matters much.
13. New Nvidia Chip. Who dis?
14. Inflection Becomes Microsoft AI. Why buy companies when you don’t have to?
15. In Other AI News. Lots of other stuff as well.
16. Wait Till Next Year. OpenAI employees talk great expectations a year after GPT-4.
17. Quiet Speculations. Driving cars is hard. Is it this hard?
18. The Quest for Sane Regulation. Take back control.
19. The Week in Audio. Sam Altman on Lex Fridman. Will share notes in other post.
20. Rhetorical Innovation. If you want to warn of danger, also say what is safe.
21. Read the Roon. What does it all add up to?
22. Pick Up the Phone. More good international dialogue on AI safety.
23. Aligning a Smarter Than Human Intelligence is Difficult. Where does safety lie?
24. Polls Show People Are Worried About AI. This week’s is from AIPI.
25. Other People Are Not As Worried About AI Killing Everyone. Then there’s why.
26. The Lighter Side. Everyone, reaping.

LANGUAGE MODELS OFFER MUNDANE UTILITY

Ethan Mollick on how he uses AI to aid his writing. The central theme is ‘ask for suggestions in particular places where you are stuck’ and that seems right for most purposes. Sully is predictably
d7f1d093-2f53-4c15-9ff7-0a35f668917e
trentmkelly/LessWrong-43k
LessWrong
Isomorphic agents with different preferences: any suggestions?

In order to better understand how AI might succeed and fail at learning knowledge, I'll be trying to construct models of limited agents (with bias, knowledge, and preferences) that display identical behaviour in a wide range of circumstances (but not all). This means their preferences cannot be deduced merely/easily from observations. Does anyone have any suggestions for possible agent models to use in this project?
09fa7986-97c7-4e23-8a93-fa30edc4802a
trentmkelly/LessWrong-43k
LessWrong
Monthly Roundup #25: December 2024

I took a trip to San Francisco early in December. Ever since then, things in the world of AI have been utterly insane. Google and OpenAI released endless new products, including Google Flash 2.0 and o1. Redwood Research and Anthropic put out the most important alignment paper of the year, on the heels of Apollo’s report on o1. Then OpenAI announced o3. Like the rest of the media, this blog currently is horrendously lacking in o3 content. Unlike the rest of the media, it is not because I don’t realize that This Changes Everything. It is because I had so much in the queue, and am taking the time to figure out what to think about it. That queue includes all the other, non-AI things that happened this past month. So here we are, to kick off Christmas week.

BAD NEWS

John Wentworth reminds us that often people conflate a prediction of what is likely to happen with an assurance of what is going to happen, whereas these are two very different things. And often, whether or not they’re directly conflating the two, they will attempt to convert a prediction (‘I’ll probably come around 9pm’) to an assurance (‘cool can you pick me up on the way?’) in ways that are expensive without realizing they’re expensive.

Your periodic reminder that if you say you’ll make a ton of money and then pursue your dreams, or then advance the causes you care about, the vast majority of the time this does not actually happen. Not never, but, well, hardly ever.

Journalist combines two unrelated statements from Palmer Luckey into an implied larger statement to effectively fabricate a misleading quote. It does seem like journalists are violating the bounded distrust rules more and more often, which at some point means they’re moving the lines involved.
I feel like I’ve shared this graph before but seems worth sharing again (via MR): An important note from Michael Vassar: People rarely see themselves or their group as ‘bad’ or ‘evil,’ but often they do view themselves as ‘winners’ rather
b9148778-d61a-427f-9cba-6999035ec86f
trentmkelly/LessWrong-43k
LessWrong
Delayed Gratification vs. a Time-Dependent Utility Function Ideally, a utility function would be a rational, perfect, constant entity that accounted for all possible variables, but mine certainly isn't. In fact, I'd feel quite comfortable claiming that no humans at the time of writing do. When confronted with the fact that my utility function is non-ideal or - since there's no universal ideal to compare it to - internally inconsistent, I do my best to figure out what to change and do so. The problem with a non-constant utility function, though, is that it makes it hard to maximise total utility. For instance, I am willing to undergo -50 units of utility today in return for +1 utility on each following day indefinitely. What if I accept the -50, but then my utility function changes tomorrow such that I now consider the change to be neutral, or worse, negative per day? Just as plausible is the idea that I be offered a trade that, while not of positive utility according to my function now, will be according to a future function. Just as I would think it a good investment to buy gold if I expected the price to go up but bad if I expected the price to go down, so I have to base my long-term utility trades on what I expect my future functions to be. (Not that dollars don't correlate with units of utility, just that they don't correlate strongly.) How can I know what I will want to do, much less what I will want to have done? If I obtain the outcome I prefer now, but spend more time not preferring it, does that make it a negative choice? Is it a reasonable decision, in order to maximise utility, to purposefully change your definition of utility such that your expected future would maximise it?   What brings this all to mind is a choice I have to make soon. Technically, I've already made it, but I'm now uncertain of that choice and it has to be made final soon. 
This fall I transfer from my community college to a university, where I will focus a significant amount of energy studying Something 1 in order to become trained (and ce
2e294846-5129-4de9-9a4b-f1d22023131d
trentmkelly/LessWrong-43k
LessWrong
Value of Information: 8 examples

ciphergoth just asked what the actual value of Quantified Self/self-experimentation is. This finally tempted me into running value of information calculations on my own experiments. It took me all afternoon because it turned out I didn’t actually understand how to do it and I had a hard time figuring out the right values for specific experiments. (I may not have gotten it right, still. Feel free to check my work!) Then it turned out to be too long for a comment, and as usual the master versions will be on my website at some point. But without further ado!

----------------------------------------

The value of an experiment is the information it produces. What is the value of information? Well, we can take the economic tack and say value of information is the value of the decisions it changes. (Would you pay for a weather forecast about somewhere you are not going to? No. Or a weather forecast about your trip where you have to make that trip, come hell or high water? Only to the extent you can make preparations like bringing an umbrella.) Wikipedia says that for a risk-neutral person, value of perfect information is “value of decision situation with perfect information” - “value of current decision situation”. (Imperfect information is just weakened perfect information: if your information was not 100% reliable but 99% reliable, well, that’s worth 99% as much.)

1 Melatonin

http://www.gwern.net/Zeo#melatonin & http://www.gwern.net/Melatonin

The decision is the binary take or not take. Melatonin costs ~$10 a year (if you buy in bulk during sales, as I did). Suppose I had perfect information it worked; I would not change anything, so the value is $0. Suppose I had perfect information it did not work; then I would stop using it, saving me $10 a year in perpetuity, which has a net present value (at 5% discounting) of $205.
So the value of perfect information is $205, because it would save me from blowing $10 every year for the rest of my life. My melatonin exp
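The perpetuity arithmetic in the melatonin example can be reproduced directly. This check is my own: the post's ~$205 matches continuous discounting, C / ln(1 + r), while the simple discrete perpetuity formula C / r gives $200:

```python
import math

# NPV of saving $10/year forever at a 5% discount rate, two conventions.
C, r = 10.0, 0.05
discrete = C / r                  # standard discrete perpetuity: 200.0
continuous = C / math.log(1 + r)  # continuous discounting: ~204.96
print(round(discrete, 2), round(continuous, 2))
```

Either way, the value of perfect information here is on the order of $200: the savings from never buying melatonin again, realized only in the world where the information says "it doesn't work."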
9e28cae7-6b91-41b3-911c-899182a89498
trentmkelly/LessWrong-43k
LessWrong
o3

See livestream, site, OpenAI thread, Nat McAleese thread. OpenAI announced (but isn't yet releasing) o3 and o3-mini (skipping o2 because of telecom company O2's trademark). "We plan to deploy these models early next year." "o3 is powered by further scaling up RL beyond o1"; I don't know whether it's a new base model.

o3 gets 25% on FrontierMath, smashing the previous SoTA. (These are really hard math problems.[1]) Wow. (The dark blue bar, about 7%, is presumably one-attempt and most comparable to the old SoTA; unfortunately OpenAI didn't say what the light blue bar is, but I think it doesn't really matter and the 25% is for real.[2])

o3 also is easily SoTA on SWE-bench Verified and Codeforces. It's also easily SoTA on ARC-AGI, after doing RL on the public ARC-AGI problems[3] + when spending $4,000 per task on inference (!).[4] (And at less inference cost.) ARC Prize says:

> At OpenAI's direction, we tested at two levels of compute with variable sample sizes: 6 (high-efficiency) and 1024 (low-efficiency, 172x compute).

OpenAI has a "new alignment strategy." (Just about the "modern LLMs still comply with malicious prompts, overrefuse benign queries, and fall victim to jailbreak attacks" problem.) It looks like RLAIF/Constitutional AI. See Lawrence Chan's thread.[5]

OpenAI says "We're offering safety and security researchers early access to our next frontier models"; yay.

o3-mini will be able to use a low, medium, or high amount of inference compute, depending on the task and the user's preferences. o3-mini (medium) outperforms o1 (at least on Codeforces and the 2024 AIME) with less inference cost. GPQA Diamond:

1. ^ Update: most of them are not as hard as I thought: > There are 3 tiers of difficulty within FrontierMath: 25% T1 = IMO/undergrad style problems, 50% T2 = grad/qualifying exam style [problems], 25% T3 = early researcher problems.
2. ^ My guess is it's consensus@128 or something (i.e. write 128 answers and submit the most commo
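For reference, consensus@k (the guess above about the light blue bar) just means sampling k answers and submitting the most common one. A toy sketch, with canned strings standing in for sampled model outputs:

```python
from collections import Counter

# Toy consensus@k: sample k candidate answers, submit the most common.
def consensus(samples):
    return Counter(samples).most_common(1)[0][0]

samples = ["42", "41", "42", "42", "7", "42", "41"]
print(consensus(samples))  # 42
```

This is why a consensus@128 bar can sit well above a one-attempt bar on the same model: majority voting filters out uncorrelated wrong answers.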
e8d50b59-760d-4bc2-b28c-6f8a5d2b8997
trentmkelly/LessWrong-43k
LessWrong
Anthropic Observations

Dylan Matthews had an in-depth Vox profile of Anthropic, which I recommend reading in full if you have not yet done so. This post covers that one.

ANTHROPIC HYPOTHESIS

The post starts by describing an experiment. Evan Hubinger is attempting to create a version of Claude that will optimize in part for a secondary goal (in this case ‘use the word “paperclip” as many times as possible’) in the hopes of showing that RLHF won’t be able to get rid of the behavior. Co-founder Jared Kaplan warns that perhaps RLHF will still work here.

> Hubinger agrees, with a caveat. “It’s a little tricky because you don’t know if you just didn’t try hard enough to get deception,” he says. Maybe Kaplan is exactly right: Naïve deception gets destroyed in training, but sophisticated deception doesn’t. And the only way to know whether an AI can deceive you is to build one that will do its very best to try.

The problem with this approach is that an AI that ‘does its best to try’ is not doing the best that the future dangerous system will do. So by this same logic, a test on today’s systems can only show your technique doesn’t work or that it works for now; it can never give you confidence that your technique will continue to work in the future. They are running the test because they think that RLHF is so hopeless we can likely already prove, at current optimization levels, that it is doomed to failure. Also, the best try to deceive you will sometimes be, of course, to make you think that the problem has gone away while you are testing it.

This is the paradox at the heart of Anthropic: If the thesis that you need advanced systems to do real alignment work is true, why should we think that cutting edge systems are themselves currently sufficiently advanced for this task?

> But Anthropic also believes strongly that leading on safety can’t simply be a matter of theory and white papers — it requires building advanced models on the cutting edge of deep learning.
That, in turn, requires lots of
f6093273-f2ab-4c5a-b1c3-36dce3e89cae
trentmkelly/LessWrong-43k
LessWrong
On Systems - Living a life of zero willpower (This post is about the mindset of thinking in systems and routines. This post is mostly from my perspective, but the technique is super subjective. I’ve written up a lesson plan aimed more at teaching the tools for how to find systems for your problems) Introduction This is a post about systems and willpower. Willpower is a pretty fuzzy concept; for the purposes of this post I will define it as “that which is expended when I get myself to do something that isn’t the default action”. I find thinking in terms of willpower a key lens for examining my life, because I consider myself to be a person with unusually low willpower (at least, relative to my social circle of bullshit high-achievers). And this is really, really bad, because willpower is the obvious tool for taking the actions I want to take to achieve my goals. This post is my attempt to outline my various tools and hacks for getting around this, and taking the actions I care about anyway. The key underlying idea here is that willpower is a limited resource. Every time I need to do something a bit unpleasant, every time I need to make a decision, I expend a bit of willpower. And when I run out of willpower, I just feel super tired, everything feels like it would take far too much energy, and it’s easy to fall into un-fun spirals of procrastination. This is especially bad, because these spirals of procrastination aren’t fun. There are “unproductive” things I can do that rejuvenate me, like going for a walk, meditating or reading a good book. But when I have zero willpower, I tend to do far less rejuvenating things, like scrolling endlessly through Reddit. It’s not that I’m torn between productivity and relaxation - it’s easy to fall into the trap of getting neither. So figuring out how to solve this is one of my key bottlenecks. And if you relate to the descriptions so far, I hope these tools can be useful to you too! 
It’s easy to slip into guilt/self-hate about this kind of thing, and seeing it as a p
0458ea51-95ac-4634-aae8-131248610abc
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Response to Oren Etzioni's "How to know if artificial intelligence is about to destroy civilization" Oren Etzioni recently wrote a [popular article titled "How to know if artificial intelligence is about to destroy civilization."](https://www.technologyreview.com/s/615264/artificial-intelligence-destroy-civilization-canaries-robot-overlords-take-over-world-ai/) I think it's a good idea to write publicly available responses to articles like this, in case interested people are googling around looking for such. If I get lucky, perhaps Oren himself will come across this! Then we could have a proper discussion and we'd probably both learn things. For most readers of LW, it's probably not worth reading further, since you probably already agree with what I'm saying. Here is the full text of the article, interspersed with my comments: > Could we wake up one morning dumbstruck that a super-powerful AI has emerged, with disastrous consequences? Books like *Superintelligence* by Nick Bostrom and *Life 3.0* by Max Tegmark, as well as [more recent articles](https://www.nytimes.com/2019/10/08/opinion/artificial-intelligence.html), argue that malevolent superintelligence is an existential risk for humanity. > But one can speculate endlessly. It’s better to ask a more concrete, empirical question: What would alert us that superintelligence is indeed around the corner? It is unfair to claim that Bostrom is speculating whereas your question is concrete and empirical. For one thing, Bostrom and others you denigrate have already asked the same question you now raise; there is not a consensus as to the answer yet, but plenty of arguments have been made. See e.g. Yudkowsky ["There's No Fire Alarm for Artificial General Intelligence"](https://intelligence.org/2017/10/13/fire-alarm/) More importantly though, how is your question any less speculative than the others? 
You too are making guesses about what future technologies will come when and in what order, and you too have zero historical data to draw from. Perhaps you could make arguments based on analogy to other technologies -- e.g. as Grace and Christiano have argued about "slow takeoff" and "likelihood of discontinuous progress" but funnily enough (looking ahead at the rest of the article) you don't even do that; you just rely on your intuition! > We might call such harbingers canaries in the coal mines of AI. If an artificial-intelligence program develops a fundamental new capability, that’s the equivalent of a canary collapsing: an early warning of AI breakthroughs on the horizon. > Could the famous Turing test serve as a canary? The test, invented by Alan Turing in 1950, posits that human-level AI will be achieved when a person can’t distinguish conversing with a human from conversing with a computer. It’s an important test, but it’s not a canary; it is, rather, the sign that human-level AI has already arrived. Many computer scientists believe that if that moment does arrive, superintelligence will quickly follow. We need more intermediate milestones. If you think "many computer scientists believe X about AI" is a good reason to take X seriously, then... well, you'll be interested to know that [many computer scientists believe superintelligent AI will come in the next 15 years or so](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines#Expert_elicitation), and also [many computer scientists believe AI is an existential risk.](https://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/) > Is AI’s performance in games such as [Go](https://deepmind.com/research/case-studies/alphago-the-story-so-far), [poker](https://www.cmu.edu/news/stories/archives/2019/july/cmu-facebook-ai-beats-poker-pros.html) or [Quake 3](https://deepmind.com/blog/article/capture-the-flag-science), a canary? It is not. 
The bulk of so-called artificial intelligence in these games is actually *human* work to frame the problem and design the solution. AlphaGo’s victory over human Go champions was a credit to the talented human team at DeepMind, not to the machine, which merely ran the algorithm the people had created. This explains why it takes years of hard work to translate AI success from one narrow challenge to the next. Even AlphaZero, which learned to play world-class Go in a few hours, hasn’t substantially broadened its scope since 2017. Methods such as deep learning are general, but their successful application to a particular task requires extensive human intervention. I agree with you here; rapid progress in AI gaming is evidence, but not nearly conclusive evidence, that human-level AGI is coming soon. > More broadly, machine learning is at the core of AI’s successes over the last decade or so. Yet the term “machine learning” is a misnomer. Machines possess only a narrow sliver of humans’ rich and versatile learning abilities. To say that machines learn is like saying that baby penguins know how to fish. The reality is, adult penguins swim, capture *fish*, digest it, regurgitate into their beaks, and place morsels into their children’s mouths. AI is likewise being spoon-fed by human scientists and engineers. What about GPT-2? There was relatively little spoon-feeding involved there; they just gave it pretty much the whole Internet and told it to start reading. For comparison, I received much more spoon-feeding myself during my education, yet I managed to reach human-level intelligence pretty quickly! Anyhow, I agree that machines can't currently learn as well as humans can. But even you must admit that over the past few decades there has been steady progress in that direction. And it seems that progress is and will continue. 
> In contrast to machine learning, human learning maps a personal motivation (“I want to drive to be independent of my parents”) to a strategic learning plan (“Take driver’s ed and practice on weekends”). A human formulates specific learning targets (“Get better at parallel parking”), collects and labels data (“The angle was wrong this time”), and incorporates external feedback and background knowledge (“The instructor explained how to use the side mirrors”). Humans identify, frame, and shape learning problems. None of these human abilities is even remotely replicated by machines. Machines can perform superhuman statistical calculations, but that is merely the last mile of learning. I think I agree with you here, that we don't really have an impressive example of an AI organically identifying, framing, and shaping learning problems. Moreover I agree that this probably won't happen in the next ten years, based on my guesses about current rates of progress. And I agree that this would be a canary -- once we have impressive AI doing this organically, then human-level AGI is probably very close. But shouldn't we start preparing for human-level AGI *before* it is probably very close? > The automatic formulation of learning problems, then, is our first canary. It does not appear to be anywhere close to dying. > Self-driving cars are a second canary. They are further in the future than anticipated by boosters like [Elon Musk](https://www.wired.com/story/elon-musk-tesla-full-self-driving-2019-2020-promise/). AI can fail catastrophically in atypical situations, like when a person in a wheelchair is crossing the street. Driving is far more challenging than previous AI tasks because it requires making life-critical, real-time decisions based on both the unpredictable physical world and interaction with human drivers, pedestrians, and others. 
Of course, we should deploy limited self-driving cars once they reduce accident rates, but only when human-level driving is achieved can this canary be said to have keeled over. > AI doctors are a third canary. AI can already analyze medical images with superhuman accuracy, but that is only a narrow slice of a human doctor’s job. An AI doctor would have to interview patients, consider complications, consult other doctors, and more. These are challenging tasks that require understanding people, language, and medicine. Such a doctor would not have to fool a patient into thinking it is human—that’s why this is different from the Turing test. But it would have to approximate the abilities of human doctors across a wide range of tasks and unanticipated circumstances. Again, I agree that we are far from being able to do that -- being a doctor requires a lot of very general intelligence, in the sense that you have to be good at a lot of different things simultaneously and also good at integrating those skills. And I agree that once we have AI that can do this, human-level AGI is not far away. But again, shouldn't we get started preparing *before* that point? > And though the Turing test itself is not a good canary, limited versions of the test could serve as canaries. Existing AIs are unable to understand people and their motivations, or even basic physical questions like “Will a jumbo jet fit through a window?” We can administer a partial Turing test by conversing with an AI like Alexa or Google Home for a few minutes, which quickly exposes their limited understanding of language and the world. Consider a very simple example based on the [Winograd schemas](https://www.technologyreview.com/s/615126/ai-common-sense-reads-human-language-ai2/) proposed by computer scientist Hector Levesque. I said to Alexa: “My trophy doesn’t fit into my carry-on because it is too large. 
What should I do?” Alexa’s answer was “I don’t know that one.” Since Alexa can’t reason about sizes of objects, it can’t decide whether “it” refers to the trophy or to the carry-on. When AI can’t understand the meaning of “it,” it’s hard to believe it is poised to take over the world. If Alexa were able to have a substantive dialogue on a rich topic, that would be a fourth canary. Yep. Same response. > Current AIs are idiots savants: successful on narrow tasks, such as playing Go or categorizing MRI images, but lacking the generality and versatility of humans. Each idiot savant is constructed manually and separately, and we are decades away from the versatile abilities of a five-year-old child. The canaries I propose, in contrast, indicate inflection points for the field of AI. > Some theorists, like Bostrom, argue that we must nonetheless plan for very low-probability but high-consequence events as though they were inevitable. The consequences, they say, are so profound that our estimates of their likelihood aren’t important. This is a silly argument: it can be used to justify just about anything. It is a modern-day version of the argument by the 17th-century philosopher Blaise Pascal that it is worth acting as if a Christian God exists because otherwise you are at risk of an everlasting hell. He used the infinite cost of an error to argue that a particular course of action is “rational” even if it is based on a highly improbable premise. But arguments based on infinite costs can support contradictory beliefs. For instance, consider an anti-Christian God who promises everlasting hell for every Christian act.  That’s highly improbable as well; from a logical point of view, though, it is just as reasonable a wager as believing in the god of the Bible. This contradiction shows a flaw in arguments based on infinite costs. First of all, this isn't an argument based on tiny probabilities of infinite costs. 
The probability that human-level AI will arrive soon may be small, but it is much, much higher than other probabilities that you regularly prepare for, such as the probability that you will be in a car accident today, or the probability that your house will burn down. If you think this is a Pascal's Wager, then you must think buckling your seatbelt and buying insurance are too. Secondly, *again,* the rationale for preparing isn't just that AGI might arrive soon, it's also that *it is good to start preparing before AGI is about to arrive.* Suppose you are right and there is only a tiny, tiny chance that AGI will arrive before these canaries. Still, preparing for AGI is an important and difficult task; it might take several -- even many! -- years to complete. So we should get started now. > My catalogue of early warning signals, or canaries, is illustrative rather than comprehensive, but it shows how far we are from human-level AI. If and when a canary “collapses,” we will have ample time before the emergence of human-level AI to design robust “off-switches” and to identify red lines we don’t want AI to cross. What? No we won't, you yourself said that human-level AGI will not be far away when these canaries start collapsing. And moreover, you seem to think here that preparation will be easy: All we need to do is design some off-switches and specify some red lines... This is really naive. You should read the literature, which contains lots of detailed discussion of why solutions like that won't work. I suggest starting with Bostrom's *Superintelligence*, one of the early academic works on the topic, and then branching out to skim the many newer developments that have arisen since then. > AI eschatology without empirical canaries is a distraction from addressing existing issues like how to regulate AI’s impact on employment or ensure that its use in criminal sentencing or credit scoring doesn’t discriminate against certain groups. 
> As Andrew Ng, one of the world’s most prominent AI experts, has said, “Worrying about AI turning evil is a little bit like worrying about overpopulation on Mars.” Until the canaries start dying, he is entirely correct. Here's another argument that has the same structure as your argument: "Some people think we should give funding to the CDC and other organizations to help prepare the world for a possible pandemic. But we have no idea when such a pandemic will arise--and moreover, we can be sure that there will be "canaries" that will start dying beforehand. For example, before there is a worldwide pandemic, there will be a local epidemic that infects just a few people but seems to be spreading rapidly. At that point, it makes sense to start funding the CDC. But anything beforehand is merely un-empirical speculation that distracts from addressing existing issues like how to stay safe from the flu. Oh, what's that? This COVID-2019 thing looks like it might become a pandemic? OK sure, now we should start sending money to the CDC. [But Trump was right to slash its budget just a few months earlier](https://www.theguardian.com/world/2020/jan/31/us-coronavirus-budget-cuts-trump-underprepared), since at that point the canary still lived." Also, Andrew Ng, the concern is not that AI will turn evil. That's a straw man which you would realize is a straw man if you read the literature. Instead, the concern is that AI will turn competent. [As leading AI expert Stuart Russell puts it, we are currently working very hard to build an AI that is smarter than us. What if we succeed?](https://www.penguinrandomhouse.com/books/566677/human-compatible-by-stuart-russell/) Well then, by the time that happens, we'd better have *also* worked hard to make it share our values. Otherwise we are in deep trouble.
4562a425-0360-447f-85b3-f8a8c1b95808
trentmkelly/LessWrong-43k
LessWrong
MIRI’s 2019 Fundraiser (Crossposted from the MIRI blog) MIRI’s 2019 fundraiser is now live, December 2–31! Over the past two years, huge donor support has helped us double the size of our AI alignment research team. Hitting our $1M fundraising goal this month will put us in a great position to continue our growth in 2020 and beyond, recruiting as many brilliant minds as possible to take on what appear to us to be the central technical obstacles to alignment. Our fundraiser progress, updated in real time (including donations made during the Facebook Giving Tuesday event): Donate MIRI is a CS/math research group with a goal of understanding how to reliably “aim” future general-purpose AI systems at known goals. For an introduction to this research area, see Ensuring Smarter-Than-Human Intelligence Has A Positive Outcome and Risks from Learned Optimization in Advanced Machine Learning Systems. For background on how we approach the problem, see 2018 Update: Our New Research Directions and Embedded Agency. At the end of 2017, we announced plans to substantially grow our research team, with a goal of hiring “around ten new research staff over the next two years.” Two years later, I’m happy to report that we’re up eight research staff, and we have a ninth starting in February of next year, which will bring our total research team size to 20.[1] We remain excited about our current research directions, and continue to feel that we could make progress on them more quickly by adding additional researchers and engineers to the team. As such, our main organizational priorities remain the same: push forward on our research directions, and grow the research team to accelerate our progress. While we’re quite uncertain about how large we’ll ultimately want to grow, we plan to continue growing the research team at a similar rate over the next two years, and so expect to add around ten more research staff by the end of 2021. 
Our projected budget for 2020 is $6.4M–$7.4M, with a point estimate of $6.