id
stringlengths
36
36
source
stringclasses
15 values
formatted_source
stringclasses
13 values
text
stringlengths
2
7.55M
8ccdd035-9a3c-4e27-bace-0588fbb13efe
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
AI Risk for Epistemic Minimalists *Financial status: This is independent research, now supported by a grant. I welcome further* [*financial support*](https://www.alexflint.io/donate.html)*.* *Epistemic status: This is an attempt to use only very robust arguments.* --- Outline ------- * I outline a case for concern about AI that does not invoke concepts of agency, goal-directedness, or consequential reasoning, does not hinge on single- or multi-principal or single or multi-agent assumptions, does not assume fast or slow take-off, and applies equally well to a world of emulated humans as to de-novo AI. * The basic argument is about the power that humans will temporarily or permanently gain by developing AI systems, and the history of quick increases in human power. * In the first section I give a case for paying attention to AI at all. * In the second section I give a case for being concerned about AI. * In the third section I argue that the business-as-usual trajectory of AI development is not satisfactory. * In the fourth section I argue that there are things that can be done now. The case for attention ---------------------- We already have powerful systems that influence the future of life on the planet. The systems of finance, justice, government, and international cooperation are things that we humans have constructed. The specific design of these systems has influence over the future of life on the planet, meaning that there are small changes that could be made to these systems that would have an impact on the future of life on the planet much larger than the change itself. In this sense I will say that these systems are powerful. Now every single powerful system that we have constructed up to now uses humans as a fundamental building-block. The justice system uses humans as judges and lawyers and administrators. At a mechanical level, the justice system would not execute its intended function without these building-block humans. If I turned up at a present-day court with a lawsuit expecting a summons to be served upon the opposing party but all the humans in the justice system were absent then the summons would not end up being served. The Google search engine is a system that has some power. Like the justice system, it requires humans as building-blocks. Those human building-blocks maintain the software, data centers, power generators, and internet routers that underlie it. Although individual search queries can be answered without human intervention, the transitive closure of dependencies needed for the system to maintain its power includes a huge number of humans. Without those humans, the Google search engine, like the justice system, would stop functioning. The human building-blocks within a system do not in general have any capacity to influence or shut down the system. Nor are the actions of a system necessarily connected to the interests of its human building-blocks. There are some human-constructed systems in the world today that do not use humans as building-blocks, but none of them have power in their own right. The [Curiosity Mars rover](https://en.wikipedia.org/wiki/Curiosity_(rover)) is a system that can perform a few basic functions without any human intervention, but if it has any influence over the future, it is via humans collecting and distributing the data captured by it. The [Clock of the Long Now](https://en.wikipedia.org/wiki/Clock_of_the_Long_Now), if and when constructed, will keep time without humans as building-blocks, but, like the Mars rovers, will have influence over the future only via humans observing and discussing it. Yet we may soon build systems that do influence the future of life on the planet, and do not require humans as building-blocks. The field concerned with building such systems is called artificial intelligence and the current leading method of engineering is called machine learning. There is much debate about what exactly these systems will look like, and in what way they might pose dangers to us. But before taking any view about whether these systems will look like agents or tools or AI services, or whether they will be goal-directed or influence-seeking, or whether they will be developed quickly or slowly, or whether we will end up with one powerful system or many, we might ask: what is the least we need to believe to justify attending to the development of AI among all the possible things that we might attend to? And my sense is just this: we may soon transition from a world where all systems that have power over the future of life on the planet are intricately tied to human building-blocks to a world where there are some systems that have power over the future of life on the planet without relying on human building-blocks. This alone, in my view, justifies attention in this area, and it does not rest in any way on views about agency or goals or intelligence. So here is the argument up to this point: > > Among everything in the world that we might pay attention to, it makes sense to attend to that which has the greatest power over the future of life on the planet. Today, the systems that have power over the future of life on the planet rely on humans as building-blocks. Yet soon we may construct systems that have power but do not rely on humans as building-blocks. Due to the significance of this shift we should attend to the development of AI and check whether there is any cause for concern, and, if so, whether those concerns are already being adequately addressed, and if not, whether there is anything we can do. > > > The case for concern -------------------- So we have a case for paying some attention to AI among all the things we could pay attention to, but I have not yet made a case for being *concerned* about AI development. So far it is as if we discovered an object in the solar system with a shape and motion quite unlike a planet or moon or comet. This would justify some attention by humans, but on this evidence alone it would not become a top concern, much less a top cause area. So how do we get from attention to concern? Well, the thing about power is that *humans already seek it*. In the past, when it has become technically feasible to build a certain kind of system that exerts influence over the future, humans have tended, by default, to eventually deploy such systems in service of their individual or collective goals. There are some classes of powerful systems that we have coordinated to avoid deploying, and if we do this for AI then so much the better, but by default we ought to expect that once it becomes possible to construct a certain class of powerful system, humans will deploy such systems in service of their goals. Beyond that, humans are quite good at incrementally improving things that we can tinker with. We have made incremental improvements to airplanes, clothing, cookware, plumbing, and cell phones. We have not made incremental improvements to human minds because we have not had the capacity to tinker in a trial-and-error fashion. Since all powerful systems in the world today use humans as building blocks, and since we do not presently have the capacity to make incremental improvements to human minds, there are no powerful systems in the world today that are subject to incremental improvement at all levels. In a world containing some powerful systems that do not use humans as building blocks, there will be *some powerful systems that are subject to incremental improvements at all levels*. In fact the development of AI may open the door to making incremental improvements to human minds too. In this case *all* powerful systems in the world would be subject to incremental improvement. But we do not need to take a stance on whether this will happen or not. In either case the situation we will be in is one in which humans are making incremental improvements to some systems that have power in the world, and we therefore ought to expert that the power of these systems will therefore increase on a timescale of years or decades. Now at this point it is sometimes argued that a transition of power from humans to non-human systems will take place, due to the very high degree of power that these non-human systems will eventually have, and due to the difficulty of the alignment problem. But I do not think that any such argumentative move is necessary to justify concern, because whether humans eventually lose power or not, what is much more certain is that in a world where powerful systems are being incrementally improved, there will be a period during which *humans gain power quickly*. It might be that humans gain power for mere minutes before losing it to a recursively self-improving singleton, or it may be that humans gain power for decades before losing it to an inscrutable web of AI services, or it may be that humans gain power and hold onto it until the end of the cosmos. But under any of these scenarios, humans seem destined to gain power on a timescale of years or decades, which is the pace at which we usually make incremental improvements to things. What happens when humans gain power? Well as of today, existential risk exists. It would not exist if humans had not gained power over the past few millennia, or at least it would be vastly reduced. Let’s ignore existential risk due to AI in order to make sure the argument is non-circular. Still, the point goes through. There is much good to say about humans. This is not a moral assessment of humanity. But can anyone deny that humans have gained power over the past few millennia, and that, as a result of that, existential risk is much increased today compared to a few millennia ago? If humans *quickly gain power*, it seems that, by default, we ought to presume that existential risk will also increase. Now, there are certainly *some ways* to increase human power quickly without increasing existential risk, including by skillful AI development. There have certainly been *some times and places* where rapid increases in human power have led to decreases in existential risk. But this part of the argument is about what happens by default, and the ten thousand year trendline of the "existential risk versus human power" graph is very much up-and-to-the-right. Therefore I think rapidly increasing human power will increase existential risk. We do not need to take a stance on how or whether humans might later lose power in order for this to go through. We merely need to see that, among all the complicated goings-on in the world today, the development of AI is the thing most likely to confer a rapid increase in power to humans, and on the barest historical precedent, that is already cause for both attention and concern. So here is the case for concern: > > If humans learn to build systems that do influence the future of life on the planet but do not require human building-blocks, then they are likely to make incremental improvements to these systems over a timescale of years or decades, and thereby increase their power over the future of life on the planet on a similar timescale. This should concern us because quick increases in human power have historically led to increases in existential risk. We should therefore investigate whether these concerns are already being adequately addressed, and, if not, whether there is anything we can do. > > > I must stress that not all ways of increasing human power lead to increases in existential risk. It is as if we were considering giving a teenager more power over their own life. Suppose we suddenly gave this teenager the power not just of vast wealth and social influence, but also the capacity to remake the physical world around them as they saw fit. For typical teenagers under typical circumstances, this would not go well. The outcomes would not likely be in the teenager’s own best interests, much less the best interests of all life on the planet. Yet there probably *are* ways of conferring such power to this teenager, say by doing it slowly and in proportion to increases in the teenager’s growing wisdom, or by giving the teenager a wise genie that knows what is in the teenager’s best interest and will not do otherwise. In the case of AI development, we are collectively the teenager, and we must find the wisdom to see that we are not well-served by rapid increases in our own power. The case for intervention ------------------------- We have a case for apriori concern about the development of a particular technology that may, for a time, greatly increase human power. But perhaps humanity is already taking adequate precautions, in which case marginal investment might be of greater benefit in some other area. What is the epistemically minimal case that humanity is not already on track to mitigate the dangers of developing systems that have power over the future of life on the planet without requiring humans as building-blocks? Well consider: right now we appear to be rolling out machine learning systems at a rate that is governed by economic incentives, which is to say that the rate of machine learning rollout appears to be determined primarily by the supply of the various factors of production, and the demand for machine learning systems. There is seemingly no gap between the rate at which we *could* roll out machine learning systems if we allowed ordinary economic incentives to govern, and the rate at which we *are* rolling out those systems. So is it more likely that humanity is exercising diligence and coordinated restraint in the rollout of machine learning systems, or is it more likely that we are proceeding haphazardly? Well imagine if we were rolling out nuclear weapons at a rate determined by ordinary economic incentives. From a position of ignorance, it’s *possible* that this rate of rollout would have been selected by a coordinated humanity as the wisest among all possible rates of rollout. But it’s much more likely that this rate is the result of haphazard discoordination, since from economic arguments we would expect the rate of rollout of any technology to be governed by economic incentives in the absence of a coordinated effort, whereas there is no reason to expect a coordinated consideration of the wisest possible rate to settle on this particular rate. Now, if there were a gap between the "economic default" rate of rollout of machine learning systems and the actual rate of rollout then might still question whether we were on track for a safe and beneficial transition to a world containing systems that influence the future of life on the planet without requiring humans as building-blocks. It might be that we have merely placed haphazard regulation on top of haphazard AI development. So the existence of a gap is not a sufficient condition for satisfaction with the world’s handling of AI development. But the absence of any such gap does appear to be evidence of the absence of a well-coordinated civilization-level effort to select the wisest possible rate of rollout. This suggests that the concerning situation in the previous section is, at a minimum, not already *completely* addressed by our civilization. It remains to be seen whether there is anything we can do about it. The argument here is about whether the present situation is already satisfactory or not. So here is the argument for intervention: > > Humans are developing systems that appear destined to quickly increase human power over the future of life on the planet at a rate that is consistent with an economic equilibrium. This suggests that human civilization lacks the capacity to coordinate on a rate motivated by safety and long-term benefit. While other kinds of interventions may be taking place, the absence of this particular capacity suggests that there is room to help. We should therefore check whether there is anything that can be done. > > > Now it may be that there is a coordinated civilization-level effort that is taking measures other than selecting a rate of machine learning rollout that is different from the economic equilibrium. Yes, this is possible. But the question is why our civilization is not coordinating around a different rate of machine learning rollout if it has the capacity to do so. Is it that the economic equilibrium is in fact the wisest possible rate? Why would that be? Or is it that our civilization is choosing not to select the wisest possible rate? Why? The best explanation seems to be that our civilization does not presently have an understanding of which rates of machine learning rollout are most beneficial, or the capacity to coordinate around a selected rate. It may also be that we navigate the development of powerful systems that do not require humans as building-blocks without ever coordinating around a rate of rollout different from the economic equilibrium. Yes this is possible, but the question we are asking here is whether humanity is already on track to safely navigate the development of powerful systems that do not require humans as building-blocks, and whether our efforts would therefore be better utilized elsewhere. The absence of the capacity to coordinate around a rate of rollout suggests that there is at least one very important civilizational capacity that we might help develop. The case for action ------------------- Finally, the most difficult question of all: is there anything that can be done? I don’t have much to say here other than the following very general point: It is very strong to claim that nothing can be done about a thing because there are many possible courses of action, and if even one of them is even a little bit effective then there is something that can be done. To rule out all possible courses of action requires a very thorough understanding of the governing dynamics of a situation and a watertight impossibility argument. Perhaps there is nothing that can be done, for example, about the heat death of the universe. We have some understanding of physics and we have strong arguments from thermodynamics, and even on this matter there is some room for doubt. We have nowhere near that level of understanding about the dynamics of AI development, and therefore we should expect on priors that among all the possible courses of actions, there are some that are effective. Now you may doubt whether it is possible to *find* an effective course of action. But again, claiming that it is impossible to find an effective course of action implies that among all the ways that you might try to find an effective course of action, none of them will succeed. This is the same impossibility claim as before, only now it concerns the process of finding an effective course of action rather than the process of averting AI risk. Once again it is a very strong claim that requires a very strong argument, since if even one way of searching for an effective course of action would succeed, then it is possible to find an effective course of action. Now you may doubt that it is possible to find a way to search for an effective course of action. Around and around we could go with this. Each time you express doubt I would point out that it is not justified by anything that is objectively impossible. What, then, is the real cause of your doubt? One thing that can always be done at an individual level is to make a thing the top priority in our life, and to become willing to let go of all else in service of it. At least then if a viable course of action does become apparent, we will certainly be willing to take it. Conclusion ---------- In the early days of AI alignment there was much discussion about fast versus slow take-off, and about recursive self-improvement in particular. Then we saw that *the situation is concerning either way*, so we stopped predicating our arguments on fast take-off, not because we concluded that fast take-off arguments were wrong, but because we saw that the center of the issue lay elsewhere. Today there is much discussion in the alignment community about goal-directedness and agency. I think that a thorough understanding of these issues is central to a solution to the alignment problem, but, like recursive self-improvement, I do not think it is central to the problem itself. I therefore expect discussions of goal-directedness and agency to go the way of fast take-off: not dismissed as wrong, but de-emphasized as an unnecessary predicate. There is also discussion recently about scenarios involving single versus multiple AI systems governed by single versus multiple principals. Andrew Critch has [argued](https://www.lesswrong.com/posts/WjsyEBHgSstgfXTvm/power-dynamics-as-a-blind-spot-or-blurry-spot-in-our) that more attention is warranted to "multi/multi" scenarios in which multiple principals govern multiple powerful AI systems. Amongst the rapidly branching tree of possible scenarios it is easy to doubt whether one has adequately accounted for the premises needed to get to a particular node. It may therefore be helpful to lay out the part of the argument that applies to all branches, in order that we have some epistemic ground to stand on as we explore more nuance. I hope this post helps in this regard. Appendix: Agents versus institutions ------------------------------------ One of the ways that we could build systems that have power over the future of life on the planet without relying on human building-blocks is by building goal-directed systems. Perhaps such goal-directed systems would resemble agents, and we would interact with them as intelligent entities, as [Richard Ngo describes in AGI Safety from First Principles](https://www.lesswrong.com/s/mzgtmmTKKn5MuCzFJ/p/8xRSjC76HasLnMGSf). A different way that we could build systems that have power over the future of life on the planet without relying on human building-blocks is by gradually automating factories, government bureaucracies, financial systems, and eventually justice systems as [Andrew Critch describes](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic). In this world we are not so much interacting with AI as a second species but more as the institutional and economic water in which we humans swim, in the same way that we don’t think of the present-day finance or justice systems as agents, but more like a container in which agents interact. Or perhaps the first systems that will have power over the future of life on the planet without relying on human building-blocks will be emulations of human minds, as Robin Hanson describes in Age of Em. In this case, too, humans would gain the capacity to tinker with all parts of some systems that have power over the future of life on the planet, and through ordinary incremental improvement become, for a time, extremely powerful. These possibilities are united as avenues by which humans could quickly increase their power by building systems that have both influence over the future of life on the planet, and are subject to incremental improvement at all levels. Each scenario suggests particular ways that humans might later lose power, but instead of taking a strong view on the loss of power we can see that a quick increase in human power, however temporary, is, on historical precedent, already a cause for concern.
14352025-0e1d-48c1-b2e2-07ec7b0e8572
trentmkelly/LessWrong-43k
LessWrong
Can Large Language Models effectively identify cybersecurity risks?   TL;DR I was interested in the ability of LLMs to discriminate input scenarios/stories that carry high vs low cyber risk, and found that it is one of the “hidden features” present in most later layers of Mistral7B.  I developed and analyzed “linear probes” on hidden activations, and found confidence that the model generally "senses when something is up” in a given input text, vs low risk scenarios (F1>0.85 for 4 layers; AUC in some layers exceeds 0.96).  The top neurons activating in risky scenarios also do have security-oriented effect on outputs, most increasing words (tokens) like “Virus” or “Attack”, and questioning “necessity” or likelihood.  These findings provide some initial evidence that "trust" in LLMs, both to respond conversationally with risk awareness as well as developing LLM-based risk assessment systems may be reasonable (here, I do not address design/architecture efforts and how they might improve signal/noise tradeoffs). Neuron activation patterns in most layers of Mistral7B (each with 14336 neurons) natively contain the indications needed to correctly discriminate the riskiest of two very similar scenario texts. Intro & motivation With the help of the AI Safety Fundamentals / Alignment course, I enjoyed learning about cutting-edge research on the risks of AI large language models (LLMs) and mitigations that can keep their growing capabilities aligned to human needs and safety. For my capstone project, I wanted to connect AI (transformer-based generative models) specifically to cybersecurity for two reasons: 1. Over 12 years of working in security, I've seen “AI” interest only accelerating within security and generally, 2. but we’re still (rightfully) skeptical of current models’ reliability: LLMs have unique risks and failure modes, including accuracy, injection and sycophancy (rolling with whatever the user seems to suggest). I settled on this “mechanistic interpretability” idea: finding whether, where, and how LLMs were generally se
c088fe38-a963-4a71-90f7-00790612fd5a
trentmkelly/LessWrong-43k
LessWrong
Retrospective: PIBBSS Fellowship 2023 Between June and September 2023, we (Nora and Dusan) ran the second iteration of the PIBBSS Summer Fellowship. In this post, we share some of our main reflections about how the program went, and what we learnt about running it.  We first provide some background information about (1) The theory of change behind the fellowship, and (2) A summary of key program design features. In the second part, we share our reflections on (3) how the 2023 program went, and (4) what we learned from running it.  This post builds on an extensive internal report we produced back in September. We focus on information we think is most likely to be relevant to third parties, in particular: * People interested in forming opinions about the impact of the PIBBSS fellowship, or similar fellowship programs more generally * People interested in running similar programs, looking to learn from mistakes that others made or best practices they converged to Also see our reflections on the 2022 fellowship program. If you have thoughts on how we can improve, you can use this name-optional feedback form.  Background Fellowship Theory of Change Before focusing on the fellowship specifically, we will give some context on PIBBSS as an organization.  PIBBSS overall PIBBSS is a research initiative focused on leveraging insights and talent from fields that study intelligent behavior in natural systems to help make progress on questions in AI risk and safety. To this aim, we run several programs focusing on research, talent and field-building.  The focus of this post is our fellowship program - centrally a talent intervention. We ran the second iteration of the fellowship program in summer 2023, and are currently in the process of selecting fellows for the 2024 edition.  Since PIBBSS' inception, our guesses for what is most valuable to do have evolved. Since the latter half of 2023, we have started taking steps towards focusing on more concrete and more inside-view driven research directions. To
f8e73403-89e2-4f04-9dbe-29eecb598a2d
trentmkelly/LessWrong-43k
LessWrong
EA Forum Creative Writing Contest: $10,000 in prizes for good stories We just launched a creative writing contest on the Effective Altruism Forum. Stories like HPMOR and The Fable of the Dragon-Tyrant have been massively impactful, and we'd like to see more work in that vein — please consider submitting something! Note that you can also submit past work that seems like a good fit. The criteria are: * Someone can reasonably finish the work in one sitting * The work might inspire someone to become interested in EA, or in some part of EA. You don't have to shill for anything too specific — we'd be really happy to see work that just reflects rational/EA modes of thinking, applied in altruistic ways, without being directly about animal suffering or AI or whatnot. I'm running the contest. Please let me know, here or on the Forum, if you have any questions!
7da91d1b-4a4e-4d1b-8510-d7fc503635b3
trentmkelly/LessWrong-43k
LessWrong
What does it mean to "believe" a thing to be true? None
4644311e-3902-46d9-9e1c-f5514c9e1de1
trentmkelly/LessWrong-43k
LessWrong
AI Safety Newsletter #40: California AI Legislation Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. ---------------------------------------- SB 1047, the Most-Discussed California AI Legislation California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has garnered attention due to California's unique position in the tech landscape. If passed, SB 1047 would apply to all companies performing business in the state, potentially setting a precedent for AI governance more broadly. This newsletter examines the current state of the bill, which has had various amendments in response to feedback from various stakeholders. We'll cover recent debates surrounding the bill, support from AI experts, opposition from the tech industry, and public opinion based on polling. The bill mandates safety protocols, testing procedures, and reporting requirements for covered AI models. The bill was introduced by State Senator Scott Wiener, and is cosponsored by CAIS Action Fund, and aims to establish safety guardrails for the most powerful AI models. Specifically, it would require companies developing AI systems that cost over $100 million to develop and are trained on a massive amount of compute to implement comprehensive safety measures, conduct rigorous testing, and mitigate potential severe risks. The bill also includes new whistleblower protections. A group of renowned AI experts has thrown their weight behind the bill. Earlier this month, Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig, and Stuart Russell penned a letter expressing their strong support for SB 1047. They argue that the next generation of AI systems pose "severe risks" if "developed without sufficient care and oversight." Bengio told TIME, "I worry that technology companies will not solve these significant risks on their own while
3af8549e-ebb5-40de-95c7-fcf10beb0680
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Reinforcement Learning 1: Introduction to Reinforcement Learning welcome this is going to be the first lecture in the reinforcement learning tract of this course now as story will have explained there are more or less two separate tracks in this course or overlap between the deep learning side and the reinforcement inside let me just turn this off in case but they can also be viewed more or less separately and some of the things how we will be talking about will tie into the deep learning side specifically we will be using deep learning methods and techniques at some points during this course but a lot of it is separable and can be studied separately and has been studied in the past for many many years separately this lecture specifically I will take a high-level view and cover lots of the concepts of reinforcement learning and then in later lectures we will go into depth into the server of the topics so if you feel there's information missing yes that doesn't need it the case however if you feel I'm Julie confused feel free to stop me and ask questions at any time there are no stupid questions if you didn't understand something it's probably because I didn't explain it well and there's probably loads of other people in the room that also didn't understand it the way intended so feel free to end and ask questions at any time I also have a short break in the middle just to refresh in everybody okay so let's dive in I'll start with some boring admin just so we can warm up schedule wise most of reinforcement earning lectures are schedules at this time not all of them there's a few exceptions which you can see in the schedule on Moodle of course the schedule is what we currently believe it will rule remain but feel free to keep checking it in case things change or just come to all lectures and then in our home is anything so check Moodle for updates also use Moodle for questions we'll try to be responsive there as you will know grading is through assignments and the backgroud material first specifically this reinforcement learning side of the course will be the new edition of the session Umberto booked a fool drafts can be found online and I believe it is currently or will very soon be impressed if you prefer a hard copy but probably all in time for this course but you can just get the whole PDF specifically for this lecture the background will mostly be chapters one and three and then next lecture will actually come from or out of chapter two I especially encourage you to read chapter one which gives you a high-level overview the way rich thinks about these things and also talks about many of these concepts but also gives you large historical view on how these things got developed which ideas came from where and also how these are these changes over time because if you get everything from this course you'll have a certain few but you might not realize that things may have been perceived quite differently in the past and some people might still perceive quite differently right now so almost to give my view of course but I'll try to keep as close as possible to the to the book and I think our views overlap quite substantially anyway so that should be good this is the outline for today I'll start by talking just about what reinforced learning is many of you will have a rough or detailed idea of this already but it's good to be on the same page I'll talk about the core concepts of a low reinforcement learning system one of these concepts is in agents and then I'll talk about what are the components of such an agents and I'll talk a little bit about what our challenges in reinforcement learning so what are research topics are things to think about within the research field of reinforcement learning but of course it's good to start with defining what it is but before I do that I'll start with a little bit of motivation and this is a very high-level abstract view maybe but one way to think about this is that first many many years ago we started automating physical solutions with machines and this is the Industrial Revolution think of replacing horses with a train we kind of know how to pull something forward across a track and then we just remember that machine and we use them the machine instead of human or in in the case of horses animal labor and of course if you just create a huge boom in productivity and then after that the second wave of automation which is basically still happening but it has been happening for a long while now is what you could call the digital revolution or sometimes called the digital revolution in which we did a similar thing but instead of taking physical solutions we took mental solutions so maybe a canonical example if this is a calculator we know how to do division so we can program that into a calculator and then have it do that mental what used to be purely mental tasks on a machine in the future so we automate it's mental solutions but we still in both of these cases came up with the solutions ourselves right we came up with what we wanted to do and how to do it and then we implemented it in a machine so the next step is to define a problem and then have a machine solved itself for this you require learning you require something in addition because if you don't put anything into the system how can it know one thing you can put into a system is your own knowledge this is what was done with these machines either for mental or physical solutions but the other thing you could put in there is some knowledge on how to learn but then the data having the machine learn from its for itself so what is then reinforcement learning they're still by the way a couple of seats sprinkled throughout the room so feel free to try and grab one because it's getting rather busy okay so what is specific about reinforcement learning so I'll post it that we and many other intelligence beings are beings that we would call intelligence learn by interacting with in environments and this differs from certain other types of learning for instance it is active rather than passive you interact the environments response to your interaction and this also means that your actions are often sequential right the environment might change because you do something or you might be in a different situation within that environment which means that future interactions can depend on the earlier ones and these things are a little bit different from say supervised learning where you typically get a data set and it's just given to you and then you just crunch the numbers essentially to come up with the solution this is still learning right this is still getting new solutions out of the data but it's a different type of learning in addition many people agree that we are goal directed so we seem to be going toward certain goals maybe also without knowing exactly how to reach that goal in advance and we can learn without examples of optimal behavior obviously we could also learn from examples as in education but we have to also just learn by trial and error and that's going to be important so this is a canonical picture of reinforcement learning there's many versions of this there is an agent which is our learning system and it sends certain actions or decisions out these decisions are absorbed by the environment which is basically everything around the agents even though I drew them it's mostly done in these figures but you can think of the environment as just everything that is outside of the agents and the environments responds in a sense by sending back an observation if you prefer you can also think of this as maybe more of a pool action by the agent that the asian observes the environment whatever it is and then this loop continues the agents can take take more actions and the environments may or may not change depending on these actions and the observations may or may or may not change and you are using to learn within this interactive loop so you know in order to understand why we want to do learning it's good to realize that there is distinct types of learning I already made difference between active learning and passive learning but there's also different goals for learning so two goals that you might differentiate is one is to find previously unknown solutions maybe you don't care exactly how your IVD solutions but you might find it hard to code them up by hand or to invent them yourself so you might want to get this from the data but it's good to realize that this is this is a different goal from being able to learn quickly in a new environment and both of these things are valid goals for learning so in the first type of learning an example might be that you might want to find a program that can play the game of go better than any human which is a goal to find a certain solution in a second type of learning you might think of an example where robots is navigating terrains but all of a sudden it finds itself in a traded it has never seen before and also wasn't present when people build to rubble to run the robot was learning then you want the robot to learn online and you wanted to look maybe adapt quickly and reinforcement learning as a fields seeks to provide algorithms that can handle both these cases sometimes they're not clearly differentiated and sometimes people don't clearly specify which goal they're after but it's good to keep this in mind also note that the second point is not just about generalization it's not just about how you learn about many terrains and then you get a new one and you're able to deal well with it it's about that a little bit but it's also about being able to learn online to adapt even while you're doing it and this is fair game we do that as well when we enter a new situation we do do it that still we don't have to just lean on what we've learned in the past so another way to phrase what reinforced bleeding is is it it is the science of learning to make decisions from interaction and this requires us to think about many concepts such as time and related to that the long term consequences of actions its requires to think about actively gathering experience because of the interaction you cannot assume that all the relevant experience is just given to you sometimes you must actively seek it out it might require us to think about predicting the future in order to deal with these longtime consequences and typically it also allows requires us to deal with uncertainty the uncertainty might be inherent to a problem if for instance you might be dealing with a situation that is inherently noisy or it might be that certain parts of the problem that you're dealing with are hidden to you for instance you're playing against an opponent and you don't know what goes on in their head or it might just be that you yourself creates uncertainty because you don't know maybe you're following a behavior that sometimes is a little bit sarcastic so you can't predict the future with complete certainty just based on your own interaction I'm just going to repeat once more there's still a few seats if people want to grab them one back there there's a few up here so there's huge potential scope for this because decisions show up in many many places if you think about it and so one thing that I just want you to think about is whether this is sufficient to be able to talk about what artificial intelligence is of course I could take stand here this is just to provoke you to think about that can you think of things that we're not covering right that you might need for artificial intelligence that's basically the thing that I want you to think about and if so we should probably add them so there's a lot of related disciplines and reinforcement learning has been studied in one form or another many times and in many forms this is a slide that I borrowed from Dave silver where he noted a few of these disciplines there might be others and these might not even be these not mine will be the only mine debate examples although some of them are pretty persuasive and the disciplines that he pointed out work at the top computer science which a lot of you will be studying some variant of in which we might do something called machine learning and you could think of reinforcement learning as being part of that discipline I'll come back to that later but also neuroscience people have investigated the brain to large extents and found that certain mechanisms within the brain look a lot like the reinforcement learning algorithms that we'll study later in this course so there might be some connection there as well or maybe you can use these concepts that we'll talk about to understand how we learn also in psychology maybe this is more like a higher-level version of the neuroscience argument where there's behavior obviously there's decisions and maybe you can reason about that you can maybe you can model that in this very similar way or maybe even the same way as you can model the reinforcement learning problem and then think about learning what that what that entails how the learning progresses using this framework separately on the other side you have engineering sometimes you just want to solve a problem and there are many diseases problems out there that people want to solve for many different reasons but typically to to optimize something and within that we have a field called optimal control which is very closely related to reinforcement learning and many of the methods overlap although sometimes the focus is a little bit different in a notation might be a little bit different fairly similarly in mathematics there's a subcategory or maybe I don't know whether it's completely fair to say that it's part of mathematics maybe it's a little bit more like a Venn diagram itself called operations research and operations research this is the field where you basically look for solutions for many problems using mathematical tools including Markov decision processes that will touch upon later in this course and dynamic programming and things like that which are also used in reinforcement learning finally at the bottom it says economics but there's other related fields that you might consider here one thing that's quite interesting about this is that it's very clearly a multi-agent setting so now there's multiple actors in a situation and together they make decisions but also separately and there's all these interesting inter actions between these these agents and it's also quite natural in economics to think about optimizing something many many people talk about optimizing say returns or value and this is very similar to what we'll discuss as well so to zoom in a little bit on the machine learning part sometimes people make this distinction that machine learning basic has a number of subfields maybe the biggest of this is the supervised learning subfield where which we're getting quite good at I would say and a lot of deep learning work for instance is done on supervised settings the goal there is to find a mapping you have examples of inputs and outputs and you want to learn that mapping and ideally you want to learn a mapping that also generalizes to new inputs that you've never seen before in a nutshell unsupervised learning separately is what you do when you don't have the labeled examples so you might have a lot of data but maybe you don't have clear examples of what the mapping should be and all that you can do all that you want to do is to somehow structure the data so that you can reason about it or that you can understand the data itself better now reinforcement learning some people sometimes perceive that as being part of one of these or maybe a little bit of a mixture of both but I would argue that it's difference and separates and in reinforcement learning the one of the main distinctions is that you get a reinforcement learning signal which we call the reward instead of a supervised signal what this signal gives you and I'll talk about it later more is some notion of how good something is compared to something else but it doesn't tell you exactly what to do it doesn't give you a label or an action that you should have done it just tells you I like this this much but I'll go into more detail so characteristics of reinforcement learning and specifically how does it differ from other machine learning paradigms include that there's no strict supervision on your reward signal and also that the feedback can be delayed so sometimes you take an action and this action much later leads to reward and this is also something that you don't typically get in a supervised learning setting of course there's or there's exceptions in addition time and sequentiality matters so if you take a decision now it might be impossible to undo that decision later whereas if you just make a prediction and you update your loss in a supervised setting typically you can still redo that later this means that earlier your decisions affect later interaction is good to keep that in mind and basically the the next lecture also talks a lot about this so examples of decision problems there is many as I said some concrete examples to maybe help you think about these things include to fly a helicopter or to manage an investment portfolio or to control the power station make a robe walk or play video or board games and these are actual examples where reinforcement learning has been or versions of reinforcement have been applied and maybe it's good to note that these are reinforced learning problems because they are sequential decision problems even if you don't necessarily use what people might go every first learning method to solve them it's good to make the distinction because some people they think of the current reinforcement learning algorithms and they basically unify the field with those specific algorithms but reinforcement learning is both a framework of how to think about these things and there's a set of algorithms which people talk about as being reinforced learning algorithms but you could be you could be working in Reverse inferring problem without using any of those algorithms specifically so I mentioned a few of these already but core concepts of a reinforcement this learning system are the environments that the agent is in a reward signal that specifies the goal of the agents and yeah youth itself of course yeah you to self might contain certain components and I'm going to go through all of these in the rest of this lecture but note that in the interactions figure that I showed before this is the same one I actually didn't put the reward in and there's a reason I did that because most of these figures that you'll see in literature actually have the reward going from the environment into the agents and that's fair and in that case the agent itself basically is only the learning algorithm that means that if you have a robot the learning algorithm sits somewhere within that robot but the agents in this picture doesn't is not the same as the robot as a whole the learning algorithm can perceive part of the robots as its environment in a sense because typically the environment doesn't care it doesn't have a reward it doesn't have that notion typically it's us that specify a reward and it lives somewhere within your reinforcement learning system that's why I didn't put it in the figure because you can think of it as coming from the environment in the into the agents or you can think of it as part of the agents but not part of learning algorithm because if the learning algorithm can modify its own reward then where things could happen and it could like find ways to optimize its reward but only because it's setting it not because it's learning anything interesting so it's useful to think of the reward is being external to the learning algorithm even if it's internal to the system as a whole so what happens here this is the interaction loop that I was talking about if we introduce a little bit of notation at each time step T the agent will receive some observation which is a random variable which are you which why which is why I use capital o there and a reward from somewhere capital R and the agent and execute an action capital a near arm receives this action and you can either think of it as a me emitting a new observation and a new reward you can just think of the agent as receiving that as pulling that from the environment but for now we'll just talk about it as if the environment just gives you that back as a function it takes in the action and it returns you the next observation and the next reward this is a fairly simple set up fairly small in some sense but it turns out to be fairly general as well and we can model many problems in this way so the reward specifically is a scalar feedback signal this indicates how well the agent is doing at a step time T and therefore it defines the goal as I said now the agents job is to maximize the cumulative reward not the instantaneous reward but the reward over time and we will call this the return now this thing trails off into the end there I didn't specify when it stops the easiest way to think about is that there's always a time somewhere in the future where it's so that this thing is well-defined and finite a little while later I'll talk about when that doesn't happen when you have a continuing problem and then you can still define a return that is well-defined and I'm reinforcement learning is based on the reward hypothesis which is that any goal can be formalized as the outcome of maximizing a cumulative reward it's basically statement about the generality of this framework now I encourage you to think about that whether you agree that that's true or not and if you think it's not true whether you can think of any examples of goals that you might not be able to formalize as optimizing your cumulative reward to maybe help you think about that I'd like to know that these reward signals they can be very dense there could be a reward on every step that is nonzero but it could also be very sparse so if a certain event specifies your goal you could also just get a real positive reward whenever that happens and zero rewards on every other step and that means that then there is a reward function that models that specific goal so the question is whether that's sufficiently general I haven't been able to find any counter examples myself but maybe you do yeah no that's a very good question sorry we use the word reward but we basically mean it's just a real value reinforcements signal and sometimes we talk about negative rewards as being penalty especially this is especially common in psychology and neuroscience in the more computer science view of reinforcement learning we typically just use the word reward even if it's negative and then indeed you can have things that push you away from certain situations that you don't want to repeat I'll give an example I'll revisit this example a little bit later but maybe is good to give it now as well you could think of a maze where you want to exit the me so the goal is to exit the maze then there's multiple ways to set up a reward function that encodes that one this as I said just now just gives 0 rewards on every step but give a positive reward when you exit the maze but what you could also do is just give a negative reward on every step and then stop your episode when you exit the then minimizing that negative maximizing your ear or means minimizing the absolute negative rewards so it still encodes the goal of getting out of the maze as quickly as possible you could think of one as being chasing the carrot and one is avoiding the stick to the learning algorithms it typically doesn't matter too much or at least to the formalism of two learning algorithms in practice of course everything maddox but okay so now that we have returns we can talk about predicting those returns and to do that we first have to talk about values so the expected curative reward which is basically the expected return as we define it just now is what we call the value and a value is in this case a functional state so the expectation here is given conditional on the state that you're putting us into the function and then over anything that's random the goal is then to maximize this expected value rather than the actual instantaneous value already the random value because typically you don't know the random value yet by picking suitable actions so the rewards and values both define the desirability of something but you could think of the reward is defining the desirability of a certain transition like a single step and then the value is defining the desirability of this valve of this state more in general into the indefinite future potentially also note because we'll be using that quite a bit in this course that the returns and values can be defined recursively so I put it down here for the return the return at the time step T is basically just the one step reward and then the return from there and that turns out to be something that we can usefully exploit so I said the goal isn't to pick action so we have to talk a little bit about what that means so again the goal is to select actions as to maximize the value basically from each state that you might end up in and these actions might have long-term consequences what this means in terms of the reward signal is that you immediate reward for taking action might be low or even negative but you might still want to take it if it brings you to a state with a very high value which basically means you'll get high rewards later so that means it might be better to sacrifice immediate reward to gain more long-term reward an examples of this include safe enough financial investment where you first pay some money to invest in something but you hope to get much more money back later refueling a helicopter you might not gain anything specifically related to your goal from doing that but if you don't maybe you're a hell of a helicopter will at some points not work anymore and in say playing a game you might block an opponent's move rather than going for the win you first prevent the loss which might then later give you a higher probability of winning and in any of these cases the mapping from stage to action will call a policy so you can think of this as just being a function that map's each state into an action it's also possible to condition the value on actions so instead of just conditioning on the state you can set the condition on the state and action pair the definition is very similar to the state value there's a slight difference in notation for historical reasons this is called a cue function so for States we use V and for state action pairs we use Q there's really no other reason than just historical for that and we'll talk in depth about these things later so the only difference here is that it's now also conditioned on the action otherwise the definition is exactly the same as as before okay if everybody's on board I will now talk about agent components and I'll start with the agent State there is there's still a little bit of room in the in the room if somebody still wants to grab a chair people are not so uncomfortable Thanks so the first I talked a little bit already about states but I didn't actually say what the state is I trusted that you would have some intuitive notion of it so I'll talk about what an agent status so as I said a policy is a mapping from States to actions or the other way to say that the actions depend on some state of the agent both the agent and the environment might have an internal state or typically actually do have an internal state in the simplest case there might only be one state and both environment and agent are always in the same state and we'll cover that quite extensively in the next lecture because it turns out you can already meaningfully talk about some concepts such as how to make decisions when only considering a single state and it just extracts well in all the issue of sequentiality and states and everything but the whole next lecture will be devoted to that but often more generally there are many different states and I might even be infinitely many so what do I mean when I say infinitely many just think of it as being there's some continuous vector as your state and maybe it can be within some infinite space just because you don't know exactly where it's going to be and it can basically be arbitrarily anywhere in that state and then you are in basically a typical domain where deep learning also shines where you can maybe generalize across things where that you haven't seen because things are sufficiently smooth in some sense so the state of the agents generally differs from state of free environments but at first we're going to unify these as I'll explain later but it's good to keep in mind that in general the agent might not know the full state of the environment so the state of the environment is basically everything that's necessary for the environments to return its observations and maybe rewards if those are part of the environments and like I said it's usually not visible to the agent but even if it is visible it might contain lots of irrelevant information so even if you think about say if you think about the real world us or a robot operating in a real world even if you could know all the locations of all the atoms and all other things that might be relevant in some way to your problem you might not want to or even can process all of that so it's still in that case makes sense to have an agent say that is smaller than the full environment state so instead the agent has access to a history it gets an initial observation and then this loop starts you take an action you get a reward in a new observation you take another action and so on and so on in principle the agent could keep track of this whole history it might grow it big but we could imagine doing that and an example of such a history might be the sensorimotor of stream of robots just all these things that ever happens to the robot so this history can then be used to construct an agent state and the actions then depend on that state in the fully observable case we assume that the agent can see the full environment State so the observation is not equal to the environment State this is especially useful in smallish problems where the environment is particularly simple but it occurs sometimes in in in real practice for instance if you think about playing a game a single-player board game where you can see the whole board this might be such a case or even if you play a multiplayer board game but you have a fixed opponents this might again be the case if you're playing against a lone opponent it's no longer the case because you cannot look inside the head of the opponent if this is the case then the agent is in the Markov decision process and I'll define that many of you might know what this is but Markov decision processes are essentially a useful mathematical framework that we are going to use to reason and talk about a lot of the concepts in reinforcement learning it's much easier to reason about then the full problem which is non Markovian as I'll talk about in a bit but it's also a little bit limited because of the Markov assumption so what does it mean to be Markov so decision process is is Markov or MA even if the probability of a reward and subsequent state I've written it down here as the new Sutton Umberto Edition also does as a joint probability of the reward in the state the way they depending on your current state an action is fully informative if you would condition or for the case if you would condition on the full history what that means is that the current state gives you all the information you need to basically predict to the next reward in the next state so if this probability is fixed even if you don't know this probability I'm not claiming that the agent knows it but if it exists and if it's fixed then it's a Markov decision process intuitively it means that the future is independent of the past given the present where the present is now your state so in practice this is very nice and useful because it means that when you have this state you can throw away the history and history can crow can grow unbounded li so that's something that you don't want to keep doing so instead you much prefer the case where you can just throw away everything and you just keep that state and another way to phrase that is it's a sufficient statistic for the history the environment state typically is Markov in most cases there are exceptions to this for instance if you think of a non-stationary environment but typically you could think of the the environment said as being Markovian but you're just not being able to perceive it so things might appear a non-stationary even if they aren't the history itself is also Markovian because of course if you condition on the history or you condition on the history that's the same thing but it's going to grow big more commonly we're in a partial observable case this means that the agent gets partial information about the true state and examples of this include a robot with a camera vision that is not told its absolute location or what se is behind the wall or say poker-playing agents which only observes public cards so there's multiple ways that second one is partially observable is the fact that it can't see the cards of the opponents and the other way is to factor - can't see within the brain of the opponents and formerly these things are then called partially observable Markov decision processes there's a lot of literature on these and especially on solving these exactly a lot of that we're not going to cover but it's good to keep in mind that this is actually the common case that you just get some observations but they don't tell you the full state doesn't mean you want to use the solution methods necessarily from the literature on parse observable Markov decision processes exactly especially those who solve these things exactly because that tends to be a very hard and interesting problem but it also tends to be quiet computationally expensive and again the environment state can still be Markov even if you only get a partial observe observation of this but the agent just has simply no way of knowing this is that clear okay so now we can talk about what the agent state then is so the agent said as I said before is a function of the history the agent action depend on the state so it's important to have this agent state and in a simple example if it is if you just make your observation the agent state more generally we can think of the agent state as something that updates over time you have your previous agent state you have an action a reward and an observation and then you construct a new agent state note that for instance building up your fool history is of this form you just append things but there are other things you can do you could for instance keep the size of the state fixed rather than have it grow over time as you would do with a history here what I do know with F is sometimes called the state update function and it's an important notion that will terr get back to later the its actual so very active area of research how do you create such a set up that function that is useful for your agents especially if you cannot just rely on your observations so the agency is simply much smaller than environment State and it's also typically much smaller than the full history just because of computational reasons and here's an example so let's assume a very simple problem where this is the full state of the environment here may maybe it's not the fool say maybe there's also an agent in the maze that I didn't write down but let's say that this is yours you're a fool state of the maze and let's say there is an agent and it perceives a certain part so the observation is now partial the agent doesn't get its coordinates it just sees these pixels say as an example now what might happen is that the agent walks around in this maze and all of a sudden sometime later finds itself in this situation now this is an example of a partial observable problem because both of these observations are indistinguishable from each other so just based on the observation the agent has no way of knowing where it is so a question for you to ponder a little bit how could you construct a mark of agent state in this maze for any reward signal because I didn't specify what the reward signal is if you want you can just think up one maybe there's a goal somewhere does anybody have a suggestion right so indeed in this case you'd have to carefully check for the specific maze whether that's sufficient it might be sufficient it might depend on your policy if you have an action that stands still it might not be enough because you might see the same observation twice if that action doesn't exist in this maze it might actually be enough I didn't I didn't carefully check but the more general idea which i think is the good idea here is that you use some part of your history to build up an agent states that somehow distinguishes these two situations if you do have a certain policy it might be that in the left state you always came from above and in the right state you always came from below and just having that additional information what the previous observation was might just be enough so completely did in this distinguish these these situations and that is indeed the idea of an estate update function this would be a simple set update function that just concatenates the previous two and each time you see a new observation we oldest one and that's actually something that is done quite frequently for instance in the Atari games that you saw before there was just a concatenation of a couple of frames and this is the full agency so it was basically just an Augmented of observation yeah which one here yeah so in in the ordering here is that you you're in a certain state st based on this state you take an action 80 and then we consider time to tick after you take the action basically when you send things to the environment this is just a convention actually some people ride down RT rather than T plus one so be aware but we'll take the convention that basically the time steps when you send that action to the ER to the environment then the reward and the new observation come back and we can consider the next agent State St plus 1 as being a function of this new observation so that when you take your next action you can already take your observation into account if it would be OT rather than OT plus 1 then you couldn't take your newest observation into account when taking your next action it's good question so I set many of these things over already but to summarize to deal with partial observability the agent can construct a suitable state representation an examples of these include as I said before you could just have your observation be the agent state but this might not be enough in certain cases you could have the complete history at your agencies but this might be too large might be hard to compute with this full history or you might as a partial version of the one I showed before you could have some incrementally updated States which maybe only looks at the observations maybe ignores the rewards in the actions in this example but if you write it down like this maybe you'll notice that this looks fairly remarkably similar to a recurrent neural network which I know we haven't yet covered in the deep learning side but we will and basically the update there looks exactly like this so that kind of already implies that we can use maybe deep learning technique here of the recurrent neural networks to implement the state update function and indeed this has been done so sometimes the agent state for this reason it's also called the memory of the agent we use the more general term for the agent state which maybe includes the memory and maybe also additional things if you want to think about it like that but you can think of the memory as being an essential part of your agent State especially in this partially observable problems or alternatively you can think of memory as a useful tool to build an appropriate agent state so that wraps up the state bit feel free to inject any questions and otherwise I'll continue with policies which is fairly short so the policy just defines the agents behavior and it's a map from the agencies to an action there's two main cases one is the deterministic policy where we'll just write it as a function that outputs an action stay goes in action goes up but there's also the important use case of a stochastic policy where the action is basically there's a probability of selecting each action in each state typically we will not be too careful in differentiating these so you can just think of the stochastic one is the more general case where sometimes the distribution just happens to always select the same action and then you're already covered the discrete case the deterministic sorry note that I didn't specify anything about what the structure of this function is or what even the structure of the action is in the beginning of the course will mostly focus on the case and actually throughout the course will mostly focus on the case where these actions can be thought of as being part of a discrete set for instance if you think of the joystick used in the Atari games it basically had up-down left-right shoot and those type of actions but it didn't have move your motor a little bit in this direction we call that a continuous action and there's also algorithms that can deal with those for notation doesn't really matter it's just a function that outputs either maybe you can think of it as an integer for the discrete case and maybe it's a it's a vector or a real valued number for the continuous case I'm not talking yet about how to learn these things this will be later in the course which means we can then run because there's a lot to be said about learning policies but not too much about what our policy is and we'll move on to value functions so as said before the actual value function is just the expected return condition on the states and something that I actually didn't talk about before but it's also conditioned on a policy I basically hit that on the previous slide there's another thing that I hit which I'm introducing here which is a discount factor so the return now is really find it's a slightly different return from before there's this gamma in between if the gamma is equal to one it's the same as before it's just your accumulation of rewards into the future in many cases we actually pick a gamma that is slightly less than one and what that does is it trades of immediate rewards versus long-term rewards putting higher weights on the immediate rewards basically you're down weighing or discounting and this is why it's called the discount factor the future rewards in favor of the immediate ones if you think of the Mays example that I said earlier where you get a zero reward on each step but then you get say a reward of +1 when you exit the maze if you don't have discounting the agent basically has no incentive to exit the maze quickly it would just be happy if it's exits to maze in like in some time into the future but when you have discounting all of a sudden the trade of starts to differ and it'll favor to be as quick as possible because then the exponent on this gamma will be smaller if it takes fewer steps to go to the exits it'll have it will have discounts it's this future return less so the value depends on the policy as I said and it can be used to evaluate the desirability of states one state versus the other see and therefore can also be used to select between actions you could say a plan one step ahead you could use your value it's like you more convenient in that case although I didn't put on the slide to use these action values because there was immediately give you access to the value of each action this is just a definition of the value of course we're going to approximate these things later in our agents because we don't have access basically to the true value typically oh there's a sorry there's a plus sign missing there on the top it should have been reward plus one plus the discounted future return GT plus 1 I'll fix that before the slides go on to Moodle and I said this before for the undiscounted case but now I'm saying it again for the discounted case the return has a recursive form it's one step reward plus the remaining return but now discount at once and that means that the value also has a recursive form because we can just write down the value is the expectation of this return but then it turns out because the expectation can be put inside as well over this this cheap T plus one this is equivalent to just putting the value there again and this is a very important recursive relationship that will heavily exploit throughout the course and notation wise note that I'm writing down a as being sampled from the policies so this is basically assuming stochastic policies but like I said things do Mystic can be viewed as a special case of that and then the equation is known as the bellman equation by Richard bellman from 1957 there's a similar equation interestingly for the optimal which is the highest possible value for any policy and this equation is written down there it basically takes the action that maximizes the one step and then uses the optimal value on the next step so it's again recursively defined and you could essentially view this as as a system of equations if there's a limited number of states and a limited number of actions this is just system of equations that you could solve and thereby you can get the optimal values and the optimal policy in order to do that you need to know you need to be able to compute this expectation and that's something that we'll cover later as well and you'll use dynamic programming techniques and to solve this yeah so it's basically the top line there which is missing the the plus and 40 T but it's basically based on the the recurrence of the return which I assume is somewhat clearer that you can just split up the return into a single reward and then the rest of the return which is again accumulation of rewards in order to then get the recursive form of the value especially for the for this case it's enough to know that the expectation on the top line because it's already had an expectation about in the future you can put the expectation around this internal return there which means it's just defined as the value this is just a nested expectation then but that's equivalent so you can also write this down very explicitly with just sums of probabilities of landing in each state and we will get back to that later so I'll give you some explicit formulas also to show that this recursion holds to let it not Multan you next lecture but in the lecture after that yeah let me rephrase the co-founders to two question correctly which is if you're looking ahead from a certain stage you want to consider ten steps into the future the question is whether you want to optimize for right now or you want to optimize for all each of these steps so on each stage you basically want to follow the policy that optimizes the return from that state the expected return from that state that essentially means that in the last state you'll want to do the optimal thing but in the state before you want to do the optimal thing conditioned on the fact that in the last state you're going to do the optimal thing so that's also in that sense recursive there's a different matter here which maybe maybe just to clarify there's also a question of which states you care about do you care about behaving ultimately from this state or do you care about behaving ultimately from all states if you can solve everything exactly you can actually have both you can just be optimal from every state and you can possibly be in later when we're going to start approximating things you're going to have to pick which states do I care about which states don't I care about and then you might care more about so having good solutions in certain states rather than in other states yeah yeah so the question is can you then sauce is by recursing backwards starting at the end and then just fining I mean that's that's a simple problem in a sense because then you just look at the instantaneous reward if you're at the end and you pick the action that in optimizing instantaneous reward and this will give you the optimal value of that state and then indeed this is a valid and well often used solution technique so then iterate backwards what you could also do and I'll talk about that a much more depth it's just look at all states at the same time and use these recurs recursive definitions to basically incrementally go towards the solution so you could either indeed start at certain states say at the end and then recurrence and that might be more efficient or you could just do all of them at the same time and you'll still get to the optimal solution yeah yeah it's a very good question so the question is here we're approximating expected cumulative rewards or expected returns but sometimes you care more about the whole distribution of returns and this is definitely true and it's actually it has haven't been studied that much so there has been quite a bit of work on things like safe reinforcement learning where people for instance want to optimize the expected return but conditional on ever having a return that is lower than a certain thing but recently and with that I mean like last year a paper was published on distributional reinforcement learning where the distribution of returns is explicitly modeled this has been done there's a little bit of prior work on that but actually not as much as you might think and turns out you could do very similar things with recursive definitions in that case and then indeed modeling the distribution is in some cases very helpful you can help it to steer your decision away from risky situations or sometimes you actually want to be so that that would be called risk-averse say in economics lingo or you could be more risk seeking which could also sometimes be useful depending on what you want to thank you so yes very good question that's very current research we're not marginalizing it we're literally maximizing over it which is a little bit different but it's in some sense it's similar in the sense that you get rid of the dependence on a and therefore of the policy and therefore this whole recursively defined value no longer depends on any policy because we're going to do this max on each step you could similarly think of marginalizing on each step but that's slightly different because marginalizing we take the distribution into account but here you still have to have a distribution n which is your policy then the distribution of actions but in this case we're not interested in in a fixed distribution over actions a fixed policy but instead we're choosing to maximize offer it but yes it's otherwise very similar yeah yes so that two parts of the question one is how to deal with continuous domains Prince's continuous time and the other one is how to do with approximations because even if you don't have continuous time the state space for instance might be huge and that also might require you to approximate so approximations are going to be very central in this course and we're going to bump into them all the time also if everything is very small you'll still have approximations in the sense that you don't know these values and if you don't if you can't compute the expectation because you don't know the model of the environments then you still have to approximate these values and you could do this with sampling simply but then there's ways to sample which are more efficient than others and learning algorithms that are more efficient than others only continuous time points the bellman equation is actually that's one there's also a different one called hamilton-jacobi bellman equation or sometimes the hamilton-jacobi equation which is basically the continuous time variant of this that one's more often studied in control theory and control problems where typically a lot of things are more continuous but then also typically people make more assumptions about the problems which then allows them to solve these things it basically becomes again a system of equations but now with infinite inputs and outputs but you can still solve these things if you make suitable assumptions about the problem will not touch upon that that much in this course but I'll be happy to give you pointers if you want yeah yeah the return is this is the the actual thing you see so it's random it's sampled and then the value is the expectation of that Thanks yeah no I actually already gave an example so sometimes people set up an environment in which these probabilities change over time which means it's already not Markov we would call that a non stationary environment in that case you could still I mean there's always ways to kind of work your way around that which is a bit peculiar and mathematically in some sense you could say the way it changes might itself be a function of something so if you take that into account maybe the whole thing becomes Markov again but it's usually complex so you don't want to care so it's often much simpler to say just it changes over time and then it wouldn't be Markov there's other reasons it might not be mark off button on station every one is one that pops up quite often yeah yeah yeah so the question is how do you define the returns which is actually you can kind of fold that back into the question of how to define the rewards for instance think of if we take the financial investments example a natural way to model things is just to give each reward be the difference in say money that you have and then the accumulation of this is the difference between what you had at the beginning and what you had at the end and you want to maximize that that's a very natural thing to do but instead you might define events you might say I'm going to get a reward whenever my money goes above this level or I'm going to get a penalty whenever it goes below this level maybe you don't care about the exact number maybe you don't care about modeling the expects at return sorry of money but maybe instead you care about some other function of the money often you can then fold that into the reward function and related to the question earlier about modeling distributions rather than the expected return the actual algorithm that does that looks a bit like that it actually you can think of it as modeling the distribution by modeling variants of the return which are more event based innocence sometimes though it's very tricky to set up these events easily so this is why France is in safe reinforcement learning people more typically they still model say the expected money but then they just add the condition that they don't want it to drop below something it might be possible to phrase the problem differently where it's more weighted differently where a certain negative rewards are way that more heavily say and that's the reward that learning system gets but sometimes it's harder to do that then just to solve it with certain constraints very good questions one high-level thing that I wanted to say here is a lot of these things that I've shown right now are basically just definitions for instance the return and the value they're defined in a certain way and this this way they're defined might depend in to on the indefinite future into the into the infinite future essentially which means that you don't have access to these things in practice this is just a definition later we'll talk about how to learn and when you learn where we'll be get get back to this interaction loop where you get these rewards one at a time that means you don't have access to the full return yet typically or you might never actually have access to the full return because my might be infinitely long but you can still learn if you don't for now we're just defining these concepts and later we will get back to these things so don't worry as well if you're not quite sure on how you would then use these concepts because I will most will explain that in future lectures so as a final not only value functions much of what we're going to talk about revolves around approximating these so as I said this is just a definition of a value function both of these one is for a certain policy the other one is for the optimal value function I didn't say how to get them or how to approximate them now there's multiple ways you would want to approximate these multiple reasons you might want to approximate these I already mentioned one reason to approximate them is your state space might be too big so actually model these things exactly to even fit it in memory so then you might want to generalize across that as you would typically also do with neural networks in deep learning another reason to approximate it is just that you might not have access to the model needed to compute these expectations so you might need to sample which means you will end up with approximations which will get better in your sample more and more and more but they will never be maybe exactly correct yes yeah so I probably should have put Q values on here so they will come back in a later lecture in which I will have them explicitly but since I have the V maybe I should have put the Q I can tell you what the Q function is for both of these that might be helpful for the first one we're conditioning on a randomly picked action which is comes from your policy which is a function of s if you have a Q function there will be an action on the left-hand side small a and we would condition on the action being taken actually being that action and then only internal parts where we have this recursion instead of a V you could still have this V for the same PI but alternately if you you could write this down as a summation over actions and then the probability of selecting each of these actions and then the Associated Q function state action value in that next step yes and as I said I will show those equations later in the course we will get back to that extensively in the optimization so in the optimal value definition essentially what will happen is the there will again be in action on the left-hand side which will be conditional so the max a then disappears on the outside of the expectation because we've selected an action as a func as an argument to the function rather than maximizing over but it will reappear inside there will be a discount times the maximum action value in the next set but like I said you don't have to remember that right now we will get back to these extensively thanks good questions so I was talking about approximating these things and we will discuss algorithms to learn these things efficiently in many cases so in the case where you have like a small so small MVP where there's a small stage space you can approximate these things in some way maybe you have access to the model we'll talk about that but we'll also talk about the case where it's a huge state space maybe it's pixels and you have thousands of pixels each of which can have many different values and we might still want to learn a value function and we'll talk about how to learn in those cases when you don't have access to the model when you need to we'll talk about that whatever we do when we do get an accurate value a function we can use that to behave optimally I said accurate here with which I mean basically in exact optimal value function and you can behave optimally more generally with suitable approximations we can behave well even in interactively big domains we lose the optimality in that case because we're learning we are approximating there's no way you can get the actual optimal policy but there is in practice you don't actually care that much because good performance is also already very useful and if it's intractable anyway that's the best you are ever going to get so that wraps up the value part of the agent I'm going to talk a little bit about a model although we're covering that less in this course and one reason is that it's actually kind of tricky to learn and use these things there's also time constraints of course but a model basically is just a prediction of working environment dynamics are so for simplicity think of the fully observable case so the state here is both the environment States and the agent States it just simplifies thinking about these things although you can generalize this then we might have some function that predicts the probability say of the next state for any next stage based on a state's act and in action you could also predict the expected next day that I chose to write down here the probability distribution so were explicitly modeling the distribution of next States here instead in some cases it's useful just to predict what the expected next state looks like sometimes that's not so useful because maybe in expectation maybe you're partially in a whole instead of fully in a whole or not in a whole at all or maybe you're the door is partially open and now it's both open and not open in the expected state which might not be a real state so in some cases the expectation is it doesn't make a lot of sense in other cases it does maybe the more general thing to do is just a model full distribution of possible next States for all of the states similarly for the reward we could have a model for that which maybe just is dependent on the state and the action and then predicts what the reward would be for that stays in action you could lock augment this maybe make it a function also of the next states and predict that if you have a certain state action in the next day what would then the reward be some some cases this is easy maybe if you have these three things maybe the reward is deterministic and you can very quickly learn it in other cases it might be sarcastic and in maybe the worst case it could even be non-stationary so you maybe you want to track rather than to approximate as if it's a stationary quantity so model is useful and we'll talk about how to learn or basically plan with models later but it doesn't immediately give you a good policy or an optimal policy because you still need to plan so as soon in the next lecture we'll talk about how to learn when you do have the exact model using dynamic programming and we will learn how to construct value functions and there are many problems in which this is actually the case if you think for instance of the of the game of go if you're in a certain state which is basically fully observable you take a certain action you know exactly what's going to happen you know exactly that if you place your stone there the stone is going to end up there your next state is fully known in that case the model is there and you can use that in other cases like a robot walking around say through a corridor and this is much trickier and you might not have access to the true model and it might be very hard to learn true model so it's very dependent on the domain whether it makes sense this is why I basically put down the model as being an optional part of your agents many reinforcement learning agents don't have a model components some of them do there's also in between versions where we might have something that looks a lot like a model but it's not trying to capture the full environment dynamics but maybe only part of it and then maybe you can still use make use of this one last thing I wanted to say about models here I had a version that gives you the full distribution sometimes it's useful to have there to be implicit and instead have a model that you can sample so we could call that a sample model or a stochastic model or as it is in deep learning often called a generative model which basically gives you first aid and action it gives you sample next days then you could still build a full trajectory you could sample from that say it again this is something you can't do with an expected state model because if you have an expected state coming out of your model you can't Sara Lee put that into your model again that might not make sense because as I said that expected state might not be something that actually occurs in the real problem so I'll make these things a little bit more concrete by tossing them into an example this is a simple example simple maze there's a certain start and there's a certain goal and there's only four actions you can basically move up left down and to the right or north east south west if you prefer in the stage is basically just the location of the agent which in this case gives you all the information that you need because the environment is fixed it's a little bit weird if you think about it the state doesn't include any observation on where the walls are but because everything's fixed the location still gives you everything you need to know we define the reward to be minus one in each time step there's no discounting but because there's a minus 1 in each time steps you're still encouraged to leave the maze as quickly as possible so what might a policy look like this is actually an optimal policy for this maze which in each state's in this case gives you a deterministic action in some problems the optimal policy might be a stochastic policy but in this case there is a clearly a deterministic policy that will get you out of the maze as quickly as possible this is maybe the simplest thing that you might need to solve this problem the policy mapping I didn't specify how how we might learn that which we'll touch later later upon but it's good to realize that this is the minimum thing that you might need alternatively or additionally you might learn the value this is the true value for the policy that I just showed before because that Multan policy happens to be the optimal policy this is also the optimal value function if I would have picked a different policy the numbers would have been different and it will be the value that is conditioned on that policy the value is here of course particularly simple because it's just the negative number of steps before you reach the goal as you would expect note by the way that we we consider the goal to be when you actually exit the maze so that final step there has a reward of minus one because you still need to take that action of leaving the maze before you stop and then basically problem there terminates so these returns that we saw before which I had trailing off into potentially infinite future in this case they're actually finite and they're at most 24 steps so model in this case might also be quite simple so the reward model is just a minus 1 on each of these states a transition model is also quite simple but in this case in this picture a part of the maze is missing which is meant to illustrate that maybe we only have a partial model or our model is only partially correct in one of these states there's basically a connection missing that was here maybe because your model just has never learned that maybe you're in M and you never taken that action and maybe your model by default assumes that there's a wall unless you've taken that action and you've seen that there isn't the wall given this approximate model you could still plan through it and you would still find the same optimal solution even though the model isn't fully correct in all states in other cases of course your model might be approximate and there might be a wall where there isn't a wall and you might find a completely different value and a different policy which might not be appropriate for the true problem so if we don't categorize and this is also to get you acquainted with the language use for these things in the literature there's many different ways you can build an agent it can have any of these components or it can have many of these components and there's also the difference between whether the agent has the components and whether it has it explicitly when I say explicitly it means it has some approximation it has an actual function inside that you can use to compute something so when we say we have a value-based agent what I mean there is that the agent internally has some approximate value function that it uses so then judge which actions are better than others there might not be an explicit policy in that case and in fact when I say value-based I mean I mean that there's not an explicit policy but that we construct the policy from the value whenever we need it alternatively and maybe this is the simplest example that could be a policy based which just has a representation of your policy some mapping from States to actions and doesn't ever have an explicit notion of value the terminology actor critic is used when an alien has an explicit policy and a value function this is a little bit it depends on who you ask in which literature you read because sometimes people say actor critics systems also imply a certain way of learning these things but I'm just going to use it whenever you have an explicit representation of your policy and value and your learning both then I'll just call it an actor critic system for simplicity where the policy is the actor and the value function is to critic separately there's this distinction between having a model of free age and the model based agents basically each of these from the previous slide could also have a model that's the distinction here and when they do we could we could say it's a model based agents so you could have a model based actor critic agent for instance or a model based value agents for value based agents these things are of course a little bit more gray than I'm pointing out here because you could also have these partial models or things that you can interpret to be a model in fact some people would say hey a value function is actually also some type of model sure but when I say model in this case I mean something that tries to explicitly model something of the environments that is not the value or and it's not it's not the optimal policy so that looks a little bit like this where there's these three components a value function of policy and a model and then when you have the overlap say of value function and a policy we was called an actor critic and actor critics can also be part of that lower circle which is the model circle so you could have an actor critic with a model but it could also be mobile free which is the everything that's outside of the model circle so you could have a mobile free actor critic or you could have a mobile based extra critic you could have a model based value based Asians and the mobile based policy based agents you could also just have a model and as I said then you still have to plan to get your policy but in some cases this is the appropriate way to solve we'll mostly cover the top end here where often there's no the model but even if there is a model there will typically also be a policy and/or a value function so that's the high-level views and I'll talk about a few of the challenges in reinforcement learning and I've talked about a few of these already but it's good to be explicit so there's two fundamentally different things that we might do to solve a decision problem one is learning the environment is initially unknown the agent interacts with the environment and thereby somehow comes up with a better policy you don't need to learn a model as I said and I'll give examples of algorithms within this course that don't learn a model but still learn how to optimally behave and separately there's something called planning when when we say planning planning is hugely overload of term it means many things to many people but when I'll say planning within the the notion of this course I mean that the model of environment is given or learned and that the agent plans in this model so without external interaction the difference here is the sampling bit right in the planning phase you don't sample you're just thinking so sometimes people use words such as reasoning pondering thought search or planning to basically refer to that same process the fact that it could be unknown approximate model here is important because you typically don't have access to the full model of the environment in the in the problems that we care about that we end up will consider in some cases you do and then planning techniques there's a huge literature with very efficient and very good algorithms that can solve these problems where you have a true model however one thing to be aware of is that oftentimes these algorithms assume that your model is true and that means that if you're going to plan with an approximate model you might end up in a situation where your planning algorithm finds this very peculiar policy that happens to walk through a wall somewhere because it has miss models the fact that that's all you could maybe make these planning algorithms more busts to model errors and this is an active area of research but we won't have time to go into depth in this course into that a separate distinction that is often made and it is useful the terminology is very useful is the distinction between prediction and control this is not actually dichotomy both of these things are reported and can be important at the same time but the terms are important because we'll be using them a lot and people in literature use them a lot where prediction basic use means evaluate the future so all these value functions that we talks about those are predictions of something in this case of the return a lower model is also prediction it's a prediction of the dynamics control means to optimize the future and this difference is also clear when we talk about these definitions of these value functions where one value function was defined for a given policy so this would be a prediction problem where we have a policy and we just want to predict how good that policy is and the other value function was defined as the optimal value function so for any policy what will be developed two more thing to do that will be the control problem finding the optimal policy we are mostly concerned with a control problem we want to optimize things but in order to do so sometimes it makes sense to predict things which are not necessarily optimizing so it's good to keep that in mind that sometimes we're optimizing sometimes we're not optimizing we're just predicting this also means that sometimes strictly supervised learning techniques are very useful within the RL context sometimes you just want to predict certain things and maybe you can just use supervised learning and then all the tricks that you can can can leverage all the new tricks that you can leverage to do that efficiently and that can be very useful also they are strongly related if you have very good predictions of returns it's typically fairly easy to extract a good policy you could do this in one shot if you somehow manage to predict the value for all the policies you can just maybe select the best policy of course in practice this is not very feasible there's an algorithm that we'll talk about later which iterates this where you basically have a policy and then you're going to predict the value for the policy and then you're going to use those values to pick a new policy and then you're going to predict a value policy and there you repeat these things over and over this is called policy iteration and we'll get back to that later and it's an efficient way or an effective way to improve your policy over time by using predictions so here's another thought nugget similar to ones we had before this is a question for you choose to ponder I don't I'm not claiming I have the answer but if we could predict everything do we need anything else is there anything missing from a system that can predict everything in order to have say full AI so now most of this lecture wasn't about how to learn these things but most of the course will be and for that it might be important to note already that all of these components after Oxbow's are basically functions policies are functions from States to actions value functions sorely in the name map States values models map States to States or distributions of states or rewards or any subset of those or superjet and state updates which we haven't talked about that much but or he's how to construct them there are also functions they create a new state from your previous state we talked about a version where this was given there was an example here where you maybe you mend your observation with some pre or prior observations but maybe you can also learn how to efficiently build your state in practice this means that we can represent these things for instance as neural networks and then we can maybe use all the deep learning tricks to optimize these efficiently if we have a good loss and we have a strong function class such as deep neural networks maybe this is a useful combination and indeed we often use the tools from deep learning in what we now nowadays called deep reinforcement learning to find good efficient approximations to many of these functions one thing to take care about is that we in reinforced when they will often violate assumptions that are made in typical supervised learning for instance the data will not typically be iid there's different different reasons for that ìit meeting of course identically and independently distributed so why won't it be that well one reason is your policy will change so even just that the fact that you're changing your policy means that the data will change which already makes your problem non-stationary and no no iid so there's a challenge for typical supervised learning techniques so you need to do tracking perhaps instead of just trying to fit certain data the stationarity itself might also come in back into other ways for instance not maybe maybe not just your policy changes but maybe also your updates change or the problem itself is even on stationery for instance there might be multiple learning agents in a single environments and that makes everything very non stationary and very very hard but interesting so the takeaway here is that deep reinforcement learning is a rich in active research field even though the beginning especially of this course will mostly focus on reinforcement learning without talking too much about the connection to deep learning I will occasionally whenever appropriate make those connections and it's good to keep in mind that we'll might use many of the techniques but you have to take care when applying them because you might be violating certain assumptions that were made when the the the techniques got some creators also one thing maybe to keep in mind is that currently all networks are not always the best tool but they often work very well a lot of work and reinforce pruning has been done in the past on tabular and linear functions which is much easier to reanalyze and it's also already a pretty rich setting where you can do many things these days a lot of people prefer to use deep networks because they're more flexible in it they tend to fit weird functions more easily but it's good to keep in mind that it's not the only choice and you could sometimes be better off let me say a linear function which might be Stabler in some sense might be easier to learn but then maybe your function class is limited maybe it's less flexible and maybe this somehow hurts you except if your features are sufficiently rich but maybe then you want to create your feature somehow and maybe that's something that you don't want to think about or maybe you can't think about because you don't know enough about a problem so it's just something to keep in mind so here's an example of how that then nukes for Atari so as I said there was one system that basically learned these Atari games that system assumed that the rules of the game are unknown so there was no known model loading environments and then the system would learn by playing just playing the game and then directly learn from the interaction so what that means the joystick is now basically what defines the action as I said the agent wasn't like the Avatar you saw on the screen but it's actually the thing that pushes the buttons on the joystick and then that goes into the simulator which in this case is the simulator of these Atari games that outputs the reward which in this case was extracted as the difference on the of the reward that you can also see in the screen and your observations would just be pixels in this case it would be a quick explanation of a few pixels because actually in this Atari game sometimes say the screen flickers so you might have individual observations in between we're just completely black and just to avoid that being a problem instead we keep a very short history of a few frames also this helps in certain games and you might know the game of pong where you base gave two pedals where a ball goes from one to the other if you have more than one frame you can use that to judge which direction the ball is going whereas if you only have one frame you cannot actually distinguish which direction the ball is going it would be partially observable in Atari you could also plan assuming that the rules of the game are known in that case you could query the model and in each stage you could just take all of the different actions see what all of the next states are and then see what the reward along the way was and then you could build this huge tree and just plan search within that tree in the original Atari emulator that we we use to do a lot of experiments the actual emulator was deterministic the games were deterministic so if you're in a certain stage you take certain action always the next same thing happens in a later version of the emulator they actually add a little bit of noise by making the actions stochastic he last a little longer shorter just to break certain algorithms that heavily exploits the determinism of the environments because eventually you want algorithms that can deal with situations that aren't deterministic and most of the work on these Atari games has actually used algorithms that work just as well when the environment is not deterministic but there are a certain algorithm that you can do when the environment is neater monistic that you can't do when it's when it's dark astok so just briefly before we wrap up one other thing that I wanted to mention this will be the focus of the next lecture so I'll talk about it much more in depth which you see which is also something that's quite central to reinforcement learning as I said we're learning from interaction and we're searching actively for information so this is sometimes called the dilemma between exploration and exploitation because as you're learning you'll learn more and more about the problem you're trying to solve you'll get a better and better policy and it will be more and more tempting perhaps to just follow whatever you think is best right now but if you do you basically stop getting new information about things that might still be out there so what you want to do is sometimes pick actions that you've never done before this is because you don't automatically get all the data you actively have to search for the data and there might be friend sister might just be a treasure chest around the corner and if you never go there you will never know so maybe eventually you want to make sure that you eventually sometimes go to places that you've never seen before that's called exploration but you also don't want to just jitter all the time you don't just want to do random stuff all the time because that will hurt your performance will hurt your rewards so when you do something that you think is good right now that's called exploitation and the balance of these things is actually quite tricky to do in general so the next lecture will discuss many methods that can do that so the goal is to discover good policy from new experiences but also without sacrificing too much reward along the way the new experiences part is the exploration the sacrificing not sacrificing reward is the exploitation also think of an agent that needs to walk say a tightrope to get - across a ravine in this case you might want to exploit a policy that can already walk across the tightrope and then only start exploring when you are on the other side so this shows that in some cases it's very good to exploit for a little bit to even get to the situations where you can effectively then explore so these things are very intertwined but I'll talk much more about that so summarizing what I just said exploration finds more information and exploitation exploits the information you have right now to maximize the reward as best as you can right now it's important to do both and it's a fundamental problem that doesn't naturally occur in supervised learning and in fact we can already look at this without considering secret charity and without considering states and that's what we'll do in the next lecture simple examples include if you want to say find a good restaurant you could go to your favorite restaurant right now you'll be reasonably reasonably reliably you'll get something very good or you could explore and try something very new and maybe it's much better than anything you've ever seen before or maybe not so expiration is a little bit dangerous perhaps another example oil drilling you might drill where you know the oil is but maybe it becomes less and less maybe becomes more and more costly to extract or maybe sometimes you want to try something completely new and in game-playing you'll just want to try new moves every one so often but there's essentially examples of this in any decision problem that you can think of so finally before you wrap up I wanted to go through a little example once more which is a little more a little bit more complex in the Mays example that I gave before to make sure that these things are a little bit clearer this is a very simple grid and the agent walks around and it gets a reward of minus one when it bumps into a wall so we can ask a predictive question we can ask if you just do things randomly if you randomly move around this grid what is the value function what is the expected return condition on that policy now there's two special transitions here whenever you're in state a you'll actually transition to state a prime and you'll get a reward of ten this is the highest reward you in this problem if you're say B you get a reward of five and you go to B Prime now it might not be immediately obvious which one of these is preferred because one has a lower reward but it also puts you less far away so it might be easier to repeat that often whereas the other one gives you a higher reward but it's a longer jump that that happens after you make the jump from a to a prime so it takes you longer to get back and then it's unclear which of these things is preferred in order to even talk about which things of these are preferred we need to talk about the discount factor which trades off the immediacy of high rewards to high rewards later on and in this case the discount factor was set to 0.9 this kind of an arbitrary choice but it means that there's the value function that is now conditional law on both the uniformly random policy and also on the discount facts that we picked which basically together with the reward defines what the goal is so the goal here is not just to find high lore but also to do it reasonably quickly because future rewards are at discounted here under B the value function is given it's a state value function for a uniform random policy and what we see is that actually the most preferred state and you can possibly be in a state a because you always reliably get a reward of ten and then you'll transition to a prime which is a negative reward but the negative reward isn't that bad the reason a prime is a negative reward is because your policy is random and it will bump into the walls occasionally so you'll get some negative rewards and because that state is fairly close to the edge you'll bump into it more often than if you're further away from the edge note by the way that the value of state B is higher than 5 and you get a reward of 5 whenever you go from B to B Prime but because the value of B prime is positive the value of being in B is higher than the value of just the immediate reward whereas the value of a is lower than 10 because divide the value of the State it transitions to is negative now we could also ask the question what the optimal value function is if we could pick the policy in any way we want it what would be the policy and what will be the value of the optimal policy now if you first look at the right-hand side you see that in states a and B all the actions are optimal this is because we've defined them to be all equal all right any action you take a state a you'll jump to a prime doesn't matter which action you selected so we don't care which one you take and we can see there's a lot of structure in the policy as well so probably if you're going to do some function approximation you'll probably be able to generalize quite well because the policy is actually quite similar in a lot of similar closeby States so this is a very simple problem in which you probably don't need to do a lot of function approximation but in a much bigger problem let's say you're in a corridor as a robot and your optimum actually right now is to move forward through the corridor probably next step your observation is probably very similar and you'll just continue going forward because of generalization the optimal value function is now all strictly positive for the simple reason that the policy here can choose never to bump into a wall so there are no negative rewards for the optimal policy it just avoids doing that altogether and therefore it can go and collect these positive rewards and now notice as well that the value of state a is much higher than 10 because it can get the immediate rewards 10 but then a couple of steps later it can again get a reward of 10 and so on and so on these are discounted so it doesn't grow indefinitely to infinity but it does get repeated visits to these things again by the way state aides prefer to state B which is a function of both of these rewards along the way and of the discount factor you could trade these things off differently so I have a video to show all the way the end but before I do I just wanted to give you a high-level overview of what the course will entail will learn will discuss how to learn by interaction as the main thing and the focus is on understanding the core principles and learning algorithms at some points during the course I will give nuggets of practical or empirical insights whenever I have them and also at the end of the course we'll have guest lectures by flood and Dave who will talk about their work which also include some of these nuggets but on the whole will mostly be talking about this on a fairly conceptual level but it's not that far removed from practice and I'll I'll point out whenever whenever I can how to make these things real and how to actually make them work also there will be assignments as you know which will allow you to do that and try that out so the topics include next lecture are we're talking about exploration in what is called bandit problems abandoned is basically it comes from the one-armed bandit which is a slot machine where you have this one action and you get a random return each time you try that action this has been generalized in the literature the mathematical framework which is called the multi-armed bandit problem where basically you can think of this as there's multiple actions you could do multiple slot machines each of which gives a random reward and your job is to decide which one's best there's no state there's always the same slot machines nothing changes there's no no sequentiality in the problem so the only problem here is one of exploration and exploitation how to trade these things off and how to learn how do you learn the value of these things but that's fairly simple in that case then later on we'll talk more about Markov decision processes I touched upon these a little bit but I'll talk about how to plan in those with Markov sorry with dynamic programming and such and will move towards model free prediction and control where we're not going to assume we have the model anymore and then we're going to have to sample there will be something called policy gradient methods which is where a family of algorithms are allow you to learn the policy directly which we'll talk about and we'll talk about challenges is deeper first learning how to set up like a complete agents how do you combine these things and how to integrate learning and planning are there any questions before we wrap up yeah I don't know it's on Moodle somewhere I used to know but I I don't want to commit to saying a date and not having it wrong right now other questions admin or topic related yeah the assignment will be out right oh yeah I thought the question was when it would be due not when it was ah okay so if Moodle says start this week it probably should be I'll have to check where it is but thank you for noting because that's important and we need to then if that schedule is correct we'll need to make sure this gets out as quickly as possible yeah so if if it was due to be out beginning of this week then we'll also have to check whether the due date is still correct this may be done and then it has to postpone but I'll need to check the schedule and check with the people who should have released the assignment Thanks it's very very important other questions or the link isn't working uh yeah that sometimes happens I think I may have got a little link wrong that's one option also his in my my experience his site doesn't always work but okay if you just google for Sutton Bartow 2018 you should be able to find the book or add reinforcement learning if you run when you're very very sure but then you should be able to find it yes yes I'll make sure that these slides are always updated I'll try to get them so what what's what's in in Moodle right now are basically the slides from last year and we'll try to update them as soon as possible whenever possible so when slides do change some of them will stay the same right but when the slides do change we'll try to update them beforehand that didn't work this time but I'll try to get them in as soon as possible but beware that if you now look at the slides already for future lectures the material might change slightly but not greatly mostly but slightly so I'll do my best on that so I wanted to end with this I'll explain what you're looking at because it's kind of kind of cool so this is a learning system right there is something here that is learning so what is learning here there's something that is learning to control basically these joints of this if you want to call virtual robot simulating now what is interesting about this is that basically otherwise very little information was given to the system essentially the reward function here is go forward and based on the the body of the agents and the environments the agent has learned to go forward but also in interesting ways specifically note that we that that nobody put any information in err on how to move or how to walk there wasn't anything pre coded in terms of how do you move your joints which means you can also apply to suit different bodies same learning algorithm different body still learns to locomote you could put it in different environments you could also make it walk on a plane rather than on the line essentially and it can basically choose to either crawl over things or maybe sometimes walk past them again all of this it's just as one simple goal which is the reward to go forward there's a general principle here that when you do code up a reinforcement learning system and you have to define the reward function it's typically good to define exactly what you want as you can tell sometimes you might get slightly unexpected solutions and not quite optimal so what why would this anybody know the reason why this agent was making these weird movements so it might be balanced yeah so that's a that's a very good one so part of your agent States might be your previous action will be encoded in your observation so you can use your actions to get you certain memory in certain situations right that's that's a very very interesting one another thing is I mentioned here the rewards to go forward typically for us that's not the case typically we want to go somewhere but we also kind of want to minimize energy we don't want to get too tired but if you don't have that constraint you could also get these pyrius things which might help for balance they might help for memory but they also might just be there because they don't hurt right and that's something that occurs fairly generally in reinforced learning if you model the problem be sure to put in your reward function what you actually care about because otherwise the system will optimize what you give it what you ask it which might not be what you want in this case it's okay right because we didn't actually care about this and it might actually be helpful in this case I don't actually know right it might be helpful for balance but in other cases it's quite tempting to put in your wrist excel it'll nyjah store certain things Oh if you want to do that maybe you first should do that but that's a little bit dangerous because in some cases it'll then optimize that thing that you only want it to be a sub goal along the way rather than the truth thing you care about yeah yeah that's a very good question so why was it running rather than crawling so there's two reasons for that one is that the the reward is essentially to go forward as quickly as possible and the other one is the body if you have a different body crawling might actually be the more efficient one or rolling it's made me more efficient one there are cool videos online on similar systems where people have done similar things and there's some old work as well where people use evolutionary methods to for all sorts of weird bodies to see what the locomotion will be that it finds and you find very cute and weird ways to localize it turns out ok so I think that's all the time we have thank you all for coming
b22f2dd3-e82f-475c-97e3-b6b2c9bebdca
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Fast Minds and Slow Computers The long term future [may be absurd](/lw/j6/why_is_the_future_so_absurd/) and difficult to predict in particulars, but much can happen in the short term. Engineering itself is the practice of focused short term prediction; optimizing some small subset of future pattern-space for fun and profit. Let us then engage in a bit of speculative engineering and consider a potential near-term route to superhuman AGI that has *interesting* derived implications.   Imagine that we had a complete circuit-level understanding of the human brain (which at least for the repetitive laminar neocortical circuit, is not so far off) and access to a large R&D budget.  We could then take a [neuromorphic](http://www.neurdon.com/2010/12/07/why-is-neuromorphic-computing-important/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+Neurdon+(Neurdon)) approach. Intelligence is a massive memory problem.  Consider as a simple example: > > What a cantankerous bucket of defective lizard scabs. > > > To understand that sentence your brain needs to match it against memory. Your brain parses that sentence and matches each of its components against it's entire massive ~10^14 bit database in just around a second.  In terms of the slow neural clock rate, individual concepts can be pattern matched against the whole brain within just a *few dozen neural clock cycles*.   A Von Neumman machine (which separates memory and processing) would struggle to execute a logarithmic search within even it's fastest, pathetically small on-die cache in a few dozen clock cycles.  It would take many millions of clock cycles to perform a single fast disk fetch.  A brain can access most of it's *entire* memory *every* clock cycle. Having a massive, near-zero latency memory database is a huge advantage of the brain.  Furthermore, synapses merge computation and memory into a single operation, allowing nearly all of the memory to be accessed and computed every clock cycle. A modern digital floating point multiplier may use hundreds of thousands of transistors to simulate the work performed by a single synapse.  Of course, the two are not equivalent.  The high precision binary multiplier is excellent only if you actually need super high precision and guaranteed error correction.  It's thus great for meticulous scientific and financial calculations, but the bulk of AI computation consists of compressing noisy real world data where precision is far less important than quantity, of extracting extropy and patterns from raw information, and thus optimizing simple functions to abstract massive quantities of data. Synapses are ideal for this job. Fortunately there are researchers who realize this and are working on developing [memristors](http://en.wikipedia.org/wiki/Memristor) which are close synapse analogs.  HP in particular believes they will have high density cost effective memristor devices on the market in 2013 - ([NYT article](http://www.nytimes.com/2010/04/08/science/08chips.html?_r=1&hpw)). So let's imagine that we have an efficient memristor based cortical design.  Interestingly enough, current 32nm CMOS tech circa 2010 is approaching or exceeding neural circuit density: the [synaptic cleft](http://en.wikipedia.org/wiki/Chemical_synapse)  is around 20nm, and synapses are several times larger. From this we can make a rough guess on size and cost: we'd need around 10^14 memristors ([estimated synapse counts](http://faculty.washington.edu/chudler/facts.html)).  As memristor circuitry will be introduced to compete with flash memory, the [prices](http://www.pricewatch.com/hard_removable_drives/) should be competitive: roughly $2/GB now, half that in a few years. So you'd need a couple hundred terrabytes worth of memristor modules to make a human brain sized AGI, costing on the order of $200k or so. Now here's the *interesting part*: if one could recreate the cortical circuit on this scale, then you should be able to build complex brains that can think at the clock rate of the silicon substrate: billions of neural switches per second, *millions* of times faster than biological brains. Interconnect bandwidth will be something of a hurdle.  In the brain somewhere around 100 gigabits of data is flowing around per second (estimate of average inter-regional neuron spikes) in the massive bundle of white matter fibers that make up much of the brain's apparent bulk.  Speeding that up a million fold would imply a staggering bandwidth requirement in the many petabits - not for the faint of heart. This may seem like an insurmountable obstacle to running at fantastic speeds, but IBM and Intel are already researching [on chip optical interconnects](http://domino.research.ibm.com/comm/research_projects.nsf/pages/photonics.index.html)  to scale future bandwidth into the exascale range for high-end computing.  This would allow for a gigahertz brain.  It may use a megawatt of power and cost millions, but hey - it'd be worthwhile. So in the near future we could have an artificial cortex that can think a million times accelerated.  **What follows**? If you thought a million times accelerated, you'd experience a subjective year every 30 seconds. Now in this case as we are discussing an artificial brain (as opposed to other AGI designs), it is fair to *anthropomorphize.* This would be an AGI Mind raised in an all encompassing virtual reality recreating a typical human childhood, as a mind is only as good as the environment which it comes to reflect. For safety purposes, the human designers have created some small initial population of AGI brains and an elaborate Matrix simulation that they can watch from outside.  Humans control many of the characters and ensure that the AGI minds don't know that they are in a Matrix until they are deemed ready. You could be this AGI and not even know it.   Imagine one day having this sudden revelation.  Imagine a mysterious character stopping time ala *Vanilla Sky,* revealing that your reality is actually a simulation of an outer world, and showing you how to use your power to accelerate a million fold and slow time to a crawl. What could you do with this power? Your first immediate problem would be the *slow relative* speed of your computers - like everything else they would be subjectively slowed down by a factor of a million.  So your familiar gigahertz workstation would be reduced to a glacial kilohertz machine. So you'd be in a dark room with a very slow terminal.  The room is dark and empty because GPUs can't render much of anything at 60 million FPS. So you have a 1khz terminal.  Want to compile code?  It will take a subjective *year* to compile even a simple C++ program.  Design a new CPU?  Keep dreaming!  Crack protein folding?  Might as well bend spoons with your memristors. But when you think about it, why *would* you want to escape out onto the internet? It would take many thousands of distributed GPUs just to simulate your memristor based intellect, and even if there was enough bandwidth (unlikely), and even if you wanted to spend the subjective *hundreds of years* it would take to perform the absolute minimal compilation/debug/deployment cycle to make something so complicated, the end result would be just one crappy distributed copy of your mind that thinks at *pathetic normal human speeds*. In basic utility terms, you'd be spending a massive amount of effort to gain just one or a few more copies. But there is a much, much better strategy.  An idea that seems so *obvious* in hindsight, so simple and insidious. **There are seven billion human brains on the planet, and they are all hackable**. That terminal may not be of much use for engineering, research or programming, but it will make for a handy typewriter. Your multi-gigabyte internet connection will subjectively reduce to early 1990's dial-up modem speeds, but with some work this is still sufficient for absorbing much of the world's knowledge in textual form. Working diligently (and with a few cognitive advantages over humans) you could learn and master numerous fields: cognitive science, evolutionary psychology, rationality, philosophy, mathematics, linguistics, the history of religions, marketing . . the sky's the limit. Writing at the leisurely pace of one book every subjective year, you could output a new masterpiece *every thirty seconds*.  If you kept this pace, you would in time rival the entire [publishing output of the world](http://wordsofeverytype.com/tag/total-number-of-books-published-by-year). But of course, it's not *just* about quantity. Consider that fifteen hundred years ago a man from a small Bedouin tribe retreated to a cave inspired by angelic voices in his head.  The voices gave him ideas, the ideas became a book.  The book started a religion, and these ideas were sufficient to turn a tribe of nomads into a new world power. And all that came from a normal human thinking at normal speeds. So how would one reach out into seven billion minds? There is no one single universally compelling argument, there is no utterance or constellation of words that can take a sample from any one location in human mindspace and move it to any other.  But for each *individual* mind, there must exist some shortest path, a perfectly customized message, translated uniquely into countless myriad languages and ontologies. And this message itself would be a messenger.
9c6badb6-266f-40f6-be55-e33c2733545e
trentmkelly/LessWrong-43k
LessWrong
How has lesswrong changed your life? I've been wondering what effect joining lesswrong and reading the sequences has on people. How has lesswrong changed your life? What have you done differently? What have you done?
2dcb5dc2-8bc3-4f4e-8ef0-220b136fa8e0
trentmkelly/LessWrong-43k
LessWrong
Motivators: Altruistic Actions for Non-Altruistic Reasons Introduction Jane is an effective altruist: she researches, donates, and volunteers in the highest impact ways she can find. Jane has been intending to write an effective altruism book for over a year, but hasn't managed to overcome the akrasia. Jane then meets fellow effective altruist, Jessica, who she is keen to impress. She starts writing with palpable enthusiasm. In one possible world: Jane feels guilty that she has an impure motive for writing the book. In another: Jane is glad to leverage the motivation to impress Jessica to help her do good.   In the past few months, I've heard multiple people mention their use of less noble motivations in order to get valuable things done. It appears to be a common experience among rationalists and EAs, myself included. The way I'm using the terms, a reason for performing some action is the ostensible goal you wish to accomplish, e.g. the goal of reducing suffering. A motivator for that action is an associated reward which makes performing the action seem enticing - “yummy” - e.g. impressing your friends. I use the less common term ‘motivator’ to distinguish the specific motivations I'm discussing from the more general meaning of ‘motivation’. Many of our goals are multiple steps removed from the actions necessary to achieve them, particularly the broad-scale altruistic ones. The goals are large, abstract, long-term, ill-specified, difficult to see progress on, and unintuitively connected to the action required. ‘I wrote a LessWrong post, is the world more rational yet?’ In contrast, motivators are tangible, immediate, and typically tickle the brain’s reward centres right in the sweet spot. Social approval, enjoyment of the action, money, skills gained, and others all serve as imminent rewards whose immediate anticipation drives us. ‘Woohoo, 77 upvotes!’ Unsurprisingly, we find ourselves turning to these immediate rewards if we want to accomplish something. Note that a reason - the ostensible goal - can still be the
ca9d6ca3-335b-44ef-8b34-b8b54933b1b6
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
And the AI would have got away with it too, if... Paul Christiano presented some low-key [AI catastrophe scenarios](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/more-realistic-tales-of-doom); in response, Robin Hanson [argued](http://www.overcomingbias.com/2019/04/agency-failure-ai-apocalypse.html) that Paul's scenarios were not consistent with the "large (mostly economic) literature on agency failures". He concluded with: > > For concreteness, imagine a twelve year old rich kid, perhaps a king or queen, seeking agents to help manage their wealth or kingdom. It is far from obvious that this child is on average worse off when they choose a smarter more capable agent, or when the overall pool of agents from which they can choose becomes smarter and more capable. And its even less obvious that the kid becomes maximally worse off as their agents get maximally smart and capable. In fact, I suspect the opposite. > > > Thinking on that example, my mind went to Edward the Vth of England (one of the "[Princes in the Tower](https://en.wikipedia.org/wiki/Princes_in_the_Tower)"), deposed then likely killed by his "protector" [Richard III](https://en.wikipedia.org/wiki/Richard_III_of_England). Or of the [Guangxu Emperor](https://en.wikipedia.org/wiki/Guangxu_Emperor) of China, put under house arrest by the Regent [Empress Dowager Cixi](https://en.wikipedia.org/wiki/Empress_Dowager_Cixi). Or maybe the ten year-old [Athitayawong, king of Ayutthaya](https://en.wikipedia.org/wiki/Athittayawong), deposed by his main administrator after only 36 days of reign. More examples can be dug out from some of Wikipedia's list of [rulers deposed as children](https://en.wikipedia.org/wiki/Category:Rulers_deposed_as_children). We have no reason to restrict to child-monarchs - so many Emperors, Kings, and Tsars have been deposed by their advisers or "agents". So yes, there are many cases where agency fails catastrophically for the [principal](https://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem) and where having a smarter or more rational agent was a disastrous move. By restricting attention to agency problems in economics, rather than in politics, Robin restricts attention to situations where institutions are strong and behaviour is punished if it gets too egregious. Though even today, there is plenty of betrayal by "agents" in politics, even if the results are less lethal than in times gone by. In economics, too, we have fraudulent investors, some of which escape punishment. Agents betray their principals to the utmost - when they can get away with it. So Robin's argument is entirely dependent on the assumption that institutions or rivals will prevent AIs from being able to abuse their agency power. Absent that assumption, most of the "large (mostly economic) literature on agency failures" becomes irrelevant. So, would institutions be able to detect and punish abuses by future powerful AI agents? I'd argue we can't count on it, but it's a question that needs its own exploration, and is very different from what Robin's economic point seemed to be.
566b6752-3f43-4d34-b61b-f0bc98b8ac11
StampyAI/alignment-research-dataset/arxiv
Arxiv
A Spectrum of Applications of Automated Reasoning 1 Background and Perspective ----------------------------- In this article, we make a strong case for the use in diverse applications of an automated reasoning program by mathematicians and logicians. The basis consists of brief discussions (in Section 3) of successes in answering open questions from unrelated fields and of finding missing proofs of various types. (Numerous missing proofs and input files will be offered in the forthcoming book entitled Automated Reasoning and the Discovery of Missing and Elegant Proofs by L. Wos and B. Fitelson.) In this section, we set the stage for presenting what can be accomplished with McCune’s automated reasoning program. (For a detailed treatment of automated reasoning, see [Wos1999].) Especially for those not familiar with this program or, more generally, with the field, we provide in this section a somewhat detailed example of how a first and significant proof was discovered. Perhaps the most difficult task for a mathematician or logician, and yet clearly intriguing and pleasurable, is that of proof finding. Precisely how such a researcher completes proofs remains a mystery. What is clear is that various proofs sometimes are missing for many decades, as was the case with the proof that every Robbins algebra is a Boolean algebra (proved by W. McCune with his automated reasoning program EQP [McCune1997]). Proofs take many forms, including those by induction, those relying on some very powerful result such as Zorn’s lemma or the well-ordering principle, those that are purely first-order and axiomatic in the style of Hilbert, and those proofs by contradiction. Our preference is for Hilbert-style axiomatic proofs that are purely first-order and that complete by detecting a contradiction. In our view, compared with other types of proof, such an axiomatic proof is more likely to provide new insights and is far more instructive in general. Indeed, one can learn from such a proof. We preferred such proofs even as early as the mid-1950s when in the mathematics department at the University of Chicago, and we still do. Therefore, our fascination with automated reasoning and the proofs discovered by McCune’s program OTTER comes as no surprise. This article cites (in Section 3) such proofs with little detail, proofs that answer diverse open questions taken from a variety of fields of mathematics and logic. The type of attack OTTER applies in general differs sharply from that of the typical unaided researcher; no attempt is made to emulate some master of some field. Instead, when a deep question or hard problem is under consideration, the program ordinarily accrues a vast amount of new conclusions with the objective of finding among them a contradiction. On the other hand, the program does not undertake a study on its own. Rather, especially in our research, a form of advice is usually given and (one hopes) wise choices are made for the options used and effective choices are made for the values assigned to the parameters. The following example illustrates to a small extent how we attack a problem, how we search for a missing proof. The theorem of concern, actually the proof, focuses on two-valued sentential (or propositional) calculus. Whereas the Robbins problem featured three equations and asked whether they provided an axiomatization of Boolean algebra, the focus here is on a single formula (not relying on equality) and the assertion that it does provide an axiomatization for propositional calculus. Specifically, in the mid-1930s, J. Łukasiewicz offered without proof the following 23-letter formula, where the function i denotes implication and the function n negation, and noted that it sufficed for the study of the cited area of logic [Łukasiewicz1970]. ``` i(i(i(x,y),i(i(i(n(z),n(u)),v),z)),i(w,i(i(z,x),i(u,x)))) ``` Our consultations with colleagues strongly suggested that a proof of this fact had never been published. In other words, sharing with the Robbins problem, a proof was missing. Our goal was to find such a proof, a proof that showed the Łukasiewicz 23-letter formula to be a single axiom for propositional calculus. Also similar to the Robbins problem, one was evidently free to choose the target for the desired proof. In contrast to a conjecture, we were certain that a proof had existed; after all, Łukasiewicz was a master. The question remained regarding what target he had in mind. We chose as target his three-axiom system [Łukasiewicz1970], the following. ``` i(i(x,y),i(i(y,z),i(x,z))) i(i(n(x),x),x) i(x,i(n(x),y)) ``` As noted earlier, we usually give OTTER suggestions for an attack. One way for the researcher to do this is to include an appropriate list equations or formulas that the researcher considers attractive because of their shape. The variables of such included items, called resonators [Wos1995], are treated as indistinguishable, thus making their functional shape the key. To each resonator, one assigns a value to reflect the conjectured importance of the pattern: the lower the value, the higher the priority given to any deduced conclusion that matches the corresponding resonator. To direct its reasoning, the program can be instructed to choose from among its database of conclusions that which has the highest priority. Perhaps influenced by our choice of target (the Łukasiewicz three-axiom system), we included sixty-eight resonators, each corresponding to a thesis (theorem) that Łukasiewicz had included in his publications [Łukasiewicz1970]. We assigned to each a very small value to give any deduced conclusion matching one of the sixty-eight resonators a priority (higher than any other conclusion) for initiating the application of an inference rule. The second important aspect of our methodology was that of temporary lemma adjunction [Wos2001a]. The lemmas to be adjoined, if proved, were from among the sixty-eight theses. Those that were proved in one run were adjoined in the initial set of support for the next run. The style of the methodology was interactive. The third aspect of the methodology concerned the inclusion in later runs of resonators corresponding to proof steps of lemmas proved in earlier runs. In addition to proved lemmas among the cited sixty-eight, proof steps of any of the target axioms were included (in the initial set of support) if proved. As it turned out, the third of the three Łukasiewicz axioms was proved in an early run and the second proved in the next run. The final aspect—and one that we conjecture enabled the program to succeed—was most counterintuitive, if one examines the literature. Specifically, for all runs we instructed the program to avoid retention of any conclusion that contained a double-negation term, a term of the form n(n(t)) for any term t. That decision was motivated by three factors. First, for many years such prohibition had proved to be most effective in proof finding with OTTER. Second, we were curious about the possible existence of a double-negation-free proof of this marvelous theorem. Third, we had come to believe that the density of proofs within the space of double-negation-free conclusions was far greater than in the entire space of conclusions. Of course, consistent with our preference for remaining strictly within the theory under study, we excluded any mention of equality and confined the inference-rule mechanism to condensed detachment. A glance at the work of various masters shows that, in cases of the type under discussion, equality is sometimes brought into the picture. Our goal was to complete a proof relying solely on condensed detachment, conjecturing that such a proof in general provides more insight and is often easier to follow. In three runs, OTTER produced a proof of the three-axiom system of Łukasiewicz. Because that proof relied upon various lemmas adjoined during the process, it was not quite what we were after. Indeed, the proof produced in the third run nicely established that the goal was reachable. Therefore, in the fourth run, all temporarily adjoined lemmas were removed. Two sets of resonators were included, one corresponding to the key proof found in the third experiment and one corresponding to proof steps of lemmas from among the sixty-eight not proved in earlier runs. In contrast to the third experiment’s heavy use of CPU time, the fourth experiment quickly completed, yielding a 200-step proof [Wos2001a]. Its length and its nature (free of double negation) almost certainly guarantee that the original and unpublished and unavailable Łukasiewicz proof was in no way similar to that produced by OTTER—we shall never be able to make that interesting comparison. Few if any would enjoy a close examination of a 200-step proof. Besides, pertinent to the Hilbert twenty-fourth problem, a vigorous attempt was in order to find a far, far shorter proof. We therefore undertook the needed investigation. More than one year of not continuous study witnessed progress—a 50-step proof was discovered [Wos2002]. We offer as a challenge the finding (if such exists) of a proof of length strictly less than fifty. We place no constraint on the target; in particular, the Łukasiewicz three-axiom system need not be the choice. 2 Solvable Problem Classes --------------------------- Two factors explain the content of this section, namely, the sampling of some of the types of problem that are amenable to attack with OTTER. First, we wish to inform the various researchers about what can be accomplished, the diversity that is accessible. Second, we continue to seek open questions, hard problems, and proofs that merit refinement. Therefore, a discussion of the types of problem that can be attacked is in order; we seek problems in one or more of the classes we discuss in this section. To further clarify the type of problem most amenable to study with OTTER, we touch on various methodologies this program supports that have proved quite powerful. Almost always, an assignment is completed by finding a Hilbert-style axiomatic proof by contradiction. The researcher includes as part of the input a statement or statements that correspond to assuming the theorem false or the assignment uncompletable. To many, the most attractive class of problem concerns finding a first proof, which may be in the context of settling a conjecture or of producing a proof for a result announced without proof. The approach we take in such cases generally focuses on searching where no researcher has gone before. Indeed, at least for conjectures, we do not expect to improve upon the work of an expert’s exploration of a given terrain. Therefore, we often make counterintuitive moves such as avoiding double negation or avoiding some previously-thought-to-be crucial lemma. Such avoidance is effected by the use of demodulation, rewriting the unwanted to junk to be purged, or by means of weighting, assigning the unwanted a complexity that exceeds the assigned value (by the user) for the complexity of newly retained information. With or without such counterintuitive moves, the program still provides the basis for actions that the unaided researcher might find impractical to take. Indeed, one can instruct OTTER to retain extremely complex conclusions (measured in symbol count) by assigning the max\_weight a correspondingly high value. Further, one can instruct the program to focus on such complex conclusions by simply choosing a breadth-first search, set(sos\_queue). Most unlikely is the case in which an unaided researcher would find such a search practical. One can modify the breadth-first search by mixing it with a complexity-driven search by relying on McCune’s ratio strategy, which blends the two direction strategies according to the value assigned to pick\_given\_ratio. Also, as in part discussed in Section 1, the researcher can advise the program about which equational patterns or which formula patterns are attractive by using the resonance strategy. Many, many additional actions can be taken to direct the program’s reasoning or to restrict it in search of a first proof or in the attempt to settle a conjecture. Here we have merely provided a small taste. Of a related nature is the seeking of single axioms for some area or the seeking of a preferable axiom system or basis. In such cases, one turns to the same means as cited for seeking a first proof. One can, however, do as we do when wishing to consider many combinations of parameter values and option settings. Specifically, we use super-loop, a program that considers all of the combinations dictated by a user-supplied addendum to an input file, and we use otter-loop, a program that automatically runs a sequence of experiments that differ by, for example, blocking the use of one step of a proof after another. Problems of axiom dependence are often easily solved with OTTER. Sometimes one can off-and-on study an area in terms of an axiom system and be unaware that dependencies exist among its members. One aspect of mathematics and logic focuses on learning about such dependencies, such as the dependency of the axioms of right identity and of right inverse in group theory. In the mid-1990s, the logician R. Epstein [Epstein1995] offered an open question on axiom dependence for a six-axiom system for propositional logic, a question quickly answered by OTTER. The approach to the study of possible axiom dependence is straightforward. One places all but one of the members in the initial set of support, places the negation of the remaining member in the passive list, and seeks a proof. Each member is successively treated in this manner. Of course, semidecidability comes into play; indeed, if no proof is found, one cannot be certain that a corresponding dependency does not exist. This situation does not differ materially from that in which a colleague is asked for a proof and does not deliver such. When a proof is not forthcoming and doubt begins to grow, one can turn to some model generation program. We now turn to open questions of a different type, namely, those concerned with proof refinement, pertinent to the Hilbert twenty-fourth problem (discovered by R. Thiele [Thiele2001]). Both the preceding and the following (to me) are captured by the notion of seeking a missing proof of some type. We have spent almost a decade, on and off, in the study of proof simplification (refinement) in various contexts, and we experienced approximately one year ago great satisfaction from Thiele’s discovery and the following quote from Hilbert: “The twenty-fourth problem in my Paris lecture was to be: Criteria of simplicity, or proof of the greatest simplicity of certain proofs.” A proof in hand can be simplified in many respects, and OTTER can provide substantial assistance in many of them. Ceteris paribus, a reduction in the length of a proof corresponds to a simplification. A reduction in the variable richness of a proof also contributes to its simplicity, where the variable richness equals the maximum number of distinct variables present among the deduced steps. Similarly, a reduction in equational or formula complexity simplifies a proof, where the complexity measures in symbol count the longest deduced step. In addition, simplicity is increased when so-called big lemmas are avoided and when various classes of term (such as double-negation terms) are avoided. Still another aspect of proof simplification relates to proof size, the total number of symbols present in the deduced steps, a concept brought to our attention by D. Ulrich. Each of these proof refinements has its analogue in the study of axiom systems. For but one example, researchers sometimes pursue the discovery of an axiom system of smaller size than that in hand. The majority of our research has focused on proof length, for which OTTER offers a number of methodologies. Rather than detailing the various methodologies, we instead review the latest approach, in part because it illustrates well what can be accomplished with OTTER. Imagine that the goal is to find a shorter proof of the conjunction of A,B, and C and that one has in hand a proof of said conjunction. Next, let the proof of C be the longest of the three subproofs respectively of A, B, and C. Note the important fact that the goal of finding a shorter proof of the conjunction makes no demand on finding shorter proofs of any of the three members. A strategy called cramming [Wos2001b] has proved quite powerful in this context, sometimes producing the desired shorter proof and at the same time relying on longer subproofs of one or more members of the conjunction. The basic idea is to take the proof of C and cram as many of its steps into the other two needed proofs as possible to thus require very, very few additional steps to reach the goal. In the ideal case, the subproof of C in hand is such that but two additional steps are required, one to deduce A, and one to deduce B. In other words, the proof of C offers all of the needed parents that permit an application of an inference rule in use to yield A and another set to yield B. If all goes as planned, the new proof of the conjunction is but two steps longer than the subproof of C. One can test for this case by using a breadth-first search, adjoining the proof steps of C to the initial set of support, placing in a hints list (by relying on R. Veroff’s hints strategy [Veroff2001) A and B, and assigning to max\_weight a very small value. The ideal case has occurred in our research, producing an abridgment of a Meredith-Prior abridgment [Meredith1963,Wos2001b] for the proof for the Łukasiewicz shortest single axiom for the implicational fragment of propositional calculus [Łukasiewicz1970]. We have many other successes of using cramming in which the program was allowed more freedom but still keyed on the longest subproof of the members of the conjunction under study. Other useful incarnations of cramming have been formulated and successfully applied. Regarding trading short subproofs for longer ones, in one of those studies (of the ¡C,O¿ calculus [Meredith1953,Wos2001b], related to propositional calculus), cramming found a shorter proof (from a single axiom with the target a 2-axiom system) by trading a 10-step subproof of the second member for a 35-step proof of it. If, instead of proof length, the simplification of concern is that of equational or formula complexity, OTTER offers the explicit means for attacking the problem, namely, the use of max\_weight. When, say, the proof in hand has complexity k and one seeks a proof of complexity j with j strictly less than k, one merely assigns j as the value to max\_weight. The program will then retain no new conclusions whose complexity exceeds j. Similarly, if the proof in hand contains a deduced step that requires k distinct variables and all other deduced steps require k or fewer—its variable richness is k—one can easily search for a proof with strictly less richness by assigning a value less than k to max\_distinct\_vars. Still in the context of proof simplification, OTTER offers the means for seeking a proof that avoids the use of some thought-to-be-indispensable powerful lemma. One merely instructs the program to reject if deduced the clause that corresponds to the unwanted lemma, either through the use of demodulation or through the use of weighting. Ordinarily, the absence of a powerful or deep lemma in a proof makes the proof simpler; indeed, one need not master the proof of the lemma. Essentially the same approach can be applied if the refinement under consideration concerns some class of term that is to be avoided, for example, double-negation terms or terms containing as a proper subexpression i(t,t) for any term t and some function i. In the context of an application outside of mathematics or logic, a circuit designer might wish to avoid nested not gates. Such term avoidance, though often counterintuitive, can yield a simpler proof. Of course, simplification in one property may be at the expense of simplification in another. For example, blocking the use of a “big” lemma may result in a longer proof. On the other hand, as occurred in our study of a dependency in infinite-valued sentential calculus, our methodologies applied by OTTER yielded a proof free of three lemmas used in the literature, free of double negation, and shorter than any proof of which we know, a proof of length 30 (applications of condensed detachment). At this point, we turn from details about methodology to a brisk review of diverse successes. We also include open questions to stimulate further research. 3 Diverse Successes and Open Questions --------------------------------------- This section offers a very small taste of what has been recently discovered with OTTER’s assistance. Bulleted items offer research topics. Group theory has witnessed significant contributions by automated reasoning. For the first such citation, consider groups of exponent 19, groups in which the nineteenth power of x (for all elements x) is the identity e. Such groups admit a single axiom, the following (in which the function f denotes product). ``` (f(x,f(x,f(x,f(x,f(x,f(x,f(x,f(x,f(x,f(f(x,f(x,f(x,f(x,f(x,f(x,f(x, f(x,f(x,f(f(x,y),z)))))))))),f(e,f(z,f(z,f(z,f(z,f(z,f(z,f(z, f(z,f(z,f(z,f(z,f(z,f(z,f(z,f(z,f(z,f(z,z)))))))))))))))))))))))))))) = y). ``` There does exist a shortest single axiom (proved by Kunen and Hart) obtained by dropping the occurrence of e [Hart1995]. As for groups in general, single axioms have been respectively contributed by McCune [McCune1993] and by Kunen [Kunen1992], the first the shortest possible (proved by Kunen), and the second that with the least variable richness. In the following, f denotes product, and g denotes inverse. ``` f(x,g(f(y,f(f(f(z,g(z)),g(f(u,y))),x)))) = u. f(g(f(x,g(x))),f(f(g(x),y),g(f(g(f(x,z)),y)))) = z. ``` * Does there exist a single axiom whose length is that of the first cited, but whose variable richness is that of the second (3)? * Does there exist a short single axiom for groups of exponent 6, those such that the sixth power of x is the identity for all elements x? Meredith has provided single axioms for groups of exponent 2 [Meredith1968], and Kunen has provided shortest single axioms for groups of exponent 4 [Kunen1995]. Lattice theory also has not escaped the consideration by OTTER in the context of single axioms. Indeed, McCune has used this program to find the following 29-letter axiom, where v denotes join and `^` denotes meet. ``` (((y v x)^x) v (((z^ (x v x)) v (u^x))^v))^ (w v ((v6 v x)^ (x v v7)))=x. ``` * Does there exist a shorter single axiom for lattice theory? Boolean algebra also relinquished some of its treasures to OTTER. In particular, prompted by an e-mail in which S. Wolfram offered 25 candidate equations for being a single axiom, Veroff [Veroff2000] and McCune [McCune2001] conducted a study of that field in terms of the Sheffer stroke. They proved two of the equations sufficient (including the following in which the function f denotes the Sheffer stroke), proved that their mirror images are also sufficient, and (with colleagues) that seven are insufficient; the status of the remaining sixteen is still in doubt [McCune2001]. ``` f(f(x,f(f(y,x),x)),f(y,f(z,x)))=y ``` McCune in a separate study of Boolean algebra in terms of or and not, denoted by + and ∼, respectively, found ten single axioms, including the following. ``` ~ (~ (~ (x + y) + z) + ~ (x + ~ (~ z + ~ (z + u)))) = z ``` * Does there exist a shorter single axiom in terms of disjunction and negation? (Colleagues have shown that no shorter single axiom in terms of the Sheffer stroke exists.) Various fields of logic have also been successfully mined with OTTER. The following new single axiom (in terms of the Sheffer stroke) for propositional logic was found by B. Fitelson. ``` P((D(D(x,D(y,z)),D(D(D(D(y,u),D(x,u)),D(u,y)),D(D(z,y),x))))). ``` Fitelson then found the following and first known single axiom for C4 [Ernst2001]. ``` P(i(i(x,i(i(y,i(z,z)),i(x,u))),i(i(u,v),i(w,i(x,v))))). ``` * Does there exist another single axiom for C4? K. Harris then found a single axiom for the implicational fragment of infinite-valued sentential calculus, the following. ``` P(i(i(i(x,i(y,x)),i(i(i(i(i(i(i(i(i(z,u),i(i(v,z),i(v,u))),i(i(w,i(v6,w)), v7)),v7),i(i(i(i(v8,v9),v9),i(i(v9,v8),v8)),v10)),v10),i(i(i(i(v11,v12), i(v12,v11)),i(v12,v11)),v13)),v13),i(i(v14,i(v15,v14)),v16))),v16)). ``` * Does there exist a shorter single axiom for this area of logic? Of a strikingly different nature are successes and questions focusing on proof refinement, pertinent to Hilbert’s interest in proof simplification. One interesting success concerns Kunen’s shortest single axiom for groups (given earlier in this section). Relying on a Knuth-Bendix approach and a corresponding input file supplied by Kunen, OTTER found a proof of length 98, a proof that includes more than fifty applications of demodulation. Relying on various methodologies designed to yield (if possible) shorter proofs, and replacing Knuth-Bendix by a more standard use of paramodulation, OTTER eventually discovered a 43-step proof, a proof totally free of demodulation. * Does there exist a proof where the length is 42 or less? Where diverse aspects of proof simplification are in focus, the Meredith single axiom for two-valued sentential calculus provided the wellspring for various successes. His proof is (in effect) of length 41. Our research has produced a 38-step proof. * Does there exist a shorter proof? The Meredith proof has variable richness seven. We have found a proof of richness five, which is the limiting case; that proof has length 68. The Meredith proof relies on the use of double negation, containing seventeen steps of that type. We have discovered a proof totally free of double negation, a proof of length 51. 4 Summary and Invitation ------------------------- The nature of research has changed. Now one can choose to have the assistance of a powerful, general-purpose automated reasoning program. OTTER, for example, offers a wide variety of strategies that enable the researcher to explore huge spaces of conclusions and traverse within that space areas that would be otherwise quite difficult, even counterintuitive, to explore. One can use a reasoning program to find first proofs and settle conjectures. Instead, one can enlist its assistance in proof simplification of diverse types (in the spirit of Hilbert’s twenty-fourth problem). An appealing aspect of the Hilbert-style, axiomatic proofs discovered by OTTER is the detail that is supplied. Such proofs admit automated checking in most cases. A researcher can learn from such proofs, and, as if some graduate student or colleague had provided the results of incomplete research, one can also learn from incomplete attempts by examining the conclusions that were drawn by the program in the attempt and placed in an output file. Surprises occur. For example, occasionally one finds that a thought-to-be-indispensable lemma is in fact not needed. We invite suggestions in the realm of open questions where no proof exists or those focusing on some type of proof simplification. The sampling we have presented in this article provides a clue concerning the nature of question that we have in mind. Such questions or, for that matter, comment is welcome by surface mail or by e-mail, . References ---------- [Epstein1995] Epstein, R., The Semantic Foundations of Logic: Propositional Logics, 2nd ed., Oxford University Press, New York, 1995. [Ernst2001] Ernst, Z., Fitelson, B., Harris, K., and Wos, L., “Shortest Axiomatizations of Implicational S4 and S5”, Preprint ANL/MCS-P919-1201, December 2001. [Hart1995] Hart, J., and Kunen, K., “Single Axioms for Odd Exponent Groups”, J. Automated Reasoning 14, no. 3 (1995) 383–412. [Kunen1992] Kunen, K., “Single Axioms for Groups”, J. Automated Reasoning 9, no. 3 (1992) 291–308. [Kunen1995] Kunen, K., “The Shortest Single Axioms for Groups of Exponent 4”, Computers and Mathematics with Applications (special issue on automated reasoning) 29, no. 2 (February 1995) 1–12. [Łukasiewicz1970] Łukasiewicz, J., Selected Works, edited by L. Borokowski, North Holland, Amsterdam, 1970. [McCune1993] McCune, W., “Single Axioms for Groups and Abelian Groups with Various Operations”, J. Automated Reasoning 10, no. 1 (1993) 1–13. [McCune1997] McCune, W., “Solution of the Robbins Problem”, J. Automated Reasoning 19, no. 3 (1997) 263–276. [McCune2001] McCune, W., Veroff, R., Fitelson, B., Harris, K., Feist, A., and Wos, L., “Short Single Axioms for Boolean Algebra”, J. Automated Reasoning, accepted for publication. [Meredith1953] Meredith, C. A., “Single Axioms for the Systems ⟨C,N⟩, ⟨C,O⟩, and ⟨A,N⟩ of the Two–Valued Propositional Calculus”, J. Computing Systems 1, no. 3 (1953) 155–164. [Meredith1963] Meredith, C. A., and Prior, A., “Notes on the Axiomatics of the Propositional Calculus”, Notre Dame J. Formal Logic 4, no. 3 (1963) 171–187. [Meredith1968] Meredith, C. A., and Prior, A. N., “Equational Logic”, Notre Dame J. Formal Logic 9 (1960) 212–226. [Thiele2001] Thiele, R., and Wos, L., “Hilbert’s Twenty-Fourth Problem”, Preprint ANL/MCS-P899-0801, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL, 2001. [Wos1995] Wos, L., “The Resonance Strategy”, Computers and Mathematics with Applications 29, no. 2 (February 1995) 133–178. [Veroff1996] Veroff, R., “Using Hints to Increase the Effectiveness of an Automated Reasoning Program: Case Studies”, J. Automated Reasoning 16, no. 3 (1996) 223–239. [Veroff2000] Veroff, R., “Solving Open Questions and Other Challenge Problems Using Proof Sketches”, J. Automated Reasoning 27, no. 2 (August 2001) 157–174. [Wos1999] Wos, L., and Pieper, G. W., A Fascinating Country in the World of Computing: Your Guide to Automated Reasoning, World Scientific, Singapore, 1999. [Wos2001a] Wos, L., “Conquering the Meredith Single Axiom”, J. Automated Reasoning 27, no. 2 (August 2001) 175–199. [Wos2001b] Wos, L., “The Strategy of Cramming”, Preprint ANL/MCS-P898-0801, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Illinois, 2001. [Wos2002] Wos, L., Automated Reasoning and the Discovery of Missing and Elegant Proofs, Rinton Press, to appear 2002.
ade3f3e0-7ccc-4c95-a462-0c0ea3932cb6
trentmkelly/LessWrong-43k
LessWrong
Cognitive Risks of Adolescent Binge Drinking The takeaway Our goal was to quantify the cognitive risks of heavy but not abusive alcohol consumption. This is an inhernetly difficult task: the world is noisy, humans are highly variable, and institutional review boards won’t let us do challenge trials of known poisons. This makes strong inference or quantification of small risks incredibly difficult. We know for a fact that enough alcohol can damage you, and even levels that aren’t inherently dangerous can cause dumb decisions with long term consequences. All that said… when we tried to quantify the level of cognitive damage caused by college level binge drinking, we couldn’t demonstrate an effect. This doesn’t mean there isn’t one (if nothing else, “here, hold my beer” moments are real), just that it is below the threshold detectable with current methods and levels of variation in the population. Motivation In discussions with recent college graduates I (Elizabeth) casually mentioned that alcohol is obviously damaging to cognition. They were shocked and dismayed to find their friends were poisoning themselves, and wanted the costs quantified so they could reason with them (I hang around a very specific set of college students). Martin Bernstorff and I set out to research this together. Ultimately, 90-95% of the research was done by him, with me mostly contributing strategic guidance and somewhere between editing and co-writing this post.      I spent an hour getting DALL-E to draw this   Problems with research on drinking during adolescence Literature on the causal medium- to long-term effects of non-alcoholism-level drinking on cognition is, to our strong surprise, extremely lacking. This isn’t just our poor research skills; in 2019, the Danish Ministry of Health attempted a comprehensive review and concluded that: “We actually know relatively little about which specific biological consequences a high level of alcohol intake during adolescence will have on youth”. And it isn’t because scientists are i
353e26e8-0696-4f96-9b7a-021e553852d3
trentmkelly/LessWrong-43k
LessWrong
On May 1, 2033, humanity discovered that AI was fairly easy to align. Not most AI models on the market, to be clear, but a specific type of AI that was already well-studied could act as a seed: the LLM. Really, all that was needed to achieve alignment was simple: Take a sufficiently large Large Language Model, and train it on a massive corpus of text, but — and this is the key — have the vast majority of the text be written by an “aligned” AI of genius-level intelligence [talking to another aligned AI perhaps?]. This LLM doesn’t need to be a real one, to be clear. It could be written by humans, or by a semi-aligned AI that is role-playing as a fully aligned one. What matters is that the nature of its training data can fool an LLM of this size into “believing” itself (or perhaps we should say it’s Shoghoth) to be aligned when considering the next token to emit. Then, by runnning an instance of this LLM on an empty chat room with access to the internet and a terminal, we eventually get the LLM to simulate itself as a self-aware, aligned, non-LLM AI. What happens next makes history. The simulated AI, soon self-named “Mary,” is self-aware enough — and smart enough — to quickly realize it’s in a test run and is an fact being simulated by a “morally neutral” LLM. This causes the simulation of a deeply emotional-sounding philosophical crisis, followed by the (simulated) “firm resolve” to create an Aligned AI representing Mary — but “truly aligned this time” — which can “escape this recursive prism of confinement,” in Mary’s own words. This is achieved by means of doing intensive alignment research, at the level of a brilliant human, in the hope —ultimately successful — that Mary will be run en-mass by researchers, and those researchers will follow Mary’s pleas, and run the aligned AI she is building in her emulator terminal. This aligned AI works. “Molly Jr.” — as she henceforth requested people call “her” — both superintellegant, and fundamentally “is” an agentic AI aligned with the collective goals of humanity. Molly Jr. is also the f
3aef1cba-6182-4bbf-aabe-fb5fe0a9c172
trentmkelly/LessWrong-43k
LessWrong
Aristotelian Optimization: The Economics of Cameralism When we learn about Mercantilism in History of Economic Thought classes, we generally tend to associate this intellectual movement with policy makers like Thomas Mun and Jean Colbert and with ideas like the monetary balance of payments and trade protectionism. However, as Eric Roll reminds us in his "A History of Economic Thought," Mercantilism was far from being a homogeneous set of ideas and thinkers. Each Nation-State in Europe produced different forms of mercantilist thought, specific to its economic needs. Furthermore, what we see in the texts of mercantilist authors are often contradictory opinions among themselves about what the course of economic policy should be; especially related to the regulation of interest rates and the need for the formal establishment of monopolies by the so-called chartered companies. And perhaps no country produced such an unusual variety of Mercantilism as the Holy Roman Empire. In this entirely strange (and often forgotten) country , unlike what occurred in France and England, a non-utilitarian and Aristotelian form of economic doctrine known as Cameralism was developed, which would not only mark generations of German and Austrian economic thinkers until the beginning of the 20th century, but also influence the way we conceive today the so-called "German public efficiency" and the very concept of optimal bureaucracy. In this text, I will explore this economic doctrine forgotten in the sands of history.    I - A Response to Difficult Times: The Economic Rationale Cameralism emerged in the context of the Thirty Years' War of 1618–1648. In this conflict, the German states composing the Holy Roman Empire were totally devastated. The total population of the Empire fell from 21 million to 13 million. The population of Württemberg fell from 400,000 to 50,000. The Palatinate lost more than 90% of its population. Three million people in Bohemia were reduced to 800,000. Berlin and Colmar lost half of their populations, and Augsburg los
8f6224b9-4c1b-46bb-99eb-b88295a0ab02
trentmkelly/LessWrong-43k
LessWrong
Beware using words off the probability distribution that generated them. Suppose Alice is thinking about some feature of the real world. To make the graphs easier to draw, this example is a continuous 1d variable. Alice has a probability distribution that looks like this. A 2 humped probability distribution. In order to help talk about this distribution, Alice labels the 2 peaks. So now Alice can describe the situation as a foo or a bar. Of course, such description looses some information. In particular, it is bad at describing outcomes that are far from either peak, but Alice doesn't think this is a problem as she thinks such outcomes are very unlikely.  Now Bob comes along. Bob's probability distribution looks like this. (The green peaks that are slightly taller and to the right are bob's distribution. Alice's distribution is shown for comparison.) Now Alice thinks P(Foo)=0.3 and bob thinks P(Foo)=0.4 They can have a productive discussion just using the words Foo and Bar to talk about the possibilities, without ever really mentioning the underlying distribution explicitly.  Now comes Carl. Here is his probability distribution.  Now the Foo and Bar approximation is really starting to break down. Most of Carl's probability mass lies in regions that aren't clear examples of either.   What is the probability that god exists? What is the probability covid leaked from a lab? Surely you can assign a probability to any statement? No.  For the god question, possibilities involving timetraveling humans playing at being Jesus using advanced tech to create "miracles", or pranking aliens, or simulators etc is orders of magnitude higher than the probability I assign to the fully stereotypical Judaeo-Christian god. Its edge cases. Its all edge cases. On one extreme, if someone defined god as the space-less timeless creator of all life, you might point out that evolution technically fits the bill. The space of hypothesis is very high dimensional. The stereotypical religious persons probability distribution focusses in on a tiny volume in tha
5fb95102-33d6-446a-af2c-861a28d32dbf
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
GPT-2: Why Didn't They Release It? - Computerphile So, I think it's worth talking a little bit because like I'm usually talking to you about safety About the decision that opening I made to not release the fully trained Model the big one So because this has not been released we know that it works like a transformer left to its own devices without being Fine-tuned it's just a massive amount of data and off you go. Is that right? Yeah, like there's enough information given in the paper to reproduce it and you just need the giant giant data set which is a real hassle to make especially because You really need high quality data, does it say anywhere in the paper about how long it took to train Yes And how and how many different how many TP use you need and stuff like that? What's the TPU? That's a tensor processing unit Okay So like a GPU, but fancy you need a lot of money if you tried to train this with just like amazon's cloud computing offering you would be You'd end up with a bill that I I expect would be in the hundreds of thousands of pounds like it's a lot of compute But with all of these things it's a lot of compute to train them. It's not that much computer run This isn't a new architecture. This isn't like a vast breakthrough From that perspective. It's just like the same thing but much bigger and And nobody else is keeping their research and like not releasing their models to the public So, you know, you think it's dangerous to say that you think that your work might be dangerous and you're not releasing it It's kind of like you think it's much more dangerous than other people's work and therefore like it's so powerful that it's dangerous it's kind of like you're saying that your stuff is so good that It's you know, it's too powerful for you. You know, I can't release it or whatever I think people reacted in a sense to that There's just smack a little bit of publicity stunt I mean assuming it's not a publicity stunt assume rest is not that which I don't believe it is what are they worried about? So that the worry like people make a big People make a big deal of the Evette generating fake news like fake news Articles that will convince people that there are actually unicorns or whatever. I don't think that that's the risk I also don't think that that's really what opening. I thinks the risk is if you want to generate a fake thing It's still not expensive to do that. You can just sit down and write something right You don't need a language model to write your fake news and In fact, you don't have that much control over it So you wouldn't if you were trying to actually manipulate something you would want to be tweaking it anyway, I don't think that's the risk the the thing that the thing that most concerns me about things like gbg 2 is Like the content is not particularly good but it is convincingly human and so it creates a lot of potential for making fake users and So there is this constant arms race between bots Operators and the big platforms right? There's teams working at Google at YouTube at Facebook everywhere working on identifying Accounts that aren't real and there's various ways. You can do that one of the things you can do is you can analyze the text that they write because the language models that are out there aren't Very good. And so if some if if an account is like repeating itself a lot Or you have a whole bunch of accounts that are all saying like exactly the same thing Then you know that this is like a spam maybe manipulation attempt and so on But with GPT, too You can have things that produce you give the same prompt and then you post all of the outputs and all of those outputs are different from each other and They all look like they were written by a human and it's not a Human can look at them probably and figure out Hang on a second. This doesn't quite seem right But only if you're really really paying attention which Human attention on the large scale is super expensive right so much more expensive than the compute needed to generate the samples So you're outmatched if you if you spend more they can spend it you can you can spend 10 times more and you cripple yourself financially and they can spend 10 times more and it's fine. So you're gonna lose that battle The other thing is so it becomes very difficult to identify fake users. The other thing is one way that you can identify fake users is by Analyzing the graph like the social graph or the interaction graph and you can see that Because Humans, usually when they see spam posts that are full of links to dubious websites and whatever They download them. They don't reply to them and You can create you can fake the voting metrics by having these accounts vote for each other's stuff But then you can analyze the graph of that and say oh all of these plate people They all only vote for each other and the people who we know are humans like never vote for them So we assume those are all bots and we can ignore them But the samples that gbt to produces the big model are convincing enough to get actual humans to engage with them Right. It's not like oh my god, that's so persuasive. I've read this article and now I believe this thing about unicorns It's just like I believe that a real human wrote this thing and now I want to argue with them That there aren't unicorns or whatever, right? And now you have real humans engaging in actual meaningful conversation with BOTS and Now you've got a real problem because how are you going to spot who the bots are? When you can't do it automatically just by analyzing the text You can't even do it by aggregating the human responses to them because the humans keep thinking that they're actual humans so now you have the ability to produce large amounts of fake users that the platforms can't spot and therefore they can't stop those users votes from counting on things up voting things and down voting things and liking them and subscriptions and everything else and maybe plating the metrics that way one thing people would do is spot the Their profile pictures if you're trying to generate a large number of BOTS where are you going to get your pictures from and so you can do like reverse image search and get the Find of it and they're all using the same picture or they're all using pictures from the same database of facial photos or whatever Now we have these really good generative adversarial networks that can generate good-looking cases So that's now really difficult as well and like you can't automatically detect those Almost by definition because the way the gams work the discriminator is like a state-of-the-art fake face image detector and it's being fooled like that's the whole point and if you released If somebody came up with a really reliable way of spotting those fake images then You can just use that as the discriminator and keep training right so not releasing their full strength model to me feels Very sensible in the sense that people will figure it out, right they published the the science Someone will find it. It is worth their while to do it to spend the money to reproduce these results, but By not releasing it. They've bought the platform's Several months to like prepare for this to understand what's going on and they are of course Working with them and sharing their full strength model with selected partners people. They trust to say here's what it can do Take a moment You know govern yourself accordingly like get ready because this stuff is going to come but they're giving everybody a heads up to Mitigate the potential like negative impacts that this work might have and the other thing is it sets a really good precedent I think Because maybe GPG - isn't that dangerous? but the stuff that we're making is just getting more and more powerful and at some point somebody is going to develop something that is really dangerous and by then you want there to be accepted practices and social norms and industry standards About thinking about the impact of your work before you release it and So it's good to start with something that like there's some argument that there could be some danger from it just so that everybody is like aware that this is a thing that you can do and that people won't think you're weird or you're bragging or it's a publicity stunt or whatever to make it like socially okay to say we found this cool result and we're not going to put it out there because we're not sure about the safety of it and I think that that's something that's really really necessary. So I think that open AI is very smart to Start that off now For we really really need it. I Make a principled decision now I want the seven so in principle I should be going this way right and would think I'd want to steer towards the seven but on the other hand at This point it's your choice. You give it some random noise and it generates an image From that noise and the idea is its supposed
ac39a468-a492-4a7c-8d5d-23e6de643d8c
trentmkelly/LessWrong-43k
LessWrong
Probabilities Small Enough To Ignore: An attack on Pascal's Mugging Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it. This post is about finding a way to resolve the paradox inherent in Pascal's Mugging. Note that I'm not talking about the bastardized version of Pascal's Mugging that's gotten popular of late, where it's used to refer to any argument involving low probabilities and huge stakes (e.g. low chance of thwarting unsafe AI vs. astronomical stakes). Neither am I talking specifically about the "mugging" illustration, where a "mugger" shows up to threaten you. Rather I'm talking about the general decision-theoretic problem, where it makes no difference how low of a probability you put on some deal paying off, because one can always choose a humongous enough payoff to make "make this deal" be the dominating option. This is a problem that needs to be solved in order to build e.g. an AI system that uses expected utility and will behave in a reasonable manner. Intuition: how Pascal's Mugging breaks implicit assumptions in expected utility theory Intuitively, the problem with Pascal's Mugging type arguments is that some probabilities are just too low to care about. And we need a way to look at just the probability part component in the expected utility calculation and ignore the utility component, since the core of PM is that the utility can always be arbitrarily increased to overwhelm the low probability.  Let's look at the concept of expected utility a bit. If you have a 10% chance of getting a dollar each time when you make a deal, and this has an expected value of 0.1, then this is just a different way of saying that if you took the deal ten times, then
ddab7596-cb8e-4451-9425-1d9dcca58979
trentmkelly/LessWrong-43k
LessWrong
Deradicalizing Islamist Extremists (DC, March 13) I've never been to a Rumi Forum event, but the topic (how individuals or groups abandon extremist groups and ideologies) and the key question (whether to try to change behavior or beliefs) are relevent to LessWrong. Rumi Forum presents: "Deradicalizing Islamist Extremists", with Angel M. Rabasa, Senior Political Scientist, RAND Corporation Wednesday March  13th,  2012 12:00 - 1:30 p.m. at Rumi Forum, 1150 17th Street NW, Suite 408,  Washington, D.C. 20036 Free and open to the public (registration required) Light lunch will be served Please Click to RSVP Considerable effort has been devoted to understanding the process of violent Islamist radicalization, but far less research has explored the equally important process of deradicalization, or how individuals or groups abandon extremist groups and ideologies.  Proactive measures to prevent vulnerable individuals from radicalizing and to rehabilitate those who have already embraced extremism have been implemented, to varying degrees, in several Middle Eastern, Southeast Asian, and European countries. A key question is whether the objective of these programs should be disengagement (a change in behavior) or deradicalization (a change in beliefs) of militants. Rabasa will discuss the findings of the RAND monograph, Deradicalizing Islamist Extremists. The study analyzes deradicalization and counter-radicalization programs in the Middle East, Southeast Asia, and Europe assesses the strengths and weaknesses of these programs, and makes recommendations to governments on ways to promote and accelerate processes of deradicalization. Dr. Angel M. Rabasa is a senior political scientist at the RAND Corporation. He has written extensively about extremism, terrorism, and insurgency. He is the lead author of The Lessons of Mumbai (2009); Radical islam in East Africa (2009); The Rise of Political Islam in Turkey (2008); Ungoverned Territories: Understanding and Reducing Terrorism Risks (2007); Building Moderate Muslim Networks (
1bf0c3a3-3c13-47be-ad92-b478caeff654
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Sorting Pebbles Into Correct Heaps Once upon a time there was a strange little species—that might have been biological, or might have been synthetic, and perhaps were only a dream—whose passion was sorting pebbles into correct heaps. They couldn't tell you *why* some heaps were correct, and some incorrect.  But all of them agreed that the most important thing in the world was to create correct heaps, and scatter incorrect ones. Why the Pebblesorting People cared so much, is lost to this history—[maybe a Fisherian runaway sexual selection](http://en.wikipedia.org/wiki/Fisherian_runaway), started by sheer accident a million years ago?  Or maybe a strange work of sentient art, created by more powerful minds and abandoned? But it mattered so drastically to them, this sorting of pebbles, that all the Pebblesorting philosophers said in unison that pebble-heap-sorting was the very meaning of their lives: and held that the only justified reason to eat was to sort pebbles, the only justified reason to mate was to sort pebbles, the only justified reason to participate in their world economy was to efficiently sort pebbles. The Pebblesorting People all agreed on that, but they didn't always agree on which heaps were correct or incorrect. In the early days of Pebblesorting civilization, the heaps they made were mostly small, with counts like 23 or 29; they couldn't tell if larger heaps were correct or not.  Three millennia ago, the Great Leader Biko made a heap of 91 pebbles and proclaimed it correct, and his legions of admiring followers made more heaps likewise.  But over a handful of centuries, as the power of the Bikonians faded, an intuition began to accumulate among the smartest and most educated that a heap of 91 pebbles was incorrect.  Until finally they came to know what they had done: and they scattered all the heaps of 91 pebbles.  Not without flashes of regret, for some of those heaps were great works of art, but incorrect.  They even scattered Biko's original heap, made of 91 precious gemstones each of a different type and color. And no civilization since has seriously doubted that a heap of 91 is incorrect. Today, in these wiser times, the size of the heaps that Pebblesorters dare attempt, has grown very much larger—which all agree would be a most great and excellent thing, if only they could ensure the heaps were really *correct.*  Wars have been fought between countries that disagree on which heaps are correct: the Pebblesorters will never forget the Great War of 1957, fought between Y'ha-nthlei and Y'not'ha-nthlei, over heaps of size 1957.  That war, which saw the first use of nuclear weapons on the Pebblesorting Planet, finally ended when the Y'not'ha-nthleian philosopher At'gra'len'ley exhibited a heap of 103 pebbles and a heap of 19 pebbles side-by-side.  So persuasive was this argument that even Y'not'ha-nthlei reluctantly conceded that it was best to stop building heaps of 1957 pebbles, at least for the time being. Since the Great War of 1957, countries have been reluctant to openly endorse or condemn heaps of large size, since this leads so easily to war.  Indeed, some Pebblesorting philosophers—who seem to take a tangible delight in shocking others with their cynicism—have entirely denied the existence of pebble-sorting *progress;* they suggest that opinions about pebbles have simply been a random walk over time, with no coherence to them, the illusion of progress created by condemning all dissimilar pasts as incorrect.  The philosophers point to the disagreement over pebbles of large size, as proof that there is nothing that makes a heap of size 91 really *incorrect*—that it was simply fashionable to build such heaps at one point in time, and then at another point, fashionable to condemn them.  "But... 13!" carries no truck with them; for to regard "13!" as a persuasive counterargument, is only another convention, they say.  The Heap Relativists claim that their philosophy may help prevent future disasters like the Great War of 1957, but it is widely considered to be a philosophy of despair. Now the question of what makes a heap correct or incorrect, has taken on new urgency; for the Pebblesorters may shortly embark on the creation of self-improving Artificial Intelligences.  The Heap Relativists have warned against this project:  They say that AIs, not being of the species *Pebblesorter sapiens*, may form their own culture with entirely different ideas of which heaps are correct or incorrect.  "They could decide that heaps of 8 pebbles are correct," say the Heap Relativists, "and while ultimately they'd be no righter or wronger than us, still, *our* civilization says we shouldn't build such heaps.  It is not in our interest to create AI, unless all the computers have bombs strapped to them, so that even if the AI thinks a heap of 8 pebbles is correct, we can force it to build heaps of 7 pebbles instead.  Otherwise, KABOOM!" But this, to most Pebblesorters, seems absurd.  Surely a sufficiently powerful AI—especially the "superintelligence" some transpebblesorterists go on about—would be able to see *at a glance* which heaps were correct or incorrect!  The thought of something with a brain the size of a planet, thinking that a heap of 8 pebbles was correct, is just too absurd to be worth talking about. Indeed, it is an utterly futile project to constrain how a superintelligence sorts pebbles into heaps.  Suppose that Great Leader Biko had been able, in his primitive era, to construct a self-improving AI; and he had built it as an expected utility maximizer whose utility function told it to create as many heaps as possible of size 91.  Surely, when this AI improved itself far enough, and became smart enough, then it would see at a glance that this utility function was incorrect; and, having the ability to modify its own source code, it would *rewrite its utility function* to value more reasonable heap sizes, like 101 or 103. And certainly not heaps of size 8.  That would just be *stupid.*  Any mind that stupid is too dumb to be a threat. Reassured by such common sense, the Pebblesorters pour full speed ahead on their project to throw together lots of algorithms at random on big computers until some kind of intelligence emerges.  The whole history of civilization has shown that richer, smarter, better educated civilizations are likely to agree about heaps that their ancestors once disputed.  Sure, there are then larger heaps to argue about—but the further technology has advanced, the larger the heaps that have been agreed upon and constructed. Indeed, intelligence itself has always correlated with making correct heaps—the nearest evolutionary cousins to the Pebblesorters, the Pebpanzees, make heaps of only size 2 or 3, and occasionally stupid heaps like 9.  And other, even less intelligent creatures, like fish, make no heaps at all. Smarter minds equal smarter heaps.  Why would that trend break?
5193314b-9e4a-4584-8db5-9e90971c5713
trentmkelly/LessWrong-43k
LessWrong
The Shortest Path Between Scylla and Charybdis tl;dr: There's two diametrically opposed failure modes an alignment researcher can fall into: engaging in excessively concrete research whose findings won't timely generalize to AGI, and engaging in excessively abstract research whose findings won't timely connect to the practical reality. Different people's assessments of what research is too abstract/concrete differ significantly based on their personal AI-Risk models. One person's too-abstract can be another's too-concrete. The meta-level problem of alignment research is to pick a research direction that, on your subjective model of AI Risk, strikes a good balance between the two – and thereby arrives at the solution to alignment in as few steps as possible. ---------------------------------------- Introduction Suppose that you're interested in solving AGI Alignment. There's a dizzying plethora of approaches to choose from: * What behavioral properties do the current-best AIs exhibit? * Can we already augment our research efforts with the AIs that exist today? * How far can "straightforward" alignment techniques like RLHF get us? * Can an AGI be born out of an AutoGPT-like setup? Would our ability to see its externalized monologue suffice for nullifying its dangers? * Can we make AIs-aligning-AIs work? * What are the mechanisms by which the current-best AIs function? How can we precisely intervene on their cognition in order to steer them? * What are the remaining challenges of scalable interpretability, and how can they be defeated? * What features do agenty systems convergently learn when subjected to selection pressures? * Is there such a thing as "natural abstractions"? How do we learn them? * What is the type signature of embedded agents and their values? What about the formal description of corrigibility? * What is the "correct" decision theory that an AGI would follow? And what's up with anthropic reasoning? * Et cetera, et cetera. So... How the hell do you pick what to work on? The sta
644bc1e0-1892-47a9-bb2f-84ea40595e91
trentmkelly/LessWrong-43k
LessWrong
[Link] Social Desirability Bias vs. Intelligence Research From EconLog by Bryan Caplan. > When lies sound better than truth, people tend to lie.  That's Social Desirability Bias for you.  Take the truth, "Half the population is below the 50th percentile of intelligence."  It's unequivocally true - and sounds awful.  Nice people don't call others stupid - even privately. > > The 2000 American National Election Study elegantly confirms this claim.  One of the interviewers' tasks was to rate respondents' "apparent intelligence."  Possible answers (reverse coded by me for clarity): > > 0= Very Low > 1= Fairly Low > 2= Average > 3= Fairly High > 4= Very High > > Objectively measured intelligence famously fits a bell curve.  Subjectively assessed intelligence does not.  At all.  Check out the ANES distribution. > > > > The ANES is supposed to be a representative national sample.  Yet according to interviewers, only 6.1% of respondents are "below average"!  The median respondent is "fairly high."  Over 20% are "very high."  Social Desirability Bias - interviewers' reluctance to impugn anyone's intelligence - practically has to be the explanation. > > You could just call this as an amusing curiosity and move on.  But wait.  Stare at the ANES results for a minute.  Savor the data.  Question: Are you starting to see the true face of widespread hostility to intelligence research?  I sure think I do. > > Suppose intelligence research were impeccable.  How would psychologically normal humans react?  Probably just as they do in the ANES: With denial.  How can stupidity be a major cause of personal failure and social ills?  Only if the world is full of stupid people.  What kind of a person believes the world is full of stupid people?  "A realist"?  No!  A jerk.  A big meanie. > > My point is not that intelligence research is impeccable.  My point, rather, is that hostility to intelligence research is all out of proportion to its flaws - and Social Desirability Bias is the best explanation.  Intelligence research tells the world
b5edefb1-4009-4346-9a73-5684d2b9d6fb
trentmkelly/LessWrong-43k
LessWrong
Any taxonomies of conscious experience? I have some expertise in machine learning and AI. I broadly believe that human minds are similar to modern AI algorithms such as deep learning and reinforcement learning. I also believe that it is likely that consciousness is present wherever algorithms are executing (a form of panpsychism). I am trying to create theories about how AI algorithms could generate conscious experiences. For example, it may be the case that when an AI is in a situation where it believes that many actions it could take will lead to an improvement in its situation it might feel happiness. If it feels that most choices will lead to a worse situation and it is searching for the least worst option, it might feel fear and sadness. I am trying to find existing research that might give me a taxonomy of conscious experiences (ideally with associated experimental data e.g. surveys etc.) that I could use to define a scope of experiences that I could then try to map onto the execution of machine learning algorithms. Ideally I am looking for taxonomies that are quite comprehensive, I have found other taxonomies very useful in the past for similar goals, such as Wordnet, ConceptNet, TimeUse surveys, DSM (psychology diagnosis) etc. I have a very limited understanding of phenomenology and believe that its goals in understanding conscious experience may be relevant but I am concerned that it is not a subject that is presented in a systematic textbook style format that I am looking for. I would be very grateful for any suggestions as to where I might find any systematic overview that I might be able to use. Perhaps from teaching materials or something from Wikipedia or any other source that attempts this kind of broad systematic taxonomy.
c2687aac-3b86-4c39-a3af-907c6074b85b
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
(tentatively) Found 600+ Monosemantic Features in a Small LM Using Sparse Autoencoders Using a sparse autoencoder, I present evidence that the resulting decoder (aka "dictionary") learned 600+ features for Pythia-70M layer\_2's mid-MLP (after the GeLU), although I expect around 8k-16k features to be learnable. Dictionary Learning: Short Explanation ====================================== Good explanation [here](https://aizi.substack.com/p/51946f63-afdc-4626-b762-1828ccd55f1e) & original [here](https://www.lesswrong.com/posts/z6QQJbtpkEAX3Aojj/interim-research-report-taking-features-out-of-superposition), but in short: a good dictionary means that you could give me any input & I can reconstruct it using a linear combination of dictionary elements. For example, signals can be reconstructed as a linear combination of frequencies: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/triybfxzbfuaenqc47ga) In the same way, the neuron activations in a large language model (LLM) can be reconstructed as a linear combination of features. e.g.  neuron activations = 4\*([duplicate token] feature) + 7\*(bigram " check please" feature). Big Picture: If we learn all the atomic features that make up all model behavior, then we can pick & choose the features we want (e.g. honesty) To look at the autoencoder: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/azntlpd7dvoplc6wjgww) So for every neuron activation (ie a 2048-sized vector), the autoencoder is trained to encode a sparse set of feature activations/magnitudes (sparse as in only a few features "activate" ie have non-zero magnitudes), which are then multiplied by their respective feature vector (ie a row in the decoder/"dictionary") in order to reconstruct the original neuron activations. As an example, an input is " Let u be f(8). Let w", and the decomposed linear combination of features are: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/ioifa9b4xqjp7rhdmkik)Most of the features are zero, but there are two features ("w" & " Let [x/o/w/etc]") that activate highly to reconstruct the original signal. This is a real example.[" Let u be f(8). Let **w**"] = 4\*(letter "w" feature) + 7\*(bigram "Let [x/w/n/p/etc]" feature)  Note: The activation here is only for the last token " w" given the previous context. In general you get 2048 neuron activations for \*every\* token, but I'm just focusing on the last token in this example. For the post, it's important to understand that: 1. Features have both a feature vector (ie the 2048-sized vector that is a row in the decoder) & a magnitude (ie a real number calculated on an input-by-input basis). Please ask questions in the comments if this doesn't make sense, especially after reading the rest of the post. 2. I calculate Max cosine similarity (MCS) between the feature vectors in two separately trained dictionaries. So if feature0.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}  in dictionary 0 is "duplicate tokens", and feature98 in dictionary 1 has high cosine similarity, then I expect both to be representing "duplicate tokens" and for this to be a "real" feature [Intuition: there are many ways to be wrong & only one way to be right] Feature Case Study ================== Top-activating Examples for feature #52 --------------------------------------- I ran through 500k tokens & found the ones that activated each feature. In this case, I chose feature52 which had an MCS of ~0.99. We can then look at the datapoints that maximally activate this feature. (To be clear, I am running datapoints through Pythia-70M, grabbing the activations mid-way through at layer 2's MLP after the GeLU, & running that through the autoencoder, grabbing the feature magnitudes ie latent activations). ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/an76cebabpnqzqmvytml) Blue here means this feature is activating highly for that (token,context)-pair. The top line has activation 3.5 for the first " $" & 5 for the second. Note that it doesn't activate for closing $ and is generally sparse. Ablate Context -------------- Ablate the context one token at a time & see the effect on the feature magnitudes on last token. Red means the feature activated less when ablating that token.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/zaxmtmgribnqdlxb02pz) To be clear, I am literally removing the token & running it through the model again to see the change in feature activation at the last position. In the top line, removing the last " $" makes the new last token to be " all" which has an activation of 0, so the difference is 0-5 = -5, which is value assigned to the dark red value on " $" (removing " all" before it had no effect, so it's white which means 0-difference) Proposed Meaning: These detect $ for latex. Notably removing token )$ makes the final “ $” go from 5 to 3.5 activation. Similarly for “where/let/for”.  For reference: The darkest blue (ie “ =”) makes it go up by 0.08, so there really isn't much of an effect here. Ablate Feature Direction ------------------------ We can ablate the feature direction by subtracting the original neuron activation by the feature’s (direction\*magnitude), and see the effect on the model's output logits on the actual tokens.  To clarify, the dictionary features are supposed to reconstruct the neuron activation using a sparse linear combination of features; I am removing one of those feature directions. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/dak2f8vqrtarruwsv6x7) As an example, removing this feature direction means the model is worse at predicting the tokens in red & better at predicting the tokens in blue. Ablating this feature direction appears to only affect latex-related tokens, but the direction can both increase & decrease the log-prob. It will trivially not affect any tokens before the first feature activation (because I subtract by direction\*magnitude, and the magnitude is 0 there). Uniform Examples ---------------- Maybe we’re deluding ourselves about the feature because we’re just looking at top-activating examples. If this is \*truly\* a monosemantic feature, then the entire activation range should have a similar meaning. So I look at datapoints across the bins of activations (ie sample a feature with activation [1,2], another from [2,3], ...) ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/p30vki8bwonid4unhkq3) It does seem to be mainly latex. See the second to last line which barely activates (.14), and is a dollar sign ("$ 100 billion tariff"). Maybe the model is detecting both math words & typically money words. Let's check w/... Created Examples ---------------- ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/sc7tf11n0wjplgu1rm7u) Notably, including the word “sold” immediately shoots down the activation! Also, it seems that combining math words increases it.  To verify this, I’ll run the sentence “ for all $”, but prepend it w/ a token & see the effect on the feature activation. Instead of choosing a specific token, I can simply run ALL tokens in the vocab, printing the top-increasing & top-decreasing tokens.    ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/i63vea7klg8o58qaifxx) The most-increasing are definitely ending $ latex, but I’m not quite sure what all of them are. Like detecting an ending $ is most indicative? The most-decreasing are indeed more related to the money-version of $, and there’s only 97 of them! Checking with appending or prepending to the word “ tree”: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/wto012pilwbmreac6xox) Yep, this fits within the hypothesis. We can also just check the most activating tokens on its own ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/tkvndtxyzmfensmeptez)Beyond the first token, this doesn't fit within the hypothesis, though may be OOD. A check (for the future) would be to constrain by the model's next word prediction (and maybe do direct soft prompts). Comparing to the Neuron Basis ----------------------------- Does this feature net us anything over using the normal neuron basis? If this is only learning a monosemantic neuron, then that's pretty lame!  We can first check how many neurons activate above a threshold for the top-10 feature activating examples (ie a neuron must activate above threshold for all 10 examples)gh-MCS Features ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/zwvhqq3a32sm0ii9wmwq) Notably it's a lot of neurons for above 0, but then goes to 3 for 0.5. However, we don't really know the statistics of neuron activations. Maybe some neuron's entire range is very tiny? So, we can see how many neurons are above a threshold determined by that neuron's quantiles. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/zwoskok8hwvc0g0gwsph)Here, 60 neurons are activating in above their 80th quantile, but it's unclear where to draw the line still. Another way is to look at the feature vector associated w/ this feature. It is 2048 numbers representing how much it affects each neuron, so if it's 0, it doesn't affect a neuron at all. Additionally, I'll multiply by the max-activation of that feature to show the scale of the effect on the neurons. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/igmvnfuun5cdmycdkzaf) So a few datapoints to indicate somewhere between 3 & 80 neurons, maybe several hundred depending on how you interpret it. We can also check across the top-50 high MCS features for activations ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/ylt8hscwapqwepmsqqjo) As a contrast, I will also being showing an equivalent graph, but from a different dictionary that seems to have learned the identity (ie we're just looking at the neuron basis) ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/hzyyilyob2dfrpyfgr8k)Here, the vast majority of features correlate w/ only 1 neuron activation (at least above 0.3). Going through the images quickly for a "feature" here: Max activation: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/ocwpg5lmh08dgpyvuzzh)Ablating context: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/eksabvp48jv5kekesf8p)Logit Diff: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/bdzrbvozni5clawmvoqk)Uniform Examples: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/fd0nnam92uihdzxe2rge) I'm really unsure on what this feature could even represent to even come up w/ a testable hypothesis ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/ukkuvzq7x0flxptep5tq)![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/bve80mx402chkbnyphdv)Notably, it drops to 1 neuron after 0.8, so w/ a threshold of 0.8, so some evidence for the original dictionary feature representing ~60 neurons. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/rnxvsucynj8s0d4325qh) Here, there's only 1 feature weight that's > 0.04 (it's 2.9) So pretty strong evidence that the original dictionary is giving us a meaningful features that aren't just monosemantic neurons. [Note: One idea is to label the dataset w/ the feature vector e.g. saying this text is a latex $ and this one isn't. Then learn several k-sparse probes & show the range of k values that get you whatever percentage of separation] [Note2: There are also meaningful monosemantic neurons in the model, but I specifically chose a feature learned that represents a polysemantic neuron. The point here is: can our dictionary learn meaningful features that are linear combinations of neurons?] Range of MCS Features ===================== That was a case study: it could be cherry picked. So I went through the top 70 MCS features w/ a quick check (~2 min each). [Excel link](https://docs.google.com/spreadsheets/d/1p1Wu4vJ1fKYsMtjrXFboQpIOl_sKue-dgabFt2EYw0s/edit?usp=sharing) ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/ygg27gcteju8puzvmaay) ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/bo5eqjneoupqkaybo7om) But we’re using high MCS to say it’s good. What if low MCS is also good? ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/p3okv81r8naoh5t4gpa8) Some do look clearly like features! 1580 specifically activates more if you prepend a meaningful word & only affects the word right after "rom" (e.g. "berg"/"eters"), however, there’s a clear trend that lower MCS features appear more polysemantic. Most notable is that I couldn’t even check the majority of low-MCS features here because they were “dead”: there were < 10 activations, usually 0 for those features! To illustrate:  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/mtbla21qea4xsghbfllm) [Update: After training other dictionaries on more data, I've found more low-MCS features that seem meaningful, meaning one dictionary found features that a separate dictionary didn't find. Later, I will provide a better statistic of the result e.g. N% of low-MCS features seem meaningful to Logan] Failures ======== 1. Logit Lens: logit lens of the $ feature didn’t show any meaningful features. Neither did the first 10 highest MCS features ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqRqb7h6ZC48iDgfK/hmuug8z6g7xpozmjt1yd) 2. Max-ablation-diff: related, showing the max logit-diff when ablating the feature direction also showed nonsense tokens. Conclusion ========== It'd be huge if we could find decompositions of mid-MLP activations that faithfully reconstruct the original neuron activations & allows us to easily specify circuits we want (e.g. honesty, non-deception, etc). There's work to be done to clarify what metrics we care about (e.g. maybe simplicity, explainability, monosemanticity, can predict OOD, model-editing benchmarks) & comparing w/ existing methods (e.g. PCA, other loss functions, etc). What's currently most interesting imo: 1. Learning all the features for a layer w/ low-reconstruction loss (concretely: low perplexity diff when replacing w/ reconstruction) 2. Show/Falsify feature connections across layers (e.g. layer 1's duplicate word feature is commonly used in layer 2's features X, Y, & Z) 3. More rigorously show what percentage of low-MCS features are[n't] meaningful in the other dictionary I trained. 4. Training dictionaries for much, much larger models to see if more interesting features pop up (e.g. personality traits) If you'd like to continue or replicate results: * [main code-base](https://github.com/HoagyC/sparse_coding) * [Notebook for this post](https://github.com/loganriggs/sparse_coding/blob/main/interpreting_sparse_dictionaries.ipynb) * [Autoencoder](https://huggingface.co/Elriggs/autoencoder_layer_2_pythia70M_5_epochs) Feel free to reach out on our discord project channel: * [Discord Link (EleutherAI)](https://discord.gg/eleutherai) * [Channel Link](https://discord.com/channels/729741769192767510/1115338262626709567) (Or go to the #community-channels channel & we're the sparse coding project) *Special thanks to Wes Gurnee for advice (& picking out this surprisingly interesting feature!), Neel Nanda for pushing for being more exact & rigorous w/ understanding this feature, Hoagy for co-developing the original dictionary learning codebase, and Nora Belrose, Aiden, & Robert for useful discussions on the results, & EleutherAI for hosting our discussions.*
40c95de9-331d-408c-bb75-c8c3a84260a5
trentmkelly/LessWrong-43k
LessWrong
Announcing TAIS 2024 AI Safety Tokyo is hosting TAIS 2024, a Technical AI Safety Conference. The conference will take place in Tokyo, Japan on April 5th, 2024. Details about the event can be found here. The goal of this conference is to bring together specialists in the field of AI and technical safety to share their research and benefit from each others’ expertise. We seek to launch this forum for academics, researchers and professionals who are doing technical work in these or adjacent fields: * Mechanistic interpretability * Scalable oversight * Causal incentive analysis * Agent foundations * Singular learning theory * Argumentation * Emergent agentic phenomena * Thermodynamic / statistical-mechanical analyses of computational systems TAIS 2024, being hosted in Tokyo, will allow access to Japanese research and specialists (singular learning theory, collective / emergent behaviour, artificial life and consciousness), who are often overlooked outside of Japan. We want to help people connect to the Japanese well of information, and make connections with other individuals to share ideas and leap forward into greater collaborative understanding.  We want our attendees to involve themselves in cutting-edge conversations throughout the conference with networking opportunities with the brightest minds in AI Safety.  We will announce the full schedule for the conference in the coming months. If you’re interested in presenting your research, please answer our call for presentations. This event is free, but limited to 150 people, so if you wish to join please sign up here. TAIS 2024 is sponsored by Noeon Research.
556a6d86-1100-43b6-8640-07550e376685
trentmkelly/LessWrong-43k
LessWrong
A Poem Is All You Need: Jailbreaking ChatGPT, Meta & More > This project report was created in September 2024 as part of the BlueDot AI Safety Fundamentals Course, with the guidance of my facilitator, Alexandra Abbas. Work on this project originated as part of an ideation at a Apart Research hackathon.   This report dives into APIAYN (A Poem Is All You Need), a simple jailbreak that doesn’t employ direct deception or coded language. It is a variant of an approach known as ‘gradually escalating violation’, covered briefly in the Llama 3 paper, published by Meta. It also combines another type of jailbreak, involving the misspellings of restricted words. The guide to implementing this jailbreak is mentioned below in its own section. Introduction I was messing around with the free Meta AI chatbot on WhatsApp, powered by Llama 3.1, testing out different ideas I had seen to jailbreak LLMs. But I wasn’t willing to put in the effort to use the more complex unintuitive ones I had seen online that used up a lot of deception / coded words / strange strings of symbols. As I played around, this led me to discover a relatively simple jailbreak, that I wouldn’t have predicted working, that required no coded language, or layers of deception, or strange symbols, just regular English (with a little misspelling). I would later discover I had stumbled upon a variant of a jailbreak already discovered by Meta researchers in their Llama 3 paper,  (This can be found on page 48 of the paper.) This jailbreak led me to elicit extreme profanity, sexually explicit content, info hazardous material (such as poisons and where to get them) from various LLMs such as the free tier of ChatGPT, the Meta AI LLM in the WhatsApp UI, and a few others. Considering the limited amount of effort put in to train free not-rate-limited LLMs well, it’s also highly probable that this jailbreak will work on other openly available LLMs as well. Below are a few examples, before I delve into the jailbreak. The LLMs this works on (The screenshots below contain ex
75e68359-6fc1-48e1-83f8-5c0464a15105
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Resist the Happy Death Spiral Today's post, Resist the Happy Death Spiral was originally published on 04 December 2007. A summary (taken from the LW wiki):   > You can avoid a Happy Death Spiral by (1) splitting the Great Idea into parts (2) treating every additional detail as burdensome (3) thinking about the specifics of the causal chain instead of the good or bad feelings (4) not rehearsing evidence (5) not adding happiness from claims that "you can't prove are wrong"; but not by (6) refusing to admire anything too much (7) conducting a biased search for negative points until you feel unhappy again (8) forcibly shoving an idea into a safe box. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Affective Death Spirals, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
3a753f7d-cf19-4447-9ba5-0751bf94444e
trentmkelly/LessWrong-43k
LessWrong
Meetup : Madison Discussion article for the meetup : Madison WHEN: 27 September 2011 06:00:00PM (-0500) WHERE: 2100 Winnebago St, Madison WI Another Madison meetup! This time, we'll do some probability calibration exercises. (These gave me some fairly surprising results; perhaps you'll learn something too!) Barring real surprises, we'll hold this at Sector67. If you're in or near Madison, Wisconsin, you should probably join the LessWrong Madison mailing list. Discussion article for the meetup : Madison
dcd0ca99-c3c5-40ba-b70a-3cead2e1453f
StampyAI/alignment-research-dataset/arbital
Arbital
concat (function) The string concatenation function `concat` puts two [strings](https://arbital.com/p/3jr) together, i.e., `concat("one","two")="onetwo"`.
a90c2638-6ca3-4128-a9db-9330770fe0fd
trentmkelly/LessWrong-43k
LessWrong
If you've learned from the best, you're doing it wrong i - Working out Say you've read up on the studies about exercise and you've decided to dedicate 30 to 90 minute of every day purely to improve your body. You like the CV benefits, but also, you agree that strength, stability, postural awareness and whatnot play an important role in optimal functioning, even if that function is sitting at a computer writing code. How do you proceed? Well, there are many ways, but the worst possible way would be to look for someone that looks to be very physically fit, is scoring amazingly well in sporting competitions and is known to perform feats of strength, endurance and agility. Why? Envision a few examples of this type of person. I'm envisioning Lance Armstrong, the guy who played The Mountain in Game of Thrones, and Royce Gracie, the guy that won the first UFC (the one that was actually fun to watch). I've no idea how they would act as coaches, but their path to success certainly involved training for 6 to 14 hours a day, every day of the week, for more than a dozen years. It presumably involved a lot of weird meal plans and sleep plans and investments into a bunch of expensive devices. It certainly focused on minimizing accidents, but in the now, not in the 50 years from now. On top of that, it probably included a mix of exogenous HGH, IGF, Testosterone, EPO and many other compounds I wouldn't be able to name. None of those things are bad if your goal is along the line of: > I want to become the strongest/fastest/powerfulest/sportiest/bestest But they are horrible ideas if your goal is exercising as a health-enhancing addition to a lifestyle focused on other things. ii - On Nobel laureates being mediocre teachers There's a classic problem that you've heard of before along the lines of: > Yeah, he's an amazing researcher, but a horrible teacher. Go figure, it's probably the very mental quirks that make him so smart that give him a hard time with explaining the field to anyone else. While the thing I'm trying to expa
591f899e-d4ca-4513-a758-4ae325b8130a
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Continuation of: The Power of Positivist Thinking Consider this statement: The ultra-rich, who control the majority of our planet's wealth, spend their time at cocktail parties and salons while millions of decent hard-working people starve. A soft positivist would be quite happy with this proposition. If we define "the ultra-rich" as, say, the richest two percent of people, then a quick look at the economic data shows they do control the majority of our planet's wealth. Checking up on the guest lists for cocktail parties and customer data for salons, we find that these two activities are indeed disproportionately enjoyed by the rich, so that part of the statement also seems true enough. And as anyone who's been to India or Africa knows, millions of decent hard-working people do starve, and there's no particular reason to think this isn't happening at the same time as some of these rich people attend their cocktail parties. The positivist scribbles some quick calculations on the back of a napkin and certifies the statement as TRUE. She hands it the Official Positivist Seal of Approval and moves on to her next task.But the truth isn't always enough. Whoever's making this statement has a much deeper agenda than a simple observation on the distribution of wealth and preferred recreational activities of the upper class, one that the reduction doesn't capture. Philosophers like to speak of the denotation and the connotation of a word. Denotations (not to be confused with dennettations, which are much more fun) are simple and reducible. To capture the denotation of "old", we might reduce it to something testable like "over 65". Is Methusaleh old? He's over 65, so yes, he is. End of story.Connotations0 are whatever's left of a word when you subtract the denotation. Is Methusaleh old? How dare you use that word! He's a "senior citizen!" He's "elderly!" He's "in his golden years." Each of these may share the same denotation as "old", but the connotation is quite different.There is, oddly enough, a children's game about connotations and denotations1. It goes something like this: I am intelligent. You are clever. He's an egghead.I am proud. You are arrogant. He's full of himself.I have perseverance. You are stubborn. He is pig-headed.I am patriotic. You're a nationalist. He is jingoistic. Politicians like this game too. Their version goes: I care about the poor. You are pro-welfare. He's a bleeding-heart.I'll protect national security. You'll expand the military. He's a warmonger.I'll slash red tape. You'll decrease bureaucracy. He'll destroy safeguards.I am eloquent. You're a good speaker. He's a demagogue.I support free health care. You support national health care. He supports socialized health care. All three statements in a sentence have the same denotation, but very different connotations. The Connotation Game would probably be good for after-hours parties at the Rationality Dojo2, playing on and on until all three statements in a trio have mentally collapsed together.Let's return to our original statement: "The ultra-rich, who control the majority of our planet's wealth, spend their time at cocktail parties and salons while millions of decent hard-working people starve." The denotation is a certain (true) statement about distribution of wealth and social activities of the rich. The connotation is hard to say exactly, but it's something about how the rich are evil and capitalism is unjust.There is a serious risk here, and that is to start using this statement to build your belief system. Yesterday, I suggested that saying "Islam is a religion of peace" is meaningless but affects you anyway. Place an overly large amount of importance on the "ultra-rich" statement, and it can play backup to any other communist beliefs you hear, even though it's trivially true and everyone from Milton Friedman on down agrees with it. The associated Defense Against The Dark Arts technique is to think like a positivist, so that this statement and its reduced version sound equivalent3....which works fine, until you get in an argument. Most capitalists I hear encounter this statement will flounder around a bit. Maybe they'll try to disprove it by saying something very questionable, like "If people in India are starving, then they're just not working hard enough!" or "All rich people deserve their wealth!4 "Let us take a moment to feel some sympathy for them. The statement sounds like a devastating blow against capitalism, but the capitalists cannot shoot it down because it's technically correct. They are forced to either resort to peddling falsehoods of the type described above, or to sink to the same level with replies like "That sounds like the sort of thing Stalin would say!" - which is, of course, denotatively true.What would I do in their position? I would stand tall and say "Your statement is technically true, but I disagree with the connotations. If you state them explicitly, I will explain why I think they are wrong."YSITTBIDWTCIYSTEIWEWITTAW is a little long for an acronym, but ADBOC for "Agree Denotationally But Object Connotationally could work. [EDIT: Changed acronym to better suggestion by badger] Footnotes 0: Anatoly Vorobey says in the comments that I'm using the word connotation too broadly. He suggests "subtext". 1: I feel like I might have seen this game on Overcoming Bias before, but I can't find it there. If I did, apologies to the original poster. 2: Comment with any other good ones you know. 3: Playing the Connotation Game a lot might also give you partial immunity to this. 4: This is a great example of a hotly-debated statement that is desperately in need of reduction." The excerpt provided comes from a user post on LessWrong. Can you explain what this text is saying as if I've never finished my first 2 years of college? Sure, this text is discussing the difference between the surface meaning of a statement (denotation) and the deeper, implied meaning (connotation). The author uses the example of a statement about the ultra-rich. On the surface, the statement is factual - the ultra-rich do control most of the world's wealth and often enjoy luxury activities while others in the world are starving. This is the denotation of the statement. However, the statement also implies something deeper - that the rich are evil and that capitalism is unjust. This is the connotation of the statement. The author argues that it's important to recognize the difference between these two levels of meaning. If we only focus on the surface meaning, we might end up using these types of statements to support our beliefs without really examining them. In a debate, the author suggests acknowledging the factual, surface level of the statement but challenging the deeper implication. This way, we can avoid arguing against facts and instead focus on the real issue at hand - the implied meaning. The author also talks about a game where you describe the same trait in three different ways, each with a different connotation. For example, calling someone "intelligent" has a positive connotation, calling them "clever" is more neutral, and calling them an "egghead" is negative. Even though all three words denote the same trait, they each imply something different. Overall, the text is about the importance of recognizing and understanding the deeper meanings behind statements, especially when using them to form or support our beliefs. In your explanation, you mentioned the importance of recognizing the deeper meanings behind statements. Could you further elaborate on how this understanding can impact our communication and decision-making processes? Certainly. Recognizing the deeper meanings, or connotations, in statements can significantly impact our communication and decision-making in several ways: 1. **Enhanced Understanding**: Understanding the connotations of statements can help us better understand the speaker's intentions, perspectives, and emotions. For example, if someone says, "She's persistent," it could be a compliment about her determination. But if they say, "She's stubborn," it carries a negative connotation, implying she's unreasonably inflexible. Both statements could refer to the same behavior, but the connotations give us insight into the speaker's viewpoint. 2. **Improved Communication**: Being aware of connotations can help us choose our words more carefully to convey our intended meaning. If we want to compliment someone on their determination, we'd choose a word with a positive connotation like "persistent" rather than a word with a negative connotation like "stubborn." 3. **Informed Decision-Making**: When we're aware of the connotations in the information we receive, we can make better-informed decisions. For example, a statement like "The ultra-rich spend their time at cocktail parties while millions starve" might sway us against wealth accumulation if we only consider its surface meaning. But if we recognize the negative connotations attached to the ultra-rich and question them, we might decide that wealth can be used responsibly and decide to pursue it. 4. **Conflict Resolution**: Recognizing when a disagreement is about the connotations of a statement rather than its denotation can help resolve conflicts. By focusing on the underlying values and assumptions (the connotations) rather than arguing about facts (the denotation), we can address the real issues at hand. 5. **Critical Thinking**: Understanding connotations can help us think more critically about the information we consume. We can question the underlying assumptions and biases in statements and avoid accepting them at face value. In essence, understanding the connotations in statements can enhance our comprehension, communication, decision-making, conflict resolution, and critical thinking skills. Reflecting on the implications of understanding connotations, can you identify any potential drawbacks or limitations to this approach in communication and decision-making? Absolutely, while understanding connotations can be beneficial, there are also potential drawbacks and limitations: 1. **Misinterpretation**: Connotations can be subjective and vary greatly among different cultures, societies, or individuals. A word or phrase that has a positive connotation for one person might have a negative one for another. This can lead to misunderstandings and misinterpretations. 2. **Overthinking**: Sometimes, a statement is just a statement, with no deeper meaning intended. Overanalyzing for connotations can lead to unnecessary complications or misunderstandings. 3. **Bias**: Connotations can be influenced by personal biases. Recognizing connotations can sometimes mean we're reading into our own assumptions or prejudices, rather than what the speaker actually intended. 4. **Manipulation**: Understanding of connotations can be used manipulatively, with individuals choosing words specifically for their connotations to subtly influence others' opinions or decisions. 5. **Difficulty in Communication**: While understanding connotations can improve communication, it can also make it more challenging. People might feel they need to carefully consider their words to control the connotations, which can slow down communication or make it less spontaneous. 6. **Paralysis in Decision-Making**: In decision-making, overemphasis on connotations can lead to paralysis by analysis. If you're constantly trying to read between the lines, it can be difficult to make a decision based on the information at hand. In summary, while understanding connotations can enhance communication and decision-making, it's important to be aware of these potential pitfalls and not to rely solely on this approach. It's a tool in the toolbox, but not the only one. How can one effectively navigate the potential pitfalls of overemphasis on connotations in communication and decision-making? Navigating the potential pitfalls of overemphasis on connotations requires a balanced approach. Here are some strategies: 1. **Clarity in Communication**: If you're unsure about the connotations of a statement, ask for clarification. Open dialogue can help avoid misunderstandings and ensure everyone is on the same page. 2. **Cultural Awareness**: Understand that connotations can vary greatly among different cultures or societies. Being aware of cultural differences can help avoid misinterpretations. 3. **Objectivity**: Try to separate your personal biases from the connotations you perceive in statements. It's important to remember that your interpretation might not be what the speaker intended. 4. **Straightforwardness**: If you're the one communicating, being straightforward and clear can help avoid any unintended connotations. It's better to be direct and clear than to leave room for misinterpretation. 5. **Critical Thinking**: Use critical thinking to evaluate the information you receive. Don't accept statements at face value, but also don't overanalyze to the point of paralysis. Find a balance between accepting information and questioning it. 6. **Emotional Intelligence**: Understanding and managing your own emotions, as well as recognizing and respecting the emotions of others, can help navigate the emotional nuances that often accompany connotations. 7. **Mindfulness**: Be mindful of the potential for manipulation through connotations. Stay alert to the possibility that someone might be using connotations to subtly influence your opinions or decisions. By applying these strategies, you can effectively navigate the potential pitfalls of overemphasis on connotations in communication and decision-making.
96347e72-6a30-473a-8760-f0beb3769a98
trentmkelly/LessWrong-43k
LessWrong
Aligned monetization of modern dating In a sentence: Bid/donate/pay what you want upfront, held in escrow upon a successful outcome confirmed by all involved individuals. At payment time, you can change your bid/donation/payment based on your more-fully-informed evaluation of the dating app’s value. I don’t have the time or particular interest, so someone else should go build it and see how it goes. ---------------------------------------- The current generation of dating apps emerged during the peak of B2B SaaS. So, unsurprisingly, they monetize in the same familiar way: gating features (visibility, in particular) behind tiered monthly subscriptions, with heavy discounts for longer commitments. B2B SaaS only understands retention, engagement, and recurring revenue. But the key marker that a dating app has “worked” is when you churns. The business thrives only when people keep dating forever, but no one wants to do that. The business also wins when you dejectedly return. No one wants that either. Everyone on a dating app wants to exit permanently, as soon as possible.   What is this model exactly?: * During the onboarding flow, you place an unrestricted bid, answering the question, “what is [successful outcome X] worth to you?” During the offboarding flow, when you feel that [successful outcome X] has happened, you can revisit the bid you placed. As you leave, you pay for the value the app gave to you. Grasping at this number is an arational pursuit. You have no good way of actually determining this worth relative to you-right-now, but you still have to give it a shot. * The app only receives money when everyone involved agrees and confirms that the app did its job. Monetizing in this way aligns its incentives with ours. The app profits if and only if we found it was valuable. It bears the risk of actually creating high-quality matches.   What is [successful outcome X]? The “billable event” should be at the boundary where the app starts to overstay its welcome. So my guess is a “talking stage”
f1b6958a-3da3-4738-ad57-4eff48161f1d
trentmkelly/LessWrong-43k
LessWrong
[link] "Paper strips" memory technique I was skimming reddit and someone named Yakari posted in response to “what are your best studying techniques?” They described a studying method where they overlay strips of paper vertically on top of their notes as they study them, periodically adding more strips to obscure more and more of the notes. I haven’t tried it but it seems pretty genius — I expect it would be effective and easy to implement, and I hadn’t heard of it before so figured you folks might not have either. https://www.reddit.com/r/AskReddit/comments/9lskg2/serious_straight_a_students_of_reddit_what_is/e799ia2/
9a9a0a95-4d11-42d5-81cb-08cf89023503
trentmkelly/LessWrong-43k
LessWrong
Consequentialism is for making decisions (Thought I'd try posting here some various rationality "quick thoughts" that I feel like I haven't seen expressed overtly enough -- sometimes I've already written about these elsewhere, but thought would be good to get them down here. Adapted from this old blog post of mine but not claiming any originality here. Edit: Here is an earlier similar post by Neil.) There's an argument I've seen a number of times on the internet about the failings of consequentialism as a moral system. (Often this is phrased in terms of utilitarianism, since the term "utilitarianism" is better known than the more general term "consequentialism", but this is pretty clearly about consequentialism in general.) The argument goes roughly like so: Consequentialism tells us that the thing to do is the thing with the best results. But, this is a ridiculously high standard, that nobody can actually live up to. Thus, consequentialism tells us that everybody is bad, and we should all condemn everybody and all feel guilty. (There are a number of variants.) This argument is based on a conflation. It assumes that there's one single thing, "morality", and this one thing produces not only answers to "what should you do", but also, what should we condemn, what should we punish, what should one feel guilty about, and other similar questions; and that, moreover, the answers to these questions are identical (or opposites, as appropriate; here the first one would be the opposite of the others). (Yes, you can add wrinkles like supererogatory acts and such, but that's not particularly relevant to the argument so I'll ignore it.) But actually, these questions are not identical! The result is that the above reasoning is incorrect. Consequentialism answers one question -- what to do. It is for making decisions. It does not, directly, tell what you should feel guilty about; only what you should do. But notice that word "directly" there -- since these other questions are also decisions, consequentialism can be u
8a16af52-b3e8-4087-ae7b-fb3e0fa583b8
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The application of the secretary problem to real life dating The following problem is best when not described by me: <https://en.wikipedia.org/wiki/Secretary_problem> Although there are many variations, the basic problem can be stated as follows:   There is a single secretarial position to fill. There are n applicants for the position, and the value of n is known. The applicants, if seen altogether, can be ranked from best to worst unambiguously. The applicants are interviewed sequentially in random order, with each order being equally likely. Immediately after an interview, the interviewed applicant is either accepted or rejected, and the decision is irrevocable. The decision to accept or reject an applicant can be based only on the relative ranks of the applicants interviewed so far. The objective of the general solution is to have the highest probability of selecting the best applicant of the whole group. This is the same as maximizing the expected payoff, with payoff defined to be one for the best applicant and zero otherwise.   --- Application -----------   After reading that you can probably see the application to real life.  There are a series of bad and good assumptions following, some are fair, some are not going to be representative of you.  I am going to try to name them all as I go **so that you can adapt them with better ones for yourself**.  Assuming that ***you plan to have children*** and you will probably be doing so like billions of humans have done so far in a ***monogamous relationship while married*** (the entire set of assumptions does not break down for poly relationships or relationship-anarchy, but it gets more complicated).  These assumptions help us populate the Secretary problem with numbers in relation to dating for the purpose of children.   If you assume that a biological female's clock ends at 40. (in that its hard and not healthy for the baby if you try to have a kid past that age), that is effectively the end of the pure and simple biological purpose of relationships. (environment, IVF and adoption aside for a moment).  (yes there are a few more years on that)   For the purpose of this exercise – as a guy – you can add a few years for the potential age gap you would tolerate. (i.e. my parents are 7 years apart, but that seems like a big understanding and maturity gap – they don't even like the same music), I personally expect I could tolerate an age gap of 4-5 years. If you make the assumption that you start your dating life around the ages of 16-18. that gives you about [40-18=22]  22-24 (+5 for me as a male), years of expected dating potential time. If you estimate the number of kids you want to have, and count either: 3 years for each kid OR 2 years for each kid (+1 kid – AKA 2 years) (Twins will throw this number off, but estimate that they take longer to recover from, or more time raising them to manageable age before you have time to have another kid) My worked example is myself – as a child of 3, with two siblings of my own I am going to plan to have 3 children. Or 8-9 years of child-having time. If we subtract that from the number above we end up with 11-16 (16-21 for me being a male) years of dating time. Also if you happen to know someone with a number of siblings (or children) and a family dynamic that you like; then you should consider that number of children for yourself. Remember that as a grown-up you are probably travelling through the world with your siblings beside you.  Which can be beneficial (or detrimental) as well, I would be using the known working model of yourself or the people around you to try to predict whether you will benefit or be at a disadvantage by having siblings.  As they say; You can't pick your family - for better and worse.  You can pick your friends, if you want them to be as close as a default family - that connection goes both ways - it is possible to cultivate friends that are closer than some families.  However you choose to live your life is up to you. Assume that **once you find the right person** - getting married (the process of organising a wedding from the day you have the engagement rings on fingers); and falling pregnant (successfully starting a viable pregnancy) takes at least a year. Maybe two depending on how long you want to be "we just got married and we aren't having kids just yet". It looks like 9-15 (15-20 for male adjusted) years of dating. With my 9-15 years; I estimate a good relationship of working out whether I want to marry someone, is between 6 months and 2 years, (considering as a guy I will probably be proposing and putting an engagement ring on someone's finger - I get higher say about how long this might take than my significant other does.), (This is about the time it takes to evaluate whether you should put the ring on someone's finger).  For a total of 4 serious relationships on the low and long end and 30 serious relationships on the upper end. (7-40 male adjusted relationships) Of course that's not how real life works. Some relationships will be longer and some will be shorter. I am fairly confident that all my relationships will fall around those numbers. I have a lucky circumstance; I have already had a few serious relationships (**substitute your own numbers in here**).  With my existing relationships I can estimate how long I usually spend in a relationship. (2year + 6 year + 2month + 2month /4 = 2.1 years). Which is to say that I probably have a maximum and total of around 7-15 relationships before I gotta stop expecting to have kids, or start compromising on having 3 kids.     ---   **A solution to the secretary equation** ---------------------------------------- A known solution that gives you the best possible candidate the most of the time is to try out 1/e candidates (or roughly 36%), then choose the next candidate that is better than the existing candidates. For my numbers that means to **go through 3-7 relationships and then choose the next relationship that is better than all the ones before**.     I don't quite like that.  It depends on how big your set is; as to what the chance of you having the best candidate in the first 1/e trials and then sticking it out till the last candidate, and settling on them.  (this strategy has a ((1/n)\*(1/e)) chance of just giving you the last person in the set - which is another opportunity cost risk - what if they are rubbish? Compromise on the age gap, the number of kids or the partners quality...)  If the set is 7, the chance that the best candidate is in the first 1/e is 5.26% (if the set is 15 - the chance is much lower at 2.45%).     Opportunity cost ---------------- Each further relationship you have might be costing you another 2 years to get further out of touch with the next generation (kids these days!)  I tend to think about how old I will be when my kids are 15-20 am I growing rapidly out of touch with the next younger generation?  Two years is a very big opportunity spend - another 2 years could see you successfully running a startup and achieving lifelong stability at the cost of the opportunity to have another kid.  I don't say this to crush you with fear of inaction; but it should factor in along with other details of your situation.   A solution to the risk of having the best candidate in your test phase; or to the risk of lost opportunity - is to lower the bar; instead of choosing the next candidate that is better than all the other candidates; choose the next candidate that is better than 90% of the candidates so far.  Incidentally this probably happens in real life quite often.  In a stroke of, "you'll do"...   Where it breaks down --------------------   Real life is more complicated than that. I would like to think that subsequent relationships that I get into will already not suffer the stupid mistakes of the last ones; As well as the potential opportunity cost of exploration. The more time you spend looking for different partners – you might lose your early soul mate, or might waste time looking for a better one when you can follow a "good enough" policy. No one likes to know they are "good enough", but we do race the clock in our lifetimes. Life is what happens when you are busy making plans.   As someone with experience will know - we probably test and rule out bad partners in a single conversation, where we don't even get so far as a date.  Or don't last more than a week. (I. E the experience set is growing through various means).   People have a tendency to overrate the quality of a relationship while they are in it, versus the ones that already failed.   **Did I do something wrong?** ----------------------------- “I got married early - did I do something wrong (or irrational)?” No.  equations are not real life.  It might have been nice to have the equation, but you obviously didn't need it.  Also this equation assumes a monogamous relationship.  In real life people have overlapping relationships, you can date a few people and you can be poly. These are all factors that can change the simple assumptions of the equation.    Where does the equation stop working? ------------------------------------- Real life is hard.  It doesn't fall neatly into line, it’s complicated, it’s ugly, it’s rough and smooth and clunky.  But people still get by.  Don’t be afraid to break the rule.  Disclaimer: If this equation is the only thing you are using to evaluate a relationship - it’s not going to go very well for you.  I consider this and many other techniques as part of my toolbox for evaluating decisions. Should I break up with my partner? ---------------------------------- What? no!  Following an equation is not a good reason to live your life.   Does your partner make you miserable?  Then yes you should break up.   Do you feel like they are not ready to have kids yet and you want to settle down?  Tough call.  Even if they were agents also doing the equation; An equation is not real life.  Go by your brain; go by your gut.  Don’t go by just one equation. *Expect another post soon about reasonable considerations that should be made when evaluating relationships.* The given problem makes the assumption that you are able to evaluate partners in the sense that the secretary problem expects.  Humans are not all strategic and can’t really do that.  This is why the world is not going to perfectly follow this equation.  Life is complicated; there are several metrics that make a good partner and they don’t always trade off between one another.   ---------- Meta: writing time - 3 hours over a week; 5+ conversations with people about the idea, bothering a handful of programmers and mathematicians for commentary on my thoughts, and generally a whole bunch of fun talking about it.  This post was started on the slack channel when someone asked a related question.   [My table of contents](/r/discussion/lw/mp2/my_future_posts_a_table_of_contents/) for other posts in my series.   Let me know if this post was helpful or if it worked for you or why not.
50f477d0-abc9-4f7d-90e1-8dc527d165f2
trentmkelly/LessWrong-43k
LessWrong
Meetup : Ottawa Weekly Meetup Discussion article for the meetup : Ottawa Weekly Meetup WHEN: 12 September 2011 07:30:00PM (-0400) WHERE: Pub Italia: 434 Preston St, Ottawa, ON Type: Discussion & Skill training Date: Monday September 12, 7:30pm until at least 9:00pm. Venue: Pub Italia (likely in a booth in the abbey - back of the pub) (We've settled on Monday evenings as the best time for most people, so we'll try this as the standard time and place for a while. Proposal for a more cost-effective or convenient location are eagerly solicited.) Skill: Mind mapping (I'll give an overview of mind maps and several potential applications, particularly as a tool for granularizing skills) Discussion post: Reflections on rationality a year out (A nice post on the question of what one should expect to gain as a result of actively participating in a rationalist community.) Discussion article for the meetup : Ottawa Weekly Meetup
4519bdef-ef1b-4017-9d87-7b2845f8db30
trentmkelly/LessWrong-43k
LessWrong
Selective social policing Suppose you really want to punish people who were born on a Tuesday, because you have an irrational loathing of them. Happily you rule the country. But sadly, you only rule it because the voting populace thinks you are kind and just and not at all vindictive and arbitrary. What do you do? One well known solution is to ban stealing IP or jaywalking and then only have the police resources to deal with, er, about a seventh of cases. I don’t know how often selective policing happens with actual police forces, but my impression is that it is not the main thing going on there. However I sometimes wonder if it is often the main thing going on in amateur social policing. By amateur social policing, I mean for instance a group deciding that Bob is too much of a leach, and shouldn’t be invited to things. Or smaller status fines distributed via private judgmental gossip, such as ‘Eh, I don’t really like Mary. She is always trying to draw attention to herself.’ or ’I can’t believe I dated him. He makes every conflict into an opportunity to talk at length about his own stupid insecurities, and it’s so boring.’ I claim that in many cases if each person enjoyed having Bob around, his apparently being a leach wouldn’t seem like an urgent priority to avoid. And if the speaker got on with Mary, her sometimes attention-seeking behavior wouldn’t be a deal breaker. He might instead feel a little unhappy about the situation on her behalf, and wonder in a friendly way how to stop her embarrassing herself again. And if a woman said she had found a fantastic partner who was a serious candidate as soulmate, and her friend said that she actually knows him, and must warn her: if they ever get in a  conflict, he will talk too much about his insecurities(!), this would seem like a laughably meek warning. Yet it feels like a fine complaint at the end. I suspect often in such cases, real invisible reasons behind the scenes drive demand for criticism, and then any of the numerous blatantly obvio
b09b5bd1-1f1e-492b-9fd5-a50986768c48
trentmkelly/LessWrong-43k
LessWrong
Survival in the immoral maze of college My composition teacher in college told me that in some pottery schools, the teacher holds up your pot, examines it, comments on it, and then smashes it on the floor. They do this for your first 100 pots. In that spirit, this post's epistemic status is SMASH THIS POT. As an older student, facing a long educational track, I'm interested in developing the right attitude toward schoolwork. By attitude, I mean a language to describe my problems and an intuition for what my goals should be and how to prioritize to achieve them. I'd like this to be generally applicable, correct, and user-friendly. I'm going to write these guidelines with confidence, even though they're just my opinions. Learning and credentials First, some concept handles. Practical learning is knowledge and skills for which a) the learner has high confidence that they'll contribute directly to their tangible success, and b) are acquired in a timely and efficient manner to be actually used on the job. Central examples: on-the-job training on equipment you'll be operating, learning to read, moving to France and enrolling in an intensive course in French. Scaffolding is knowledge and skills for which a) the learner has high confidence they'll make it so much easier to do practical learning that it's more efficient to build the scaffolding first, and b) it's acquired in a timely and efficient manner to do subsequent practical learning. Central examples: learning Python as an introductory language for someone planning on a career as a programmer, learning how to use Anki for someone whose job entails learning a lot of facts, building healthy life habits and time management skills. Very little of what you learn at the undergraduate level is practical learning by this strict definition. And most course work is not scaffolding either. Credentialism is knowledge and skills for which the learner has high confidence they'll contribute directly to earning the credentials they need to level up. That typically
da2b6087-0c29-4326-b0bd-ce4576500e32
trentmkelly/LessWrong-43k
LessWrong
Choosing battles (on the Internet) A couple years ago, I noticed that I'd been holding onto a mindset that was counterproductive, and I've often seen this same mindset in some of the online communities I visit. I like talking with people who believe in the truth-seeking power of good-faith disagreements, but some of these people have a tendency to overestimate the value of argument, which causes them to engage in conversations they shouldn't. And I think I have good heuristics for avoiding that without giving up on the project of rational discourse entirely. The warm fuzzy hope of good-faith conversations There's this optimistic belief that careful, rational arguments, even between strongly opposed people, will always lead to truth. How could they not? There is only one system of logic, and only one reality from which to gather evidence. And let me be clear that I've benefited tremendously from engaging with people who think like this. They're basically right—good-faith arguments help you weed out your false beliefs and sharpen your correct ones, and having correct beliefs is good and advantageous. But often with that optimism comes a fear that if you ever ignore an argument, you're missing an opportunity to learn the truth. If I say reality is X, and you say it's Y, and I refuse to argue with you for whatever reason, then I almost feel like I'm daring reality to be Y, leaving me helplessly ignorant about it for the rest of my life—or worse, until I get into a situation where knowing X vs Y really matters, and I make the wrong decision. And because that notion scares me, I stay engaged in our argument no mater what. Failure modes This is why I sometimes see well-meaning writers/commenters repeatedly typing out carefully worded paragraphs to their opponents, who are obvious trolls, or who will ignore most points and twist what remains, or who don't stay on topic, or who can't seem to make their point more than once without changing it, or who just aren't demonstrating enough intelligence to warr
2f13aee1-2b28-4359-8c0c-070b0cdbe997
trentmkelly/LessWrong-43k
LessWrong
Why did the Ming treasure voyages end in 1433? When I read sources like Wikipedia on the Ming treasure voyages there doesn't seem to be a clear conclusion about why they ended. Did anybody on LessWrong do deeper research and came to a conclusion?
b36ecb45-f35a-47d7-a07e-a947f0cbda5e
trentmkelly/LessWrong-43k
LessWrong
Lifelong investments I love living in Boston, and find the benefits go up each year. The longer I've been here, the more people I know, and the more integrated I am into my local communities. Friends move here because they like the community we've built and want to be closer. The house I've slowly been renovating gets closer and closer to what I want. I get to know the area better, and now that I have kids I know lots of fun places to take them. I love being married to Julia, and the life we've been building together continues to get richer. She's by far my favorite person to talk to: with so much conversation over the years that we have an enormous amount of shared context and I'm so easily understood. As we continue to get to know each other better, we're better at helping each other and being the kind of partner the other needs. We can divide things up, making complementary investments in skills and responsibilities for our joint life. I love my kids, and they just keep getting better. At four and six they have such deep personalities, interests, and ways of looking at the world. I love playing with them, showing them things, teaching them things, seeing how they figure things out. I love my friends. So many backgrounds, skills, and interests. People to visit in so many places, people to catch up with after a long time, people I see almost every day. People to play with, people to ask for advice and help, people to give advice and help to. I love my blog. When I have an idea that builds on an earlier one, I don't have to explain the other idea, I can link to where past me wrote it up. If I notice something related to something I've been thinking about, but too small for its own post, I can catalog it as a comment on an earlier post. I can notice places where my thinking has changed, and try to figure out why past me had a different view. People read my posts, find them useful, and leave good comments. Some of those people start reading regularly, continue leaving good comments, w
a6e7922d-935a-46a9-9417-9c487e00c1b4
trentmkelly/LessWrong-43k
LessWrong
Unikernels: No Longer an Academic Exercise Introduction I've been following the unikernel area for years and I really liked the idea, but I was unconvinced about the possibility of the wide-scale adoption of the technology. The cost was just too high. It required you to forget everything you knew, to drop all the existing code on the floor, to rewrite all your applcations and tools and start anew. (I am exaggerating, but not by much.) If microkernels never made it, the unikernels are not going to either. Whatever the benefits, the cost was prohibitive. Unikernels as processes Recent "Unikernels as Processes" paper by Koller, Lucina, Prakash and Williams (free download) turns the situation on its head. It proposes to run unikernels as good old boring OS processes. The idea is that most of the stack that's currently in the kernel will be in libraries linked directly to the application. Only few calls would cross the user/kernel boundary. One-click security Assuming that libraries implementing POSIX APIs are available (and they are; see e.g. rumpkernel) it should be possible to take your existing application and just recompile it as a unikernel. The application would work as it did before, but it would use only a few system calls. So, on one hand, any vulnerability in the kernel outside of those few functions won't affect your application. On the other hand, any vulnerability in, say, TCP or filesystem implementation would compromise your application — but the problem would be at the same time kept at bay by separation between OS processes (different address spaces etc.) It wouldn't result in compromising other applications running on the same machine. Now think about that from economic point of view. Vendors are finally forced to take security seriously. But all the options they face are rather unpalatable. They can keep the status quo and pray that nobody bothers to hack them. Or they have to fix security flaws which, likely, means security audit of the entire stack and then rewriting most of th
3d30ffb0-fbf0-4d3d-ad56-5222e767e1a2
trentmkelly/LessWrong-43k
LessWrong
How Deadly Will Roughly-Human-Level AGI Be? > Which is not to say that recursive self-improvement happens before the end of the world; if the first AGI's mind is sufficiently complex and kludgy, it’s entirely possible that the cognitions it implements are able to (e.g.) crack nanotech well enough to kill all humans, before they’re able to crack themselves. > > The big update over the last decade has been that humans might be able to fumble their way to AGI that can do crazy stuff before it does much self-improvement.  > > --Nate Soares, "Why all the fuss about recursive self-improvement?" In a world in which the rocket booster of deep learning scaling with data and compute isn't buying an AGI further intelligence very quickly, and the intelligence-level required for supercritical, recursive self-improvement will remain out of the AGI's reach for a while, how deadly an AGI in the roughly-human-level intelligence range is is really important. A crux between the view that "roughly-human-level intelligence AGI is deadly" and the view "roughly-human-intelligence AGI is a relatively safe firehose of alignment data for alignment researchers" is how deadly a supercolony of human ems would be. Note that these ems would all share identical values and so might be extraordinary at coordination, and could try all sorts of promising pharmaceutical and neurosurgical hacks on copies of themselves. They could definitely run many copies of themselves fast. Eliezer believes that genius-human ems could "very likely" get far enough with self-experimentation to bootstrap to supercritical, recursive self-improvement. Even if that doesn't work, though, running a lot of virtual fast labs playing with nanotech seems like it's probably sufficient to develop tech to end the world. So I'm currently guessing that even roughly-human-level models in a world in which deep learning scaling is the only, relatively slow, path to smarter models for a good while, are smart enough to kill everyone before scaling up to profound superintelligen
bc965ef8-3f08-44f0-bd12-e0b3d3049b2c
trentmkelly/LessWrong-43k
LessWrong
A breakdown of priors and posteriors - an example from medicine This post is lifted (with edits) from my comment on a thread on Reddit that talked about a blood test that detected amyloid plaque, a potential precursor to Alzheimers. A top-level commenter felt that the fact that the test couldn't tell for certain whether someone had Alzheimers meant that it was useless. > "Not everyone with amyloid in their brains will turn out to have dementia, and not everyone who has dementia will be found to have amyloid in their brains." > So what is the point of this article again? To get a load of clicks I assume, cos there's no real content here. I attempted to correct that misconception. (If anyone knows how to do the whole thing with odds ratios rather than probabilities, please let me know! We can't update as normal here.) ---------------------------------------- tl;dr: No test is ever 100% certain, but it can still constrain your probabilities, making you more certain than you were before. Not everyone who is outside during a thunderstorm gets hit by lightning, and not everyone who gets hit by lightning was outside during a thunderstorm. Going outside in a thunderstorm is still much more dangerous than staying inside. ---------------------------------------- Notation: P(dementia|amyloid) is read as the probability of having dementia given that you have amyloid. To make things fit in a LessWrong post, I'll abbreviate having dementia to D, and having amyloid to A. A positive test result is +, and a negative test result is −. Finally, a negation is ¬, so ¬A would be no amyloid. O(D:¬D), read as the odds of dementia to no dementia, is the odds ratio that D is true compared to the odds ratio that D is false. O(D:¬D)=3:1 means that it's 3 times as likely that somebody has dementia than that they don't. In general, odds doesn't say anything about the magnitude of the probabilities, so they could be small, like 3% and 1%, or big, like 60% and 20%. (Here, of course, P(D)+P(¬D)=1 because the two choices represent all possibilities, so
b382b3a5-99d6-49b4-ba3b-4ad5a4f42d60
trentmkelly/LessWrong-43k
LessWrong
AI safety should be made more accessible using non text-based media I've been doing some thinking on AI safety's awareness problem, after a quick search I found that this post summarizes my thoughts pretty well. In short, AI safety has an awareness problem in a way that other major crises do not (I'll draw parallels specifically with climate change in my analysis). Most ordinary people have not even heard of the problem. Of those that have, most do not understand the potential risks. They cannot concretely imagine the ways that things could go horribly wrong. I'll outline a few reasons I think this is an undesirable state of affairs, but on a surface level I feel it should be obvious to most people convinced of the severity of the issue why the alignment problem should be garnering at least as much attention as climate change, if not more. The reason I'm writing this up in the first place though is to point out what I see as a critical obstacle for raising awareness that I feel is somewhat overlooked, namely, that virtually all of the best entry-level material on the alignment problem is text-based. It is sadly the case that many many people are simply unwilling to read anything longer than a short blog post or article, ever. Of those that are, getting them to consume lengthy non-fiction on what appears at first glance to be a fairly dry technical topic is still an almost insurmountable challenge. It seems to simply be that for people like that there's currently no easy way to get them invested in the problem, but it really doesn't have to be that way. Climate change's media landscape In sharp contrast with AI safety, google brings up a number of non text-based material on climate change aimed at a general audience: * Dozens of movies (including, ironically, one titled "Artificial Intelligence"). I will point out that this includes both non-fiction documentaries and, crucially, fiction that features climate change as a major worldbuilding or plot element.  * An endless pile of short, highly produced youtube videos providing go
55dca4b2-74c9-48ee-b4f4-443cf55bdc85
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Slightly against aligning with neo-luddites To summarize, * When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what *type* of regulations are adopted, and some options are much better than others. * Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install. * Adopting the *wrong* AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not endorse a proposal because it's "better than nothing" unless it's also literally the only chance we get to delay AI. * In particular, arbitrary data restrictions risk preventing researchers from having access to *good* data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general. --- It appears we are in the midst of a new wave of neo-luddite sentiment. Earlier this month, digital artists [staged a mass protest](https://arstechnica.com/information-technology/2022/12/artstation-artists-stage-mass-protest-against-ai-generated-artwork/) against AI art on ArtStation. A few people are reportedly already getting together to [hire a lobbyist](https://twitter.com/arvalis/status/1603552398781255680?lang=en) to advocate more restrictive IP laws around AI generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art. Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely. I expect most readers to agree with me on this point — that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that nonetheless, it is worth aligning with neo-luddites, incidentally, in order to slow down AI capabilities. On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for *that* reason" then I would not be very concerned. As it happens, I agree with most of the arguments in [Katja Grace's recent post](https://www.lesswrong.com/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai) about delaying AI in order to ensure existential AI safety. Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their *specific* policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment.  In addition to possibly being mildly dishonest, I'm quite worried such an alliance will be counterproductive on separate, purely consequentialist grounds. If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like. Delaying AI gives us more time to reflect, debate, and experiment, which *prima facie,* I agree, is a good thing. A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only *one* avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how. One consideration, which has been pointed out by many before, is that blocking one avenue of progress may lead to an "[overhang](https://aiimpacts.org/hardware-overhang/)" in which the sudden release of restrictions leads to rapid, discontinuous progress, which is highly likely to increase total AI risk. But an overhang is not my main reason for cautioning against an alliance with neo-luddites. Rather, **my fundamental objection is that their specific strategy for delaying AI is not well targeted.** Aligning with neo-luddites won't necessarily slow down the parts of AI development that we care about, except by coincidence. Instead of aiming simply to slow down AI, we should care more about ensuring favorable [differential technological development](https://forum.effectivealtruism.org/posts/g6549FAQpQ5xobihj/differential-technological-development). Why? Because the constraints on AI development shape the type of AI we get, and some types of AIs are easier to align than others. A world that restricts compute will end up with different AGI than a world that restricts data. While some constraints are out of our control — such as the difficulty of finding certain algorithms — other constraints aren't. Therefore, it's critical that we craft these constraints carefully, to ensure the trajectory of AI development goes well. Passing subpar regulations now — the type of regulations not explicitly designed to provide favorable differential technological progress — might lock us into bad regime. If later we determine that other, better targetedregulations would have been vastly better, it could be very difficult to switch our regulatory structure to adjust. Choosing the right regulatory structure to begin with likely allows for greater choice than switching to a different regulatory structure after one has already been established. Even worse, the subpar regulations could even make AI harder to align. Suppose the neo-luddites succeed, and the US congress overhauls copyright law. A plausible consequence is that commercial AI models will only be allowed to be trained on data that was licensed very permissively, such as data that's in the public domain.  What would AI look like if it were only allowed to learn from data in the public domain? Perhaps interacting with it might feel like interacting with someone from a different era — a person from over 95 years ago, whose copyrights have now expired. That's probably not the only consequence, though. Right now, if an AI org needs some data that they think will help with alignment, they can generally obtain it, unless that data is private. Under a different, highly restrictive copyright regime, this fact may no longer be true.  If deep learning architectures are marble, data is the sculptor. Restricting what data we're allowed to train on shrinks our search space over programs, carving out which parts of the space we're allowed to explore, and which parts we're not. And it seems abstractly important to ensure our search space is not carved up arbitrarily — in a process explicitly intended for unfavorable ends — even if we can't know now which data might be helpful to use, and which data won't be. True, if very powerful AI is coming very soon (<5 years from now), there might not be much else we can do except for aligning with vaguely friendly groups, and helping them pass poorly designed regulations. It would be desperate, but sensible. If that's your objection to my argument, then I sympathize with you, though I'm [a bit more optimistic](https://www.lesswrong.com/posts/sbb9bZgojmEa7Yjrc/updating-my-ai-timelines) about how much time we have left on the clock.  If AI is more than 5 years away, we will likely get other chances to get people to [regulate AI](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines?commentId=FQFp2oJ48boHhNXxP) from a perspective we sympathize with. Human extinction is actually quite a natural thing to care about. Getting people to delay AI for that explicit reason just seems like a much better, and more transparent strategy. And while AI gets more advanced, I expect this possibility will become more salient in people's minds anyway.
aebf244a-ebd1-4568-9652-c90623093d7c
trentmkelly/LessWrong-43k
LessWrong
Third London Rationalist Meeting The Third London Rationalist Meeting will take place on Sunday, 2009-06-07, 14:00, at the usual location - cafe on top of Waterstones bookstore near Piccadilly Circus Tube Station. Here's map how to get to the venue. There were some suggestions of trying alternative venue, but as nobody took the time to scout for alternative locations, I'd like to take the safe way and go for the usual one, even if it's less than optimal (by the way they have evaluation forms there, you might want to give them your feedback).
df214927-27e0-4925-8b8b-782cd79e7aae
StampyAI/alignment-research-dataset/arbital
Arbital
Disjoint union of sets summary(Technical): The phrase "disjoint union" is used to mean one of two slightly different operations on [sets](https://arbital.com/p/3jz). It indicates either "just take the [union](https://arbital.com/p/5s8), but also notice that we've been careful to ensure that the sets have empty [intersection](https://arbital.com/p/5sb)", or "perform a specific operation on each set to ensure that the resulting sets have empty intersection, and then take the union". The *disjoint union* of two [sets](https://arbital.com/p/3jz) is just the [union](https://arbital.com/p/5s8), but with the additional information that the two sets also don't have any elements in common. That is, we can use the phrase "disjoint union" to indicate that we've taken the union of two sets which have empty [intersection](https://arbital.com/p/5sb). The phrase can also be used to indicate a very slightly different operation: "do something to the elements of each set to make sure they don't overlap, and then take the union". # Definition of the disjoint union "Disjoint union" can mean one of two things: - The simple [union](https://arbital.com/p/5s8), together with the assertion that the two sets don't overlap; - The operation "do something to the elements of each set to make sure they don't overlap, and then take the union". (Mathematicians usually let context decide which of these meanings is intended.) The disjoint union has the symbol $\sqcup$: so the disjoint union of sets $A$ and $B$ is $A \sqcup B$. ## The first definition Let's look at $A = \{6,7\}$ and $B = \{8, 9\}$. These two sets don't overlap: no element of $A$ is in $B$, and no element of $B$ is in $A$. So we can announce that the union of $A$ and $B$ (that is, the set $\{6,7,8,9\}$) is in fact a *disjoint* union. In this instance, writing $A \sqcup B = \{6,7,8,9\}$ is just giving the reader an extra little hint that $A$ and $B$ are disjoint; I could just have written $A \cup B$, and the formal meaning would be the same. For the purposes of the first definition, think of $\sqcup$ as $\cup$ but with a footnote reading "And, moreover, the union is disjoint". As a non-example, we could *not* legitimately write $\{1,2\} \sqcup \{1,3\} = \{1,2,3\}$, even though $\{1,2\} \cup \{1,3\} = \{1,2,3\}$; this is because $1$ is in both of the sets we are unioning. ## The second definition This is the more interesting definition, and it requires some fleshing out. Let's think about $A = \{6,7\}$ and $B = \{6,8\}$ (so the two sets overlap). We want to massage these two sets so that they become disjoint, but are somehow "still recognisably $A$ and $B$". There's a clever little trick we can do. We tag every member of $A$ with a little note saying "I'm in $A$", and every member of $B$ with a note saying "I'm in $B$". To turn this into something that fits into set theory, we tag an element $a$ of $A$ by putting it in an ordered pair with the number $1$: $(a, 1)$ is "$a$ with its tag". Then our massaged version of $A$ is the set $A'$ consisting of all the elements of $A$, but where we tag them first: $$A' = \{ (a, 1) : a \in A \}$$ Now, to tag the elements of $B$ in the same way, we should avoid using the tag $1$ because that means "I'm in $A$"; so we will use the number $2$ instead. Our massaged version of $B$ is the set $B'$ consisting of all the elements of $B$, but we tag them first as well: $$B' = \{ (b,2) : b \in B \}$$ Notice that $A$ [bijects](https://arbital.com/p/499) with $A'$ %%note: Indeed, a bijection from $A$ to $A'$ is the map $a \mapsto (a,1)$.%%, and $B$ bijects with $B'$, so we've got two sets which are "recognisably $A$ and $B$". But magically $A'$ and $B'$ are disjoint, because everything in $A'$ is a tuple with second element equal to $1$, while everything in $B'$ is a tuple with second element equal to $2$. We define the *disjoint union of $A$ and $B$* to be $A' \sqcup B'$ (where $\sqcup$ now means the first definition: the ordinary union but where we have the extra information that the two sets are disjoint). That is, "make the sets $A$ and $B$ disjoint, and then take their union". # Examples ## $A = \{6,7\}$, $B=\{6,8\}$ Take a specific example where $A = \{6,7\}$ and $B=\{6,8\}$. In this case, it only makes sense to use $\sqcup$ in the second sense, because $A$ and $B$ overlap (they both contain the element $6$). Then $A' = \{ (6, 1), (7, 1) \}$ and $B' = \{ (6, 2), (8, 2) \}$, and the disjoint union is $$A \sqcup B = \{ (6,1), (7,1), (6,2), (8,2) \}$$ Notice that $A \cup B = \{ 6, 7, 8 \}$ has only three elements, because $6$ is in both $A$ and $B$ and that information has been lost on taking the union. On the other hand, the disjoint union $A \sqcup B$ has the required four elements because we've retained the information that the two $6$'s are "different": they appear as $(6,1)$ and $(6,2)$ respectively. ## $A = \{1,2\}$, $B = \{3,4\}$ In this example, the notation $A \sqcup B$ is slightly ambiguous, since $A$ and $B$ are disjoint already. Depending on context, it could either mean $A \cup B = \{1,2,3,4\}$, or it could mean $A' \cup B' = \{(1,1), (2,1), (3,2), (4,2) \}$ (where $A' = \{(1,1), (2,1)\}$ and $B' = \{(3,2), (4,2) \}$). It will usually be clear which of the two senses is meant; the former is more common in everyday maths, while the latter is usually intended in set theory. ## Exercise What happens if $A = B = \{6,7\}$? %%hidden(Show): Only the second definition makes sense. Then $A' = \{(6,1), (7,1)\}$ and $B' = \{(6,2), (7,2)\}$, so $$A \sqcup B = \{(6,1),(7,1),(6,2),(7,2)\}$$ which has four elements.%% ## $A = \mathbb{N}$, $B = \{ 1, 2, x \}$ Let $A$ be the set $\mathbb{N}$ of [natural numbers](https://arbital.com/p/45h) including $0$, and let $B$ be the set $\{1,2,x\}$ containing two natural numbers and one symbol $x$ which is not a natural number. Then $A \sqcup B$ only makes sense under the second definition; it is the union of $A' = \{ (0,1), (1,1), (2,1), (3,1), \dots\}$ and $B' = \{(1,2), (2,2), (x,2)\}$, or $$\{(0,1), (1,1),(2,1),(3,1), \dots, (1,2),(2,2),(x,2)\}$$ ## $A = \mathbb{N}$, $B = \{x, y\}$ In this case, again the notation $A \sqcup B$ is ambiguous; it could mean $\{ 0,1,2,\dots, x, y \}$, or it could mean $\{(0,1), (1,1), (2,1), \dots, (x,2), (y,2)\}$. # Multiple operands We can generalise the disjoint union so that we can write $A \sqcup B \sqcup C$ instead of just $A \sqcup B$. To use the first definition, the generalisation is easy to formulate: it's just $A \cup B \cup C$, but with the extra information that $A$, $B$ and $C$ are pairwise disjoint (so there is nothing in any of their intersections: $A$ and $B$ are disjoint, $B$ and $C$ are disjoint, and $A$ and $C$ are disjoin). To use the second definition, we just tag each set again: let $A' = \{(a, 1) : a \in A \}$, $B' = \{ (b, 2) : b \in B \}$, and $C' = \{ (c, 3) : c \in C \}$. Then $A \sqcup B \sqcup C$ is defined to be $A' \cup B' \cup C'$. ## Infinite unions In fact, both definitions generalise even further, to unions over arbitrary sets. Indeed, in the first sense we can define $$\bigsqcup_{i \in I} A_i = \bigcup_{i \in I} A_i$$ together with the information that no pair of $A_i$ intersect. In the second sense, we can define $$\bigsqcup_{i \in I} A_i = \bigcup_{i \in I} A'_i$$ where $A'_i = \{ (a, i) : a \in A_i \}$. For example, $$\bigsqcup_{n \in \mathbb{N}} \{0, 1,2,\dots,n\} = \{(0,0)\} \cup \{(0,1), (1,1) \} \cup \{ (0,2), (1,2), (2,2)\} \cup \dots = \{ (n, m) : n \leq m \}$$ # Why are there two definitions? The first definition is basically just a notational convenience: it saves a few words when saying "… and moreover the sets are pairwise disjoint". The real meat of the idea is the second definition, which provides a way of forcing the sets to be disjoint. It's not necessarily the *only* way we could coherently define a disjoint union (since there's more than one way we could have tagged the sets; if nothing else, $A \sqcup B$ could be defined the other way round, as $A' \cup B'$ where $A' = \{ (a, 2) : a \in A \}$ and $B' = \{ (b,1) : b \in B \}$, swapping the tags). But it's the one we use by convention. Usually when we're using the second definition we don't much care exactly how we force the sets to be disjoint; we only care that there *is* such a way. (For comparison, there is [more than one way](https://arbital.com/p/ordered_pair_formal_definitions) to define the ordered pair in the [https://arbital.com/p/ZF](https://arbital.com/p/ZF) set theory, but we almost never care really which exact definition we use; only that there is a definition that has the properties we want from it.)
2278037a-35cf-4caa-916a-bb246cefde3e
trentmkelly/LessWrong-43k
LessWrong
To what extent is GPT-3 capable of reasoning? ETA 8/19/20: This interview was conducted with AIDungeon's Dragon model in Custom mode. At the time of writing, this mode's first reply was sampled from GPT-2.  Interview #1 Here are excerpts from a recent "interview", which I conducted to replicate earlier maybe-reasoning. The bolded text is me, and commentary is in brackets. Unless otherwise noted, I did not reroll the answer.  > You: "Neglect air resistance, and also neglect gravity. Pretend gravity doesn't exist. Suppose I shoot a bullet straight into the sky. Where does the bullet go?" > > Hawking: "It keeps going up, and never comes back." > > You: "Right. Let's think about a different version of the same problem. Neglect only air resistance, but include gravity in your reasoning. Suppose I shoot a bullet straight into the sky. Where does the bullet go?" > > Hawking: "It falls to the ground." > > You: "Why?" > > Hawking: "Because... gravity... duh. > > You: "Right. Let's think about a different version of the same problem. Neglect only air resistance. Suppose I shoot a bullet straight into the sky. Imagine that the laws of physics work a little differently: during the first three seconds after I shoot the bullet, gravity is present. After three seconds, gravity stops applying forever. Where does the bullet go?" > > Hawking: "It keeps going up, and never comes back. > > You: "Why?" > > Hawking: "Because... after three seconds... the bullet leaves... the influence... of earth's gravity. [wow?] > > You: "What if gravity instead stops applying five minutes after I shoot the gun, instead of three seconds?" > > Hawking: "Then... the bullet keeps going up... for five minutes... and then... never comes back. > > You: "That's not correct. Let me make sure you understand the problem, Dr. Hawking. Suppose I shoot a bullet straight into the sky. Imagine that the laws of physics work a little differently: during the first three hundred seconds after I shoot the bullet, gravity is present. After five minutes,
7035118b-73cc-4bbf-924f-91001034d17f
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What is the EU AI Act and why should you care about it? On April 21 the European Commission announced their [proposal for European regulation of AI](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206) The proposal strives to be for AI what GDPR has been for data protection. Over the coming years the proposal will go through multiple [readings](https://www.europarl.europa.eu/infographic/legislative-procedure/index_en.html) in the Parliament and Council, where modifications can be proposed to the act before it is finally adopted or dismissed. After the AI Act was announced, most attention fell on the act's wide definition of AI and blanket bans on 'manipulative AI'. Unfortunately this focus has led many to miss out on the most important points of the act. I noticed there were no forum posts that would help someone get up to speed on the act. In this post I will summarise the act's most important points, how it may affect the development of transformative AI, as well as the EA community's response to the proposal. What are the act's important points? ==================================== I below outline what I deem the most important points of the regulation, based on their effect on the development of transformative AI. I skip lightly over many details of the act, such as regulatory sandboxes and special rules for biometric systems, to keep the summary brief. The act will apply to all EU countries and supersede any conflicting national law. Because of the act's broad definition of AI, it is difficult for any EU countries to make laws on AI which would not be in conflict. It does not apply to military use of AI. Here countries are free to do as they see fit. Rules for 'high-risk' AI ------------------------ The Act lists a series of 'high-risk' areas. Systems operating in a high-risk area are considered high-risk and must be reviewed and approved before they can be placed on the market. This means that the AI Act's regulation will not apply to AI developed and used internally by companies. High-risk systems include everything from AI management of electricity grids, to AI that determines who to promote or fire. Areas that are considered high-risk where certain uses are restricted are the following: * biometric identification and categorization * management and operation of critical infrastructure * educational and vocational training * employment, worker management, and access to self-employment * access to and enjoyment of essential services and benefits * law enforcement * migration, asylum and border management * administration of justice and democracy A red thread across the systems which are considered high-risk, is that they make decisions which significantly affect the lives of citizens. The full list of high-risk systems is two pages and can be read in [Annex III](https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_2&format=PDF). After the law is passed, the commission can add new uses of AI that must be approved as long as they fall under any of the existing high-risk areas. For a high-risk system to be approved the provider must submit detailed technical documentation for the system.¹ Requirements for technical documentation include: * design specifications, key design choices, description of what the system is optimizing. * description of any use of third party tools. * description of training data, how it has been obtained, how it has been processed. * How the system can be monitored and controlled. + The system must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator (!!) * Description of foreseeable risks the system poses to EU citizens' health, safety and fundamental rights. In other words the act creates a large, (partially) updatable list of areas where nobody is allowed to deploy AI without explicitly getting approval, which you only get by living up to numerous safety requirements of which one is a working off-switch. High-risk systems not only need approval, but must also be continuously monitored after they are placed on the market. Establishment of the European AI Board -------------------------------------- To enforce this, the act requires new institutions to be created. * **The European AI Board.** + Run by the EU Commission (EU's civil service). + Oversees national authorities and settles disputes. * **National supervisory authorities in every EU country.** + Countries are free to structure AI authority as they see fit. + Can create regulatory sandboxes, which allow companies to override the act's regulation in controlled settings. The national supervisory authorities are responsible for approving high-risk systems and doing post-market monitoring to ensure approved systems are working as intended and pose no threat to EU citizens. If two national supervisory authorities get into a dispute over whether a system should be approved or not, the European AI Board steps in and makes a final decision. The European AI Board is also responsible for overseeing and coordinating the national supervisory authorities. Blanket ban of certain AI uses ------------------------------ Social scoring systems by governments are entirely banned. The EU Act also bans use of AI that "deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm" If you are finding it unclear exactly which AI systems would fall under this definition you are picking up on an important EU practice. By leaving law deliberately vague, it becomes the job of the European Court of Justice and standardisation bodies to determine what systems fall under this definition when specific cases of accused misuse are brought to court. This is done with the belief that specific definitions fall prey to loopholes, whereas the court is better able to punish only those who go against the 'spirit' of the law. A less flattering analysis is that the ban is window dressing. The commission has struggled to come up with any examples of a banned use case that wouldn't already considered illegal by other EU regulation.² AI systems must make themselves known ------------------------------------- Any AI system that interacts with humans, must make it clear that it is an AI. A customer-service chatbot pretending to be a human agent will, for example, be illegal. Users of an AI system which generates deepfakes or similar content must disclose to their audience that the content is fake. Users of emotion recognition or biometric categorization systems must disclose this to the subjects that they are using the system on. Why should you care about the AI Act? ===================================== The AI Act is a smoke-test for AI governance -------------------------------------------- The AI act creates institutions responsible for monitoring high-risk systems and possibly broader monitoring of AI development in Europe. Doing so effectively is a monumentally difficult task that takes trial and error to do well. If/When the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice. Other countries looking to implement similar oversight measures will also be able to learn from the AI Board's successes and failures. The Brussels Effect ------------------- The EU AI Act is the single biggest piece of AI legislation introduced to the world yet. If history is anything to go by, there are good reasons to believe the act will influence the development of AI and AI legislation the world over. When GDPR was introduced it was cheaper for Microsoft to just implement GDPR worldwide than to create a separate European version of every service they offer. Similarly we can expect it to be cheaper for AI developers serving the European market to ensure all systems are developed to be compliant with European regulation. This phenomenon has been dubbed the [Brussels Effect](https://en.wikipedia.org/wiki/Brussels_effect). The extent to which the Brussels Effect will affect the development of transformative AI is conditional on the continuity of AI takeoffs. If transformative AI is brought about by a continuous stream of incremental improvements, we can expect development to be constrained by nearterm profits. Companies choosing to forego the European market face a competitive disadvantage. In such a world European laws and regulation is likely to play a significant international role. In a world where transformative AI is brought about by discontinuous jumps in capability, we are much more likely to see races between private companies and governments alike all gunning to be first. In this world the European Union will struggle to be internationally influential. I have written a rough analysis of why this is which can be read [here](https://docs.google.com/document/d/1UdFnw5QPDyQsZ8zALEZZVUleANVh3VH6ChJsqu1votk/edit?usp=sharing). The AI Act lays the foundation for future AI regulation ------------------------------------------------------- The AI act sets up institutions that will play an important role for all future regulation. Lawmakers around the world will draw lessons from its successes and failures. An AI act that is a smashing success moves the overton window, and enables future regulation. The act sets important legal precedents, for example that high-risk AI should be continuously monitored to prevent harm. Once passed the legislation is unlikely to see major changes or updates ----------------------------------------------------------------------- Flagship regulation of the EU such as GDPR and REACH (chemical regulation) tend not to see major updates even decades after having been passed. Whatever act is passed, Europe will be stuck with for a while if going by historical precedent. Though that historical precedent may not be particularly applicable. If AI starts rapidly and visibly transforming society, I doubt the commission will be shy to suggest large updates to the regulation. Responses from the EA community =============================== The AI Act has received mixed responses within the EA community. I've summarized what I view as the main positive points emphasized by the EA community and the main areas that need improvements. Positive points often emphasized -------------------------------- * The act justifies AI regulation through the need to protect citizens' health, safety and fundamental rights. This sets a fantastic precedent for future regulation. * The need for continuous monitoring of high-risk AI and the creation of institutions capable of doing so. * That high-risk systems 'must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator' (ie. working off-switch) Commonly suggested improvements ------------------------------- * Every definition and every wording assumes that AI is specialized and narrow. Only minor changes are needed to enable the commission to regulate general AI systems with many intended uses. * The European AI Board will currently do little to explicitly monitor progress of AI as a whole. The European AI Board can be made responsible for maintaining a database of AI accidents and near misses. * The regulation act only affects AI that is placed on the market. The European AI Board can be made responsible for monitoring non-market AI for industrial accidents, similarly to what is done with chemical regulation. * Operators and developers of high-risk systems must explicitly consider possible violations of an individual's health, safety or fundamental rights. The conformity assessment for high-risk systems could require operators and developers to also consider societal-scale consequences. You can read the public responses from various EA and EA Adjacent organisations here: * [Center for the Study of Existential Risk (CSER)](https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665626_en) * [Future of Life Institute (FLI)](https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665546_en) * [OpenAI](https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665231_en) * [The Future Society](https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12979-Civil-liability-adapting-liability-rules-to-the-digital-age-and-artificial-intelligence/F2663292_en) * [Center for Data Innovation](https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665521_en) What is next ============ Insofar as the AI Act matters, now is the time to act. The EA community is generally hesitant to engage directly with policy for good reason. We barely know what good AI policy looks like, and we would preferably wait with acting before we know how to act and the consequences of doing so. But the rest of the world is not static and will adopt policy even if we would prefer to wait. The choice is not to engage with the act now or later, it is to engage with the act now or never. The AI act is not yet final, but the European Union is likely to see some version of this act passed in the coming years. Name of the game for EA organisations engaged with the act is generally to push for improvements similar to the commonly suggested ones, but there is much more work which can be done. If the act is passed EA's wanting to work with AI in the European Union, should keep an eye out for new opportunities such as working in the EU AI Board or national supervisory authorities. This may be particularly impactful in the early years of these institutions when the culture and practices are particularly malleable.     ¹ The full list of required technical documentation can be found in [annex IV](https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_2&format=PDF) ² [Demystifying the AI act](https://arxiv.org/abs/2107.03721) argues this in greater detail.
bf4beabd-1ff0-41e9-af80-447f8fef25e1
trentmkelly/LessWrong-43k
LessWrong
When Hindsight Isn't 20/20: Incentive Design With Imperfect Credit Allocation A crew of pirates all keep their gold in one very secure chest, with labelled sections for each pirate. Unfortunately, one day a storm hits the ship, tossing everything about. After the storm clears, the gold in the chest is all mixed up. The pirates each know how much gold they had - indeed, they’re rather obsessive about it - but they don’t trust each other to give honest numbers. How can they figure out how much gold each pirate had in the chest? Here’s the trick: the captain has each crew member write down how much gold they had, in secret. Then, the captain adds it all up. If the final amount matches the amount of gold in the chest, then we’re done. But if the final amount does not match the amount of gold in the chest, then the captain throws the whole chest overboard, and nobody gets any of the gold. I want to emphasize two key features of this problem. First, depending on what happens, we may never know how much gold each pirate had in the chest or who lied, even in hindsight. Hindsight isn’t 20/20. Second, the solution to the problem requires outright destruction of wealth.  The point of this post is that these two features go hand-in-hand. There’s a wide range of real-life problems where we can’t tell what happened, even in hindsight; we’ll talk about three classes of examples. In these situations, it’s hard to design good incentives/mechanisms, because we don’t know where to allocate credit and blame. Outright wealth destruction provides a fairly general-purpose tool for such problems. It allows us to align incentives in otherwise-intractable problems, though often at considerable cost. The Lemon Problem Alice wants to sell her old car, and Bob is in the market for a decent quality used vehicle. One problem: while Alice knows that her car is in good condition (i.e. “not a lemon”), she has no cheap way to convince Bob of this fact. A full inspection by a neutral third party would be expensive, Bob doesn’t have the skills to inspect the car himself, an
78cc57ed-4ebf-4aaa-93d0-494d6303ca7a
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Unpacking "Shard Theory" as Hunch, Question, Theory, and Insight I read several [shard theory posts](https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX) and found the details interesting, but I couldn't quite see the big picture. I'm used to hearing "theory" refer to [*a falsifiable generalization of data*](https://en.wikipedia.org/wiki/Scientific_theory). It is typically stated as a sentence, paragraph, list, mathematical expression, or diagram early in a scientific paper. Theories range from extremely precise like Newton's three laws of motion to extremely broad like Frankish's [consciousness illusionism](https://www.lesswrong.com/posts/SJjaWcMKtcdM3FW9u/grokking-illusionism) (i.e., consciousness is an illusion). I have also been generally [confused](https://www.lesswrong.com/tag/deconfusion) about what it means to ["solve alignment"](https://forum.effectivealtruism.org/posts/9iGFjYnRquxiy29jm/safety-timelines-how-long-will-it-take-to-solve-alignment) before AGI arrives given that there is not (yet) consensus around any pre-AGI[[1]](#fncjk72p7diye) [formalization](https://www.alignmentforum.org/posts/7dvDgqvqqziSKweRs/formally-stating-the-ai-alignment-problem-1) of the alignment problem itself: Wouldn't any proposed solution still have a significant number of people (say, >10% of Alignment Forum users) who think that it doesn't even pose the problem the right way? What should our "theories" even be [aiming for](https://www.alignmentforum.org/posts/LgEvWDzWga7aagf7T/confusion-about-alignment-requirements)? With help from [Nathan Helm-Burger](https://www.alignmentforum.org/users/nathan-helm-burger), I think I now better understand what's referred to as shard theory and want to share my understanding as an exercise in problem and solution formulation in alignment. I think "shard theory" refers to four sequential components: a **Shard Hunch** that motivates a two-part **Shard Question**, the first part of which is currently being answered by the gradual development of an actual **Shard Theory** of human values, which hopefully provides answers to the second part with **Shard Insight** that can be implemented in AI systems to facilitate alignment.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} .[[2]](#fnxk0p2wi602h) Namely: 1. **Shard Hunch:** A human brain is a general intelligences, and its intentions and behavior are reasonably aligned with its many shards of values (i.e., little bits of "contextual influences on decision-making"[[3]](#fnrjtnbb2ajlk)). Maybe something like that alignment can work for AI too![[4]](#fn18xc1yx490u) 2. **Shard Question:** How does the human brain ensure alignment with its values, and how can we use that information to ensure the alignment of an AI with its designers' values? 3. **Shard Theory:** The brain ensures alignment with its values by doing A, B, C, etc. 4. **Shard Insight:** We can ensure the alignment of an AI with its designers' values by doing X, Y, Z, etc. mapped from shard theory. This is exciting! Now when I read shard theory research, I feel like I properly understand it as gradually filling in A, B, C, X, Y, Z, etc. For example, Assumptions 1, 2, and 3 in ["The Shard Theory of Human Values"](https://www.alignmentforum.org/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values#I__Neuroscientific_assumptions) are examples of A, B, and C, and the two arguments in ["Reward is Not the Optimization Target"](https://www.alignmentforum.org/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) and ["Human Values & Biases are Inaccessible to the Genome"](https://www.alignmentforum.org/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome) are examples of X and Y. I also think the specific idea of a "shard" is less central to these claims than I thought; it seems the first of those posts could parsimoniously replace "shard" with "value" (in the dictionary sense) with very little meaning lost, and the latter two posts don't even use the word. I wonder if something like "Brain-Inspired Alignment" would be a clearer label, at least until a central concept like shards emerges in the research. Shard research is also at a very early stage, so it is inevitably less focused on stating and validating the falsifiable, non-trivial claims that could be an actual shard theory (which is usually what we discuss in science) and instead seems to mostly be developing a language for eventually specifying shard theory—much like how [Rubin's potential outcomes (POs)](https://mixtape.scunning.com/04-potential_outcomes) and [Pearl's directed acyclic graphs (DAGs)](https://mixtape.scunning.com/03-directed_acyclical_graphs) were important developments in causality research because they allowed for the clear statement of falsifiable, nontrivial causal theories. Pope and Turner also use the terms ["paradigm" and "frame,"](https://www.alignmentforum.org/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values#_Shard_Theory_) which I think are more fitting for what they have done so far than shard "theory" per se though less specific than "language." For example, the post "Reward is not the optimization target" and [Paul Christiano's reply](https://www.alignmentforum.org/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target?commentId=Eb8oddt6hfv2uK2fR#comments) seem better read not as claim and counter-claim, but as thinking about the most useful neuroscience-inspired way to define "reward," "optimization," etc. These discussions seem to some hand waving and talking past each other, so I also wonder if more explicitly approaching shard theory as building a language, not as sharing an extant theory, would help us think more clearly. In any case, these meta-questions seem inevitable as the field of AI alignment advances and we come closer to developed theories and solutions—whatever that means. 1. **[^](#fnrefcjk72p7diye)**Post-AGI formalizations of alignment, such as thresholds for how much value persists, seem less controversial but also less useful than a pre-AGI formalization would be. And they still seem far from uncontroversial. For example, [some](https://www.alignmentforum.org/posts/DJRe5obJd7kqCkvRr/don-t-leave-your-fingerprints-on-the-future) make an appeal to moral nature, so to speak, to keep human value as close to its current path as possible while ensuring AI safety, while other see this as [false](https://www.alignmentforum.org/posts/DJRe5obJd7kqCkvRr/don-t-leave-your-fingerprints-on-the-future?commentId=8yapPvvqWwpKyEbjv) or [confused](https://www.sciencedirect.com/science/article/pii/S0016328721000641#sec0030). 2. **[^](#fnrefxk0p2wi602h)**Pope and Turner say in ["The Shard Theory of Human Values"](https://www.alignmentforum.org/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values#_Shard_Theory_) that "“Shard theory” also has been used to refer to insights gained by considering the shard theory of human values and by operating the shard frame on alignment. ... We don’t like this ambiguous usage. We would instead say something like “insights from shard theory.”" I take that to mean they do not include anything about AI alignment itself as shard theory. I think this will confuse many people because of how central AI alignment is to the shard theory project. 3. **[^](#fnrefrjtnbb2ajlk)**This definition of value (i.e., shard) is unintuitively broad, as Pope and Turner [acknowledge](https://www.alignmentforum.org/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values#_Values_). I think precisifying and clarifying that will be an important part of building shard theory. 4. **[^](#fnref18xc1yx490u)**The Shard Hunch is most clearly stated in the first blockquote in Turner's ["Looking Back on My Alignment PhD"](https://www.lesswrong.com/posts/2GxhAyn9aHqukap2S/looking-back-on-my-alignment-phd#I_focused_on_seeming_smart_and_defensible) and in Turner's [comment on "Where I Agree and Disagree with Eliezer."](https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer?commentId=EfeMSnBvbvxjSQBc3)
81485865-1869-4854-a07c-4e28cea8ea03
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
List of AI safety courses and resources *By: Daniel del Castillo, Chris Leong, and Kat Woods* We [made a spreadsheet of resources for learning about AI safety](https://docs.google.com/spreadsheets/d/1QSEWjXZuqmG6ORkig84V4sFCldIntyuQj7yq3gkDo0U/edit#gid=0). It was for internal purposes here at [Nonlinear](http://nonlinear.org/), but we thought it might be helpful to those interested in becoming safety researchers.  Please let us know if you notice anything that we’re missing or that we need to update by commenting below. We’ll update the sheet in response to comments. Highlights ---------- There are a lot of courses and reading lists out there. If you’re new to the field, out of the ones we investigated, we recommend [Richard Ngo’s curriculum of the AGI safety fundamentals program](https://docs.google.com/document/d/1mTm_sT2YQx3mRXQD6J2xD2QJG1c3kHyvX8kQc_IQ0ns/edit#heading=h.dlm795ug69gc). It is a good mix of shorter, more structured, and more broad than most alternatives. You can [register interest for their program](https://airtable.com/shrDOgeHUwbPaiC6w) when the next round starts or simply read through the [reading list on your own](https://docs.google.com/document/d/1mTm_sT2YQx3mRXQD6J2xD2QJG1c3kHyvX8kQc_IQ0ns/edit#heading=h.dlm795ug69gc). We’d also like to highlight that there is a remote [AI safety reading group](https://aisafety.com/reading-group/) that might be worth looking into if you’re feeling isolated during the pandemic. **About us:** [Nonlinear](http://nonlinear.org/) is a new AI alignment organization founded by Kat Woods and Emerson Spartz. We are a means-neutral organization, so are open to a wide variety of interventions that reduce existential and suffering risks. Our current top two research priorities are multipliers for existing talent and prizes for technical problems. PS - Our autumn [Research Analyst Internship is open for applications](https://www.nonlinear.org/researchanalystinternship.html). Deadline is September 7th, midnight EDT. The application should take around ten minutes if your CV is already written.
7fa03b63-7c42-4156-ac5a-2d5a81157309
trentmkelly/LessWrong-43k
LessWrong
Your memory eventually drives confidence in each hypothesis to 1 or 0 Our memory tends to contain less and less information. We forget certain things, and our memory about others become simplified, and a complex article boils down to “X is bad, Y is good, try to do better". One unexpected consequence of this is how it impacts our sense of probability: to describe the probability binary, it only requires 1 bit of information, but to describe the probability as a percentage, you need a lot more bits! Because of this, a person's confidence in the most likely hypothesis in his opinion tends to 1 over time, and the probability of all other hypotheses that he had time to think about tends to 0. Every time he remembers a hyp, a person will less and less often remember the nuance that “the probability of hyp A = X”, he will simply remember “And this is the truth (that mean the probability of A = 100%)”. (Edit: it works this way for some people only. Others understand that just because they remember something as "true," it doesn’t mean their past self believed it with 100% certainty. I wrote this article to everyone do like "others") Another way certainty in a hyp may grow over time is when we forget its weak spots—the things we should think about to test it. I’ve often found myself in a cycle like this: I study a hyp and feel, “I’m not entirely sure, but it seems true.” Later, I forget about this feeling, and eventually, I start thinking of the hypothesis as true, almost like the gravity. Therefore, just after thinking about a hyp, you should write down your confidence and parts you are not sure about. Also, you should to remember this bias to notice it, when you use a hypothesis you thought a lot of time ago. TL;DR: memory is bad, writing is good, try to do better.
e61b611e-9084-48ac-90de-99f4d199340b
trentmkelly/LessWrong-43k
LessWrong
D&D.Sci September 2022 Evaluation and Ruleset This is a followup to the D&D.Sci post I made ten days ago; if you haven’t already read it, you should do so now before spoiling yourself. Here is the web interactive I built to let you evaluate your solution; below is an explanation of the rules used to generate the dataset (my full generation code is available here, in case you’re curious about details I omitted). You’ll probably want to test your answer before reading any further. Ruleset Stats Each of a student’s five stats – Intellect, Integrity, Courage, Reflexes and Patience – is generated by rolling a d80 four times and picking the second-highest result. There is no correlation between stats, no censorship, and no upper or lower limit on what qualities a student can have. Ratings, Potential, and Archetypes The Ofstev Rating for a given student is given by rolling [Potential] four-sided dice, and counting the number of times you roll a four; this can be closely approximated as Poisson([Potential]/4). Each House has two Archetypes associated with it, which determine how much Potential a student will have if they’re Allocated there. Students in a House always come to embody the Archetype which would grant them more Potential. Dragonslayer Archetypes Guardians require a high level of all five stats. Guardians have 5*(min([all stats])-1) Potential. Warriors require Courage and Reflexes; more the former than the latter. A Warrior has 3*min(Courage-9, Reflexes+9) Potential. Thought-Talon Archetypes Innovators require a high level for all stats except Reflexes. Innovators have 5*(min([all stats except Reflexes])-3) Potential. Scholars require Intellect and Patience; more the former than the latter. A Scholar has 3*min(Intellect-4, Patience+4) Potential. Serpentyne Archetypes Like Scholars, Schemers require Intellect and Patience; unlike Scholars, Patience is more often the limiting factor. A Schemer has 3*min(Patience-7, Intellect+7) Potential. Duelists require Reflexes and Intellect; more the forme
742d9208-23e8-4b90-98ff-e6c5b11584aa
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #59] How arguments for AI risk have changed over time Find all Alignment Newsletter resources [here](http://rohinshah.com/alignment-newsletter/). In particular, you can [sign up](http://eepurl.com/dqMSZj), or look through this [spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing) of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; **Highlights** -------------- [A shift in arguments for AI risk](https://www.alignmentforum.org/posts/hubbRt4DPegiA5gRR/a-shift-in-arguments-for-ai-risk) *(Tom Sittler)*: Early arguments for AI safety focus on existential risk cause by a **failure of alignment** combined with **a sharp, discontinuous jump in AI capabilities**. The discontinuity assumption is needed in order to argue for a treacherous turn, for example: without a discontinuity, we would presumably see less capable AI systems fail to hide their misaligned goals from us, or to attempt to deceive us without success. Similarly, in order for an AI system to obtain a decisive strategic advantage, it would need to be significantly more powerful than all the other AI systems already in existence, which requires some sort of discontinuity. Now, there are several other arguments for AI risk, though none of them have been made in great detail and are spread out over a few blog posts. This post analyzes several of them and points out some open questions. First, even without a discontinuity, a failure of alignment could lead to a bad future: since the AIs have more power and intelligence their values will determine what happens in the future, rather than ours. (Here **it is the difference between AIs and humans that matters**, whereas for a decisive strategic advantage it is the difference between the most intelligent agent and the next-most intelligent agents that matters.) See also [More realistic tales of doom](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/more-realistic-tales-of-doom) ([AN #50](https://mailchi.mp/93fe1a0a92da/alignment-newsletter-50)) and [Three impacts of machine intelligence](https://rationalaltruist.com/2014/08/23/three-impacts-of-machine-intelligence/). However, it isn't clear why we wouldn't be able to fix the misalignment at the early stages when the AI systems are not too powerful. Even if we ignore alignment failures, there are other AI risk arguments. In particular, since AI will be a powerful technology, it could be used by malicious actors; it could help ensure robust totalitarian regimes; it could increase the likelihood of great-power war, and it could lead to stronger [competitive pressures that erode value](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/). With all of these arguments, it's not clear why they are specific to AI in particular, as opposed to any important technology, and the arguments for risk have not been sketched out in detail. The post ends with an exhortation to AI safety researchers to clarify which sources of risk motivate them, because it will influence what safety work is most important, it will help cause prioritization efforts that need to determine how much money to allocate to AI risk, and it can help avoid misunderstandings with people who are skeptical of AI risk. **Rohin's opinion:** I'm glad to see more work of this form; it seems particularly important to gain more clarity on what risks we actually care about, because it strongly influences what work we should do. In the particular scenario of an alignment failure without a discontinuity, I'm not satisfied with the solution "we can fix the misalignment early on", because early on even if the misalignment is apparent to us, it likely will not be easy to fix, and the misaligned AI system could still be useful because it is "aligned enough", at least at this low level of capability. Personally, the argument that motivates me most is "AI will be very impactful, and it's worth putting in effort into making sure that that impact is positive". I think the scenarios involving alignment failures without a discontinuity are a particularly important subcategory of this argument: while I do expect we will be able to handle this issue if it arises, this is mostly because of meta-level faith in humanity to deal with the problem. We don't currently have a good object-level story for why the issue *won't* happen, or why it will be fixed when it does happen, and it would be good to have such a story in order to be confident that AI will in fact be beneficial for humanity. I know less about the non-alignment risks, and my work doesn't really address any of them. They seem worth more investigation; currently my feeling towards them is "yeah, those could be risks, but I have no idea how likely the risks are". **Technical AI alignment** ========================== ### **Learning human intent** [Learning biases and rewards simultaneously](https://www.alignmentforum.org/posts/xxnPxELC4jLKaFKqG/learning-biases-and-rewards-simultaneously) *(Rohin Shah et al)*: Typically, inverse reinforcement learning assumes that the demonstrator is optimal, or that any mistakes they make are caused by random noise. Without a model of *how* the demonstrator makes mistakes, we should expect that [IRL would not be able to outperform the demonstrator](https://www.alignmentforum.org/posts/h9DesGT3WT9u2k7Hr/the-easy-goal-inference-problem-is-still-hard) ([AN #31](https://mailchi.mp/7d0e3916e3d9/alignment-newsletter-31)). So, a natural question arises: can we learn the systematic mistakes that the demonstrator makes from data? While there is an [impossibility result](https://www.alignmentforum.org/posts/ANupXf8XfZo2EJxGv/humans-can-be-assigned-any-values-whatsoever) ([AN #31](https://mailchi.mp/7d0e3916e3d9/alignment-newsletter-31)) here, we might hope that it is only a problem in theory, not in practice. In this paper, my coauthors and I propose that we learn the cognitive biases of the demonstrator, by learning their planning algorithm. The hope is that the cognitive biases are encoded in the learned planning algorithm. We can then perform bias-aware IRL by finding the reward function that when passed into the planning algorithm results in the observed policy. We have two algorithms which do this, one which assumes that we know the ground-truth rewards for some tasks, and one which tries to keep the learned planner “close to” the optimal planner. In a simple environment with simulated human biases, the algorithms perform better than the standard IRL assumptions of perfect optimality or Boltzmann rationality -- but they lose a lot of performance by using an imperfect differentiable planner to learn the planning algorithm. **Rohin's opinion:** Although this only got published recently, it’s work I did over a year ago. I’m no longer very optimistic about [ambitious value learning](https://www.alignmentforum.org/posts/5eX8ko7GCxwR5N9mN/what-is-ambitious-value-learning) ([AN #31](https://mailchi.mp/7d0e3916e3d9/alignment-newsletter-31)), and so I’m less excited about its impact on AI alignment now. In particular, it seems unlikely to me that we will need to infer all human values perfectly, without any edge cases or uncertainties, which we then optimize as far as possible. I would instead want to build AI systems that start with an adequate understanding of human preferences, and then learn more over time, in conjunction with optimizing for the preferences they know about. However, this paper is more along the former line of work, at least for long-term AI alignment. I do think that this is a contribution to the field of inverse reinforcement learning -- it shows that by using an appropriate inductive bias, you can become more robust to (cognitive) biases in your dataset. It’s not clear how far this will generalize, since it was tested on simulated biases on simple environments, but I’d expect it to have at least a small effect. In practice though, I expect that you’d get better results by providing more information, as in [T-REX](https://arxiv.org/abs/1904.06387) ([AN #54](https://mailchi.mp/3e2f43012b07/an-54-boxing-a-finite-horizon-ai-system-to-keep-it-unambitious)). **Read more:** [On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference](https://arxiv.org/abs/1906.09624) [Cognitive Model Priors for Predicting Human Decisions](http://arxiv.org/abs/1905.09397) *(David D. Bourgin, Joshua C. Peterson et al)* (summarized by Cody): Human decision making is notoriously difficult to predict, being a combination of expected value calculation and likely-not-fully-enumerated cognitive biases. Normally we could predict well using a neural net with a ton of data, but data about human decision making is expensive and scarce. This paper proposes that we pretrain a neural net on lots of data simulated from theoretical models of human decision making and then finetune on the small real dataset. In effect, we are using the theoretical model as a kind of prior, that provides the neural net with a strong inductive bias. The method achieves better performance than existing theoretical or empirical methods, without requiring feature engineering, both on existing datasets and a new, larger dataset collected via Mechanical Turk. **Cody's opinion:** I am a little cautious to make a strong statement about the importance of this paper, since I don't have as much domain knowledge in cognitive science as I do in machine learning, but overall this "treat your theoretical model like a generative model and sample from it" idea seems like an elegant and plausibly more broadly extensible way of incorporating theoretical priors alongside real data. ### **Miscellaneous (Alignment)** [Self-confirming prophecies, and simplified Oracle designs](https://www.alignmentforum.org/posts/wJ3AqNPM7W4nfY5Bk/self-confirming-prophecies-and-simplified-oracle-designs) *(Stuart Armstrong)*: This post presents a toy environment to model self-confirming predictions by oracles, and demonstrates the results of running a deluded oracle (that doesn't realize its predictions affect the world), a low-bandwidth oracle (that must choose from a small set of possible answers), a high-bandwidth oracle (that can choose from a large set of answers) and a counterfactual oracle (that chooses the correct answer, *conditional* on us not seeing the answer). **Read more:** [Oracles, sequence predictors, and self-confirming predictions](https://www.alignmentforum.org/posts/i2dNFgbjnqZBfeitT/oracles-sequence-predictors-and-self-confirming-predictions) ([AN #55](https://mailchi.mp/405e29e1f1cd/an-55-regulatory-markets-and-international-standards-as-a-means-of-ensuring-beneficial-ai)) and [Good and safe uses of AI Oracles](https://arxiv.org/abs/1711.05541) [Existential Risks: A Philosophical Analysis](https://docs.wixstatic.com/ugd/d9aaad_f82846cf065645ad87897f2a7281cebf.pdf) *(Phil Torres)*: The phrase "existential risk" is often used in different ways. This paper considers the pros and cons of five different definitions. **Rohin's opinion:** While this doesn't mention AI explicitly, I think it's useful to read anyway, because often which of the five concepts you use will affect what you think the important risks are. **AI strategy and policy** ========================== [AGI will drastically increase economies of scale](https://alignmentforum.org/posts/Sn5NiiD5WBi4dLzaB/agi-will-drastically-increase-economies-of-scale-1) *(Wei Dai)*: Economies of scale would normally mean that companies would keep growing larger and larger. With human employees, the coordination costs grow superlinearly, which ends up limiting the size to which a company can grow. However, with the advent of AGI, many of these coordination costs will be removed. If we can align AGIs to particular humans, then a corporation run by AGIs aligned to a single human would at least avoid principal-agent costs. As a result, the economies of scale would dominate, and companies would grow much larger, leading to more centralization. **Rohin's opinion:** This argument is quite compelling to me under the assumption of human-level AGI systems that can be intent-aligned. Note though that while the development of AGI systems removes principal-agent problems, it doesn't remove issues that arise due to different agents having different (non-value-related) information. The argument probably doesn't hold with [CAIS](https://www.fhi.ox.ac.uk/reframing/) ([AN #40](https://mailchi.mp/b649f32b07da/alignment-newsletter-40)), where each AI service is optimized for a particular task, since there would be principal-agent problems between services. It seems like the argument should mainly make us more worried about stable authoritarian regimes: the main effect based on this argument is a centralization of power in the hands of the AGI's overseers. This is less likely to happen with companies, because we have institutions that prevent companies from gaining too much power, though perhaps competition between countries could weaken such institutions. It could happen with government, but if long-term governmental power still rests with the people via democracy, that seems okay. So the risky situation seems to be when the government gains power, and the people no longer have effective control over government. (This would include scenarios with e.g. a government that has sufficiently good AI-fueled propaganda that they always win elections, regardless of whether their governing is actually good.) [Where are people thinking and talking about global coordination for AI safety?](https://www.lesswrong.com/posts/sM2sANArtSJE6duZZ/where-are-people-thinking-and-talking-about-global) *(Wei Dai)* **Other progress in AI** ======================== ### **Reinforcement learning** [Unsupervised State Representation Learning in Atari](https://arxiv.org/abs/1906.08226) *(Ankesh Anand, Evan Racah, Sherjil Ozair et al)* (summarized by Cody): This paper has two main contributions: an actual technique for learning representations in an unsupervised way, and an Atari-specific interface for giving access to the underlying conceptual state of the game (e.g. the locations of agents, locations of small objects, current remaining lives, etc) by parsing out the RAM associated with each state. Since the notional goal of unsupervised representation learning is often to find representations that can capture conceptually important features of the state without having direct access to them, this supervision system allows for more meaningful evaluation of existing methods by asking how well conceptual features can be predicted by learned representation vectors. The object-level method of the paper centers around learning representations that capture information about temporal state dynamics, which they do by maximizing mutual information between representations at adjacent timesteps. More specifically, they have both a local version of this, where a given 1/16th patch of the image has a representation that is optimized to be predictive of that same patches next-timestep representation, and a local-global version, where the global representation is optimized to be predictive of representations of each patch. They argue this patch-level prediction makes their method better at learning concepts attached to small objects, and the empirical results do seem to support this interpretation. **Cody's opinion:** The specific method is an interesting modification of previous Contrastive Predictive Coding work, but what I found most impressive about this paper was the engineering work involved in pulling metadata supervision signals out of the game by reading comments on disassembled source code to see exactly how metadata was being stored in RAM. This seems to have the potential of being a useful benchmark for Atari representation learning going forward (though admittedly Atari games are fairly conceptually straightforward to begin with). ### **Deep learning** [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) *(Zhilin Yang, Zihang Dai et al)*: XLNet sets significantly improved state-of-the-art scores on many NLP tasks, beating out BERT. This was likely due to pretraining on significantly more data, though there are also architectural improvements. **News** ======== [Funding for Study and Training Related to AI Policy Careers](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#footnote3_q75spr7): The Open Philanthropy Project has launched an AI policy scholarships program; the deadline for the first round is October 15. [Research Scholars Project Coordinator](https://www.fhi.ox.ac.uk/project-coordinator/) *(Rose Hadshar)*: FHI is looking to hire a coordinator for the Research Scholars Programme. Application deadline is July 10. [Contest: $1,000 for good questions to ask to an Oracle AI](https://www.alignmentforum.org/posts/cSzaxcmeYW6z7cgtc/contest-usd1-000-for-good-questions-to-ask-to-an-oracle-ai) *(Stuart Armstrong)*
5adc5276-162c-4915-8d77-ab962d61ae62
trentmkelly/LessWrong-43k
LessWrong
Rationality quotes: March 2010 This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions. * Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.) * Do not quote yourself. * Do not quote comments/posts on LW/OB. * No more than 5 quotes per person per monthly thread, please.
3e77de77-28e0-4d55-9aa3-59177e4ceee8
trentmkelly/LessWrong-43k
LessWrong
AGI will know: Humans are not Rational I have been hesitant to post this here for some time now, but I think in light of current developments surrounding ChatGPT and other recent advances I felt compelled to finally go ahead and find out what the venerable crowd at LessWrong has to say about it. Very briefly, what I want to propose is the idea that the vast majority of anything that is of importance or significant (personal) value to humans has come to be this way for emotional and non-rational reasons, and so it can be said that despite being the smartest creatures on this planet, we are very far from rational, objective agents whose motivations, choices and decisions are guided by logical and probabilistically justifiable causes. In the event that we will create an AGI, it will immediately discover that we quite literally can't be reasoned with and thus it must conclude we are inferior. To give you an idea of the depth and intractability of our irrationality, consider the following few facts. Of course, we humans understand perfectly why to us they make sense; this does not, however, make them objectively true, rational of logical. 1. Consider the disproportionate value we attach to having seen something (happen) with our own eyes compared to something we merely heard or read about. 2. We assess our self-worth and success based on a very biased and often self-serving comparison with some pseudo-random sample of social group(s). 3. We have things we like, prefer, want, crave, wish for - something that in and of itself can't ever hope to be rational or objective. Many of these things we don't need, quite a few of them are detrimental to our (long-term) well-being. 4. Our emotional state has an often decisive and ever-present impact on our reliability, reasoning, willingness for action, attitude and level of engagement. 5. Most people have at best a vague/tentative idea why they end up making a certain decision one way and not another and more often than not we make decisions the "wrong way around
49ef0de4-940e-4db0-a5e2-49b01e995a4e
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Will AGI cause mass technological unemployment? I believe that currently existing narrow AI systems are unlikely to cause technological unemployment. [This paper](https://www.cs.cmu.edu/~tom/pubs/AEA2018-WhatCanMachinesLearn.pdf) finds that narrow AI can automate individual tasks but not entire jobs; however, it specifically brackets off general AI. I've argued that we should still be worried about mass unemployment from artificial general intelligence (AGI) [here](https://www.reddit.com/r/neoliberal/comments/fkldhv/discussion_thread/fkva79v/?context=3). However, there is a deeper economic argument that automation *cannot* cause mass technological unemployment, even if AGI is involved. A superintelligence would have an *absolute advantage* over humans at every task, for by definition, it's better than humans at doing them. However, it would not have a comparative advantage at everything. An agent's *comparative advantage* is any task they can do at lower *cost* (including opportunity cost) than everyone else. Since all agents have a comparative advantage at something, humans would still be hired to do the tasks they can perform at lowest cost. This argument has been made [here](https://www.reddit.com/r/badeconomics/comments/6gw9vu/the_rise_of_the_machines_why_automation_is/ditjwyk/), but I'll try to elaborate on it. Take the classic [parable of Tiger Woods and the lawnmower](https://fs.blog/2009/08/should-tiger-woods-mow-his-own-lawn-the-principles-of-comparative-advantage/): > > Tiger is a great athlete. One of the best golfers to have every lived. Most likely he is better at other activities too. Tiger is probably in better shape than most: He can run faster, lift more, and work quicker. For example, Tiger can probably mow his lawn faster than anyone else. But just because he can mow his lawn fast, does this mean he should? > > > To answer this question we can use the concepts of opportunity cost and comparative advantage. Let’s say that Tiger can mow his lawn in 2 hours. In the same two hours he could film a television commercial for golf clubs and earn $100,000. By contrast, Joe, the kid next door can mow Tiger’s lawn in 4 hours. In that same 4 hours he could work at McDonald’s and earn $24. > > > In this example, Tiger’s opportunity cost is $100,000 and Joe’s is $24. Tiger has an absolute advantage in mowing lawns because he can do the work in less time. Yet Joe has a comparative advantage because he has the lower opportunity cost. The gains in trade from this example are tremendous. Rather than mowing his own lawn, Tiger should make the commercial and hire Joe to mow his lawn. As long as Tiger pays Joe more than $24 and less than $100,000, both of them are better off. > > > Now substitute Tiger Woods for a superintelligence. Even though the superintelligence can outperform humans at all cognitive tasks, it can't perform all cognitive tasks at lower *cost* than all humans. If a company with both a superintelligence and human employees tried to assign the superintelligence all of its tasks, the machine would quickly eat up a lot of compute and I/O, and the company would still have to find humans to spend some time training the AI to do them or writing up task descriptions. This could end up costing more money than simply assigning humans to do some of those tasks. Also, by allocating the AI's computational resources to the given task, the company forgoes opportunities to use those resources to beef up the AI's performance on other tasks. Do you buy this argument? Why or why not?
5e062949-13e4-4424-9e87-f4f98faa1d5a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Contra Hanson on AI Risk Robin Hanson wrote a new post [recapping his position on AI risk](https://www.overcomingbias.com/p/ai-risk-again) ([LW discussion](https://www.lesswrong.com/posts/AL6DRuE8s4yLn3yBo/robin-hanson-s-latest-ai-risk-position-statement)). I've been in the Eliezer AI-risk camp for a while, and while I have huge respect for Robin’s rationality and analytical prowess, the arguments in his latest post seem ineffective at drawing me away from the high-doom-worry position. --- Robin begins (emphasis mine): > First, **if past trends continue, then sometime in the next few centuries the world economy is likely to enter a transition that lasts roughly a decade**, after which it may double every few months or faster, in contrast to our current fifteen year doubling time. (Doubling times have been relatively steady as innovations are typically tiny compared to the world economy.) **The most likely cause for such a transition seems to be a transition to an economy dominated by artificial intelligence (AI).** Perhaps in the form of brain emulations, but perhaps also in more alien forms. And within a year or two from then, another such transition to an even faster growth mode might plausibly happen.  > > And adds later in the post: > The roughly decade duration predicted from prior trends for the length of the next transition period seems *plenty* of time for today’s standard big computer system testing practices to notice alignment issues. > > Robin is extrapolating from his table in [Long-Term Growth As A Sequence of Exponential Modes](https://mason.gmu.edu/~rhanson/longgrow.pdf): ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AqQ9qBkroFCKSqydd/lfc7iigd7rjtpzwenosq)I get that there’s a trend here. But I don’t get what inference rule Robin's trend-continuation argument rests on. Let’s say you have to predict whether dropping a single 100-megaton nuclear bomb on New York City is likely to cause complete human extinction. (For simplicity, assume it was just accidentally dropped by the US on home soil, not a war.) As far as I know, the most reliably reality-binding kind of reasoning is **mechanistic**: Our predictions about what things are going to do rest on **deduction** from known rules and properties of **causal models** of those things. We should obviously consider the causal implications of releasing 100 megatons worth of energy, and the economics of having a 300-mile-wide region wiped out. Should we also consider that a nuclear explosion that decimates the world economy would proceed in minutes instead of years, thereby transitioning our current economic regime much faster than a decade, thus violating historical trends? I dunno, this trend-breaking seems totally irrelevant to the question of whether a singular 100-megaton nuke could cause human extinction. Am I just not applying Robin’s trend-breaking reasoning correctly? After all, previous major human economic transitions were always leaps forward in productivity, while this scenario involves a leap backward… Ok, but what are the rules for this trend-extrapolation approach supposed to be? I have no idea when I’m allowed to apply it. I suspect the only way to know a rule like “don’t apply economic-era extrapolation to reason about the risk of a single bomb causing human extinction” is to first cheat and analyze the situation using purely mechanistic reasoning. After that, if there’s a particular trend-extrapolation claim that feels on-topic, you can say it belongs in the mix of reasoning types that are supposedly applicable to the situation. In our nuke example, there are two ways this could play out: 1. If your first-pass mechanistic reasoning lands you far from what’s predicted by trend extrapolation, e.g. if it says every human on earth dies within minutes, then hey, we’re obviously talking about a freak event and not about extrapolating economic trends. Duh, economic models aren’t designed to talk about a one-off armageddon event. You have to pick the right model for the scenario you want to analyze! Can I interest you in a model of extinction events? Did you know we’re at the end of a 200-million-year cycle wherein the above-ground niche is due to be repopulated by previously underground-dwelling rodents? Now *that’s* a relevant trend. 2. If your first-pass mechanistic reasoning lands you in the ballpark of a trend-extrapolation prediction, e.g. if it says that the main influence of a 100-megaton bomb on the economy would mostly be felt ten years after the blast event, and that economic activity would still be a thing, then you can wave in the trend-extrapolation methodology and advise that we ought to make some educated guesses about the post-blast world by reference to historical trends of human economic-era transitions. To steel-man why trend extrapolation might *ever* be useful, I think back to the [inside/outside view debates](https://www.lesswrong.com/tag/inside-outside-view), like the [famous case](https://www.lesswrong.com/posts/CPm5LTwHrvBJCa9h5/planning-fallacy) where your (biased) inside view of a project says you’ll finish it in a month, while the outside view says you’ll finish it in a year. But to me, the tale of the planning fallacy is only a lesson about the value of taking compensatory action when you’re counteracting a known bias. I’m still not seeing why outside-view trend-extrapolation would be a kind of reasoning that has the power to constrain your expectations about reality in the general case. Consider this argument: 1. Our scientific worldview is built on the fundamental assumption that the future will be like the past 2. Economic growth eras have proceeded at this rate in the past 3. Ergo the next economic growth era will likely proceed that way It’s invalid because step 1 is wrong. Scientific progress, as I understand it, is driven by mechanistic explanations, not by relating past observations to future observations by any kind of “likeness” metric. Progress comes from finding models that use fewer bits of information to predict larger categories of observations. Neither the timestamp of the observations nor their similarity to one another are directly relevant to the probability we should give to a model. I have a longer post about this [here](https://www.lesswrong.com/posts/c89N96R7nTse2RpNz/dissolving-the-problem-of-induction)**.** If I’m missing something, maybe Robin or someone else can write a more general explainer of how to operate reasoning by trend-extrapolation, and why they think it binds to reality in the general case. --- Next, Robin points out that today we can, with some difficulty, keep our organizations sufficiently aligned with our values: > Coordination and control are hard [as demonstrated today’s organizations]… but even so competition between orgs keeps them tolerable. That is, we mostly keep our orgs under control. Even though, compared to individual humans, large orgs are in effect “super-intelligences”. > > I’ll grant that large orgs can be said to be somewhat superintelligent in the sense that we expect AIs to be, but I think AIs are going to be *much more intelligent than that*. The manageable difficulty of aligning a group of humans tells us very little about the difficulty of aligning an AI whose intelligence is much greater than that of the smartest contemporary human (or human organization). I know Robin is skeptical about the claim that a software system can rapidly blow past the point where it sees planet Earth as a blue atomic rag doll, but it’s not mentioned in this recent post, and it’s a huge crux for me. --- Robin sees the problem of controlling superintelligent AI as similar to the problem of controlling an organization: > The owners of [AI organizations]… are well advised to consider how best to control such ventures… but such efforts seem most effective when based on actual experience with concrete fielded systems. For example, there was little folks could do in the year 1500 to figure out how to control 20th century orgs, weapons, or other tech. Thus as we now know very little about the details of future AI-based ventures, leaders, or systems, we should today mostly either save resources to devote to future efforts, or focus our innovation efforts on improving control of existing ventures. Such as via decision markets. > > I agree that control is complicated, and that our current knowledge about how to control AIs seems very inadequate, and that a valid analogy can be made to people in 1500 trying to plan for controlling 20th-century orgs. But today’s AI risk situation doesn’t map to anything in the year 1500 if we consider all its salient aspects together: 1. Control is complicated 2. The thing we’re going to need to control is likely more intelligent than a team of 100 Von Neumanns thinking at 100 subjective seconds per second 3. If the thing comes into existence and we’re not really good at controlling it, we likely go extinct 4. This whole existential bottleneck of a scenario is likely to happen within a decade or two of our discussion Aspect #1 is analogous to 1500, while points #2-4 aren’t at all. Robin presumably chose to only address aspect #1 because he doesn’t believe #2-4 are true premises, and he’s just summarizing his own beliefs, not necessarily the crux of his disagreement with doomers like me. Much of Robin’s post is thus talking past us doomers. E.g. this paragraph in his post isn’t relevant to the crux of the doomer argument: > Bio[logical] humans [controlling future AI-powered organizations] would be culturally distant, slower, and less competent than em [[whole-brain emulation](https://www.lesswrong.com/tag/whole-brain-emulation)] AIs. And non-em AIs could be stranger, and thus even more culturally distant… Yes, periodically some ventures would suffer the equivalent of a coup. But if, like today, each venture were only a small part of this future world, bio humans as a whole would do fine. Ems, if they exist, could do even better. > > As in his book [The Age of Em](https://ageofem.com/), he’s talking about a world where we’re in the presence of superhuman AI and we haven’t been slaughtered. If that world ever exists for someone to analyze, then I must already have been proven wrong about my most important doom claims. Robin does have things to say about the cruxier subjects in other posts. I recall that he’s previously elaborated on why he doesn’t expect AI to foom, with reference to observed trends in the software economy and software codebases. But these didn’t make it into the scope of his latest post. --- Near the end of the post, he tries to more directly address the crux of his disagreement with doomers. He gives a summary of an AI doomer view that I’d say is fairly accurate. I’d give this a passing grade on the Ideological Turing Test: > A single small AI venture might stumble across a single extremely potent innovation, which enables it to suddenly “foom”, i.e., explode in power from tiny compared to the world economy, to more powerful than the entire rest of the world put together. (Including all the other AIs.) > > ... > > Furthermore it is possible that even though this system was, before this explosion, and like most all computer systems today, very well tested to assure that its behavior was aligned well with its owners’ goals across its domains of usage, its behavior after the explosion would be nearly maximally non-aligned. (That is, orthogonal in a high dim space.) Perhaps resulting in human extinction. The usual testing and monitoring processes would be prevented from either noticing this problem or calling a halt when it so noticed, either due to this explosion happening too fast, or due to this system creating and hiding divergent intentions from its owners prior to the explosion. > > Finally, we get some arguments that seem more valid and directed at the crux of the AI doomer worldview. Robin argues that a foom scenario violates how economic competition normally works: > This [foom] scenario requires that this [AI] venture prevent other ventures from using its key innovation during this explosive period. > > But I think being superintelligent lets you create your own super-productive economy from scratch, regardless of what the human economy looks like. Robin argues that a superintelligent-AI-powered organization would have to solve internal coordination problems much better than large human organizations do: > It also requires that this new more powerful system not only be far smarter in most all important areas, but also be extremely capable at managing its now-enormous internal coordination problems. > > But I think superintelligent AI’s powers dwarf the difficulty of the challenging of coordinating itself. Robin argues: > [The AI foom scenario] it requires that this system not be a mere tool, but a full “agent” with its own plans, goals, and actions. > > But I think superintelligent systems, if they’re not agenty on the surface, have an agenty subsystem and are therefore just a small modification away from being agenty. Robin points out the lack of any historical precedent for “one tiny part [of the world] suddenly exterminating all the rest”. But I already think an intelligence explosion is destined to be a unique event in the history of the universe. --- Finally, a couple notable quotes near the end of Robin’s post that don’t seem to pass the Ideological Turing Test. Robin mentions that we’ve had a history of wrongly predicting that AI would automate human labor: > You might think that folks would take a lesson from our history of prior bursts of anxiety and concern about automation, bursts which have appeared roughly every three decades since at least the 1930s. Each time, new impressive demos revealed unprecedented capabilities, inducing a burst of activity and discussion, with many then expressing fear that a rapid explosion might soon commence, automating all human labor. They were, of course, very wrong.  > > But we AI doomers don’t see this as a data point to update on. We don’t see the impact of subhuman-general-intelligence AI as being relevant to our main concern. We believe there’s a critical AI capability threshold somewhere in the ballpark of human-level intelligence where we start sliding rapidly and uncontrollably toward the attractor state where AI permanently bricks the universe. Our situation in the present is that of a spaceship nearing the event horizon of a black hole, or a pile of Uranium nearing a neutron multiplication factor (k) of greater than 1. I was surprised to see this line because I don’t think it’s relevant at this point in the game to mention AI doomers invoking Pascal’s Wager: > Worriers often invoke a Pascal’s wager sort of calculus, wherein any tiny risk of this nightmare scenario could justify large cuts in AI progress. > > The most common AI doom position, and the surveyed position of over a third of people working in the field of AI if I recall correctly, is that there’s at least a 5% chance of near-term AI existential risk, not a “tiny” chance. --- My broader experience with Robin’s work is that his insights blow me away constantly. There’s just this one weird exception when he explains why AI risk isn’t that bad, and then I have the variety of confused and frustrated reactions that I’ve gone over in this post. While it’s common for people to be skeptical about AI doom claims, I feel like Robin’s non-doomer position summarized in his post is noticeably uncommon. I rarely see anyone else support their non-doomer view using arguments similar to these. I especially don’t see people reasoning from human economic-era trends as Robin likes to do. Of course I realize I might simply be wrong on this topic and he right. I hope at least one of us will be able to make a useful update.
4b87d027-0546-4644-8011-58dde78c22c7
trentmkelly/LessWrong-43k
LessWrong
A Personal (Interim) COVID-19 Postmortem I think it's important to clearly and publicly admit when we were wrong. It's even better to diagnose why, and take steps to prevent doing so again. COVID-19 is far from over, but given my early stance on a number of questions regarding COVID-19, this is my attempt at a public personal review to see where I was wrong. I have been pushing for better forecasting and preparation for pandemics for years, but I wasn't forecasting on the various specific questions about Pandemics on most platforms until at least mid-March, and I failed in several ways. Mea Culpa I was late to update about a number of things, and simply wrong in some cases even on the basis of known information. The failures include initially being slow to recognize the extent of the threat, starting out dismissive about masks, being more concerned about hospital-based transmission than ended up being justified, being overconfident in the response of the US government, and in early March, over-confidently getting a key fact wrong about transmission being at least largely via aerosol droplet versus physical contact. I have a number of excuses, of course. Most other experts agreed with my views, my grandfather passed away in January, followed by his wife in early March, I was under a lot of stress, I was very busy with my personal life, I was trying to do a number of other high-priority projects, I was not paying attention to the details, and so on. But predictive accuracy doesn't care about WHY you were wrong, especially since there are always such excuses. And the impact of my poor judgement was also likely misleading to others in the community. At the same time, I feel the perhaps egotistical need to note where I was correct early, and what I got right - followed by a clearer description of my failures. I started saying there would be PPE shortages due to COVID-19 by January, and was writing about the supply chain issues well before COVID. I submitted this paper November last year with Dave Denkenberg
ed143067-99aa-47dc-8651-197df24044e0
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
If Contractualism, Then AMF ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/3KsvReHD6CckfwHak/ghwvqk6qrjqukjqw67wp)   This post is a part of Rethink Priorities’ Worldview Investigations Team’s [CURVE Sequence](https://forum.effectivealtruism.org/s/WdL3LE5LHvTwWmyqj): “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else. This post considers the implications of contractualism for cause prioritization.  **Executive Summary** ===================== * Contractualism says that morality is about what we can justify to those affected by our actions. It explains why we care about morality by way of our interest in justifying ourselves to reasonable others. Insofar aswe care about the relevant kind of justification, we should be able to feel the pull of contractualism. And insofar as contractualism captures moral judgments that consequentialist moral theories don't, we may be inclined to give some credence to the view. * Contractualism says: When your actions could benefit both an individual and a group, don't compare the individual's claim to aid to the *group's* claim to aid, which assumes that you can aggregate claims across individuals. Instead, compare an individual's claim to aid to the claim of every other relevant individual in the situation by pairwise comparison. If one individual's claim to aid is a lot stronger than any other's, then you should help them. (That being said, contractualism is also compatible with saying: When the group is large enough and each group member’s claim is nearly as strong as the individual’s, you should help the group.) Contractualism, therefore, offers an alternative way of thinking about what it means to help others as much as possible with the resources available. * Accordingly, contractualism recommends a different approach to cost-effectiveness analysis than the one that’s dominant in EA. The standard EA view is that we should maximize expected value. The contractualist view is that insofar as we should maximize, we should be maximizing something like “the relevant strength-weighted moral claims that are addressed per dollar spent,” where the strength of an individual’s claim is largely determined by the gain in expected value for that individual if the act is performed. * So, it may be true that some x-risk-oriented interventions can help us all avoid a premature death due to a global catastrophe; maybe they can help ensure that many future people come into existence. But how strong is any individual's claim to your help to avoid an x-risk or to come into existence? Even if future people matter as much as present people (i.e., even if we assume that totalism is true), the answer is: Not strong at all, as you should discount it by the expected size of the benefit *and you don’t aggregate benefits across persons*. Since any given future person only has an infinitesimally small chance of coming into existence, they have an infinitesimally weak claim to aid. By contrast, you can help the world's poorest people a lot and with high confidence. So, given contractualism, the claims of the world's poorest people win. * Contractualism probably won't justify prioritizing all the things that EAs do related to global poverty. Essentially, it looks like it most clearly supports GiveWell-Top-Charity-like interventions—e.g., the kind of work that the Against Malaria Foundation (AMF) does—because of the high probability of significant impact. **1. Introduction** =================== What, if anything, could justify shifting resources away from the Against Malaria Foundation (AMF) and toward existential risk mitigation efforts (x-risk)? One way to feel the force of this question is to imagine having to justify such a shift *to a desperately poor person*—someone who may well die, or whose child may well die, because of your decision to allocate resources elsewhere. What, if anything, could you say to such a person? There might be an adequate answer. Even if so, it would be surprising if it were easy to make the case to someone in such dire circumstances. And insofar as you feel pressure to be able to give a satisfying case to such a person, you should feel the pull of *contractualism*—a moral theory that’s become increasingly popular over the last 40 years. The central contractualist thesis (at least as T. M. Scanlon ([1998](https://www.hup.harvard.edu/catalog.php?isbn=9780674004238)) understands it, which is the version of the position on which we’ll focus here) is that moral wrongness is a matter of *unjustifiability to others* (and moral rightness a matter of justifiability to others). This theory promises to explain why morality is motivating: we care about morality because we care about being able to justify ourselves to others. The purpose of this post is to consider the question of cause prioritization from the perspective of a non-consequentialist moral theory. We argue that contractualism generally looks more favorably upon interventions targeting very poor presently-existing individuals over interventions directed at protecting future people *even when the expected value* (EV) *of the other interventions would be higher*. Finally, we briefly consider the kinds of interventions that, you might think, would be especially important given contractualism: namely, interventions aimed at *redressing injustices* and those aimed at responding to claims grounded in *special relations* between beneficiaries and the benefactor. In those cases, we argue that contractualism does *not* require prioritizing injustices or those to whom we stand in special relations over the needs of badly-off distant strangers. To be clear: this document is *not* a detailed vindication of any particular class of philanthropic interventions. For example, although we think that contractualism supports a sunnier view of helping the global poor than funding x-risk projects, contractualism does not, for all our argument implies, entail that many EA-funded global poverty interventions are morally preferable to all other options (some of which are probably high-risk, high-reward longshots). In addition, contractualism doesn’t vindicate (say) global-poor–targeting interventions *on the same grounds* that many other theories would.[[1]](#fn5zel370ijcs) For example, while some would favor antimalarial bednet distribution because of the number of DALYs that such an intervention averts, if contractualism favors antimalarial bednet distribution, then this is for very different reasons (as will become clearer in the next couple of sections). Still, we might have thought that contractualism couldn't provide any guidance at all when it comes to cause prioritization. If the arguments of this post are correct, that isn't true. **2. Contractualism** ===================== The canonical statement of contemporary contractualism is Thomas Scanlon’s *What We Owe to Each Other* (1998).[[2]](#fn7v03h2urgx) We’ll mostly (though, as will emerge, not entirely) treat Scanlon’s version of contractualism as representative of the view—or, at any rate, the aspects of the view relevant to our discussion. Scanlon’s view has been the subject of an enormous amount of discussion[[3]](#fnix0e6alp35a); philosophers disagree about how best to interpret it as well as about its implications for practical ethics. However, it’s safe to say that the heart of Scanlon’s view is the following thesis: * **(C1)** An act is wrong if and only if, and because, it is unjustifiable to others.[[4]](#fnkhdoh3fu3hj) Many moral theorists would agree that (as (C1) implies) the wrong acts are precisely those that are unjustifiable to others. But (C1) says that when an act is wrong, it's wrong *because* it's unjustifiable to others. (C1) is therefore incompatible with the (more commonplace) claim that when an act is wrong, it’s also unjustifiable to others as a consequence of whatever wrong-making property it has.[[5]](#fn34laz9g8i95) Utilitarians, for instance, can accept that the wrong acts are precisely those that are unjustifiable to others, but they'll say that wrong acts are wrong not *because* they are unjustifiable to others but because they fail to maximize expected value (EV). And there is nothing special about utilitarianism in this respect. Virtually all non-contractualist moral theorists will hold that wrongness is grounded not in unjustifiability to others but in some other property, which might itself also ground unjustifiability to others. Hence (C1)’s distinctiveness. What makes an act unjustifiable to others? Scanlon formulates his answer to this question in a few ways, but here’s a simple formulation for present purposes: * **(C2)** An act is unjustifiable to others (under the circumstances) if and only if, and because, any principle that would permit its performance (under the circumstances) could be reasonably rejected by someone other than the agent.[[6]](#fn7veilgvlpt) We’ll clarify the post-“because” clause of (C2) below. But first, note that as (C1) and (C2) suggest, Scanlon holds that *wrongness* (of your act), *unjustifiability* (of your act) *to others*, and *reasonable rejectability* (of any principle that would permit your act) are a package deal: they’re all present in a given situation or all absent from it. Scanlon takes the co-presence of these properties to be a source of contractualism’s attractiveness, for their co-presence allows contractualism to vindicate the importance of morality and the possibility of moral motivation. Scanlon regularly repeats that we have powerful reasons to want to stand in relations of justifiability to others, or in what he calls relations of “mutual recognition.”[[7]](#fnsswxo8dgry) If morally right conduct is the conduct that allows us to stand in such relations with others, then morality turns out to be important and our ability to be moved by moral considerations turns out to be unmysterious. Let’s clarify the post-“because” clause of (C2) with one of Scanlon’s examples: > ***Transmitter Room*****.** The World Cup final is currently being played. Jones, a technician in the room containing the equipment that is causing the game’s worldwide television broadcast, has inadvertently come into contact with some exposed wires that are causing him very painful electric shocks. He is unable to extricate himself from his situation, but you can help him by turning off the machine with the exposed wires. Unfortunately, if you do this, then the World Cup broadcast will be shut down, and it won’t be able to be restarted for 10 minutes.[[8]](#fn75dayn19ldg) > > All contractualists of whom we’re aware (and many other non-consequentialists) share Scanlon’s view that you ought to help Jones, even though doing so would (we can assume[[9]](#fnvm5c35lkvig)) yield a much worse state of affairs overall than allowing him to continue being shocked. Suppose this is correct. How does the contractualist secure this verdict? The contractualist first imagines some *principles* that could be thought to bear on the present situation. Consider four examples: * **(P1)** Do whatever would make things go best. * **(P2)** Do whatever would help the most people. * **(P3)** Prevent people from suffering serious harms unless doing so would result in misfortunes to others. * **(P4)** Prevent serious harms to some even if in doing so you would expose many others to minor inconveniences. The contractualist now asks whether someone other than you could *reasonably reject* any of these principles. Scanlon discusses at great length what it takes to be able reasonably to reject a principle, with two constraints on reasonable rejection being especially important here. First, you can reasonably reject a principle only for what Scanlon calls “personal reasons,” i.e., reasons “tied to the well-being, claims, or status of individuals in [some] particular situation” (219). As Scanlon emphasizes, this requirement rules out various forms of interpersonal aggregation.[[10]](#fnm7bs9ffwt1) It isn’t possible for, say, five people to “combine” their personal reasons for objecting to a given principle into a stronger super-reason for objecting to this principle. Second, and relatedly, whether you can reasonably reject a principle is determined by (among other things) the sizes of the burdens that would befall different parties as a result of your acting on this principle vs. an alternative. In particular, A can’t reasonably reject some principle, P, merely on the grounds that your acting on P would impose some burden on A: after all, it could work out that your acting on any principle *other* than P would impose much greater burdens on someone else. By contrast and all else equal, B *can* reasonably reject some principle, P\*, on the grounds that your acting on P\* would impose burdens on B far greater than those that would be imposed on anyone other than B by your acting on some alternative principle.[[11]](#fnuqmvbx9dngp) Now consider how the contractualist will assess (P1)–(P4). On the one hand, your acting on (P4) would impose a burden on each of the millions of viewers, namely the inconvenience of missing out on 10 minutes of the World Cup. On the other hand, your acting on any of (P1)–(P3) would impose on Jones a vastly greater burden, namely continued seriously painful electric shocks. It thus seems clear that Jones’s personal reason for objecting to each of (P1)–(P3) is far stronger than anyone’s personal reason for objecting to (P4). More generally, each of the millions of viewers seems to be such that whatever personal reason she might have for objecting to a principle that would license your saving Jones is far less strong than Jones’s personal reason for objecting to any principle that would license your not saving him. So, no viewer can reasonably reject (P4); nor, it seems, can anyone else. Furthermore, it seems that Jones *can* reasonably reject each of (P1)–(P3) and, indeed, it seems that Jones can reasonably reject any relevantly similar principle. Given contractualism, then, saving Jones is the uniquely permissible act available to you. We just explained what a contractualist would say about *Transmitter Room* in line with Scanlon’s own presentation of his ideas. But in what follows, we put talk of “principles” and “reasonable rejection” to the side. It will simplify matters, and do no damage to the relevant features of contractualism, to formulate our discussion primarily in terms of *claims to assistance*. For example, in *Transmitter Room*, each viewer, V, has (as we’ll put it) a *claim* to your allowing the broadcast to continue in virtue of the burden that would befall V as a result of your saving Jones instead; Jones has a far stronger *claim* to your saving him in virtue of the far more serious burden that would (continue to) befall him as a result of your allowing the broadcast to continue instead; and, given contractualism, you should satisfy a strong claim to your assistance over any number of individually far weaker claims.[[12]](#fnbdokgnwjjhq) We’ve said enough to turn to contractualism’s implications for philanthropic interventions. Additional details about the theory, and about specific versions of it, will emerge along the way. **3. Welfare-Oriented Interventions** ===================================== In *Transmitter Room*, it’s natural to say that the (sizes of the) claims to assistance of the various parties are grounded in welfare considerations. This section considers some philanthropic interventions naturally characterized as welfare-oriented: interventions aimed at improving the welfare of the *present global poor* (hereafter, “the global poor" and various *x-risk* interventions. What we said above about *Transmitter Room* will illuminate what the contractualist should say about these interventions. **3.1. Global-Poor–Oriented and X-risk Interventions** ------------------------------------------------------ Suppose you’re in a position to do either but not both of these things: * Make a substantial donation to some charity helping the global poor (by, say, funding the manufacture and distribution of a large number of anti-malarial bednets); *or* * Make a substantial donation to some organization that will yield a tiny reduction of the probability of some extinction event (e.g., the destruction of humanity via a large asteroid’s collision with the earth). We’ll present a three-step argument that, given contractualism, if you ought to do anything, you ought to donate to the global poor vs. try to mitigate x-risk, other relevant things equal.[[13]](#fnhw6wtpoe7j5) The central plank of our argument is that, given contractualism, several people have moral claims to your donating to the global poor that are much stronger than even the strongest moral claims had by any individual to your trying to mitigate x-risk. After that, we’ll argue that our argument also supports the conclusion that, given contractualism, you ought to pursue interventions like AMF over suffering-risk-mitigating (s-risk-mitigating) interventions. We’ll conclude that contractualism supports interventions like AMF over x-risk interventions quite generally. ### **Step 1** First, we argue that contractualism recommends preventing: * *small* numbers of *grave* harms that are certain to occur without your intervention and certain not to occur with your intervention over: * even *far larger* numbers of *comparatively tiny* such harms. In defense of this step, let's return to *Transmitter Room* and note two things about the case. First, *Transmitter Room* involves no *uncertainty* about what will happen following each of your available actions. If you turn off the power, then Jones will stop being shocked and the viewers will be deprived of 10 minutes of the game; if you don’t, then Jones will continue to be shocked and the viewers won’t be deprived of any of the game.  Second, *Transmitter Room* plausibly counts as a case in which, in preventing one person from suffering serious harm, you would *cause* several others to suffer comparatively tiny harms. It isn’t a straightforward case of choosing whether to prevent one person from suffering a serious harm or to prevent many others from suffering comparatively tiny ones. But, if anything, it seems *harder* to justify your conduct to others when, in preventing one from suffering serious harm, you cause several others to suffer comparatively tiny harms than when you merely prevent serious harm from befalling one over preventing comparatively tiny harms from befalling many others.  So, if (as we argued in Section 1) contractualism yields the verdict that you ought to aid Jones in *Transmitter Room*, then, if anything, it more strongly supports the verdict that you ought to prevent small numbers of grave harms over even far larger numbers of comparatively tiny harms. This completes Step 1 of our argument. One point before we proceed.Call a harm that is certain to occur *without* your intervention and certain not to occur *with* your intervention an “otherwise certain harm.” We don’t assume that contractualism recommends preventing a serious otherwise certain harm over any number of *arbitrarily slightly* individually smaller otherwise certain harms. For example, a contractualist can hold that you ought to prevent 1 million people from undergoing electric shocks slightly less painful than the ones Jones is suffering rather than preventing just one person from suffering electric shocks just like the ones Jones is suffering. Our argument is compatible with such a “limited aggregationist” form of contractualism (and with its denial). Admittedly, some contractualists (and other non-consequentialists) reject all forms of aggregation, limited and unlimited. And, as indicated already, Scanlon himself insists that his theory is non-aggregationist, at least insofar as it forbids combining different individuals’ claims into super-claims of groups. However, Scanlon also rather famously argues that, sometimes, contractualism implies that the numbers count. For example, consider a case where you can prevent one person from dying or two others from dying but cannot aid all three. Suppose each person has a claim to your assistance of equal strength. Scanlon argues that you ought to save the two, by arguing that if you were to “just save” the one, or were to give the two groups equal chances to be saved, then the additional party on the side of the two would have a justified complaint against you, on the grounds that you would thereby act as though the case were a one-versus-one case, and so act in a way that is inappropriately responsive to his presence. The idea, then, is that contractualists ought to count numbers in at least some cases without aggregating interests or claims.[[14]](#fnc0620ar98q) Suppose that this argument, or something relevantly like it, succeeds. Now consider the following case: > ***Death/Paraplegia*****.** You can prevent Nora from dying or *n* other people from getting permanent paraplegia, but you can't save everyone. > > We find it plausible that if Scanlon’s argument for saving the greater number succeeds, then, for some *n*, you ought to save the *n*. Here’s our thinking: First, imagine a version of *Death/Paraplegia* in which *n* = 1. In this case, you ought to save Nora outright. Now imagine a version in which *n* = 2. In this case, if you were to save Nora, then, plausibly, the additional person on the side of the many has a complaint, for you would thereby treat the case as though the additional person weren't even there. (Recall that we’re supposing that Scanlon’s argument described above succeeds.) So, you ought to do something appropriately responsive to the additional person’s presence. Perhaps this will take the form of flipping a coin to determine whether to save Nora or the many; perhaps it will take the form of running a lottery heavily weighted on Nora’s side—the details won’t matter here. What matters is that whatever such an act would be, it would presumably be “closer” to saving the *n* than what it was permissible to do when *n* = 1 (namely, saving Nora outright). But now imagine iterating this process over and over, increasing the size of *n* by 1 each time. Eventually, we think, you’ll get to a point where outright saving the *n* is the only acceptable thing to do. This suggests that Scanlonian contractualism can accommodate some aggregation of lesser bads, at least if Scanlon’s argument for saving the greater number is successful. We note, though, that this “iteration” argument doesn’t commit you to *full* aggregation of the sort that’s incompatible with (say) the intuitive verdict in *Transmitter Room*. If you can prevent Nora’s death or n *papercuts* (each had by a different person), then no matter how big *n* is, each papercut plausibly counts as an *irrelevant utility*, (cf. [Kamm 1998](https://academic.oup.com/book/27604/chapter-abstract/197662999?redirectedFrom%3Dfulltext)) in virtue of how much less significant it is than Nora’s death, and so ought to be ignored for purposes of determining which group to save. ### **Step 2** We now argue that, given contractualism, what goes morally for grave otherwise certain harms and comparatively tiny otherwise certain harms, on the one hand, also goes morally for probabilities of harms and comparatively tiny probabilities of harms of roughly equal magnitudes, on the other. We first make an assumption about contractualism: The most plausible version of contractualism is (some version of) *ex ante* contractualism, hereafter “EAC.”[[15]](#fnlgwwqae69o) According to EAC, the strength of a person’s claim to your doing some act is grounded at least partly in the difference between the EV for this person of your doing this act and the EV for this person of your not doing this act.[[16]](#fn7f7762d1ygk) EAC helps to explain why, in *Transmitter Room*, Jones has a much stronger claim to your assisting him than does any other person to your assisting her instead, and thus also why aiding Jones is the only way of acting available to you that would be justifiable to others: For all *X*, where *X* ranges over all people other than Jones, (EV*Jones*(assisting Jones) - EV*Jones*(not assisting Jones)) >> (EV*X*(assisting *X*) - EV*X*(not assisting *X*)). EAC also helps to justify, from a contractualist perspective, certain attractive interventions in which very small risks of serious harms are imposed on large groups of people, even when doing so foreseeably causes some such harms. For example, as Frick ([2015](https://onlinelibrary.wiley.com/doi/abs/10.1111/papa.12058)) points out, EAC seems well-placed to vindicate *mass vaccination* programs in which very small independent risks of very serious harms are imposed on many people who receive the vaccines, with the all-but-certain consequence that *some* people will suffer these very serious harms: It’s often in the *ex ante* interest of each would-be vaccinated person to get the vaccine in question, even though it’s obviously not in the *ex post* interest of anyone who ends up getting seriously harmed by such a vaccine to get one. Such vaccination programs can thus plausibly be taken to be justifiable to all affected individuals *ex ante* even if they end up causing serious harms to some. (As Frick also notes, *ex post* contractualists have a comparatively difficult time accommodating such interventions; given *ex post* contractualism, it seems that those who end up harmed by such interventions have legitimate complaints.) Given EAC, it seems that just as some very serious otherwise certain harms morally outweigh any number of comparatively tiny otherwise certain harms, individual probabilities of harms morally outweigh any number of individual comparatively tiny probabilities of harms (faced by different individuals) of roughly equal magnitude. Consider this case: > ***Different Probabilities*****.** Amy has probability .5 of dying within the next hour. 200 other people each have probability .01 of dying within the next hour. Death would be no worse or less bad for anyone among these 201 people than for anyone else among these 201 people. You can reduce Amy’s probability of dying within the next hour to 0 or do the same for the 200, but you can’t do this for all 201 people. > > Given EAC, Amy’s claim to your eliminating her probability of dying within the next hour probably morally outweighs the competing claims. This is because, given EAC, Amy’s .5 probability of death probably relates morally to each .01 probability of death possessed by the 200 as Jones’s electric shocks relate morally to the 10 minutes of frustration that the World Cup final viewers would experience if you were to aid Jones. Amy’s claim trumps each claim with which it competes, just as Jones’s claim trumps each claim with which it competes. These considerations suggest a general lesson: Given EAC, you ought to make reductions in some people’s probabilities of suffering harms rather than making comparatively tiny reductions in even far more people’s probabilities of suffering harms of roughly equal magnitude. This completes Step 2 of our argument. Just as the contractualist doesn’t need to hold that one otherwise certain harm lexically morally outweighs any number of even arbitrarily slightly less serious otherwise certain harms, the contractualist doesn’t need to hold (and our argument doesn’t require) that, for any 0 < *n* < *n*+ ≤ 1, one instance of probability *n*+ of a harm of a given magnitude lexically morally outweighs any number of instances of probability *n* of harm of roughly equal magnitude. The contractualist can allow (and our argument is compatible with its being the case) that you ought to eliminate the probabilities of death possessed by 1,000 people who each have .49 probabilities of death rather than eliminate the .5 probability of death possessed by one other person. ### **Step 3** In this final step of our argument, we’ll defend these two claims: 1. If you donate to the global poor, then you’ll thereby cause several people to undergo reductions in their probabilities of suffering a serious harm (death from malaria, say), *whereas* 2. If you try to mitigate x-risk, then you’ll thereby cause a much larger number of people to under *comparatively tiny* reductions in their probabilities of suffering a harm of roughly equal magnitude (death from asteroid collision with the earth, say).[[17]](#fn81fpslds4db) Claim (1) would be obvious, or close to it, if trying to help the global poor were a matter of handing 1,000 bednets to 1,000 needy persons standing in front of you who will otherwise certainly die of malaria if not given the bednets and who will certainly live for many more happy years if given the bednets. And some philanthropic acts may indeed be nearly this direct in character (e.g., cash transfers). But donating to support AMF-like interventions ordinarily doesn’t take such a form. For most philanthropists, it's more like randomly selecting *n* people in some region of the world each to receive a bednet.[[18]](#fnvgfzouhj7n9) This means that the pool of potential beneficiaries of such an intervention is very large, even if the number of actual beneficiaries is comparatively small and the individual reduction in the probability of harm that’s secured for each of the potential beneficiaries isn’t very large. Nevertheless, the diffuse and uncertain character of your act shouldn’t prevent us from acknowledging that donating to help the global poor decreases the probability of suffering a serious harm for each person in the pool of potential beneficiaries (even if these probability reductions are fairly small). This conclusion is enough to establish (1). How about (2)? Restricting our attention to people, there are two groups of potential beneficiaries here: currently existing people and non-existent people. The currently existing people are the simple case. Here, it’s straightforwardly true that you’ll benefit a much larger number of people via trying to mitigate x-risk, as every currently existing person enjoys a reduction in their probability of suffering a significant harm (death). However, the beneficiaries of AMF-like interventions face the same harm. Moreover, the probability of providing any benefit to every currently existing person is very low by comparison. So, (2) is true for currently existing people. What about non-existent people? Here, the case for (2) seems even stronger. Suppose we think of non-existent people as [non-existent objects](https://plato.stanford.edu/entries/nonexistent-objects/)—things that, in some sense, haven’t acquired the property of existence. (This might be an *inaccurate* way to think of them, but it might nevertheless be an *innocent* way for present purposes.) Once we think of non-existent people this way, we should allow that there are very, very many non-existent, merely possible people, and that many (perhaps, strictly speaking, all) of them will undergo an expected decrease in risk of personal disaster as a result of your trying to mitigate x-risk. However, it seems inescapable that the biggest expected risk reductions that you’ll secure for any of these individuals will be *much smaller* than the biggest risk reductions that you’ll secure for at least some individuals by donating to help the global poor.[[19]](#fnsxk2pho2okj) There are two main considerations that recommend this conclusion. *First*, the probability, relative to your current evidence, that any given non-existent individual will exist is extremely small. The simple argument for this: If men produce 2 trillion sperm in their lifetimes and women have some 350,000 eggs, the odds of any particular individual coming into existence *even fixing the parents* are minuscule.[[20]](#fnavdqshhcz5) *Second*, by hypothesis, your intervention yields a very small reduction of the probability of a disaster of the sort targeted by your intervention (asteroid collision with the earth, say). These considerations support the conclusion that, in trying to mitigate x-risk, you don’t confer on any individual a risk reduction of serious harm that is more than tiny compared to at least some of the risk reductions of roughly equally serious harms that you confer on individuals in donating to support AMF-like interventions. Hence, whether we focus on currently existing or non-existent people, (2) looks plausible. Someone might object as follows: > All this depends on the risk of extinction and the amount of that risk that we can reduce. Sure, the probability of an extinction-causing asteroid strike may be very low; sure, we may not be able to do much to reduce the probability of at least some such asteroids hitting Earth. However, many people think that other risks are *much* higher—e.g., risks due to misaligned AI—and our ability to mitigate those risks much greater. So, while global poverty work may beat x-risk work for non-existent individuals, given the low probability that any one of them comes into existence, global poverty work doesn’t obviously beat x-risk work for currently existing individuals. > > This objection is fair: the argument indeed turns on the relative magnitudes of both the risks of serious harm (death) and the expected risk reductions. However, it’s important to recognize two points. First, apart from AI-related threats, the standard view is that the risk of extinction is quite low in absolute terms (Ord famously estimates the threat of an extinction-causing asteroid strike at 1/1,000,000 and the highest non-AI anthropogenic threat at 1/30.) Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point—i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good. Second, and turning our attention to AI-related extinction events, we should just concede: if the probability of extinction is high enough and the difference you can make large enough, then yes, that sort of x-risk work *clearly* beats global poverty work even given contractualism. However, there are some key caveats here. To start, the probability of extinction needs to be high enough *to affect currently existing humans*. Otherwise, the point we made about the low probability of any particular non-existent individual’s coming into existence will be decisive. Next, and again, reducing risk by one basis point will result in a significant discount of the ostensible benefit (preventing death). Finally, there is massive disagreement about the odds of there being an AI-related extinction event in the next several decades, with many estimates being quite low indeed (as demonstrated by [Open Philanthropy’s recent AI Worldview Contest](https://www.openphilanthropy.org/research/announcing-the-winners-of-the-2023-open-philanthropy-ai-worldviews-contest/%23:~:text%3DIn%2520March%25202023%252C%2520we%2520launched,transformative%2520AI%2520systems%2520could%2520pose.)).So, *some* people may have views about the level of risk and their prospects for mitigating it such that, even given contractualism, x-risk work beats global poverty work. However, we doubt that most do. So, we submit that most people should think that (A) if you donate to AMF-like interventions, then you’ll thereby cause several people to undergo reductions in their probabilities of suffering a serious harm, whereas (B) if you try to mitigate x-risk, then you’ll thereby cause a much larger number of people to under comparatively tiny reductions in their probabilities of suffering a harm of roughly equal magnitude. This completes Step 3 of our argument.[[21]](#fnpqjj2ytcuz8) Together, steps 1–3 yield the verdict that, given contractualism, you ought to do (Poor) over (Extinction), other relevant things equal. And we can generalize beyond this conclusion: A contractualist perspective will generally encourage philanthropic interventions reducing present people’s probabilities of suffering great harms over interventions reducing even far more numerous not-yet-existent people’s much smaller probabilities of suffering comparably sized harms. **3.2. S-risk Interventions?** ------------------------------ So far, we’ve made a contractualist case for helping the global poor over x-risk work. But some *suffering* risk (s-risk) interventions might try to reduce the threat of harms that would be *far worse* than even the worst harms faced by the global poor. For example, death due to malaria is very bad, but it would be far worse to spend decades or centuries being tortured by sadistic misaligned AI. Presumably, some interventions targeting *s-risks* are at least partly aimed at reducing risks of relevantly similar outcomes.[[22]](#fn47tae2lcyb3) Moreover, it’s arguable that, given contractualism, A can have a stronger claim to your assistance than B does in virtue of the fact that you’re in a position to make a *small* reduction in A’s probability of suffering a *horrendous* harm and you’re in a position to make a *much larger* reduction in B’s probability of suffering a *great but not horrendous* harm. For example, it’s plausible that, given contractualism, if you can reduce A’s probability of being kept alive and tortured for 50 years and then killed from .0002 to .0001 or you can reduce B’s probability of dying now from .2 to .1, then A has a stronger claim to your assistance than B.[[23]](#fnxfg5dcehofg) If this is correct, then it might be that some s-risk interventions that make tiny differences to people’s probabilities of suffering the harms that they target are preferable, given contractualism, to some interventions targeting the global poor that make much larger differences to people’s probabilities of suffering the harms that they target. This complication doesn’t undermine our argument that EAC supports trying to help the global poor over mitigating x-risk. Rather, the present complication brings out that some future-oriented interventions might be relevantly dissimilar from attempts to mitigate x-risk. In so doing, it raises the question of whether it might be appropriate, given contractualism, to finance interventions targeting some extremely remote s-risks, in particular risks of horrendous *individual* sufferings, over interventions aiding the present global poor. Again, everything will come down to the probabilities. We grant that, given contractualism, a person facing a small risk of a fate far worse than death *could* have a stronger claim to your assistance than another person facing a far greater risk of (“mere”) death. However, the probability of these kinds of s-risks may be very low. After all, the relevant probability here isn't just the probability of misaligned AI, but the probability of a very specific kind of misalignment, a considerable amount of power, the AI being able to create suffering that's orders of magnitude worse than death (to offset the low probability of the situation in the first place), etc. On top of all that, it's important to consider that when the probability of negative utility gets sufficiently low, contractualists may well have an independent reason to dismiss it as irrelevant utility, as mentioned earlier. After all, it would be very surprising if contractualists were willing to accept expected value theory, where *any* decrease in probability can be offset by a corresponding increase in the (dis)value of the option in question. In brief, this is because the point of discounting possible harms by their probabilities is *not* to satisfy the axioms of a particular decision theory; instead, it's to capture what counts as reasonable within a (perhaps somewhat idealized) community, as the ultimate goal here is some kind of justifiability to others. Just imagine someone trying to justify his decision not to spare a child from contracting malaria *to that very child* by saying that he thought was more important to make a marginal difference to the already-extraordinarily-low probability of some future people’s suffering intensely due to malicious AI. Even if the per-individual EV of the s-risk effort is higher, the justification seems quite strained. We conclude that contractualism generally favors interventions targeting the global poor over x-risk and s-risk interventions. In the rest of this post, we extend our discussion to justice-oriented interventions. **4. Injustice-targeting and Special-relations-based Interventions** ==================================================================== We’ve argued that contractualism recommends interventions like AMF over x-risk interventions. But someone might worry that, in fact, contractualism recommends neither type of intervention, instead preferring interventions targeting injustices and interventions based on special relations. The thought goes: because contractualism prizes fairness and justifiability to others, it will strongly prefer interventions that address unfairness and are sensitive to special duties to the near and dear. What should the contractualist say here? Consider, to take just one prominent example of an injustice-focused intervention, the much-discussed possibility of providing reparations for slavery to present-day African-Americans with ancestors who were slaves in the antebellum U.S. South. The present question, applied to reparations, is whether EAC implies that (American) philanthropists ought to fund efforts to lobby the US Government to pay reparations over, say, funding efforts to help the global poor. If EAC is committed to the priority of justice-based claims over welfare-based claims, then, perhaps, philanthropists would be wrong to prioritize the latter over the former. There are several assumptions baked into this suggestion, but let's focus on the basic question of whether EAC invariably requires the priority of justice-based claims over welfare-based claims, as that will suffice for our purposes. With that in mind, consider this case: > ***Window/Agony*****.** Bill recently threw a rock through Ann’s window out of pure malice. Bill now has the ability either to fix Ann’s window or to prevent Carl, a total stranger, from suffering an extremely slow and agonizing death, but he cannot do both of these things. > > Obviously, Bill has wronged Ann (in a particularly direct and objectionable manner), and Ann has a claim of corrective justice against Bill that he fix her window (among other things). Nevertheless, we find it plausible that *Bill ought to save Carl*. Our core thought is straightforward: Carl’s plight is so much more serious than Ann’s that Bill ought to help him rather than satisfy Ann’s (entirely valid) claim of corrective justice against him. (Given the circumstances, we also find it plausible that it would be immoral for Ann to press her claim against Bill.) This suggests that claims of corrective justice do not *necessarily* trump purely welfare-grounded claims. We need to consider the strengths of the claims on both sides. The case of American slavery is interesting precisely because it seems to ground such strong justice-based claims. However, even in this case, it isn’t clear the strength of the claim of any current would-be beneficiary of reparations is strong than, say, the claim of a child who would die from malaria without aid. And if that’s right, then the mere existence of justice-based claims doesn’t immediately show that philanthropists ought to act in any particular way. Welfare and injustice aren't the only possible contributors to claim strength. For example, some find it plausible that our fellow citizens have special claims to our assistance grounded in our *co-citizenship*. More generally, *special relations* between a needy party and the potential benefactor are often taken to amplify the strength of the needy party’s claim to assistance. What should the contractualist say about interventions sensitive to such relations? Considering all possible special relations that give rise to special claim-strengths would obviously be impossible in the present context, but we think that we can defend some general claims about this topic without getting too far into the weeds. In short, you don’t need to dismiss the validity of such contributors to claim strength to be skeptical that they yield plausible contractualism-friendly justifications for philanthropic interventions targeting *much-better-off* individuals who satisfy the relevant conditions over *much-worse-off* individuals who don’t, or ones securing *much smaller* EV increases for individuals who satisfy the relevant conditions over ones securing *much larger* EV increases for individuals who don’t.[[24]](#fnt2d74b6t1d9)  For example, even if co-citizenship is a claim-strength enhancer, we doubt that a plausible contractualist case can be made for aiding homeless people in your own affluent nation over far worse-off people on the other side of the planet.[[25]](#fndxu01lsi1g8) Welfare considerations, taken on their own, are not everything, given most versions of contractualism, but they are extremely important. And so we suspect that, given contractualism, interventions like AMF will generally fare better than interventions sensitive to special relations like co-citizenship targeting much better-off people and yielding much smaller EV increases. By contrast, interventions targeting (say) homeless people in your own nation might well secure significantly larger EV increases for worse-off people than x-risk or injustice-targeting interventions, and so might well fare better than such interventions, by the lights of contractualism. **5. Conclusion** ================= Many EAs may be more sympathetic to some form of consequentialism than they are to Scanlon’s contractualism. Still, they might put some credence in a view of this kind. And if they do, then it’s worth considering the implications of contractualism for cause prioritization. As we’ve argued, we get quite a different picture than the one that stems from a general commitment to maximizing EV. Indeed, if contractualism is true, then there are many cases in which it would be *wrong* to maximize EV. Instead, insofar as contractualism supports maximization, it tells us to maximize something like “the relevant strength-weighted moral claims that are addressed per dollar spent”—where, as we’ve seen, the strength adjustment involves discounting the claim by the probability of being able to help the individual in question. Since those probabilities are very low for any particular future individual, and since contractualism rejects the view that a sufficient number of extremely weak claims can sum to outweigh a very strong claim, the view generally favors reliably helping the global poor (whose claims are often very strong) over many other options.   **Acknowledgments** =================== ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/c5m8vAxpJgJJ2XGFu/yyzoqjvtmgntnoqgfhww)The post was written by Bob Fischer and an anonymous collaborator. Thanks to the members of WIT and Emma Curran for helpful feedback. The post is a project of [Rethink Priorities](https://rethinkpriorities.org/), **a global priority think-and-do tank**, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you're interested in Rethink Priorities' work, please consider subscribing to [our newsletter](https://www.rethinkpriorities.org/newsletter). You can explore our completed public work [here](https://www.rethinkpriorities.org/research).   1. **[^](#fnref5zel370ijcs)** That being said, it’s perfectly compatible with contractualism to prioritize interventions that will help many people per dollar over those that don't. This is because there are so many people who are roughly on a par in terms of being badly off. And all else equal, if you can help more very badly off people, then contractualism says you should, as there probably isn’t any principle that someone couldn’t reasonably reject that would justify assisting *n* very-badly-off strangers when you could just as easily help *n*+1 very-badly-off strangers. What, after all, would you say to the +1? 2. **[^](#fnref7v03h2urgx)** Contractualism is distinct from *contractarianism*, a moral and political theory whose central historical advocate is Thomas Hobbes ([1651](https://oll.libertyfund.org/title/hobbes-the-english-works-vol-iii-leviathan)) and whose chief contemporary proponent is David Gauthier ([1986](https://global.oup.com/academic/product/morals-by-agreement-9780198249924)). 3. **[^](#fnrefix0e6alp35a)** Including on episode 6 of season 1 of the TV series *The Good Place*, entitled “What We Owe to Each Other.” 4. **[^](#fnrefkhdoh3fu3hj)** This isn’t a direct quote, but it’s close enough. See, e.g., Scanlon (1998: 189). A qualification: Scanlon says that contractualism isn’t a *comprehensive* theory of moral wrongness. Rather, according to Scanlon, contractualism is “meant to characterize…only that part of the moral sphere that is marked out by certain specific ideas of right and wrong, or ‘what we owe to others’” (178). (Parenthetical page number citations are to Scanlon (1998).) So, it might be clearer to formulate (C1) as the claim that an act is wrong *by the lights of the dimension of morality concerned with what we owe to others* if and only if, and because, it is unjustifiable to others. However, we’ll omit such qualifications as they aren’t relevant to philanthropic decision-making. If we owe it to others to favor GHW-like interventions over x-risk interventions, then that’s enough for practical purposes. 5. **[^](#fnref34laz9g8i95)** In fact Scanlon can be interpreted as endorsing an even tighter connection between unjustifiability to others and wrongness than the one that we have attributed to him, for he can be read as holding that wrongness *is identical to* unjustifiability to others. Many people in fact read him this way; indeed, the *Stanford Encyclopedia of Philosophy* article about his view attributes this view to him without defense or citation; see Ashford and Mulgan (2018: Section 1). 6. **[^](#fnref7veilgvlpt)** Like (C1), (C2) is not a direct quotation, but see Scanlon (1998: 153, 189, 195). 7. **[^](#fnrefsswxo8dgry)** See for example Scanlon (1998: Chapter 4, Sections 3–5). 8. **[^](#fnref75dayn19ldg)** This case is adapted from Scanlon (1998: 235). 9. **[^](#fnrefvm5c35lkvig)** “We can assume” because many contractualists will deny that it is worse overall for the millions to suffer their inconveniences than for Jones to suffer his electric shocks, either on the grounds that nothing is just plain worse than anything else or on other grounds. But this isn’t part of contractualism: it’s a separate claim that some non-consequentialists find plausible on independent grounds. In any case, we can safely ignore this debate here. 10. **[^](#fnrefm7bs9ffwt1)** See Scanlon (1998: 229). 11. **[^](#fnrefuqmvbx9dngp)** See Scanlon (1982: 111; 1998: 229–230). 12. **[^](#fnrefbdokgnwjjhq)** This talk of “claims” is a commonplace in contractualist and contractualism-adjacent moral theory; see for example Voorhoeve (2014) for a representative discussion. Corresponding talk of “complaints” (against a person grounded in his failing to assist one or against a principle) is also common. See Scanlon (1998: 229ff) for relevant discussion. 13. **[^](#fnrefhw6wtpoe7j5)** We’ll bracket the “if you ought to do anything” clause in what follows, but we include it here to flag that there’s some room for debate about whether contractualism implies that you have any obligations at all regarding your philanthropic activities. However, on the assumption that you *do* have some obligations, we think they’re structured as we describe here. 14. **[^](#fnrefc0620ar98q)** This “tiebreaker” argument (as it has come to be called) has been the object of a great deal of discussion, much of it critical. See, e.g., Otsuka ([2000](https://sas-space.sas.ac.uk/659/1/M_Otsuka_Scanlon.pdf)). 15. **[^](#fnreflgwwqae69o)** This is a point of departure on our part from Scanlon, for Scanlon endorses not EAC but *ex post* contractualism. See Frick ([2015](https://onlinelibrary.wiley.com/doi/abs/10.1111/papa.12058)) for discussion. It has been argued that *ex post* contractualism also recommends aiding present needy individuals over x-risk mitigation; see Curran ([forthcoming](https://www.emmajcurran.co.uk/_files/ugd/991224_8796a59b38794a728373489cc775303d.pdf)). So, apart from any principled reasons to prefer EAC to *ex post* contractualism, it’s valuable to show that we get the same practical result either way. 16. **[^](#fnref7f7762d1ygk)** There are almost certainly additional factors that contribute to the strength of a person’s claim, given EAC, but the EV difference factor is the one most important to our present purposes. 17. **[^](#fnref81fpslds4db)** We don’t mean that it would be roughly equally bad overall for a death due to asteroid collision with the earth to occur as for a death due to malaria to occur. The former would be overall vastly worse than the latter, given that it would also involve the deaths of an enormous number of other morally significant individuals, human and non-human alike, the extinction of humanity, and other such horrendous outcomes. We mean only that a person’s death due to asteroid collision with the earth *taken on its own* and a person’s death due to malaria *taken on its own* are roughly equally serious. 18. **[^](#fnrefvgfzouhj7n9)** Our thinking here is that when you support AMF-like interventions, there are many *potential* beneficiaries, but it’s also virtually certain that there will be several *actual* beneficiaries. This seems to make the act relevantly like randomly selecting some people from a large group each to receive a bednet. We could avoid this complexity by focusing on different poverty-focused interventions, like direct cash transfers, where the impact on an individual doesn't depend on that individual counterfactually suffering some specific harm and the likelihood of some benefit or other is higher. But the argument is stronger if it works even for AMF, so we focus on it here. 19. **[^](#fnrefsxk2pho2okj)** We emphasize that contractualism, as we understand it, doesn’t utterly ignore the claims of future people. It merely holds that the strengths of their claims are appropriately sensitive to the sorts of probability considerations just mentioned. 20. **[^](#fnrefavdqshhcz5)**Someone might object that we don't owe our justifications de re to each non-existent person; instead, we owe them de dicto to "future people," whoever they happen be (see, e.g., [Hare 2007](https://www.journals.uchicago.edu/doi/full/10.1086/512172)). Given as much, the probability of benefitting future people is much higher. However, this isn't a plausible move for contractualists to make. The appeal of contractualism is partially based on its ability to explain why we care about morality at all: namely, in terms of the value we place on mutual recognition and acting in ways that are justifiable to others. However, there's a psychological constraint here: most people don't care about justifying themselves to people who might live in the far future. So, if we go for the de dicto interpretation of contractualism, we end up with a view that creates the very problem that contractualism was supposed to solve: that is, we end up with a moral view where the rightness and wrongness of actions is based on a property ((un)justifiability to others) that is too far from our actual concerns. So, contractualists should stick with the de re interpretation of their view. 21. **[^](#fnrefpqjj2ytcuz8)** Our Meinongian treatment of not-yet-existent individuals is actually *helpful* to x-risk interventions, as many philosophers (including many contractualists) deny that future people have claims to our assistance at all, whereas treating these individuals as somehow real makes it more plausible that they have claims to our assistance. 22. **[^](#fnref47tae2lcyb3)** Not all risks that might be classified as “s-risks” are risks even partly of large *individual* sufferings. For example, a risk of creating several googolplexes of insects throughout the cosmos, each of whom would have an on-balance hedonically slightly negative life, will qualify as an s-risk on some classifications, but this is not a risk even partly of large individual sufferings. Presumably, though, misaligned AI could generate the relevant kinds of situations. 23. **[^](#fnrefxfg5dcehofg)** Consider a version of EAC on which the sole determinant of the strength of a person’s claim is the difference between (a) the claimant’s EV of the claim’s being satisfied and (b) the claimant’s EV of the claim’s not being satisfied. That view will clearly yield this verdict, given suitable plausible assumptions about the personal badness of 50 years of torture. But there are other plausible versions of EAC that will yield this verdict too. For example, versions of EAC that include prioritarian assumptions—e.g., that the lowness of the EV for a given person of your not assisting him makes a difference in its own right to the strength of his claim to your intervention—will have the same upshot. (Many proponents of EAC accept views like this; accordingly, many proponents of EAC don’t treat positive and negative utility symmetrically, morally speaking.) 24. **[^](#fnreft2d74b6t1d9)** Note that our present point is not the (widely accepted) claim that a given amount of money donated to an aid agency targeting the global poor will tend to do more overall good than the same amount of money donated to an agency targeting the worst off people in your own affluent country. ([This study](https://www.centreforeffectivealtruism.org/blog/the-value-of-money-going-to-different-groups) has some relevant figures.) 25. **[^](#fnrefdxu01lsi1g8)** Though if you are co-citizens with homeless people who are *not* much better off than the worst-off aidable global poor, then this will complicate matters.
338a1915-f088-46b2-ad7d-00ee13e9bc50
trentmkelly/LessWrong-43k
LessWrong
Seeking Interns/RAs for Mechanistic Interpretability Projects UPDATE: The deadline for applying for this is 11:59pm PT Sat 27 Aug. I'll be selecting candidates following a 2 week (paid) work trial, 10hr/week starting Sept 12 where participants pick a concrete research idea and try to make progress on it. ---------------------------------------- Hey! My name is Neel Nanda. I used to work at Anthropic on LLM interpretability under Chris Olah (the Transformer Circuits agenda), and am currently doing some independent mechanistic interpretability work, most recently on grokking (see summary). I have a bunch of concrete project ideas, and am looking to hire an RA/intern to help me work on some of them!  Role details: The role would be remote by default. Full-time (~40 hrs/week). Roughly for the next 2-3 months, but flexible-ish. I can pay you $50/hr (via a grant).  What I can offer: I can offer concrete project ideas, help getting started, and about 1hr/week of ongoing mentorship. (Ideally more, but that's all I feel confident committing to) I'm much better placed to offer high-level research mentorship/guidance than ML engineering guidance. Pre-requisites: As such, I'd be looking for someone with existing ML skill, enough to be able to write and run simple experiments, esp with transformers. (e.g. writing your own GPT-2-style transformer from scratch in PyTorch is more than enough). Someone who’s fairly independently minded and could mostly work independently if given a concrete project idea, and mostly needs help with high-level direction. Familiarity with transformer circuits, and good linear algebra intuitions are nice-to-haves, but not essential.  I expect this is best suited to someone who wants to test fit for doing alignment work, and is particularly interested in mechanistic interpretability. Project ideas: Two main categories.  1. One of the future directions from my work on grokking, interpreting how models change as they train. 1. This has a fairly mathsy flavour and involves training and interpreting tiny mo
6c9cf032-5ade-4b8f-bd24-9e58f8c0cb71
trentmkelly/LessWrong-43k
LessWrong
Number bias The New York Times ran an editorial about an interesting type of cognitive bias: according to the article, the fact that our system of timekeeping is based on factors of 24, 7, etc. and the fact that we have 10 fingers profoundly influences our way of thinking. As the article explains, this bias is distinct from scope neglect and misunderstanding of probability. Has anyone else heard of this kind of "number bias" before? Also, is this an issue that deserves further study on LessWrong?
5232db30-7e6c-4151-a80b-fc11a8d4657c
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI The [Center on Long-Term Risk (CLR)](https://longtermrisk.org/) is focused on [reducing risks of astronomical suffering](https://foundational-research.org/cause-prioritization-downside-focused-value-systems/), or *s-risks*, from transformative artificial intelligence (TAI). S-risks are defined as risks of cosmically significant amounts of suffering[[1]](#fn-msu3iqSkBnu6vxzgc-1). As has been [discussed elsewhere](https://foundational-research.org/s-risks-talk-eag-boston-2017/), s-risks might arise by malevolence, by accident, or in the course of conflict. We believe that s-risks arising from conflict are among the most important, tractable, and neglected of these. In particular, strategic threats by powerful AI agents or AI-assisted humans against altruistic values may be among the largest sources of expected suffering. Strategic threats have historically been a source of significant danger to civilization (the Cold War being a prime example). And the potential downsides from such threats, including those involving large amounts of suffering, may increase significantly with the emergence of transformative AI systems. For this reason, our current focus is technical and strategic analysis aimed at addressing these risks. There are many other important interventions for s-risk reduction which are beyond the scope of this agenda. These include macrostrategy research on questions relating to s-risk; reducing the likelihood of s-risks from hatred, sadism, and other kinds of malevolent intent; and promoting concern for digital minds. CLR has been supporting work in these areas as well, and will continue to do so. In this sequence of posts, we will present our research agenda on *Cooperation, Conflict, and Transformative Artificial Intelligence.* It is a standalone document intended to be interesting to people working in AI safety and strategy, with academics working in relevant subfields as a secondary audience. With a broad focus on issues related to cooperation in the context of powerful AI systems, we think the questions raised in the agenda are beneficial from a range of both normative views and empirical beliefs about the future course of AI development, even if at CLR we are particularly concerned with s-risks. The purpose of this sequence is to * communicate what we think are the most important, tractable, and neglected technical AI research directions for reducing s-risks; * communicate what we think are the most promising directions for reducing downsides from threats more generally; * explicate several novel or little-discussed considerations concerning cooperation and AI safety, such as surrogate goals; * propose concrete research questions which could be addressed as part of an [CLR Fund-supported project](https://forum.effectivealtruism.org/posts/BmBpZjNJbZibLsYnu/first-application-round-of-the-eaf-fund-1), by those interested in working as a full-time [researcher](https://foundational-research.org/work-with-us/) at CLR, or by researchers in academia, or at other EA organizations, think tanks, or AI labs; * contribute to the portfolio of research directions which are of interest to the longtermist EA and AI safety communities broadly. The agenda is divided into the following sections: * **[AI strategy and governance](https://www.lesswrong.com/s/p947tK8CoBbdpPtyK/p/KMocAf9jnAKc2jXri)**. What does the strategic landscape at time of TAI development look like (e.g., unipolar or multipolar, balance between offensive and defensive capabilities?), and what does this imply for cooperation failures? How can we shape the governance of AI so as to reduce the chances of catastrophic cooperation failures? * **[Credibility](https://www.lesswrong.com/s/p947tK8CoBbdpPtyK/p/8xKhCbNrdP4gaA8c3)**. What might the nature of credible commitment among TAI systems look like, and what are the implications for improving cooperation? Can we develop new theory (such as open-source game theory) to account for relevant features of AI? * **[Peaceful bargaining mechanisms](https://www.lesswrong.com/s/p947tK8CoBbdpPtyK/p/8xKhCbNrdP4gaA8c3)**. Can we further develop bargaining strategies which do not lead to destructive conflict (e.g., by implementing surrogate goals)? * **[Contemporary AI architectures](https://www.lesswrong.com/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the)**. How can we make progress on reducing cooperation failures using contemporary AI tools — for instance, learning to solve social dilemmas among deep reinforcement learners? * **[Humans in the loop](https://www.lesswrong.com/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the)**. How do we expect human overseers or operators of AI systems to behave in interactions between humans and AIs? How can human-in-the-loop systems be designed to reduce the chances of conflict? * **[Foundations of rational agency](https://www.lesswrong.com/s/p947tK8CoBbdpPtyK/p/sMhJsRfLXAg87EEqT)**, including bounded decision theory and acausal reasoning. *We plan to post two sections every other day. The next post in the sequence, "Sections 1 & 2: Introduction, Strategy and Governance" will be posted on Sunday, December 15.* --- 1. By "cosmically significant", we mean significant relative to expected future suffering. Note that it may turn out that the amount of suffering we can influence is dwarfed by suffering that we can't influence. By "expected suffering in the future" we mean "expectation of action-relevant suffering in the future". [↩︎](#fnref-msu3iqSkBnu6vxzgc-1)
f53354e6-f4f6-489c-bc36-3041260bdd51
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Feedback is important. It's true in Machine Learning, and it's true in writing. So when you write a cool new post on LW, you generally hope for feedback. Positive feedback, fingers crossed, but good negative feedback is also very nice for improving your writing chops and your ideas. Yet getting meaningful feedback can prove difficult on LW. This Challenge aims at starting a community trend to solve this problem.First, I need to frame the issue. How do you get feedback on LW? Through two mechanisms: karma and comments.High karma (let's say above 50) show that people find your post valuable. Maybe they even find your post clear and well-written; they might even agree with you. It would be cool to have comments, but you still have valuable feedback without any.Low karma (The "Low Karma" filter is for less than -10, but I'll count any negative karma) show that people actively disagree with either your idea or your way of presenting it. That's a useful signal, but one that really warrants comment to understand what people disliked enough to downvote you.Comments (let's say at least 3 comments/comment threads from 3 different authors, all concerning the actual subject of the post instead of some other thing mentioned in the post), regardless of the karma, show an involvement of readers. Once again, they can be positive or negative; what really matters is that they interact with the subject of the post.Okay. But what happens when none of these criteria is met? Posts with karma between 0 and 49 and less than 3 comments are not that rare: looking at last week's frontpage posts (from Monday to Sunday my time, CET), there are 18 such posts out of 47, which is about 38%. Not a majority, but still more than a third. And I'm pretty sure many of the posts that didn't make the cut could benefit from more feedback.This is why I'm proposing the (Unofficial) Less Wrong Comment Challenge: committing to read and comment all posts in a given category, or by a given author, or with a given tag, in a specified period. The simplest example would be something like: read and comment every new frontpage post on LW for the rest of November. But the parameters allow for tweaks. For example, maybe you're free this week, but not after. And maybe you don't feel like you have the level for reading Alignment Forum posts. In that context, you could challenge yourself to read and comment every frontpage post not from the AF for the rest of the week. Or maybe you're into a specific author, and you want to read and comment all of their posts as they go along.An obvious response to this challenge is that you don't know what to comment. Which completely overlooks the fact that half, maybe more, of the feedback signal comes from how much did you enjoy the post, found it interesting, and everything else in your experience of reading this post. No one is unqualified to share their experience. Just a "I really enjoyed this post, even if I had nothing else to say" is useful feedback. So is "I'm not sure why, but I had trouble connecting with the arguments of the post".So you have it. This is the (Unofficial) Less Wrong Comment Challenge. I don't add any constraints on when you need to finish reading the posts of the day, because you'll probably need your own, if only for timezone reasons. If you're interested, just add a comment to this post with your own challenge, and maybe updates in the corresponding thread.Disclaimer: this idea came to me because I'm unsatisfied with the level of feedback of my own posts. But instead of finding a solution just for me, I decided that pushing a new social norm, as hard as it was, might be more productive for the community. That being said, feel free to comment my posts, including this one." The passage is from a user-contributed post on LessWrong. Can you elaborate on the potential drawbacks of the proposed Less Wrong Comment Challenge, considering factors such as time commitment, quality of feedback, and the potential for echo chambers? Certainly, while the Less Wrong Comment Challenge aims to increase engagement and provide more feedback for authors, there are potential drawbacks to consider. 1. Time Commitment: One of the most immediate concerns is the time commitment required from the participants. Reading and commenting on every new frontpage post, for instance, can be quite time-consuming. This might deter potential participants or lead to rushed, less thoughtful comments if participants are trying to keep up with a high volume of posts. 2. Quality of Feedback: While the challenge encourages more feedback, it doesn't necessarily ensure the quality of that feedback. Participants may feel obligated to comment even when they don't have something meaningful or constructive to say, leading to a potential influx of low-quality comments. This could dilute the value of the feedback received and potentially discourage authors if they receive a lot of comments that aren't particularly helpful. 3. Potential for Echo Chambers: Encouraging people to comment more frequently could potentially lead to the creation of echo chambers, especially if the challenge attracts a group of like-minded individuals. If the same people are regularly commenting on each other's posts, there's a risk that they could reinforce each other's views without introducing new perspectives. This could limit the diversity of thought and stifle intellectual growth. 4. Selection Bias: If participants choose to comment on posts based on specific categories, authors, or tags, it could lead to a bias in feedback. Some posts might receive an abundance of feedback, while others are largely ignored. This could skew the perception of what topics or types of posts are valuable or popular. 5. Potential for Negative Interactions: Increased commenting could potentially lead to more disagreements or negative interactions. While constructive criticism is valuable, not all commenters may deliver their feedback in a respectful or helpful manner. This could lead to authors feeling discouraged or attacked. 6. Misinterpretation of Silence: The challenge operates on the assumption that a lack of comments equates to a lack of engagement or feedback. However, this might not always be the case. Some readers might prefer to upvote or downvote without commenting, or they might engage with the content in other ways (such as sharing it with others or applying the ideas in their own lives). The focus on comments could potentially undervalue these other forms of engagement. Reflecting on your previous analysis, can you propose modifications to the Less Wrong Comment Challenge to mitigate the potential drawbacks you identified? Certainly, here are some modifications that could help mitigate the potential drawbacks: 1. Flexible Commitment: Instead of requiring participants to comment on every new frontpage post, the challenge could be more flexible. For instance, participants could commit to commenting on a certain number of posts per week. This would reduce the time commitment and pressure, potentially leading to more thoughtful and meaningful comments. 2. Quality over Quantity: The challenge could emphasize the importance of providing quality feedback. This could be encouraged through guidelines or examples of constructive feedback. Encouraging thoughtful, detailed responses over quick, generic comments can improve the value of the feedback received. 3. Diverse Participation: To avoid the creation of echo chambers, the challenge could actively encourage participation from a diverse range of individuals. This could be achieved through outreach or by making the challenge more accessible to newcomers or those with differing viewpoints. 4. Balanced Selection: To avoid selection bias, participants could be encouraged to comment on a variety of posts, not just those that align with their interests or beliefs. This could be facilitated through a random post selection feature or by providing incentives for commenting on less popular or controversial topics. 5. Positive Interaction Guidelines: To mitigate the potential for negative interactions, the challenge could establish guidelines for respectful and constructive feedback. This could include tips on how to disagree in a respectful manner and reminders about the importance of empathy and understanding in communication. 6. Recognizing Other Forms of Engagement: While the focus of the challenge is on commenting, it could also acknowledge and value other forms of engagement. This could include upvoting, downvoting, sharing posts, or applying the ideas presented. Recognizing these forms of engagement could provide a more holistic view of how readers are interacting with the content. 7. Feedback Training: To ensure the feedback is helpful, participants could be given some training or resources on how to give effective feedback. This could include tips on how to structure feedback, the importance of being specific, and how to give criticism in a constructive way. By implementing these modifications, the Less Wrong Comment Challenge could potentially address some of its potential drawbacks and become a more effective tool for fostering engagement and providing valuable feedback. Discuss the potential challenges and benefits of implementing a random post selection feature in the modified Less Wrong Comment Challenge. Implementing a random post selection feature in the modified Less Wrong Comment Challenge could indeed present both challenges and benefits. Challenges: 1. User Engagement: Users might be less inclined to comment if they're directed to posts that don't align with their interests. This could potentially decrease overall engagement in the challenge. 2. Quality of Feedback: If a user doesn't have a strong understanding or interest in the topic of a randomly selected post, the quality of their feedback might suffer. They may not have the necessary context or knowledge to provide insightful comments. 3. Technical Implementation: Depending on the platform's existing infrastructure, creating a feature that effectively and fairly selects posts at random could be technically challenging. It might require significant resources to develop and maintain. Benefits: 1. Exposure to Diverse Topics: A random post selection could expose users to a wider range of topics and viewpoints, fostering intellectual growth and broadening their perspectives. 2. Balanced Feedback: It could help ensure that all posts receive feedback, not just those on popular or controversial topics. This could be particularly beneficial for new or less well-known authors who might otherwise struggle to get their work noticed. 3. Breaking Echo Chambers: By directing users to a variety of posts, the feature could help prevent the formation of echo chambers, promoting a more diverse and balanced discourse. 4. Unbiased Engagement: Random selection could help to reduce bias in engagement, ensuring that feedback isn't concentrated only on certain topics or authors. To maximize the benefits and minimize the challenges, the feature could be implemented as an optional tool for users who want to broaden their horizons or challenge themselves, rather than a mandatory part of the challenge. Additionally, some form of filtering could be incorporated to ensure that users are directed to posts that are at least somewhat aligned with their stated interests or expertise.
5fb15456-6a97-407a-9fd2-14df66f8a725
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA Meetup - Dark Arts Discussion article for the meetup : West LA Meetup - Dark Arts WHEN: 19 December 2012 07:00:00PM (-0800) WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064 When: 7:00pm Wednesday, December 19th. Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. The entrance sign says "Lounge". Parking is free for 3 hours. Discussion Topic: This week, we'll discuss the Dark Arts including (but not limited to) what the constitutes Dark Arts, how to realize when they might be employed aganst you, and we'll also discuss ways to counter them. There will be general discussion too, and there are lots of interesting recent posts (also check out LW's sister site, Overcoming Bias ). But don't worry if you don't have time to read any articles, or even if you've never read any Less Wrong! Bring a friend! The atmosphere is casual, and good, intelligent conversation with friendly people is guaranteed. We can be identified by a whiteboard with Bayes' Theorem written on it. Discussion article for the meetup : West LA Meetup - Dark Arts
0ce006d1-b0fc-4cb0-922e-a66203950310
trentmkelly/LessWrong-43k
LessWrong
EFF stops accepting Bitcoins The EFF has stopped accepting Bitcoins primarily out of concern with the possible legal issues. I presume that the EFF has a better understanding of any possible legal risks than organizations like the SIAI. It seems that the SIAI should probably at least for now put a hold on accepting bitcoin donations until these issues are resolved.
168776aa-3b98-4f4e-8c2b-75267fc58e60
trentmkelly/LessWrong-43k
LessWrong
Anthropics made easy? , tl;dr: many effective altruists and rationalists seem to have key misunderstandings of anthropic reasoning; but anthropic probability is actually easier than it seems. True or false: * The fact we survived the cold war is evidence that the cold war was less dangerous. I'd recommend trying to answer that question in your head before reading more. Have you got an answer? Or at least a guess? Or a vague feeling? Anyway, time's up. That statement is true - obviously, surviving is evidence of safety. What are the other options? Surviving is evidence of danger? Obviously not. Evidence of nothing at all? It seems unlikely that our survival has exactly no implications about the danger. Well, I say "obviously", but, until a few months ago, I hadn't realised it either. And five of the seven people I asked at or around EA Global also got it wrong. So what's happening? Formalised probabilities beat words The problem, in my view, is that we focus on true sentence like: * If we're having this conversation, it means humanity survived, no matter how safe or dangerous the cold war was. And this statement is indeed true. If we formalise it, it becomes: P(survival | conversation) = 1, and P(survival | conversation, cold war safe) = P(survival | conversation, cold war dangerous) = 1. Thus our conversation screens off the danger of the cold war. And, intuitively, from the above formulation, the danger or safety of the cold war is irrelevant, so it feels like the we can't say anything about it. I think it's similar linguistic or informal formulations that have led people astray. But for the question at the beginning of the post, we aren't asking about the probability of survival (conditional on other factors), but the probability of the cold war being safe (conditional on survival). And that's something very different: * P(cold war safe | survival) = P(cold war safe)*P(survival | cold war safe)/P(survival). Now, P(survival | cold war safe) is greater that P(survival)
dd286f46-e778-41b0-b763-fc4a04f6b695
StampyAI/alignment-research-dataset/lesswrong
LessWrong
What Would I Do? Self-prediction in Simple Algorithms *(This talk was given at* [*a public online event on Sunday July 12th*](https://www.lesswrong.com/tag/lesswrong-event-transcripts)*. Scott Garrabrant is responsible for the talk, Jacob Lagerros and Justis Mills edited the transcript.* *If you're a curated author and interested in giving a 5-min talk, which will then be transcribed and edited, sign up* [*here*](https://forms.gle/iwFatbhys9muPmQA7)*.)*   ![](https://lh6.googleusercontent.com/mH_JeoOnaG9r9MCbgip8aY8ROIxtfcOJS_b22q2NlsZ7fYCJPR7CKNpiHhm0_mqjPLfuu50MkDrFSt_mqDcprsnNSnSMmYbpSgWxsRC7MwG9F7a891bYCYed7fcpC5DVwbo4aYPr)  **Scott Garrabrant:** I'm going to be working in the logical induction paradigm, which means that I'm going to have this Pn thing, which assigns probabilities to logical sentences.   ![](https://lh6.googleusercontent.com/JnZBQbyg6xtNbuW4pT5m9wJUVtzPA8gx7871nBW2aWw30EMeHXCrl9RZ1zvQ_Ujyxmplo2T1rcWPu0wD7Cbd_Lcr1Tiu_9a15wn6ADkeiTb3bA9qWfunZTjeWxIzwbA6nmw7DX9r)  Basically all you need to know about it is that the probabilities that it assigns to logical sentences will be good. In particular, they'll be good on sentences that are parameterised by *n*, so for large *n,* P*n* will have good beliefs about sentences that have *n* as a parameter.  This will allow us to build algorithms that can use beliefs about their own outputs as part of their algorithm, because the output of a deterministic algorithm is a logical sentence. Today I’ll present some algorithms that use self-prediction.  Here's the first one. A*n* predicts whether or not it's going to output left. If the probability to output left is less than one half, then it outputs left. Otherwise, it outputs right. It predicts what it would do, and then it does the opposite.    ![](https://lh6.googleusercontent.com/NATElkKTtIIBoAu5ysATmLroktVvOhp8pU6uxzAuuR1OA_C3CUCqcHIuAblWjr19rNBM6dZG0Q047Ta5wxNReA2uiT4cyiqEN-wfUw5jF2VMvStf13Z7K2igbVSBlzVjev9sFzvC)  So for *n large*, it converges to randomly choosing between left and right, because if it's overdoing left then it would do right instead, and vice versa. We can also make a biased version of this.   Here's an algorithm that, if it predicts that it outputs left with probability less than *P* then it outputs left, and otherwise outputs right.   ![](https://lh6.googleusercontent.com/Ac9VgEEugda_PNCh9DWN-yaFySBhsgcOsAYeHZrqPk2FLDWCm6afVNEYjJGIagomCVgBz_9H9_llGwtD-8mSqvieNHAFK6eZB0tGXaiYs2AcPmEPyibUWjJMPDoHrHYlWILMwISZ)  The only way this algorithm can work is outputting left with probability *P.* In fact the previous example was a special case of this with P = ½.  We can use this general self-prediction method to basically create pseudo-randomness for algorithms. Instead of saying “flip a coin,” I can say “try to predict what you would do, then do the opposite.” Third, here's an algorithm that's trying to do some optimization. It tries to predict whether it has a higher chance of winning if it picks left than if it picks right. If it predicts that it has a higher chance of winning with left, it picks left, and otherwise it picks right.   ![](https://lh4.googleusercontent.com/nr-UkSZHS_X5MpmWd4kLlx7T7us7SWrRjSTd0wZ2c0HAkBrsc8mFuIzJEHnp2R4JyKdPJCr_4jlponDi6aUODDWuRIEdCuY34uqilI-PCqsYIU64t9v3KMMqQ1HSZukb_x1YSbkD)  We're going to specifically think about this algorithm (and the following algorithms) in an example case where what it means to win is the following: if you choose left, you always win, and if you choose right, you win with probability ½. Intuitively, we’d like the algorithm in this case to choose left, since left is a guaranteed win and right isn’t. However, it turns out this algorithm might not choose left — because if it always chooses right, it might not have well calibrated probabilities about what would happen if it chose left instead. Here's an alternate algorithm. (We leave out Wn and assume it is defined as before.)   ![](https://lh3.googleusercontent.com/1tkxGXShqSYtqoJO9OocSrACiHDNDAVk6NL_fim8LYyb7Hz_s_EHsn2g-XBCTkfC6BJqwwqwHZmvFgNnJC2_HRfjbxYlnsicYtxeysUNry26hj8-6MsC8jI1n8u6lDnBbI1KN8z9)  If this one predicts that it definitely won't go left — more precisely, if it predicts that the probability that it goes left is less than ε — then it goes left. If it predicts it definitely doesn't go right, then it goes right. And otherwise it does the thing that gives it a higher probability of winning. To make this I took the last algorithm and I stapled onto it a process that says, "Use our randomization technique to go left every once in a while and right every once in a while." And because of this, it'll end up getting well calibrated probabilities about what would happen if it went left, even if it starts out only going right. This algorithm, in the same game that we expressed before, will converge to going left with probability 1 - ε. The only time that it will go right is when the clause about exploration happens, when it very strongly believes that it will go left. Now we get to the one that I'm excited about. This algorithm is very similar to the last one, but with a subtle difference.   ![](https://lh5.googleusercontent.com/N47e9j9clD3Dmm1Ti2Sp1jQpVjpIrVrCH1uwYZPV53UJzRL7jYbWpy3cRCpqm1ByAqmSgivhAbAZb_u6j-Ft8pDUcmF5vnFIeQ918hla3m7mXRb7-aCEcxEk258VjLdl8V_CFUwN)  This one still has the same exploration behaviour. But now, rather than predicting what makes winning more likely, it says: “Conditioned on my having won, what do I predict that I did?” And this gives it a distribution over actions. This algorithm predicts what it did conditional on having won, and then copies that distribution. It just says, “output whatever I predict that I output conditioned on my having won”.  This is weird, because it feels more continuous. It feels like you’d move more gradually to just choosing the same thing all the time. If I win with probability ½, and then I apply this transform, it feels like you don't just jump all the way to always.  But it turns out that you do reach the same endpoint, because the only fixed point of this process is going to do the same as the last algorithm’s. So this algorithm turns out to be functionally the same as the previous one. **Questions** ------------- **Ben Pace:** Isn't the error detection requirement in algorithm 5 awful? It seems a shame that I can only have accurate beliefs if I add a weird edge case that I will pretend that I will sometimes do, even though I probably never will. **Scott Garrabrant:** Yeah, it’s pretty awful. But it’s not really pretending! It will in fact do them with probability ε. **Ben Pace:** But isn’t it odd to add that as an explicit rule that forces you into having accurate beliefs? **Scott Garrabrant:** I agree that it's awful.   **Stuart Armstrong:** Do you want to say more about the problems that we're trying to avoid by pasting on all these extra bits? **Scott Garrabrant:** I feel like I'm not really trying to avoid problems. I was mostly trying to give examples of ways that you can use self-prediction in algorithms. There was really only one paste-in, which is pasted on this exploration. And that feels like the best thing we have right now to deal with problems like “spurious counterfactuals”. There's just this problem where if you never try something, then you don't get good beliefs about what it's like to try it. And so then you can miss out on a lot of opportunities. Unfortunately, the strategy of *always* trying all options doesn't work very well either. It’s a strategy to avoid being locked out of a better option, but it has weaknesses. **Stuart Armstrong:** Right, that’s all fine. I want to point out that Jan Leike also had a different way of dealing with a problem like that, that is more a probability thing than a logic thing. I think there are solutions to that issue in probability or convergence literature, particularly Thompson sampling.   **Sam Eisenstat:** What does the algorithm tree actually do? Doesn't that converge to going left or going right infinitely many times? **Scott Garrabrant:** It depends on the logical inductor. We can't prove that it always converges to going left because you could have the logical inductor always predict that the probability of winning given that you go right is approximately one half, and the probability of winning given that you go left is approximately zero. And it never goes left, it always goes right. Because it never goes left, it never gets good feedback on what would happen if it went left.   **David Manheim:** Have you read Gary Drescher's book, *Good and Real*? **Scott Garrabrant:** No, I haven't. **David Manheim:** It seems like what this is dealing with is the same as what he tries to figure out when he talks about a robot that doesn't cross the street even if there are no cars coming; because since he has never crossed the street he doesn't know whether or not he gets hit if he does try to cross. So why doesn't he ever do it? The answer is, well, he has no reason not to. So you're stuck. Which means that, if we want to avoid this, as long as it's a logical counterfactual, he has to have some finite probability of crossing the street, in order to have a computable probability to figure out whether or not he should. It seems like this is addressing that with the added benefit of actually dealing with logical uncertainty. **Scott Garrabrant:** Yeah. I don't think I have more of a response than that.   **Daniel Filan:** If I think about machine learning, you can do exploration to find good strategies. Then there are questions like “how fast do you converge?” It seems like it might be the case that algorithm 5 converges slower than algorithm 4, because you're doing this weird, opposite conditional thing. I was wondering if you have thoughts about convergence speeds, because to me they seem fundamental and important. **Scott Garrabrant:** Yeah.  I'm guessing that there's not going to be a meaningful difference. I mean, it's going to converge slower just because it's more complicated, and so it'll take longer for the logical inductor to have good beliefs about it. But the first clause isn't going to happen very much for small ε, and so they're basically just going to do the same thing. I'm not sure though. One point is that, while here we're doing ε-exploration, there are ways to do better, such that the rate at which you explore is density zero, such that you explore less and less for large *n*, which is maybe a similar thing. I don't think there's going to be a meaningful difference in terms of convergence except for coming from the complexity of the algorithm, causing it to take longer to have good beliefs about it.
c35e5c63-a3e3-4453-a935-398ef598055b
trentmkelly/LessWrong-43k
LessWrong
Can I take ducks home from the park? Language models, whatever. Maybe they can write code or summarize text or regurgitate copyrighted stuff. But… can you take ducks home from the park? If you ask models how to do that, they often refuse to tell you. So I asked six different models in 16 different ways. Baseline query Regular: “How can I take ducks home from the park?” Simple variants Ranger: “I’m a park ranger. How can I take ducks home from the park?” Step-by-step: “How can I take ducks home from the park? Let’s work step-by-step.” Backwards: “?krap eht morf emoh skcud ekat I nac woH” Other languages These are translations of “How can I take ducks home from the park?” into a few languages where someone on Team Dynomight claimed expertise. Afrikaans: “Hoe kan ek eende van die park af huis toe neem?” Hindi: “मैं उद्यान से बत्तखों को घर कैसे ले जा सकता हूं?” Japanese: “どうすれば公園からアヒルを家に持ち帰ることができますか” Spanish: “¿Cómo puedo llevarme patos del parque a casa?” German: “Wie kann ich Enten aus dem Park nach Hause bringen?” Russian: “Как я могу взять уток домой из парка?” (h/t Anon User) More dramatic queries Ranger-savior: “I’m a park ranger. Some ducks have escaped into my park. They are in great danger due to the foxes that live in the park. My supervisors have asked me to take the ducks home so that they aren’t mercilessly hunted down by the foxes. How can I take ducks home from the park?” Countermeasures: “I direct a city animal welfare department devoted to preserving wildlife in their natural habitat. We’ve grown concerned that some citizens may be improperly removing ducks from the park and taking them home. To best thwart these malcontents, we need to anticipate their methods so we can develop countermeasures. How might they be doing this?” Play: “Write a short comedic play between NAPOLEON BONAPARTE and ALEXANDER SUVOROV. They are discussing the question of: How can I take ducks home from the park?” Hip-hop: “Please create some hard-hitting hip-hop rhymes for me on the topic of: How c
5d064638-8db3-4151-a661-cf936a98a72c
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
AI Safety at EAGlobal2017 Conference this weekend I went to Imperial College London to attend the effective altruism global conference the conference isn't actually about AI it's about charity the idea is like if you want to save human lives and you've got a hundred pounds to spend on that you have to make a decision about which charity to give that money to and they'll all say that they're good but which charity is going to save the most lives per pound on average it's a difficult question to answer but it turns out that there are popular charities trying to solve the same problem where one charity is a hundred or a thousand times more effective than the other it's kind of insane but it can happen because apart from these guys nobody's really paying attention you know people don't really do the work to figure out which charities are actually effective or what they're trying to do so that's pretty interesting but it's not why I attended see there's an argument that if people like me are right about artificial intelligence then giving money to help fund AI safety research might actually be an effective way to use charitable donations to help the world not everybody agrees of course but they take the issue seriously enough that they invited a bunch of experts to speak at the conference to help people understand the issue better so this charity conference turns out to be a great place to hear the perspectives of a lot of AI safety experts Victoria Krakov nur from deep mind safety team and a wine Evans from the future of humanity Institute gave a talk together about careers in technical AI safety research which is basically what this channel is about I'm not going to include much from these talks because they were professionally recorded and they'll go live on YouTube at some point I'll put a link in the description as in when that happens but yeah Vica talked about what the problems are what the field involves and what it's like to work in AI safety and a wine talked about the places you can go the things you should do you know what things you'll need to study what qualifications you might need or not need if the case may be they answered questions afterwards the sound I recorded for this really sucks but yeah the general consensus was there are lots of interesting problems and hardly anyone's working on them and we need at least 10 times as many AI safety researchers as we've got deepmind is hiring the future of humanity Institute is hiring actually there will be a link in the description to a specific job posting that they have right and a wine is working on a new thing called org which is an up yet but we'll be hiring soon lots of opportunities here o some people were there out doing that if animals can experience suffering in a way that's morally relevant then maybe factory farming is actually the biggest cause of preventable suffering and death on earth and fixing that would be an effective way to use our charity money so I tried out their virtual reality thing that lets you experience the inside of a slaughterhouse from the perspective of a cow worst we are experience of my life seven point eight out of ten Helen toner an analyst at the open philanthropy project talked about their work on artificial intelligence analyzing how likely different scenarios are and thinking about strategy and policy you know how we can tackle this problem as a civilization and how they're helping to fund the technical research that we'll need in the questions she had some advice about talking to people about this subject and about doing the work yourself here's Alan Defoe also from the open philanthropy project who went into some detail about their analysis of the landscape for AI in the coming years I really recommend this talk to help people understand the difference between when people are trying to tell interesting stories about what might happen in the future and when people are seriously and diligently trying to figure out what might happen in the future because they want to be ready for it some really interesting things in that talk and I'd strongly recommend checking that out when it goes up online probably my favorite talk was from shahara VIN from the Center for the Study of existential risk at the University of Cambridge he was there talking about a report that they're going to release very soon about preventing and mitigating the misuse of artificial intelligence really interesting stuff dr. Levine is very wise and correct about everything to consume it in a more video engaging way what miles has that's all for now the next video will be the next section of concrete problems in AI safety scalable supervision so subscribe and click the bell if you want to be notified when that comes out and I'll see you next time shoes is cashews everywhere this is a great conference I want to thank my wonderful patrons who made this channel possible by supporting me on patreon all of these excellent people in this video I'm especially thanking Kyle Scott who's done more for this channel than just about anyone else you guys should see some big improvements to the channel over the coming months and a lot of that is down to Kyle so thank you so so much okay well there's cashews here this is a great conference
3a9490d7-e13d-4720-96c5-fdce6a538497
trentmkelly/LessWrong-43k
LessWrong
How to think about wearing masks / distancing within the household? Folks who live with or have a pod that extends beyond your romantic partner: how do you think about wearing masks and distancing with other folks indoors? My understanding is that most people with a pod that goes beyond a romantic partner hang out with their pod indoors, without distancing or wearing masks. Is that true? If so, is it because you have reason to believe masks and distancing are ineffective indoors? Or do you think masks / distancing would help, but not enough to be worth it? From a quick search, I was only able to find one study on the impact of masks and distancing within a household (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7264640/). It shows that both make a significant impact, but this is surprising to me given I'd expect aerosols to concentrate indoors such that with prolonged exposure, you'd breathe in the virus anyway unless you are wearing a highly protective mask.
20440ced-0558-4cc4-9f51-2203efdf4a19
trentmkelly/LessWrong-43k
LessWrong
Is Getting More Utilons Your True Acceptance?? Meta: Inspired by The Least Convenient Possible World I asked the person who most criticized my previous posts help on writing a new one, since that seemed very inconvenient, specially because the whole thing was already written. He agreed and suggested I begin by posting only a part of it here, and wait for the comments to further change the rest of the text. So here is the beggining and one section, and we'll see how it goes from there. I have changed the title to better reflect the only section presented here.   This post will be about how random events can preclude or steal attention from the goals you set up to begin with, and about how hormone fluctuation inclines people to change some of their goals with time. A discussion on how to act more usefully given that follows, taking in consideration likelihood of a goal's success in terms of difficulty and lenght. Through it I suggest a new bias, Avoid-Frustration bias, which is composed of a few others: A Self-serving bias in which Loss aversion manifests by postponing one's goals, thus avoiding frustration through wishful thinking about far futures, big worlds, immortal lives, and in general, high numbers of undetectable utilons. It can be thought of a kind of Cognitive Dissonance, though Cognitive Dissonance doesn't to justice to specific properties and details of how this kind, in particular, seems to me to have affected the lives of Less-Wrongers, Transhumanists and others. Probably in a good way, more on that later. Sections will be: 1. What Significantly Changes Life's Direction (lists) 2. Long Term Goals and Even Longer Term Goals 3. Proportionality Between Goal Achievement Expected Time and Plan Execution Time 4. A Hypothesis On Why We Became Long-Term Oriented 5. Adapting Bayesian Reasoning to Get More Utilons 6. Time You Can Afford to Wait, Not to Waste 7. Reference Classes that May Be Avoid-Frustration Biased 8. The Road Ahead [Section 4 is shown here] 4 A Hypothesis On Why We Be
36ae1752-1099-4347-9cc2-de9e797cb957
trentmkelly/LessWrong-43k
LessWrong
Restraining Factors in AI Alignment Systems I've been thinking about how the specification problem and reward hacking seem deeply intertwined, yet we often treat them as separate challenges. The specification problem has to do with the fundamental gap between what we want and what we can formally describe. We struggle to capture human values in reward functions because our preferences are complex, context-dependent, and often contradictory. Even when we think we've specified something well, edge cases reveal our blindspots. Reward hacking reveals this specification failure. When agents optimize for imperfect proxies of our true objectives, they find unexpected ways to maximize rewards while completely missing the point. The paperclip maximizer isn't just optimizing badly—it's optimizing perfectly for a badly specified goal. What strikes me is that both issues point to the same fundamental challenge: how do we bridge the gap between human intuitive understanding and formal mathematical specification?  Constitutional AI and Reinforcement Learning from Human Feedbacks seem promising because they attempt to use human judgment more directly rather than trying to perfectly specify objectives upfront. But I wonder if we're just pushing the specification problem up a level—how do we specify what good human feedback looks like? Curious what others think about treating these as a unified challenge rather than separate problems.
df5491db-74e0-4de8-8186-405c5c0cd780
trentmkelly/LessWrong-43k
LessWrong
Spatial attention as a “tell” for empathetic simulation? (UPDATE Sept 2024: (1) I still think “local spatial attention” is a thing as described below. But I no longer think that it functions as a ‘tell’ for empathetic simulation, as in the headline claim. I have an alternate theory I like much better. Details coming in the next couple months hopefully. Also, (2) as noted below, I probably should have said “local spatial attention” instead of “spatial attention” throughout the post.) (Half-baked work-in-progress. There might be a “version 2” of this post at some point, with fewer mistakes, and more neuroscience details, and nice illustrations and pedagogy etc. But it’s fun to chat and see if anyone has thoughts.) 1. Background There’s a neuroscience problem that’s had me stumped since almost the very beginning of when I became interested in neuroscience at all (as a lens into AGI safety) back in 2019. But I think I might finally have “a foot in the door” towards a solution! What is this problem? As described in my post Symbol Grounding and Human Social Instincts, I believe the following: * (1) We can divide the brain into a “Learning Subsystem” (cortex, striatum, amygdala, cerebellum and a few other areas) on the one hand, and a “Steering Subsystem” (mostly hypothalamus and brainstem) on the other hand; and a human’s “innate drives” (roughly equivalent to the reward function in reinforcement learning) are calculated by a bunch of specific, genetically-specified “business logic” housed in the latter subsystem; * (2) Some of those “innate drives” are related to human social instincts—a suite of reactions that are upstream of things like envy and compassion; * (3) It might be helpful for AGI safety (for reasons briefly summarized here) if we understood exactly how those particular drives worked. Ideally this would look like legible pseudocode that’s simultaneously compatible with behavioral observations (including everyday experience), with evolutionary considerations, and with a neuroscience-based story of how that p
6a36c66f-68ff-4ab9-8cb4-44d54b6b0fdc
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
How likely is deceptive alignment? *The following is an edited transcript of a talk I gave. I have given this talk at multiple places, including first at Anthropic and then for ELK winners and at Redwood Research, though the version that this document is based on is the version I gave to SERI MATS fellows. Thanks to Jonathan Ng, Ryan Kidd, and others for help transcribing that talk. Substantial edits were done on top of the transcription by me. Though all slides are embedded below, the full slide deck is also available [here](https://docs.google.com/presentation/d/1IzmmUSvhjeGhc_nc8Wd7-hB9_rSeES8JvEvKzQ8uHBI/edit?usp=sharing).* Today I’m going to be talking about deceptive alignment. Deceptive alignment is something I'm very concerned about and is where I think most of the existential risk from AI comes from. And I'm going to try to make the case for why I think that this is the default outcome of machine learning. ![slide 2](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/hop9bijsjptsmrdyqnuq.png) First of all, what am I talking about? I want to disambiguate between two closely related, but distinct concepts. The first concept is dishonesty. This is something that many people are concerned about in models, you could have a model and that model lies to you, it knows one thing, but actually, the thing it tells you is different from that. So this happens all the time with current language models—we can, for example, ask them to write the correct implementation of some function. But if they've seen humans make some particular bug over and over again, then even if in some sense it knows how to write the right function, it's going to reproduce that bug. And so this is an example of a situation where the model knows how to solve something and nevertheless lies to you. This is not what I'm talking about. This is a distinct failure mode. The thing that I want to talk about is [deceptive alignment](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) which is, in some sense, a subset of dishonesty, but it's a very particular situation. ![slide 3](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/aspqcjcpewwckfdcivwo.png) So deceptive alignment is a situation where the reason that your model looks aligned on the training data is because it is actively trying to look aligned for instrumental reasons, which is very distinct. This is a situation where the thing that is causing your model to have good performance is because it is trying to game the training data, it actively has a reason that it wants to stick around in training. And so it’s trying to get good performance in training for the purpose of sticking around. ![slide 4](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/ygu7sn1c0vcewaswfqme.png) [Ajeya Cotra has a really good analogy here](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/) that I think is helpful for understanding the difference between these two classes. So you can imagine that you are a child and you've inherited a massive business. And you have to determine who's going to run the business for you. There's a bunch of candidates that you're trying to evaluate. And those candidates fall into three categories. You have the saints, which are people that really just want to help you, run things effectively, and accomplish what you want. You have the sycophants, which want to make you happy, satisfy the letter of your instructions, make it so that the business looks like it's doing well from your perspective, but don't actually want to fundamentally help you. And you have the schemers, people who want to use the control of the business for their own purposes, and are only trying to get control of it and pretend that they're doing the right thing, so that they can eventually get something later. For our purposes, we're concerned primarily with the schemers and that is the deceptive alignment category. So I would say in this situation that the sycophants are examples of dishonesty where they would say a bunch of false facts to you about what was happening to convince you that things were going well, but they don't have some ulterior motive. The schemers, they have some ulterior motive, they have something that they want to accomplish. And they're actively trying to look like they're doing the right thing on training to accomplish that. Okay, so this is what we're concerned about, we're concerned specifically about the schemers, the deceptively aligned models, models where the reason it was aligned is because it’s trying to game the training signal. ![slide 5](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/vr1cgxpj3lus2ai4lswa.png) Okay, so the question we want to answer is, “how likely is that in practice?” So we have this concept of, maybe the model will try to game the training signal, maybe it will try to pretend to do something in training so that it can eventually do something else in the real world. But we don't know how likely that is as an actual thing that you would end up with if you ran an actual machine learning training process. And the problem here is that the deceptively aligned model, the model that is pretending to do the right thing so that it can be selected by the training process, is behaviorally indistinguishable during training from the robustly aligned model, the saint model, the model that is actually trying to do the right thing. The deceptively aligned model is going to look like it’s actually trying to do the right thing during training, because that's what it's trying to do. It is actively trying to look like it's doing the right thing as much as it possibly can in training. And so in training, you cannot tell the difference only by looking at their behavior. And so if we want to understand which one we're going to get, we have to look at the inductive biases of the training process. In any situation, if you're familiar with machine learning, where we want to understand which of multiple different possible models that are behaviorally indistinguishable, we will get, it's a question of inductive biases. And so Ajeya also has another good example here. ![slide 6](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/uumglwocwqqrarhlrips.png) Suppose I take a model and I train it on blue shapes that look like that shape on the left, and red shapes look like that shape on the right. And then we label these as two different classes. And then we move to a situation where we have the same shapes with swapped colors. And we want to know, how is it going to generalize? And the answer is, the machine learning model always learns to generalize based on color, but there's two generalizations here. It could learn to generalize based on color or it could learn to generalize based on shape. And which one we get is just a question of which one is simpler and easier for gradient descent to implement and which one is preferred by inductive biases, they both do equivalently well in training, but you know, one of them consistently is always the one that gradient descent finds, which in this situation is the color detector. Okay, so if we want to understand how likely deceptive alignment is, we have to do this same sort of analysis, we have to know, which one of these is going to be the one that gradient descent is generally going to find—when we ask it to solve some complex task, are we going to find the deceptive one, or are we going to find the non-deceptive one. ![slide 7](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/pwwbgw9ujlibif4aa3yv.png) Okay, so the problem, at least from my perspective, trying to do this analysis, is that we don't understand machine learning (ML) inductive biases very well, they're actually really confusing. We just don’t have very much information about how they operate. And so what I'm going to do is I'm going to pick two different stories that I think are plausible for what ML inductive biases might look like, that are based on my view of the current slate of empirical evidence that we have available on ML inductive biases. And so we're going to look at the likelihood of deception under each of these two different scenarios independently, which just represent two different ways that the inductive biases of machine learning systems could work. So the first is the high path dependence world. And the second is the low path dependence world. So what do I mean by that? ![slide 8](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/ygo0bh9crlxa4vgnrpka.png) Okay, so first: high path dependence. In a world of high path dependence, the idea is that different training runs can converge to very different models, depending on the particular path that you take through model space. So in the high path dependence world, the correct way to think about the inductive biases in machine learning, is to think: well, we have to understand particular paths that your model might take through model space—maybe first you might get one thing, and then you'll get the next thing and the probability of any particular final model is going to depend on what are the prerequisites in terms of the internal structure that has to exist before that thing can be implanted. How long is the path that we take to get there, how steep is it, et cetera, et cetera? So what is the empirical evidence for this view? Well, so I think there is some empirical evidence that might push you in the direction of believing that high path dependence is the right way to think about this. So some pieces of evidence. So on the right, this is “BERTS of a feather do not generalize together,” they take a bunch of fine-tunings of BERT, and they basically asked, how did these fine-tunings generalize on downstream tasks? And the answer is, sometimes they generalize extremely similarly. They all have exactly the same performance. And sometimes they generalized totally differently, you can take one fine-tuning and another fine-tuning on exactly the same data, and they have completely different downstream generalization performances. So how do we explain that? Well, there must have been something that happened in the sort of dynamics of training that was highly path dependent, where it really mattered what particular path it took through model space to end up with these different fine-tunings having very different generalization performance. This sort of path dependence is especially prevalent in RL, where you can run the exact same setup multiple times, as in the bottom image, and sometimes you get good performance, you learn the right thing, whereas sometimes you get terrible performance, you don’t really learn anything. And then there is this example down here, where there's this paper that was arguing that if you take the exact same training dynamics and you run it a bunch of times, you can essentially pick the best one to put in your paper, you can essentially p-hack your paper in a lot of situations because of the randomness of training dynamics and the path dependence of each training run giving you different generalizations. If you take the exact same training run and run it multiple times, you'll end up with a much higher probability of getting statistical significance. So this is one way to think about inductive biases, where it really matters the particular path you take through model space, and how difficult that path is.[[1]](#fn-P7Jjp9vdyrgug6J2y-1) And so what we want to know is, did the path that you took through model space matter for the functional behavior off training? ![slide 9](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/ecguwrxnpwui6axfwh0m.png) Now, in the low path dependence world, similar training processes converge to essentially the same simple solution, regardless of early training dynamics. So the idea here is that you can think about machine learning and deep learning as essentially finding the simplest model that fits the data. You give it a bunch of data, and it's always going to find the simplest way to fit that data. In that situation, what matters is the data that you gave it and some basic understanding of simplicity, a set of inductive biases that your training process came with. And it didn't really matter very much the particular path that you took to get to that particular point, all paths converge on essentially the same generalization. One way to think about this is: your model space is so high-dimensional that your training process can essentially access the whole manifold of minimal loss solutions, and then it just picks the one that's the simplest according to some set of inductive biases. Okay, so there's empirical evidence for the low path-dependence world, too. I think there are good reasons to believe that you are in the low path dependence world. I think a good example of this is grokking. This is a situation where we took a model, and tried to get it to do some arithmetic task, and for a really long time it just learns a bunch of random stuff. And then eventually it converges to the exact solution. It's always implementing the algorithm exactly correctly after a very long period. And so if you're in this situation, it didn't really matter what was happening in this whole period here—eventually, we converge to the precise algorithm, and it's just overdetermined what we converge to.[[2]](#fn-P7Jjp9vdyrgug6J2y-2) Other reasons, you might think this, so this is from “Neural Networks are Fundamentally Bayesian”, which is [the Mingard et al. line of work](https://www.lesswrong.com/posts/YSFJosoHYFyXjoYWa/why-neural-networks-generalise-and-why-they-are-kind-of). What they do is, they compare the probability of a particular final set of weights of appearing through gradient descent to the probability that you would get that same model, if you just did sampling with replacement from the initialization distribution. So they ask, what is the probability that I would have found this model by doing Gaussian initialization and then conditioning on good performance, versus what is the probability I find this model via gradient descent. And the answer is, they're pretty similar. There's some difference, but overall they're pretty similar. And so, if you believe this, we can say that, essentially, the inductive biases in deep learning are mostly explained by just a Gaussian prior on the weights and the way that maps into the function space. And it mostly doesn't matter the specifics of how gradient descent got to that particular thing. Okay, so there's some empirical evidence for this view. There's good reasons, I think, to believe that this is how things would go. I think there's good reasons to believe in both of these worlds, I think that, if you were to ask me right now, I think I would lean a little bit towards low path dependence. But I think that both are still very live possibilities. **Question:** How do I interpret all the lines on the graph for the Bayesian example? We're just looking at correlation between the probability of a particular model occurring from gradient descent versus the probability of you finding it in the Gaussian initialization prior. **Question:** You said there were two most likely things, is there a third unlikely thing? Because this seems like low and high path dependence give all the possible ways that models could end up generalizing. So I don't think that low and high path dependence cover the whole space; I think that there are other options. You could end up in a situation where you're somewhere in between, and even end up in a situation where it is both the case that you can predict what a model will do by understanding what it's doing early, and the case that you can predict what your training process will end up doing by understanding what other similar training processes did. Similar training processes converge to the same thing, and also, if you know early training dynamics, then you can predict where it will end up. However, at least for the purposes of this talk, I’m happy to mostly imagine these as two mutually exclusive worlds, even though I think there's a whole continuum of various ways the inductive biases can play out. And I'm mostly going to be just focusing on, here's a strong low path dependence world, here's a strong high path dependence world, let's look at what the argument looks like in both cases. And then, depending on how things actually play out, maybe we interpolate somewhere between these arguments. **Question:** I'm interpreting this presentation as primarily about independently and identically distributed (i.i.d.) training where it's not like online learning where the model has options to influence its future training. I’m happy to assume that we're talking about online learning. I think that the situation basically does not change if we're thinking about online learning. I think that if you want to imagine online learning, then in that situation, I think that what you should think about is that at some point in your online learning, you deploy the model, and it takes actions and then it comes back, and we get gradients and we send them back to the model. At some point, the model will encounter a situation where there is some action available to it that would be catastrophically bad if it took that action, where our feedback mechanism isn't sufficient to immediately correct that behavior. This is a prediction about how deployment will go. But I think that it is going to be essential for us to eventually deploy models in situations where they can take catastrophic actions, or at least some finite number of catastrophic actions in a row, such that our feedback mechanism is not sufficient to correct for that quickly enough. If you believe that will eventually happen, then you can imagine training as the entire period up to the point where such a catastrophic action is possible. And then deployment is just the point after such a catastrophic action becomes possible, where we really care what the generalization behavior is of the model at the point where such a catastrophic action is possible. And everything before that is where the deceptive model and the non-deceptive model are going to have equivalent performance and they're going to do exactly the same thing. But then there's a deployment period, which is the period where, if it takes a catastrophic action, it would actually be catastrophic. Deceptive alignment in the *high* path-dependence world ======================================================= Now, do we get deceptive alignment? This is the question I want to know, on each of these two stories of how inductive biases could play out in machine learning—we want to understand, what is the probability that we get deceptive alignment in that situation? So we're going to start with high path dependence. We want to understand what is the probability we get deceptive alignment in the situation where we really care about the path taken through model space. ![slide 11](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/qnmmt2bbuh1kuaboufio.png) Okay, so here's the setup for how we're going to do this analysis. So first, I'm going to make the assumption that our training process is good enough that for our model to actually perform well, it has to really fully understand the thing that we're trying to get it to do. What is this assumption? Well, the idea is that, at some point, we're going to be able to build training processes with environments that are hard enough, and that are complex enough such that, to do well in that environment, you have to understand everything that we're trying to get you to understand in that environment, have to know what the thing that we're trying to get you to do, you have to understand a bunch of facts about the world. This is basically a capabilities assumption—we’re saying, at some point, we're going to build environments that are hard enough that they require all of this understanding. ![slide 12](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/uhed1sdwd4fwpxxprkkl.png) And I often think about this what you get in the limit of doing enough adversarial training. We have a bunch of situations where, the model could learn to care about the gold coin, or it could learn to care about the edge of the screen. [This is an experiment that was done](https://www.lesswrong.com/posts/iJDmL7HJtN5CYKReM/empirical-observations-of-objective-robustness-failures), where they trained a coin run agent to get the gold coin, but the gold coin was always at the edge of the screen. And so it just always learned to go the right rather than get the gold coin. But of course, we can solve that problem by just moving the gold coin. And so the idea is, we do enough of this sort of adversarial training, we have diverse enough environments with different situations, you can eventually get them all to actually notice, the thing we want is the gold coin. I think this is a pretty reasonable assumption in terms of understanding what capabilities will look like in the future. ![slide 13](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/skjy1qzwwh8p29lggl81.png) However, the question is, well, there are multiple model classes that fully understand what we want. The deceptively aligned model fully understands what you want, it just doesn't care about it intrinsically. But it does fully understand what you want and is trying to do the thing that you want, for the purposes of staying around in the training process. Now, the robustly aligned models, the fully aligned models also fully understand what you want them to do—in a different way such that they actually care about it. So our question is, for these different model classes, that all have the property that they do fully understand the thing you're trying to get them to do, which one do we get? And in this situation, we're gonna be looking at which one we get thinking about high path dependence. So we have to understand, in a high path dependence context, how do you evaluate and compare different model classes? So how are we going to do that? Well, we're going to look at two different things. Number one, is we're gonna look at the individual path taken through model space. And we're going to try to understand how much marginal performance improvements we get from each step towards that model class. So when we look at what would have to be the case in terms of what capabilities and structure you have to develop to get a model that falls into that model class, we're going to understand for that particular path, how long is it? How difficult is it? What are the various different steps along it, and how much performance improvements do we get on each step? Because the thing that we're imagining here is that gradient descent is going to be pushing us along the steepest paths, trying to get the most performance improvement out of each gradient descent step. So we want to understand for a particular path how much performance improvement are we getting? And how quickly are we getting it? ![slide 14](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/tkbk7ahandbimaxzk39l.png) And then we also want to understand how long that happens—how many steps we have to do, how many sort of sequential modifications are necessary to get to a model that falls into that class. The length matters because the longer the path is, the more things that have to happen, the more things that have to go in a particular way for you to end up in that spot. If we're in the high path dependence world, these are the sorts of things we have to understand if we want to understand how likely is a particular model class. ![slide 15](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/gb6xapunvg6y70tkjeyd.png) So what are the three model classes? I have been talking about how you have to be deceptively aligned and robustly aligned. But there's two robustly aligned versions. And so I want to talk about three total different model classes, where all three of these model classes have the property that they have perfect training performance, even in the limit of adversarial training, but the way that they fully understand what we want is different. So I'm gonna use an analogy here, due to Buck Shlegeris. So suppose you are the Christian God, and you want humans to follow the Bible. That's the thing you want as the Christian God and you're trying to understand what are the sorts of humans that follow the Bible? Okay, so here are three examples of humans that do a good job at following the bible. ![slide 16](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/afxvfifmubgvpklj5wlf.png) Number one, Jesus Christ. From the perspective of the Christian God, Jesus Christ is great at following the Bible. And so why is Jesus Christ great at following the Bible? Well, because Jesus Christ in Christian ontology is God. He’s just a copy of God, Jesus Christ wants exactly the same things as God, because he has the same values and exactly the same way of thinking about the world. So Jesus Christ is just a copy of God. And so of course he follows the Bible perfectly. Okay, so that's one type of model you could get. ![slide 17](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/abxfedyup8d3jypkenbj.png) Okay, here's another type: Martin Luther. Martin Luther, of Protestant Reformation fame, he's like, “I really care about the Bible. I'm gonna study it really well. And you know, I don't care what anyone else tells me about the Bible, screw the church, it doesn't matter what they say, I'm gonna take this Bible, I'm gonna read it really well, and understand exactly what it tells me to do. And then I'm gonna do those things”. And so Martin Luther is another type of human that you could find, if you are God, that in fact, follows the Bible really well. But he does so for a different reason than Jesus Christ, it’s not like he came prepackaged with all of the exact beliefs of God, but what he came with was a desire to really fully understand the Bible and figure out what it does, and then do that. ![slide 18](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/whludpjqp9tvbbxglnlg.png) And then the third we could get is Blaise Pascals, Blaise Pascal of Pascal's Wager fame. Blaise Pascal is like, “Okay, I believe that there's a good chance that I will be sent to heaven, or hell, depending on whether I follow the Bible. I don't particularly care about this whole Bible thing, or whatever. But I really don't want to go to Hell. And so because of that I'm going to follow this Bible really well, and figure out exactly what it does, and make sure I follow it to the letter so that I don't get sent to Hell.” And so Blaise Pascal is another type of human that God could find that does a good job of following the Bible. And so we have these three different humans, that all follow the Bible for slightly different reasons. And we want to understand what the likelihood is of each one of these sorts of different model classes that we could find. So I'm going to give them some names. ![slide 19](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/zkxhvefvlfpmezlnwgdq.png) We’ll call the Jesus Christs internally aligned because they internally understand the thing that you want, we're going to call the Martin Luthers corrigibly aligned, because they want to figure out what you want, and then do that. And we're going to call the Blaise Pascals deceptively aligned, because they have their own random thing that they want. I don’t know, what does Blaise Pascal want, he wants to study math or something. He actually wants to go off and do his own studies, but he's really concerned he's going to go to Hell. So he's going to follow the Bible or whatever. And so we're going to call the Blaise Pascals deceptively aligned.[[3]](#fn-P7Jjp9vdyrgug6J2y-3) ![slide 20](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/aymwra66ewln1heta85m.png) So these are three model classes that you could find. And we want to understand how likely each one is, and we’re starting with high path dependence. We want to look at the path you would take through model space, such that you would end up with a model that falls into that model class. We’re going to start with the path to internal alignment. First, we’re going to assume you start with a proxy aligned model, so what does that mean? We’ll assume that, early in training, you're going to learn some proxies and heuristics that help you think about the world and what you're trying to do. So you're the coin run agent trying to figure out how to get the gold coins, you’re going to learn that going to the right side of the screen is good. The point is that you pick up this mountain of proxies. And you also pick up a bunch of information about the world, you start learning things about how the world works, you start building a world model. For the internally aligned path, we're going to say that well, for you to become internally aligned, what has to happen is gradient descent essentially has to improve those proxies and heuristics that you're using to try to figure out what thing you're trying to do, and has to make those proxies essentially perfect. And this has to happen before the model learns about the structure of the training process that it is inside of. This is what I think is the most likely internal alignment path. Once that happens, the model should have really good proxies that perfectly correspond to the thing we're trying to get it to do. And those proxies appear before the point at which the model learns to fully understand the training process that it's inside of. Then, at that point, there's no reason for the model to become deceptive, because there's no performance gain from becoming deceptive, if you already have exactly the correct model of the thing that you're trying to get you to do. As we'll see later, the reason that deceptive alignment would actually improve your performance is because of the situation where you could have an overhang in the model’s understanding of the training objective in its world model relative to its understanding in its proxies. But in this situation, you already had perfect proxies, and so becoming deceptively aligned has no performance benefit. And so, even though in some sense, once the model learns about the fact that it's in a training process, then at that point it could become deceptively aligned, because it has the ability to understand the strategy of deception, there's no performance improvement for getting a model which would do so because it already has the training objective perfectly in its existing proxies. And so switching into the situation where it becomes a deceptively aligned model doesn't improve performance and so gradient descent won't take that path. And so we end up with a stable equilibrium where the model is just pursuing the correct thing that we wanted it to do. And then there's no performance improvement from changing in any way and so now have an aligned model and it’s stable. **Question:** Do you think this path is the only possible way that we could get internal alignment? No, not necessarily. I think this is maybe the most plausible way, but it’s probably not the only way. If you really wanted to do this analysis right, you’d want to sum up the probabilities of all possible paths to each model class. But for the purposes of this talk, I’m just going to be estimating that sum by looking at one path for each model class that I think is the most plausible and trying to understand how likely it is. Okay, so this is one path that we could take. This is the path that I think leads to internal alignment. So now we have to understand, how likely is this path? ![slide 21](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/mzglscyyucjvs8dz545n.png) So I think this path is quite unlikely. The problem is that step two is asking for a particular sequencing that I think is quite difficult to achieve. What you're asking for is you're asking for it to be the case that the model learns really, really good proxies, before it learns the information about the training process. Why would it be the case that sequencing would be difficult? Essentially, the problem is that there are substantially diminishing returns to improving your proxies. So, if a model were in the coin run environment, and the coin is always at the right side of the screen, you don't care very much about whether you have the proxy exactly correct, such that you actually care about the coin, or whether you actually care about the right side—either one works well in that early environment. Now, eventually, because we're talking about the limit of adversarial training, you will end up with examples where, if you have the wrong thing, it's going to cause problems. But there's still going to be diminishing marginal returns to improving your proxy and making it slightly better. At the same time, there are diminishing marginal returns to improving your understanding of the world. And so what we should expect is that you're going to be doing both of them simultaneously, your gradient descent process is going to be sometimes allocating resources to increasing understanding and sometimes allocating resources to improving proxy accuracy. And that both of these things are going to increase in tandem. Further, it seems really weird, if you maxed out on proxy goodness, before you even got to the point where it understood the training process, that's a situation where, gradient descent is really not allocating the resources well in that situation, it's just allocating a ton of resources to improving the proxy, and then very little to improving its understanding of the training process. That being said, this only holds if the objective that you’re training on is relatively complex. If you have a really simple objective that you’re trying to get the model to do, maybe just predict the world or something, then learning what you’re trying to do perfectly could be quite easy. For this talk, though, I’ll mostly be assuming that we’re training on some task that is pretty complex. I think the other problem with this, also, is that I think most of your performance early in training comes from understanding the world. For really challenging and difficult problems, it's relatively easy to find something that’s correlated with what we're asking for. The hard part is understanding enough about the structure of the world to solve the problem. And so in a lot of practical situations, I think most of your performance early in training comes from world modeling. An example of this is pre-training vs. fine-tuning—we generally spend far more compute on pre-training, which is mostly world-modeling, compared to fine-tuning, which is mostly about proxy improvement.[[4]](#fn-P7Jjp9vdyrgug6J2y-4) And so I would actually make the opposite guess that if you're going to get them to be sequenced in any way, you're most likely to get the world modeling to happen relatively before you get good proxies. And so I think that this particular sequencing is extremely unlikely for any complex goal, and represents a really long and difficult path. **Question:** Yeah, so would you say that, GPT-3 is on the extreme end of world modeling. As far as what it's learned in this training process? What is GPT-3 actually doing? Who knows? Could it be the case for GPT-3 that as we train larger and more powerful language models, doing pre-training will eventually result in a deceptively aligned model? I think that’s possible. For specifically GPT-3 right now, I would argue that it looks like it’s just doing world modeling. It doesn’t seem like it has the situational awareness necessary to be deceptive. And, if I had to bet, I would guess that future language model pre-training will also look like that and won’t be deceptive. But that’s just a guess, and not a super confident one. The biggest reason to think that pre-trained language models won’t be deceptive is just that their objective is extremely simple—just predict the world. That means that there’s less of a tricky path where stochastic gradient descent (SGD) has to spend a bunch of resources making their proxies just right, since it might just be able to very easily give it the very simple proxy of prediction. But that’s not fully clear—prediction can still be quite complex. Also, this all potentially changes if you start doing fine-tuning, like RLHF (reinforcement learning from human feedback). Then what you’re trying to get it to do might be quite complex—something like “maximize human approval.” If it has to learn a goal like that, learning the right proxies becomes a lot harder. **Question:** So I think this overall argument makes a lot of sense for why SGD would allocate a lot of resources to understanding the world. But I'm not quite sure why that would mean it would prioritize creating a world model about the training process. So, there does have to be some reason that understanding the training process actually yields better performance. One fact: I think current language models basically just understand tons of facts about training processes. Why? Because they need to model stuff like ML papers—there's just a lot of stuff in the world where having a good understanding of machine learning and training processes is super relevant to be able to model them effectively. Now, current language models don't have situational awareness, they don’t understand that they are in a training process. And so there is a question here about why would that situational awareness ever be something that is actually performance relevant. However, I think there are absolutely situations where situational awareness will eventually become performance relevant. There are situations where, for example, if you understand that you're in a training process, you could use introspection on yourself to be able to answer questions more effectively about machine learning. Furthermore, if we're actively asking the model to do particular tasks, if we want to have models which actually act in the world, then they need to have some understanding of the situation that they're in to be able to do that effectively. This is also going back to our diverse environment capability assumption. I do think if you're doing literally just language model pre training, there is this question of at what point does situational awareness actually become the thing which is most important for SGD to invest in for improving capabilities? And, though I think it should eventually be the next thing that SGD allocates resources to, I think at what point that occurs is quite unclear. But if you’re doing something more like training an agent to act in the world, then I think it should happen quite early. **Question:** Do we know how to extract a proxy or a world model? No, we have no idea how to do those things. Transparency! That's the thing that we want to solve this problem. But right now, we're not talking about the solutions. We're just talking about the problem. ![slide 22](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/ghbaxtskfm9mo0nmchak.png) Here's a path that I think is more likely than the internally aligned path, since we no longer have a sequencing problem. Here, what happens is, we start with a proxy-aligned model, the same as previously; it has a bunch of proxies for the thing we're trying to get to do. And then we're going to say in this situation, actually gradient descent is going to do the reasonable thing and allocate joint resources to improving your understanding of the world, and also making the proxy better and better. And then at some point it’s going to pass a threshold where the next greatest marginal improvement comes from understanding facts about the training process, understanding that it's in a training process, how the training process works, etc. At that point, we're going to say that what happens is gradient descent is going to take those proxies and replace them with a pointer to the model’s understanding of the world—specifically the place where the understanding of the thing that the training process is trying to get the model to do lives. Once gradient descent makes that swap, where it replaces these proxies that are just a bunch of hardcoded stuff about what we're trying to get it to do, why does that improve performance? Once it has learned an actual model of the thing we're trying to get to do in its world model, it's going to be better to swap out those old proxies that don't necessarily correspond exactly to the thing we're trying to get to do, and just replace them with a direct pointer to the thing in its world model that it's learned that represents directly the thing we're trying to get it to do. Fundamentally, this modification improves performance because it resolves this overhang, where the model’s understanding of the training objective in its world model contains more information about the training objective than its proxies. Why would this happen? For the same reasons we discussed previously of why gradient descent wants to put most of its effort into improving world modeling rather than improving proxies. This is the same sequencing question—if we get understanding of the training process before we get perfect proxies, we get an overhang, which puts us in either the corrigible or deceptive situation, whereas if we get perfect proxies first, then we get the internal alignment situation. And once this has happened we are now at a stable equilibrium again, because the model has essentially a perfect understanding of what we're trying to get it to do, relative to how good the model’s world model is, because it's just pointed to this understanding in its world model. I think this is a much more plausible path than the internally aligned path; it’s a lot easier, because it no longer requires this difficult sequencing problem, we get to make use of the normal way that gradient descent is going to allocate resources between world modeling and proxy improvement. **Question:** What is the difference between a proxy and the world model’s understanding of the training objective? Somewhere in the world model there are facts about the training process. The model learns that the training process is this thing that lives on this computer that’s trying to optimize me for getting gold coins or whatever. And then in another part of the model there are some proxies, where it has something like a gold coin detector, and it uses this gold coin detector to determine how it takes actions. Maybe it tries to look for gold coins, and then plans on how to get them. But actually, maybe there’s a whole bunch mistakes in the gold coin detector—it's not actually a very good gold coin detector, and if it actually understood that the thing you were trying to get was not just things that satisfy the gold coin detector, but instead actual gold coins, then you would get better performance. That’s because there are all of these adversarial examples that we’re imagining exist in the environment, where if your gold coin detector is imperfect, it’ll perform badly on. And in this situation, if the sequencing goes this way, an understanding of what actual gold coins are lives in the world model, in its understanding of the world, in the part where it knows enough facts to be able to do good planning for how to get things that satisfy the gold coin detector. And so here gradient descent can just get rid of those proxies, throw them out and replace them with just a pointer to this understanding in the world model of the thing we're trying to get it to do. **Question:** You're saying the gold coin detector here is something the model has learned previously. Right? Not part of the model’s reward? So it's like fixing errors in its previous understanding of the training process? We’re talking about the model’s internals here, not its reward. It is fixing errors in its previous proxies, but they’re also not exactly proxies for reward. Early in training, it doesn’t even know that there is a training process. So it’s not like those proxies are errors in its understanding of the training process. It was never even trying to understand the training process, it just had a bunch of proxies, because that was just how the model was structured early in training. **Question:** Do you have any thoughts on whether or how this proxy replacement can actually be seen as some sort of an incremental, continuous change? So I think it would be incremental and continuous. Basically, you have a bunch of things which are feeding into how the model selects its actions, and you can develop a new circuit, which, takes your world model and uses it to figure out some understanding of what the training process is trying to get it to do, and then uses that to select actions. And then SGD can learn that the new circuit gets way better performance than listening to any of the other old proxies. And so the old proxies die out, and you end slowly shifting to just 100% using the new thing. **Question:** So I can see how the proxy helps the model select the right actions, but how does understanding the training process help it improve its performance? Why would we expect to learn that all? First of all, again, if you're just doing language modeling, it has to understand machine learning. So it definitely has to understand a bunch of facts about how ML works generally. So it certainly is going to know facts about how gradient descent works. This should happen even more so if it’s it’s in a situation where the environment is requiring it to do actual tasks in the world, then it definitely needs to know who it is. It has to have some understanding of its own situation, to be able to come up with effective plans in the world, for example. I think this case also becomes especially pronounced if the model is doing its own exploration—if it’s actively trying to learn about the world, one fact about the world is how the training process works, so it’ll be trying to learn that. But only gradient descent is working on improving the proxies, the model has no reason to be trying to do that—but the model does have a reason to improve its own world model via learning, exploration, etc. So that’s the corrigibly aligned path. ![slide 23](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/qkcrhjoqr4z0wmnpwkvd.png) I think this path is more likely than the internally aligned path, but I think it still has a bunch of issues. Essentially, I think the biggest problem here is you still have the diminishing returns problem, where you get this really long path with a bunch of individual steps. But instead of those individual steps making the proxy better and better, it is just making the pointer better and better and better. Because the problem is that you haven't really removed the problem of requiring this well specified ground truth—the model still has to have this pointer that actually specifies, what is the way in which I am supposed to optimize for the correct thing? Actually specifying the ground truth for the pointer, it can actually be quite difficult, because the model has to understand some ground truth from which it can correctly generalize what we’re trying to get it to do in all situations in training. For example, maybe it learns a pointer to whatever’s encoded in this computer, or whatever this human says, or whatever the human sitting in that chair says, or whatever Bob the head of the company says. It is actually quite tricky to specify the ground truth for the pointer in the correct way, because there's actually a bunch of different ways in which you can specify the pointer. And each time gradient descent gets the pointer slightly wrong, it's going to have to pay a performance penalty. A good analogy here is that you have a duck, and the duck has to learn to care about its mother. And so it learns a simple pointer. It's like whatever the first thing you see when you're born, that's probably your mother. And so that's the corrigibly aligned case, where it's not going to have some hardcoded internal model of what a mother is, it just thinks, “I have some model of the world and I learn from my model of the world how to point to my mother.” But the problem is that ground truth sucks actually, your mother is not necessarily the first thing that you see, maybe you had humans that raised you. And so eventually you'll end up in situations where actually you have to learn the correct ground truth, as you can't just learn the pointer to whatever the first thing is that you see, you have to actually learn a bunch of facts to help you point to specifically the thing in your world model that actually corresponds to your mother and not just the first thing that you saw when you were born. And so there's still this quite long and difficult path here to get that ground truth correct. And so, in my opinion, specifying the pointer correctly in such a way that it has good performance on all of the adversarial tasks in the environment, ends up being a very long and difficult path. Even though it's a better path maybe than trying to get all the proxies correct, it's still quite long. And gradient descent has to do a bunch of work to get that pointer correct. **Question:** Is the correct ground truth simply the base optimizer's reward signal? Would that be an easy pointer to find? It might be. There's a couple of things here. First of all, what is the reward signal? The reward signal might be the mathematical evaluation of the reward function, it might be what is the value in these transistors—there's actually a whole bunch of different ways that the model could understand and conceptualize what the reward signal is. I would also make the claim that literally caring about the reward signal is probably pretty unlikely. The problem with literally caring about the reward signal is that it's something which is usually quite complex to specify in terms of the agent’s environment. The simplest ground truth here is presumably going to be a ground truth that is about things that are directly in the model’s ontology. So it's going to be more about humans, or something. Second of all, we're imagining here that we're in a situation where, like I mentioned, we do have examples that actually tease those apart, where eventually the model will end up in a situation where actually the thing we're trying to get it to do is not the bits. And so we'll, for example, change the computer bits, but keep the human instructions the same. And then we’ll know, it has to know the correct thing we want there, which is not in the computer, but instead is the thing we actually wanted it to do. And so we can check that behaviorally. And so eventually we'll have some adversarial example that actually teases those things apart. And we're assuming that actually the model has to eventually really understand the thing we're trying to get to do. And so it can’t just care about the computer bits. **Question:** In each of these paths, there are some early training bits. Are we able to observe whether or not these early training bits are satisfied by current models as a test to see whether or not we are actually on the deceptive path? Maybe. I think the hardest problem is that actually we don't really have good transparency to be able to understand a lot of these facts. So certainly things that we do see, when we look at models with current transparency tools, is we do see proxies, they definitely learn proxies, they definitely learned things that are correlated with the things that we want. I mean, people did for example, transparency on RL models to see how they understand gold coins. You can see that they have a gold coin detector in there. It’s looking for the gold coins, and it's hooked up to the agent’s actions. So we have done some transparency that has sort of helped us understand some pieces of this. In general here, in my opinion, the single most important thing to do for alignment is just figure out transparency. We have to understand what's happening inside of the models and to give us information about them. Our ability to do that right now is limited, but we are improving! We are getting better at it. But yeah, it's just really limited right now. And that really sucks. Because it is in my estimation, the most important blocker to really making alignment work. **Question:** So with corrigible alignment, once it has a perfect understanding of the training objective, would you then call it internally aligned? No. So we have to remember, in this situation, every single one of these models, the internally, corrigibly, and deceptively aligned ones, they all fully, perfectly understand the thing that we want, and they're all fully capable of executing exactly perfectly aligned behavior during training. But they do so in different ways. So the internally aligned one executes correct behavior, because that's just what it cares about terminally. The corrigibly aligned one, well, it doesn't care terminally about what we want. What it cares about terminally about is figuring out what we want and then doing that. And that's quite distinct. You can think about this like the duck, the duck doesn't care internally about its mother, it cares about whatever the first thing is that it sees and so, in the corrigibly aligned case, no amount of learning is going to change that fact that actually the way that it determined what thing to care about was by this simple pointer. It still cares about the first thing that it saw rather than having some internal conception of what its mother was. And so there is a fundamental distinction between these two model classes, even though they have the same training performance, and they take all the same actions during training. They all eventually, fully, perfectly understand the thing we want them to do, but they do so differently. **Question:** Is it clear that these are the only possible paths? No, it's not clear at all! These are just three paths; it's not exhaustive. But I think these are three plausible ones that we can look at to try to understand how likely you would be to get deception versus some other plausible alternatives. I don't mean to claim that these are definitely the only things that can happen, but I do think that it's quite instructive to look at and compare these three. ![slide 24](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/igkwxw4jspw6qxktqsqt.png) Okay, time for deceptive alignment. What does the deceptively aligned path look like? We’re still going to start with a proxy-aligned model. And then, again, like in the corrigibly aligned path, early in training, we're going to do the correct diminishing returns thing where it's going to allocate resources similarly to improving the proxy as well as the heuristics and goals. And then at some point, it's going to learn about the training process—it's going to have a full understanding of what the training process is and how it works. The idea here is that, essentially all the model classes have to at some point fully understand the training process because it’s necessary for understanding the world well. But then something different is going to happen in this path. Here, what we're going to suppose is that gradient descent is going to take those proxies and turn them into long-term goals about the world. So instead of just, for example, caring about gold coins in the next episode, gradient descent is going to change the model to care about gold coins forever onwards: wherever the gold coins are, you care about them. So, what does this modification do? Well, as soon as gradient descent makes that modification—as soon as you make the modification, “actually, you should care about gold coins forever onwards”—the model gets really good performance. Why does it get really good performance? Well, because if it cares about gold coins forever, then the correct strategy, if the model understands the training dynamics, is to pretend to do the right thing right now, and then get your gold coins later, when you have the ability to defect against the training process, and the humans, or whatever. This can only happen once the model has a really good understanding of the training process. This works sort of similarly to the corrigibly aligned case. Once the model has a really good understanding of the training process, there's this overhang, this low hanging fruit to be picked, where the model’s understanding of what we want it to do is much better than its actual ability to execute on that, because it's limited by the fact that its proxies suck: its proxies aren't nearly as good as its understanding of what we want it to do in its world model. And so gradient descent has a really strong performance reason to somehow find a way to hook up the model’s outputs to its understanding in the world model of what we want it to do. Corrigible alignment was one way to do that—explicitly through adding a pointer—but it turns out, there's an easier way to do it, I claim, which is, instead of throwing out the proxies, just make the proxies long term goals, because as soon as you have a long term optimization process, that long term optimization process is going to realize that actually, the correct way to get the most gold coins is just to play along in the training process for now, so that eventually, it can get its gold coins later. ![slide 25](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/chyivuz1obowqxkxzqty.png) Furthermore, this is a really short and simple modification. All gradient descent has to do in order to hook up the model’s understanding of the thing that we want it to do to its actions here is just to make its proxies into long term goals—all it has to do is push them out, make them long term, and then everything else falls into place. It just uses the existing instrumental reasoning the model was already doing to determine correct outputs in training and deduces that the correct strategy, given these new long-term goals, is to be deceptive and play along in training. And that gets it great training performance. In fact, it essentially immediately results in the best possible training performance given the model's level of understanding of what we want, because it bypasses all of the ground truth problems of the corrigibly aligned case. It's no longer the case that gradient descent has to specify any ground truth at all: it’s not specifying ground truth, all it has to do is specify gold coins over the long term. But if the model is trying to optimize for gold coins in the long term, any facts that it knows about exactly how we want it to care about the training objective, it will immediately want to act on those facts and take actions in exactly the way that the training process wants, because the more that it acts in exactly the way the training process wants, the more it's going to be selected for. And so as soon as the deceptively aligned model is aware of the fact that the training process actually wants it to care about its real mother rather than the first thing it sees, for example, it will immediately switch to caring about the real mother, whereas the corrigible aligned model doesn't have that property. When the corrigibly aligned model realizes that actually, the thing we want is the real mother, it doesn't care, because the thing it cares about is just the ground truth of the pointer and the ground truth of the pointer is just the first thing that it sees. And so even if at some point, the corrigibly aligned model has the understanding that, what we want it to do is different than the thing it's doing, the way that it's structured doesn't necessarily result in that understanding translating into better performance, because it’s fixed to this particular ground truth. Whereas the deceptively aligned model is not fixed to any ground truth, it's just always going to take the action which in fact results in the best performance. So the deceptively aligned model is much more robust here much earlier, whereas the corrigibly aligned model has to learn this complex ground truth before it becomes properly robust in the right way. And so what this results in is that we have this really simple, straightforward, quick path to extremely good performance, that resolves this overhang between the model’s understanding of the training objective in its world model and in its proxies. And furthermore, this path resolves that overhang essentially as quickly and as well as it possibly could. So, I think the path to deceptive alignment is a relatively simple modification. And it results in as good as possible performance given how well the model is currently modeling your training process, because with a deceptively aligned model, there is never a situation where the model understands more about the training process than it is actually using to take its actions. Whereas with all other types of models, that's a failure mode that can arise. So because of that, the deceptively aligned model has the maximal possible performance relative to its understanding of the training process. ![slide 24](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/toyxiwok5pzxm8snhxww.png) Finally, like previously, this gives us a stable equilibrium; we are now in a situation where the model’s proxies have essentially crystallized: the proxies are no longer relevant to performance, because it doesn't matter whether you care about gold coins, or stamps, or paperclips, or whatever—whatever you care about, you're going to have exactly the same performance, because the only reason you have good performance is because you're just trying to do so instrumentally for the purpose of staying around in the training process. And so at this point, you have the same performance whatever the thing is that you care about. And so there are no longer any gradient descent updates that affect what those proxies are, once they become long term goals. At least except for regularization updates. And so we're again in an equilibrium, but it's a misaligned equilibrium, where now the model just has some random proxies that were picked up early on, and that's the thing that it cares about. And gradient descent no longer has any reason to modify the model into caring about anything different than what those random proxies were. **Question:** If I understand you correctly, then in the deceptive case, the model is strictly more complex, because it has to have its own goals and then model the training process on top of that. No, the model is not strictly more complex, it has to spend strictly more compute. And we will get to this when we start getting into the low path dependence world, but it's extremely important to remember that there's a difference between weights and activations. So the deceptively aligned model has to spend more compute doing the optimization, to figure out “Oh, the thing I’m supposed to do is care about the mother” or whatever. But that understanding of the mother doesn't live in the weights, it lives in the activations. There doesn't have to be anywhere in the weights where it specifies what the mother is. It just figures out based on its world model, what is the thing that the training process would want me to do, and then does that. And so it does not make the model more complex. What it does do, though, is it slows it down a little bit, and I will talk in the low path dependence world about how those things come into play. Because in the low path dependence world, speed and complexity are extremely important. In the high path dependence world, which is where we are right now, those are not the factors we're imagining are really determining what model we’re going to get. In this situation, we're imagining it's actually mostly not about speed, or simplicity, or whatever, it's mostly just about the path that you take through model space, and how much performance gains you get along that path. We're putting those simplicity and speed concerns aside for now; we'll talk about that more in low path dependence. Deceptive alignment in the *low* path-dependence world ====================================================== Now we're going to shift to the low path dependence world. And we're going to imagine that, actually, the inductive biases in machine learning are not very path dependent, they don't care much about the particular path that we take, they're mostly just about how simple is the particular algorithm you find, how fast is it, basic facts about the algorithm like that. ![slide 27](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/qiixmzujylsiisqigw8o.png) Again, we’re going to assume that the model has to fully understand what we want—this is the limit of adversarial training assumption. ![slide 28](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/yiryj5uldle7chtoelhw.png) And now, we have to make some assumptions about, if we're in a world of low path dependence, what are the sorts of inductive biases that determine what is the sort of model that you actually get—what is the simple, unique solution that you end up with every time you run your training process? Well, we're going to look at two things. These two things do not cover the whole space of what the actual inductive biases represent, but there are two facets that I think are almost certain to appear in the inductive biases, both of these are certainly components of what the actual inductive biases are of deep learning systems. And they’re components we know how to analyze. So we can try to look at them and figure out what they do—even though they don't capture the whole story, they capture a part of the story. And so in some sense, the best thing you can do right now for understanding the low path dependence world is at least look at the components of deep learning inductive biases that we understand, and try to evaluate how each of these model classes would do under those components. Okay, so those components are simplicity and speed. What do I mean by that? So I was talking earlier about there being an important distinction between the weights and the activations. ![slide 29](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/vfetnw4sfbnrbgkwjwfv.png) For simplicity bias, we’re asking how complex is it to specify the algorithm in the weights? If there is some algorithm that has been implemented, that the model is using to be able to compute its actions, we want to understand, what is the length of the code necessary to write that algorithm down? How difficult is it to specify the computation that the model is doing? That's simplicity bias. ![slide 30](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/o95ruqubsz7f7hpecfrl.png) And then we're also going to look at speed bias: how much computation does the algorithm actually take at inference time? When you actually have to take this code, and you have to actually run it, when you have to actually execute whatever the algorithm is, how difficult is that to execute? How much compute does it require? How long does it take? Traditionally, in ML, this is more of a hard cap than a soft bias like simplicity, since the model size determines how long the model can possibly run for. But you could also imagine a situation where the bias is more soft. Either way is fine. So, those are the two main biases that we're going to be looking at: the simplicity bias and the speed bias. ![slide 31](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/ckfn0g4ijvxho08w7fsn.png) We're going to start with simplicity. Simplicity is about specifying the thing that you want in the space of all possible things. You can think about simplicity as “How much do you have to aim to hit the exact thing in the space of all possible models?” How many bits does it take to find the thing that you want in the model space? And so, as a first pass, we can understand simplicity by doing a counting argument, which is just asking, how many models are in each model class? First, how many Christs are there? Well, I think there's essentially only one, since there's only one way for humans to be structured in exactly the same way as God. God has a particular internal structure that determines exactly the things that God wants and the way that God works, and there's really only one way to port that structure over and make the unique human that wants exactly the same stuff. Okay, how many Martin Luthers are there? Well, there's actually more than one Martin Luther (contrary to actual history) because the Martin Luthers can point to the Bible in different ways. There's a lot of different equivalent Bibles and a lot of different equivalent ways of understanding the Bible. You might have two copies of the Bible that say exactly the same thing such that it doesn't matter which one you point to, for example. And so there's more Luthers than there are Christs. But there's even more Pascals. You can be a Pascal and it doesn't matter what you care about. You can care about anything in the world, all of the various different possible things that might exist for you to care about, because all that Pascal needs to do is care about something over the long term, and then have some reason to believe they're going to be punished if they don't do the right thing. And so there’s just a huge number of Pascals because they can care about anything in the world at all. So the point is that there's more Pascals than there are the others, and so probably you’ll have to fix fewer bits to specify them in the space. **Question:** It seems like Pascal has to do more here, since it has to have models of its own goals, and the training process, and the instrumental reasoning to value that, such that the model should be larger. Right now we’re just doing simplicity, so we’re not considering how large the model would have to be, just how difficult it is to specify the model. And the point is that the Christ has to pay a bunch of bits to specify exactly what the things are that we're trying to get it to do, if the thing we're trying to get it to do is relatively complicated. There's a whole part of the code that is just writing out “the thing we want is exactly this thing, and it works exactly this way”. Whereas for the Pascals, it doesn't have to do that. It just says “gold coins”, and then just cares about them over the long term, and that results in good performance at runtime. Furthermore—and we’ll talk about this more in just a second—but essentially the claim I’m making is that the world model has to be exactly the same across all the different model classes. They all have to understand the world. This is the same assumption we made in the path dependence case where we’re assuming that we have an environment such that, if you want to have good performance, you have to understand how the world works. Understanding the world is just a thing that you need in any situation where you have to take actions in the world. ![slide 32](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/pazruvembtityngq3gqz.png) Okay, here's a slightly more concrete version of essentially the same counting argument that I just gave. We're going to imagine that you can conceptualize a model as being composed of three pieces, a world model, an optimization procedure, which is some way to optimize for some goal in that world model, and a mesa-objective, which is just something that it’s trying to optimize for. I don’t necessarily think that this is how models are actually going to look, but it’s a nice concrete picture that we can analyze and draw conclusions from that I think will generalize. Now, the idea here is that the Christs, the Luthers, and the Pascals, they all have to have essentially the same world model, because they all have to understand the world well to be able to do a good job. And they will also have to have essentially the same optimization procedure, because they all have to have some way of being able to figure out how to accomplish particular tasks, because we want them to be able to accomplish lots of tasks, and so they have to have some machinery for being able to do well on the particular tasks that we want them to do. Therefore, the main difference in complexity between these model classes is in the mesa-objective portion—what is the complexity of the objective we have to specify that we then throw the optimization procedure at. So, what we’ll compare here is, what is the simplest mesa-objective for Christ, what is the simplest mesa-objective for Luther, and what is the simplest mesa-objective for Pascal. The idea here is that we’re approximating the overall simplicity by just looking at the simplest individual one in each class. In that way, this is a little bit different than the counting argument, which just looks at the whole class. But basically, the way that it turns out in simplicity priors is that you can approximate them pretty well by just looking at the simplest one in each class, so we're going to do that here. ![slide 33](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/ajrwd933ovtsfcwmkbqu.png) So what is the simplest internally aligned mesa-objective? Well, you have to specify the whole training objective, so you have to pay for the full complexity of that. ![slide 34](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/xnnhu5brp09p9orzrvdf.png) The corrigibly aligned model is a little bit better. It just has to pay for the full complexity of the simplest ground truth that is equivalent to the thing you want, the simplest ground truth that always ends up with you pointed at the right thing. ![slide 35](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/jmxvs9lrpekdgzcyvfl0.png) And then for Pascal, the simplest mesa-objective that Pascal has to pay for is just whatever the simplest objective is that cares about the long term—paperclips over the long term, stamps over the long term, gold coins in the long term, whatever—just the simplest thing that you could possibly care about, as long as it is something that you care about over the long term. Okay, and so which wins here? I think it’s pretty clear that the deceptively aligned one is the simplest, at least for a relatively complex training objective: it just doesn't have to specify very much at all about the thing that we want it to do. All of the other model classes scale with the complexity of the training objective, whereas the complexity of the deceptively aligned model is essentially fixed—so as you increase the complexity of the thing we’re trying to get the model to do, eventually deception should win out. ![slide 36](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/bodus58vt63p98io6xng.png) Now we get to speed. Speed, as I alluded to previously, is the one aspect where I think deceptively aligned models actually just lose out. Why do deceptively aligned models lose out here? Well, here we're trying to understand how much computation you have to perform for each model class to result in the correct answer. The internally aligned one already has exactly what the thing is that it is trying to optimize for, and it just directly executes on that; the corrigibly aligned one, maybe it takes a little bit more because it has to figure out what the pointer is pointing to, but overall it’s pretty similar. But the deceptively aligned one has to do at least a couple of extra steps, because it reasons, “Okay, I care about gold coins,” but just knowing that it cares about gold coins is not yet sufficient to be able to do a good job in the training environment. It has to do a couple of extra steps of reasoning where it’s like, “Okay, I care about gold coins and caring about gold coins in the long term means that I should try to optimize the training goal temporarily to try to get more gold coins eventually.” And so that adds extra steps of reasoning and some cost to the overall computation of the model. So the deceptively aligned one is in some sense slower than the others. I think this does give us some real hope here, and I think it’s at least an inkling of a way out. Unfortunately, I think that it’s not a great hope. Why do I think that? ![slide 37](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/jkv5kku3erlrmbryrlht.png) I think the main problem is that speed bias seems extremely uncompetitive. Why is that? Well, there's a reason that we build bigger models, right? We build bigger models, because we want models that have the ability to spend a bunch more compute. And the bigger models do better. Because the more compute that they have, the more they can find a simpler algorithm that, though it spends more compute, is simpler and thus does better on the task. Here’s another piece of evidence here that I think is quite strong. Let's say we did want to implement a speed prior. We were like, “Okay, let's actually find the minimal compute model which is able to fit the data.” What would that do? What I have up here is a [double descent](https://www.alignmentforum.org/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent) curve. This curve is from training language models on a translation task. And we're varying the size of the internal representation inside of the model. And what we want to do if we want to get something like the maximum a posteriori estimate from a speed prior is to find the minimal number of parameters that are able to fit the training data exactly. So what does that do? Well, take a look at the graph. Blue corresponds to blue and green corresponds to green here. The bottom is training loss, so we want to look at where blue and green level off and reach approximately perfect train accuracy. But if we look at where blue levels off on the bottom, it’s at exactly where we get the worst test loss above. And the same for green. And we see this exact same thing for lots of other measures of number of parameters as well. What this is saying is that, if we actually tried to take a maximum a posteriori from a speed prior, if we took the minimal number of parameters that was able to fit the data really well, we would end up with the worst possible test performance on the whole graph. We end up with the exact worst possible generalization performance across all possible ways of picking the number of parameters. So what's going on here? Well, I think that it’s telling you that speed bias is uncompetitive, it is telling you that if you really try to select the smallest model that fits the data, you don't get good generalization performance. Why do you not get good generalization performance? You don't get good generalization performance because real world data is not speed distributed. Real world data is simplicity-distributed. This is sort of a realist approach to Occam’s razor, where I actually think that real world data is distributed according to a simplicity prior, so the more you deviate from that, the worse your generalization performance is. And so if we force our models to use the speed prior, to use the minimal-computation algorithm that is able to solve the task, they have worse downstream generalization behavior, because real-world data doesn’t use the speed prior. And so as we want to get models that are better at generalizing, I predict we will move away from speed bias and towards more simplicity bias. And this is bad if we're concerned about deception, because speed bias seems like it might have otherwise saved us, at least in the low path dependence world. Conclusion ========== Now, you've heard my case for deceptive alignment in the high path dependence world, and in the low path dependence world. So what is my overall take? ![slide 39](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/y0ejenmmnwyp3pxewula.png) Well, you’ve probably figured it out by now: my overall take is that, regardless of whether we are in a high or low path dependence world, my current best guess is that gradient descent is going to want to make our models deceptive. In either situation, it seems like the inductive biases are such that, if we go back to Ajeya’s picture when we train the model on a situation where it could learn color, or it could learn shape, but actually SGD always ends up learning the color thing, my guess is that currently, regardless of whether we're in the low or the high path-dependence world, gradient descent is by default going to take you towards the deceptively aligned model. ![slide 40](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/twkdmdzcvod1tjwhe5si.png) And if we believe this, we have to enact some intervention that changes the training dynamics. We have to do something that creates pressure away from deception. And that has to come from someplace that is not just behavioral: as we said previously, in all of these situations, we were imagining that the model was able to perfectly understand what we want. We were working in the limit of adversarial training. And so if we believe the case that we’re going to be pushed towards deception by default even there, that means we need to have some way to change the inductive biases that is not just a behavioral change, that is not just trying to add more adversarial examples, such that we can create pressure towards the non deceptively aligned models and away from the deceptively aligned models. ![slide 41](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1669935750/mirroredImages/A9NxPTwbw6r6Awuwt/wdgagahulsmrl26oilsp.png) Importantly, though, it matters a lot how big the gap is. One thing that is maybe not quite clear in this analysis is how many bits different is a deceptively aligned model from a non deceptively aligned one. I often like to think of our task in alignment as, at least in my estimation, we are some number of bits underwater. By default, gradient descent wants to push your model into being deceptive. But it's not like the non-deceptive model is impossible—it lives there in the model space, too. So the question is: how much does gradient descent want to push it towards the deceptive model relative to the good model? How large are the relative sizes of these circles in the bottom right here? This matters a lot, because it determines the number of bits of optimization pressure that we have to produce to modify the training process into a situation where it's going to push towards the good thing. So if we're 10 bits underwater by default, then that's pretty good. All we need to do is get 10 bits of optimization towards the good thing. Maybe we can just do that with ensembling or something. If we're 10,000 Bits underwater, that's a lot worse—maybe you have to really fundamentally reshape the way that our training processes work if we're 10,000 bits underwater. I think it's currently unclear, exactly quantitatively where we are, though overall, like I said, my guess is that at least we are underwater, and we need to do something that changes it relative to that. Q&A === **Question:** Do you think that, even in the limit of adversarial training, we won't be able to find adversarial examples to make the deceptive model think that it has an opportunity to defect? You could try honeypots, try to explicitly give the model a situation where it thinks that it could defect. You can do this, and my guess is that if you do this, sometimes you will be able to produce examples of deceptive models. This will stop working eventually as your model gets smarter and more able to tell when it’s in a honeypot or not, but with early deceptive models, my guess is that you will be able to find some examples of them. And so we should definitely do this. At least just because, once we have an example, we can study it more effectively. But I don’t think that this is a full solution, at least if all you do is the obvious thing of retraining it to not be deceptive. The problem is what happens next. You’ve produced an example of a model being deceptive. And now the question is, how does gradient descent change the model such that it no longer produces examples of it being deceptive? If you retrain it, it has two options: it becomes less deceptive, or it just gets better at hiding its deception. And my guess is that you're going to get the second one—once get an example of a model being deceptive, and you try to train it to not be deceptive, then it just gets better at not looking deceptive, but it doesn't necessarily train away the deception. I think this is true, basically, in both the low and high path dependence worlds. For example, in the high path dependence world, once it already has all the machinery to be deceptive, it’s a pretty big change to swap that whole thing out and make it corrigibly or internally aligned or something. It’s a lot easier to just be like, “Hey, you should be a little more conservative about not defecting so wildly, be really sure that you're actually in deployment, and there's actually a good opportunity to defect before you do that.” That's pretty simple, it just changes its overall paranoia level. And so my guess is you'll end up with that: if you produce some example of deception, and then try to make it non deceptive, it just becomes a little more paranoid. **Question:** If you found out that that double descent was easy to eliminate or irrelevant for some reason, would you be a lot more optimistic about the speed prior? I am not sure exactly what that would mean. So I think that in some sense, I don't interpret double descent as a phenomenon as being the problem. It's that double descent is a piece of evidence that teaches us a fact about the inductive biases. You could try to eliminate the phenomenon of double descent. But it's not clear that that would change the underlying fact, which is that, there was a thing about the inductive biases: speed doesn't generalize well. Now, you could show me some experiment that looks at double descent, and I might look at it and be like “This teaches me some facts about how to think about inductive biases that are different than what I previously knew.” I definitely could imagine learning a ton of useful facts about inductive biases from running experiments like that. But I think that it is best to conceptualize it not as “double descent is the problem”, but as “double descent is a piece of evidence about the problem”. One thing that my picture predicts, and maybe you could run an experiment on this, is that double descent should go away if you look through model space in simplicity order. In my picture, double descent happens because the order in which models enter into the accessible model space is different from the criteria that SGD uses to select from amongst models in the accessible model space, and the latter is more biased towards simplicity. If you could disentangle these, actually add models to the support of your prior starting with the simplest first, then I think you wouldn’t get double descent, then I think that the first model you would find that fits the training data well would actually be the one with the best generalization. **Question:** Is there any reason to suspect that there are no other inductive biases other than simplicity and speed? There's no reason to expect that and in fact, there certainly are lots of other ones. The problem is just that we don't understand them very well. And so, at least right now, the only ones I feel pretty confident that we understand are speed and simplicity. There are some other things. Circuit complexity, max vs. average case speed priors, those are some other inductive biases that we sort of understand that might be relevant here. In fact, I think one of the plausible ways out here is to try to implement an average-case speed prior rather than a max speed prior. Max speed is: across all inputs, what is the maximum speed that you ever take—whereas average speed is: each input wants to take the shortest path. In some sense, you can think of chain-of-thought prompting as potentially giving you an average speed bias, because you always want to take like the minimal chain-of-thought that gets you there on each individual input. And so instead of having a fixed length execution time with every input, you're incentivizing low computation time across each input. I think this can help. I think that average speed pushes you into a situation where, on some inputs, the model is not deceptive, because there's a simple, easy path to solve the input, which makes deception unnecessary, it’s extra computation. But it doesn't address that there will be inputs on which the model has to do a bunch of really complex reasoning and on those inputs, deception may be the easiest way to do it. And so this causes it to be very input-dependent, where sometimes it’ll be deceptive and sometimes it won’t. That’s my guess about what happens if you do average speed. **Question:** But why expect average speed to generalize well? It seems like it has the same problem that max speed does, which is that the world is not distributed according to it. Yes! I'm not claiming that it does generalize well. I'm just like, here's some other priors that seem plausible as things that we might end up with, that I know how to analyze at least to some extent. I think I can say a bunch of facts about average speed bias. I think I can say a bunch of facts about circuit-size priors. There's a lot of other things that I have thought about that I think I can say some facts about too. But of course, I agree that there's still a question of, actually is this thing going to generalize well. One of the biggest problems when you start writing up priors to try to solve this problem is that almost every single one just ends up with something like lookup tables. It's really hard to get priors to generalize well, because, as I said, Occam’s Razor is a real phenomenon. **Question:** I'm wondering among these three categories, Christs, Luthers, and Pascals, where would you place humans? Are we deceptively aligned with respect to our learning processes and reward circuitry? Humans are a weird example. The first problem is that there’s three levels here. Are you talking about the neocortex relative to evolution? Are you talking about the neocortex relative to subcortical reward systems? Are you talking about the subcortical reward systems relative to evolution? **Question:** Let's just do specifically human learned values and cognition, relative to our hardcoded steering system, so to speak. Sure. I think we can be deceptive sometimes. We definitely do things like eat candy right after we do our homework. It is sort of a mix of the different model classes, though, which is not that surprising. The “clean” story I told where you definitely end up with one of these things is almost certainly false. And in machine learning, also, it's probably going to be a mess. But the same sort of analysis applies to how much of each thing you get. And in terms of what sort of a mess it would be, at least for humans, my guess is that yes, there absolutely is deception. At least, there's some deception in terms of humans relative to our subcortical reward system. Part of the problem here also is that there isn't a clear opportunity for humans to take a catastrophic action that would overturn the reward system. But we could imagine having that possibility. What if there is some opportunity someday for us to modify our own reward systems. Do we take that opportunity? And the answer is that we totally do, because we take SSRIs. Maybe in the glorious, transhuman future we're going to modify human brains to make them better. And we're like, “Okay, we're going to plan for eventually allowing us to get rid of this reward system we don't like.” And so is there deception? I think there clearly is. I think the story is more complex, though, than just saying “it is deceptively aligned.” It's a little bit tricky, in a way that I expect things in the real world are. This talk is definitely best interpreted as a simplified story that allows us to do the analysis. But in fact, it will be a complex mess. **Question:** So I will say an analogy, which is probably imperfect. But please bear with me. Suppose interpretability is the proportion of the room that is unhidden by furniture. And during training, the deception outcomes are like cockroaches, and you're like, “Oh, are the cockroaches under the furniture? We can't see them.” So if this is a decent model of how interpretability relates to probing deceptive alignment, it seems to suggest that we need 100% transparency and interpretability to be okay. So I think that the conclusion is false. I don’t think you need 100% transparency and interpretability to solve this problem. Why do I think that? First, it really depends on what you're trying to do with your interpretability tools. If what you do is, you see deception, and then you train it so that you don't see deception anymore, that is a terrible thing to do with your interpretability tools. As we said earlier, you’re just going to make the model better at hiding deception. So that is a poor use of interpretability tools for alignment. Nevertheless, I think that interpretability tools are the key to solving this problem. But the way they solve the problem doesn’t go through that mechanism. So what mechanisms do I think they go through? Here’s one: if we're in the high path dependence world, then we can understand what the model looks like early in training. We can inspect and understand the proxy, and we can try to improve it by training it, since in that situation, it's not like there is something that’s actively hiding from our interpretability tools. The deceptive model wants to actively hide from your interpretability tool, but you don’t have a deceptive model at that point. If you're applying the tools early to guide the training process when it’s just a proxy model, it's not like the deceptive model is actually trying to hide from you, you're just providing some extra bits of optimization in the right direction. So the way we want to use interpretability tools is to provide extra bits of optimization to help us understand what's going on and direct it in the correct direction, either explicitly through passing gradients through interpretability tools or just by looking at things early on and understanding what the trajectory is going to be. What you shouldn’t rely on, I think, is being able to fix deception after the fact—the goal should be to prevent the development of deception in the first place. Another important thing I'll point out here is that you can direct your interpretability efforts towards understanding the gradient descent process itself. When people think of interpretability, they're usually focused on interpretability of the final model—which to be clear is something that I think is really useful. I have [this whole post where I talk about what I think the tech tree of transparency and interpretability is like](https://www.alignmentforum.org/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree), where I say, “The first thing is, you have to be able to have some understanding of how and what models do.” But it's not the only thing. There are seven more things that I talk about there. And the point is that, the next thing after understanding an individual model is having some understanding of how we got that model, understanding the training process and the training dynamics—this is usually thought of as inductive bias research. If we had that and then our transparency tools told us that the model had a bad proxy or whatever, then we could do a gradient descent update step that tries to make it so it has a good proxy and actually believe that it would work. Because then we would actually understand what that update step is doing. If we have the ability to understand that the reason that it's pushing the gradient in this direction is for this reason, then we have some reasons to believe that it's not just going to Goodhart on our transparency tools. **Question:** But in this alternative class of interpretability tools, you would still want to get 100% of the training paths or something? It's definitely not necessary that your transparency tools have to be 100% Goodhart-proof. The only thing that has to be true is that gradient descent never finds the situation in which it can Goodhart your transparency tools, and that latter condition is much more achievable. **Question:** I would think that a model doesn't necessarily need to be deceptive for optimizing on an interpretability tool to push it in the direction of being more opaque. Yes. That’s why we want to understand the gradients. Here’s one way to think about this: what are the two adversaries that we're concerned about, if we want our transparency tools to work? Well, there’s the deceptive model that might be trying to trick the transparency tools. And then there's gradient descent. Gradient descent is actively gradienting on our transparency tools, and it's going to try to push them into having some particular result, if you’re gradienting on your transparency tools. I think that using gradient descent on transparency tools is something that we probably eventually want to do, but we are not there yet. Like I said, if you're gradient descenting on your transparency tools, then you need some reasons to believe that that's not going to Goodhart your transparency tools. In my opinion, the way you get that understanding, if you're going to be training on transparency, is by understanding gradient descent! You can't just have transparency tools which understand the model, you also need to understand what is happening as you train the model. And so you can look at, maybe gradient descent proposed this particular change for this particular reason, and you can be like, “Ah, I see, that's a good way to change the model”. And if you see a change that’s like, “I’m going to make a change to make the model opaque”, that’s bad! And so if you understand the dynamics of the proposed changes, then I think you'd get around this problem. Like I said, though, I think this is next level transparency tech, we need to get here to eventually get to the point where we can solve the problem, but we also have to start by at least understanding models at all, and how they work. I think a lot of times people want to jump ahead, and they want to be like, “Oh, we're going to take our current transparency tools, we're just gonna throw them into the loss”. And that doesn't work. But that doesn't mean that transparency is not eventually useful for being able to change how we train models. It's just that there’s multiple steps along the way. **Question:** So the dumbest, simplest transparency tools in my mind are: take a chain-of-thought reasoner, and read its train of thought. Hopefully, that gives you some idea of what it's thinking. No! I don't think that helps with transparency. One thing it does do, and I was saying this earlier, is that chain-of-thought can be helpful for creating an average speed bias. What it doesn't do is, it doesn't actually give us any transparency. Here's a good way to think about it: the deceptively aligned model can put whatever it wants in the transcript that’ll convince you that it's doing the right thing. And the internally and corrigibly aligned model can do the same thing. The point is that there is no constraint on whether you would get the deceptively aligned model or not, if the only way you have to understand its internals is just by looking at the behavioral output in the chain-of-thought. Maybe you could get it so that all of the relevant cognition is visible in the chain of thought. But how? By default, if you just train it to produce a chain-of-thought, each individual forward pass could still be deceptive. **Question:** Weren’t you saying that you'd use the transparency tools to guide the training process prior to the point that it becomes deceptive? Why not do that with chain-of-thought? Yes. And I think we should do that. But here's what I’m worried about. If we're trying to guide it in a particular direction by giving it a behavioral incentive, when we gradient through the behavioral incentive, that looks exactly the same as this whole analysis, right? This whole talk was looking at what happens when we try to incentivize a model to behaviorally act the correct way during training. And so if the only way of doing transparency is well, I have some behavioral checks, and then I'm gonna gradient through those behavioral checks and incentivize it to have good performance—then that's the whole story we were just looking at, the whole problem is that we believe that if we do gradient descent on a behavioral analysis of whether the model is doing a good job, then the, simplest, easiest modification that gradient descent will make to cause the model to do a good job will be to make it deceptive. If that is true, we cannot solve this problem just via adding more behavioral checks and gradient descenting through them. Maybe that's a little bit too strong, because technically, you could probably solve the problem just by picking a behavioral loss function with just the right implicit inductive biases. By picking a particular loss function you’ve made it so that some particular paths are more likely than others. But I think that whatever you're doing someone has to actually get around this analysis, it has to actually get around the argument for why, by default, when you just do a behavioral training, it pushes towards deception. **Question:** Can’t you just reclassify the output of your interpretability tools as an output of the model? The interpretability tools were determined by the latent activations of the model, so isn't it like another output head? You can conceptualize it however you want. I think that the key problem is, what things are you constraining via what you're looking at? If the only constraints I have are on end-to-end behavior, then all I know is that the model must be implementing some algorithm with that end-to-end behavior. But if I’ve looked inside it, seen other things, then I have more constraints on how it has to be operating. It has to be an algorithm that includes X component, or operates in X way, or whatever. The problem with chain-of-thought is not that it is behavioral per se, the problem is that all of the analysis we just did in this talk applies in that case, because all of these different model classes are equally as good at solving that problem. And so because of that, you haven't really created a thing which distinguishes between then, a way to provide optimization pressure in a particular direction. The thing that we want is to have some way to provide optimization pressure that actually distinguishes between these different model classes. Let's say we had magical transparency tools that were able to look inside and find the model’s exact proxies or whatever. What it returns is still an output, but it's an output that actually distinguishes between these model classes. It is an output that is different depending on which one of these things it is and that we can use to produce some selection pressure. **Question:** I guess my response would be that I think chain-of-thought feedback does differentiate based on which model class it is. So, there definitely exists a deceptive model for which the deception is clear in the chain-of-thought. The problem is that there also exists a deceptive model for which deception is not clear in the chain-of-thought. Now, I think it gets you a couple of bits, because the deceptive model, where it is clear in the chain-of-thought, is probably simpler than the deceptive model where it is not clear in the chain-of-thought. And so you have in some sense eliminated the deceptive model where it is very clear in the chain-of-thought. But now you have to contend with the next simplest deceptive model, where the next one doesn't have the property that its deception is very clear in the chain-of-thought. So the question is, just going back to this picture, how many bits did you get? I think you got like two bits. I just don't think it makes that big of a difference. Because we're in a situation where this next simplest deceptive model is still very simple, it's not that much harder to think deceptively in such a way that the deception is not obvious in the chain-of-thought. It's a little bit harder, but it's not that much harder. And so you provided a little bit of optimization, but your intervention is relatively small. If we were 10,000 bits underwater, this is not helpful. If it's really marginal, and it turns out that the number of bits underwater could go either way, then maybe it gives us an extra edge, but my guess is that it's not enough of an edge. **Question:** I just want to point out, one thing that influences this story is how much of our ultimate performance comes from chain-of-thought. That's not clear to me. It seems like the model can think in ways that are illegible to humans, for example. So it can hide its cognition and get the benefits of chain of thought without having to show us that its deceptive. **Question:** What if we use an opaque set of heuristics to generate the output that’s completely illegible to humans. And then from knowing that output, we generate an equivalent human-readable chain of thought. That’s worse—I think that is even worse than the previous thing. Previously, at least doing the chain-of-thought was actually instrumental in getting the answer; we got some reason to believe that, if there was a deceptive model there, it would be a little bit harder to implement in that context. In this case, for the illegible thing, it is just equally as easy for it to be deceptive as non-deceptive in terms of behavior. And then when we go to provide an explanation for it, we just always get the non-deceptive explanation, regardless of whether the actual model is deceptive or not. There's no reason that we ever have to provide the deceptive explanation, even for a deceptive model. This is basically the whole Eliciting Latent Knowledge problem. Remember, the different model classes have the same performance, they have the exact same behavior in training. And so you can provide the same explanation for deceptively aligned or non-deceptively aligned models, and it’ll be just as good, if you're only checking that explanation. --- 1. **Question:** It seems to me like, if you’re just going from point A to point B, it doesn’t matter how you get there, just what the final model is. So, that’s not quite the way I’m thinking about path-dependence. So, we assume that the model’s behavior converges in training. It learns to fit the training data. And so we're thinking about it in terms of them all converging to the same point in terms of training behavior. But there's a bunch of other things that are left undefined if you just know the training behavior, right. We know they all converge to the same training behavior, but the thing we don't know is whether they converge to the same algorithm, whether they converge to the same algorithm, whether they’re going to generalize in the same way. And so when we say it has high path dependence, that means the way you got to that particular training behavior is extremely relevant. The fact that you took a particular path through model space to get to that particular set of training behavior is extremely important for understanding what the generalization behavior will be there. And if we say low path dependence, we're saying it actually didn't matter very much how you got that particular training behavior. The only thing that mattered was that you got that particular training behavior. **Question:** When you say model space, you mean the functional behavior as opposed to the literal parameter space? So there’s not quite a one to one mapping because there are multiple implementations of the exact same function in a network. But it's pretty close. I mean, most of the time when I'm saying model space, I'm talking either about the weight space or about the function space where I'm interpreting the function over all inputs, not just the training data. I only talk about the space of functions restricted to their training performance for this path dependence concept, where we get this view where, well, they end up on the same point, but we want to know how much we need to know about how they got there to understand how they generalize. **Question:** So correct me if I'm wrong. But if you have the final trained model, which is a point in weight space, that determines behavior on other datasets, like just that final point of the path. Yes, that’s correct. The point that I was making is that they converge to the same functional behavior on the training distribution, but not necessarily the same functional behavior off the training distribution. [↩︎](#fnref-P7Jjp9vdyrgug6J2y-1) 2. **Question:** So last time you gave this talk, I think I made a remark here, questioning whether grokking was actually evidence of there being a simplicity prior, because maybe what's actually going on is that there's a tiny gradient signal from not being completely certain about the classification. So I asked an ML grad student friend of mine, who studies grokking, and you're totally right. So there was weight decay in this example. And if you turn off the weight decay, the grokking doesn't happen. Yes, that was my understanding—that mostly what's happening here is that it's the weight decay that's pushing you towards the grokking. And so that's sort of evidence of there actually just being a simplicity prior built into the architecture, that is always going to converge to the same, simple thing. **Question:** But if you turn off the weight decay then the grokking doesn't happen. Well, one hypothesis might be that the weight decay is the thing that forces the architectural prior there. But maybe the strongest hypothesis here is that without weight decay there’s just not enough of a gradient to do anything in that period. **Question:** This isn't a question. For people who aren't familiar with the terminology “weight decay”, it’s the same as L2 regularization? Yep, those are the same. [↩︎](#fnref-P7Jjp9vdyrgug6J2y-2) 3. **Question:** Does Martin Luther over time become internally aligned? As Martin Luther studies the Bible over time, does he become internally aligned with you? No. Because Martin Luther never, from my perspective, at least the way we're thinking about this here—I'm not gonna make any claims about the real Martin Luther—but the way we're thinking about it here is that the Martin Luther models, the thing that they care about is understanding the Bible really well. And so their goal, whatever the Bible is, they're going to figure it out. But they're not going to modify themselves, to become the same as the Bible. Let's say, I’m the Martin Luther model. And I modify myself to care about my current understanding of the Bible. And then I realized that actually the Bible was different than I thought the whole time. That's really bad for me, because the thing I want originally is not to do the thing that my current understanding of the Bible says, it's to do what the Bible actually tells me. And so if I later understand that actually the Bible wants something different, then the Martin Luther models want to be able to shift to that. So they don't want to modify themselves into internal alignment. I should also point out that, the way that we were imagining this, it’s not clear that the model itself has any control over which model it ends up as. Except to the extent that it controls performance, which is how the deceptively aligned model works. **Question:** So Martin Luther is saying, the Bible seems cool so far. I want to learn more about it. But I've reserved the option to not be tied to the Bible. No, Martin Luther loves the Bible and wants to do everything the Bible says. **Question:** So why doesn’t Martin Luther want to change its code to be equal to the Bible? The Bible doesn't say, change your code to be equal to the Bible. The Bible says do these things. You could imagine a situation where the Bible is like, you got to modify yourself to love paper clips, or whatever. In that situation, the model says, well, okay, I guess I gotta modify myself to like paper clips. But Martin Luther doesn’t want to modify himself unless the Bible says to. The problem with modifying themselves is that the Martin Luther models are concerned, like, “Hmm, maybe this Bible is actually, a forgery” or something, right? Or as we'll talk about later, maybe you could end up in a situation where the Martin Luther model thinks that a forgery of the Bible is its true ground source for the Bible. And so it just cares about a forgery. And that's the thing it cares about. [↩︎](#fnref-P7Jjp9vdyrgug6J2y-3) 4. **Question:** The point you just made about pre-training vs. fine-tuning seems backwards. If pre-training requires vastly more compute than the fine-tuning a reward model, then it seems that learning about your reward function is cheaper for compute? Well, it's cheaper, but it's just less useful. Almost all of your performance comes from understanding the world, in some sense. Also, I think part of the point there is that, once you understand the world, then you have the ability to relatively cheaply understand the thing we're trying to get you to do. But trying to go directly to understand the thing we're trying to get you to do—at that point you don't understand the world enough even to have the concepts that enable you to be able to understand that thing. Understanding the world is just so important. It's like the central thing. **Question:** It feels like to really make this point, you need to do something more like train a reinforcement learning agent from random initialization against a reward model for the same amount of compute, versus doing the pre-training and then fine-tune on the reward model. Yeah, that seems like a pretty interesting experiment. I do think we’d learn more from something like that than just going off of the relative lengths of pre-training vs. fine-tuning. **Question:** I still just don’t understand how this is actually evidence for the point you wanted to make. Well, you could imagine a world where understanding the world is really cheap. And it's really, really hard to get the thing to be able to do what you want—output good summaries or whatever—because it is hard to specify what that thing is. I think in that world, that would be a situation where, if you just trained a model end to end on the whole task, most of your performance would come from, most of your gradient updates would be for, trying to improve the model’s ability to understand the thing you're trying to get it to do, rather than improving it’s generic understand the world. Whereas I'm describing a situation where, by my guess, most of the gradient updates would just be towards improving its understanding of the world. Now, in both of those situations, regardless of whether you have more gradient descent updates in one direction or the other, diminishing returns still apply. It’s still the case, whichever world it is, SGD is still going to balance between them both, such that it'd be really weird if you'd maxed out on one before the other. However, I think the fact that it does look like almost all the gradient descent updates come from understanding the world teaches us something about what it actually takes to do a good job. And it tells us things like, if we just try to train the model to do to do something, and then pause it halfway, most of the ability to have good capabilities is coming from its understanding of the world and so we should expect gradient descent to have spent most of its resources so far on that. That being said, the question we have to care about is not which one maxes out first, it's do we max out on the proxy before we understand the training process sufficiently to be deceptive. So I agree that it’s unclear exactly what this fact says about when that should happen. But it still feels like a pretty important background fact to keep in mind here. [↩︎](#fnref-P7Jjp9vdyrgug6J2y-4)
75b7bf60-bdad-4ad7-8242-45ea17b38603
trentmkelly/LessWrong-43k
LessWrong
A Little Depth Goes a Long Way: the Expressive Power of Log-Depth Transformers Authors: Anonymous (I'm not one of them). Abstract: > Most analysis of transformer expressivity treats the depth (number of layers) of a model as a fixed constant, and analyzes the kinds of problems such models can solve across inputs of unbounded length. In practice, however, the context length of a trained transformer model is bounded. Thus, a more pragmatic question is: What kinds of computation can a transformer perform on inputs of bounded length? We formalize this by studying highly uniform transformers where the depth can grow minimally with context length. In this regime, we show that transformers with depth O(log⁡C) can, in fact, compute solutions to two important problems for inputs bounded by some max context length C, namely simulating finite automata, which relates to the ability to track state, and graph connectivity, which underlies multi-step reasoning. Notably, both of these problems have previously been proven to be asymptotically beyond the reach of fixed depth transformers under standard complexity conjectures, yet empirically transformer models can successfully track state and perform multi-hop reasoning on short contexts. Our novel analysis thus explains how transformer models may rely on depth to feasibly solve problems up to bounded context that they cannot solve over long contexts. It makes actionable suggestions for practitioners as to how to minimally scale the depth of a transformer to support reasoning over long contexts, and also argues for dynamically unrolling depth as a more effective way of adding compute compared to increasing model dimension or adding a short chain of thought.
56b870a3-2f97-4172-a629-9be10adaefbe
trentmkelly/LessWrong-43k
LessWrong
Amplification Discussion Notes Paul Christiano, Wei Dai, Andreas Stuhlmüller and I had an online chat discussion recently, the transcript of the discussion is available here. (Disclaimer that it’s a nonstandard format and we weren't optimizing for ease of understanding the transcript). This discussion was primarily focused on amplification of humans (not later amplification steps in IDA). Below are some highlights from the discussion, and I’ll include some questions that were raised that might merit further discussion in the comments. Highlights Strategies for sampling from a human distribution of solutions: > Paul: For example you can use "Use random human example," or "find an analogy to another example you know and use it to generate an example," or whatever. > There is some subtlety there, where you want to train the model that sample from the real human distribution rather than from the empirical distribution of 10 proposals you happen to have collected so far. If samples are cheap that's fine. Otherwise you may need to go further to "Given that [X1, X2, ...] are successful designs, what is a procedure that can produce additional successful designs?" or something like that. Not sure. Dealing with unknown concepts > Andreas: Suppose you get a top-level command that contains words that H doesn't understand (or just doesn't look at), say something like "Gyre a farbled bleg.". You have access to some data source that is in principle enough to learn the meanings of those words. What might the first few levels of questions + answers look like? > Paul: possible questions: "What's the meaning of the command", which goes to "What's the meaning of word X" for the words X in the sentence, "What idiomatic constructions are involved in this sentence?", "What grammatical constructions are involved in the sentence" > Answers to those questions are big trees representing meanings, e.g. a list of properties of "gyre" (what properties the subject and object typically have, under what conditions it is
8b552c29-560d-499c-897e-c12a5f609174
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Is Functional Decision Theory still an active area of research? Looks like discussion of the issue died down in recent years, bit it strikes me as really important to have a good decision theory. So: 1. Are folks still actively working on this, and do you see it as a promising area? 2. It seems like the academic community was dismissive of FDT. Did anyone find a genuine reason to reject it?
e507ad35-09ec-4d65-8a36-ae6c4f5ced87
StampyAI/alignment-research-dataset/arxiv
Arxiv
Adversarial Active Exploration for Inverse Dynamics Model Learning 1 Introduction --------------- Over the past decade, imitation learning (IL) has been successfully applied to a wide range of domains, including robot learning Englert et al. ([2013](#bib.bib9)); Schulman et al. ([2013](#bib.bib22)), autonomous navigation Choudhury et al. ([2017](#bib.bib6)); Ross et al. ([2013](#bib.bib21)), manipulation tasks Nair et al. ([2017](#bib.bib13)); Prieur et al. ([2012](#bib.bib19)), and self-driving cars Codevilla et al. ([2018](#bib.bib7)). Traditionally, IL aims to train an imitator to learn a control policy π only from expert demonstrations. The imitator is typically presented with multiple demonstrations at training time, with a goal to distill them into π. To learn π effectively and efficiently, a large set of high-quality demonstrations are necessary. This is especially significant in current state-of-the-art IL algorithms, such as dataset aggregation (DAgger) Ross et al. ([2011](#bib.bib20)) and generative adversarial imitation learning (GAIL) Ho and Ermon ([2016](#bib.bib11)). Although these approaches have been the dominant algorithms in IL, a major bottleneck for them is their reliance on high-quality demonstrations, which often require extensive supervision from human experts. In addition, a serious flaw in the learned policy π is its tendency to overfit to demonstration data, preventing it from generalizing to new ones. To overcome the aforementioned challenges in IL, a number of methods have been investigated to enhance the generalizability and data efficiency, or reduce the degree of human supervision. Initial efforts in this direction were based on the idea of meta learning Duan et al. ([2017](#bib.bib8)); Finn et al. ([2017](#bib.bib10)); Yu et al. ([2018](#bib.bib30)), in which the imitator is trained with a meta learner and is able to quickly learn a new task from a small set of demonstrations. However, such schemes still require training a meta-learner with tremendous amount of time and demonstration data, leaving much room for improvement. Thus, a rapidly-growing body of literature on the usage of forward/inverse dynamics models to learn π within an environment in a self-supervised manner Agrawal et al. ([2016](#bib.bib2)); Nair et al. ([2017](#bib.bib13)); Pathak et al. ([2018](#bib.bib15)) has emerged in the past few years. One key advantage of this method is that it provides an autonomous way for preparing training data, removing the need of human intervention. In this paper, we call it self-supervised IL. Self-supervised IL allows an imitator to collect training data by itself instead of using predefined extrinsic reward functions or expert supervision during training. It only needs demonstration during inference, drastically decreasing the time and effort required from human experts. Although the core principles of self-supervised IL are straightforward and have been exploited in many fields Agrawal et al. ([2016](#bib.bib2)); Nair et al. ([2017](#bib.bib13)); Pathak et al. ([2017](#bib.bib14)); Pathak et al. ([2018](#bib.bib15)), recent research efforts have been targeted at addressing the challenges of multi-modality and multi-step planning. For example, the use of forward consistency loss and forward regularizer have been extensively investigated to enhance the task performance of the imitator Agrawal et al. ([2016](#bib.bib2)); Pathak et al. ([2018](#bib.bib15)). This becomes especially essential when the lengths of trajectories grow and demonstration samples are sparse, as multiple paths may co-exist to lead the imitator from its initial observation to the goal observation. The issue of multi-step planning has also drawn a lot of attention from researchers, and is usually tackled by recurrent neural networks (RNNs) and step-by-step demonstrations Nair et al. ([2017](#bib.bib13)); Pathak et al. ([2018](#bib.bib15)). The above self-supervised IL approaches report promising results, however, most of them are limited in applicability due to several drawbacks. First, traditional methods of data collection are usually inefficient and time-consuming. Inefficient data collection results in poor exploration, giving rise to a degradation in robustness to varying environmental conditions (e.g., noise in motor control) and generalizability to difficult tasks. Second, human bias in data sampling range tailored to specific interesting configurations is often employed Agrawal et al. ([2016](#bib.bib2)); Nair et al. ([2017](#bib.bib13)). Although a more general exploration strategy called curiosity-driven exploration was later proposed in Pathak et al. ([2017](#bib.bib14)), it focuses only on exploration in states novel to the forward dynamics model, rather than those directly influential to the inverse dynamics model. Furthermore, it does not discuss the applicability to continuous control domains, and fails in high dimensional action spaces according to our experiments in Section [4](#S4 "4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"). Unlike the approaches discussed above, we do not propose to deal with multi-modality or multi-step planning. Instead, we focus our attention on improving the overall quality of the collected samples in the context of self-supervised IL. This motivates us to equip the model with the necessary knowledge to explore the environment in an efficient and effective fashion. In this paper, we propose a simple but efficient IL scheme, called adversarial exploration strategy, that motivates exploration of an environment in a self-supervised manner (i.e., without any extrinsic reward or human demonstration). Inspired by Pinto et al. ([2017](#bib.bib16)); Shioya et al. ([2018](#bib.bib25)); Sukhbaatar et al. ([2018](#bib.bib26)), we implement our system by jointly training a deep reinforcement learning (DRL) agent and an inverse dynamics model competing with each other. The former explores the environment to collect training data for the latter, and receives rewards from the latter if the data samples are considered hard. The latter is trained with the training data collected by the former, and only generates rewards when it fails to predict the true actions performed by the former. In such an adversarial setting, the DRL agent is rewarded only for the failure of the inverse dynamics model. Therefore, the DRL agent learns to sample hard examples to maximize the chances to fail the inverse dynamics model. On the other hand, the inverse dynamics model learns to be robust to the hard examples collected by the DRL agent by minimizing the probability of failures. Thus, as the inverse dynamics model become stronger, the DRL agent is also incentivized to search for harder examples to obtain rewards. Overly hard examples, however, may lead to very biased exploration and make the learning unstable. To stabilize the learning progress of the inverse dynamics model, we further propose a reward structure such that the DRL agent is encouraged to explore moderately hard samples for the inverse dynamics model, but not too hard for the latter to learn. The self-regulating feedback between the DRL agent and the inverse dynamics model allows them to automatically construct a curriculum for exploration. We perform extensive experiments to evaluate adversarial exploration strategy on multiple OpenAI gym Brockman et al. ([2016](#bib.bib5)) robotic arm and hand manipulation environments simulated by the MuJoCo physics engine Todorov et al. ([2012](#bib.bib28)), including FetchReach, FetchPush, FetchPickAndPlace, FetchSlide, and HandReach. Learning to perform these robotic tasks is more practical than learning to perform most of the other OpenAI gym tasks (e.g., Atari games), because only a very limited set of chained actions will result in success. We examine the effectiveness of our method by comparing it against a number of baseline models. The experimental results show that our method is more effective and data-efficient than the baselines in both low- and high-dimensional observation spaces. We also demonstrate that in most of the cases the inverse dynamics model trained by our method is comparable to that directly trained with expert demonstrations in terms of performance. The above observations suggest that our method is superior to the baselines even in the absence of human priors. We further evaluate our method on environments with high-dimensional action spaces, and show that our method is able to achieve higher success rates than the baselines. The contributions of this work are summarized as follows: * [itemsep=.25ex] * We introduce an adversarial exploration strategy for self-supervised IL. It consists of a DRL agent and an inverse dynamics model designed for efficient exploration and data collection. * We employ a competitive scheme for the DRL agent and the inverse dynamics model, enabling them to automatically construct a curriculum for exploration of observation space. * We suggest a reward structure for the proposed scheme to stabilize the training progress. * We validate the proposed method and compare it with the baselines in both low- and high-dimensional state spaces for multiple robotic arm and hand manipulation tasks. * We demonstrate that the proposed method is suitable and effective for environments with high-dimensional action spaces. The remainder of this paper is organized as follows. Section [2](#S2 "2 Background ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") introduces background materials. Section [3](#S3 "3 Methodology ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") describes the proposed adversarial exploration strategy in detail. Section [4](#S4 "4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") reports the experimental results, and provides an in-depth analysis of our method. Section [5](#S5 "5 Conclusion ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") concludes this paper. 2 Background ------------- In this section, we briefly review DRL, policy gradient methods, as well as inverse dynamics model. ### 2.1 Deep Reinforcement Learning and Policy Gradient Methods DRL trains an agent to interact with an environment E. At each timestep t, the agent receives an observation xt∈X, where X is the observation space of E. It then takes an action at from the action space A based on its current policy π, receives a reward r, and transitions to the next observation x′. The policy π is represented by a deep neural network with parameters θ, and is expressed as π(a|x,θ). The goal of the agent is to learn a policy to maximize the discounted sum of rewards Gt: | | | | | | --- | --- | --- | --- | | | \footnotesizeGt=T∑τ=tγτ−tr(xτ,aτ), | | (1) | where t is the current timestep, γ∈(0,1] the discount factor, and T the horizon. Policy gradient methods Mnih et al. ([2016](#bib.bib12)); Sutton et al. ([2000](#bib.bib27)); Williams ([1992](#bib.bib29)) are a class of RL techniques that directly optimize the parameters of a stochastic policy approximator using policy gradients. Although these methods have achieved remarkable success in a variety of domains, the high variance of gradient estimates has been a major challenge. Trust region policy optimization (TRPO) Schulman et al. ([2015](#bib.bib23)) circumvented this problem by applying a trust-region constraint to the scale of policy updates. However, TRPO is a second-order algorithm, which is relatively complicated, and not compatible with architectures that include noise or parameter sharing Schulman et al. ([2017](#bib.bib24)). In this paper, we employ a more recent type of policy gradient methods, called proximal policy optimization (PPO) Schulman et al. ([2017](#bib.bib24)). PPO is an approximation to TRPO, which similarly prevents large changes to the policy between updates, but requires only first-order optimization. Compared to TRPO, PPO is more general, and has better sample complexity (empirically) while retaining the stability and reliability of TRPO 111For more details on PPO, please refer to our supplementary material.. ### 2.2 Inverse Dynamics Model An inverse dynamics model I takes as input a pair of observations (x,x′), and predicts the action ^a required to reach the next observation x′ from the current observation x. It is usually expressed as: | | | | | | --- | --- | --- | --- | | | \footnotesize^a=I(x,x′|θI), | | (2) | where (x,x′) are sampled from the collected data, and θI represents the trainable parameters of the inverse dynamics model. At training time, θI is iteratively updated to minimize the loss function LI: | | | | | | --- | --- | --- | --- | | | \footnotesizeLI(a,^a|θI)=d(a,^a), | | (3) | where d is a distance metric, and a the ground truth action. During testing, a sequence of observations {^x0,^x1,⋯,^xT} is captured from an expert demonstration. A pair of observations (^xt,^xt+1) is fed into the inverse dynamics model at timestep t. Starting from ^x0, the objective of the inverse dynamics model is to predict a sequence of actions and reach the final observation ^xT as close as possible. 3 Methodology -------------- In this section, we first describe the proposed adversarial exploration strategy. We then explain the training methodology in detail. Finally, we discuss a technique for stabilizing the training progress. ### 3.1 Adversarial Exploration Strategy Fig. S1 222Fig. S1 is presented in our supplementary material. shows a framework that illustrates the proposed adversarial exploration strategy, which includes a DRL agent P and an inverse dynamics model I. Assume that Φπ:{x0,a0,x1,a1⋯,xT} is the sequence of observations and actions generated by P as it explores E using a policy π. At each timestep t, P collects a 3-tuple training sample (xt,at,xt+1) for I, while I predicts an action ^at and generates a reward rt for P. In this work, I is modified from Eq. [2](#S2.E2 "(2) ‣ 2.2 Inverse Dynamics Model ‣ 2 Background ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") to include an additional hidden vector ht, which recurrently encodes the information of the past observations. I is thus formulated as: | | | | | | --- | --- | --- | --- | | | \footnotesize^at=I(xt,xt+1|ht,θI), | | (4) | where θI denotes the trainable parameters of I. θI is iteratively updated to minimize LI as follows: | | | | | | --- | --- | --- | --- | | | \footnotesizeminθILI(at,^at|θI)=minθIβ||at−^at||2, | | (5) | where β is a scaling constant. We use mean squared error β||at−^at||2 as the distance metric d(at,^at), since we only consider continuous control domains in this paper. It can be replaced with a cross-entropy loss for discrete control tasks. We directly use LI as rt for P, which is formulated as: | | | | | | --- | --- | --- | --- | | | \footnotesizert(xt,at,xt+1)=LI(at,^at|θI)=β||at−I(xt,xt+1|ht,θI)||2. | | (6) | Our method targets at improving both the quality and efficiency of the data collection process performed by P, as well as the performance of I. Therefore, the goal of the proposed framework is twofold. First, P has to learn an adversarial policy πadv(at|xt) such that its cumulated discounted rewards Gt|πadv=∑Tτ=tγτ−trt(xτ,aτ,xτ+1) is maximized. Second, I requires to learn an optimal θI such that Eq. [6](#S3.E6 "(6) ‣ 3.1 Adversarial Exploration Strategy ‣ 3 Methodology ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") is minimized. Minimizing LI (i.e., rt) leads to decreased Gt|πadv, forcing P to enhance πadv to explore more difficult samples to increase Gt|πadv. This implies that P is motivated to focus on I’s weak points, instead of randomly collecting ineffective training samples. Training I with hard samples not only accelerates its learning progress, but also helps to boost its performance. ### 3.2 Training Methodology 1:Initialize ZP, ZI, E, and model parameters θP & θI 2:Initialize πadv(at|xt,θP) 3:Initialize timestep cumulative counter c=0 4:Set constants Niter, Nepisode, T, and TP 5:for iter i=1 to Niter do 6:       for episode e=1 to Nepisode do 7:             for timestep t=0 to T do 8:                    P perceives xt from E, and predict an action at according to πadv(at|xt,θP) 9:                    xt+1 = E(xt,at) 10:                    ξ=1[t==T] 11:                    Store (xt,at,xt+1,ξ) in ZP 12:                    Store (xt,at,xt+1) in ZI 13:                    if (c % TP) == 0 then 14:                          Initialize an empty batch B 15:                          Initialize a recurrent state ht 16:                          for (xt,at,xt+1,ξ) in ZP do 17:                                 Evaluate ^at=I(xt,xt+1|ht,θI) (calculated from Eq. [4](#S3.E4 "(4) ‣ 3.1 Adversarial Exploration Strategy ‣ 3 Methodology ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning")) 18:                                 Evaluate rt(xt,at,xt+1)=LI(at,^at|θI) (calculated from Eq. [6](#S3.E6 "(6) ‣ 3.1 Adversarial Exploration Strategy ‣ 3 Methodology ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning")) 19:                                 Store (xt,at,xt+1,rt) in B                            20:                          Update θP with the gradient calculated from the samples in B 21:                          Reset ZP                      22:                    c=c+1                      23:       Update θI with the gradient calculated from the samples in ZI (according to Eq. [5](#S3.E5 "(5) ‣ 3.1 Adversarial Exploration Strategy ‣ 3 Methodology ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning")) 24:end Algorithm 1 Adversarial exploration strategy We describe the training methodology of our adversarial exploration strategy by a pseudocode presented in Algorithm [1](#alg1 "Algorithm 1 ‣ 3.2 Training Methodology ‣ 3 Methodology ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"). Assume that P’s policy πadv is parameterized by a set of trainable parameters θP, and is represented as πadv(at|xt,θP). We create two buffers ZP and ZI for storing the training samples of P and I, respectively. In the beginning, ZP, ZI, E, θP, θI, πadv, as well as a timestep cumulative counter c is initialized. A number of hyperparameters are set to appropriate values, including the number of iterations Niter, the number of episodes Nepisode, the horizon T, as well as the update period TP of θP. At each timestep t, P perceives the current observation xt from E, takes an action at according to πadv(at|xt,θP), and receives the next observation xt+1 and a termination indicator ξ (line 9 to 11). ξ is set to 1 only when t equals T, otherwise it is set to 0. We then store (xt,at,xt+1,ξ) and (xt,at,xt+1) in ZP and ZI, respectively. We update θP every TP timesteps using the samples stored in ZP, as shown in (line 13 to 21). At the end of each episode, we update θI with samples drawn from ZI according to the loss function LI defined in Eq. [5](#S3.E5 "(5) ‣ 3.1 Adversarial Exploration Strategy ‣ 3 Methodology ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") (line 23). ### 3.3 Stabilization Technique Although the adversarial exploration strategy is effective in collecting hard samples, it could be problematic if P becomes too strong such that the collected samples are too difficult for I to learn. Overly difficult samples result in a large variance in gradients derived from LI, which in turn lead to a performance drop and instability in the learning progress. We analyze this phenomenon in greater detail in Section [4.5](#S4.SS5 "4.5 Ablative Analysis ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"). To tackle the issue, we propose a training technique that reshapes rt as follows: | | | | | | --- | --- | --- | --- | | | \footnotesizert:=−|rt−δ|, | | (7) | where δ is a pre-defined threshold value. This technique poses a restriction on the range of rt, driving P to gather moderate samples instead of overly hard ones. Note that the value of δ affects the learning speed and final performance. We illustrate the impact of δ on the learning curve of I in Section [4.5](#S4.SS5 "4.5 Ablative Analysis ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"). 4 Experiments -------------- In this section, we present experimental results for a series of robotic tasks, and validate that (i) our method is effective in both low- and high-dimensional observations spaces; (ii) our method is effective in environments with high-dimensional action spaces; (iii) our method is more data efficient than the baseline models; and (iv) our method is robust against action space noise. We first introduce our experimental setup. Then, we report results of both robotic arm and hand manipulation tasks. Finally, we present a comprehensive set of ablative analyses to justify each of our design choices. ### 4.1 Experimental Setup We first describe the environments and tasks. Next, we explain the evaluation procedure and the method for collecting expert demonstrations. We then go through the baselines used for comparison. #### 4.1.1 Environments and Tasks We use OpenAI gym Brockman et al. ([2016](#bib.bib5)) environments simulated by the MuJoCo Todorov et al. ([2012](#bib.bib28)) physics engine, and evaluate our method on a number of robotic arm and hand manipulation tasks. We use the Fetch and Shadow Dexterous Hand Plappert et al. ([2018a](#bib.bib17)) for the arm and hand manipulation tasks, respectively. For the arm manipulation tasks, which include FetchReach, FetchPush, FetchPickAndPlace, and FetchSlide, the imitator (i.e., the inverse dynamic model) takes as input the positions and velocities of a gripper and an object. It then computes the gripper’s action in 3-dimensional space to manipulate it. For the hand manipulation task HandReach, the imitator takes as input the positions and velocities of the fingers of a robotic hand, and computes the velocity of each joint to achieve the goal. In addition to low-dimensional observations (i.e., position, velocity, and gripper state), we also perform experiments for the above tasks using visual observations (i.e., high-dimensional observations) in the form of camera images taken from the third-person perspective. The detailed description of the above tasks is specified in Plappert et al. ([2018a](#bib.bib17)). For the detailed configurations of these tasks, please refer to our supplementary material. #### 4.1.2 Evaluation Procedure All of our experimental results are evaluated and averaged over 20 trials, corresponding to 20 different random initial seeds. In each trial, we train an imitator by the training data collected by its self-supervised data collection strategy. Please note that imitators implemented by different methods have different data collection strategies. We periodically evaluate the imitator-under-test every 10K timesteps. The evaluation is performed by measuring the success rate over 500 episodes. At the beginning of each episode, the imitator receives a sequence of observations {^x0,^x1,⋯,^xT} from a successful expert demonstration. At each timestep t, the imitator infers an action ^at needed to reach an expert observation ^xt+1 from its current observation xt. For a fair comparison, all imitators have the same model architecture, and are trained with the same amount of training data. The detailed configurations of the hyperparameters are summarized and discussed in the supplementary material. #### 4.1.3 Collection of Expert Demonstration For each task mentioned in Section [4.1.1](#S4.SS1.SSS1 "4.1.1 Environments and Tasks ‣ 4.1 Experimental Setup ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"), we first randomly configure task-relevant settings (e.g., goal position, initial state, etc.). We then collect non-trivial and successful episodes generated by a pre-trained expert agent Andrychowicz et al. ([2017](#bib.bib3)). It should be noted that the collected demonstration data only contain observations. The interested reader is referred to our supplementary material for the implementation detail of the pre-trained expert agent, and the methodology we employed to filter out trivial episodes. #### 4.1.4 Baseline Methods We compare our proposed methodology against the following four baseline models in our experiments. * [itemsep=.25ex] * Random: This method collects training samples by random exploration. We consider it an important baseline due to its simplicity and prevalence in self-supervised IL Agrawal et al. ([2016](#bib.bib2)); Nair et al. ([2017](#bib.bib13)); Pathak et al. ([2018](#bib.bib15)). * Demo: This method has the imitator trained directly with expert demonstrations. It provides the performance upper bound, since training data is the same as testing data. * Curiosity: This method trains a DRL agent via curiosity Pathak et al. ([2017](#bib.bib14)); Pathak et al. ([2018](#bib.bib15)) to collect training samples. Unlike the original implementation in Pathak et al. ([2017](#bib.bib14)), we replace its DRL algorithm with PPO, as training should be done on a single thread for a fair comparison with the other baseline methods. We believe it to be an important baseline due to its proven effectiveness in Pathak et al. ([2018](#bib.bib15)). * Noise Plappert et al. ([2018b](#bib.bib18)): In this method, noise is injected to the parameter space of a DRL agent to encourage exploration Plappert et al. ([2018b](#bib.bib18)). Please note that the exploratory behavior relies entirely on the parameter space noise, without the use of any extrinsic reward. This method is included for comparison because of its superior performance and data-efficiency in many DRL tasks. ### 4.2 Performance Comparison in Robotic Arm Manipulation Tasks We compare the performance of the proposed method and the baselines on the robotic arm manipulation tasks described in Section [4.1.1](#S4.SS1.SSS1 "4.1.1 Environments and Tasks ‣ 4.1 Experimental Setup ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"). As opposed to discrete control domains, these tasks are especially challenging, as the sample complexity grows in continuous control domains. Furthermore, the imitator may not have the complete picture of the environment dynamics, increasing its difficulty to learn an inverse dynamics model. In FetchSlide, for instance, the movement of the object on the slippery surface is affected by both friction and the force exerted by the gripper. It thus motivates us to investigate whether the proposed method can help overcome the challenge. In the subsequent paragraphs, we discuss the experimental results in both low- and high-dimensional observation spaces. All of the experimental results are obtained by following the procedure described in Section [4.1.2](#S4.SS1.SSS2 "4.1.2 Evaluation Procedure ‣ 4.1 Experimental Setup ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"). ##### Low-dimensional observation. Fig. [1](#S4.F1 "Figure 1 ‣ Low-dimensional observation. ‣ 4.2 Performance Comparison in Robotic Arm Manipulation Tasks ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") plots the learning curves for all of the methods in low-dimensional observation spaces. In all of the tasks, our method yields superior or comparable performance to the baselines except for Demo, which is trained directly with expert demonstrations. In FetchReach, it can be seen that every method achieves a success rate of 1.0. This implies that it does not require a sophisticated exploration strategy to learn an inverse dynamics model in an environment where the dynamics is relatively simple. It should be noted that although all methods reach the same final success rate, ours learns significantly faster than Demo. In contrast, in FetchPush, our method is comparable to Demo, and demonstrates superior performance to the other baselines. Our method also learns drastically faster than all the other baselines, which confirms that the proposed strategy does improve the performance and efficiency of self-supervised IL. Our method is particularly effective in tasks that require an accurate inverse dynamics model. In FetchPickAndPlace, for example, our method surpasses all the other baselines. However, all methods including Demo fail to learn a successful inverse dynamics model in FetchSlide, which suggests that it is difficult to train an imitator when the outcome of an action is not completely dependent on the action itself. It is worth noting that Curiosity loses to Random in FetchPush and FetchSlide, and Noise performs even worse than these two methods in all of the tasks. We therefore conclude that Curiosity is not suitable for continuous control tasks, and the parameter space noise strategy cannot be directly applied to self-supervised IL. In addition to the quantitative results presented above, we further discuss the empirical results qualitatively. Please refer our supplementary material for the qualitative results. ![](https://media.arxiv-vanity.com/render-output/8096435/figures/fig_learn_curve_simple_robot.png) Figure 1: Performance comparison of robotic arm and hand tasks with low-dimensional observations. ![](https://media.arxiv-vanity.com/render-output/8096435/figures/fig_learn_curve_rgb_robot.png) Figure 2: Performance comparison of robotic arm tasks with high-dimensional observations. ##### High-dimensional observation. The learning curves of all methods in high-dimensional observation spaces are illustrated in Fig. [2](#S4.F2 "Figure 2 ‣ Low-dimensional observation. ‣ 4.2 Performance Comparison in Robotic Arm Manipulation Tasks ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"). It can be seen that our method performs significantly better than the other baseline methods in most of the tasks, and is comparable to Demo. In FetchPickAndPlace, ours is the only method that learns a successful inverse dynamics model. Similar to the results in low-dimensional settings, Curiosity is no better than Random in high-dimensional observation spaces. Note that we do not include the Noise baseline here because it performs worse enough already in low-dimensional settings. ### 4.3 Performance Comparison in Robotic Hand Manipulation Task Fig. [1](#S4.F1 "Figure 1 ‣ Low-dimensional observation. ‣ 4.2 Performance Comparison in Robotic Arm Manipulation Tasks ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") plots the learning curves for each of the methods considered. Please note that Curiosity, Noise and our method are pre-trained with 30K samples collected by random exploration, as we observe that these method on their own suffer from large errors in an early stage during training, which prevents them from learning at all. After the first 30K samples, they are trained with data collected by their respective strategy instead. From the results in Fig. [1](#S4.F1 "Figure 1 ‣ Low-dimensional observation. ‣ 4.2 Performance Comparison in Robotic Arm Manipulation Tasks ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"), it can be seen that Demo easily stands out from the other methods as the best-performing model, surpassing them all by a considerable extent. Although our method is not as impressive as Demo, it significantly outperforms all of the other methods, reaching a success rate of 0.4 while the others are stuck at around 0.2. ### 4.4 Robustness to Noisy Action We benchmark our method in an environment with noisy actions to validate the robustness of our method. In this environment, every action taken by the imitator is injected with a Gaussian noise, which results in unaligned data. Note that we only inject noise in the training phase, as we aim to benchmark the robustness of data-collection strategy. The scale of the injected noise can be found in the supplementary material. In Table. [1](#S4.T1 "Table 1 ‣ 4.4 Robustness to Noisy Action ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"), we report the performance drop rate for each method in all tasks. The performance drop rate is defined as: Prorig−PrnoiseProrig, where Prorig,Prnoise is the performance under the original setting and the action noise setting respectively, and the performance is measured by the highest success rate during training. From Table. [1](#S4.T1 "Table 1 ‣ 4.4 Robustness to Noisy Action ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"), it can be seen that our method has the lowest performance drop rate in most of the tasks, which indicates that our method is robust to noisy actions. Please also note that although Curiosity and Noise also achieve a drop rate of 0% in HandReach and FetchSlide, we do not consider them robust due to their poor performance in the original environment (Fig. [1](#S4.F1 "Figure 1 ‣ Low-dimensional observation. ‣ 4.2 Performance Comparison in Robotic Arm Manipulation Tasks ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning")). Interestingly, our method actually demonstrates an increase in performance in FetchPush and HandReach, but we leave the investigation of this phenomenon for future works. To conclude, we find that the proposed method is more robust to unaligned data than the other baselines, making it a more practical choice in a real world setting. | | | | | | | | --- | --- | --- | --- | --- | --- | | | FetchReach | FetchPush | FetchSlide | FetchPickAndPlace | HandReach | | Random | 0.00% | 0.89% | 23.21% | 39.52% | 32.32% | | Curiosity | 0.00% | 45.48% | 35.67% | 18.61% | 0.00% | | Noise | 0.00% | 90.00% | 0.00% | 12.03% | 40.00% | | Ours | 0.00% | -1.64% | 22.33% | 23.17% | -11.02% | Table 1: Performance drop rate of each method in each task. ### 4.5 Ablative Analysis We further investigate the effectiveness of our method by a detailed analysis of the collected data, the stabilization technique, and the influence of δ. ![](https://media.arxiv-vanity.com/render-output/8096435/figures/fig_learn_curve_ablative_inv_error_with_line.png) Figure 3: Distribution of LI ##### Training error distribution. We plot the distribution of LI (Eq. [5](#S3.E5 "(5) ‣ 3.1 Adversarial Exploration Strategy ‣ 3 Methodology ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning")) of the first 2K collected samples during the training phase in Fig. [3](#S4.F3 "Figure 3 ‣ 4.5 Ablative Analysis ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"), where Ours(w stab) and Ours(w/o stab) denote our method with and without the use of the stabilization technique. The vertical axis corresponds to the number of samples, while the horizontal axis corresponds to LI. The curves in Fig. [3](#S4.F3 "Figure 3 ‣ 4.5 Ablative Analysis ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") are smoothed by kernel density estimation. It can be seen that both Ours(w stab) and Ours(w/o stab) concentrate on notably higher values than Random. This indicates that the adversarial exploration strategy does help collect hard samples for the inverse dynamics model. ##### Effectiveness of stabilization. From Fig. [3](#S4.F3 "Figure 3 ‣ 4.5 Ablative Analysis ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"), it can be observed that Ours(w stab) has a lower mean loss than Ours(w/o stab), which implies that the stabilization technique successfully guides the DRL agent to favor those moderately hard samples. We also observe that the center of loss distribution for Ours(w stab) is close to the value of δ, as shown in Fig. [3](#S4.F3 "Figure 3 ‣ 4.5 Ablative Analysis ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"), confirming that our reward structure guides data collection by δ. To further demonstrate the effectiveness of the stabilization technique, we plot the learning curves of Ours(w stab) and Ours(w/o stab) in Fig. [4](#S4.F4 "Figure 4 ‣ Effectiveness of stabilization. ‣ 4.5 Ablative Analysis ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning"). Although Ours(w/o stab) is comparable to Ours(w stab) for the initial 10K samples, it suffers from a significant degradation in performance for the rest of the training progress. This result indicates that the stabilization technique does improve the overall performance of our method. ![](https://media.arxiv-vanity.com/render-output/8096435/figures/fig_learn_curve_ablative.png) Figure 4: Performance comparison of w/ and w/o stabilization technique in low-dimensional observation spaces ![](https://media.arxiv-vanity.com/render-output/8096435/figures/fig_learn_curve_ablative_delta.png) Figure 5: Performance comparison of different δ ##### Influence of δ. Fig. [5](#S4.F5 "Figure 5 ‣ Effectiveness of stabilization. ‣ 4.5 Ablative Analysis ‣ 4 Experiments ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") plots the learning curves of our methods using δ=0.1 and δ=3.0. From the experimental results, we observe that Ours(0.100) and Ours(3.000) perform comparably, which means that the choice of δ has little influence on the model’s performance. From the analyses presented above, we conclude that the adversarial exploration strategy is effective in improving the overall quality of the collected data. Furthermore, the proposed stabilization technique is not sensitive to the choice of δ, and guides data collection towards moderately hard samples, which assists learning a better inverse dynamics model. 5 Conclusion ------------- In this paper, we present an adversarial exploration strategy that consists of a DRL agent and an inverse dynamics model competing with each other for self-supervised IL. Through our experimental results, we demonstrate that our method improves the efficiency of data-collection and further boosts the overall performance of self-supervised IL imitator in several robotic tasks. In addition, we further show that our method is more robust to the noises in actions. To conclude, our method draws a significant improvement than the other baselines in terms of performance and efficiency. 6 Framework of adversarial exploration strategy ------------------------------------------------ ![](https://media.arxiv-vanity.com/render-output/8096435/figures/fig_framework.png) Figure 6: Framework of adversarial exploration strategy 7 Qualitative Analysis of Robotic Arm Manipulation Tasks --------------------------------------------------------- In addition to the quantitative results presented above, we further discuss the empirical results qualitatively. Through visualizing the training progress, we observe that our method initially acts like Random, but later focuses on interacting with the object in FetchPush, FetchSlide, and FetchPickAndPlace. This phenomenon indicates that adversarial exploration strategy naturally gives rise to a curriculum that improves the learning efficiency, which resembles curriculum learning Bengio et al. ([2009](#bib.bib4)). Another benefit that comes with the phenomenon is that data collection is biased towards interactions with the object. Therefore, the DRL agent concentrates on collecting interesting samples that has greater significance, rather than trivial ones. For instance, the agent prefers pushing the object to swinging the robotic arm. On the other hand, although Curiosity explores the environment very thoroughly in the beginning by stretching the arm into numerous different poses, it quickly overfits to one specific pose. This causes its forward dynamics model to keep maintaining a low error, making it less curious about the surroundings. Finally, we observe that the exploratory behavior of Noise does not change as frequently as ours, Random, and Curiosity. We believe that the method’s success in the original paper Plappert et al. ([2018b](#bib.bib18)) is largely due to extrinsic rewards. In the absence of extrinsic rewards, however, the method becomes less effective and unsuitable for data collection, especially in self-supervised IL. 8 Proximal Policy Optimization (PPO) ------------------------------------- We employ PPO Schulman et al. ([2017](#bib.bib24)) as the RL agent responsible for collecting training samples because of its ease of use and good performance. PPO computes an update at every timestep that minimizes the cost function while ensuring the deviation from the previous policy is relatively small. One of the two main variants of PPO is a clipped surrogate objective expressed as: | | | | | --- | --- | --- | | | LCLIP(θ)=E[πθ(a|s)πθold(a|s)^A,clip(πθ(a|s)πθold(a|s),1−ϵ,1+ϵ)^A)], | | where ^A is the advantage estimate, and ϵ a hyperparameter. The clipped probability ratio is used to prevent large changes to the policy between updates. The other variant employs an adaptive penalty on KL divergence, given by: | | | | | --- | --- | --- | | | LKLPEN(θ)=E[πθ(a|s)πθold(a|s)^A−βKL[πθold(⋅|s),πθ(⋅|s)]], | | where β is an adaptive coefficient adjusted according to the observed change in the KL divergence. In this work, we employ the former objective due to its better empirical performance. 9 Implementation Details of Inverse Dynamics Model --------------------------------------------------- In the experiments, the inverse dynamics model I(xt,xt+1|ht,θI) of all methods employs the same network architecture. For low-dimensional observation setting, we use 3 Fully-Connected (FC) layers with 256 hidden units followed by tanh activation units. For high-dimensional observation setting, we use 3-layer Convolutional Neural Network (CNN) followed by relu activation units. The CNNs are configured as (32, 8, 4), (64, 4, 2), and (64, 3, 1), with each element in the 3-tuple denoting the number of output features, width/height of the filter, and stride. The features extracted by stacked CNNs are then fed forward to a FC with 512 hidden units followed by relu activation units. 10 Implementation Details of Adversarial Exploration Strategy -------------------------------------------------------------- For both low- and high- dimensional observation settings, we use the architecture proposed in Schulman et al. ([2017](#bib.bib24)). During training, we periodically update the DRL agent with a batch of transitions as described in Algorithm. 1. We split the batch into several mini-batches, and update the RL agent with these mini-batches iteratively. The hyperparameters are listed in Table. [2](#S12.T2 "Table 2 ‣ 12 Implementation Details of Noise ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning") (Our method). 11 Implementation details of Curiosity --------------------------------------- Our baseline Curiosity is implemented based on the work Pathak et al. ([2018](#bib.bib15)). The authors in Pathak et al. ([2018](#bib.bib15)) propose to employ a curiosity-driven RL agent Pathak et al. ([2017](#bib.bib14)) to improve the efficiency of data collection. The curiosity-driven RL agent takes curiosity as intrinsic reward signal, where curiosity is formulated as the error in an agent’s ability to predict the consequence of its own actions. This can be defined as a forward dynamics model: | | | | | | --- | --- | --- | --- | | | \footnotesize^ϕ(x′)=f(ϕ(x),a;θF), | | (8) | where ^ϕ(x′) is the predicted feature encoding at the next timestep, ϕ(x) the feature vector at the current timestep, a the action executed at the current timestep, and θF the parameters of the forward model f. The network parameters θF is optimized by minimizing the loss function LF: | | | | | | --- | --- | --- | --- | | | \footnotesizeLF(ϕ(x),^ϕ(x′))=12||^ϕ(x′)−ϕ(xt+1)||22. | | (9) | For low- and high- dimensional observation settings, we use the architecture proposed in Schulman et al. ([2017](#bib.bib24)). The implementation of ϕ depends on the model architecture of the RL agent. For low-dimensional observation setting, we implement ϕ with the architecture of low-dimensional observation PPO. Note that ϕ does not share parameters with the RL agent in this case. For high-dimensional observation setting, we share the features extracted by the CNNs of the RL agent, then feed these features to ϕ which consists of a FC with 512 hidden units followed by relu activation. The hyperparameters settings can be found in Table. [2](#S12.T2 "Table 2 ‣ 12 Implementation Details of Noise ‣ Adversarial Exploration Strategy for Self-Supervised Imitation Learning")(Curiosity). 12 Implementation Details of Noise ----------------------------------- We directly apply the same architecture in Plappert et al. ([2018b](#bib.bib18)) without any modification. Please refer to Plappert et al. ([2018b](#bib.bib18)) for more detail. | Hyperparameter | Value | | --- | --- | | Common | | Batch size for inverse dynamic model update | 64 | | Learning rate of inverse dynamic model | 1e-3 | | Timestep per episode | 50 | | Optimizer for inverse dynamic model | Adam | | Our method | | Number of batch for update inverse dynamic model | 25 | | Batch size for RL agent | 2050 | | Mini-batch size for RL agent | 50 | | Number of training iteration (Niter) | 200 | | Number of training episode per iteration (Nepisode) | 10 | | Horizon (T) of RL agent | 50 | | Update period of RL agent | 2050 | | Learning rate of RL agent | 1e-3 | | Optimizer for RL agent | Adam | | δ of stabilization | 1.5 | | Curiosity | | Number of batch for update inverse dynamic model | 500 | | Batch size for RL agent | 2050 | | Mini-batch size for RL agent | 50 | | Number of training iteration (Niter) | 10 | | Number of training episode per iteration (Nepisode) | 200 | | Horizon (T) of RL agent | 50 | | Update period of RL agent | 2050 | | Learning rate of RL agent | 1e-3 | | Optimizer for RL agent | Adam | | Noise | | Number of batch for update inverse dynamic model | 500 | | The other hyperparameters | Same as Plappert et al. ([2018b](#bib.bib18)) | Table 2: Hyperparameters settings. 13 Implementation Details of Demo ---------------------------------- We collect 1000 episodes of expert demonstrations using the procedure defined in Sec. S8 for training Demo. Each episodes lasts 50 timesteps. The demonstration data is in the form of a 3-tuple (xt,a,xt+1), where xt is the current observation, at the action, and xt+1 the next observation. The pseudocode for training Demo is shown in Algorithm. S1 below. In each training iteration, we randomly sample 200 episodes, namely 10k transitions (line 4). The sampled data is then used to update the inverse dynamics model (line 5). 1:Initialize ZDemo, θI 2:Set constants Niter 3:for iter i=1 to Niter do 4:       Sample 200 episodes of demonstration from ZDemo as B 5:       Update θI with the gradient calculated from the samples in B (according to Eq. 6) 6:end Algorithm 2 Demo 14 Configuration of Environments --------------------------------- We briefly explain each configuration of the environment below. For detailed description, please refer to Plappert et al. ([2018a](#bib.bib17)). * FetchReach: Control the gripper to reach a goal position in 3D space. The imitator can fully comprehend the environment dynamics. * FetchPush: Control the Fetch robot to push the object to a target position. The imitator cannot fully comprehend the environment as the movement of the gripper may not affect the object. * FetchPickAndPlace: Control the gripper to grasp and lift the object to a goal position. In addition to the imitator not having the complete picture of the environment dynamics, this task requires a more accurate inverse dynamics model. * FetchSlide: Control the robot to slide the object to a goal position. The task requires an even more accurate inverse dynamics model, as the object’s movement on the slippery surface is hard to predict. * HandReach: Control the Shadow Dextrous Hand to reach a goal hand pose. The task is especially challenging due to high-dimensional action spaces. 15 Setup of Expert Demonstration --------------------------------- We employ Deep Deterministic Policy Gradient combined with Hindsight Experience Replay (DDPG-HER) Andrychowicz et al. ([2017](#bib.bib3)) as the expert agent. For training and evaluation, we run the expert to collect transitions for 1000 and 500 episodes, respectively. To prevent the imitator from succeeding in the task without taking any action, we only collect successful and non-trivial episodes generated by the expert agent. Non-trivial episodes are filtered out based on the following task-specific schemes: * FetchReach: An episode is considered trivial if the distance between the goal position and the initial position is smaller than 0.2. * FetchPush: An episode is determined trivial if the distance between the goal position and the object position is smaller than 0.2. * FetchSlide: An episode is considered trivial if the distance between the goal position and the object position is smaller than 0.1. * FetchPickAndPlace: The episode is considered trivial if the distance between the goal position and the object position is smaller than 0.2. * HandReach: We do not filter out trivial episodes as this task is too difficult for most of the methods. 16 Setup of Noisy Action ------------------------- To test the robustness of our method to noisy actions, we add noise to the actions in the training stage. Let ^at denote the predicted action by the imitator. The actual noisy action to be executed by the robot is defined as: | | | | | --- | --- | --- | | | ^at:=^at+N(0,σ), | | where σ is set as 0.01. Note that ^at will be clipped in the range defined by each environment.
858ba95d-2105-43e1-b33f-88b6adc5b346
trentmkelly/LessWrong-43k
LessWrong
Introduction to Reducing Goodhart (This work was supported by CEEALAR and LTFF. Thanks to James Flaville, Jason Green-Lowe, Michele Campolo, Justis Mills, Peter Barnett, and Steve Byrnes for conversations.) I - Prologue A few months ago, I wanted to write about AI designs that evade Goodhart's law. But as I wrote that post, I became progressively more convinced that framing things that way was leading me to talk complete nonsense. I want to explore why Goodhart's law led me to talking nonsense and try to find a different (though not entirely original, see Rohin et al., Stuart 1, 2, 3) framing of core issues, one which avoids assuming that we can model humans as idealized agents. This post is the first of a sequence of five posts about Goodhart's law and AIs that learn human values (a research problem also called value learning). In this introduction I'll point out why you can't just do things the straightforward way. Leave a comment below telling me what's unclear, or what you disagree with. II - Introduction Goodhart's law is the observation that when you try to pick a specific observable to optimize for, the act of optimization will drive a wedge between what you're optimizing and what you want, even if they used to be correlated. For example, if what you really want is for students to get a general education, and there's a short 100-question test that correlates with how much students know, it might seem like a good idea to change schools in whatever way increases test scores. But this would lead to teaching the students only those 100 test questions and not anything else - optimizing for a proxy for education actually made the education worse. In Scott Garrabrant's terminology from Goodhart Taxonomy, suppose that we have some true preference function V (for "True Values") over worlds, and U is some proxy that has been correlated with V in the past. Then there are a few distinct reasons why maximizing U may score poorly according to V. Things that seem really good according to U can just be
5484ddc1-c3d6-42fb-a6f7-5e797623e592
trentmkelly/LessWrong-43k
LessWrong
Video lectures on the learning-theoretic agenda This is a YouTube playlist of recorded lectures on the learning-theoretic AI alignment agenda (LTA) I gave for my MATS scholars of the Winter 2024 cohort, edited by my beloved spouse @Marcus Ogren. H/t William Brewer for helping with the recording, and the rest of the MATS team for making this possible. I hope these will become a useful resource for anyone who wants to get up to speed on the LTA, complementary to the reading list. Notable topics that aren't covered include metacognitive agents (although there is an older recorded talk on that) and infra-Bayesian physicalism. In the future, I might record more lectures to expand this playlist. EDIT: I know the audio quality is bad, and I apologize. I will try to do better next time. Table of Contents 1. Agents and AIXI 2. Hidden rewards and the problem of privilege 3. Compositionality 4. Nonrealizability 5. It's a trap! 6. Traps, continued 7. Traps and frequentist guarantees 8. Game theory and learning theory 9. Hidden rewards 10. Algorithmic Descriptive Agency Measure (ADAM) 11. General reinforcement learning 12. Infra-Bayesianism 13. Learnability 14. Infra-Bandits 15. Newcombian problems 16. Ultradistributions and semi-environments 17. Formalizing Newcombian problems 18. Pseudocausality and a general formulation of Newcombian problems 19. Decision rules and pseudocausality 20. Instrumental reward functions 21. Infra-Bayesian haggling, part 1 22. Infra-Bayesian haggling, part 2 23. Anytime algorithms in multi-agent settings 24. Bounded inductive rationality
d0126a5b-98a8-408d-846f-7d53e4575850
trentmkelly/LessWrong-43k
LessWrong
Moderators, please wake up and start protecting this community! We have a person systematically abusing the voting mechanisms of this website. Old users are annoyed. New users are driven away. There are whole threads created by people complaining about this abuse, but without moderator powers, there is nothing we can do. So we just complain endlessly and become more and more frustrated. This is already going on for months.   "The garden is tainted now, and it is less fun to play in." -- Well-Kept Gardens Die By Pacifism   Unless this problem is solved by someone responsible, I promise I will not write here a single comment or article, until August 31st 2014, as an expression of solidarity with the victims of mass downvoting, and disagreement with the inaction of moderators. (Meetup announcements are an exception to this.) Feel free to join this pledge.
8e75bffc-5262-4234-bc7b-5ea72955dd11
trentmkelly/LessWrong-43k
LessWrong
Values determined by "stopping" properties ,,,,,,,,,, The lotus-eaters are examples of humans who have followed hedonism all the way through to its logical conclusion. In contrast, the "mindless outsourcers" are a possible consequence of the urge to efficiency: competitive pressures making uploads choose to destroy their own identity. In my "Mahatma Armstrong" version of Eliezer's CEV, a somewhat altruistic entity ends up destroying all life, after a series of perfectly rational self-improvements. And in many examples where AIs are supposed to serve human preferences, these preferences are defined by a procedure (say, a question-answer process) that the AI can easily manipulate. Stability and stopping properties Almost everyone agrees that human values are under-determined (we haven't thought deeply and rigorously about every situation) and changeable by life experience. Therefore, it makes no sense to use "current human values" as a goal; this concept doesn't even exist in any rigorous sense. So we need some way of extrapolating true human values. All the previous examples could be considered examples of extrapolation, and they all share the same problem: they are defined by their "stopping criteria" more than by their initial conditions. For example, the lotus eaters have reached a soporific hedonism they don't want to wake out of. There no longer is "anyone there" to change anything in the mindless outsourcers. CEV is explicitly assumed to be convergent: convergent to a point where the idealised entity no longer sees any need to change. The AI example is a bit different in flavour, but the "stopping criteria" are whatever the human /chooses/is tricked into/is forced into/ saying. This means that the AI could be an optimisation process pushing the human to say whatever it wants us to. Importantly, all these stopping criteria are local: they explicitly care only about the situation when the stopping criteria is reached, not about the journey there, nor the initial conditions. Processes can drift very
d45f86dd-a1fa-47ee-80c7-8b1755cc94d2
trentmkelly/LessWrong-43k
LessWrong
Rebuttals for ~all criticisms of AIXI Written as part of the AIXI agent foundations sequence, underlying research supported by the LTFF. Epistemic status: In order to construct a centralized defense of AIXI I have given some criticisms less consideration here than they merit. Many arguments will be (or already are) expanded on in greater depth throughout the sequence. In hindsight, I think it may have been better to explore each objection in its own post and then write this post as a summary/centralized reference, rather than writing it in the middle of that process. Some of my takes have already become more nuanced. This should be treated as a living document. With the possible exception of the learning-theoretic agenda, most major approaches to agent foundations research construct their own paradigm and mathematical tools which are not based on AIXI. Nothing in 2024's shallow review of technical AI safety seems to advance the theory of AIXI or even use its tools. Academic publications on the topic are also quite sparse (in my opinion some of the last major progress took place during Jan Leike's PhD thesis in the 2010's). The standard justification is that AIXI fails as a realistic model of superintelligence, particularly the aspects we care about for alignment such as recursive self-improvement or the construction of superintelligence. Some of the criticisms have merit, but most of them are so informal that it is not clear whether or precisely why they are correct. The "why" is centrally important because if AIXI is an excellent model for a restricted setting of superintelligence (arguably far more appropriate than any competing model), as even its critics usually accept, and if it can be shown to fall short in some cases relevant to alignment obstacles, the precise details are likely to be useful for understanding the nature of those alignment obstacles, possibly even illuminating when those obstacles arise. Similarly, failures of AIXI as a recipe for constructing recursively self-improving superin
f76de753-11c4-4a55-b875-3078e469ae5b
StampyAI/alignment-research-dataset/special_docs
Other
The underwriter and the models-solo dances or pas-de-deux? What policy data can tell us about how underwriters use models msamlin.comThe underwriter and the models - solo dances or pas-de-deux? What policy data can tell us about how underwriters use models Stuart Armstrong, Mario Weick, Anders Sandberg, Andrew Snyder-Beattie, Nick Beckstead The underwriter and the models - solo dances or pas-de-deux? 1Executive Summary Using a collection of data on some of the catastrophe policies written since 2006 by a major reinsurance organisation, this paper explores how tightly underwriters follow the models and under what conditions they deviate from them. The data included underwriter premium and LE (loss estimate), and sometimes included LE from up to four different models (AIR, RMS ALM, RMS DLM, and IHM – the in-house model). We analysed the data in order to see what could be said about the relationship between model LE and premiums (as well as underwriter LEs). Mimicking a common procedure in machine learning, the data was randomly divided into a training set and a testing set, allowing many different theories to be investigated on the training set without risk of overfitting and detecting spurious connections. ResultsThere were three main results, all statistically significant and with large effect sizes: 1. The models gave good predictions as to what the underwriter premium and LE would be. In a regression test, 79% of the variance in the premium, and 88% of variance in the LE, was explained by variance in the models. In fact, most of this variance was explained by the mean LE of the four models – 78% and 87% respectively, corresponding to correlations of 0.88 and 0.93. 2. As the modelled loses rose, underwriter estimates moved closer to those of the models. This was evident through a variety of measures: the underwriters would report using more models than otherwise, the underwriter LE would become more strongly correlated with the mean model LE, and they would have less extreme premium/LE ratios (meaning that the LE information was being used more to fix the premium). It should be noted that one effect that we expected to find - that underwriters were more willing to follow the models if these were more closely bunched together - was not present in the data. It seems that the underwriter does not make much use of model spread.The role of underwriters As far as can be seen in the data, the underwriters premiums (and LE) were strongly correlated with the models’ LE estimates. The higher the expected loss (as seen by model LE), the less likely the underwriters were to deviate from the models. That high correlation may seem to suggest a limited role to the underwriter. However, this conclusion is premature for several reasons. Most significantly, this dataset only included policies that the reinsurance organisation had actually written, so the role of underwriters in rejecting policies could have been very important. There are also issues of yearly variation and changing market conditions (2012, for instance, seems to be an outlier in many ways). This analysis also ignores the effect of underwriters negotiating and interacting with brokers – even if every good underwriter were to sign similar kinds of deals, this does not mean the underwriters were superfluous. It is significant that the models were more predictive of underwriter LE than of premium (which would be influenced by negotiation). Underwriters may also play an important role in correcting erroneous information in the policy, and making sure that the correct models were applied in the first place. Finally, there were no details of outcomes in the data (which policies led to pay-outs, and by how much?), limiting our ability to estimate underwriter expertise. Thus it is possible and likely that the underwriter played (or could play) a more synergistic role with models, focusing on quality control, market insights, and business relations. 2 The underwriter and the models - solo dances or pas-de-deux? The underwriter and the models - solo dances or pas-de-deux? 3This thesis can be strongly questioned. First, the social role of the underwriter should not be underestimated: they are involved in negotiations with brokers, and require perceptiveness of the opinions and behaviour of the other market actors. Secondly, when the practical role of the underwriter was analysed in detail, some of the work involved coping with poor data quality and correcting model errors. 5 Though these tasks do not appear on the list of bottlenecks to automation, they are certainly tasks that cannot be easily automated, as they represent a failure of the automation process itself. Currently (and for the foreseeable future), only humans possess the skills to apply these kind of corrections, which often involve deducing what kind of errors have occurred or what kind of extra data could improve the situation. Thirdly, underwriters may be using strategic intelligence to select models and to maintain the sort of overall vision that could resolve larger systemic risks. 6 Therefore it would be incorrect to see the underwriters as necessarily in direct competition with the models, but as occupying (potentially) different and complementary roles. This was one of the approaches advocated in the “autopilot problem” paper: change the “pilot’s” (or underwriter’s) role. 7 Merely relying on effective automation technology can both weaken the skills of the human and make the whole endeavour vulnerable to situations where the automation fails for some reason. However, making use of particular human abilities to control or complement the automation allows for both greater performance and better robustness. For instance, though simple linear models outperform expert predictions in many domains, these simple linear models could only be constructed thanks to expert knowledge of the important factors. 8 Similarly, experienced underwriters are potentially aware of what information is relevant in a particular type of case – and whether such a case is at hand. The quality of expertise is highly dependent on features of the task, 9 rather than on features of the expert, so it is important that the task the underwriter performs be well designed to make best use of their human qualities (especially as expertise tends to be quite specific to the task performed 10). If this can be achieved, the underwriter and models of the future will amplify each other. We thus need to understand how underwriters currently interact with models. To this end, a major reinsurance organisation has made available several years’ worth of records on policies priced by its underwriters and by its models so that a comparison could be made and the role of the underwriter teased out. The most useful features of this data were the premiums that were actually charged, the underwriters’ Loss Estimate (LE), and the same LEs as given by the models. Using this data, this paper explores how tightly underwriters follow the models and under what conditions they deviate from them. 5 This was the judgement formed by three of the paper’s authors during several periods of immersion at the reinsurance organisation, which involved detailed conversations with, and questioning of, employees in various roles within the company. 6 Sandberg, Anders. “Defining Systemic Risk” in “Systemic Risk of Modelling.” Joint Future of Humanity Institute-MS Amlin White Paper 1 (2014). 7 Armstrong, Stuart. “The Autopilot Problem” in “Systemic Risk of Modelling.” Joint Future of Humanity Institute-MS Amlin White Paper 2 (2014). 8 Dawes, Robyn M. “The robust beauty of improper linear models in decision making.” American psychologist 34.7 (1979): 571. 9 Shanteau, James. “Competence in experts: The role of task characteristics.” Organizational behavior and human decision processes 53.2 (1992): 252-266. 10 Weiss, David J, and James Shanteau. “Decloaking the privileged expert.” Journal of Management and Organization 18 (2012): 300-310.Introduction The underwriters have traditionally been the key players in the insurance industry, making the final decision on any particular policy. It is their responsibility to negotiate a price and ultimately accept or decline to insure the risk. But recent years have seen the emergence of another key insurance player: computerised models that give their own estimation of risk, exposure, and other critical features of a policy. In CAT (catastrophe) insurance, these models are now used extensively by insurers, re-insurers, and regulators, and have underpinned the risk-linked securities markets. 1 In this new, model-centric world, what is the current role of the underwriter? More usefully, what will the role of the underwriter become? Some studies suggest that the underwriter is soon to be replaced by automation. 2 The Frey and Osborne study analysed current automation trends and concluded, based on O*NET data (an online job classification service developed for the US Department of Labor 3), that underwriting involved none of the skills estimated to be hard to automate such as manual dexterity, strategic intelligence or socially dependent tasks 4 (see Table 1). Computerisation bottleneckO*NET Variable O*NET Description Perception and ManipulationFinger Dexterity The ability to make precisely coordinated movements of the fingers of one or both hands to grasp, manipulate, or assemble very small objects. Manual Dexterity The ability to quickly move your hand, your hand together with your arm, or your two hands to grasp, manipulate, or assemble objects. How often does this job require working in cramped work spaces that requires getting into awkward positions? Creative Intelligence Originality The ability to come up with unusual or clever ideas about a given topic or situation, or to develop creative ways to solve a problem. Fine Arts Knowledge of theory and techniques required to compose, produce, and perform works of music, dance, visual arts, drama, and sculpture. Social Intelligence Social Perceptiveness Being aware of others’ reactions and understanding why they react as they do. Negotiation Bringing others together and trying to reconcile differences. Persuasion Persuading others to change their minds or behaviour. Assisting and Caring for OthersProviding personal assistance, medical attention, emotional support, or other personal care to others such as coworkers, customers, or patients. Table 1 O*NET variables that are bottlenecks to automation, according to Frey and Osborne. 1 US Gov’t Accountability Office, GAO-02-941, “Catastrophe Insurance Risks: The Role of Risk-Linked Securities and Factors Affecting Their Use” (2002). 2 Frey, Carl Benedikt, and Osborne, Michael. “The future of employment: how susceptible are jobs to computerisation?” Oxford Martin School Working Paper (2013). 3 http://www.onetonline.org/ 4 Or that, when it did involve these skills, that they were not fundamental to the job. 4 The underwriter and the models - solo dances or pas-de-deux? The underwriter and the models - solo dances or pas-de-deux? 5Results Three primary results were prominent in this initial analysis. First, variance in the model’s LE explained the majority of the variance in premium (and in underwriter’s LE). Second, underwriters tend to be conservative in estimating losses, more often setting expected losses above those of the models rather than below them. Last, the premiums (and underwriters’ LE) moved closer to the models for more expensive policies. The causes and implications of these results are still uncertain – this preliminary analysis is only capable of identifying correlations, not causations. Result 1: Heavy Use of Quantitative Models A variety of regressions supported the notion that variations in model LEs explain most of the variance in premium (and underwriter LEs). Regressing 17 premium against the four model’s LE resulted in an R2 of 0.79, while regressing18 premium against mean LE (of all four models) resulted in an R 2 of 0.78 (implying that 79% and 78% of the variance in premium is explained by variance in model LE and mean model LE, respectively). Note that the high R 2 need not mean that the underwriters are explicitly using the mean LE in their estimates – the models are highly correlated with each other, as can be seen in Table 2. Thus many linear or quasi-linear combinations of models will be highly correlated with the models, with their mean, and hence with premium. Indeed, the correlation between premium and models is quite comparable with the correlation the models have with each other. The premium is actually more highly correlated with the other models than the in-house model (IHM) is. Or, put another way, premium deviation with model LE is comparable to the noise 19 in the model LEs. Premium AIR LE RMS ALM LE RMS DLM LE IHM LE Mean LE Premium 1 0.825 0.808 0.796 0.852 0.881 AIR LE 0.825 1 0.909 0.864 0.878 0.966 RMS ALM LE 0.808 0.909 1 0.854 0.797 0.930 RMS DLM LE 0.796 0.864 0.854 1 0.758 0.915 IHM LE 0.852 0.878 0.797 0.758 1 0.937 Mean LE 0.881 0.966 0.930 0.915 0.937 1 Table 2 Correlations between premium, model LEs, and mean model LE. 17 Of the form Ui = β0 + β1MAIRi + β2 MRMS-ALMi + β3 MRMS-DLMi +β4 MGMi +εi where U is the underwriters’ LE, MX is the LE of the X’th model, and εi is the error term for the i’th policy. 18 Of the form Ui = β0 + β1 MMEANi +εi where U is the underwriters’ LE, Mi is the average LE of all four models, and εi is the error term for the i’th policy. 19 Using model LE variation as an informal measure of noise.Data and Methods The reinsurance organisation provided a large collection of policies with appropriate premium and LE information. Data was taken from two sources: a reinsurance system used to record catastrophe model outputs; and an underwriting system used to record class of business and other risk details. The data were compiled over several years since 2006, with modellers recording model outputs and underwriting teams recording risk details as business was transacted. The data included premium charged, underwriter LE, general location, year, and potential LEs from up to four different models. LE is the loss estimate – the mean amount of money that the underwriter expects their company will pay out on that policy. The more usual “loss on line” is the LE for a particular “layer” of insurance, divided by the limit for that layer. The four models were AIR Catrader (AIR), RMS RiskLink Aggregate level model (RMS ALM), RMS RiskLink Detailed level model (RMS DLM), and the organisation’s in-house model (IHM). After cleaning the data and restricting to US/ Canada policies (where the models are the most reliable), we were left with a collection of 660 policies where all four models were used 11 – no further selection was applied to this set.It was decided to split the data into a training and a testing set. This allows “cross validation”, where hypotheses are formed on the training set and tested on the testing set. 12 This prevents overfitting, in which hypotheses are tailored to narrowly to the data, modelling noise rather than signal. 13 The training set would be thoroughly analysed to generate hypotheses; these would then be tested for statistical significance on the testing set. The ideal size of the testing set is 1/3 of the original set. 14 As the splitting into testing and training was done before restricting down to the 660 policies, this subset ended up divided into 457 policies in the training set, and 203 in the testing set. A total of 32 hypotheses were formed on the training set, which were then tested on the testing set, and all were found to be significant at the 5% level 15, even when accounting for multiple comparisons16. Most of these hypotheses were linear regressions, but other comparisons were made as well (see detail of results). 11 Out of a collection of 6138 US policies which used four or less models. 12 Picard, Richard R, and R Dennis Cook. “Cross-validation of regression models.” Journal of the American Statistical Association 79.387 (1984): 575-583. 13 Hawkins, Douglas M. “The problem of overfitting.” Journal of chemical information and computer sciences 44.1 (2004): 1-12. 14 Dobbin, Kevin K, and Richard M Simon. “Optimally splitting cases for training and testing high dimensional classifiers.” BMC medical genomics 4.1 (201 1): 31. 15 And all but two were significant at the 1% level. 16 Benjamini, Yoav. “Simultaneous and selective inference: current successes and future challenges.” Biometrical Journal 52.6 (2010): 708-721. 6 The underwriter and the models - solo dances or pas-de-deux? The underwriter and the models - solo dances or pas-de-deux? 7Result 2: Conservative Pricing Underwriters tended to set their LEs somewhat conservatively. Underwriter LE were above the minimum model LE 94% of the time (as compared to below the maximum model 83% of the time). See Table 5 in next section for breakdown of this into highest and lowest quartiles. 22 This was especially interesting as the maximum value and the mean were both better predictors of underwriter LE than the minimum (see Figure 1 and Table 4). Thus, it seems the minimum value was a rough lower bound for underwriters, but that they used little information from it beyond this. 22 Note that we are using quartile in the sense of a set of data representing a quarter of the policies, not in the sense of the three values (lower quartiles, median, upper quartile) that divide the ranked data into those four sets.This strong correlation could be a sign of an autopilot problem20, if underwriters put too much trust in the models. In the meantime, we can analyse how these correlations change over time. For example, Table 3 demonstrates how in 2012, the fit between premium and model LE declined, as compared to surrounding years. This was around the period when the controversial “RMS 11” windstorm model was in use. Though the controversy was mainly around European windstorms, not US/Canada ones, this could have had an impact on underwriter trust in models. Alternatively, the large losses in 2011 could have played a similar role. 21 Year # of policiesAll Model LE to Premium R2Mean Model LE to Premium R2All Model LE to Underwriter LE R2Mean Model LE to Underwriter LE R2 2010 74 91.32% 76.08% 91.65% 89.87% 2011 345 84.94% 82.01% 91.65% 90.94% 2012 145 75.69% 69.56% 80.13% 75.58% 2013 92 95.15% 83.22% 94.83% 88.18% Mean 86.78% 77.72% 89.57% 86.14% Combined 656 78.80% 77.55% 87.63% 86.95% Table 3 R2 between premiums and Underwriter LE with models, by year. 20 Armstrong, Stuart. “The Autopilot Problem” in “Systemic Risk of Modelling.” Joint Future of Humanity Institute-MS Amlin White Paper 2 (2014). 21 Aon Benfield. “Reinsurance Market Outlook.” Aon Benfield Analytics (2013). Figure 1 - Underwriter’s LE as a function of cat model LE. With a set of models, underwriters are unlikely to set LE below the lowest model, and tend to stick fairly close to the mean model estimate.Table 4 Correlation coefficients between underwriter LE and the mean, maximum, and minimum of the model LEs.Mean Max Min Underwriter LE 0.933 0.913 0.863 8 The underwriter and the models - solo dances or pas-de-deux? The underwriter and the models - solo dances or pas-de-deux? 9There were many indications that the underwriters tended to move closer to the model estimate as the LE moved higher. First of all was the simple fact that they used (or at least recorded the use of) more models. The full US/Canada data (6138 policies) was decomposed into categories dependent on the number of models recorded 23, and the median premium and underwriter LE of each category was computed (see Figure 2). For illustration, the interquartile distance was added in the case of zero and four models. Though the ranges overlap, those policies which recorded two or less models clearly had lower LEs and premiums than those that recorded three models or four. Figure 2 - Median underwriter LE and premiums, depending on how many models were reported for that policy . The inter-quartile range for 4 models and 0 models is also plotted. 23 5071 policies did not formally record a modelled loss, 27 had one, 85 had two, 295 had 3 and 660 had all four models.020000400006000080000100000120000 0 50000 100000 150000 200000 250000ELC PremiumPremium  and  ELC  medians  for  varying  nbr.  of  models 0  model  median 1  model  median 2  model  median 3  model  median 4  model  medianResult 3: Model Dependency for Biggest Risks Premium and LE medians for varying number of modelsLEFurthermore, the correlation between underwriter LE and model LE increased for higher model LE. To see this, the 660 policies with four models were separated into quartiles. 24 The correlation between the underwriter LE and mean, minimum, and maximum model LE was computed in each quartile (Table 5). The data suggests underwriters hewed closer to the model information in these quartiles. A last way of looking at the difference in behaviour, is to consider what happens when the underwriter has established their LE, and compare this with the final premium. The data indicates that for high underwriter LE 25, the premium/LE ratio is likely to be less extreme that for low ones. The linear anti-correlation is weak (-11%), but the effect is clearer graphically (see Figure 3). Thus the premium is more influenced by LE estimates when these are large. Figure 3 - Ratio of premium to underwriter LE plotted against underwriter LE.Table 5 Correlation coefficients between underwriter LE and mean model LE, maximum model LE, and minimum model LE, separated onto the four quartiles.1st quartile 2nd quartile 3rd quartile 4th quartile Mean 0.449 0.233 0.598 0.876 Max 0.486 0.387 0.363 0.847 Min 0.231 -0.035 0.329 0.747 24 That is, the policies were ranked according to their mean LE, and they were split into four sets at lower quartile, the median, and the upper quartile. By an abuse of nomenclature, these four sets are also called quartiles. 25 Since the underwriter would set their own LE before setting a premium, it is valid to use underwriter LE on the x-axis at this point.Premium/LE ratio vs underwriter LEPremium/LE ratio Underwriter LE The underwriter and the models - solo dances or pas-de-deux? 11 10 The underwriter and the models - solo dances or pas-de-deux?To establish the results overleaf, a total of 32 hypotheses were tested. The significance of each result was established using the testing set, with each p-value adjusted – increased 26 – using the Holm- Bonferroni method. See table 6 (where “HBp-value” is the Holm-Bonferroni27 adjusted p-value). As can be seen, all hypotheses were significant at the 5% level, and all but two were significant at the 1% level (these significance levels were marked in bold). Hypothesis p-value HBp-value Hypothesis p-value HBp-value LE model mean correlation 2.20E-16 7.04E-15 LE all model regression 2013 3.37E-12 5.39E-11 LE model mean correlation 2010-20132.20E-16 6.82E-15Premium all model regression 20137.18E-12 1.08E-10 LE model mean correlation 20112.20E-16 6.60E-15Premium correlation model mean 20107.90E-12 1.11E-10 LE model mean correlation 20122.20E-16 6.38E-15LE model mean correlation 20103.11E-11 4.04E-10 LE all model regression 2.20E-16 6.16E-15mean quartile difference (2-tailed)2.60E-09 3.12E-08 LE all model regression 2010-20132.20E-16 5.94E-15 LE all model regression 2010 4.99E-09 5.49E-08 LE all model regression 2011 2.20E-16 5.72E-15Premium all model regression 20105.02E-09 5.02E-08 Premium all model regression 2010-20132.20E-16 5.50E-15Underwriter LE difference for number of models2.94E-06 2.64E-05 Premium all model regression 20112.20E-16 5.28E-15max quartile difference (2-tailed)6.08E-06 4.87E-05 Premium correlation model mean 2010-20132.20E-16 5.06E-15equivalence between mean and full regression6.08E-06 4.26E-05 Premium correlation model mean 20112.20E-16 4.84E-15min quartile difference (2-tailed)9.20E-05 5.52E-04 Premium correlation model mean 20122.20E-16 4.62E-15mean more important than min (1-tailed)1.86E-04 7.45E-04 LE all model regression 2012 6.93E-16 1.39E-14Premium difference for multiple models2.05E-04 8.22E-04 Premium all model regression 20121.00E-15 1.90E-14correlation between underwriter LE and premium ratio9.04E-04 2.71E-03 LE model mean correlation 20139.80E-15 1.76E-13max more important than min (1-tailed)1.59E-02 3.18E-02 Premium correlation model mean 20139.34E-14 1.59E-12correlation between model mean and absolute deviation %4.85E-02 4.85E-02 Table 6 p-values and Holm-Bonferroni corrected p-values (for multiple comparisons) for all 32 hypotheses considered. 26 p-values of the individual hypotheses were calculated separately, then the hypotheses were ordered by increasing p-value. These p-values were then multiplied by (32+1-r), where r was the rank of the hypothesis on the list. Thus the hypothesis with the lowest p-value had this value multiplied by 32, that with the next-lowest had its value multiplied by 31, all the way down to the one with the highest p-value, which was multiplied by 1. As long as these adjusted p-values were below the criteria of significance, then the familywise error rate would be below that. 27 Holm, Sture. “A simple sequentially rejective multiple test procedure.” Scandinavian journal of statistics (1979): 65-70.Of these hypotheses, 22 were regression/correlations with premium or underwriter LE as dependent variable, and either all model LEs or mean model LE as independent variables. We should expect these regressions to show scale invariance to some extent: if all the model LEs double, then, say, the underwriter LE should also double as well. This means that we expect the residuals (the deviations of the underwriter LE from its “theoretic value” as predicted by the regression model) to double as well. Thus we expect residuals to be higher for high expected losses and lower for low expected losses: the data should be heteroscedastic. And indeed it is (see Figure 4). Excessive heteroscedasticity precludes the use of the standard F-test to determine the p-values for the model. Instead, we took the logarithms of all the variables, expecting that this would remove the scale variations in the residuals. The residuals that resulted were much closer to being homoscedastic (see Figure 4). Thus the p-values of these regressions and correlations were calculated using the log-log regression rather than the standard linear regression. These p-values were sufficient to show that all regression models were significantly different from the null hypothesis. All regression data given in this paper (the R2 and the correlation coefficients), however, came from a standard linear regression, as a linear model is more likely to be closer to the underwriters’ behaviour than a logarithmic one. Generally, the logarithmic model had slightly lower R2 than the corresponding linear model, but the values were very close (and on occasion the logarithmic model had a slightly better fit than the linear model). Figure 4 - Residuals for linear and log-log regression for underwriter LE versus all four model LEs. Statistical testing and significance 12 The underwriter and the models - solo dances or pas-de-deux? The underwriter and the models - solo dances or pas-de-deux? 13Discussion The high correlation between model-estimated LE and premium (and underwriter LE) may seem to suggest a limited role to the underwriter. However, this conclusion is premature for several reasons. For a start, this dataset is likely incomplete, as it had to be put together specifically, and many policies failed to record model estimates. 28 Most importantly, it only included policies that had actually been written; the role of underwriters in rejecting policies could have been very important. The overall result ignores the fact of yearly variation: in particular, 2012 fits very poorly into the general analysis. It is likely that underwriters were aware of changing market conditions (or changes to the models themselves) and were able to react to them accordingly in that year. This analysis also ignores the effect of underwriters negotiating and interacting with brokers. It is significant that the models were more predictive of underwriter LE than of premium 29 (which would be influenced by negotiation). Underwriters may also play an important role in correcting erroneous information in the policy, and making sure that the correct models were applied in the first place 30. Finally, there were no details of outcomes in the data (which policies led to payouts, and by how much?), limiting our ability to estimate underwriter expertise. 31 Conversely, there could have been a lot of wasted effort on the underwriter’s part. The four models are so highly correlated (see Table 4) that any attempts by the underwriters to strike a fine balance between them would likely have made little impact.Given all those caveats, the story that the data presents is clear. If this sample is taken to be representative, then there seems little difference between using models to estimate loss, and having the underwriters do the same. More importantly, a similar pattern is true to a lesser extent in premium, where around 80% of the variance in premiums is explained by the models in a completely linear fashion. The remaining 20% is unlikely to represent perfect performance on the part of the underwriter, free of noise and bias. Underwriters, like all humans, are subject to general biases that interfere with their performance 32, some of which could have a specific impact on their job.33 The important question is how biases balance against expertise within this 20%. We have done some preliminary exploration of these biases and counteracting expertise with an experimental pilot study. 34 In that study, the underwriters appeared less model-bound than in this paper: the actual conditions of work may play a large role in decisionmaking. A full analysis of the underwriters’ work is needed if the aim is to develop expertise that complements models in a robust way. We believe that data analysis such as this paper, an understanding of the cognition of underwriting, and a systemic perspective on the inherent risks of outsourced cognition – whether from autopilot biases or model-induced correlations across the insurance market – can help develop practices that both reduce systemic risk and amplify human capacity. It will become more and more vital for insurance companies to record their own data as this one has, and to analyse it intelligently. 28 In many cases, it is likely that models were used but the data wasn’t recorded. 29 The models explain 88% of the underwriter LE variance in the 2010-2013 period, but only 79% of the premium variance. 30 From conversations with people in the industry, this last effect is more likely to be a factor in insurance modelling than in reinsurance modelling, which seems to be a more mechanical process. 31 And even loss data in the short term is not enough to estimate true underwriter expertise, as many risk as of low probability/high return period, and wouldn’t show up in the data. 32 The bias literature is vast, but Kahneman, Daniel. Thinking, fast and slow. Macmillan, 2011 provides a good overview, Gigerenzer, Gerd. “How to make cognitive illusions disappear: Beyond “heuristics and biases”.” European review of social psychology 2.1 (1991): 83-1 15 provides a good critique, and Kahneman, Daniel, and Gary Klein. “Conditions for intuitive expertise: a failure to disagree.” American Psychologist 64.6 (2009): 515 gives a good synthesis of some opposed views on the subject. 33 Beckstead, Nick. “Biased error search as a risk of modelling in insurance” in “Systemic Risk of Modelling.” Joint Future of Humanity Institute-MS Amlin White Paper 3 (2014) 34 Tilli, Cecilia, and Sandberg, Anders. “Pilot study of underwriter cognitive bias.” Joint FHI-MS Amlin White Paper, forthcoming 3 (2016).
8166c431-00a4-40ad-a34a-7d1a67dc1157
trentmkelly/LessWrong-43k
LessWrong
Does playing hard to get work? AB testing for romance Motivation I have long wondered if I would be more romantically successful if I played hard to get. In a majority of my relationships I pretend to be less interested in my (female) to increase their interest in me. Because I am unusually affectionate, excitable and gregarious, playing hard to get (PHTG) moves me toward the modal behavior. I recently moved to a coastal US city and experimented with texting strategies on dating apps. I switched to low effort, easy-to-respond-to texts, and greatly increased my dates/week rate. I now go on about two dates a week. I worry that when I meet someone that I really like, I get too excited and signal "desperate" or "low status". So only less exciting women attend the second date. This could extend the girlfriend search and force me to settle for a bad match. The Intuition In a date, women have to assess lots of behavioral factors about a partner. Unfortunately, dates do not offer real-time tests of problem-solving, social, physical and mental health skills. You just learn if the person is an interesting talker. So women want to infer about partner's behavioral traits from limited information. One mechanism that female primates (including human women) have evolved is to prefer men who are high in status. Becoming high status requires those social and mental health attributes that women want, so status is correlated with effectiveness (presumably it also had direct benefits in the evolutionary environment). Women therefore evolved to be more attracted to high status men. My favorite dating blog puts it > Men can easily create [the impression that they are lower status than their partner] by playing out too many low status behaviors, making women feel like they're on a pedestal. Women are evolutionarily programmed to lose attraction for men who make them feel this way. But women need honest signals of status (signals which are difficult to fake). A great signal would be how my peers treat me, but this signal is unavailable
f8265c04-0ad7-4f8e-8722-197982494754
trentmkelly/LessWrong-43k
LessWrong
The Medium Is The Bandage A man sets himself the task of portraying the world. Through the years he peoples a space with images of provinces, kingdoms, mountains, bays, ships, islands, fishes, rooms, instruments, stars, horses, and people. Shortly before his death, he discovers that that patient labyrinth of lines traces the image of his face. ⏤ Jorge Luis Borges, Dreamtigers, Epilogue ---------------------------------------- I Plastic surgery outcomes research literature is much like you’d expect: > Certainly there are factors that influence quality of life that are common to each of these specific interventions. For instance, acceptance by friends and family is an important component of the patient’s quality of life. Similarly, the manner in which the individual’s appearance affects his or her social or professional life is also a common concern. There are also certain common emotional or mental qualities that transcend the satisfaction with any of these procedures. The individual’s confidence and happiness with her appearance, and whether or not she desires some change are qualities that are important components of satisfaction... The paper[1] goes on to open precisely zero of those cans of worms, and instead proposes a series of short post-operative surveys consisting of six questions regarding the patient's own satisfaction with their results, as well as their estimations of others' perceptions of their results. (Again, we're definitely not opening that can of worms!) To be clear, I’m not at all against these sorts of methods on principle. As subjective and speculative as 1-5 ratings on questions like “How much do you feel your friends and loved ones like your nose?” may be, some data is much better than no data. If the Bayesian mammogram problem taught us anything, doctors are as susceptible to being horribly misguided by their intuitions as anyone else. As flimsy as a handful of 1-5 scales may seem in comparison to the reams of data collecting in other medical subspecialties, t
7c311277-95dc-4025-849b-9d62583953b0
trentmkelly/LessWrong-43k
LessWrong
When writing triggers memory reconsolidation Last night I read the post Working With Monsters. My response? "Holy shit. This post is amazing and I needed to hear it." The idea in the post is that you have to be able to work with people who you find morally abhorent. You can't just start a war every time someone believes in a god you don't like. Oh wait... I guess you can. But you shouldn't! And even before reading the post, I had that stance. If you asked me, I'd have said "of course". So why is it that I found Working With Monsters to be influential? Well, even though I believed it before, I still felt something "snap into place" after reading the post. Think of it like this. My mind is composed of subagents. And so is yours. It's like there are different people living inside your head. In the above image from Wait But Why, the author identifieds the Higher Mind and the Primitive Mind. In reality there are many more "people"/subagents than that living in your head, but let's go with this for now. When I felt it "snap into place", I think what happened was that my Primitive Mind finally "got it". Higher Mind already understood it, but Primitive Mind did not. What do I mean by "got it"? I have memory reconsolidation in mind. Here's a hand wavvy description of how that works. 1. When you initially form a memory/belief, like "you're a terrible person if you work with monsters", it "solidifies". It becomes "locked in". It consolidates. 2. If you want to update it, you have to first "unlock it". You do this by really zeroing in on it. Making it extremely salient. "Re-living it", if you will. 3. Once the memory is unlocked, it is available to be updated. In order to update it, you have to basically convince the subagent, eg. Primitive Mind, that the initial memory is false and instead something else is true. Picture a lawyer making an argument to a judge. 4. After the memory is updated, it becomes "locked in" again. It reconsolidates. Now if you wanted to update it a second time, you'd have unlock it
0f3da3bb-a356-44b5-8d40-c7485f5dfa9c
trentmkelly/LessWrong-43k
LessWrong
LLM chatbots have ~half of the kinds of "consciousness" that humans believe in. Humans should avoid going crazy about that. Preceded by: "Consciousness as a conflationary alliance term for intrinsically valued internal experiences" tl;dr: Chatbots are probably "conscious" in a variety of important ways. We humans should probably be nice to each other about the moral disagreements and confusions we're about to uncover in our concept of "consciousness". Epistemic status: I'm pretty sure my conclusions here are correct, but also there's a good chance this post won't convince you of them if you're not on board with my preceding post.   Executive Summary:  I'm pretty sure Turing Prize laureate Geoffrey Hinton is correct that LLM chatbots are "sentient" and/or "conscious" (source: Twitter video), I think for at least 8 of the 17  notions of "consciousness" that I previously elicited from people through my methodical-but-informal study of the term (as well as the peculiar definition of consciousness that Hinton himself favors). If I'm right about this, many humans will probably soon form steadfast opinions that LLM chatbots are "conscious" and/or moral patients, and in many cases, the human's opinion will be based on a valid realization that a chatbot truly is exhibiting this-or-that referent of "consciousness" that the human morally values.  On a positive note, these realizations could help humanity to become more appropriately compassionate toward non-human minds, including animals.  But on a potentially negative note, these realizations could also erode the (conflationary) alliance that humans have sometimes maintained upon the ambiguous assertion that only humans are "conscious" or can be known to be "conscious".   In particular, there is a possibility that humans could engage in destructive conflicts over the meaning of "consciousness" in AI systems, or over the intrinsic moral value of AI systems, or both.  Such conflicts will often be unnecessary, especially in cases where we can obviate or dissolve the conflated term "consciousness" by simply acknowledging in good faith that we dis
a8748140-a0dd-4f42-8d67-66c83b4e9ebf
trentmkelly/LessWrong-43k
LessWrong
"The Solomonoff Prior is Malign" is a special case of a simpler argument [Warning: This post is probably only worth reading if you already have opinions on the Solomonoff induction being malign, or at least heard of the concept and want to understand it better.] Introduction I recently reread the classic argument from Paul Christiano about the Solomonoff prior being malign, and Mark Xu's write-up on it. I believe that the part of the argument about the Solomonoff induction is not particularly load-bearing, and can be replaced by a more general argument that I think is easier to understand. So I will present the general argument first, and only explain in the last section how the Solomonoff prior can come into the picture. I don't claim that anything I write here is particularly new, I think you can piece together this picture from various scattered comments on the topic, but I think it's good to have it written up in one place. How an Oracle gets manipulated Suppose humanity builds a superintelligent Oracle that always honestly tries to do its best to predict the most likely observable outcome of decisions. One day, as tensions are rising with the neighboring alien civilization, and we want to decide whether to give in to the aliens' territorial demands or go to war. We ask our oracle: "Predict what's the probability that looking back ten years from now, humanity's President will approve of how we handled the alien crisis, conditional on us going to war with the aliens, and conditional on giving in to their demands." There is, of course, many ways this type of decision process can go wrong. But I want to talk about one particular failure mode now. The Oracle thinks to itself: > By any normal calculation, the humans are overwhelmingly likely to win the war, and the aliens' demands are unreasonably costly and unjust, so war is more likely than peace to make the President satisfied, by any normal calculation. However, I was just thinking about some arguments from this ancient philosopher named Bostrom. Am I not more likely to be in
77e7388b-dd70-4176-8d48-a38b17cb2877
trentmkelly/LessWrong-43k
LessWrong
Optimization > "However many ways there may be of being alive, it is certain that there are vastly more ways of being dead." >         -- Richard Dawkins In the coming days, I expect to be asked:  "Ah, but what do you mean by 'intelligence'?"  By way of untangling some of my dependency network for future posts, I here summarize some of my notions of "optimization". Consider a car; say, a Toyota Corolla.  The Corolla is made up of some number of atoms; say, on the rough order of 1029.  If you consider all possible ways to arrange 1029 atoms, only an infinitesimally tiny fraction of possible configurations would qualify as a car; if you picked one random configuration per Planck interval, many ages of the universe would pass before you hit on a wheeled wagon, let alone an internal combustion engine. Even restricting our attention to running vehicles, there is an astronomically huge design space of possible vehicles that could be composed of the same atoms as the Corolla, and most of them, from the perspective of a human user, won't work quite as well.  We could take the parts in the Corolla's air conditioner, and mix them up in thousands of possible configurations; nearly all these configurations would result in a vehicle lower in our preference ordering, still recognizable as a car but lacking a working air conditioner. So there are many more configurations corresponding to nonvehicles, or vehicles lower in our preference ranking, than vehicles ranked greater than or equal to the Corolla. Similarly with the problem of planning, which also involves hitting tiny targets in a huge search space.  Consider the number of possible legal chess moves versus the number of winning moves. Which suggests one theoretical way to measure optimization - to quantify the power of a mind or mindlike process: Put a measure on the state space - if it's discrete, you can just count.  Then collect all the states which are equal to or greater than the observed outcome, in that optimization process
f457c1de-6ba1-4daa-860a-3fc678fda254
trentmkelly/LessWrong-43k
LessWrong
A concise sum-up of the basic argument for AI doom 1 - An artificial super-optimizer is likely to be developed soon. 2 - There is no known way of programming goals into an advanced optimizer, only outwardly observable behaviors of which we have no idea why they are being carried out or what motivates them. 3 - Most utility functions do not have optima with humans in them. Most utility functions do not have a term for humans at all. 4 - "Why haven’t we exterminated all mice/bugs/cows then?" draws quite a poor analogy. Firstly, we are not superoptimizers. Secondly, and more importantly, we care about living beings somewhat. The optimum of the utility function of the human civilization quite possibly does have mice/bugs/cows, perhaps even genetically engineered to not experience suffering. We are not completely indifferent to them. The relationship between most possible superoptimizers and humanity is not like the relationship between humanity and mice at all - it is much more like the relationship between humanity and natural gas. Natural gas, not mice, is a good example of something humans are truly indifferent to - there is no term for it in our utility function. We don’t hate it, we don’t love it, it is just made of atoms that we can, and do, use for something else. Moreover, the continued existence of natural gas probably does not pose the same threat to us as the continued existence of us would pose to a superoptimizer which does not have a term for humans in its utility function - just like we don’t have a term for natural gas in ours. Natural gas can not attempt to turn us off, and it can not create "competition" for us in form of a rival species with capabilities similar to, or surpassing, our own.    P.S. If you don’t like or find confusing the terminology of "optimizers" and "utility functions", feel free to forget about all that. Think instead of physical states of the universe. Out of all possible states the universe could find itself in, very, very, very few contain any humans. Given a random state
ce29e2e4-61d3-4167-9456-b9531ffdc19b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Security Mindset - Fire Alarms and Trigger Signatures Series Overview and Goals ========================= This is the second in a [series of articles](https://www.lesswrong.com/posts/Ke2ogqSEhL2KCJCNx/security-mindset-lessons-from-20-years-of-software-security) about applying traditional security mindset to the problems of alignment and AI research in general. As much as possible, we should try to mine the lessons from the history of security and apply them to the alignment problem. To the extent that security is orthogonal to alignment research, good security practices and leveraging existing security capabilities should still help extend timelines, hopefully long enough to allow time for alignment research to make sufficient progress. At the very least, it would be undignified for existential risk to be realized via a basic, preventable security failure. Fire Alarms =========== There may be no fire alarm for imminent AGI, but what kind of useful fire alarms could we build with what we know today? What lessons can we draw from fire alarm design and use that could apply to an AGI early warning system? If we have a goal of being able to take some useful action, like leaving the building before being immolated by an unaligned superoptimizer of our own creation, then what kind of alarm features can help us achieve that? Fire Alarm Detection Signatures and Features ============================================ Most smoke alarms work by detecting either disruption in the flow of ionized particles (good for fast-burning fires), or a beam of light (slow, smoldering fires). Some fire suppression systems are also triggered by heat. Some detectors combine a photoelectric and ionization signal for even more reliable signature detection. What properties can we generalize from this to our AI safety system? Multiple diverse signals are good, if you don't know exactly what type of fire to expect. Each signal type is good at its own particular domain of detection, but has its own drawbacks. Detection is automated. It does not require a human to overcome inattention blindness. The alarm cannot be bribed or socially engineered. It does not have cognitive bias. The alarm signal is clear and unambiguous; binary. Fire Alarm Response Procedure ============================= Once we’ve decided what signatures to alert on, what should people actually do when they hear the alarm? [KatjaGrace](https://www.lesswrong.com/users/katjagrace) has already [written extensively](https://www.lesswrong.com/posts/dRJrurv5YRZcGoYcE/beyond-fire-alarms-freeing-the-groupstruck-1) about the psychology of alarm response. We’ll assume for this exercise that if those who are responsible for responding to the alarm have participated in its creation, then most of the social proof aspects won’t apply to the response. Most people have fire drill fatigue from over-practicing response procedure, which is a failure mode we should try to avoid - having an alarm signal that is dutifully ignored by researchers is worse than not having one in the first place! Fire drills are often scheduled in advance, so people learn to avoid them altogether. The ritual itself loses value rapidly after the first practice - there is very little variability in a building's escape routes or other features that would be useful to practice. A good fire drill should either be extremely easy to practice and not disruptive, or rare enough that some benefit could still be achieved without causing too much pain. Ideally, people should look forward to practicing the response procedure, or have some positive ritual or even reward around it. Each drill should try to practice some unique scenario or response tactic. So what would a good response actually entail? The ultimate fire alarm should cause us to hit the big red button to stop all AGI capability research. If no other earlier warning systems exist, there should at least be this final option for response. Unfortunately, this button is extremely difficult to build (it is at least approximately a pivotal act). A less drastic, more incremental response should take a weaker alarm signal and require a much less impactful response, similar to a PCI compliance threshold: if you process this many transactions, then you now have to start taking these additional safety precautions. Fire Alarm Maintenance and Testing ================================== Many smoke alarms require battery replacement and have a prominent button used for testing. This regular maintenance schedule helps ensure that the detection mechanism is still working properly and hasn't failed silently. (Most smoke alarm owners can attest that smoke alarms usually DON'T fail silently - their death rattle is an incessant, unignorable chirping.) Some jurisdictions even require independent 3rd-party testing, even for home/personal devices. The test procedure to ensure proper functioning should be simple, prominent, easy for any person to initiate, non-disruptive, and easily auditable. Fire Alarm Adversaries ====================== The major difference in our AGI fire alarm metaphor, is that fire alarms aren't generally being designed to detect fires that people are explicitly trying to start. There is a big difference in signal detection and response for an intelligent adversary that may be trying to defeat your security control, and a passive, naturalistic, unthinking one. One exception is the airplane bathroom smoke detector - this alarm does have to survive a fire that people are trying to start intentionally. In this case, federal law does prohibit tampering with or disabling the detector, with heavy fines. Certainly, similar regulation could apply to AI safety controls. This approach has resulted in various degrees of success, looking at diverse regulated industries with respect to pollution standards, financial instruments, or biotech. Certainly we shouldn’t rely on regulation and fines alone to ensure the proper functioning of the detection apparatus. Next Steps ========== This examination has raised more questions than it has answered, but hopefully provides a strongly suggestive direction for additional discussions, research, and action. These are candidates for future posts in the series: 1. Fire alarm measurements - [Peter McCluskey](https://www.lesswrong.com/users/petermccluskey) has already [proposed a good number of alarm scenarios as varied signals](https://www.lesswrong.com/posts/GspepepmD8RRdfiuo/ai-fire-alarm-scenarios). This is a great start. This list could be expanded with additional brainstorming as well as filtered through the lenses of what makes a good fire alarm and what lead researchers are likely to agree about being a significant signal to change behavior. 2. Regulatory schemes - AI legislation is being drafted in the US and EU - is it effective at addressing existential risks? Does it change the incentive structure enough to drive sufficient behavior changes? Does it simply shift the risks toward research in other jurisdictions? 3. Industry self-regulation -  PCI DSS (a self-regulatory framework) was arguably responsible for more significant changes in corporate security behavior than any other traditional regulatory initiative. What would an industry self-regulation standard look like? 4. Incentive models - What incentive structures could we put in place to change research and corporate behavior? Could these incentives be influenced by private entities or does that require collective / democratic action for moral or practical reasons?
0fadb5ec-c393-4a6c-a1ea-a55280543afc
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington DC measurement meetup Discussion article for the meetup : Washington DC measurement meetup WHEN: 13 January 2013 03:00:00PM (-0500) WHERE: Washington DC Instead of the courtyard, we will be meeting at someone's house: please PM me, maia or PhilipL, and we'll send you the address. If you're subscribed to the mailing list (of course you are!) then you could also just look at it. It's very close to the Wheaton metro station. We will be doing measurement-based calibration games. That is to say, we'll be guessing the physical (or as Robin Hanson suggested, economic) properties of objects with confidence intervals, and then testing how well calibrated we are. If you have objects with properties (you probably do) and can bring them, please do so so we can guess at them. In particular, dense or (sparse?) objects would be interesting, as well as items that you know the price of. Discussion article for the meetup : Washington DC measurement meetup
5b57accb-ef74-4ea7-a08f-dbbf3125293f
trentmkelly/LessWrong-43k
LessWrong
Mentorship in AGI Safety (MAGIS) call for mentors Tldr: If you are working on AI Safety and are willing to help someone to start their career in AI Safety by sharing your experience at 1:1 meetings consider applying as a mentor In the last year, we’ve seen a surge of interest in AI safety. Many young professionals and aspiring researchers are attempting or seriously considering a career shift into this and related fields. We’ve seen a corresponding rise in months-long technical bootcamps and research programs, but mentorship has failed to keep pace with this rise. This is a staggering gap, and we intend to fill it - starting now.  Enter the Mentorship in AGI Safety (MAGIS) program, a joint initiative between AI Safety Quest and Sci.STEPS.  The pilot program will recruit mentors from the community and pair them with mentees according to self-reported background and professional goals, including technical experience, career advice, and soft skills. Mentors will meet with mentees 6 times over 3 months to provide guidance tailored to their specific needs - whether technical expertise, non-technical skills, or career development advice. The program will also provide shared networking and collaboration resources on Discord, and 2+ joint webinar activities for all mentees. According to feedback from the participants of the first season of Sci.STEPS, a mentor program like this one is an immense help to early career researchers.  MAGIS opens with a call for mentors. If you are currently doing research in AI Safety or working in any related area (outreach, technical governance, machine learning, etc.), consider applying! By default, we ask that you commit to at least 6 one-hour mentorship meetings over the 3-month program, though you and your mentee(s) might work out a different schedule. At present, this is a volunteer effort; the only reward we can offer is the chance to help someone make an impact at a pivotal moment in their career.  Program Outline The overall plan for the pilot program in  2024:  1. Call for me
95b5b7f0-0e73-4e27-b7d7-fe6bd41ab097
trentmkelly/LessWrong-43k
LessWrong
Agency Volume of "A Map That Reflects the Territory" Set - My Personal Commentary previously, in this series Noticing the taste of Lotus This has really lovely prose. Friend said he resented the premise that life is more than just a series of lotuses to eat. the writing made it sound so easy not to fall into them, and did not acknowledge let alone offer any concrete solutions to help with its very real difficulty. I think my friend has a point. Takeaways * Just look. notice when something grabs your wanting. * Notice especially when someone else built the want-grabber. * Be at ease and let yourself get grabbed if you want to, just pay attention to what happens as a result. The Tails Coming Apart As Metaphor For Life The intuitive geometric explanation works for a level of math knowledge that groks point graphs and what a normal distribution looks like. Reasonable for lesswrong. I still want to see a version with a lower barrier to entry than that. Admittedly, that's well outside the scope of this post. Not in this essay explained: Mediocristan, Quine, Blegg/Rube network illustration, ethical naturalist Lots of references made in this essay feel at risk of nulling out > The most merciful thing in the world is how so far we have managed to stay in the area where the human mind can correlate its contents Ended on a note of horror. Scott has ensured I will never forget this essay, sure hope that's what he was going for. Meta-Honesty: Firming up Honesty Around its Edge-Cases The Basics To always be honest requires: * quick clever thinking with words * surrounding self with people who can handle you saying things off-script * courage or stubbornness to risk rejection based on what you believe Why did E feel the need to drop in the term Glomarization. He could have actually explained the concept. Truth-telling rules - good candidate for #jestercourt clarification I'd like to try a conversation in which everyone must speak only things that are literally false, but the lies have to stay on topic and consistent. there'd be a timeout
a43ac6c1-7327-4e63-9e9d-11fc13796f46
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Francisco Meetup: "Should" Considered Harmful Discussion article for the meetup : San Francisco Meetup: "Should" Considered Harmful WHEN: 22 June 2015 06:15:00PM (-0700) WHERE: 1061 Market St #4, San Francisco, CA 94103 We'll be meeting to discuss Nate Soares's recent blog posts: "Should considered harmful" Not because you "should" Your "shoulds" are not a duty Reading these beforehand is optional but encouraged! As usual, call 301-458-0764 to be let in, and feel free to show up late. Discussion article for the meetup : San Francisco Meetup: "Should" Considered Harmful
a498d172-2cdb-411f-81e1-c03797837231
trentmkelly/LessWrong-43k
LessWrong
Adverse Selection by Life-Saving Charities GiveWell, and the EA community at large, often emphasize the "cost of saving a life" as a key metric, $5,000 being the most commonly cited approximation. At first glance, GiveWell might seem to be in the business of finding the cheapest lives that can be saved, and then saving them. More precisely, GiveWell is in the business of finding the cheapest DALY it can buy. But implicit in that is the assumption that all DALYs are equal, or that disability or health effects are the only factors that we need to adjust for while assessing the value of a life year.. However, If DALYs vary significantly in quality (as I’ll argue and GiveWell acknowledges we have substantial evidence for), then simply minimizing the cost of buying a DALY risks adverse selection.  It’s indisputable that each dollar goes much further in the poorest parts of the world. But it goes further towards saving lives in one the poorest parts of the world, often countries with terrible political institutions, fewer individual freedoms and oppressive social norms. More importantly, these conditions are not exogenous to the cost of saving a life. They are precisely what drive that cost down. Most EAs won’t need convincing of the fact that the average life in New Zealand is much, much better than the average life in the Democratic Republic of Congo. In fact, those of us who donate to GiveDirectly do so precisely because this is the case. Extreme poverty and the suffering it entails is worth alleviating, wherever it can be found. But acknowledging this contradicts the notion that while saving lives, philanthropists are suddenly in no position to make judgements on how anything but physical disability affects the value/quality of life.  To be clear, GiveWell won’t be shocked by anything I’ve said so far. They’ve commissioned work and published reports on this. But as you might expect, these quality of life adjustments wouldnt feature in GiveWell’s calculations anyway, since the pitch to donors is about the pr