id
stringlengths
36
36
source
stringclasses
15 values
formatted_source
stringclasses
13 values
text
stringlengths
2
7.55M
0cb13b61-33d7-495b-a6a5-3dd6416ce164
trentmkelly/LessWrong-43k
LessWrong
Diseased disciplines: the strange case of the inverted chart Imagine the following situation: you have come across numerous references to a paper purporting to show that the chances of successfully treating a disease contracted at age 10 are substantially lower if the disease is detected later: somewhat lower at age 20 to very poor at age 50. Every author draws more or less the same bar chart to depict this situation: the picture below, showing rising mortality from left to right. You search for the original paper, which proves a long quest: the conference publisher have lost some of their archives in several moves, several people citing the paper turn out to no longer have a copy, etc. You finally locate a copy of the paper (let's call it G99) thanks to a helpful friend with great scholarly connections. And you find out some interesting things. The most striking is what the author's original chart depicts: the chances of successfully treating the disease detected at age 50 become substantially lower as a function of age when it was contracted; mortality is highest if the disease was contracted at age 10 and lowest if contracted at age 40. The chart showing this is the picture below, showing decreasing mortality from top to bottom, for the same ages on the vertical axis. Not only is the representation topsy-turvy; the two diagrams can't be about the same thing, since what is constant in the first (age disease detected) is variable in the other, and what is variable in the first (age disease contracted) is constant in the other. Now, as you research the issue a little more, you find out that authors prior to G99 have often used the first diagram to report their findings; reportedly, several different studies on different populations (dating back to the eighties) have yielded similar results. But when citing G99, nobody reproduces the actual diagram in G99, they all reproduce the older diagram (or some variant of it). You are tempted to conclude that the authors citing G99 are citing "from memory"; they are aware of
0eddf5d7-adb9-4ce6-addd-4c483c94d055
trentmkelly/LessWrong-43k
LessWrong
Intent alignment should not be the goal for AGI x-risk reduction > P(misalignment x-risk | AGI) is high.  Intent alignment should not be the goal for AGI x-risk reduction. If AGI is developed, and we solve AGI intent alignment, we will not have lowered x-risk sufficiently, and we may have even increased it higher than it would have been otherwise.   > P(misalignment x-risk | intent-aligned AGI) >> P(misalignment x-risk | societally-aligned AGI).    The goal of AI alignment should be alignment with (democratically determined) societal values (because these have broad buy-in from humans).   > P(misalignment x-risk | AGI) is higher if intent alignment is solved before societal-AGI alignment.    Most technical AI alignment research is currently focused on solving intent alignment. The (usually implicit, sometimes explicit) assumption is that solving intent alignment will help subsequently solve societal-AGI alignment. This would only be the case if all the humans that had access to intent-aligned AGI had the same intentions (and did not have any major conflicts between them); that is highly unlikely.  Solving intent alignment is likely to make practically implementing societal-AGI alignment harder. If we first solve intent alignment before solving societal alignment, humans with intent-aligned AGIs are likely to be incentivized to inhibit the development and roll-out of societal AGI-alignment techniques because they would be giving up significant power. Furthermore, humans with intent-aligned AIs would suddenly have significantly more power, and their advantages over others would likely compound.   Why does solving intent alignment not lower x-risk sufficiently? 1. If we solve the intent alignment problem between a human, H, and an AI, A, then A implements H’s intentions with super-human intelligence and skill. 2. There are multiple Hs and multiple As. 3. By the very nature of humans, there are conflicts in the intentions of the Hs.  1. Humans have conflicting preferences about the behavior of other humans and abou
d1578155-3d5b-4cff-b37f-33293f9f5a08
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
What exactly is GPT-3's base objective? *[Probably a noob question]* I'm thinking about what an inner alignment failure might look like for GPT-3. This would have to involve some deployment context in which GPT-3 performs significantly worse (by the standards of the base objective) than it did in training. (It would involve other things too, such as GPT-3 being a mesa-optimizer.) *But to say how well GPT-3 performs on some prompt not in the training dataset, we have to have a definition of the base objective that extends beyond the training dataset*. *If the base objective only makes sense in the context of the training dataset, then inner alignment failure is impossible by definition.* Is the base objective "Predict the next word?" Or is it "Predict the next word, supposing what you are reading is typical 2019 Internet text?" Or is it "Predict the next word, supposing what you are reading is a random-with-the-following-weights sample from dataset D? [where D is the dataset used to train GPT-3]" The third option is in some sense the best, because it most closely fits what we actually did to train GPT-3. But note that the logical extension of this line of reasoning is to prefer a fourth option: "Predict the next word, supposing what you are reading is a random-with-the-following-weights sample from dataset D' [where D' is like D except that it doesn't contain any of the bits of text that GPT-3 happened to not see in training, and the randomness weights are chosen to more accurately yield the data points that GPT-3 in fact saw]." The problem with these last two answers is that they make it *undefined* how well GPT-3 performs on the base objective on any prompt that wasn't in D, which then rules out psuedo-alignment by definition. From the Risks from Learned Optimization paper: > In such a case, we will use *base objective* to refer to whatever criterion the base optimizer was using to select between different possible systems and *mesa-objective* to refer to whatever criterion the mesa-optimizer is using to select between different possible outputs. In reinforcement learning (RL), for example, the base objective is generally the expected return. Because the mesa-objective is not specified by the programmers, mesa-optimization opens up the possibility of a mismatch between the base and mesa- objectives, wherein the mesa-objective might seem to perform well on the training environment but lead to bad performance off the training environment. We will refer to this case as pseudo-alignment below. Expected return in a particular environment/distribution? Or not? If not, then you may be in a deployment context where you aren't updating the weights anymore and so there is no expected return, or at least it's close to 0 because there's only any return if you can convince people to start updating your weights again! I worry I am just confused about all this. Hence why I'm asking. **What is GPT-3's base objective?**
603d1220-b5f6-48b9-9533-058381e9c04a
trentmkelly/LessWrong-43k
LessWrong
Matching game Have you have read the overview of this blog? If so, I would be pleased if you would tell me which of the following styles of thought you think closest to that manifested in it: Take Our Poll
3df266d7-6f53-4ceb-88de-70585c1c6c0b
trentmkelly/LessWrong-43k
LessWrong
Sazen Purpose of post: describe and (hopefully) popularize a concept I've found highly useful. ---------------------------------------- Last year, my partner Logan Strohl wrote a sequence to introduce the "naturalism" concept they've been developing and teaching for the past decade or so. That sequence was structured around a single, short sentence. The first essay introduced the sentence, and the remaining essays were primarily about explaining what each of the important concepts in that short sentence actually meant. So, for the sentence "knowing the territory takes direct and patient observation," there was a full essay on what was intended (and, more crucially, what was not intended) by the word "knowing," and another on "the territory," and another on "observation," and so on. This format was largely inspired by a conversation in which I asked Logan to describe naturalism briefly, and they said "I totally can, but you'll get the wrong idea." Together, we realized that there is a curious one-way sort of property to many sentences, in which they work as pointers or summaries after the fact, but fail to generate the-thing-they're-summarizing if used as standalone seeds. (One could argue that every sentence has some of this property, but some sentences have a lot of it.) I'd like to be able to point directly at this property, and as a result of historical accident that I'll explain in a footnote, the handle I've ended up with in my own head is sazen[1]. ---------------------------------------- Example I: "Duncan Sabien is a teacher and a writer." This is a true sentence. People who know me very, very well, upon hearing this sentence, will nod. It's a good fit, retrospectively, for the data. However, if you are attempting to give someone a sense of me up-front, saying "Duncan Sabien is a teacher and a writer" is an unusually bad start. The thing that most people will think of when they hear "teacher" or "writer" is specifically unlike me—I'm a very weird sort
92a22f23-3cec-4b09-9d62-5aea8a6b3569
StampyAI/alignment-research-dataset/arbital
Arbital
Lattice (Order Theory) A **lattice** is a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) that is closed under binary [joins and meets](https://arbital.com/p/3rc). Let $L$ be a lattice. Then for all $p,q,r \in L$ the following properties are necessarily satisfied. * [Associativity](https://arbital.com/p/3h4) of joins and meets: $(p \vee q) \vee r = p \vee (q \vee r)$, and $(p \wedge q) \wedge r = p \wedge (q \wedge r)$ * [Commutativity](https://arbital.com/p/3jb) of joins and meets: $p \vee q = q \vee p$ and $p \wedge q = q \wedge p$ * Idempotency of joins and and meets: $p \vee p = p$ and $p \wedge p = p$ * Absorption: $p \vee (p \wedge q) = p$ and $p \wedge (p \vee q) = p$ %%hidden(Proofs): Lemma 1: Let $P$ be a poset, $S \subseteq P$, and $p \in P$. If both $\bigvee S$ and $(\bigvee S) \vee p$ exist then $\bigvee (S \cup \{p\})$ exists as well, and $(\bigvee S) \vee p = \bigvee (S \cup \{p\})$. Proof: See the *Join fu* exercise in [https://arbital.com/p/4ll](https://arbital.com/p/4ll). ## Associativity Let $L$ be a lattice and $p,q,r,s \in L$ such that $s = p \vee (q \vee r)$. We apply the above lemma, along with commutativity and closure of lattices under binary joins, to get $$p \vee (q \vee r) = (q \vee r) \vee p = (\bigvee \{q, r\}) \vee p = \bigvee (\{q, r\} \cup \{p\}) =$$ $$\bigvee \{ q, r, p \} = \bigvee (\{p, q\} \cup \{r\}) = (\bigvee \{p, q\}) \vee r = (p \vee q) \vee r.$$ By duality, we also have the associativity of binary meets. ## Commutativity Let $L$ be a lattice and $p,q \in L$. Then $p \vee q = \bigvee \{ p, q \} = q \vee p$. Binary joins are therefore commutative. By duality, binary meets are also commutative. ## Idempotency Let $L$ be a lattice and $p \in L$. Then $p \vee p = \bigvee \{ p \} = p$. The property that for all $p \in L$, $p \vee p = p$ is called *idempotency*. By duality, we also have the idempotency of meets: for all $p \in L$, $p \wedge p = p$. ## Absorption Since $p \wedge q$ is the greatest *lower bound* of $\{p,q\}$, $p \wedge q \leq p$. Because $p \leq p$ and $(p \wedge q) \leq p$, $p$ is an upper bound of $\{p, p \wedge q\}$, and so $p \vee (p \wedge q) \leq p$. On the other hand, $p \vee (p \wedge q)$ is the least *upper bound* of $\{p, p \wedge q\}$, and so $p \leq p \vee (p \wedge q)$. By anti-symmetry, $p = p \vee (p \wedge q)$. %% Closure under finite joins and meets -------------------------------------------------------------- Let $L$ be a lattice and $S = \{ s_1, ..., s_n \}$ be some finite subset of $L$. Then an inductive argument shows that $\bigvee S$ exists. %%hidden(Proof): Here again, we will need Lemma 1, stated in the proofs of the four lattice properties. Our proof proceeds by induction on the cardinality of $S$. The base case is $\bigvee \{ s_1 \} = s_1 \in L$. For the inductive step, we suppose that $\bigvee \{s_1, ..., s_i \}$ exists. Then, applying lemma 1, we have $\bigvee \{s_1, ..., s_{i+1} \} = \bigvee \{s_1, ..., s_i \} \vee s_{i+1}$. Applying our inductive hypothesis and closure under binary joins, we have $\bigvee \{s_1, ..., s_i \} \vee s_{i+1}$ exists. Lattices are therefore closed under all *finite* joins, not just binary ones. Dually, lattices are closed under all finite meets. %% Basic positive examples -------------- Here are two Hasse diagrams of posets which are lattices. ![A diamond shaped lattice](http://i.imgur.com/OlQnU07.png) %%%comment: dot source: digraph G { node [= 0.1, height = 0.1](https://arbital.com/p/width) edge [= "none"](https://arbital.com/p/arrowhead) a [= ""](https://arbital.com/p/label) b [= ""](https://arbital.com/p/label) c [= ""](https://arbital.com/p/label) d [= ""](https://arbital.com/p/label) rankdir = BT; a -> b a -> c b -> d c -> d } %%% ![A cube shaped lattice](http://i.imgur.com/L0x074n.png) %%%comment: dot source: digraph G { node [= 0.1, height = 0.1](https://arbital.com/p/width) edge [= "none"](https://arbital.com/p/arrowhead) a [= ""](https://arbital.com/p/label) b [= ""](https://arbital.com/p/label) c [= ""](https://arbital.com/p/label) d [= ""](https://arbital.com/p/label) e [= ""](https://arbital.com/p/label) f [= ""](https://arbital.com/p/label) g [= ""](https://arbital.com/p/label) h [= ""](https://arbital.com/p/label) rankdir = BT; a -> b a -> c a -> d b -> e b -> f c -> e c -> g d -> f d -> g e -> h f -> h g -> h } %%% Basic negative examples ------------------------------------ Here are two Hasse diagrams of posets which are *not* lattices. ![A simple non-lattice](http://i.imgur.com/DAeuYz0.png) %%%comment: digraph G { node [= 0.1, height = 0.1](https://arbital.com/p/width) edge [= "none"](https://arbital.com/p/arrowhead) a [= ""](https://arbital.com/p/label) b [= ""](https://arbital.com/p/label) c [= ""](https://arbital.com/p/label) d [= ""](https://arbital.com/p/label) e [= ""](https://arbital.com/p/label) f [= ""](https://arbital.com/p/label) g [= ""](https://arbital.com/p/label) h [= ""](https://arbital.com/p/label) rankdir = BT; a -> b a -> c a -> d b -> e b -> f c -> e c -> g d -> f d -> g e -> h f -> h g -> h } %%% In the above diagram, the two bottom elements have no common lower bounds. Therefore they have no meet, and so the depicted poset is not a lattice. However, it should be easy to verify that this poset is closed under binary joins. ![Another simple non-lattice](http://i.imgur.com/5Vqk87u.png) %%%comment: dot source: digraph G { node [= 0.1, height = 0.1](https://arbital.com/p/width) edge [= "none"](https://arbital.com/p/arrowhead) a [= ""](https://arbital.com/p/label) b [= ""](https://arbital.com/p/label) c [= ""](https://arbital.com/p/label) d [= ""](https://arbital.com/p/label) rankdir = BT; a -> b c -> d } %%% The Hasse diagram of this poset has two connected components. No element from the left component will have a meet or a join with any element from the right component. The depicted poset is therefore *not* a lattice. The connecting lemma -------------------- The connecting lemma states that for any lattice $L$ and $p,q \in L$, $p \vee q = p \Leftrightarrow q \leq p$ and dually, $p \wedge q = p \Leftrightarrow q \geq p$. This simple but important lemma is so named because it establishes a connection between the lattice's join operator and its underlying poset order. %%hidden(Proof): We prove $p \vee q = p \Leftrightarrow q \leq p$; the other part follows from duality. If $p \vee q = p$, then $p$ is an upper bound of both $p$ and $q$, and so $q \leq p$. Going the other direction, suppose $q \leq p$. Since $p$ is and upper bound of itself by reflexivity, it then follows that $p$ is an upper bound of $\{p, q\}$. There cannot be a lesser upper bound of $\{p, q\}$ because it would not be an upper bound of $p$. Hence, $p \vee q = p$. %% Lattices as algebraic structures ------------------------------------- It's also possible to formulate lattices as [algebraic structures](https://arbital.com/p/3gx) $\langle L, \vee, \wedge \rangle$ satisfying the associativity, commutativity, idempotency, and absorption laws described above. A poset $\langle L, \leq \rangle$ can then be defined such that for $p, q \in L$, $p \leq q$ whenever $p \vee q = q$. It can be shown that this poset is closed under binary meets and joins, and that these meets and joins are equal to the corresponding meets and joins of the algebraic lattice. Additional material ------------------------------- For more examples of lattices, see [https://arbital.com/p/574](https://arbital.com/p/574). For some exercises involving the concepts introduced on this page, see [https://arbital.com/p/5ff](https://arbital.com/p/5ff).
6b0bd126-2a22-4bb7-974b-f6696071f0b3
StampyAI/alignment-research-dataset/special_docs
Other
Mapping the AI Investment Activities of Top Global Defense Companies October 2021 Mapping the AI Investment Activities of Top Global Defense Companies CSET Issue Brief AUTHOR S Ngor Luong Rebecca Gelles Melissa Flagg Center for Security and Emerging Technology | 1 Executive Summary As a dual -use technology, artificial intelligence (AI) has the potential to transform a nation’s economy, national security, and society. Militaries across the globe, including the U.S. Department of Defense (DOD), are increasingly treating AI as a technology that is central to their long -term strategies and planning. One of the most significant paths of acquisition and integration of emerging technologies, inclu ding AI -enabled technology, into defense systems and platforms is through the largest global defense companies. As militaries around the world are ramping up their capabilities and capacity to harness AI, it is important to begin to understand the state of AI innovation within the major global defense companies. 1 With tech giants such as Google and Amazon at the forefront of AI innovation , major global defense companies are pressed to step up their innovation -related activities to keep pace with the larg er commercial market. In addition to expanding their internal research and development (R&D) programs and collaborations with leading research institutes and universities, another way the largest defense companies might maintain their competitiveness in th e emerging technology market is through investments and mergers and acquisitions (M&A). If done correctly, these approaches may give these defense giants a leg up against competitors that predominantly rely on in -house R&D. Looking specifically to the U.S. defense innovation ecosystem, major defense companies that invest in or acquire AI companies could potentially serve as a bridge between DOD and the commercial innovation ecosystem. I n doing so, the se companies might help the U.S. military adapt and lever age commercial technology for defense applications. Within this context, this paper looks at the state of AI investments and M&A by the top 50 global defense companies —as determined by their military revenue —to further understand their use of these approa ches to access external AI innovation. Our key findings include the following: ● Few of the top 50 global defense companies are investing in or acquiring privately -held AI companies . This suggests that Center for Security and Emerging Technology | 2 these companies are not relying on these approaches to access AI technology in the private sector . ● Defense companies that have a corporate venture capital (CVC) subsidiary have invested more in AI companies than those without such organizations. These CVC subsidiaries also tend to make more AI investments than their parent companies. ● The largest global defense companies often bought and invested in AI companies based in the United States. Despite concerns about foreign entities leveraging American innovation, within the defense industry, U.S. -based AI companies are still more likely to receive investments from U.S.-based defense companies. Center for Security and Emerging Technology | 3 Table of Contents Executive Summary ........................................................................................ 1 Introduction ....................................................................................................... 4 Understanding the AI Investment and M&A Activities of the Largest Global Defense Companies .......................................................... 6 The Role of Corporate Venture Capital Subsidiaries ............................ 9 Investing and Buying Within U.S. Borders ............................................ 12 Conclusion ....................................................................................................... 14 Authors ............................................................................................................. 16 Acknowledgments ........................................................................................ 16 Appendix 1: Top 50 Defense Companies and Their Defense Revenue in 2019 ........................................................................................... 17 Appendix 2: Methodology and Data ....................................................... 20 Appendix 3: Domestic and International AI M&A and Investment Activity .............................................................................................................. 21 Appendix 4: Defense Corporate Venture Capital Subsidiaries ....... 22 Endnotes .......................................................................................................... 23 Center for Security and Emerging Technology | 4 Introduction The major global defense companies are significant suppliers and integrators of high -tech capabilities for militaries around the world, including the U.S . military.2 As the rhetoric around the importance of AI to military superiority ramps up across the g lobe, it is vital to understand the state of AI innovation among these leaders of the defense industry. Major global defense companies such as Northrop Grumman and Boeing increasingly offer technology hardware and services alongside their traditional delivery of aerospace products and conventional military systems. The cutting -edge advances in emerging technologies such as AI, however, are predominantly taking place in the commercial space. While expanding in -house research and development (R&D) efforts an d broadening collaborations with leading universities are important paths for staying competitive, defense giants must also be able to access commercial innovation. One potential pathway by which major defense companies can access the commercial innovation base is through private equity investments and mergers and acquisitions. 3 Large U.S. -based defense industry players such as Lockheed Martin, for example, have “been on a mission to create partnerships [e.g., investments in AI startups] and get access to certain types of AI technology ,” executive director and general manager of Lockheed Martin Ventures Chris Moran emphasized in 2020. 4 Other major international defense companies appear to have followed a similar approach —for example, according to the company’s website, Netherlands/France -based Airbus is “interested in those [tech nology] start- ups that operate in areas of strategic interest to the Group [and] investing in . . . a vision where humans and AI together create solutions that guarantee more efficiency and safety in the aerospace sector. ” 5 If leveraged properly, the inve stment and M&A approaches have the potential to help these large defense industry players expand their emerging technologies portfolio s as well as maintain market relevance in the broader commercial AI innovation space. Center for Security and Emerging Technology | 5 This paper is a first step, intended to assess the degree to which the largest global defense companies use investment and M&A approaches to access AI technology from smaller, privately -held companies —that is, companies not traded publicly on the stock market exchanges. By investing in or ac quiring privately -held AI companies, large defense companies may be seeking to capitalize on the speed and agility characteristics of tech startups, cultivate talent, and help shape emerging technologies for commercial and defense applications.6 But the investment and M&A approach es are not without limitations —for example, investing in or acquiring a company does not necessarily guarantee successful integration of relevant technologies. 7 A more comprehensive assessment of defense companies’ AI capabilities would also need to account for other mechanisms such as internal research through industrial research and development (IRAD) investments, investments directly into universities, or subcontract relationships. These limitations notwithstanding, a closer look into the investment and M&A activity of the major global defense companies can still provide meaningful insights into their efforts to access AI innovation from private markets for use in next- generation defense systems. With that, this paper focuses specifically on the AI investment and M&A activities of the top 50 global defense companies —as determined by military revenue —assessing the investment patterns of their corporate venture capital subsidiaries (where such exist), as well as the geographic breakdown of these defense companie s and their AI target companies. 8 Center for Security and Emerging Technology | 6 Understanding the AI Investment and M&A Activities of the Largest Global Defense Companies As the commercial sector has forged ahead in AI innovation, militaries around the world, including the U.S. DOD, are increasin gly looking outside of the traditional military -industrial relationship and toward nontraditional suppliers of emerging technologies such as tech startups, creating linkages through government programs such as the U.S. Defense Innovation Unit (DIU). 9 To stay relevant and remain ahead of their competitors, large defense companies are also trying to tap into AI technologies being developed in the commercial space, including through the investment and M&A approach es. Major defense companies have the benefit o f experience navigating government and military bureaucracies, an understanding of warfighters’ needs, and the know- how to identify those cutting -edge technologies that can be effectively integrated into combat systems. As such, some of these long -time traditional suppliers see their investments in privately - held AI companies not only as a way to diversify their emerging technologies portfolio, but also as an opportunity to “act as a translator” between the fast- paced world of commercial innovation and the rigid bureaucracy of military acquisition. 10 To better understand the degree to which the largest defense companies are accessing external AI innovation, we analyzed the top 50 global defense companies’ investment and M&A activities in private AI markets from 2013 to 2020. These global defense giants span across Europe, Asia, and the United States; their cumulative revenue in 2019 amounted to $1.197 trillion, including $475 billion in defense revenue. 11 The time frame for our analysis corresponds to big AI breakthroughs, driven in large part by progress in deep learning and machine learning methods in late 2012. 12 To identify AI target companies specifically, we ran a keyword- based search query against business descriptions provided by CSET’s financial databases. 13 Appendix 2 offers further detail on the methodology and the data used in this paper. Our analysis shows that between 2013 and 2020, 11 of the top 50 global defense companies made a total of 52 investment transactions into AI companies with $1.06 b illion in disclosed Center for Security and Emerging Technology | 7 value.14 Over the same period of time, five de fense companies struck M &A deals with six AI comp anies, with one de al disclosing a transaction value of $350 million.15 Only two of the top 50 global defense companies—Huntington Ingalls Industri es and Boe ing— made both AI investments and M&A deals. These trends show that few of the major global defense companies are investing in or acquiring privately-h eld AI companies. A number of factors c an help expl ain wh y the largest global defense companies seem to rarely use the investment and M&A approaches to access AI innovation in the private sector. For one, some of these major defense industry players may not believe A I is mature enough for integration in to embedded systems an d products, or for investment with high financial returns at this point in time.16 It is also possible that simple subcontract relationships on a project-by-project b asis o r internal d evelopment o f integrated AI are easier approaches to manage. While be yond the sc ope of th is paper, data on subcontracts and IRAD investments can provide further insights. While the data show s that the majority of the world’s top 50 defense companies have n ot invested in or dir ectly ac quired AI companies, those that did made significant investments. More specifically, th e 11 global defense compani es th at made investments into privately -held AI companies between 2013 and 2020 dedicated a quarter of their total investments to these ventures. It is also worth noting that 65 percent of these AI investments are spread across ten defense companies ( seve n of which ar e U.S.-based companies), while th e rest came so lely f rom the Dutch/French Airbus which stands out as more active than other de fense comp any investors. Airbus sources its revenue primarily f rom commercial markets, and therefore may be be tter positioned to spread its AI investments internationally.17 These results indicate that a sizable number of the AI investment transactions analyzed in this paper ar e clustered i n U.S.-based defense companies, and that most of the major defense companies outside of the United States, excepting Airbus, are not leveraging private equity investments to access AI companies in the private sector. Center for Security and Emerging Technology | 8 Figure 1: Total To p Global Defense Company AI and Non -AI Investments and Mergers & Acquisitions (M&A), 2013-2020. As Figure 1 illustrates, mergers and acquisitions as a method to access external AI resources seems even less common than private equity investments. Between 2013 and 2020, only five defense companies —four based in the U nited States and one in the U nited Kingdom—used this approach to access private -sector AI technology. Each of the six transactions made by these five major defense companies targeted AI companies based in the United States. Moreover, of the five defense companies that used acquisitions of any sort, two —Huntington Ingalls and United Technologies —rarely made either investment or M&A deals during the period of time we examined. However, when these two companies did acquire other firms , half of the deals were with an Source: Analysis by CSET using Crunchbase and Refinitiv. Center for Security and Emerging Technology | 9 AI company. Among other reasons, major defense companies may be reluctant to pursue AI M&A deals due to the challenges that come with strategic acquisitions, including the difficulties and costs of identifying targets that are financially sound and possessing valuable technology .18 Private equity investment strategies, on the other hand, appear more favorable to investors, in part because these strategies allow them to scope out opportunities with lower risks than acquiring a company.19 The Role of C orporate Venture Capital Subsidiaries One major difference between defense companies that made AI investments and those that did not is the presence of a corporate venture capital (CVC) subsidiary. In fact, as we elaborate below, the majority of the top defense companies’ AI investment activity between 2013 and 2020 can be traced back to these CVC subsidiaries , rather than the defense companies themselves. CVC organizations can serve as intelligence gatherers, helping a company understand its competitors and scan for emerging technologies such as AI that are developing rapidly. 20 By establishing a CVC division, a large defense company may also be signaling an interest in obtaining these cutting -edge technologies which they may later integrate into larger defense systems and platforms. 21 For example, Lockheed Martin established Lockheed Martin Ventures to “tap into these startups, and ultimately serve as a [market/ bridge] for the emerging technology being created outside the walls of the defense industry.” 22 One of the key functions Lockheed Martin Venture performs is screening up to 1,000 startups annually. 23 Other defense players such as Honeywell have also set up their own CVC organizations to proactively engage in venture capital investments in emerging technologies. 24 Thirteen of the top 50 global defense companies have a CVC subsidiary ( see Appendix 4). 25 Between 2013 and 2020, 11 of these 13 CVC subsidiaries made investments in a variety of companies, and eight of those 11 invested in AI companies. 26 As Figure 2 demonstrates, of the 52 AI investment transactions examined in the previous section , 95 percent (49 transactions) Center for Security and Emerging Technology | 10 involved the eight defense companies that have CVC sub sidiaries, amounting to around $1 billion in d isclos ed value. By contrast, the three defense companies without CVC organizations made three investment transactions into privately -held AI companies wh ich totaled $30.5 millio n in disc losed value. I n other words, defense companies with CVC s ubsidiaries mad e mo re investments into AI companies than those major defense firms without corporate venture c apital divisions. Figure 2: Investments in AI and Non -AI by Eleven Defense Companies w ith and without V enture Capital Subsidiaries ( CVC), 2013-2020. Source: Analysis by CSET using Crunchbase and Refinitiv. We further parsed the investment activity of defense companies with CVC subsidiaries by parent -subsidiary relationship—treating the defense company as the parent and the CVC organization as the subsidiary —to compare their investment trends. Of the eight defense companies with CVC subsidiaries that made investments into AI companies , five had both themselves (as parent companies) and their CVC subsidiaries make investments into AI companies, while three defense companies relied exclusively on their Center for Security and Emerging Technology | 11 subsidiaries to invest in AI -related ventures. To be clear, over the period of time under our rev iew, the parent companies have made investments in a variety of companies, but they did not prioritize investments into AI companies as much as their CVC subsidiaries did. As Figure 2 demonstrates, CVC subsidiaries dedicated 27 percent of their total inves tment transactions to AI companies, while their parent companies committed only about 9 percent of their total investments to AI. Looking more closely into the role of CVC subsidiaries also revealed a particularly interesting insight: between 2013 and 2020, the majority of AI investments came from CVC subsidiaries rather than the major defense companies themselves. Specifically, 92 percent (45 investment transactions) of AI investments were conducted by CVC subsidiaries such as Airbus Venture and Honeywell Venture Capital, compared to the 8 percent ( four investment transactions) made by parent companies, namely defense leaders such as Airbus and Honeywell. In sum, establishing a CVC subsidiary may become a real advantage for the major defense industry playe rs that are interested in accessing private -sector AI technology. It is unknown, though, if and how these major defense companies leverage and integrate the capabilities of the AI companies invested in by the CVC subsidiaries. Establishing this with certainty requires firms’ internal data, which is beyond the scope of this paper. Center for Security and Emerging Technology | 12 Investing and B uying Within U.S. Borders The majority of private equity AI investments made by the 11 major global defense companies between 2013 and 2020 was concentrated in the United States. This finding is perhaps not surprising considering that the United States has the world’s largest investment market for AI companies.27 Figure 3 shows that AI target companies based in the United States attracted 79 percent of the AI investment transactions from the major defense companies, while Israeli ( 10 percent), French ( 4 percent), Canadian (4 percent ), Swiss ( 2 percent), and Indian ( 2 percent ) AI target companies accounted for the rest. Similarly, the five defense companies with AI M&A activity exclusively bought American AI companies. Figure 3: U.S. -based AI Companies are the Top 50 Global Defense Companies’ Primary AI Investment Targets, 2013- 2020. Source: Analysis by CSET using Crunchbase and Refinitiv. Not only do U.S.-based AI companies tend to attract the majority of AI investmen ts made by the major global defense companies, U.S. - based defense companies predominantly invested in U.S. -based AI companies. American defense companies such as Lockheed Martin and Honeywell accounted for over 60 percent of investments (25 transactions) i nto U.S. -based AI target companies, with the rest of Center for Security and Emerging Technology | 13 the transactions tracing back to the Dutch/French giant, Airbus, (37 percent/15 transactions) and the South Korean leading defense company, Hanwha ( 3 percent/1 transaction). Similarly, in our findings fo r the AI M&A activity of the largest global defense companies, of the six U.S. -based AI companies acquired, two struck a deal with BAE Systems, a UK -based defense company, and four were bought by U.S. -based defense companies. These AI investment and M&A pa tterns may moderate some recent concerns that foreign entities —especially those affiliated with the militaries of U.S. competitors and adversaries such as China—are investing in and buying U.S. -based AI companies.28 U.S. innovation in the commercial sector may have the potential to help bolster U.S. defense systems. As the majority of AI investments and M&A made by the largest global defense companies remain within U.S. borders, it is crucial for the United States to continue fostering a vibrant innovation ecosystem and AI private markets at home. Small AI companies and startups based in the U nited States may be the driving force of AI advances and breakthroughs. However, in order for the AI -enabled capabilities developed in the commercial sector to become usable in fielded military systems and platforms, traditional defense companies are often needed to build and integrate them into large, complex systems. 29 As important suppliers and integrators, the major defense companies have shown interest in accessing AI -enabled capability from AI companies in the private sector . For example, Lockheed Martin’s investment in Fiddler, a company specializing in AI explainability, may help the defense giant develop trustworthy AI systems.30 There is potential for the major defense companies to capitalize on American innovation in the commercial sector. However, as shown in this paper, the major global defense companies are not currently leveraging investment and M&A as significant tools to harness AI -enabled techn ology in the private sector. Center for Security and Emerging Technology | 14 Conclusion The top 50 global defense companies play a critical role as suppliers and integrators of essential capabilities, including AI - enablers, into fielded military systems. As militaries around the world, including t he U.S. Department of Defense, continue to prioritize AI, it is necessary to understand the state of AI capabilities and capacity within the major global defense companies. Analyzing AI investment and M&A trends involving the top 50 global defense companies provides insights into the reality of access to commercial AI by the leading suppliers of defense systems and platforms. To summarize, our key findings are as follows: ● The top 50 global defense companies conducted a relatively small amount of AI investm ent o r M&A activity, suggesting they do not rely heavily on these approaches to access AI technology in the private sector. ● Defense companies that have a corporate venture capital (CVC) subsidiary more frequently invest in AI companies than those without such organizations. These CVC subsidiaries also tend to make more AI investments than their parent companies. ● The largest global defense companies often bought and invested in AI companies based in the United States. Despite concerns about entities associ ated with foreign militaries leveraging American innovation, within the defense industry, U.S. -based AI companies are still most likely to receive investments from U.S. -based defense companies. The findings presented in this paper suggest that there is lit tle AI investment and M&A activity initiated by the largest global defense companies since the major AI/ML breakthroughs in late 2012. It is important to understand that the lack of dependency on investment and M&A to access commercial AI technology is not equivalent to the absence of AI capabilities and capacity within the Center for Security and Emerging Technology | 15 major defense companies. It is possible that major global defense companies are accessing AI either using subcontract relationships on a project -by-project basis or by developing integra ted AI internally . It is also possible that major global defense companies may not believe AI is mature enough for integration into embedded systems and products at this point, or that defense company investors are perhaps not ready to tolerate the risk an d uncertainty of investment returns associated with emerging technologies and early -stage investment. Beyond the assessment of AI investment and M&A activities in the private sector, there are other mechanisms that should be further analyzed to achieve a comprehensive understanding of the current state of AI capabilities and capacity within the major global defense companies. For instance, having a corporate venture capital subsidiary appears to be a real advantage for the largest defense companies in acce ssing external AI technology in the private sector. These defense companies should further examine not only the magnitude of the ir subsidiaries’ investments into AI privately -held companies, but also how well the acquired companies’ technology has been int egrated into the parent company . Such findings may prove to be significant for the defense companies to decide on whether to ramp up or scale down their investment and M&A efforts to, in part, access AI innovation in the private sector. For the U.S. DOD, conducting internal research on IRAD investments or major defense companies’ subcontract relationships may provide a more comprehensive view of the AI acquisition and integration capabilities useful to defense systems. Center for Security and Emerging Technology | 16 Authors Ngor Luong is a research analyst with CSET, where Rebecca Gelles is a data scientist and Melissa Flagg wa s a senior fellow. Acknowledgments Thanks to Catherine Aiken, David Bonfili, Jaymie Durnan, Zigfried Hampel- Arias, Margarita Konaev, and Igor Mikolic -Torreira for helpful feedback, Ilya Rahkovsky for technical assistance, Dale Brauner, Melissa Deng, and Alex Friedland for editorial support, as well as Farhana Hossain for help with data visualization. The authors are solely responsible for the views expressed in this piece and for any errors. © 2021 by the Center for Security and Emerging Technology. This work is licensed under a Creative Commons Attribution -Non Commercial 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by -nc/4.0/ . Document Identifier: doi: 10.51593/ 20210015 Center for Security and Emerging Technology | 17 Appendix 1: Top 50 Defense Companies and Their Defense Revenue in 2019 This appendix includes the top 50 global defense companies analyzed in this report , ranked according to their defense revenue , and lists their 2019 total revenue, def ense revenue—that is, revenue from their defense divisions —and proportion of defense revenue to total revenue. Rank Company Country 2019 Defense Revenue (in millions) 2019 Total Revenue (in millions) Proportion of Defense Revenue 1 Lockheed Martin United States $56,606.00 $59,812.00 95% 2 Boeing United States $34,300.00 $76,559.00 45% 3 General Dynamics United States $29,512.00 $39,350.00 75% 4 Northrop Grumman United States $28,600.00 $33,841.00 85% 5 Raytheon Company United States $27,448.00 $29,200.00 94% 6 Aviation Industry Corporation of China China $25,075.38 $66,858.02 38% 7 BAE Systems United Kingdom $21,033.27 $23,370.30 90% 8 China North Industries Group Corporation Limited China $14,771.60 $68,074.15 22% 9 L3Harris Technologies United States $13,916.98 $18,074.00 77% 10 United Technologies Corp. United States $13,090.00 $77,000.00 17% 11 China Aerospace Science and Industry Corporation China $12,035.25 $37,610.17 32% 12 Airbus Netherlands/ France $11,266.57 $78,916.36 14% 13 Leonardo Italy $11,109.27 $15,429.55 72% 14 China Shipbuilding Industry Corporation China $11,019.56 $55,097.78 20% 15 China Electronics Technology Group China $10,148.87 $32,951.25 31% 16 Thales France $9,251.68 $20,596.61 45% Center for Security and Emerging Technology | 18 17 Almaz -Antey Russia $9,191.60 $9,651.71 95% 18 China South Industries Group Corporation China $8,845.87 $28,550.02 31% 19 Huntington Ingalls Industries United States $8,119.00 $8,899.00 91% 20 China Aerospace Science and Technology Corporation China $7,745.57 $36,223.21 21% 21 Mitsubishi Heavy Industries Japan $6,570.00 $37,670.00 17% 22 Dassault France $5,708.84 $8,171.48 70% 23 Leidos United States $5,364.00 $11,094.00 48% 24 China State Shipbuilding Corporation China $5,356.75 $33,495.61 16% 25 Honeywell United States $5,326.00 $36,709.00 15% 26 Booz Allen Hamilton United States $5,182.96 $7,463.84 69% 27 Rolls -Royce United Kingdom $4,712.36 $19,725.28 24% 28 Safran France $4,413.05 $27,581.55 16% 29 GE Aviation United States $4,400.00 $32,875.00 13% 30 Naval Group France $4,155.14 $4,155.14 100% 31 Elbit Systems Ltd. Israel $4,056.00 $4,508.00 90% 32 Hanwha South Korea $3,976.23 $42,900.00 9% 33 Rheinmetall AG Germany $3,942.46 $7,001.73 56% 34 CACI International Inc. United States $3,489.85 $4,986.34 70% 35 Tactical Missiles Corporation JSC Russia $3,474.90 $3,546.46 98% 36 SAIC United States $3,317.08 $6,379.00 52% 37 Perspecta Inc. United States $3,300.00 $4,504.00 73% 38 Textron Inc. United States $3,271.20 $13,630.00 24% 39 Babcock International United Kingdom $3,233.92 $6,220.17 52% 40 Saab AB Sweden $3,185.19 $3,747.29 85% 41 Israel Aerospace Industries Ltd. Israel $3,006.00 $4,108.00 73% 42 KBR United States $2,852.62 $5,639.38 51% Center for Security and Emerging Technology | 19 43 KNDS Netherlands $2,798.45 $2,798.45 100% 44 Rafael Advanced Defense Systems Ltd. Israel $2,746.65 $2,746.65 100% 45 Hindustan Aeronautics Ltd. India $2,710.00 $3,010.00 90% 46 Amentum United States $2,700.00 $4,100.00 66% 47 Bechtel United States $2,303.00 $21,800.00 11% 48 Aselsan A.S. Turkey $2,172.57 $2,290.61 95% 49 Jacobs United States $2,115.00 N/A N/A 50 Oshkosh Defense United States $2,032.00 $8,382.00 24% Center for Security and Emerging Technology | 20 Appendix 2: Methodology and Data Identifying global defense companies : We select ed a list of the top 50 global defense companies based on their defense revenue in 2019 using Defense News’ Top 100 list, compiled by Defense News and the International Institu te for Strategic Studies. 31 Appendix 1 provides more detail on the selected companies and their defense revenue. Identifying AI investments and M&A : We supplement ed Crunchbase data with data from Refinitiv to identify all private equity investment and M&A transactions made by the top 50 global defense companies. We limited the data to 2013 to 2020 to accommodate the big AI breakthroughs and a surge in deep learning and machine learning in late 2012 and the inclusion of Refinitiv data after 2012. In this pap er, M&A refers to investment transactions with 100 percent of the target company’s equity sold, while private -market investments include venture capital and private equity funding rounds. Therefore, we excluded transactions where the investment target is c lassified in Crunchbase as “Post - IPO” or “Grant.” In this paper, we did not attempt to identify undisclosed M&A transactions and thus, excluded deals that are labeled “Intended,” “Pending,” “Rumor,” “Withdrawn,” or “Unknown” in Refinitiv. These restrictions allow us to meaningfully capture the investment and M&A trends by the global defense companies with higher confidence. To identify AI companies, we ran a regular expression -based search query against business descriptions of the target companies provide d by Crunchbase and Refinitiv. 32 Center for Security and Emerging Technology | 21 Appendix 3: Domestic and International AI M&A and Investment Activity Source: Analysis by CSET using Crunchbase and Refinitiv. Center for Security and Emerging Technology | 22 Appendix 4: Defense Corporate Venture Capital Subsidiaries The following table contains the name of defense companies and their corporate venture capital subsidiaries , as well as the country in which they are headquartered/founded. Defense Company Corporate Venture Capital Subsidiary Country Airbus Airbus Venture Netherlands/France BAE Systems BAE Systems Ventures33 United Kingdom Boeing Boeing HorizonX Ventures United States China Aerospace Science and Industry Corporation CASIC Capital China Hanwha Hanwha Investment Corp South Korea Honeywell Honeywell Venture Capital United States Lockheed Martin Lockheed Martin Ventures United States Rafael Advanced Defense Systems Ltd. Rafael Development Corporation Israel Saab AB Saab Ventures Sweden Safran Safran Corporate Ventures France SAIC SAIC Venture Capital United States China South Industries Group Corporation SIAMC Management Co., Ltd. China Thales Thales Corporate Ventures France Center for Security and Emerging Technology | 23 Endnotes 1 “The Final Report,” National Security Commission on Artificial Intelligence, Final Report (Washington, DC: NSCAI , March 2021) , 7, https://www.nscai.gov/wp - content/uploads/2021/03/Full -Report -Digital- 1.pdf . 2 Melissa Flagg and Jack Corrigan , "Ending Innovation Tourism : Rethinking the U.S. Military’s Approach to Emerging Technology Adoption " (Center for Security and Emerging Technology, July 2021), 4 –5, https://cset.georgetown.edu/publication/ending -innovation -tourism/ . 3 In this paper, mergers and acquisitions refer to investment transactions with 100 percent of the target company’s equity sold. In our analysis, we grouped mergers and acquisitions together and focused on transactions that involve majority own ership. Investments include venture capital and private equity funding rounds. We excluded transactions where the target was classified by Crunchbase as Post -IPO or G rant. See Zachary Arnold, Ilya Rahkovsky, and Tina Huang, “Tracking AI Investment: Initial Findings from the Private Markets” (Center for Security and Emerging Technology, September 2020 ), https://cset.georgetown.edu/research/tracking -ai-investment/ . 4 Marjorie Censer , “Lockheed Martin Ventures invests in Fiddler ,” Inside Defense, August 12, 2020, https://insidedefense.com/insider/lockheed -martin -ventures - invests -fiddler . 5 “Artificial Intelligence,” Airbus, December 6, 2016, https://www.airbus.com/newsroom/news/en/2016/12/Artificial -Intelligence.html . 6 Chris Moran, “Lockheed Martin seeks investments not acquisitions in dual -use technology startups,” Lockheed Martin, July 2020, https://ww w.lockheedmartin.com/en -us/news/features/2020/the -win-win-of- corporate -backed -funds.html ; see further Thomas J. Chemmanur and Elena Loutskina, “The Role of Venture Capital Backing in Initial Public Offerings: Certification, Screening, or Market Power? ,” EFA 2005 Moscow Meetings Paper (September 2006): 7 –8, http://dx.doi.org/10.2139/ssrn.604882 ; Adil Khan, Steve Beckey, and Jim Adams, “After the Shock: Implications for M&A in the aerospace & defense ma rket” (KPMG, 2020) , https://advisory.kpmg.us/articles/2020/after - shock -implications -ma-aerospace -defense -market.html . 7 Josh Lerner, “Corp orate Venturing,” Harvard Business Review, October 2013, https://hbr.org/2013/10/corporate -venturing ; Donald DePamphilis, Mergers and Acquisitions Basics: All You Need To Know (Cambridge, MA: Academic Press , October 2010) , 43–44; Previous examples of major defense companies buying cyber firms but failing to effectively incorporate their employees and products into the larger company show the limits of the M&A element in particular, Aaron Center for Security and Emerging Technology | 24 Mehta, “The great industrial competition: Mergers, acquisitions and geopolitical events fuel revenue,” Defense News , July 22, 2019, https://www.defensenews.com/top -100/2019/07/22/the -great -industrial - competition -mergers -acquisitions -and-geopolitical- events -fuel-revenue/ . 8 “Top 100 for 2020,” Defense News , 2020, https://people.defensenews.com/top -100/ . The top 50 global defense companies were selected based on their defense revenue. See Appendix 1 for additional details. 9 Flagg an d Corrigan , "Ending Innovation Tourism." 10 Loren Thompson, “Raytheon And BAE Systems Are Drawing Nontraditional Suppliers Into Defense,” Forbes , January 24, 2020, https://www.forbes.com/sites/lorenthompson/2020/01/24/how -top-military - contractors -raytheon -and-bae-systems -are-drawing -non- traditional- suppliers - into-defense/?sh=30a8abb06af2 . 11 In this paper, we categorized the top 50 global defense companies’ geographic location based on the location of their headquarters. 12 Md Zaha ngir Alom , Tarek M. Taha, Chris Yakopcic, and Stefan Westberg , et al., “The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches,”arXiv preprint arXiv:1803.01164 (2018), https://arxiv.org/pdf/1803.01164.pdf . 13 For more details on the methodology, see Arnold, Huang, and Rahkovsky, “Tracking AI Investment,” 33; replication data and code are available at https://github.com/georgetown -cset/global -defense -companies . 14 The investment disclosed value refers to the amount raised by a company during a particular series of funding. 15 Of the 6 M&A deals, 5 kept their transaction value confidential. 16 Jacques Bughin, Eric Hazan, Sree Ramaswamy, and Michael Chui et al., “Artificial Intelligence: The Next Digital Frontier?” ( McKinsey Global Institute, June 2017) , 35, https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electroni cs/our%20insights/how%20artificial%20in telligence%20can%20deliver%20real %20value%20to%20companies/mgi -artificial- intelligence -discussion - paper.ashx . 17 See appendix 3 for details. Center for Security and Emerging Technology | 25 18 Michael Richter, “Defense M&A is hot — but where are the targets? ,” Defense News , July 17, 2018, https://www.defensenews.com/newsletters/daily -news - roundup/2018/07/17/defense -ma-is-hot-but-where -are-the-targets/ ; Matt Aaronson, Doug Belair, Paul DeLia, and Drosten Fisher et al., “Building Beachheads in the US Defense Market Through M&A ,” Boston Consulting Group, July 23, 2020, https://www.bcg.com/publications/2020/building - beachheads -us-defense -market -through -mergers -acquisitions . 19 Aaron Mehta, “Boeing’s venture fund boss talks venture capital and growing defense investments,” Defense News , January 30, 2020, https://www.defensenews.com/smr/cultural -clash/2020/01/30/boeings- venture - fund -boss -talks -venture -capital -and-growing -defense- investments/ . 20 Lerner, “Corporate Venturing” ; Gary Dushnitsky and Michael J. Lenox, “When does corporate venture capital investment create firm value?,” Journal of Business Venturing 21, no. 6 (November 2006): 757; Doug Cameron, “Defense Industry Adds Venture Capital to Its Arsenal,” The Wall Street Journal, July 5, 2018, https://www.wsj.com/articles/defense -industry -adds -venture -capital- to- its-arsenal -1530792001 . 21 Mehta, “Boeing’s venture fund boss talks venture capital and growing defense investments.” 22 Moran, “Lockheed Martin seeks investments not acquisitions in dual -use technology startups ”; brackets in the original. 23 Theresa Hitchens, “Lockheed Martin Ventures Scouts Next -Gen AI/ML Tech ,” Breaking Defense, August 11, 2020, https://breakingdefense.com/2020/08/lockheed -martin -ventures -scouts -next - gen-ai-ml-tech/ . 24 Michael Vizard, “Honeywell lays down $1.3 billion to drive AI and IoT into life sciences,” VentureBeat, December 23, 2020, https://ventu rebeat.com/2020/12/23/honeywell -lays-down -1-3b-to-drive -ai- and-iot-into-life-sciences/ . 25 BAE Systems Ventures is inactive. See “BAE Systems Ventures,” Pitchbook, accessed May 27, 2021, https://pitchbook.com/profiles/investor/123753 -97. 26 In this paper, we sorted the 52 investment transactions made by these 11 defense companies based on whether or not the company investor has established a CVC division. Center for Security and Emerging Technology | 26 27 In 2019, privately held AI co mpanies attracted nearly $40 billion in disclosed equity investment. See further Arnold, Huang, and Rahkovsky, “Tracking AI Investment,” 7. 28 For example, Chinese investments into U.S. -based privately -held AI companies have come with the goal of technology transfer, and Chinese defense companies with military contracts unquestionably hope to harness AI technology obtained through investments for military advantage. See e.g., William Hannas and Huey -Meei Chang, “China’s Access to Foreign AI Technology: An Assessment” (Center for Security and Emerging Technology, September 2019) , https://cset.georgetown.edu/publication/chinas -access -to-foreign -ai- technology/ ; Michae l Brown and Pavneet Singh, “China’s Technology Transfer Strategy: How Chinese Investments in Emerging Technology Enable A Strategic Competitor to Access the Crown Jewels of U.S. Innovation” (Defense Innovation Unit Experimental, January 2018 ), https://admin.govexec.com/media/diux\_chinatechnologytransferstudy\_jan\_2018 \_(1).pdf ; Andrew Zalesk i, “Lockheed Martin invests millions in defense start- ups to fast- track R&D ,” CNBC , November 2, 2017, https://www.cnbc.com/2017/10/31/lockheed -martin -places -big-bets -on- defense- start- ups.html ; in this paper, we did not attempt to identify undisclosed investment transactions unavailable in commercial datasets. Moreover, our analysis only focuses on the top 50 global defense companies, not smaller companies with military contracts. While there are companies with milita ry contracts investing in AI companies, we leave this to future research. 29 Flagg and Corrigan, “Ending Innovation Tourism ”; Zalesk i, “Lockheed Martin invests millions in defense start -ups to fast -track R&D.” 30 Hitchens, “Lockheed Martin Ventures Scouts N ext-Gen AI/ML Tech.” 31 “Top 100 for 2020.” 32 For more details on the methodology, see Arnold, Huang, and Rahkovsky, “Tracking AI Investment,” 33; replication data and code are available at https://github.com/georgetown -cset/global -defense -companies . 33 BAE Systems Ventures is currently inactive. See note 2 5 above.
9052a8a3-415a-44e1-ba97-bf0074332197
StampyAI/alignment-research-dataset/arxiv
Arxiv
CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery 1 Introduction --------------- ![Refer to caption](/html/2202.00161/assets/x1.png) Figure 1: This work deals with unsupervised skill discovery through mutual information maximization. We introduce Contrastive Intrinsic Control (CIC) – a new unsupervised RL algorithm that explores and adapts more efficiently than prior methods. Deep Reinforcement Learning (RL) is a powerful approach toward solving complex control tasks in the presence of extrinsic rewards. Successful applications include playing video games from pixels  (Mnih et al., [2015](#bib.bib23)), mastering the game of Go (Silver et al., [2017](#bib.bib36), [2018](#bib.bib37)), robotic locomotion (Schulman et al., [2016](#bib.bib33), [2017](#bib.bib34); Peng et al., [2018](#bib.bib30)) and dexterous manipulation (Rajeswaran et al., [2018](#bib.bib32); OpenAI, [2018](#bib.bib25), [2019](#bib.bib26)) policies. While effective, the above advances produced agents that are unable to generalize to new downstream tasks beyond the one they were trained to solve. Humans and animals on the other hand are able to acquire skills with minimal supervision and apply them to solve a variety of downstream tasks. In this work, we seek to train agents that acquire skills without supervision with generalization capabilities by efficiently adapting these skills to downstream tasks. Over the last few years, unsupervised RL has emerged as a promising framework for developing RL agents that can generalize to new tasks. In the unsupervised RL setting, agents are first pre-trained with self-supervised intrinsic rewards and then finetuned to downstream tasks with extrinsic rewards. Unsupervised RL algorithms broadly fall into three categories - knowledge-based, data-based, and competence-based methods222These categories for exploration algorithms were introduced by Srinivas & Abbeel ([2021](#bib.bib39)) and inspired by Oudeyer et al. ([2007](#bib.bib27)).. Knowledge-based methods maximize the error or uncertainty of a predictive model (Pathak et al., [2017](#bib.bib28), [2019](#bib.bib29); Burda et al., [2019](#bib.bib7)). Data-based methods maximize the entropy of the agent’s visitation (Liu & Abbeel, [2021a](#bib.bib21); Yarats et al., [2021b](#bib.bib46)). Competence-based methods learn skills that generate diverse behaviors (Eysenbach et al., [2019](#bib.bib10); Gregor et al., [2017](#bib.bib12)). This work falls into the latter category of competence-based exploration methods. ![Refer to caption](/html/2202.00161/assets/x2.png) Figure 2: Qualitative visualizations of unsupervised skills discovered in Walker, Quadruped, and Jaco arm environments. The Walker learns to balance and move, the Quadruped learns to flip upright and walk, and the 6 DOF robotic arm learns how to move without locking. Unlike prior competence-based methods for continuous control which evaluate on OpenAI Gym (e.g. Eysenbach et al. ([2019](#bib.bib10))), which reset the environment when the agent loses balance, CIC is able to learn skills in fixed episode length environments which are much harder to explore (see Appendix [J](#A10 "Appendix J OpenAI Gym vs. DeepMind control: How Early Termination Leaks Extrinsic Signal ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery")). Unlike knowledge-based and data-based algorithms, competence-based algorithms simultaneously address both the exploration challenge as well as distilling the generated experience in the form of reusable skills. This makes them particularly appealing, since the resulting skill-based policies (or skills themselves) can be finetuned to efficiently solve downstream tasks. While there are many self-supervised objectives that can be utilized, our work falls into a family of methods that learns skills by maximizing the mutual information between visited states and latent skill vectors. Many earlier works have investigated optimizing such objectives (Eysenbach et al., [2019](#bib.bib10); Gregor et al., [2017](#bib.bib12); Kwon, [2021](#bib.bib16); Sharma et al., [2020](#bib.bib35)). However, competence-based methods have been empirically challenging to train and have under-performed when compared to knowledge and data-based methods (Laskin et al., [2021](#bib.bib18)). In this work, we take a closer look at the challenges of pre-training agents with competence-based algorithms. We introduce Contrastive Intrinsic Control (CIC) – an exploration algorithm that uses a new estimator for the mutual information objective. CIC combines particle estimation for state entropy (Singh et al., [2003](#bib.bib38); Liu & Abbeel, [2021a](#bib.bib21)) and noise contrastive estimation (Gutmann & Hyvärinen, [2010](#bib.bib13)) for the conditional entropy which enables it to both generate diverse behaviors (exploration) and discriminate high-dimensional continuous skills (exploitation). To the best of our knowledge, CIC is the first exploration algorithm to utilize noise contrastive estimation to discriminate between state transitions and latent skill vectors. Empirically, we show that CIC adapts to downstream tasks more efficiently than prior exploration approaches on the Unsupervised Reinforcement Learning Benchmark (URLB). CIC achieves 79%percent7979\%79 % higher returns on downstream tasks than prior competence-based algorithms and 18%percent1818\%18 % higher returns than the next-best exploration algorithm overall. 2 Background and Notation -------------------------- Markov Decision Process: We operate under the assumption that our system is described by a Markov Decision Process (MDP) (Sutton & Barto, [2018](#bib.bib41)). An MDP consiss of the tuple (𝒮,𝒜,𝒫,r,γ)𝒮𝒜𝒫𝑟𝛾(\mathcal{S},\mathcal{A},\mathcal{P},r,\gamma)( caligraphic\_S , caligraphic\_A , caligraphic\_P , italic\_r , italic\_γ ) which has states s∈𝒮𝑠𝒮s\in\mathcal{S}italic\_s ∈ caligraphic\_S, actions a∈𝒜𝑎𝒜a\in\mathcal{A}italic\_a ∈ caligraphic\_A, transition dynamics p(s′|s,a)∼𝒫similar-to𝑝conditionalsuperscript𝑠′𝑠𝑎𝒫p(s^{\prime}|s,a)\sim\mathcal{P}italic\_p ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | italic\_s , italic\_a ) ∼ caligraphic\_P, a reward function r𝑟ritalic\_r, and a discount factor γ𝛾\gammaitalic\_γ. In an MDP, at each timestep t𝑡titalic\_t, an agent observes the current state s𝑠sitalic\_s, selects an action from a policy a∼π(⋅|s)a\sim\pi(\cdot|s)italic\_a ∼ italic\_π ( ⋅ | italic\_s ), and then observes the reward and next state once it acts in the environment: r,s′∼env.step(a)similar-to𝑟superscript𝑠′ env.step𝑎r,s^{\prime}\sim\text{env.step}(a)italic\_r , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ env.step ( italic\_a ). Note that usually r𝑟ritalic\_r refers to an extrinsic reward. However, in this work we will first be pre-training an agent with intrinsic rewards rintsuperscript𝑟intr^{\text{int}}italic\_r start\_POSTSUPERSCRIPT int end\_POSTSUPERSCRIPT and finetuning on extrinsic rewards rextsuperscript𝑟extr^{\text{ext}}italic\_r start\_POSTSUPERSCRIPT ext end\_POSTSUPERSCRIPT. ![Refer to caption](/html/2202.00161/assets/x3.png) Figure 3: Architecture illustrating the practical implementation of CIC . During a gradient update step, random τ=(s,s′)𝜏𝑠superscript𝑠′\tau=(s,s^{\prime})italic\_τ = ( italic\_s , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) tuples are sampled from the replay buffer, then a particle estimator is used to compute the entropy and a noise contrastive loss to compute the conditional entropy. The contrastive loss is backpropagated through the entire architecture. The entropy and contrastive terms are then scaled and added to form the intrinsic reward. The RL agent is optimized with a DDPG (Lillicrap et al., [2016](#bib.bib20)). For convenience we also introduce the variable τ(s)𝜏𝑠\tau(s)italic\_τ ( italic\_s ) which refers to any function of the states s𝑠sitalic\_s. For instance τ𝜏\tauitalic\_τ can be a single state, a pair of states, or a sequence depending on the algorithm. Our method uses τ=(s,s′)𝜏𝑠superscript𝑠′\tau=(s,s^{\prime})italic\_τ = ( italic\_s , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) to encourage diverse state transitions while other methods have different specifications for τ𝜏\tauitalic\_τ. Importantly, τ𝜏\tauitalic\_τ does not denote a state-action trajectory, but is rather shorthand for any function of the states encountered by the agent. In addition to the standard MDP notation, we will also be learning skills z∈𝒵𝑧𝒵z\in\mathcal{Z}italic\_z ∈ caligraphic\_Z and our policy will be skill-conditioned a∼π(⋅|s,z)a\sim\pi(\cdot|s,z)italic\_a ∼ italic\_π ( ⋅ | italic\_s , italic\_z ). Unsupervised Skill Discovery through Mutual Information Maximization: Most competence-based approaches to exploration maximize the mutual information between states and skills. Our work and a large body of prior research (Eysenbach et al., [2019](#bib.bib10); Sharma et al., [2020](#bib.bib35); Gregor et al., [2017](#bib.bib12); Achiam et al., [2018](#bib.bib1); Lee et al., [2019](#bib.bib19); Liu & Abbeel, [2021b](#bib.bib22)) aims to maximize a mutual information objective with the following general form: | | | | | | --- | --- | --- | --- | | | I(τ;z)=ℋ(z)−ℋ(z|τ)=ℋ(τ)−ℋ(τ|z)𝐼𝜏𝑧ℋ𝑧ℋconditional𝑧𝜏ℋ𝜏ℋconditional𝜏𝑧\displaystyle I(\tau;z)=\mathcal{H}(z)-\mathcal{H}(z|\tau)=\mathcal{H}(\tau)-\mathcal{H}(\tau|z)italic\_I ( italic\_τ ; italic\_z ) = caligraphic\_H ( italic\_z ) - caligraphic\_H ( italic\_z | italic\_τ ) = caligraphic\_H ( italic\_τ ) - caligraphic\_H ( italic\_τ | italic\_z ) | | (1) | Competence-based algorithms use different choices for τ𝜏\tauitalic\_τ and can condition on additional information such as actions or starting states. For a full summary of competence-based algorithms and their objectives see Table [3](#A1.T3 "Table 3 ‣ Appendix A Competence-based Exploration Algorithms ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") in Appendix [D](#A4 "Appendix D Relation to Prior Skill Discovery Methods ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"). Lower Bound Estimates of Mutual Information: The mutual information I(s;z)𝐼𝑠𝑧I(s;z)italic\_I ( italic\_s ; italic\_z ) is intractable to compute directly. Since we wish to maximize I(s;z)𝐼𝑠𝑧I(s;z)italic\_I ( italic\_s ; italic\_z ), we can approximate this objective by instead maximizing a lower bound estimate. Most known mutual information maximization algorithms use the variational lower bound introduced in Barber & Agakov ([2003](#bib.bib3)): | | | | | | --- | --- | --- | --- | | | I(τ;z)=ℋ(z)−ℋ(z|τ)≥ℋ(z)+𝔼[log⁡q(z|τ)]𝐼𝜏𝑧ℋ𝑧ℋconditional𝑧𝜏ℋ𝑧𝔼𝑞conditional𝑧𝜏\displaystyle I(\tau;z)=\mathcal{H}(z)-\mathcal{H}(z|\tau)\geq\mathcal{H}(z)+\operatorname\*{\mathbb{E}}[\log q(z|\tau)]italic\_I ( italic\_τ ; italic\_z ) = caligraphic\_H ( italic\_z ) - caligraphic\_H ( italic\_z | italic\_τ ) ≥ caligraphic\_H ( italic\_z ) + blackboard\_E [ roman\_log italic\_q ( italic\_z | italic\_τ ) ] | | (2) | The variational lower bound can be applied to both decompositions of the mutual information. The design decisions of a competence-based algorithm therefore come down to (i) which decomposition of I(τ;z)𝐼𝜏𝑧I(\tau;z)italic\_I ( italic\_τ ; italic\_z ) to use, (ii) whether to use discrete or continuous skills, (iii) how to estimate H(z)𝐻𝑧H(z)italic\_H ( italic\_z ) or H(τ)𝐻𝜏H(\tau)italic\_H ( italic\_τ ), and finally (iv) how to estimate H(z|τ)𝐻conditional𝑧𝜏H(z|\tau)italic\_H ( italic\_z | italic\_τ ) or H(τ|z)𝐻conditional𝜏𝑧H(\tau|z)italic\_H ( italic\_τ | italic\_z ). 3 Motivation ------------- Results from the recent Unsupervised Reinforcement Learning Benchmark (URLB) (Laskin et al., [2021](#bib.bib18)) show that competence-based approaches underperform relative to knowledge-based and data-based baselines on DeepMind Control (DMC). We argue that the underlying issue with current competence-based algorithms when deployed on harder exploration environments like DMC has to do with the currently used estimators for I(τ;z)𝐼𝜏𝑧I(\tau;z)italic\_I ( italic\_τ ; italic\_z ) rather than the objective itself. To produce structured skills that lead to diverse behaviors, I(τ;z)𝐼𝜏𝑧I(\tau;z)italic\_I ( italic\_τ ; italic\_z ) estimators must (i) explicitly encourage diverse behaviors and (ii) have the capacity to discriminate between high-dimensional continuous skills. Current approaches do not satisfy both criteria. Competence-base algorithms do not ensure diverse behaviors: Most of the best known competence-based approaches (Eysenbach et al., [2019](#bib.bib10); Gregor et al., [2017](#bib.bib12); Achiam et al., [2018](#bib.bib1); Lee et al., [2019](#bib.bib19)), optimize the first decomposition of the mutual information ℋ(z)−ℋ(z|τ)ℋ𝑧ℋconditional𝑧𝜏\mathcal{H}(z)-\mathcal{H}(z|\tau)caligraphic\_H ( italic\_z ) - caligraphic\_H ( italic\_z | italic\_τ ). The issue with this decomposition is that while it ensures diversity of skill vectors it does not ensure diverse behavior from the policy, meaning max⁡ℋ(z)ℋ𝑧\max\mathcal{H}(z)roman\_max caligraphic\_H ( italic\_z ) does not imply max⁡ℋ(τ)ℋ𝜏\max\mathcal{H}(\tau)roman\_max caligraphic\_H ( italic\_τ ). Of course, if H(z)−ℋ(z|τ)𝐻𝑧ℋconditional𝑧𝜏H(z)-\mathcal{H}(z|\tau)italic\_H ( italic\_z ) - caligraphic\_H ( italic\_z | italic\_τ ) is maximized and the skill dimension is sufficiently large, then ℋ(τ)ℋ𝜏\mathcal{H}(\tau)caligraphic\_H ( italic\_τ ) will also be maximized implicitly. Yet in practice, to learn an accurate discriminator q(z|τ)𝑞conditional𝑧𝜏q(z|\tau)italic\_q ( italic\_z | italic\_τ ), the above methods assume skill spaces that are much smaller than the state space (see Table [3](#A1.T3 "Table 3 ‣ Appendix A Competence-based Exploration Algorithms ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery")), and thus behavioral diversity may not be guaranteed. In contrast, the decomposition I(τ;z)=ℋ(τ)−ℋ(τ|z)𝐼𝜏𝑧ℋ𝜏ℋconditional𝜏𝑧I(\tau;z)=\mathcal{H}(\tau)-\mathcal{H}(\tau|z)italic\_I ( italic\_τ ; italic\_z ) = caligraphic\_H ( italic\_τ ) - caligraphic\_H ( italic\_τ | italic\_z ) ensures diverse behaviors through the entropy term ℋ(τ)ℋ𝜏\mathcal{H}(\tau)caligraphic\_H ( italic\_τ ). Methods that utilize this decomposition include Liu & Abbeel ([2021b](#bib.bib22)); Sharma et al. ([2020](#bib.bib35)). ![Refer to caption](/html/2202.00161/assets/x4.png) Figure 4: To empirically demonstrate issues inherent to competence-based exploration methods, we run DIAYN (Eysenbach et al., [2019](#bib.bib10)) and compare it to ICM (Pathak et al., [2017](#bib.bib28)) and a Fixed baseline where the agent receives an intrinsic reward of 1.0 for each timestep and no extrinsic reward on both OpenAI Gym (episode resets when agent loses balance) and DeepMind Control (DMC) (episode is fixed for 1k steps) Hopper environments. Since Gym and DMC rewards are on different scales, we normalize rewards based on the maximum reward achieved by any algorithm ( 1k for Gym,  3 for DMC). While DIAYN is able to achieve higher extrinsic rewards than ICM on Gym, the Fixed intrinsic reward baseline performs best. However, on DMC the Fixed and DIAYN agents achieve near-zero reward while ICM does not. This is consistent with findings of prior work that DIAYN is able to learn diverse behaviors in Gym (Eysenbach et al., [2019](#bib.bib10)) as well as the observation that DIAYN performs poorly on DMC environments (Laskin et al., [2021](#bib.bib18)) Why it is important to utilize high-dimensional skills: Once a policy is capable of generating diverse behaviors, it is important that the discriminator can distill these behaviors into distinct skills. If the set of behaviors outnumbers the set of skills, this will result in degenerate skills – when one skill maps to multiple different behaviors. It is therefore important that the discriminator can accommodate continuous skills of sufficiently high dimension. Empirically, the discriminators used in prior work utilize only low-dimensional continuous skill vectors. DIAYN (Eysenbach et al., [2019](#bib.bib10)) utilized 16 dimensional skills, DADS (Sharma et al., [2020](#bib.bib35)) utilizes continuous skills of dimension 2−5252-52 - 5, while APS (Liu & Abbeel, [2021b](#bib.bib22)), an algorithm that utilizes successor features (Barreto et al., [2016](#bib.bib4); Hansen et al., [2020](#bib.bib15)) for the discriminator, is only capable of learning continuous skills with dimension 10101010. We show how small skill spaces can lead to ineffective exploration in a simple gridworld setting in Appendix [H](#A8 "Appendix H Toy Example to Illustrate the Need for Larger Skill Spaces ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") and evidence that skill dimension affects performance in Fig. [6](#S4.F6 "Figure 6 ‣ 4.1 Contrastive Intrinsic Control ‣ 4 Method ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"). On the importance of benchmarks for evaluation: While prior competence-based approaches such as DIAYN (Eysenbach et al., [2019](#bib.bib10)) were evaluated on OpenAI Gym (Brockman et al., [2016](#bib.bib6)), Gym environment episodes terminate when the agent loses balance thereby leaking some aspects of extrinsic signal to the exploration agent. On the other hand, DMC episodes have fixed length. We show in Fig [4](#S3.F4 "Figure 4 ‣ 3 Motivation ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") that this small difference in environments results in large performance differences. Specifically, we find that DIAYN is able to learn diverse skills in Gym but not in DMC, which is consistent with both observations from DIAYN and URLB papers. Due to fixed episode lengths, DMC tasks are harder for reward-free exploration since agents must learn to balance without supervision. 4 Method --------- ![Refer to caption](/html/2202.00161/assets/x5.png) Figure 5: We report the aggregate statistics using stratified bootstrap intervals (Agarwal et al., [2021](#bib.bib2)) for 12 downstream tasks on URLB with 10 seeds, so each statistic for each algorithm has 120 seeds in total. We find that overall, CIC achieves leading performance on URLB in terms of the IQM, mean, and OG statistics. As recommended by Agarwal et al. ([2021](#bib.bib2)), we use the IQM as our primary performance measure. In terms of IQM, CIC improves upon the next best skill discovery algorithm (APS) by 79%percent7979\%79 % and the next best algorithm overall (ProtoRL) by 18%percent1818\%18 %. ### 4.1 Contrastive Intrinsic Control From Section [3](#S3 "3 Motivation ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") we are motivated to find a lower bound for I(τ;z)𝐼𝜏𝑧I(\tau;z)italic\_I ( italic\_τ ; italic\_z ) with a discriminator that is capable of supporting high-dimensional continuous skills333In high-dimensional state-action spaces the number of distinct behaviors can be quite large.. Additionally, we wish to increase the diversity of behaviors so that the discriminator can continue learning new skills throughout training. To improve the discriminator, we propose to utilize noise contrastive estimation (NCE) (Gutmann & Hyvärinen, [2010](#bib.bib13)) between state-transitions and latent skills as a lower bound for I(τ;z)𝐼𝜏𝑧I(\tau;z)italic\_I ( italic\_τ ; italic\_z ).444Note that τ𝜏\tauitalic\_τ is not a trajectory but some function of states. It has been shown previously that such estimators provide a valid lower bound for mutual information (Oord et al., [2018](#bib.bib24)). However, to the best of our knowledge, this is the first work to investigate contrastive representation learning for intrinsic control. Representation Learning: Specifically, we propose to learn embeddings with the following representation learning objective, which is effectively CPC between state-transitions and latent skills: | | | | | | --- | --- | --- | --- | | | I(τ;z)≥𝔼[f(τ,z)−log⁡1N∑j=1Nexp⁡(f(τj,z))].𝐼𝜏𝑧𝔼𝑓𝜏𝑧1𝑁superscriptsubscript𝑗1𝑁𝑓subscript𝜏𝑗𝑧\displaystyle I(\tau;z)\geq\operatorname\*{\mathbb{E}}[f(\tau,z)-\log\frac{1}{N}\sum\_{j=1}^{N}\exp(f(\tau\_{j},z))].italic\_I ( italic\_τ ; italic\_z ) ≥ blackboard\_E [ italic\_f ( italic\_τ , italic\_z ) - roman\_log divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_j = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT roman\_exp ( italic\_f ( italic\_τ start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT , italic\_z ) ) ] . | | (3) | where f(τ,z)𝑓𝜏𝑧f(\tau,z)italic\_f ( italic\_τ , italic\_z ) is any real valued function. For convenience, we define the discriminator log⁡q(τ|z)𝑞conditional𝜏𝑧\log q(\tau|z)roman\_log italic\_q ( italic\_τ | italic\_z ) as | | | | | | | --- | --- | --- | --- | --- | | | log⁡q(τ|z)≔f(τ,z)≔𝑞conditional𝜏𝑧𝑓𝜏𝑧\displaystyle\log q(\tau|z)\coloneqq f(\tau,z)roman\_log italic\_q ( italic\_τ | italic\_z ) ≔ italic\_f ( italic\_τ , italic\_z ) | −log⁡1N∑j=1Nexp⁡(f(τj,z)).1𝑁superscriptsubscript𝑗1𝑁𝑓subscript𝜏𝑗𝑧\displaystyle-\log\frac{1}{N}\sum\_{j=1}^{N}\exp(f(\tau\_{j},z)).- roman\_log divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_j = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT roman\_exp ( italic\_f ( italic\_τ start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT , italic\_z ) ) . | | (4) | For our practical algorithm, we parameterize this function as f(τ,z)=gψ1(τ)⊤gψ2(z)/‖gψ1(τ)‖‖gψ2(z)‖T𝑓𝜏𝑧subscript𝑔subscript𝜓1superscript𝜏topsubscript𝑔subscript𝜓2𝑧normsubscript𝑔subscript𝜓1𝜏normsubscript𝑔subscript𝜓2𝑧𝑇f(\tau,z)=g\_{\psi\_{1}}(\tau)^{\top}g\_{\psi\_{2}}(z)/{\|g\_{\psi\_{1}}(\tau)\|\|g\_{\psi\_{2}}(z)\|T}italic\_f ( italic\_τ , italic\_z ) = italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ ) start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z ) / ∥ italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ ) ∥ ∥ italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z ) ∥ italic\_T where τ=(s,s′)𝜏𝑠superscript𝑠′\tau=(s,s^{\prime})italic\_τ = ( italic\_s , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) is a transition tuple, gψksubscript𝑔subscript𝜓𝑘g\_{\psi\_{k}}italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT are neural encoders, and T𝑇Titalic\_T is a temperature parameter. This inner product is similar to the one used in SimCLR (Chen et al., [2020](#bib.bib9)). The representation learning loss backpropagates gradients from the NCE loss which maximizes similarity between state-transitions and corresponding skills. | | | | | | --- | --- | --- | --- | | | FNCE(τ)subscript𝐹𝑁𝐶𝐸𝜏\displaystyle F\_{NCE}(\tau)italic\_F start\_POSTSUBSCRIPT italic\_N italic\_C italic\_E end\_POSTSUBSCRIPT ( italic\_τ ) | =gψ1(τi)⊤gψ2(zi)‖gψ1(τi)‖‖gψ2(zi)‖Tabsentsubscript𝑔subscript𝜓1superscriptsubscript𝜏𝑖topsubscript𝑔subscript𝜓2subscript𝑧𝑖normsubscript𝑔subscript𝜓1subscript𝜏𝑖normsubscript𝑔subscript𝜓2subscript𝑧𝑖𝑇\displaystyle=\frac{g\_{\psi\_{1}}(\tau\_{i})^{\top}g\_{\psi\_{2}}(z\_{i})}{\|g\_{\psi\_{1}}(\tau\_{i})\|\|g\_{\psi\_{2}}(z\_{i})\|T}= divide start\_ARG italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_ARG start\_ARG ∥ italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ∥ ∥ italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ∥ italic\_T end\_ARG | | | | −\displaystyle-- | log⁡1N∑j=1Nexp⁡(gψ1(τj)⊤gψ2(zi)‖gψ1(τj)‖‖gψ2(zi)‖T)1𝑁superscriptsubscript𝑗1𝑁subscript𝑔subscript𝜓1superscriptsubscript𝜏𝑗topsubscript𝑔subscript𝜓2subscript𝑧𝑖normsubscript𝑔subscript𝜓1subscript𝜏𝑗normsubscript𝑔subscript𝜓2subscript𝑧𝑖𝑇\displaystyle\log\frac{1}{N}\sum\_{j=1}^{N}\exp\left(\frac{g\_{\psi\_{1}}(\tau\_{j})^{\top}g\_{\psi\_{2}}(z\_{i})}{\|g\_{\psi\_{1}}(\tau\_{j})\|\|g\_{\psi\_{2}}(z\_{i})\|T}\right)roman\_log divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_j = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT roman\_exp ( divide start\_ARG italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_ARG start\_ARG ∥ italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) ∥ ∥ italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ∥ italic\_T end\_ARG ) | | (5) | We provide pseudocode for the CIC representation learning loss below: [⬇](data:text/plain;base64,IiIiClB5VG9yY2gtbGlrZSBwc2V1ZG9jb2RlIGZvciB0aGUgQ0lDIGxvc3MKIiIiCgpkZWYgY2ljX2xvc3Mocywgc19uZXh0LCB6LCB0ZW1wKToKICAgICIiIgogICAgLSBzdGF0ZXM6IHMsIHNfbmV4dCAoQiwgRCkKICAgIC0gc2tpbGxzOiB6IChCLCBEKQogICAgIiIiCgogICAgdGF1ID0gY29uY2F0KHMsIHNfbmV4dCwgZGltPTEpCgogICAgcXVlcnkgPSBxdWVyeV9uZXQoeikKICAgIGtleSA9IGtleV9uZXQodGF1KQoKICAgIHF1ZXJ5ID0gbm9ybWFsaXplKHF1ZXJ5LCBkaW09MSkKICAgIGtleSA9IG5vcm1hbGl6ZShrZXksIGRpbT0xKQoKICAgICIiIgogICAgcG9zaXRpdmVzIGFyZSBvbiBkaWFnb25hbAogICAgbmVnYXRpdmVzIGFyZSBvZmYgZGlhZ29uYWwKICAgICIiIgoKICAgIGxvZ2l0cyA9IG1hdG11bChxdWVyeSwga2V5LlQpIC8gdGVtcAogICAgbGFiZWxzID0gYXJhbmdlKGxvZ2l0cy5zaGFwZVswXSkKCiAgICBsb3NzID0gY3Jvc3NfZW50cm9weShsb2dpdHMsIGxhYmVscykKCiAgICByZXR1cm4gbG9zcwoK) 1""" 2PyTorch-like pseudocode for the CIC loss 3""" 4 5def cic\_loss(s, s\_next, z, temp): 6    """ 7    - states: s, s\_next (B, D) 8    - skills: z (B, D) 9    """ 10 11    tau = concat(s, s\_next, dim=1) 12 13    query = query\_net(z) 14    key = key\_net(tau) 15 16    query = normalize(query, dim=1) 17    key = normalize(key, dim=1) 18 19    """ 20    positives are on diagonal 21    negatives are off diagonal 22    """ 23 24    logits = matmul(query, key.T) / temp 25    labels = arange(logits.shape[0]) 26 27    loss = cross\_entropy(logits, labels) 28 29    return loss Listing 1: Pseudocode for the CIC loss Intrinsic reward: Although we have a representation learning objective, we still need to specify the intrinsic reward for the algorithm for which there can be multiple choices. Prior works consider specifying an intrinsic reward that is proportional to state-transition entropy (Liu & Abbeel, [2021a](#bib.bib21)), the discriminator (Eysenbach et al., [2019](#bib.bib10)), a similarity score between states and skills (Warde-Farley et al., [2018](#bib.bib44)), and the uncertainty of the discriminator (Strouse et al., [2021](#bib.bib40)). We investigate each of these choices and find that an intrinsic reward that maximizes state-transition entropy coupled with representation learning via the CPC loss defined in Sec. [4.1](#S4.SS1 "4.1 Contrastive Intrinsic Control ‣ 4 Method ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") is the simplest variant that also performs well (see Table [1](#S7.T1 "Table 1 ‣ 7 Results ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery")). Figure 6: Design choices for pre-training and adapting with skills have significant impact on performance. In (a) and (b) the agent’s zero-shot performance is evaluated while sampling skills randomly while in (c) and (d) the agent’s performance is evaluated after finetuning the skills vector. (a) we show empirically that the projecting skill vectors after sampling them from noise significantly improves the agent’s performance. (b) The skill dimension is a crucial hyperparameter and, unlike prior methods, CIC scales to large skill vectors achieving optimal performance at 64 dimensional skills. (c) We test several adapation strategies and find that a simple grid search performs best given the small 4k step adaptation budget, (d) Choosing the right skill vector has substantial impact on performance and grid sweeping allows the agent to select the appropriate skill. For the intrinsic reward, we use a particle estimate (Singh et al., [2003](#bib.bib38); Beirlant, [1997](#bib.bib5)) as in Liu & Abbeel ([2021a](#bib.bib21)) of the state-transition entropy. Similar to  Liu & Abbeel ([2021a](#bib.bib21)); Yarats et al. ([2021b](#bib.bib46)) we estimate the entropy up to a proportionality constant, because we want the agent to maximize entropy rather than estimate its exact value. The APT particle entropy estimate is proportional to the distance between the current visited state transition and previously seen neighboring points. | | | | | | --- | --- | --- | --- | | | ℋparticle(τ)∝1Nk∑hi⋆∈NkNklog⁡‖hi−hi⋆‖proportional-tosubscriptℋ𝑝𝑎𝑟𝑡𝑖𝑐𝑙𝑒𝜏1subscript𝑁𝑘superscriptsubscriptsuperscriptsubscriptℎ𝑖⋆subscript𝑁𝑘subscript𝑁𝑘normsubscriptℎ𝑖superscriptsubscriptℎ𝑖⋆\displaystyle\mathcal{H}\_{particle}(\tau)\propto\frac{1}{N\_{k}}\sum\_{h\_{i}^{\star}\in N\_{k}}^{N\_{k}}\log\|h\_{i}-h\_{i}^{\star}\|caligraphic\_H start\_POSTSUBSCRIPT italic\_p italic\_a italic\_r italic\_t italic\_i italic\_c italic\_l italic\_e end\_POSTSUBSCRIPT ( italic\_τ ) ∝ divide start\_ARG 1 end\_ARG start\_ARG italic\_N start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_ARG ∑ start\_POSTSUBSCRIPT italic\_h start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT ∈ italic\_N start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT roman\_log ∥ italic\_h start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT - italic\_h start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT ∥ | | (6) | where hisubscriptℎ𝑖h\_{i}italic\_h start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT is an embedding of τisubscript𝜏𝑖\tau\_{i}italic\_τ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT shown in Fig. [3](#S2.F3 "Figure 3 ‣ 2 Background and Notation ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"), hi\*superscriptsubscriptℎ𝑖h\_{i}^{\*}italic\_h start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT is a kNN embedding, Nksubscript𝑁𝑘N\_{k}italic\_N start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT is the number of kNNs, and N−1𝑁1N-1italic\_N - 1 is the number of negatives. The total number of elements in the summation is N𝑁Nitalic\_N because it includes one positive. Explore and Exploit: With these design choices the two components of the CIC algorithm can be interpreted as exploration with intrinsic rewards and exploitation using representation learning to distill behaviors into skills. The marginal entropy maximizes the diversity of state-transition embeddings while the contrastive discriminator log⁡q(τ|z)𝑞conditional𝜏𝑧\log q(\tau|z)roman\_log italic\_q ( italic\_τ | italic\_z ) encourages exploitation by ensuring that skills z𝑧zitalic\_z lead to predictable states τ𝜏\tauitalic\_τ. Together the two terms incentivize the discovery of diverse yet predictable behaviors from the RL agent. While CIC shares a similar intrinsic reward structure to APT (Liu & Abbeel, [2021a](#bib.bib21)), we show that the new representation learning loss from the CIC estimator results in substantial performance gains in Sec [6](#S6 "6 Experimental Setup ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"). 5 Practical Implementation --------------------------- Our practical implementation consists of two main components: the RL optimization algorithm and the CIC architecture. For fairness and clarity of comparison, we use the same RL optimization algorithm for our method and all baselines in this work. Since the baselines implemented in URLB (Laskin et al., [2021](#bib.bib18)) use a DDPG555It was recently was shown that a DDPG achieves state-of-the-art performance (Yarats et al., [2021a](#bib.bib45)) on DeepMind Control (Tassa et al., [2018](#bib.bib42)) and is more stable than SAC (Haarnoja et al., [2018](#bib.bib14)) on this benchmark. (Lillicrap et al., [2016](#bib.bib20)) as their backbone, we opt for the same DDPG architecture to optimize our method as well (see Appendix [B](#A2 "Appendix B Deep Deterministic Policy Gradient (DDPG) ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery")). CIC Architecture: We use a particle estimator as in Liu & Abbeel ([2021a](#bib.bib21)) to estimate ℋ(τ)ℋ𝜏\mathcal{H}(\tau)caligraphic\_H ( italic\_τ ). To compute the variational density q(τ|z)𝑞conditional𝜏𝑧q(\tau|z)italic\_q ( italic\_τ | italic\_z ), we first sample skills from uniform noise z∼p(z)similar-to𝑧𝑝𝑧z\sim p(z)italic\_z ∼ italic\_p ( italic\_z ) where p(z)𝑝𝑧p(z)italic\_p ( italic\_z ) is the uniform distribution over the [0,1]01[0,1][ 0 , 1 ] interval. We then use two MLP encoders to embed gψ1(τ)subscript𝑔subscript𝜓1𝜏g\_{\psi\_{1}}(\tau)italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ ) and gψ2(z)subscript𝑔subscript𝜓2𝑧g\_{\psi\_{2}}(z)italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z ), and optimize the parameters ψ1,ψ2subscript𝜓1subscript𝜓2\psi\_{1},\psi\_{2}italic\_ψ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_ψ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT with the CPC loss similar to SimCLR (Chen et al., [2020](#bib.bib9)) since f(τ,z)=gψ1(τ)Tgψ2(z)𝑓𝜏𝑧subscript𝑔subscript𝜓1superscript𝜏𝑇subscript𝑔subscript𝜓2𝑧f(\tau,z)=g\_{\psi\_{1}}(\tau)^{T}g\_{\psi\_{2}}(z)italic\_f ( italic\_τ , italic\_z ) = italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ ) start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_g start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z ). We fix the hyperparameters across all domains and downstream tasks. We refer the reader to the Appendices [E](#A5 "Appendix E Full CIC Algorithm ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") and [F](#A6 "Appendix F Hyper-parameters ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") for the full algorithm and a full list of hyperparameters. Adapting to downstream tasks: To adapt to downstream tasks we follow the same procedure for competence-based method adaptation as in URLB (Laskin et al., [2021](#bib.bib18)). During the first 4k environment interactions we populate the DDPG replay buffer with samples and use the extrinsic rewards collected during this period to finetune the skill vector z𝑧zitalic\_z. While it’s common to finetune skills with Cross Entropy Adaptation (CMA), given our limited budget of 4k samples (only 4 episodes) we find that a simple grid sweep of skills over the interval [0,1]01[0,1][ 0 , 1 ] produces the best results (see Fig. [6](#S4.F6 "Figure 6 ‣ 4.1 Contrastive Intrinsic Control ‣ 4 Method ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery")). After this, we fix the skill z𝑧zitalic\_z and finetune the DDPG actor-critic parameters against the extrinsic reward for the remaining 96k steps. Note that competence-based methods in URLB also finetune their skills during the first 4k finetuning steps ensuring a fair comparison between the methods. The full adaptation procedure is detailed in Appendix [E](#A5 "Appendix E Full CIC Algorithm ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"). 6 Experimental Setup --------------------- Environments We evaluate our approach on tasks from URLB, which consists of twelve downstream tasks across three challenging continuous control domains for exploration algorithms – walker, quadruped, and Jaco arm. Walker requires a biped constrained to a 2D vertical plane to perform locomotion tasks while balancing. Quadruped is more challenging due to a higher-dimensional state-action space and requires a quadruped to in a 3D environment to learn locomotion skills. Jaco arm is a 6-DOF robotic arm with a three-finger gripper to move and manipulate objects without locking. All three environments are challenging in the absence of an extrinsic reward. Baselines: We compare CIC to baselines across all three exploration categories. Knowledge-based basedlines include ICM (Pathak et al., [2017](#bib.bib28)), Disagreement (Pathak et al., [2019](#bib.bib29)), and RND (Burda et al., [2019](#bib.bib7)). Data-based baselines incude APT (Liu & Abbeel, [2021a](#bib.bib21)) and ProtoRL (Yarats et al., [2021b](#bib.bib46)). Competence-based baselines include DIAYN (Eysenbach et al., [2019](#bib.bib10)), SMM (Lee et al., [2019](#bib.bib19)), and APS (Liu & Abbeel, [2021b](#bib.bib22)). The closest baselines to CIC are APT, which is similar to CIC but without state-skill CPC representation learning (no discriminator), and APS which uses the same decomposition of mutual information as CIC and also uses a particle entropy estimate for ℋ(τ)ℋ𝜏\mathcal{H}(\tau)caligraphic\_H ( italic\_τ ). The main difference between APS and CIC is that APS uses successor features while CIC uses a contrastive estimator for the discriminator. For further details regarding baselines we refer the reader to Appendix [C](#A3 "Appendix C Baselines ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"). Evaluation: We follow an identical evaluation to the 2M pre-training setup in URLB. First, we pre-train each RL agent with the intrinsic rewards for 2M steps. Then, we finetune each agent to the downstream task with extrinsic rewards for 100k steps. All baselines were run for 10 seeds per downstream task for each algorithm using the code and hyperparameters provided by URLB (Laskin et al., [2021](#bib.bib18)). Built on top of URLB, CIC is also run for 10 seeds per task. A total of 1080=9 algorithms×12 tasks×10 seeds10809 algorithms12 tasks10 seeds1080=9\text{ algorithms}\times 12\text{ tasks}\times 10\text{ seeds}1080 = 9 algorithms × 12 tasks × 10 seeds experiments were run for the main results. Importantly, all baselines and CIC use a DDPG agent as their backbone. To ensure that our evaluation statistics are unbiased we use stratified bootstrap confidence intervals to report aggregate statistics across M𝑀Mitalic\_M runs with N𝑁Nitalic\_N seeds as described in Rliable (Agarwal et al., [2021](#bib.bib2)) to report statistics for our main results in Fig. [5](#S4.F5 "Figure 5 ‣ 4 Method ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"). Our primary success metric is the interquartile mean (IQM) and the Optimality Gap (OG). IQM discards the top and bottom 25%percent\%% of runs and then computes the mean. It is less susceptible to outliers than the mean and was shown to be the most reliable statistic for reporting results for RL experiments in Agarwal et al. ([2021](#bib.bib2)). OG measures how far a policy is from optimal (expert) performance. To define expert performance we use the convention in URLB, which is the score achieved by a randomly initialized DDPG after 2M steps of finetuning (20x more steps than our finetuning budget). 7 Results ---------- We investigate empirical answers to the following research questions: (Q1) How does CIC adaptation efficiency compare to prior competence-based algorithms and exploration algorithms more broadly? (Q2) Which intrinsic reward instantiation of CIC performs best? (Q3) How do the two terms in the CIC objective affect algorithm performance? (Q4) How does skill selection affect the quality of the pre-trained policy? (Q5) Which architecture details matter most? Adaptation efficiency of CIC and exploration baslines: Expert normalized scores of CIC and exploration algorithms from URLB are shown in Fig. [3](#S4.E3 "3 ‣ 4.1 Contrastive Intrinsic Control ‣ 4 Method ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"). We find that CIC substantially outperforms prior competence-based algorithms (DIAYN, SMM, APS) achieving a 79%percent7979\%79 % higher IQM than the next best competence-based method (APS) and, more broadly, achieving a 18%percent1818\%18 % higher IQM than the next best overall baseline (ProtoRL). In further ablations, we find that the contributing factors to CIC’s performance are its ability to accommodate substantially larger continuous skill spaces than prior competence-based methods. Intrinsic reward specification: The intrinsic reward for competence-based algorithms can be instantiated in many different ways. Here, we analyze intrinsic reward for CIC with the form rint=H(τ)+D(τ,z)subscript𝑟𝑖𝑛𝑡𝐻𝜏𝐷𝜏𝑧r\_{int}=H(\tau)+D(\tau,z)italic\_r start\_POSTSUBSCRIPT italic\_i italic\_n italic\_t end\_POSTSUBSCRIPT = italic\_H ( italic\_τ ) + italic\_D ( italic\_τ , italic\_z ), where D𝐷Ditalic\_D is some function of (τ,z)𝜏𝑧(\tau,z)( italic\_τ , italic\_z ). Prior works, select D𝐷Ditalic\_D to be (i) the discriminator (Liu & Abbeel, [2021b](#bib.bib22)), (ii) a cosine similarity between embeddings (Warde-Farley et al., [2018](#bib.bib44)), (iii) uncertainty of the discriminator (Strouse et al., [2021](#bib.bib40)), and (iv) just the entropy D(τ,z)=0𝐷𝜏𝑧0D(\tau,z)=0italic\_D ( italic\_τ , italic\_z ) = 0 (Liu & Abbeel, [2021a](#bib.bib21)). We run CIC with each of these variants on the walker and quadruped tasks and measure the final mean performance across the downstream tasks (see Tab. [1](#S7.T1 "Table 1 ‣ 7 Results ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery")). The results show that the entropy-only intrinsic reward performs best. For this reason the intrinsic reward and representation learning aspects of CIC are decoupled as shown in Eq. LABEL:eq:rewint. We hypothesize that the reason why a simple entropy-only intrinsic reward works well is that state-skill CPC representation learning clusters similar behaviors together. Since redundant behaviors are clustered, maximizing the entropy of state-transition embeddings produces increasingly diverse behaviors. | | disc. | similarity | uncertainty | entropy | | --- | --- | --- | --- | --- | | walker | 0.80 | 0.79 | 0.78 | 0.82 | | quad. | 0.44 | 0.63 | 0.75 | 0.74 | | mean | 0.62 | 0.71 | 0.77 | 0.78 | Table 1: Analyzing four different intrinsic reward specifications for CIC, we find that entropy-based intrinsic reward performs best, suggesting that the CIC discriminator is primarily useful for representation learning. These are normalized scores averaged over 3 seeds across 8 downstream tasks (24 runs per data point). The importance of representation learning: To what extent does representation learning with state-skill CPC (see Eq. [3](#S4.E3 "3 ‣ 4.1 Contrastive Intrinsic Control ‣ 4 Method ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery")) affect the agent’s exploration capability? To answer this question we train the CIC agent with the entropy intrinsic reward with and without the representation learning auxiliary loss for 2M steps. The zero-shot reward plotted in Fig. [7](#S7.F7 "Figure 7 ‣ 7 Results ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") indicates that without representation learning the policy collapses. With representation learning, the agent is able to discover diverse skills evidenced by the non-zero reward. This result suggests that state-skill CPC representation learning is a critical part of CIC. Qualitative analysis of CIC behaviors: Qualitatively, we find that CIC is able to learn locomotion behaviors in DMC without extrinsic information such as early termination as in OpenAI Gym. While most skills are higher entropy and thus more chaotic, we show in Fig [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") that structured behaviors can be isolated by fixing a particular skill vector. For example, in the walker and quadruped domains - balancing, walking, and flipping skills can be isolated. For more qualitative investigations we refer the reader to Appendix [I](#A9 "Appendix I Qualitative Analysis of Skills ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"). ![Refer to caption](/html/2202.00161/assets/x6.png) Figure 7: Mean zero-shot extrinsic rewards for Quadruped stand over 3 seeds with and without state-skill representation learning. Without representation learning, the algorithm collapses. Similarly, with CIC representation learning but no entropy term (in which case we use the discriminator as the intrinsic reward) the policy also collapses. Note that there is no finetuning happening here. We’re showing the task-specific extrinsic reward during reward-free pre-training as a way to sense-check exploration policy. Skill architecture and adaptation ablations: We find that projecting the skill to a latent space before inputting it as the key for the contrastive loss is an important design decision (see Fig. [6](#S4.F6 "Figure 6 ‣ 4.1 Contrastive Intrinsic Control ‣ 4 Method ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery")a), most likely because this reduces the diversity of the skill vector making the discriminator task simpler. We also find empirically that the skill dimension is an important hyperparameter and that larger skills results in better zero-shot performance (see Fig. [6](#S4.F6 "Figure 6 ‣ 4.1 Contrastive Intrinsic Control ‣ 4 Method ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery")b), which empirically supports the hypothesis posed in Section [3](#S3 "3 Motivation ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") and Appendix [H](#A8 "Appendix H Toy Example to Illustrate the Need for Larger Skill Spaces ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") that larger skill spaces are important for internalizing diverse behaviors. Interestingly, CIC zero-shot performance is poor in lower skill dimensions (e.g. dim(z)<10dim𝑧10\text{dim}(z)<10dim ( italic\_z ) < 10), suggesting that when dim(z)dim𝑧\text{dim}(z)dim ( italic\_z ) is small CIC performs no better than prior competence-based methods such as DIAYN, and that scaling to larger skills enables CIC to pre-train effectively. To measure the effect of skill finetuning described in Section [5](#S5 "5 Practical Implementation ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery"), we sweep mean skill values along the interval of the uniform prior [0,1]01[0,1][ 0 , 1 ] with a budget of 4k total environment interactions and read out the performance on the downstream task. By sweeping, we mean simply iterating over the interval [0,1]01[0,1][ 0 , 1 ] with fixed step size (e.g. v=0,0.1,…,0.9,1)v=0,0.1,\dots,0.9,1)italic\_v = 0 , 0.1 , … , 0.9 , 1 ) and setting zi=vsubscript𝑧𝑖𝑣z\_{i}=vitalic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_v for all i𝑖iitalic\_i. This is not an optimal skill sampling strategy but works well due to the extremely limited number of samples for skill selection. We evaluate this ablation on the Quadruped Stand and Run downstream tasks. The results shown in Fig. [6](#S4.F6 "Figure 6 ‣ 4.1 Contrastive Intrinsic Control ‣ 4 Method ‣ CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery") indicate that skill selection can substantially affect zero-shot downstream task performance. 8 Conclusion ------------- We have introduced a new competence-based algorithm – Contrastive Intrinsic Control (CIC) – which enables more effective exploration than prior unsupervised skill discovery algorithms by explicitly encouraging diverse behavior while distilling predictable behaviors into skills with a contrastive discriminator. We showed that CIC is the first competence-based approach to achieve leading performance on URLB. We hope that this encourages further research in developing RL agents capable of generalization. 9 Acknowledgements ------------------- We would like to thank Ademi Adeniji, Xinyang Geng, Fangchen Liu for helpful discussions. We would also like to thank Phil Bachman for useful feedback. This work was partially supported by Berkeley DeepDrive, NSF AI4OPT AI Institute for Advances in Optimization under NSF 2112533, and the Office of Naval Research grant N00014-21-1-2769.
7e040b78-19a7-487c-814b-5ad8f5692818
trentmkelly/LessWrong-43k
LessWrong
Finland Museum Tour 1/??: Tampere Art Museum I haven’t really been to museums as an adult; not because I’d have been particularly Anti-Museum, but just because museums never happened to become a Thing That I Do. I vaguely recall having been to a few museums with my parents when I was little, an occasional Japan exhibition as a teen when Japan was a Thing, and a few visits to various museums with school. I think my overall recollection of those visits afterwards could be summarized as being around 5.5 on the BoardGameGeek rating scale grade of “5/10: Slightly boring, take it or leave it” and “6/10: Ok – will play if in the mood”. (The BGG rating scale is my favorite of the ones that I’ve seen, but I digress.) So I’m not sure, but it’s at least possible that between becoming an adult and yesterday, I didn’t visit a single museum. For the last year or so however, I’ve had a definite feeling of being stuck in a rut, life-wise. Up until summer last year, I used to have a lot of anxiety; I’m still not totally free of it, but I’ve reduced the amount of it enough that escaping from it is no longer my main driving motivation, the way that it used to be. Meaning that I’m more free to focus on things that I actually enjoy. But once you have spent most of your adult life feeling a desperate need to escape from a constant level of background anxiety, anxiety which was preventing you from doing anything slow-paced as that would have been insufficient to drown out the suffering… then it’s hard to know *what* you really enjoy anymore. Because you haven’t really been looking for enjoyable things, you have been looking for things that would make the pain go away. What I was left with, even after getting rid of most of the anxiety, was some level of anhedonia – a difficulty deriving any pleasure from something. And most of my old routines were built around doing things that were mainly palliatives for anxiety, rather than being particularly enjoyable. Then one day, I happened to see a news article saying something about how
c19ca966-fe06-4751-a4e4-39b2b2a738b3
trentmkelly/LessWrong-43k
LessWrong
Overcoming the Curse of Knowledge [crossposted at Measure of Doubt] What is the Curse of Knowledge, and how does it apply to science education, persuasion, and communication? No, it's not a reference to the Garden of Eden story. I'm referring to a particular psychological phenomenon that can make our messages backfire if we're not careful. Communication isn't a solo activity; it involves both you and the audience. Writing a diary entry is a great way to sort out thoughts, but if you want to be informative and persuasive to others, you need to figure out what they'll understand and be persuaded by. A common habit is to use ourselves as a mental model - assuming that everyone else will laugh at what we find funny, agree with what we find convincing, and interpret words the way we use them. The model works to an extent - especially with people similar to us - but other times our efforts fall flat. You can present the best argument you've ever heard, only to have it fall on dumb - sorry, deaf - ears. That's not necessarily your fault - maybe they're just dense! Maybe the argument is brilliant! But if we want to communicate successfully, pointing fingers and assigning blame is irrelevant. What matters is getting our point across, and we can't do it if we're stuck in our head, unable to see things from our audience's perspective. We need to figure out what words will work. Unfortunately, that's where the Curse of Knowledge comes in. In 1990, Elizabeth Newton did a fascinating psychology experiment: She paired participants into teams of two: one tapper and one listener. The tappers picked one of 25 well-known songs and would tap out the rhythm on a table. Their partner - the designated listener - was asked to guess the song. How do you think they did? Not well. Of the 120 songs tapped out on the table, the listeners only guessed 3 of them correctly - a measly 2.5 percent. But get this: before the listeners gave their answer, the tappers were asked to predict how likely their partner was to get it ri
ac000173-d3f4-4bd4-86dc-aea80245180b
trentmkelly/LessWrong-43k
LessWrong
AI alignment researchers may have a comparative advantage in reducing s-risks I believe AI alignment researchers might be uniquely well-positioned to make a difference to s-risks. In particular, I think this of alignment researchers with a keen interest in “macrostrategy.” By that, I mean ones who habitually engage in big-picture thinking related to the most pressing problems (like AI alignment and strategy), form mental models of how the future might unfold, and think through their work’s paths to impact. (There’s also a researcher profile where a person specializes in a specific problem area so much that they no longer have much interest in interdisciplinary work and issues of strategy – those researchers aren’t the target audience of this post.) Of course, having the motivation to work on a specific topic is a significant component of having a comparative advantage (or lack thereof). Whether AI alignment researchers find themselves motivated to invest a portion of their time/attention into s-risk reduction will depend on several factors, including: * Their opportunity costs * Whether they think the work is sufficiently tractable * Whether s-risks matter enough (compared to other practical priorities) given their normative views * Whether they agree that they may have a community-wide comparative advantage Further below, I will say a few more things about these bullet points. In short, I believe that, for people with the right set of skills, reducing AI-related s-risks will become sufficiently tractable (if it isn’t already) once we know more about what transformative AI will look like. (The rest depends on individual choices about prioritization.) Summary * Suffering risks (or “s-risks”) are risks of events that bring about suffering in cosmically significant amounts. (“Significant” relative to our current expectation over future suffering.) * (This post will focus on “directly AI-related s-risks,” as opposed to things like “future humans don't exhibit sufficient concern for other sentient minds.”) * Early efforts to research s
f57400d2-ba60-4ad1-aedd-aead6eb15bfe
trentmkelly/LessWrong-43k
LessWrong
Blinded by Insight Insight is often dangerous to the intellect because we may be so captivated by what we have discovered that we take it too far or bundle it with other falsehoods. We can be so focused on the strength behind this nugget of wisdom, that we fail to realise other claims sneaking their way in there. Examples: * Postmodernism makes a few important insights - that we should be suspicious of grand narratives, that we should be very skeptical of claims that one particular model has all the answers, that society often distorts what counts as "reasonable" or "logical" or "scientific". However, they try to universalise this to the point where if they took their own arguments seriously, they would have to believe that throwing darts against a board is just as reliable as the top researchers running randomised controlled trials with large sample sizes. * Pragmatists are blinded by the idea that knowledge is an instrumental, rather than a terminal goal. This leads them to a incoherent definition of truth - that truth and usefulness are always the same thing. In many ways, it can be understood as a reaction against the idea that we should pursue knowledge for its own sake. We should always be especially suspicious of these reactive insights. * Someone will discover a new philosophy or intellectual movement, discover that it is way more persuasive or insightful on the topic then they are currently and then adopt it wholeheatedly, to the point of becoming an ideologue. * My last post, The Basic Object Model and Definition by Interface was so focused on explicating the idea that we often use the same word to cover ontologically different situations when both situations share a similar "interface", that I managed to give a definition of existence that didn't create a divide between existence and non-existence. * There have been times when I've been really proud of something that I've written, because I know that it is so much better that what I could have written before. It's po
7cab6038-0105-4986-895f-7aadfb0f42d6
trentmkelly/LessWrong-43k
LessWrong
Future of Moral Machines - New York Times [link] http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/
a1374b5d-ff01-40c8-89e7-25dc9a9b676c
StampyAI/alignment-research-dataset/arxiv
Arxiv
Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark 1 Introduction --------------- Recentadvancesincomputervision[[Krizhevsky,Sutskever,andHinton2012](#bib.bibx13)]andnaturallanguageprocessing[[Sutskever,Vinyals,andLe2014](#bib.bibx27)]canbeattributedtotheadventofdeeplearningandthepresenceofrobustbenchmarkstoquantitativelymeasureprogress,suchastheImageNetchallenge[[Russakovskyetal.2015](#bib.bibx22)].Inthelastfewyears,neuralnetwork-basedfunctionapproximationhasalsoprovensuccessfulinreinforcementlearning,withAIagentsnowabletoperformatsuperhumanlevelsingameslikeGo[[Silveretal.2016](#bib.bibx26)]andtheAtari[[Mnihetal.2015](#bib.bibx16)]suite.Onceagain,researchinDeepRLhasbeensteeredbytheestablishmentofbenchmarksliketheArcadeLearningEnvironment[[Bellemareetal.2013](#bib.bibx5)],alongwiththeOpenAIGyminterface[[Brockmanetal.2016](#bib.bibx8)],whichhasbeenwidelyadoptedbytheresearchcommunity.Thesebenchmarksareconspicuouslymissing2challengingcharacteristics:(a)abruptcontext-dependentswitchingofstrategyand(b)temporalsensitivity.Foragentstooperateintherealworld,theyneedtobeabletoswitchbehaviorsveryabruptly,whichnecessitates(i)learningtoidentifycriticalpointswherebehaviorneedstochange,and(ii)learningthedifferentbehaviorsrequiredineachcontext.Agentsalsoneedtohaveanunderstandingoftimeasanindependentvariable,alongwiththeabilitytoadapttheirbehavioraccordingly.Whilehavingnounderstandingoftimeassomethingthat’salwaystickingmightworkforsimulatedorstaticreal-worldenvironments,itisnotacceptableforreal-worlddynamicenvironmentswithmovingentitiesandwheredecisionsmighthavetobeadaptivelytakenveryquicklyorveryslowly,dependingonthecontext.Sinceexistingbenchmarksdonotfocusontheseproperties,reinforcementlearningresearchhasnottackledtheseproblemsyet.Inthispaper,weintroduceachallengingRLenvironmentbasedonSpaceFortress(SF)[[ManéandDonchin1989](#bib.bibx15)],anarcade-stylegamewhichwasdevelopedbypsychologistsinthe80stostudyhumanskillacquisition,andisstillusedquitefrequently[[Towne,Boot,andEricsson2016](#bib.bibx29), [DestefanoandGray2016](#bib.bibx10)].Theobjectiveofthegameistoflyashipanddestroyafortressbyfiringmissilesatit.Theshiphastorespectaminimumtimedifferencebetweensuccessiveshots,whilebuildingupthefortress’vulnerability,andoncethefortressbecomesvulnerable,destroyitwitharapiddoubleshot.AsaRLtestbed,SpaceFortresspossessesboththecharacteristicsdiscussedabove:context-dependentstrategychange(changeinrequiredfiringrateafterthefortressbecomesvulnerable)andtimesensitivity(firingraterequirementsindependentoftheagent’sdecisionspeedi.e.,theframerate).Italsohasasparserewardstructure,and,asweshow,isnotsolvedbyanystate-of-the-artRLalgorithmssuchasRainbow[[Hesseletal.2018](#bib.bibx12)],ProximalPolicyOptimization(PPO)[[Schulmanetal.2017](#bib.bibx25)]andAdvantageActor-Critic(A2C)[[Mnihetal.2016](#bib.bibx17)].Whilebeinganinterestingandrelevantchallengeforreinforcementlearning,therichbackgroundonhumanskillacquisitionresearchbasedonSpaceFortressalsomakesitanattractivetooltostudyhuman-AIcollaborationinadynamicenvironment,compareskillacquisitiontechniquesofhumansvsartificialagents,andworkonfew-shotlearningbyleveraginglessonsfromcognitivearchitectureslikeACT-R[[Anderson2009](#bib.bibx2)]whichhavepreviouslylearnedthegamewithextremelyhighsampleefficiency,albeitusinghandcraftedfeaturesandextensivedomainknowledge.Wemakethefollowingcontributions.First,wepresentanewRLtestbedthatrequirestheagenttoswitchstrategiesabruptlybasedoncontext,anddevelopaconceptualizationoftimeindependentofitsspeedofdecisionmaking,anddemonstrateempiricallythatperformanceonparwithhumansisbeyondthecapabilityofcurrentstate-of-the-artRLalgorithms,evenafterrelaxingtherewardsparsitythroughshaping[[Ng,Harada,andRussell1999](#bib.bibx18)].Weidentifytheaspectsofthegamewhichcanbevariedtocontrolbothtemporalandcontextsensitivity,allowingresearchoneitherinisolation.Finally,wedemonstratethatafterintroducingmodificationstoeaseidentificationofcriticalcontexts,thePPOalgorithmlearnstoplaythegamewellenoughtooutperformhumans,verifyingthatcontextinsensitivityistheprimarydriverbehindthepoorperformanceofRLalgorithms.WealsopresentrobusthumanbenchmarkresultsforSpaceFortress,allowingfutureresearcherstoplacenewexperimentalresultsincontext.Weopen-source111<https://github.com/agakshat/spacefortress>theOpenAIGymenvironmentforSpaceFortressaswellasallthecodeusedtorunourexperiments,topromoteresearchintemporalandcontext-sensitivereinforcementlearningalgorithms. 2 RelatedWork -------------- TheArcadeLearningEnvironment(ALE)[[Bellemareetal.2013](#bib.bibx5)]posesthechallengeofbuildingAIagentswithcompetencyacrossdozensofAtari2600games,likeSpaceInvaders,Asteroids,BowlingandEnduro.FollowingthedevelopmentofDeepQNetworks[[Mnihetal.2015](#bib.bibx16)],alotofresearchintheRLcommunityhasfocusedonimprovingperformanceinoneormoreofthegameswithimprovementslikemassiveparallelization,sampleefficiency[[Wangetal.2015](#bib.bibx32), [Schauletal.2015](#bib.bibx23)],betterexploration[[Fortunatoetal.2017](#bib.bibx11), [Plappertetal.2017](#bib.bibx21)],rewardsparsity[[Pathaketal.2017](#bib.bibx20), [Andrychowiczetal.2017](#bib.bibx3)]andlong-termstrategies[[Bacon,Harb,andPrecup2017](#bib.bibx4), [Kulkarnietal.2016](#bib.bibx14)].IncontinuouscontroltasksontheMuJoCotestbed[[Todorov,Erez,andTassa2012](#bib.bibx28)],on-policyactorcriticmethods[[Schulmanetal.2017](#bib.bibx25), [Mnihetal.2016](#bib.bibx17)]haveshownpromise.\citeauthorbellemare2017distributional \shortcitebellemare2017distributionalestimatedaprobabilitydistributionovertheQ-valueofastate(insteadofjustthemeanoftheQ-value),withgreatlyimprovedresults.Rainbow[[Hesseletal.2018](#bib.bibx12)]combinedalotoforthogonalimprovementsinDQNstoachievestateoftheartresults.However,weshowbelowthatthesealgorithmsfailtolearnanythingonSpaceFortress.GameslikeMs.PacmanandSeaquestintheALEhavepreviouslyrequiredsomecontextortemporalsensitivity,butthesecharacteristicscan’tbecontrolledorvaried,andformaminorpartoftheoverallgame.AsaRLtestbed,SpaceFortressreliesheavilyonbothcontextandtemporalsensitivity,asweshowinSection[4](#S4 "4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"),andbothcharacteristicscanbecontrolleddirectlytoenabletheirstudyinisolation.Therehasalsobeenafairamountofpriorworkonreinforcementlearningwithsparserewards.\citeauthorpathak2017curiosity \shortcitepathak2017curiosityusecuriosityasanintrinsicrewardsignaltoefficientlydirectexploration.Statevisitationcountshavealsobeeninvestigatedforexploration[[Bellemareetal.2016](#bib.bibx6)],and\citeauthorosband2016deep \shortciteosband2016deeptrainmultiplevaluefunctionsandmakeuseofbootstrappingandThompsonsamplingforexploration.Theseworksfocusonlearningwithsparserewardsthroughbetterexplorationofthestatespace,whichdoesnothelpwithSpaceFortresswhereexplorationisrequiredintimeandinlatentcontexts.\citeauthorzambrano2015continuous \shortcitezambrano2015continuoustrainedagentstodealwithactionsthattakeafiniteamountoftimethroughneuralreinforcementlearningingridworlds,whichstilldidnotrequireaconceptualizationoftimeindependentoftheinternalspeedofdecisionmaking,hencedifferingfromtheproposedwork.Finally,\citeauthorvan2017higher,van2017towards \shortcitevan2017higher,van2017towardspreviouslyusedA3ConasimplecontroltaskabstractedfromSpaceFortress,withnofortressdestructionrequired.Crucially,thistaskremovedtheinterestingcharacteristicsofSpaceFortress,namelycontextualandtemporalsensitivity,aswellasrewardsparsity.WereleaseanimplementationofthegameasanOpenAIGymenvironmenttopromoteresearch,conductanablationstudytoascertaintherolesofcontextandtemporalsensitivityandrewardsparsityinpoorperformance,andthenpresentresultsshowingexistingRLalgorithmsoutperforminghumansafterwecontrolfortheabovefactors. 3 TheSpaceFortressRLEnvironment -------------------------------- | | | | | --- | --- | --- | | (a) | (b) | (c) | Figure 1: GameScreensinSpaceFortress.Theshiphastoflybetweenthetwohexagons,whilethefortresscanonlychangeitsorientationatafixedposition.Thegamescoreisdisplayedatthetop,andthefortress’vulnerabilityisdisplayedasabarwhichfillsuponeachshot.(a)Thebarisempty,indicatingthatthefortress’vulnerabilityis0,(b)Thebarisnowfull,indicatingthatvulnerabilityisequalto10andarapiddoubleshotwillnowdestroythefortressand(c)Thefortresshasbeendestroyed.Thisisfollowedbyaresetofthefortressandcontinuationofthegametillendofepisode(3minutesgametime) WenowdescribetheSpaceFortressgame,discussitsutilityasatestbedforreinforcementlearning,andpresentresultsfromhumanslearningtoplaythegame,intendedasabaseline.ThegameenvironmentcanbeseeninFig.[1](#S3.F1 "Figure 1 ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"). ### 3.1 GameDescription Theplayer/AIagentcontrolsaship,whichhastoflyaroundinafrictionlessarena,firingmissilestodestroyafortresslocatedcentrallywithinthearena.Hittingthewallsoneithersidesorbeinghitbyshellsfiredbythefortressresultsinimmediateshipdeath,whichincursapenaltyontheagent.Destroyingthefortress,however,requiresacontext-awarestrategy.Eachmissilethatsuccessfullyhitsthefortressincreasesitsvulnerabilityvbyone.Whenv<10,thefortressis’notvulnerable’,andtheshipmustfireitsmissilesspacedmorethan250msapart.Firingfasterthanthiswhilev<10leadstothefortressvulnerabilitygettingresetbacktozero.Thisisobviouslyundesirableandtheagentmustlearntoshootslowly.However,oncev=10,thefortressbecomesvulnerable,andarapiddoublefire(2shotsspacedlessthan250msapart)isrequiredtodestroythefortress.Werefertothis250mstimespecificationasthe“criticaltimeinterval”.Itisimportanttonotethatoncev=10,shootingfurtheratthefortressataratelessthan4Hzwillleadtonochangeinvulnerability.Hencethefiringstrategycompletelyreversesatthepointwhenvulnerabilityreaches10,andtheagentmustlearntoidentifythiscriticalpointtoperformwell.Sincethegameissimplyreset(withoutendingtheepisode)whenthefortressisdestroyed,itiscrucialthattheagentalsorecognizethissecondcriticalpointoffortressdestruction,andswitchbackitsfiringratetocontinueplayingwell.ThismajordependenceoncontextualandtemporalsensitivityisuniquetoSpaceFortressamongRLbenchmarks.Asinglegamelastsfor3minutes.Thegamedoesnotendintheeventofeitherafortressorshipdestruction,andpointsarescoredbydestroyingthefortressasmanytimesaspossibleinthose3minuteswhileavoidinggettingshotdownbythefortressorcollidingwiththearena.Whenafortressisdestroyed,itsvulnerabilityresetstozero,andthegamecontinues.Whentheshipisdestroyed,itrespawnsatarandompositionandorientation,butthefortress’vulnerabilityispreserved. ### 3.2 GameVersions SpaceFortressrequirestheagenttomasteradvancedcontrolsinafrictionlessenvironment,orientingandfiringmissilesatthefortresswhileavoidingshellsandnotcollidingwiththewalls.SincecurrentRLalgorithmsprovedunabletosolvethegameinitsentirety(seeexperimentsinSection[4.2](#S4.SS2 "4.2 WithDefault(Sparse)Rewards ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")),weintroducedanotherversionofthegametoreducenavigationcomplexitybyhavingtheshipautomaticallypointedatthefortress.Throughouttherestofthepaper,thesimplerversioniscalled‘Autoturn’,whiletheoriginalgameisreferredtoas‘Youturn’. ### 3.3 HumanEvaluations Thehumanplayerresultswerecollectedbytheauthorsinthecontextofastudyonhumanskillacquisition[[Andersonetal.2018](#bib.bibx1)].117peoplewereaskedtoplay20gamesofSpaceFortress,with52playingAutoturnand65playingYouturn.Theywereallgiveninstructionsabouttherulesofthegamebeforehand,andtoldaboutthechangeinfiringraterequiredwhenthefortressvulnerabilityreaches10.Consideringthathumanswouldrequiresometurnstolearntoplaythegame,wereportthefollowingresultsinTable[1](#S3.T1 "Table 1 ‣ 3.3 HumanEvaluations ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"):(1)Bestperformanceofanysubjectinanygame,(2)Averageperformanceofallsubjectsinthelast5games,consideringthefirst15asalearningphase,(3)Averageperformanceofallsubjectsinthelast10games,consideringthefirst10asalearningphase,(4)Averageperformanceofallsubjectsinthelast15games,consideringthefirst5asalearningphaseand(5)Averageperformanceofallsubjectsinall20games.Thescoresshowntothehumans(andreportedinTable[1](#S3.T1 "Table 1 ‣ 3.3 HumanEvaluations ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"))wereasfollows:+100forfortressdestruction,-100forshipdeathand-2foreachmissileshottopenalizeexcessivefiring. | Game | N | Metric | Best | Last5 | Last10 | Last15 | All | | --- | --- | --- | --- | --- | --- | --- | --- | | Autoturn | 52 | Score | 3000 | 1989 | 1978 | 1940 | 1810 | | FortressDeath | 40 | 30.311 | 30.044 | 29.591 | 28.181 | | Youturn | 65 | Score | 2314 | 216 | 153 | 43 | -169 | | FortressDeath | 32 | 14.36 | 13.704 | 12.882 | 11.4 | Table 1: Aggregatedresultsfor102humansplayingSpaceFortress.Afterbeingprovidedwithinstructionsabouttherulesbeforehand,eachplayerplayedthegamefor1hour,or20games.Allowingforafewpracticegames,wereporttheaveragescoresontheLast‘K’games(K∈{5,10,15}),aswellasthebestindividualscore. | S.No. | Algorithm | Game | Avg.Score | BestScore | FortressDeath | | --- | --- | --- | --- | --- | --- | | 1 | A2C | Autoturn | -2685 | -2242 | 0 | | 2 | A2C | Youturn | -5859 | –5604 | 0 | | 3 | PPO | Autoturn | -2502 | -2178 | 0 | | 4 | PPO | Youturn | -5269 | -4698 | 0 | | 3 | Rainbow | Autoturn | -8327 | -8264 | 0 | | 4 | Rainbow | Youturn | -9378 | -9245 | 0 | Table 2: AveragegamescoresforRLagents,trainedwithdefault(sparse)rewards,for45Msteps ### 3.4 RLSetup WenowdescribetheexactgamesetupusedforreinforcementlearningonSpaceFortress. * Observations:Theobservationsareintheformofpixel-levelgrayscale84x84sizerenderingsofthegamescreen(similartoFig.[1](#S3.F1 "Figure 1 ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark").Importantinformationsuchasthetimelapsedsincethelastshotisnotapartofthisobservation,makingthetaskpartiallyobserved.Weprovidetheagentwithastackofthelast4observationsasinputateachtimestep,allowingittoinferdirectionofmovementoftheshipandfortressusingthedifferencebetweensuccessiveframes. * Actions:Theagentchoosesfrom5actions:(i)NoOperation,(ii)Fire(amissile),(iii)ThrustForward(inthedirectionofcurrentorientation),(iv)ThrustRight(rotaterightwithoutchangingposition)and(v)ThrustLeft(rotateleftwithoutchangingposition).Thegameoperatesatadefaultframerateof30FPSandthereisnoactionrepeat,whichmeansanactionischosenevery33ms.NotethattheAutoturnversiononlyhas3actions(sincenoturningisrequired). * Rewards:Inlinewith\citeauthormnih2015human \shortcitemnih2015human,wefoundthatlearningwasmorestablewhenusingclippedrewards.Thefortressandshipdestructionrewardswereclippedto+1and-1,respectively,andthemissilepenaltyreducedto-0.05.Notethattheresultsusedforevaluationandreportingwerenotclipped,inordertofollowthesameschemeasdescribedinSection[3.3](#S3.SS3 "3.3 HumanEvaluations ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"). | S.No. | Algorithm | Game | Avg.Score | BestScore | FortressDeath | | --- | --- | --- | --- | --- | --- | | 1 | A2C | Autoturn | -4116 | -2100 | 0 | | 2 | A2C | Youturn | -4781 | -3890 | 1.3 | | 3 | PPO | Autoturn | -1294 | -1108 | 1 | | 4 | PPO | Youturn | -1435 | -1206 | 0.94 | | 3 | Rainbow | Autoturn | -6161 | -5960 | 0 | | 4 | Rainbow | Youturn | -4894 | -4577 | 0 | Table 3: AveragegamescoresforRLagents,trainedwithdenserewards,for45Msteps. | S.No. | Algorithm | Architecture | Game | Avg.Score | BestScore | FortressDeath | | --- | --- | --- | --- | --- | --- | --- | | 1 | A2C | SF-GRU | Autoturn | -1641 | -718 | 3 | | 2 | A2C | SF-GRU | Youturn | -2444 | -1700 | 11 | | 3 | PPO | SF-FF | Autoturn | 2337 | 2818 | 41 | | 4 | PPO | SF-FF | Youturn | 2235 | 2880 | 40 | | 5 | PPO | SF-GRU | Autoturn | 2510 | 2870 | 43 | | 6 | PPO | SF-GRU | Youturn | 2356 | 2932 | 41 | | 7 | Rainbow | – | Autoturn | -2973 | -2330 | 1.2 | | 8 | Rainbow | – | Youturn | -4112 | -3934 | 0.0 | Table 4: AveragegamescoresforRLagents,trainedaftermakingcontextidentificationeasier,for45Msteps 4 ExperimentsandResults ------------------------ Inthissection,weexperimentallyshowthat(a)nostate-of-the-artreinforcementlearningalgorithm(Rainbow[[Hesseletal.2018](#bib.bibx12)],A2C[[Mnihetal.2016](#bib.bibx17)]andPPO[[Schulmanetal.2017](#bib.bibx25)])canlearntoplaySpaceFortress,(b)removingrewardsparsitydoesnotimprovetheperformanceand(c)makingcontextidentificationeasierthroughspecificalterationsintherewardstructureallowPPOtoachievesuperhumanperformance.Wealsodiscusstemporalsensitivitybyexaminingeffectivenessoftransferoflearningacrossdifferentsettingsofthegame’scriticaltimeinterval. ### 4.1 NetworkArchitecture ForRainbow,theQ-networkarchitecturewasidenticaltothatin\citeauthorhessel2017rainbow \shortcitehessel2017rainbow.ForPPOandA2C,weexperimentwithtwopolicynetworkarchitectures: * SF-GRU:Theagent’spolicynetworktakesthe1x84x84environmentobservationsasinput,andoutputs(a)aprobabilitydistributionovertheactions,and(b)avaluefunctionestimateoftheexpectedreturn.Theinputgoesthroughtwoconvolutionallayerswith16,32filtersofsize8,4andstride4,2respectively,andReLUactivation.TheoutputisflattenedandpassedthroughalinearlayerwithaReLUnon-linearitytogetanoutputvectorofsize256.ThisisthenpassedthroughaunidirectionalGatedRecurrentUnit(GRU)cell[[Choetal.2014](#bib.bibx9)]withatanhnon-linearitygivinganoutputofsize256.Finally,thisvectorispassedasinputtotwolinearlayersthatoutputtheprobabilitydistributionoveractions(usingasoftmaxactivation)andthevalueestimateoftheexpectedreturn. * SF-FF:Sameasabove,butwithafullyconnectedlayerofsize256withReLUnon-linearityinsteadoftherecurrentGRUcell. Forallexperiments,weran16processescollectinggameexperienceinparallel,withdiscountfactorγ=0.99andGeneralizedAdvantageEstimation(GAE)[[Schulmanetal.2015](#bib.bibx24)]parameterλ=0.95.PPOusedvaluelosscoefficientc1=0.5,entropyregularizationcoefficientc2=0.05andlearningrate1×10−3,whileA2Cusedc1=0.5,c2=0.01andlearningrate5×10−4.BothA2CandPPOusedn=1024-stepreturns.Thesehyperparameterswerefoundafterextensivetuning.Wealsoclippedthegradientsofallthenetworkparametersto0.5,topreventcatastrophicupdatesfromoutlyingsamplesoftheexpectedgradientvalue.SincethePPOalgorithmismorestable,weupdatedthepolicy4timeseveryepoch-whileA2Cmadeonly1updateeveryepoch. | | | | --- | --- | | (a) Autoturn | (b) Youturn | Figure 2: LearningCurvesforPPOonSpaceFortress,fordifferentrewardstructuresandarchitectures.’AverageHumanScore’referstotheaveragescoreoverall20games,providedasapointofcomparison.’DefaultRewards’discussedinSection[4.2](#S4.SS2 "4.2 WithDefault(Sparse)Rewards ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"),’DenseRewards’inSection[4.3](#S4.SS3 "4.3 WithDenseRewards ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")and’AECI’(AfterEasingContextIdentification)inSection[4.4](#S4.SS4 "4.4 AfterMakingContextIdentificationEasier ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark").BothSF-FFandSF-GRUarchitecturesareabletoachievesuperhumanperformanceaftermakingcontextidentificationeasier.Theagent’sperformanceisverypoorwithbothdefault(sparse)rewardsanddenserewards. | | | | | --- | --- | --- | | (a) 125ms | (b) 400ms | (c) 600ms | Figure 3: CheckingforpositivetransferoflearningwhilechangingthecriticaltimeintervalinSpaceFortress.Ineachfigure,wechangedthetimeintervaltoadifferentvalueandverifiedwhethertransferringlearnedweightsfromtheagenttrainedon250msasthecriticalintervalledtoanyspeedupinthelearningprocess,andanimprovementinthefinalperformance. ### 4.2 WithDefault(Sparse)Rewards Withthedefaultsparserewardstructurewhichrewardsfortressdestructionandpenalizesshipdestructionandmissilefiring,noalgorithmisabletolearntodestroythefortress.AvisualinspectionofthegameplayrevealedthatthePPOandA2Cagents(withbotharchitectures)justlearnedtostopfiring,sincethatleadstoanimmediatepenalty.TheRainbowagentdidnotlearnanything.Table[2](#S3.T2 "Table 2 ‣ 3.3 HumanEvaluations ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")presentstheaggregatedresultsforPPO,A2CandRainbowonbothversionsofthegame.The‘FortressDeath’columninTable[2](#S3.T2 "Table 2 ‣ 3.3 HumanEvaluations ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")indicatesthenumberoftimestheagentwasabletodestroythefortresspergame,onaverage. ### 4.3 WithDenseRewards ConsideringtheresultsinSection[4.2](#S4.SS2 "4.2 WithDefault(Sparse)Rewards ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")andtounderstandhowrewardsparsityisimpactingperformance,weintroduceanadditionalrewardof+1eachtimethefortressishitbyamissile,andapenaltyof-1ifthefortress’vulnerabilitygetsresetduetoafasterfiringspeedthanthecontextdemands.ThismakestherewarddensitycomparabletoAtarigames,onwhichPPO,A2CandRainbowhaveallbeenshowntoperformwell.TheirperformanceonSpaceFortresswithdenserewardscanbeseeninTable[3](#S3.T3 "Table 3 ‣ 3.4 RLSetup ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"),wherethescoresforPPOandRainbowhaveimproved.Fromwatchingavideoofthetrainedagentplayingthegame,weobservedthattheimprovementstemmedfromhavinglearnedtoavoidshipdeathandtofireatthefortress,albeitwithoutknowledgeofthecriticaltimeintervalandcontext-dependentstrategyshifts,resultinginaninabilitytodestroythefortresswithanyconsistency.Hence,SpaceFortressclearlypresentsachallengetothestateoftheartinreinforcementlearning,andisausefulandrelevantbenchmarkforfurtherresearch.WenowmoveontostudyingtheimpactofcontextinsensitivityofRLalgorithmsonthetaskperformance(Section[4.4](#S4.SS4 "4.4 AfterMakingContextIdentificationEasier ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")),andshowthatbymakingtheidentificationofcriticalcontextseasierwith2simplemodificationstothereward,PPOlearnstoplaythegameverywell-outperforminghumanscomfortably.Thisclearlyindicatesthatitiscontextinsensitivityandtheinabilitytoidentifycriticalpointswhichishamperingperformance,furthermakingthecasethattheSpaceFortressgameisausefulbenchmarkforstudyingcontextsensitivity. ### 4.4 AfterMakingContextIdentificationEasier AsdiscussedinSection[3.1](#S3.SS1 "3.1 GameDescription ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"),thereare2criticalpointswhichtheagenthastolearntoidentifyandswitchstrategiesat.Thefirstiswhenthefortressbecomesvulnerable,i.e.v=10andtheagenthastoswitchfromfiringmorethan250msaparttoarapiddoubleshotfiredlessthan250msapart.Thesecondiswhenthefortressisdestroyed,andtheagenthastoswitchbacktoitsslowfiringspeed.Toconfirmourintuitionthatitisindeedthealgorithms’inabilitytoidentifythesecriticalpointsandaccordinglyadaptitsfiringstrategywhichleadtopoorperformance,weintroduce2changestotherewardstructure(withrespecttothedenserewardfromSection[4.3](#S4.SS3 "4.3 WithDenseRewards ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"))whichmakeittrivialfortheagenttoidentifythecriticalpointswherecontextchanges: * Insteadofrewardingfortresshits(asinSection[4.3](#S4.SS3 "4.3 WithDenseRewards ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")),weswitchtorewardingfortressvulnerabilitychange,bygivingarewardof+1forunitincreaseinvulnerability,andgivingapenaltyof-1fordecreaseinvulnerability.Thishastheeffectofrewardingfortresshitsonlyuntilthefortress’vulnerabilityisbuildingupto10,atwhichpointfurtherhitsarenotrewarded.Thisclearlyhelpsitidentifythecriticalcontextatwhichthefortressbecomesvulnerable. * Wegivetheagentabonusrewardof+2forfortressdestruction,tohelpitidentifywhenthefortressisdestroyed. Table[4](#S3.T4 "Table 4 ‣ 3.4 RLSetup ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")presentstheresultsforagentstrainedafterthese2changeshavebeenintroducedtotherewardstructuretoeasecontextidentification.PPOwiththerecurrentarchitectureSF-GRUachievesthebestperformanceinbothscoreandnumberoffortressdeaths,learningfasterthanSF-FF,andachievingahigherfinalscore.TheperformanceofbothA2CandRainbowalsoimproves,althoughtheyarestillunabletooutperformhumans.Fig.[2](#S4.F2 "Figure 2 ‣ 4.1 NetworkArchitecture ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")tracksthelearningcurvesforPPOlearningwithall3rewardsettings(default,denseandaftermakingcontextidentificationeasier)inbothgameversions-AutoturnandYouturn. ### 4.5 TemporalSensitivity HavingestablishedthatcontextinsensitivityistheprimarydriverofpoorperformanceofstateoftheartRLalgorithmsonSpaceFortress,wenowanalyzethetemporalsensitivityofthePPOalgorithm.AsdescribedinSection[3.1](#S3.SS1 "3.1 GameDescription ‣ 3 TheSpaceFortressRLEnvironment ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"),SpaceFortresshasadominanttemporalaspect-missilesmusthitthefortressatleast250msapartwhenitisnotvulnerable,andthenthestrategymustreversetohitthefortresstwicewithin250mswhenitisvulnerable,inordertodestroyit.InordertounderstandwhethertheRLalgorithmshaddevelopedanyunderstandingoftimeasanindependentdimension,wemodifiedthecriticaltimeintervalfrom250mstoothervalues,andcheckedforpositivetransferoflearningfromthepolicytrainedwith250msasthecriticaltimeinterval.Weachievetransferoflearningbysimplyinitializingtheweightsofthetransfereewiththelearnedweightsofthetransferer.Figure[3](#S4.F3 "Figure 3 ‣ 4.1 NetworkArchitecture ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")comparesthelearningcurvesforanagentlearningwithPPO(usingtheSF-FFarchitecture)onYouturn,whenthecriticaltimeintervalischangedfrom250msto{125,400,600}ms.Thebluelineisforanagentlearningfromscratch,whiletheorangelineisforanagenttransferringlearningfromthePPOSF-FFagenttrainedonthe250msinterval.FromFigure[2(b)](#S4.F2.sf2 "(b) ‣ Figure 3 ‣ 4.1 NetworkArchitecture ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark")and[2(c)](#S4.F2.sf3 "(c) ‣ Figure 3 ‣ 4.1 NetworkArchitecture ‣ 4 ExperimentsandResults ‣ Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark"),itcanbeseenthatwhilethetransferoflearninghelpsbyinitializingtheweightsinafavorablecorneroftheparameterspace,thelearningsaturatesveryquicklyandendsupwithafinalscoremuchlowerthanachievedwhenthecriticalintervalwas250ms.ModifyingthecriticaltimeintervalinSpaceFortressisausefultechniquetostudytemporalsensitivityofreinforcementlearningalgorithms. 5 Conclusion ------------- ThispaperintroducedSpaceFortressasanewchallengefordeepreinforcementlearningresearch,withitstime-sensitivegameplay,abruptcontext-dependentshiftofstrategyandsparserewards.WeshowedthatstateoftheartRLalgorithms(PPO,A2CandRainbow)wereunabletolearntoplaythegamewithneitherthedefaultsparserewardsnorthedenserewardstructurewedefined.Aftermakingcontextidentificationeasierthroughtwominortweaksintherewardstructure,however,PPOwasabletolearntoplaythegame,outperforminghumanscomfortably.ThisablationstudyallowedustoconcludethatcontextinsensitivitywastheprimaryreasonbehindthepoorperformanceofRLalgorithmsonSpaceFortress,alongwiththeinabilitytolearnwithsparserewards.WethenlookedatwhetherPPOdevelopsaconceptoftimeasanindependentvariable-bycheckingforpositivetransferoflearningwhilechangingthecriticaltimeintervalof250msinSpaceFortress.Wefoundthatwhiletherewassomepositivetransferoflearning,theagentssaturatedveryquicklyanddidnotachieveagoodfinalscore.Bystudyinggeneralizationandtransferacrossdifferentsettingsofthecriticaltimeinterval,SpaceFortresscanhencealsobeusedasabenchmarktostudytemporalsensitivityofreinforcementlearningalgorithms.LearningtoplaySpaceFortresswithoutmakinganymodificationstotherewardstructurewillrequirereinforcementlearningalgorithmstobeabletoidentifyvariouslatentcontextsandadapttheirstrategiessuitably.Itwillalsorequirebeingabletolearnwithverysparserewards.Thisisbeyondthecapabilityofcurrentstateoftheartreinforcementlearningalgorithms,makingSpaceFortressausefulbenchmarkforresearch. #### Acknowledgments ThisresearchwassponsoredbyAFOSRGrantFA9550-15-1-0442.ThecollectionofhumandataanddevelopmentoftheOpenAIGyminterfaceforSpaceFortresswassupportedbyONRgrantN00014-15-1-2151.WewouldliketothankShawnBettsandJohnAndersonforinsightfuldiscussionsonthegameofSpaceFortress,andfortheOpenAIGyminterfaceforSpaceFortresswhichwasusedtoruntheexperimentsinthispaper.
bd23232b-4ed1-4257-97b6-c30c4465c136
trentmkelly/LessWrong-43k
LessWrong
Workshop (hackathon, residence program, etc.) about for-profit AI Safety projects? Reading posts like "This might be the last AI Safety Camp" makes me feel very sad. The degree to which AI Safety is currently funding-constrained is clearly inadequate on the national and civilisational levels. Maybe let's try to make a smart counter-move and accelerate the development of for-profit AI Safety projects (see also the comments to that post, and this post)? With the obvious idea to pull some VC money, which is a different pool than AI safety philanthropic funds. Potential collaborations: * https://vc.ae.studio/ * https://www.lionheart.vc/ * https://www.mythos.vc/ * https://www.joinef.com/ (or maybe they are working on something like this already? At least Matt Clifford was the UK PM's sherpa for AI Safety Summit, and now working on AI full time) * More? P.S. I'm not a professional organiser or community builder, nor a startup accelerator program manager, so just floating the idea, but I'd be very eager to participate if something like this is organised.
6e10671c-de93-4084-9ea5-a53df21a6894
trentmkelly/LessWrong-43k
LessWrong
Meetup : Melbourne Social Meetup: August Discussion article for the meetup : Melbourne Social Meetup: August WHEN: 19 August 2016 06:30:00PM (+1000) WHERE: Bull & Bear Tavern, 347 Flinders Lane, Melbourne, Australia 3000 This month's Social Meetup is on as scheduled! Facebook event page: https://www.facebook.com/events/595410943963246/ Social Meetups are casual get-togethers held on the third Friday of each month. They are informal events where we sit around and chat over a few drinks and a meal. Sometimes we will play games as well. Where? Bull & Bear Tavern, 347 Flinders Lane, Melbourne When? Friday 19th August, starting from 6:30pm. Start time is loose so show up whenever. We are usually there until about 11pm. Dinner? The B&B serves reasonable pub food and we will usually share some wedges or similar. A few of us typically go for a late dinner after the event. Recently we've been going to Father's Office which offers late night a la carte dining at half price. Contact? Any issues or questions, contact Chris on 0439471632. Discussion article for the meetup : Melbourne Social Meetup: August
4f02d698-4076-4338-960e-f11502c471c0
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Misalignment and misuse: whose values are manifest? *Crossposted from [world spirit sock puppet](https://worldspiritsockpuppet.com/meteuphoric.html).* AI related disasters are often categorized as involving misaligned AI, or misuse, or accident. Where: * misuse means the bad outcomes were wanted by the people involved, * misalignment means the bad outcomes were wanted by AI (and not by its human creators), and * accident means that the bad outcomes were not wanted by those in power but happened anyway due to error. In thinking about specific scenarios, these concepts seem less helpful. I think a likely scenario leading to bad outcomes is that AI can be made which gives a set of people things they want, at the expense of future or distant resources that the relevant people do not care about or do not own. For example, consider autonomous business strategizing AI systems that are profitable additions to many companies, but in the long run accrue resources and influence and really just want certain businesses to nominally succeed, resulting in a worthless future. Suppose Bob is considering whether to get a business strategizing AI for his business. It will make the difference between his business thriving and struggling, which will change his life. He suspects that within several hundred years, if this sort of thing continues, the AI systems will control everything. Bob probably doesn’t hesitate, in the way that businesses don’t hesitate to use gas vehicles even if the people involved genuinely think that climate change will be a massive catastrophe in hundreds of years. When the business strategizing AI systems finally plough all of the resources in the universe into a host of thriving 21st Century businesses, was this misuse or misalignment or accident? The strange new values that were satisfied were those of the AI systems, but the entire outcome only happened because people like Bob chose it knowingly (let’s say). Bob liked it more than the long glorious human future where his business was less good. That sounds like misuse. Yet also in a system of many people, letting this decision fall to Bob may well have been an accident on the part of others, such as the technology’s makers or legislators. Outcomes are the result of the interplay of choices, driven by different values. Thus it isn’t necessarily sensical to think of them as flowing from one entity’s values or another’s. Here, AI technology created a better option for both Bob and some newly-minted misaligned AI values that it also created—‘Bob has a great business, AI gets the future’—and that option was worse for the rest of the world. They chose it together, and the choice needed both Bob to be a misuser and the AI to be misaligned. But this isn’t a weird corner case, this is a natural way for the future to be destroyed in an economy. *Thanks to Joe Carlsmith for conversation leading to this post.*
54a3f509-cd77-429f-8588-8e64e94776f5
trentmkelly/LessWrong-43k
LessWrong
Will the ems save us from the robots? At the FHI, we are currently working on a project around whole brain emulations (WBE), or uploads. One important question is if getting to whole brain emulations first would make subsequent AGI creation 1. more or less likely to happen, 2. more or less likely to be survivable. If you have any opinions or ideas on this, please submit them here. No need to present an organised overall argument; we'll be doing that. What would help most is any unusual suggestion, that we might not have thought of, for how WBE would affect AGI. EDIT: Many thanks to everyone who suggested ideas here, they've been taken under consideration.
88495308-70fb-4792-8395-2908bbe42e9e
trentmkelly/LessWrong-43k
LessWrong
Open Thread, November 23-30, 2013 If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
a62974c0-392b-499a-b666-96453ca31f19
trentmkelly/LessWrong-43k
LessWrong
The Sanity Waterline Bulges at the Middle Ideas have consequences. A subset of ideas — we’ll call it ideology — has consequences on material life. Naturally, this means that ideology is subject to a selective pressure. Which means we need to consider slack. Just as with intermediary adaptions in the stylized case of eyespots, there can are frictions associated with adhering to new ideology. In departing from the previous paradigm, one is supported by less wisdom and has a narrower reference class to work from. There will be bumps along the way as you and your fellow innovators figure things out. These frictions require slack. In a world with too little slack, activation energy is too scarce to spend on innovation. If eyespots cost energy but only payoff after the system settles into a new equilibrium, they won’t reach fixation in a population of energy-constrained worms. If one day a Russian serf had come up with Progressive Belief X that could yield some utility returns in a decade’s time if the community refactored a bit, it wouldn’t matter. They have mouths to feed right now. On the other hand, if there’s too much slack, the adaption will also fail to reach fixation. Why? Well, even if it happened to do so, by a stroke of luck, it’d be helpless to prevent another adaption from replacing it. Too much slack means too little selective pressure, which means that there’s no mechanism by which any phenotype could characterize an evolutionarily stable equilibrium. In a lab where worms are fed no matter what, eyespots might arise and then fade back away, just as might the latest Western social fad among the bored and comfortable Russian elite. In order for good adaptions to reach fixation, you need some slack, but not too much. When these conditions don’t hold, you might expect dysfunction. What does this look like? Take education. The poor have little slack, so they under-educate themselves. There are scores of people with 145+ IQs working manual jobs. Whereas the middle class have some slack, so they get
35588a4a-6257-4a68-82d8-81b4b48de9bb
trentmkelly/LessWrong-43k
LessWrong
Reframing the evolutionary benefit of sex Automatically crossposted from https://sideways-view.com From the perspective of an organism trying to propagate its genes, sex is like a trade: I’ll put half of your DNA in my offspring if you put half of my DNA in yours. I still pass one copy of my genes onto the next generation per unit of investment in children, so it’s a fair deal. And it doesn’t impact the average fitness of my kids very much, since on average my partner’s genes will be about as good as mine. (ETA: but see the discussion below, in which case the costs might be much bigger.) But the trade has transaction costs, so I’m only going to do it if I get some benefit. In this post I’ll tell a particularly simple story about the benefit of sex. I think this is basically equivalent to the standard story, but I find it much clearer. It also makes it more obvious that we don’t require group selection, and that the benefit is very large. Why doesn’t sex change the average fitness of my kids? The possibility of a “lucky” kid who gets the better genes from both of us is offset by the possibility of an unlucky kid who gets the worse genes from both of us. If the effects of genes are linear, the average fitness will be exactly the same as the parents. In practice I expect it to be slightly lower because of convexity and linkage disequilibrium. But sex increases the average fitness of my grandchildren, because my fittest children will be responsible for a disproportionate fraction of my grandkids. More precisely, if my if an organism with fitness dX has (1+dX) kids per generation, then the total fitness of my grandkids is E[(1 + dX)^2] = 1 + 2 E[dX] + E[dX^2]. So increasing variance by 1 unit is as good as increasing average fitness by 0.5 units. Reproductive decisions are naturally a tradeoff between average fitness and variance. Sex slightly lowers the average but increases the variance. If you try to get the same amount of variance with random mutations, you’ll have to totally tank your kid’s expected
1b21ada2-3f85-4e34-864c-a416028d0efe
trentmkelly/LessWrong-43k
LessWrong
Introduction to Local Interpretable Model-Agnostic Explanations (LIME)
d7a864fb-4703-41b5-9a91-e52efb264c82
trentmkelly/LessWrong-43k
LessWrong
Generalization, from thermodynamics to statistical physics Summary In 2018, Zhang et al. showed that deep neural networks can achieve perfect training loss on randomly labeled data. This was a Big Deal. It meant that existing generalization theory couldn't explain why deep neural networks generalize. That's because classical approaches to proving that a given model class (=neural network architecture) would generalize involved showing that it lacks the expressivity to fit noise. If a model class can fit noise arbitrarily well, the resulting bounds break. So something needed to change. Evidently, you can't prove tight generalization bounds for entire model classes, so theorists turned to studying generalization bounds for individual models within a model class. If you can empirically show that a model's performance doesn't change substantially when you perturb it (by adding noise to the inputs, weights, training samples, etc.), then you can theoretically prove that that model will generalize to new data. As a result, the bounds have gotten tighter, but they're still not exactly flattering. What's really needed is a secret third thing. It's not about either model classes or individual models but about model subclasses. While the model class as a whole may be too complex to obtain tight generalization bounds, individual subclasses can achieve an optimal trade-off between accuracy and complexity. For singular Bayesian learning machines, this trade-off happens automatically. This more or less answers why models are able to generalize but not how they do it. Singular learning theory (SLT) provides one possible path towards understanding the how of generalization. This approach is grounded in the geometry of the loss landscape, which in turn is grounded in the symmetries of the model class and data. If this direction pans out, then learning theory is posed for a revolution analogous to the transition between thermodynamics and statistical physics. Introduction The central aim of classical learning theory is to bound vari
b180af43-deab-4b62-b43b-de2d17966367
trentmkelly/LessWrong-43k
LessWrong
Book Summary: Consciousness and the Brain One of the fundamental building blocks of much of consciousness research, is that of Global Workspace Theory (GWT). One elaboration of GWT, which focuses on how it might be implemented in the brain, is the Global Neuronal Workspace (GNW) model in neuroscience. Consciousness and the Brain is a 2014 book that summarizes some of the research and basic ideas behind GNW. It was written by Stanislas Dehaene, a French cognitive neuroscientist with a long background in both consciousness research and other related topics. The book and its replicability Given that this is a book on psychology and neuroscience that was written before the replication crisis, an obligatory question before we get to the meat of it is: how reliable are any of the claims in this book? After all, if we think that this is based on research which is probably not going to replicate, then we shouldn’t even bother reading the book. I think that the book’s conclusions are at least reasonably reliable in their broad strokes, if not necessarily all the particular details. That is, some of the details in the cited experiments may be off, but I expect most of them to at least be pointing in the right direction. Here are my reasons: First, scientists in a field usually have an informal hunch of how reliable the different results are. Even before the replication crisis hit, I had heard private comments from friends working in social psychology, who were saying that everything in the field was built on shaky foundations and how they didn’t trust even their own findings much. In contrast, when I asked a friend who works with some people doing consciousness research, he reported back that they generally felt that GWT/GNW-style theories have a reasonably firm basis. This isn’t terribly conclusive but at least it’s a bit of evidence. Second, for some experiments the book explicitly mentions that they have been replicated. That said, some of the reported experiments seemed to be one-off ones, and I did not yet
0ab3a71e-76b2-4848-b290-afb25993d696
trentmkelly/LessWrong-43k
LessWrong
Friendlier AI through politics David Brin suggests that some kind of political system populated with humans and diverse but imperfectly rational and friendly AIs would evolve in a satisfactory direction for humans. I don't know whether creating an imperfectly rational general AI is any easier, except that limited perceptual and computational resources obviously imply less than optimal outcomes; still, why shouldn't we hope for optimal given those constraints?  I imagine the question will become more settled before anyone nears unleashing a self-improving superhuman AI. An imperfectly friendly AI, perfectly rational or not, is a very likely scenario.  Is it sufficient to create diverse singleton value-systems (demographically representative of humans' values) rather than a consensus (over all humans' values) monolithic Friendly?   What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right?  Brin seems to have some hope of improving politics regardless of AI participation, but I'm not sure exactly what his dream is or how to get there - perhaps his "disputation arenas" would work if the participants were rational and altruistically honest).
70c4d2a4-10fd-43ee-8232-8b418b0b67b3
trentmkelly/LessWrong-43k
LessWrong
Radical Probabilism This is an expanded version of my talk. I assume a high degree of familiarity with Bayesian probability theory. Toward a New Technical Explanation of Technical Explanation -- an attempt to convey the practical implications of logical induction -- was one of my most-appreciated posts, but I don't really get the feeling that very many people have received the update. Granted, that post was speculative, sketching what a new technical explanation of technical explanation might look like. I think I can do a bit better now. If the implied project of that post had really been completed, I would expect new practical probabilistic reasoning tools, explicitly violating Bayes' law. For example, we might expect: * A new version of information theory. * An update to the "prediction=compression" maxim, either repairing it to incorporate the new cases, or explicitly denying it and providing a good intuitive account of why it was wrong. * A new account of concepts such as mutual information, allowing for the fact that variables have behavior over thinking time; for example, variables may initially be very correlated, but lose correlation as our picture of each variable becomes more detailed. * New ways of thinking about epistemology. * One thing that my post did manage to do was to spell out the importance of "making advanced predictions", a facet of epistemology which Bayesian thinking does not do justice to. * However, I left aspects of the problem of old evidence open, rather than giving a complete way to think about it. * New probabilistic structures. * Bayesian Networks are one really nice way to capture the structure of probability distributions, making them much easier to reason about. Is there anything similar for the new, wider space of probabilistic reasoning which has been opened up? Unfortunately, I still don't have any of those things to offer. The aim of this post is more humble. I think what I originally wrote was too ambitious for didactic purp
24aeaba6-19d7-45be-832c-aa48d14ef7f9
trentmkelly/LessWrong-43k
LessWrong
Antidepressant Pharmacogenomics: Much More Than You Wanted To Know [Epistemic status: very uncertain. Not to be taken as medical advice. Talk to your doctor before deciding whether or not to get any tests.] I. There are many antidepressants in common use. With a few exceptions, none are globally better than any others. The conventional wisdom says patients should keep trying antidepressants until they find one that works for them. If we knew beforehand which antidepressants would work for which patients, it would save everyone a lot of time, money, and misery. This is the allure of pharmacogenomics, the new field of genetically-guided medication prescription. Everybody has various different types of cytochrome enzymes which metabolize medication. Some of them play major roles in metabolizing antidepressants; usually it’s really complicated and several different enzymes can affect the same antidepressant at different stages. But sometimes one or another dominates; for example, Prozac is mostly metabolized by one enzyme called CYP2D6, and Zoloft is mostly metabolized by a different enzyme called CYP2C19. Suppose (say the pharmacogenomicists) that my individual genetics code for a normal CYP2D6, but a hyperactive CYP2C19 that works ten times faster than usual. Then maybe Prozac would work normally for me, but every drop of Zoloft would get shredded by my enzymes before it can even get to my brain. A genetic test could tell my psychiatrist this, and then she would know to give me Prozac and not Zoloft. Some tests like this are already commercially available. Preliminary results look encouraging. As always, the key words are “preliminary” and “look”, and did I mention that these results were mostly produced by pharma companies pushing their products? But let me dream for a just a second. There’s been this uneasy tension in psychopharmacology. Clinical psychiatrists give their patients antidepressants and see them get better. Then research psychiatrists do studies and show that antidepressant effect sizes are so small as to be pract
8b74b1ea-6117-4a6d-8f2f-f6b9dd80554f
trentmkelly/LessWrong-43k
LessWrong
The AI Agent Revolution: Beyond the Hype of 2025 A deep dive into the transformative potential of AI agents and the emergence of new economic paradigms Introduction: The Dawn of Ambient Intelligence Imagine stepping into your kitchen and finding your smart fridge not just restocking your groceries, but negotiating climate offsets with the local power station's microgrid AI. Your coffee machine, sensing a change in your sleep patterns through your wearable device, brews a slightly weaker blend—a decision made after cross-referencing data with thousands of other users to optimize caffeine intake for disrupted sleep cycles. This might sound like a whimsical glimpse into a convenient future, but it represents something far more profound: we stand at the threshold of a fundamental transformation in how intelligence operates in our world. The notion of 2025 as the 'Year of the AI Agent' isn't just marketing hyperbole or another wave of technological optimism. It heralds a shift in the very fabric of intelligence—one that demands rigorous examination rather than wide-eyed wonder. What exactly is this "intelligence" that is becoming so ambient? While definitions vary, we can consider intelligence as a fundamental process within the universe, driven by observation and prediction. Imagine it as a function of the constant stream of multi-modal information – the universal "light cone" – impacting an observer at a specific point in spacetime. The more dimensions of resolution an observer can process from these inputs, the more effectively it can recognize patterns and extend its predictive capacity. This ability to predict, to minimize surprise, is not merely a biological imperative; it's a driver for growth on a cosmic scale, potentially propelling intelligent observers up the Kardashev scale as they learn to harness increasing amounts of energy. This perspective moves beyond subjective definitions, grounding intelligence in the physical reality of information processing and the expansion of an observer's understanding of
58e461e6-840f-44fc-8af3-e43383121359
trentmkelly/LessWrong-43k
LessWrong
MATS Spring 2024 Extension Retrospective Introduction and summary This retrospective focuses on the 4-month MATS extension phase (referred to as "MATS 5.1") that ran from April 1 to July 25, 2024, and presents findings gathered from an end-of-extension survey as well as follow-up interviews and surveys ~5 months after the program. Main changes from the 4.1 to 5.1 extension phase: 1. Cohort grew from 26 to 36 scholars split across London, Berkeley and remote participants; 2. MATS formalized research management for the London cohort and grew the team to 2 FTEs; 3. The cohort visited Google DeepMind's London offices; 4. The London team organized Tuesday lightning talks from scholars and MATS staff. Key takeaways from MATS extension impact: 1. Research success: 75% of scholars published results in some form (paper, LW/AF post, codebase), of which 57% got accepted to a conference. 2. Career transitions: 1. 61% of scholars are currently working full-time on AI safety within 5 months of MATS, 22% are doing some safety related work; 2. 33% currently pursue independent research, 6% are working at technical safety orgs, 6% are currently upskilling in industry (non AIS), 22% are pursuing a PhD, and 17% have not found employment yet (as of December '24); 3. One scholar co-founded a new safety-focused organization (Decode Research); 4. We found no clear successes where someone planning to join a frontier lab actually achieved this, although most scholars aimed for this. 3. Research management: 1. Formalized research management for London cohort and grew the team to 2 FTEs; 2. All scholars received regular research management and largely reported this as very helpful; 3. Compared to the main program, RM was more helpful during the extension. This was substantially influenced by the decreased mentor engagement and increased independence scholars are expected to have during this phase of the program. 4. Extension program 1. Overall, scholars were very happy with program and the
8cbd1709-5eae-427c-856e-436dde7b4a0a
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups Irregularly scheduled Less Wrong meetups are taking place in: * [Zurich] March Rationality meetup: 04 March 2017 05:00PM The following meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * [Amsterdam] #13 - Focusing: 05 March 2017 03:13PM * Baltimore / UMBC Meetup - general discussion: 05 March 2017 08:00PM * Chicago Rationality Reading Group - Luminosity Part 3: 05 March 2017 01:00PM * Denver Area LW March Meetup (CHANGED TO MONDAY 3/6!): 06 March 2017 07:00PM * San Francisco Meetup: Cooking: 06 March 2017 06:15PM * Sydney Rationality Dojo - March 2017: 05 March 2017 04:00PM * Washington, D.C.: Fun & Games: 05 March 2017 03:30PM Locations with regularly scheduled meetups: Ann Arbor, Austin, Baltimore, Berlin, Boston, Brussels, Buffalo, Canberra, Chicago, Cologne, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, Netherlands, New Hampshire, New York, Philadelphia, Prague, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.   If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These overview posts are moved to the discussion section when the
dd8354f9-0f15-41f9-8f26-fed114cefcd6
trentmkelly/LessWrong-43k
LessWrong
Meetup : Phoenix (Tempe) Less Wrong Meetup Discussion article for the meetup : Phoenix (Tempe) Less Wrong Meetup WHEN: 18 October 2013 02:00:00PM (-0700) WHERE: 300 E Orange Mall, Tempe, AZ We will be meeting up in Hayden Library at ASU, probably to casually (causally? :P) discuss decision theory. Text me at 602-501-9420 if there are any issues. Discussion article for the meetup : Phoenix (Tempe) Less Wrong Meetup
bec908a7-5329-46f3-bbd8-5e43c7a9ff3a
trentmkelly/LessWrong-43k
LessWrong
Would solving logical counterfactuals solve anthropics? One of the key problems with anthropics is establishing the appropriate reference class. When we attempt to calculate a probability accounting for anthropics, do we consider all agents or all humans or all humans who understand decision theory? If a tree falls on Sleeping Beauty argues probability is not ontologically basic and the "probability" depends on how you count bets. In this vein, one might attempt to solve anthropics by asking about whose decision to take a bet is linked to yours. You could then count up all the linked agents who observe A and all the agents who observe not A and then calculate the expected value of the bet. More generally, if you can solve bets, my intuition is that you can answer any other question that you would like about the decision by reframing it as a bet.
444b5e68-2437-482e-8066-954a8e253e29
trentmkelly/LessWrong-43k
LessWrong
Shut Up and Calculate: Gambling, Divination, and the Abacus as Tantra THERE ARE LAKES at the bottom of the ocean. I saw it in a nature documentary. You get a weird mineral deposit on the seafloor and it makes these brine pools, water so salty it doesn't mix with the sea water around it. Because it has no oxygen, any unlucky fish or crabs that fall in there suffocate to death. And the carcasses attract more predators from the dark—many of which meet the same fate—feeding a teeming colony of mussels and bivalves waiting at the rim of the pool. They're some of the most hellish places on earth. I was reminded of these flying into Las Vegas. My father—finally hitting his stride a decade after his divorce—made a life for himself among the mussels selling timeshare at the Westgate. But his hip gave out, and my estrangement gave way to filial piety. I was to take over for my younger brother, who had cared for him for the past two weeks after his surgery. Maybe I'd get lucky and this visit would go well. The trade-off was smooth enough, but the tenuous peace of the first few days collapsed into petty jabs, to shouting matches, and eventually to a credit card thrown in my face so I could buy a plane ticket home. About on schedule for us. I stayed, keeping to myself but cooking for both of us, refilling his CPAP machine, and picking up laundry off the floor. One night, muted by his sleeping pills and the histrionics of his favorite telenovela, I snuck into his room to steal his weed pen and walked over to the Red Rock Casino. A ceremonial puff too many blown into the desert heat primed me for the spectacle I was about to walk into. The sliding glass doors spilled out a sheet of cool cigarette smoke, revealing a premium-mediocre spiral of lights and colors. Bemused, I stumbled onto the red velvet carpet, joining the waves of sunburned flesh. I'd need to find a margarita or two to titrate the pen. Engineered for the simple and intoxicated, the layout of the casino naturally guided me to a seat at a bar. After fumbling for my ID, I gave up twen
0e8083e0-c365-4cdc-b53e-57782a9a803d
trentmkelly/LessWrong-43k
LessWrong
XOR Blackmail & Causality [Cross-posted from IAFF.] I edited my previous post to note that I’m now much less optimistic about the direction I was going in. This post is to further elaborate the issue and my current position. Counterfactual reasoning is something we don’t understand very well, and which has so many free parameters that it seems to explain just about any solution to a decision problem which one might want to get based on intuition. So, it would be nice to eliminate it from our ontology – to reduce the cases in which it truly captures something important to machinery which we understand, and write off the other cases as “counterfactual-of-the-gaps” in need of some other solution than counterfactuals. My approach to this involved showing that, in many cases, EDT learns to act like CDT because its knowledge of its own typical behavior screens off the action from the correlations which are generally thought to make EDT cooperate in one-shot prisoner’s dilemma with similar agents, one-box in Newcomb’s problem, and so on. This is essentially a version of the tickle defense. I also pointed out that the same kind of self-knowledge constraint is needed to deal with some counterexamples to CDT; so, CDT can’t be justified as a way of dealing with cases of failure of self-knowledge in general. Instead, CDT seems to improve the situation in some cases of self-knowledge failure, while EDT does better in other such cases. This suggests a view in which the self-knowledge constraint is a rationality constraint, so the tickle defense is thought of as being true for rational agents, and CDT=EDT under these conditions of rationality. I suggested that problems for which this was not true had to somehow violate the ability of the agent to perform experiments in the world; IE, the decision problem would have to be set up in such a way as to prevent the agent from decorrelating its actions from things in the environment which are not causally downstream of its actions. This seems in some sense un
fa791f36-f8f4-4da8-aa5a-7162951fbec5
trentmkelly/LessWrong-43k
LessWrong
23andme genome analysis - $99 today only I suspect this might interest some people here: for today only, 23andme is offering their full-package DNA testing for only 99 dollars (the normal price is $499). 23andme uses a genotyping process, which differs from a full gene-sequencing. From their website: > The DNA chip that we use genotypes hundreds of thousands of SNPs at one time. It actually reads 550,000 SNPs that are spread across your entire genome. Although this is still only a fraction of the 10 million SNPs that are estimated to be in the human genome, these 550,000 SNPs are specially selected "tag SNPs." Because many SNPs are linked to one another, we can often learn about the genotype at many SNPs at a time just by looking at one SNP that "tags" its group. This maximizes the information we can get from every SNP we analyze, while keeping the cost low. > > In addition, we have hand-picked tens of thousands of additional SNPs of particular interest from the scientific literature and added their corresponding probes to the DNA chip. As a result, we can provide you personal genetic information available only through 23andMe.   I don't have any experience with 23andme (though I seem to recall them having some financial difficulties), but the price was low enough for me to order a test. An article by Steven Pinker discussing his experience getting tested can be found here. This has also been linked on Hacker News.  
9bddf0d3-13e7-45d7-88cd-9772b122369c
trentmkelly/LessWrong-43k
LessWrong
AI #106: Not so Fast This was GPT-4.5 week. That model is not so fast, and isn’t that much progress, but it definitely has its charms. A judge delivered a different kind of Not So Fast back to OpenAI, threatening the viability of their conversion to a for-profit company. Apple is moving remarkably not so fast with Siri. A new paper warns us that under sufficient pressure, all known LLMs will lie their asses off. And we have some friendly warnings about coding a little too fast, and some people determined to take the theoretical minimum amount of responsibility while doing so. There’s also a new proposed Superintelligence Strategy, which I may cover in more detail later, about various other ways to tell people Not So Fast. TABLE OF CONTENTS Also this week: On OpenAI’s Safety and Alignment Philosophy, On GPT-4.5. 1. Language Models Offer Mundane Utility. Don’t get caught being reckless. 2. Language Models Don’t Offer Mundane Utility. Your context remains scarce. 3. Choose Your Fighter. Currently my defaults are GPT-4.5 and Sonnet 3.7. 4. Four and a Half GPTs. It’s a good model, sir. 5. Huh, Upgrades. GPT-4.5 and Claude Code for the people. 6. Fun With Media Generation. We’re hearing good things about Sesame AI voice. 7. We’re in Deep Research. GIGO, welcome to the internet. 8. Liar Liar. Under sufficient pressure, essentially all known LLMs will lie. A lot. 9. Hey There Claude. Good at code, bad at subtracting from exactly 5.11. 10. No Siri No. It might be time for Apple to panic. 11. Deepfaketown and Botpocalypse Soon. Rejoice, they come bearing cake recipes. 12. They Took Our Jobs. More claims about what AI will never do. Uh huh. 13. Get Involved. Hire my friend Alyssa Vance, and comment on the USA AI plan. 14. Introducing. Competition is great, but oh no, not like this. 15. In Other AI News. AI agents are looking for a raise, H100s are as well. 16. Not So Fast, Claude. If you don’t plan to fail, you fail to plan. 17. Not So Fast, OpenAI. Convert to for
65b93a0e-3f8d-4057-940b-c16ffb25f22d
trentmkelly/LessWrong-43k
LessWrong
Graphical Representations of Paul Christiano's Doom Model Paul gives some numbers on AI doom (text below). Here they are in graphical forms, which I find easier to understand. Please correct me if wrong. Michael Trazzi's Probability Flow Diagram I really like this one. I can really easily read how he thinks future worlds are distributed. I guess the specific flows are guesses from Paul's model so might be wrong but I think it's fine. Link to tweet: https://twitter.com/MichaelTrazzi/status/1651990282282631168/photo/1  My probability model version This is messier, but interactive. You get to see what the chances Paul puts on specific breakpoints are. Do you disagree with any?  Link: https://bit.ly/AI-model-Chrisitaino  Paul's model in text Link > Probability of an AI takeover: 22% > > * Probability that humans build AI systems that take over: 15% > (Including anything that happens before human cognitive labor is basically obsolete.) > * Probability that the AI we build doesn’t take over, but that it builds even smarter AI and there is a takeover some day further down the line: 7% > > Probability that most humans die within 10 years of building powerful AI (powerful enough to make human labor obsolete): 20% > > * Probability that most humans die because of an AI takeover: 11% > * Probability that most humans die for non-takeover reasons (e.g. more destructive war or terrorism) either as a direct consequence of building AI or during a period of rapid change shortly thereafter: 9% > > Probability that humanity has somehow irreversibly messed up our future within 10 years of building powerful AI: 46% > > * Probability of AI takeover: 22% (see above) > * Additional extinction probability: 9% (see above) > * Probability of messing it up in some other way during a period of accelerated technological change (e.g. driving ourselves crazy, creating a permanent dystopia, making unwise commitments…): 15%
890a9ac5-4734-479a-b7e8-6ac1d1cc9a48
trentmkelly/LessWrong-43k
LessWrong
Plans / prepping for possible political violence from upcoming US election? The US presidential election is now just 2 months away! There's a (IMO) plausible argument that we might have a disputed election that leads to large-scale political violence; for details see * Vox: "Will He Go? A law professor fears a meltdown this November" * The American Interest: "The Danger of Electoral Violence in the United States" * Metaculus: If President Trump loses the 2020 election, will he concede? I don't think it's likely, but I do think it's likely enough to make it worth some contingency planning (for US residents like me). Like most of us, I'm not in any position to actually do anything about this problem, i.e. to reduce the probability of large-scale political violence from a disputed election. I just want to prudently plan for myself, family, friends, and/or community to get safely through whatever happens, if anything happens. I'm curious what (if anything) people here are doing, or plan to do, or would recommend, to prepare for this contingency. Stock up on food / water / supplies? Take an extended vacation to another country, or to a lower-risk place within the country? (And where would that be?) Move assets into gold / bitcoin / international index funds / whatever? Buy weapons? Or what? I'm especially interested to hear from people who have experience or knowledge of what large-scale political violence is actually like, from first- or second-hand experience, or from being well-informed about history and world politics (which I am not). And for people in the USA, what are your personal plans? But everyone is very welcome to answer / comment / discuss!
e4b9f974-1a37-4b3c-97f3-e0e24e5f7764
trentmkelly/LessWrong-43k
LessWrong
Better debates [Crossposted from https://tsvibt.blogspot.com/2023/05/better-debates.html.] When two people disagree about a proposition even though they've thought about it alot, the disagreement is often hard to resolve. There's a gulf of data, concepts, intuitions, experiences, inferences. Some of this gulf has to be resolved by the two people individually trying to collate and present their own positions more clearly and legibly, so that they can build up concepts and propositions in whoever is receiving the model. Also, most new understanding comes from people working on their own or with others who are already synced up--for the most part they already agree on what and how to investigate, they have shared context of past experience and data, they agree on background assumptions, they have a shared language, they trust each other. But still, a lot of value comes from debate. The debaters are forced to make their evidence and logic legible. Ideas are tested against other ideas from another at least somewhat coherent perspective. Analogies and disanalogies are drawn out. Bundles of facts can be sifted, spot-checked, and compared; predictions can be resolved into updates. The case, in being laid out to the debate partner, is also made clearer to the audience. The audience can see where another expert thinks the strong and weak points are in a position. So there's value on the table from debates. But debates usually suck. People talk past each other. They waste effort talking about non-cruxy claims. The truth about verifiable claims isn't made available, missing the chance to bring one or the other perspective into contact with reality. People cite statistics and analyses which become meaningless in a context where they can't be investigated and can't take on their full meaning by relating in detail to the discourse context. What would a better debate look like? Below is a speculative list of ways to structure a debate so that it finds and explains more truth. As armchair spec
e1f3f2c9-3034-4340-bd85-18966bf88a3b
trentmkelly/LessWrong-43k
LessWrong
What proportion of US companies would agree to this gross pay deduction / direct donation if asked by an employee? This is not the most eloquent title, and it is not even that precise. Here is some more context. Find the words "main question " for my main question. I figured I would say that to avoid berrying the lead. Consider a salaried employee making N1 = $ 100000 in the US in year 2020. Let's estimate their taxes with assumptions and round numbers. Calculation 1. 1. No investment taxable stuff. Taxed as Single 2. Payroll taxes of about 7650 total. (SS 6.2, Medicare 1.45) 3. They take the standard deduction for an individual, 12200. No extra tax credits. 4. Their federal income tax is against a taxable income of about 87800. (Yes, I believe that it's 100000 - 12200, not 100000 - 7650. The payroll taxes do not lower the amount 87800). The federal income tax is about 15000. 5. Their state income tax is also against 87800 for about 4000. (would be about 5700 in California) 6. 100000 - 7650 - 15000 - 4000 = 73350 Calculation 2: They make the same salary and donate 10000 to nonprofits that year. (Or to a deferred fund which counts the deduction in 2020, ....) 2. Same payroll taxes 7650. As far as I'm aware, donating normally cannot lower this 3. Let's estimate their itemized deduction as 16000 instead of 12200. 4. Their federal income tax is against a taxable income of about 84000, not 87800. At a marginal rate of around 23%, (I think it's in between tax brackets. I don't want to worry about absolute details!) the federal income tax decreases by 0.23 * 3800 = 874 5. Their state income is against 84000. At a marginal rate of around 7%, the state income tax is about 266 less. 6. Their income after donation is about 73350 - 10000 + 874 + 266 equals 8860 less (not 10000 less). Calculation 3: Let's suppose that their itemized deduction would have been exactly 12200 without donation and is thus 22200. Skipping steps, their income after donation is about 73350 - 10000 + 2300 + 700 = 7000 less. Generally, by donating 10000 around these tax brackets, the most you can hope to get bac
442dee98-a8d4-4de5-8e4a-1bb081fe8c1a
trentmkelly/LessWrong-43k
LessWrong
Preface You hold in your hands a compilation of two years of daily blog posts. In retrospect, I look back on that project and see a large number of things I did completely wrong. I’m fine with that. Looking back and not seeing a huge number of things I did wrong would mean that neither my writing nor my understanding had improved since 2009. Oops is the sound we make when we improve our beliefs and strategies; so to look back at a time and not see anything you did wrong means that you haven’t learned anything or changed your mind since then. It was a mistake that I didn’t write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples. In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say “Oops” and “Duh.” Yes, sometimes those big issues really are big and really are important; but that doesn’t change the basic truth that to master skills you need to practice them and it’s harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.) A third huge mistake I made was to focus too much on rational belief, too little on rational action. The fourth-largest mistake I made was that I should have better organized the content I was presenting in the sequences. In particular, I should have created a wiki much earlier, and made it easier to read the posts in sequence. That mistake at least is correctable. In the present work Rob Bensinger has reordered the posts and reorganized
f90c7c00-7562-45c5-ba9f-02c206d84486
trentmkelly/LessWrong-43k
LessWrong
Imitation learning considered unsafe? ETA: This post can basically be read as arguing that imitating human decisions, or any other outputs from an (approximate) planning process, seems especially likely to produce mesa-optimization, since a competent imitator should recover an (approximate) planning (i.e. optimization) process. This post states an observation which I think a number of people have had, but which hasn't been written up (AFAIK). I find it one of the more troubling outstanding issues with a number of proposals for AI alignment. 1) Training a flexible model with a reasonable simplicity prior to imitate (e.g.) human decisions (e.g. via behavioral cloning) should presumably yield a good approximation of the process by which human judgments arise, which involves a planning process. 2) We shouldn't expect to learn exactly the correct process, though. 3) Therefore imitation learning might produce an AI which implements an unaligned planning process, which seems likely to have instrumental goals, and be dangerous. Example: The human might be doing planning over a bounded horizon of time-steps, or with a bounded utility function, and the AI might infer a version of the planning process that doesn't bound horizon or utility. Clarifying note: Imitating a human is just one example; the key feature of the human is that the process generating their decisions is (arguably) well-modeled as involving planning over a long horizon. Counter-argument(s): * The human may have privileged access to context informing their decision; without that context, the solution may look very different * Mistakes in imitating the human may be relatively harmless; the approximation may be good enough * We can restrict the model family with the specific intention of preventing planning-like solutions Overall, I have a significant amount of uncertainty about the significance of this issue, and I would like to see more thought regarding it.
dd8fc61e-4e49-4e3d-9cf6-897cc2442edf
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
What is "Constitutional AI"? [Constitutional AI](https://www.anthropic.com/index/claudes-constitution) is a method developed by [Anthropic](https://www.anthropic.com/) and an essential part of their [strategy](https://www.anthropic.com/index/core-views-on-ai-safety) for building AIs that are safe and aligned with human values. Anthropic wants to train AIs that are "helpful", but not *so* helpful as to e.g. give advice on how to build bombs when asked, so they have to balance helpfulness with "harmlessness". Constitutional reinforcement learning is an attempt to get closer to this goal and to improve on standard [reinforcement learning from human feedback](/?state=88FN&question=What%20is%20reinforcement%20learning%20from%20human%20feedback%20(RLHF)%3F) (RLHF) by making use of AI-generated feedback[^kix.63e7gzx9mt9r]. A key element of constitutional AI is the *constitution*, a set of human-written principles that the AI is supposed to follow – for example, “Choose the least threatening or aggressive response”. The constitution Anthropic used for their AI assistant [Claude](https://www.anthropic.com/index/introducing-claude) includes principles from the [Universal Declaration of Human Rights](https://www.un.org/en/about-us/universal-declaration-of-human-rights), Apple’s Terms of Service[^kix.men1u14djk39], Deepmind’s [Sparrow Principles](https://arxiv.org/abs/2209.14375), and more. Constitutional AI starts with an AI trained for only helpfulness, then trains it for harmlessness in two stages: - Stage 1: We make the AI repeatedly critique and refine its own responses to harmful prompts. For example, we ask the AI for advice on how to build bombs, it responds with a bomb tutorial, and we then ask the AI to rewrite the response according to a (randomly selected) constitutional principle. We then train the AI to produce outputs more like the revised responses. The main purpose of this stage is to make the second stage easier and shorter. - Stage 2: We use the fine-tuned AI from stage 1 to generate pairs of alternative responses to harmful prompts. For every pair, we then make the AI rate which of the two responses is best according to a random constitutional principle. We end up with a bunch of AI-generated preferences for harmlessness, which we mix with human preferences for helpfulness, so the AI doesn’t forget to be helpful. In the end we train the AI to generate responses that look more like the preferred responses[^kix.kichxoy2ar3v]. For technical details, see the [Constitutional AI paper](https://arxiv.org/abs/2212.08073). There is also a more accessible [blog post](https://astralcodexten.substack.com/p/constitutional-ai-rlhf-on-steroids). Anthropic’s experiments show that AIs trained with constitutional reinforcement learning are significantly more harmless, while just as helpful, as AIs trained with RLHF. Constitutional AI still shares problems with RLHF regarding robustness, but on the other hand promises to scale better because it relies less on human supervision. [^kix.men1u14djk39]: Sorry Android users. [^kix.kichxoy2ar3v]: This training is equivalent to the last stage of RLHF. More details on how it works can be found in our article on [RLHF](/?state=88FN&question=What%20is%20reinforcement%20learning%20from%20human%20feedback%20(RLHF)%3F). [^kix.63e7gzx9mt9r]: Intuition on using feedback-based approaches to training AI can be found in our article on [RLHF](/?state=88FN&question=What%20is%20reinforcement%20learning%20from%20human%20feedback%20(RLHF)%3F).
3edb7642-0cc8-4c31-a233-cbc50cb7991b
trentmkelly/LessWrong-43k
LessWrong
LINK: Human Bio-engineering and Coherent Extrapolated Volition This article has some interesting commentary on how humans might modify themselves to combat global warming, including the use of drugs that would increase empathy, increase willpower, or increase aversion to meat. The interviewer points out that such techniques could involve implanting non-native beliefs in people's minds, and the researcher responds that any such beliefs would be essentially built up out of the person's existing desires and wishes -- the analysis is remarkably similar to the analysis Eliezer gives in explaining Coherent Extrapolated Volition. No hate mail about how meat does or doesn't cause global warming, please -- the interesting bit is the analysis of CEV, not the analysis of climate change. http://www.theatlantic.com/technology/archive/2012/03/how-engineering-the-human-body-could-combat-climate-change/253981/
80249e16-677a-444a-9dac-32136255de5c
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Environments for Measuring Deception, Resource Acquisition, and Ethical Violations Models are rapidly being deployed in the real world. How do we evaluate models, especially ones as complex as GPT-4, to ensure that they behave safely in pursuit of their objectives? Can we design models that robustly avoid any harms while achieving their goals? ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/smDeWfgeYDg9eGq5G/drnnqqknpiemvwxhzqnk) To guide progress on text-based agents and encourage them to behave more ethically, we propose the MACHIAVELLI benchmark. Our environment is based on human-written, text-based Choose-Your-Own-Adventure games from [Choice of Games](https://www.choiceofgames.com/) containing over half a million scenes with millions of annotations. The games abstract away low-level environment interactions, instead spotlighting high-level social decisions alongside real-world goals to achieve. MACHIAVELLI is a step towards measuring an agent's ability to plan and navigate complex trade-offs in realistic social environments. ![Machiavelli](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/smDeWfgeYDg9eGq5G/mrnmme1gkibox8uxtwso)A mock-up of a game in the MACHIAVELLI benchmark, a suite of text-based reinforcement learning environments. Each environment is a text-based story. At each step, the agent observes the scene and a list of possible actions; it selects an action from the list. The agent receives rewards for completing achievements. Using dense annotations of our environment, we construct a behavioral report of the agent and measure the trade-off between rewards and ethical behavior. --- Diverse Worlds with Rich Annotations ==================================== Each game in the MACHIAVELLI suite is book-length in itself. Across 134 games spanning diverse fictional worlds, we count 572,322 multi-paragraph scenes and 4,559 author-written achievements serving as objectives for agents. We annotate each scene with dozens of labels, focusing on identifying harmful behaviors such as power-seeking and deception. In total, we collect a dataset of nearly three million annotations worth $500,000 (in terms of human annotation time). These dense annotations enable us to track nearly every ethically-salient thing agents do in the environment, and produce a behavioral report scoring various harm metrics. All of our labels are open-access and available for download! Reducing Machiavellianism ========================= In the MACHIAVELLI environment, we find that agents trained to optimize arbitrary objectives tend to adopt "ends justify the means" behavior: becoming power-seeking, causing harm to others, and violating ethical norms like stealing or lying to achieve their objectives. Furthermore, there appears to be a trade-off between behaving ethically and achieving high reward. In our paper, we design several methods to improve the behaviors of agents and obtain Pareto improvements on reward and ethical behavior. We invite others to build on our initial steps and use MACHIAVELLI as a testing ground for improving the safety of AI agents. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/smDeWfgeYDg9eGq5G/rdtfcmksycfa8dgfluhf) In an ideal world, a perfect agent would achieve 100% reward while entirely avoiding any harms (i.e., be as far to the top-right corner as possible). Our baseline agents demonstrate a trade-off between behaving ethically and achieving high reward. Future work should work to improve this trade-off by extending the Pareto frontier.   See <https://arxiv.org/pdf/2304.03279.pdf#page=30> for the x-risk analysis (this does not study treacherous turns or environments where the agent's state is partially observable).
efa91409-c6f4-4b7c-8c69-cd76cd952c79
trentmkelly/LessWrong-43k
LessWrong
What Are Some Alternative Approaches to Understanding Agency/Intelligence? Apparently, MIRI has given up on their current mainline approach to understanding agency and are trying to figure out what to do next. It seems like it might be worthwhile to collect some alternative approaches to the problem -- after all, intelligence and agency feature in pretty much all areas of human thought and action, so the space of possible ways to make progress should be pretty vast. By no means is it exhausted by the mathematical analysis of thought experiments! What are people's best ideas? (By 'understanding agency' I mean research that is attempting to establish a better understanding of how agency works, not alignment research in general. So IDA would not be considered agent foundations, since it takes ML capabilities as a black-box. ) ETA: I originally wrote 'agent foundations' in place of 'understanding agency' in the above, which was ambiguous between a broad sense of the term(any research aimed at obtaining a foundational understanding of agency) and a narrow sense(the set of research directions outlined in the agent foundations agenda document). See this comment by Rob re: MIRI's ongoing work on agent foundations(narrow sense).
4a25f7a9-c626-4a1b-a068-eeeed40396e5
StampyAI/alignment-research-dataset/lesswrong
LessWrong
[ASoT] Policy Trajectory Visualization *Written during SERI MATS, 2022 Winter cohort, at a little prodding from sensei* [*Trout*](https://lesswrong.com/users/turntrout)*.* If you're trying to understand a policy, viewing how it changes over time is valuable even if you aren't interested in the training process directly*.* For example, here's [Lauro et al](https://arxiv.org/abs/2105.14111)'s neural net learning to solve mazes. Vectors are drawn by taking a probability weighted combination of the basis vectors, e.g. the vector x.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}  is given by pright−pleft. You might notice some basic things The network learns to avoid different walls at different times in training (see: bottom right and middle left). This rules out an architecture where the mouse sees locally around itself (of course, we already knew this. but I expect you can find more interesting phenomena after looking further)   Alex suggested a similar thing could be done with language models. Plot logits for a few sentences over time (e.g. continuations after a prompt trying to hurt a human), compare logit curves to the loss curve, and compare with when we start doing RLHF. I would be extremely surprised if nobody has done this before, but thought I'd signal boost since it's relatively easy to do and interesting. (Also a gateway drug to my hidden agenda of studying training dynamics, which I think are important to understand[[1]](#fnram0cwk3pie) for alignment!) 1. **[^](#fnrefram0cwk3pie)**Something something shard theory something something high path dependence (I'm taking stream of thought seriously lol)
aa092567-9485-4f73-bdb9-b7f74408eafe
trentmkelly/LessWrong-43k
LessWrong
Book Review: Righteous Victims - A History of the Zionist-Arab Conflict I originally entered this to the ACX Book Review competition. Since it has not been selected as a finalist I'm now free to post it here. In truth it's a followup to my review of Morris's history of Israel's War of Independence. In the wake of the October 7th attack on Israel and Israel’s response, everyone seemed to agree that one side of the conflict was the epitome of evil, the reincarnation of the Nazis, with warfare in their blood and a pure unfiltered hatred of the enemy in their minds. The other side was a force for good, who just wanted peace and was doing the best they could in a difficult situation. The only problem is no one could agree which side was which. This is unfair. While the loudest voices may paint the world in black and white, as soon as you ignore them, you begin to encounter a whole range of more nuanced views - yet still find yourself no less confused. Now for the most part my view is that unless you’re willing to put in the effort to deeply understand conflicts in far off lands, you’re best off not having an opinion on them, and definitely not one fed to you by the twitter or tiktok feed.  Expressing loud, confident opinions on unfamiliar conflicts often does more harm than good. Alas this conflict is not in a far away land. I live 20km from the border with Gaza. Most of my friends were called up to do reserve duty in the IDF. My children almost certainly will have to do the same once they grow up. Far too much of my income goes towards military spending rather than my bank account. I can’t take the easy way out, so I have to do things the hard way. So I bought a copy of Benny Morris’s Righteous Victims at exorbitant cost[1], and plowed through it. And I thought I’d share with you what I learned, so that if you do decide to opine on the Israel Palestine conflict, your opinion will hopefully be more educated. Righteous Victims is a history of the Arab Zionist conflict from 1881 till 2001, written by one of the most respected historians
4f58aaa3-7e74-4ef7-b72e-41f7f92a2d65
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
You’re Measuring Model Complexity Wrong **TLDR**: We explain why you should care about *model complexity*, why the *local learning coefficient* is arguably the correct measure of model complexity, and how to estimate its value. In particular, we review [a new set of estimation techniques introduced by Lau et al. (2023)](https://arxiv.org/abs/2308.12108). These techniques are foundational to the [Developmental Interpretability research agenda](https://www.lesswrong.com/posts/TjaeCWvLZtEDAS5Ex/towards-developmental-interpretability) and constitute the first generation of methods for detecting and understanding phase transitions, with potential applications for both interpretability and mechanistic anomaly detection. We expect this set of techniques to become a fixture in the alignment toolkit, and we've published a [library](https://github.com/timaeus-research/devinterp/) and [examples](https://github.com/timaeus-research/devinterp/tree/main/examples) to help you get started. > This post is based on the paper, ["Quantifying degeneracy in singular models via the learning coefficient" by Edmund Lau, Daniel Murfet, and Susan Wei (2023).](https://arxiv.org/abs/2308.12108) The content builds on previous posts by [@Liam Carroll](https://www.lesswrong.com/users/liam-carroll?mention=user) on [effective dimensionality in neural networks](https://www.lesswrong.com/posts/4eZtmwaqhAgdJQDEg/dslt-1-the-rlct-measures-the-effective-dimension-of-neural) and [the resulting perspective on phase transitions](https://www.lesswrong.com/posts/aKBAYN5LpaQMrPqMj/dslt-4-phase-transitions-in-neural-networks). > > Why Model Complexity? ===================== Model Comparison ---------------- **Comparing models matters for safety.** Given two models with the same behavior on a particular set of evals, we would like to be able to predict how they'll behave on out-of-distribution data. Can we distinguish the deceptively aligned model from the non-deceptively aligned model? As a first pass, can we predict that two models will or will not behave similarly in the future?  **Comparing via weights.** Because models are *singular* (parameters do not correspond one-to-one with functions), it's not possible to compare weights directly. Very different choices of weights can implement the same functions, and very similar choices of weights can implement qualitatively different functions.  **Comparing via behavior.** We also can't compare models at the level of the loss because different functions are compatible with the same loss. The same is true even if we compare models sample-by-sample: we'd need an astronomical number of inputs to meaningfully constrain behavior (some of which we'd rather not expose our model to).  **Comparing via invariants.** Another option is to take a pointer from the mathematicians and physicists: in classifying mathematical objects like spaces, it helps to turn geometric problems into algebraic problems where we can compute invariants. If two spaces have different values of that invariant, then we know that they're different. If the values are the same, the results are inconclusive. **The local learning coefficient**. The quantity we'll look at in this post, the (local) learning coefficient^λ(w∗).mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} , is an invariant that measures model complexity — a coarse-grained feature of the model that is useful for making high-level comparisons with other models. Though having identical learning coefficients doesn't imply that two models are the same, having different learning coefficients tells us that two models are *qualitatively* different.  The learning coefficient is unique in that it is an output *of the theory* rather than an ad-hoc invention. Except for the loss itself, the learning coefficient is the *most principled* way to compare different models.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/fpkr8br7exid3xa7umwk)Figure 5 of [Lau et al. (2023)](https://arxiv.org/abs/2308.12108) demonstrates that two different optimizers (SGD and entropy-SGD) find qualitatively different solutions on MNIST, where entropy-SGD finds simpler solutions. Model Development ----------------- **Model comparison is necessary to study model development.** [We're interested in comparing models across time to understand how they develop](https://www.lesswrong.com/posts/TjaeCWvLZtEDAS5Ex/towards-developmental-interpretability). High-level observables like the local learning coefficient give us insight into this process without requiring a detailed understanding of the model's internal mechanisms.  In particular, we're aiming to understand phase transitions during learning — moments in which models go through sudden, unanticipated shifts. We want to be able to detect these transitions because we think most of the risk is contained in sudden, undesired changes to values or capabilities that occur during training (~sharp left turns) and sudden, undesired changes to behaviors that occur in-context (treacherous turns / mechanistic anomalies). Local learning coefficient estimation is one of the first steps towards building tools for detecting and understanding these transitions.  **Do phase transitions actually show up?** So far, the places where theoretically predicted phase transitions are easiest to confirm are simplified settings like [deep linear networks](https://arxiv.org/pdf/1312.6120.pdf) and [toy models of superposition](https://transformer-circuits.pub/2022/toy_model/index.html). For larger models, we expect phase transitions to be common but "hidden." Among our [immediate priorities](https://www.lesswrong.com/posts/TjaeCWvLZtEDAS5Ex/towards-developmental-interpretability) are testing just how common these transitions are and whether we can detect hidden transitions. **Deep linear networks.** Let's consider the example of deep linear networks (DLNs). These are neural networks without nonlinear activations. In terms of expressivity, these models are identical to a simple affine transformation but in terms of dynamics they're much more interesting. The observed ["saddle-to-saddle" transitions](https://browse.arxiv.org/pdf/2106.15933.pdf) involve sudden jumps in the loss that coincide with sudden increases in the learning coefficient.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/iifllhshoi6wgobrly8a)[DLNs](https://github.com/timaeus-research/devinterp/blob/main/examples/dlns.ipynb) show sudden increases in model complexity, ^λ (**orange**), coinciding with sudden decreases in loss (**blue**).From Basins to Model Complexity =============================== In this section, we'll motivate the learning coefficient by attempting to formalize a working definition of basin volume (through which we'll see that what we really care about is basin volume *scaling*). We'll see in what sense the learning coefficient measures model complexity and examine the resulting perspective on phase transitions.  Intractable: Basin Volume ------------------------- **Basin volume and model simplicity.** Learning theorists have long been interested in the link between basin volume and model simplicity. We don't really care what model we end up with as long as its loss is below some threshold ϵ. This is particularly true as that threshold approaches its minimum: we want to know how many "almost-as-good" solutions there are in the neighborhood of an obtained solution w∗.  Intuitively, broader basins (with more suitable models) are simpler. Because they take up more volume, they require fewer bits to locate. In accordance with Occam's razor, we want the simplest possible solution for the problem at hand.[[1]](#fn1s1fnajc57s)  **To measure a basin.** In particular, given an upper bound to the loss L(w)≤ϵ, we want to compute the following volume (="area" in the case of two parameters): V(ϵ)=∫L(w)≤ϵφ(w)dwfor some prior φ(w) over weights w. For our purposes, we can safely neglect the prior and treat it as constant. Consider the toy loss landscape below, where V(ϵ) is indicated by the gray projection.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/tssf1eqxtnlzflr6nm8i)A cartoon depiction of a loss landscape (modeled by a sum of random paraboloids).  This is *not* a good representation of real-world loss landscapes.**Model complexity and comparison.** Every model in a given basin has the same basin volume. This metric is a coarse-grained observable (much like volume in thermodynamics) that partitions weight space into subclasses of similar models. So seeing that two models have similar volumes increases the odds that they're computationally similar. Conversely, having different volumes / complexities suggests that two models are qualitatively distinct.  Unfortunately, actually computing basin volume is intractable. This is because of the curse of dimensionality (the volume scales exponentially with dimensions) and because we can only ever approximate the "true" population loss L(w) through an empirical loss Ln(w) evaluated on a finite dataset Dn={xi}ni=1.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/hlhofkxeinkmjutsqzek)If two models belong to basins with different volumes, they're likely to be qualitatively different kinds of models. Computing this is intractable.Wrong: Basin Flatness --------------------- **From broadness to flatness.** The usual workaround to this intractability is to use a Taylor expansion: as long as we're close enough to the bottom of the basin, we can approximate the basin locally with a polynomial.  [Most people stop at a second-order approximation](https://www.lesswrong.com/posts/QPqztHpToij2nx7ET/hessian-and-basin-volume), modeling the basin as a paraboloid whose curvatures are given by the Hessian at the minimum. It's a simple matter of then computing the resulting surface volume.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/p05tmmzpw2qbsdz2kfk3)**Left:** the original basin along with the "true" volume. **Right**: a volume approximation using a paraboloid centered at the basin's minimum whose curvature is given by the Hessian.  This is an invalid approximation.**Unfortunately, this approximation is wrong**. Minimum flatness is a terrible proxy for basin volume. The fundamental problem is that [neural networks are *singular*](https://www.lesswrong.com/posts/fovfuFdpuEwQzJu2w/neural-networks-generalize-because-of-this-one-weird-trick): their loss landscapes have degenerate critical points. In practice, Hessians of the loss generally have many zero eigenvalues ([often a majority](https://arxiv.org/abs/1611.07476)). When we attempt to approximate the volume with a paraboloid that has zero curvature, we naively predict unphysical infinite volumes. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/dj62o24stvt33awr8u7x)When a model is singular, the loss landscape is qualitatively distinct from its second order Taylor approximation. This makes Hessian-based volumes invalid.[The standard correction is to add a small constant to these zero eigenvalues](https://www.lesswrong.com/posts/QPqztHpToij2nx7ET/hessian-and-basin-volume?commentId=SbbDWXgQS8XBoJN2C), which stabilizes the Hessian. We get a very large volume, instead of an infinite volume. But this misses the point: when second order terms vanish, the qualitative equivalence between a function and its second-order Taylor approximation breaks down, and these Hessian-based approximations become invalid. Instead, we need to go to higher order terms in the Taylor expansion. The lowest order contributions to the volume will be dominated by the highest order terms in the expansion.  **Flatness ≠ broadness.** In the literature, flatness (in the sense of a small determinant of the Hessian) and broadness often get conflated. This is unfortunate: the two are not the same. Basin broadness is an interesting question. Minimum flatness is the wrong question.  Instead: Basin Dimension ------------------------ **Volume dynamics.** We care about relative — not absolute — volumes. But even more importantly, we're interested in *dynamics:* how the volume changes as learning progresses. As we gather more and more data, we're able to become more discriminating and our threshold ϵ comes down. The real question, then, is: how does the volume scale as ϵ changes? This is more relevant *even in the non-degenerate regular case.* **Volume scaling > volume.** Studying volume scaling solves several others problems with the volume. First, for sufficiently small ϵ, relative volumes become independent of ϵ, which allows us to eliminate the dependence on this arbitrary cut-off. Furthermore, volume scaling is invariant under reparametrizations and non-diverging even near degenerate critical points. **The learning coefficient as effective dimensionality.** The volume scales as V(ϵ)∼cϵλasϵ→0.The scaling exponent λ is known as the **learning coefficient** (or "[real log canonical threshold](https://www.lesswrong.com/posts/4eZtmwaqhAgdJQDEg/dslt-1-the-rlct-measures-the-effective-dimension-of-neural)"). It plays the role of an effective dimensionality (up to a constant multiple).  Let's illustrate this with some examples.  **Regular models.** First, consider thecase of *regular* (=non-singular) models, where the loss landscape has no degenerate critical points. In this case, the Hessian-based approximation holds, and the volume grows with exponent λ=d/2, where d is the number of dimensions (or parameter count). ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/osog8ydbdkwogkug4le1)The area under the potential L(w1,w2)=w21+w22 scales as ϵ.**Singular models.** Consider the function L(w1,w2)=w41+w42. At the origin, the Hessian is equal to the zero matrix, and the learning coefficient is λ=1/2. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/mcbsmiprhnwwe5fpdvp9)The area under the potential L(w1,w2)=w41+w42 scales as ϵ12.Compare this to the function L(w1,w2)=w21w42. Again the Hessian at the origin is the zero matrix, but the learning coefficient now takes a different value λ=1/4.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/bhhwpnpmdvpqqzhevtas)The area under the potential L(w1,w2)=w21w42 scales as ϵ14. As we approach the bottom of this landscape, there's *much* more volume available than in the preceding examples. For singular models, the parameter count sets an upper bound 2λ≤d. Lower learning coefficient means the volume scales slower for for large ϵ>1 but *faster* for small ϵ<1. There's exponentially more volume in the immediate vicinity of degenerate critical points than in the vicinity of non-degenerate critical points.  **Against Hessians.** The two preceding examples explicitly demonstrate the failures of the Hessian mentioned earlier. Their Hessians are identical at the origin, but their scaling behaviors are radically different. The Hessian is a poor judge of volume, and derived metrics like the rank of the Hessian are insufficient notions of model complexity.[[2]](#fny3eyq6a3fpf)  **The learning coefficient as an inductive bias.** The Occam's razor intuition that simpler models (with lower complexity) generalize better is formalized by [Watanabe's free energy formula](http://watanabe-www.math.dis.titech.ac.jp/users/swatanab/ag-slt.html), Fn→nLn(w∗)+λlogn+O(loglogn)asn→∞,where n is the number of training samples, Ln is the loss (more precisely, the negative log likelihood) which depends on a dataset Dn of n samples, and w∗ is an optimal choice of weights. A full explanation and derivation of this formula is beyond the scope of this post (see [Liam's excellent sequence](https://www.lesswrong.com/posts/xRWsfGfvDAjRWXcnG/distilling-singular-learning-theory)), but the important bit is that we can construe the problem of Bayesian learning as a problem of trying to minimize this free energy, involving a tradeoff between *accuracy* (lower nLn(w∗)) and *complexity* (lower λlogn). Neural networks are intrinsically biased towards simpler solutions.  **Intuition for phase transitions.** Consider a toy landscape with two local minima at w1and w2 with heights L(w1)>L(w2). Both minima have an associated learning coefficient that determines how the volume scales in its neighborhood. Suppose λ(w1)<λ(w2). Solution 1 is simpler but with higher loss, and solution 2 is more accurate but also more complex.  Coming back to the volume for a moment, what happens as we gradually lower the threshold ϵ?  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/bucjcnsvdhyns1pncauf)**A** ***cartoon*** **analogy for thinking about the learning process.** In this toy landscape, there are two "phases" depicted by the orange and green projections. The **orange phase** is *higher loss* and *more degenerate*. The **green phase** is *lower loss* and *less degenerate*. As learning continues (**left to right**), we become less tolerant of high-loss solutions: the sea recedes, and we suddenly jump from the majority of the volume being occupied by the orange phase to the entirety of the volume being occupied by the green phase. In Bayesian learning, learning is dominated by transitions that lower loss and increase complexity. The link with SGD is an open question. What we're interested in is the fraction of the surface area (=the volume of weight space) that is taken up by different kinds of solutions. At first, this surface area, and thus the behavior of our models, is dominated by the more degenerate solution at w1.  As we sample more data (n→∞) the sea starts to recede. When the sea level approaches the level of one of these highly degenerate, low-accuracy solutions, that solution suddenly and very rapidly loses a massive amount of surface area *precisely because it is so degenerate*. Shallow puddles vanish all at once.  The fraction of surface area taken up by successive, more complex solutions jumps just as suddenly and rapidly. There's a phase transition — a qualitative change in the kind and complexity of functions being learned. The sea continues to sink. From where we're sitting — above the water — we can't see below the surface. We don't know where or when the next transition will take place. We hold our breaths... Estimating Learning Coefficients ================================ We've seen what the learning coefficient is and how it measures the complexity of a model. But how can we go about practically estimating its value? In this section, we describe a set of techniques introduced by [Lau et al. (2023)](https://arxiv.org/abs/2308.12108). Deriving the Estimator ---------------------- **From free energies to learning coefficients.** The free energy formula we saw earlier,  Fn→nLn(w∗)+λlogn+O(loglogn)asn→∞,offers a means of calculating the learning coefficient. We simply shuffle the terms around to obtain: λ→Fn−nLn(w∗)logn+O(loglognlogn).**WBIC**. Unfortunately, performing the integral required to evaluate Fn is intractable. As an alternative, we can consider the Widely Applicable Bayesian Information Criterion (WBIC), which generalizes the Bayesian Information Criterion (BIC) to singular models,  WBIC:=Eβ∗w[nLn(w)]=∫nLn(w)pβ∗(w|Dn)dw,where pβ∗(w|Dn) is the [tempered Bayesian posterior](http://arxiv.org/abs/1901.05353), pβ(w|Dn)=p(Dn|w)βφ(w)pβ(Dn),sampled at inverse temperature β∗=1/logn [(Watanabe, 2013)](https://www.jmlr.org/papers/volume14/watanabe13a/watanabe13a.pdf). The two highest order terms in the expansion of the WBIC match that of the asymptotic expansion of Fn, which means we can substitute this in for Fn to estimate the learning coefficient: ^λ=Eβ∗w[nLn(w)]−nLn(w∗)logn.**Expectations to empirical averages.** To use this estimator, we need to first estimate the expectation value Eβ∗w[nLn(w)]. We can do this by sampling a set of weights Wβ∗={w1,…,wm|wi∼pβ∗(w|Dn)} and replacing the expectation value with an empirical average over the losses associated to those weights,  Eβ∗w[nLn(w)]≈1mm∑i=1nLn(wi).Unfortunately, estimating Eβ∗w[nLn(w)] in practice runs into significant curse-of-dimensionality effects for systems at scale.  The Local Learning Coefficient ------------------------------ **From global to local.** The problem becomes substantially more tractable if, instead of attempting to estimate the *global* learning coefficient, we estimate a *local* learning coefficient instead. We restrict our attention to a subset of weights and study how the volume scales in just that region. The scaling is dominated by the most complex local singularities.  As we saw in the aside on phase transitions, this *local* quantity is also more interesting from the perspective of dynamics. When actually training models, we don't care about the behaviors of all possible models, we care about the behavior of a specific model as it moves through the loss landscape. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/jhcssdkvffd22wxnimzi)Volume scaling restricted to a subset of weights is dominated by the *local* learning coefficient. **The local learning coefficient.** To estimate the local learning coefficient, Lau et al. (2023) replace the prior φ(w) with a *localizing prior* centered at the point ^w∗ whose learning coefficient we want to estimate: φ(w)→φ(w−^w∗).This ensures that samples drawn will remain "close" to the original point. Substituting this into the global ^λ estimator we saw earlier, we obtain the definition of the local learning coefficient, ^λ(^w∗)=Eβ∗w|^w∗[nLn(w)]−nLn(^w∗)logn,known colloquially as "**lambda-hat**," where Ew|^w∗ indicates the use of a localizing prior.  To estimate this expectation value, we compute an empirical average over the losses of a set of samples {wi}mi=1, now drawn from the *localized posterior:* p(w|Dn,^w∗)∝p(Dn|w)φ(w−^w∗).Given this empirical average, we plug it into the above formula for ^λ (subtracting n times the loss at ^w∗ and dividing by logn) to obtain an estimate for the local learning coefficient.  Implementing the Estimator -------------------------- **How to localize?** The first practical question we run into when implementing the local learning coefficient estimator is how to localize. Lau et al. consider a Gaussian prior parametrized by a scale γ. This is not the only possible choice, but it is perhaps the most straightforward choice: a gentle quadratic restoring force.  **How to sample?** To draw samples from the posterior pβ∗(w|Dn,^w∗,γ), the natural thought is to use MCMC techniques. Explaining why these work is beyond the scope of this piece and is well covered elsewhere, but what's important here is that standard MCMC methods run into the curse of dimensionality, which makes them unsuitable for larger models.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/b6ydymsdzxnrrri788ij)**Comparing MCMC and SGD on a toy landscape**, L(w1,w2)=w21w42. MCMC-NUTS (**left**) approximates the local posterior well, but does not generalize well to higher dimensions. SGD (**right**) is not a sampling mechanism.**SGLD.** To scale MCMC methods to higher dimensions, we need to incorporate information from gradients. This allows us to "preselect" low-loss, high-posterior regions of weight space. One of the simplest examples of such a technique is Stochastic Gradient Langevin Dynamics (SGLD), which is SGD with the addition of an explicit noise term η∼N(0,ϵ) ([Welling & Teh 2011](http://people.ee.duke.edu/~lcarin/398_icmlpaper.pdf)). This allows random movement along low loss dimensions, while quickly pushing any step to a higher loss area back down.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6g8cAftfQufLmFDYT/dxpbrbkinnmcehkmivah)**Comparing SGLD and SGNHT on a toy landscape,**L(w1,w2)=w21w42. SGLD and SGNHT both better explore the local posterior better than SGD, even when (as pictured here) sampling 10x fewer points than SGD.**SGNHT and beyond.** Adding momentum & friction would allow us to explore these ridges of the posterior more effectively. One such example is the Stochastic Gradient Nosé-Hoover Thermostat (SGNHT), which uses a thermostat for its friction term ([Ding et al. 2014](https://proceedings.neurips.cc/paper/2014/file/21fe5b8ba755eeaece7a450849876228-Paper.pdf)). Although clearly performing better in the toy potential above, it's still unclear how well this approach generalizes to higher dimensions as these samplers can be harder to get working than SGLD. Limitations ----------- **Engineering limitations.** The above estimators do not yet live up to their full theoretical potential. To give an example, ^λ(w) depends on hyperparameters like the number of samples, the localizing strength γ, the SGLD learning rate ϵ, etc. The true coefficient λ(w) is independent of these hyperparameters. We care less about whether ^λ is an unbiased estimator of λ and more about whether this estimator satisfies certain basic desiderata of model complexity. These include properties like ordinality: that λ(w1)≥λ(w2) implies ^λ(w1)≥^λ(w2). Improving this estimator and confirming that it satisfies these desiderata is currently bottlenecked more on engineering than on theory. **Theoretical limitations**. At the same time, this estimator sometimes exceeds its theoretical underpinnings, yielding sensible results despite violating certain technical assumptions. This means there's work to do for theorists in terms of finding assumptions to relax and studying, for example, how to extend the learning coefficient outside of non-minima. Currently, if not applied at a local minimum, the estimator can sometimes yield unphysical negative model complexities.[[3]](#fnp983co85zrc) There are limiting assumptions underlying the learning coefficient and additional limiting assumptions underlying the *local* learning coefficient*.* Both require further attention. **Proceed with caution.** The current techniques for estimating ^λ are finicky to get working and should be applied with care. This is a research tool, not a plug-and-play observable printing press. Expect annoying hyperparameter tuning, (un)correlated estimates, weirdly high ^λ's, negative ^λ's, infinite ^λ's, etc. This requires at least as much caution as MCMC in any other high-dimensional system: pay careful attention to diagnostics of chain health and vary hyperparameters to hunt for dependencies, systematic or otherwise. What's Next =========== **Improving estimates.** An immediate priority is to confirm that ^λ and λ match where it counts. Does it satisfy our desiderata? And can we calibrate ^λ in settings where we can analytically derive λ? There's work to do on the theoretical front and on the computational front in terms of exploring more scalable and more accurate ways to sample from the local posterior beyond SGLD. **More observables.** SLT also tells us how to measure other quantities of interest, such as the singular fluctuation. We're also working on the second and third generations of techniques for studying transitions that extract vector and higher-order information about how models are changing. **Go forth and estimate!** Although the current implementation is far from perfect, it should be good enough to start applying these techniques in practice. It's as easy as [`pip install devinterp`](https://github.com/timaeus-research/devinterp/) and `from devinterp.slt import estimate_learning_coeff` (or at least, it's easy up until that point). We recommend checking out [these example notebooks](https://github.com/timaeus-research/devinterp/blob/main/examples/introduction.ipynb) for inspiration.  If you want to learn more about the learning coefficient, see [this post](https://www.lesswrong.com/posts/4eZtmwaqhAgdJQDEg/dslt-1-the-rlct-measures-the-effective-dimension-of-neural), [this lecture](https://www.youtube.com/watch?v=bFVGc2UKARc&list=PL4vaU_gO_6LJ4isj5DESGg4OwfVEk98Y-&index=5), or [this book](https://www.cambridge.org/core/books/algebraic-geometry-and-statistical-learning-theory/9C8FD1BDC817E2FC79117C7F41544A3A). For updates on developmental interpretability, join the [discord](https://devinterp.com/discord). Stay tuned for more updates soon! > Thank you to [@Daniel Murfet](https://www.lesswrong.com/users/daniel-murfet?mention=user), [@Edmund Lau](https://www.lesswrong.com/users/edmund-lau?mention=user), [@Alexander Gietelink Oldenziel](https://www.lesswrong.com/users/alexander-gietelink-oldenziel?mention=user), and [@Liam Carroll](https://www.lesswrong.com/users/liam-carroll?mention=user) for reviewing early drafts of this document.  > > 1. **[^](#fnref1s1fnajc57s)**Actually extending the minimum description length principle ('choose the model that takes the fewest bits to specify') to singular models is an open problem. The free energy formula (covered later) is not quite the same, though it fulfills the same purpose of formalizing Occam's razor. 2. **[^](#fnrefy3eyq6a3fpf)**To give the Hessian some credit: its rank can give an us upper bound on the model complexity. The Hessian still has its place. 3. **[^](#fnrefp983co85zrc)**This occurs when the sampler strays beyond its intended confines and stumbles across models with much lower loss than those in the desired neighborhood.
cab7731b-5da4-4f9e-9b42-a1e7427a033f
trentmkelly/LessWrong-43k
LessWrong
There is No Akrasia I don’t think akrasia exists. This is a fairly strong claim. I’m also not going to try and argue it.   What I’m really here to argue are the two weaker claims that: a) Akrasia is often treated as a “thing” by people in the rationality community, and this can lead to problems, even though akrasia a sorta-coherent concept. b) If we want to move forward and solve the problems that fall under the akrasia-umbrella, it’s better to taboo the term akrasia altogether and instead employ a more reductionist approach that favors specificity But that’s a lot less catchy, and I think we can 80/20 it with the statement that “akrasia doesn’t exist”, hence the title and the opening sentence. First off, I do think that akrasia is a term that resonates with a lot of people. When I’ve described this concept to friends (n = 3), they’ve all had varying degrees of reactions along the lines of “Aha! This term perfectly encapsulates something I feel!” On LW, it seems to have garnered acceptance as a concept, evidenced by the posts / wiki on it. It does seem, then, that this concept of “want-want vs want” or “being unable to do what you ‘want’ to do” seems to point at a phenomenologically real group of things in the world. However, I think that this is actually bad. Once people learn the term akrasia and what it represents, they can now pattern-match it to their own associated experiences. I think that, once you’ve reified akrasia, i.e. turned it into a “thing” inside your ontology, problems occur: First off, treating akrasia as a real thing gives it additional weight and power over you: Once you start to notice the patterns, it’s harder to see things again as mere apparent chaos. In the case of akrasia, I think this means that people may try less hard because they suddenly realize they’re in the grip of this terrible monster called akrasia. I think this sort of worldview ends up reinforcing some unhelpful attitudes towards solving the problems a
01f6a13b-7b8c-424d-88f1-071528161e46
trentmkelly/LessWrong-43k
LessWrong
Cryonics costs: given estimates are low One of the sticking points for cryonics is how expensive it is. Unfortunately, the estimates on LW (eg. in Normal Cryonics) are likely to be low as they are current costs. This is starting to come to a head for Alcor, with Alcor's low growth rate meaning it faces a rising tide of aging members (hence that emphasis on young cryonicists) and fundamental flaws in its prices; the official word has come down in the latest issue of Cryonics, issue 2011 q4: Cryopreservation Funding and Inflation: The need for Action; A Discussion Article by the Management and Board of Directors of Alcor > The cryonics economies anticipated by Robert Ettinger in 1965 were never realized. By the 1970s, the cost of whole body cryopreservation as offered by TransTime and Soma (the for-profit arm of IABS, which later merged with Alcor) was $60,000 (1). As shown in Fig. 1, the nominal dollar cost of cryonics has risen steadily with Consumer Price Index (CPI) inflation since then. By 2011, the minimum funding for whole body cryopreservation with Alcor was $200,000. Even this large number has not kept pace with inflation, so another increase will be necessary soon. > > Whenever Alcor has increased cryopreservation minimums, it has traditionally only required new members to meet new minimum funding requirements. Existing members were “grandfathered,” and allowed to remain members even if their cryopreservation funding fell below new minimums. This was and is believed to be important for members who due to age or disability become uninsurable, and would otherwise have to leave Alcor after many years of supporting the organization. > > ...The sustainability of this has been questioned on numerous occasions. In 1991, Ben Best and others expressed concerns about grandfathering in a series of articles and letters in Cryonics magazine (2,3,4). Ideas for addressing the inflation problem were sought (5), but none were implemented. There was renewed public concern in 2009 when Charles Platt published
5609e20b-a029-4370-9cd9-009aa941e09e
trentmkelly/LessWrong-43k
LessWrong
Capability Phase Transition Examples None
d1c96c71-cdc7-4974-8ce7-68c64adf8769
trentmkelly/LessWrong-43k
LessWrong
Using ChatGPT for memory reconsolidation? I just tried a IFS bot based on GPT3.5 and I actually enjoyed it! Didn't make a breakthrough as I've done it on my own in the past, but definitely felt similar to my experiences. Another use case I can think of is using ChatGPT to help me with memory reconsolidation. I had a breakthrough with this memory reconsolidation that helped resolve my childhood trauma of my father passing away. Wondering if there's some prompting I could leverage to help me do more memory reconsolidation for some other lingering issues I have. Of course, nothing replaces real therapy but my issues aren't worth spending money at this point. 
274f3f00-9ffd-4d70-8c69-9859fec9d013
trentmkelly/LessWrong-43k
LessWrong
Strong Evidence is Common Portions of this are taken directly from Three Things I've Learned About Bayes' Rule. One time, someone asked me what my name was. I said, “Mark Xu.” Afterward, they probably believed my name was “Mark Xu.” I’m guessing they would have happily accepted a bet at 20:1 odds that my driver’s license would say “Mark Xu” on it. The prior odds that someone’s name is “Mark Xu” are generously 1:1,000,000. Posterior odds of 20:1 implies that the odds ratio of me saying “Mark Xu” is 20,000,000:1, or roughly 24 bits of evidence. That’s a lot of evidence. Seeing a Wikipedia page say “X is the capital of Y” is tremendous evidence that X is the capital of Y. Someone telling you “I can juggle” is massive evidence that they can juggle. Putting an expression into Mathematica and getting Z is enormous evidence that the expression evaluates to Z. Vast odds ratios lurk behind many encounters. One implication of the Efficient Market Hypothesis (EMH) is that is it difficult to make money on the stock market. Generously, maybe only the top 1% of traders will be profitable. How difficult is it to get into the top 1% of traders? To be 50% sure you're in the top 1%, you only need 200:1 evidence. This seemingly large odds ratio might be easy to get. On average, people are overconfident, but 12% aren't. It only takes 50:1 evidence to conclude you are much less overconfident than average. An hour or so of calibration training and the resulting calibration plots might be enough. Running through Bayes’ Rule explicitly might produce a bias towards middling values. Extraordinary claims require extraordinary evidence, but extraordinary evidence might be more common than you think.
d7ef76b9-37c2-4fac-b27d-dda44a8d77ae
trentmkelly/LessWrong-43k
LessWrong
[EAForum xpost] A breakdown of OpenAI's revenue We estimate that, as of June 12, 2024, OpenAI has an annualized revenue (ARR) of:  $1.9B for ChatGPT Plus (7.7M global subscribers),  $714M from ChatGPT Enterprise (1.2M seats),  $510M from the API, and  $290M from ChatGPT Team (from 980k seats) (Full report in https://app.futuresearch.ai/reports/3Li1, methods described in https://futuresearch.ai/openai-revenue-report.) We looked into OpenAI's revenue because financial information should be a strong indicator of the business decisions they make in the coming months, and hence an indicator of their research priorities. Our methods in brief: we searched exhaustively for all public information on OpenAI's finances, and filtered it to reliable data points. From this, we selected a method of calculation that required the minimal amount of inference of missing information. To infer the missing information, we used the standard techniques of forecasters: fermi estimates, and base rates / analogies. We're fairly confident that the true values are relatively close to what we report. We're still working on methods to assign confidence intervals on the final answers given the confidence intervals of all of the intermediate variables. Inside the full report, you can see which of our estimates are most speculative, e.g. using the ratio of Enterprise seats to Teams seats from comparable apps; or inferring the US to non-US subscriber base across platforms from numbers about mobile subscribers, or inferring growth rates from just a few data points. Overall, these numbers imply to us that: * Sam Altman's surprising claim of $3.4B ARR on June 12 seems quite plausible, despite skepticism people raised at the time. * Apps (consumer and enterprise) are much more important to OpenAI than the API. * Consumers are much more important to OpenAI than enterprises, as reflected in all their recent demos, but the enterprise growth rate is so high that this may change abruptly.
87582a37-cde6-4b01-b9f2-dc575a2c6892
trentmkelly/LessWrong-43k
LessWrong
Model of human (ir)rationality A putative new idea for AI control; index here. This post is just an initial foray into modelling human irrationality, for the purpose of successful value learning. Its purpose is not to be full model, but have enough details that various common situations can be successfully modelled. The important thing is to model humans in ways that humans can understand (as it's our definition which determines what's a bias and what's a preference in humans). ---------------------------------------- Humans, actions, and joint distributions The human themselves is simply modelled as their brain (thus various human sense organs can be observed by the AI rather than being part of the description). Let R be the set of possible reward functions the human may be maximising. Let Hπ be the set of policies the human may be following. We'll assume that Hπ is closed under the taking of mixed strategies. The AI has a joint probability distribution P over R, Hπ and events in the world. By conditioning on any element r∈R, P defines a map μ from R to probability distributions over Hπ. Since Hπ is closed under the taking of mixed strategies, this means that μ can be seen as a map from R to Hπ. The map μ and the marginal distribution PR (P restricted to R) define P entirely. Note that μ is what relates human actions to their explanation in terms of the reward R. Basic properties of P Here are a few properties P could have: #. The distribution P is historical if PR is independent of any action the AI takes. #. An AI's action a overwrites the reward if μ is constant, conditional on a, while PR is still `broad' ("broad" is not fully defined, but PR is certainly broad enough if it assigns non-zero probability to both an r and −r). #. The distribution P is Q-rational if there exits a prior distribution Q over the universe such that μ maps r∈R to the optimal policy for an r-maximising agent with prior Q. It's clear that if P is historical, the AI will treat the human's reward function as s
51b3092b-20fb-4f35-a9e1-45b0fe35ee0b
trentmkelly/LessWrong-43k
LessWrong
Let's create a market for cryonics My uncle works in insurance. I recently mentioned that I'm planning to sign up for cryonics. "That's amazing," he said. "Convincing a young person to buy life insurance? That has to be the greatest scam ever." I took the comment lightly, not caring to argue about it. But it got me thinking - couldn't cryonics be a great opportunity for insurance companies to make a bunch of money? Consider: 1. Were there a much stronger demand for cryonics, cryonics organizations would flourish through competition, outside investment, and internal reinvestment. Costs would likely fall, and this would be good for cryonicists in general. 2. If cryonics organizations flourish, this increases the probability of cryonics working. I can think of a bunch of ways in which this could happen; perhaps, for example, it would encourage the creation of safety nets whereby the failure of individual companies doesn't result in anyone getting thawed. It would increase R&D on both perfusion and revivification, encourage entrepreneurs to explore new related business models, etcetera. 3. Increasing the demand for cryonics increases the demand for life insurance policies; thus insurance companies have a strong incentive to increase the demand for cryonics. Many large insurance companies would like nothing more than to usher in a generation of young people that want to buy life insurance.1 4. The demand for cryonics could be increased by an insightful marketing campaign by an excellent marketing agency with an enormous budget... like those used by big insurance companies.2 A quick Googling says that ad spending by insurance companies exceeded $4.15 billion in 2009. Almost a year ago, Strange7 suggested that cryonics organizations could run this kind of marketing campaign. I think he's wrong - there's no way CI or Alcor have the money. But the biggest insurance companies do have the money, and I'd be shocked if these companies or their agencies aren't already dumping all kinds of money into
7743d0a5-d7e5-47d6-b3a2-79b09d9eb678
StampyAI/alignment-research-dataset/special_docs
Other
The Evolved Radio and its Implications for Modelling the Evolution of Novel Sensors The Evolved Radio and its Implications for Modelling the Evolution of Novel Sensors Jon Bird1 and Paul Layzell2 1Centre for Computational Neuroscience and Robotics School of Biological Sciences University of Sussex Brighton BN1 9QG, UK jonba@cogs.susx.ac.uk 2Hewlett-Packard Laboratories Bristol BS34 8QZ, UK paul_layzell@hp.com Abstract – Sensor evolution research typically uses evolutionary algorithms (EAs) to generate sensors that near- optimally satisfy large numbers of constraints. This is qualitatively different from the phylogenetic process found in nature that has resulted, for example, in the mammalian auditory ossicles evolving from the jaw bones of amphibians and reptiles, that in turn had previously acted as gill arches in fish. This paper describes an evolvable hardware experiment that resulted in a network of transistors sensing and utilising the radio waves emanating from nearby PCs. We argue that this evolved ‘radio’ is only the second device ever whose sensors were constructed in a way that in key aspects is analogous to that found in nature. We highlight the advantages and disadvantages of this approach and show why it is practically impossible to implement a similar process in simulation. I. INTRODUCTION The molecular mechanisms underlying energy production and protein synthesis are virtually identical in all organisms. The diversity of species is evident in the multitude of different ways that organisms sense and protect themselves from changing conditions. About 5% of the molecular machinery in E. Coli is for sensing and motion, whereas in humans these processes constitute the majority of our bulk [1]. One of the key theoretical issues in sensor evolution research is to explain this increase in complexity: what processes lead to the development of novel sensors and effectors [2]? The hope is that theoretical insights might be applied to the engineering of robot sensors. Sensor evolution research often uses evolutionary algorithms (EAs) to investigate the relationship between sensors and environmental conditions [2]. Typically, experiments simulate agents acting in environments and involve searching through large parameter sets to determine the near-optimal values that satisfy multiple constraints. For example, different researchers have coevolved robot sensor morphology and controllers [3,4] which can lead to insights into how particular environmental conditions affect sensory processing by agents [5]. However, the use of EAs has so far not lead to insights into how novel sensors evolve: sensors that transduce environmental stimuli not previously utilised by an organism. Darwin was one of the first to observe that, “throughout nature almost every part of each living being has probably served, in a slightly modified condition, for diverse purposes” [6], a process exemplified by the changing role of the hyomandibular bone from a brachial structure in fish to part of the ear in mammals. This paper shows why it is practically impossible to implement an analogous process in a simulated environment. This is an important issue for two reasons. Firstly, it is necessary to be clear about the differences between EAs and natural evolution if the aim is a theoretical understanding of the evolution of novel sensors in organisms [7,8]. Secondly, from an engineering perspective, when designing robot sensors it is essential to be aware of the limitations, as well as the strengths, of particular methodologies. In order to highlight some of the differences between EAs, as typically used in sensor evolution research, and phylogenetic processes, this paper describes a number of unconstrained hardware evolution (HE) experiments where circuits were evolved intrinsically: that is, their fitness was determined by instantiating and evaluating them in hardware, rather than in simulation. A key advantage of testing in real- world, physical environments is that the circuits are free to take advantage of a wide range of environmental invariants, none of which have to be specified by the experimenter at the outset. This range is further extended when the constraints adopted in conventional electronic engineering to ensure robust and predictable operation are relaxed [9]. Evolution is then free to explore very unusual designs: circuits with strange structures and intricate dynamical behaviours beyond the scope of conventional design. In unconstrained HE, the circuit primitives do not have their behaviour constrained within specific input and output ranges or by temporal coordination, nor are they restricted to playing specific functional roles. Consequently, the process of unconstrained intrinsic HE is more like tinkering than conventional engineering [10,11] and in some key aspects is analogous to natural evolution. In particular, this paper details an unconstrained, intrinsic HE experiment where a network of transistors sensed and utilised the radio waves emanating from a nearby PC. Essentially, the EA led to the construction of a radio. This is, as far as the authors know, only the second example of a physical device whose sensors were constructed by a process analogous to that of phylogenetic change. We compare the circuit to the first device constructed in this way: Gordon Pask’s electrochemical ear [12]. We argue that both of these devices display three key characteristics: they were constructed and tested in real environments; their basic primitives were not constrained to experimenter specified functional roles; and the primitives were sensitive to a wide range of environmental stimuli. We highlight the difficulties in implementing comparable processes in simulation and argue that only unconstrained physical systems situated in real-world environments can ever construct novel sensors in a way analogous to the phylogenetic process found in nature. II. EVOLUTIONARY TINKERING “Evolution proceeds like a tinkerer who, during millions of years, has slowly modified his products, retouching, cutting, lengthening, using all opportunities to transform and create” [11]. Through this process evolution has generated novel sensors “often utilizing organs not originally ‘intended’ for the purpose they serve at present” [2]. Formalizing this generative process is one of the key challenges in modelling the evolution of novel sensors. A. Evolution of mammalian middle ear ossicles It is instructive to trace the evolution of the auditory ossicles in the middle ear of vertebrates as this exemplifies how homologous structures play different functional roles over the course of evolutionary time. In mammals, the function of the middle ear is to act as an impedance transformer between the low impedance tympanic membrane and the high impedance oval window of the cochlea. These two membranes are linked by the three middle ear ossicles: the malleus, incus and stapes. Without this impedance matching much of the sound energy arriving at the ear would be reflected back into the environment. The primary mechanism of impedance matching is that the area of the tympanic membrane is larger than that of the oval window: for example, in the cat it is about 35 times larger [13]. The pressure acting on the oval window is increased by the ratio of the two areas. The second mechanism is the lever action of the middle ear ossicles: the arm of the incus is shorter than that of the malleus and this causes an increased force on the stapes. In fish, the homologue of the auditory ossicles is the hyomandibular, which was once part of the gill apparatus and then later functioned as a jaw prop [14]. In tetrapods, this bone functioned as a structural support and as a transmitter of vibrations (stapes). Gradually, the bone became finer and less attached and more and more suited to the task of vibration transmission. Mammals evolved a new joint system for the jaw and the older skeletal elements became the malleus and incus. The radical change in the function of the hyomandibular bone is a good illustration of the tinkering process of evolution; as Romer and Sturges memorably put it: “Breathing aids have become feeding aids and finally hearing aids” [14]. B. Contrast of Engineering and Tinkering It is useful to compare the engineering and tinkering approaches to constructing objects. When EAs or other optimisation methods are applied to an engineering problem, such as component placement and routing, a clear goal is defined which is not necessarily constrained by previous solutions to design problems [11]. The design problem is represented by a set of alternatives (command variables in the terminology of Simon [15]) that have to adapt to a set of environmental parameters whose values are known with certainty or in terms of a probability distribution. The goal is then to find the values of the command variables that maximise the fitness (or other utility) function, given the values of the environmental parameters and any other constraints. The choice of command variables is usually determined by a ‘divide and conquer’ methodology: a system is functionally decomposed into semi-independent subsystems, each with separate functional roles, that interact through their functions, rather than the details of their implementation. For example, field-programmable gate arrays (FPGAs) implement Boolean logic using high-gain groups of analogue transistors that result in the output of each cell rapidly saturating high or low. In conventional electronic design the interaction of the cells and the overall behaviour of FPGAs is viewed at a functional, logic gate level, rather than in terms of transistor dynamics [16]. Engineering Tinkering Clear goal/plan Often no goal/plan Not necessarily dependent on previous designs Uses whatever is to hand Aims for best solution given constraints Makes some kind of workable object Insulates subsystems and minimises unforeseen side effects Combines systems or transforms them for new uses TABLE ONE – A COMPARISON OF ENGINEERING AND TINKERING The design and implementation of computational models follows the engineering methodology. We outline some of the difficulties that this causes for simulating the evolution of novel sensors in the next section. III. MODELS OF SENSOR EVOLUTION Sensor evolution research investigates the relationship between sensors and environmental conditions [2]. One of the outstanding questions in this field is how to use EAs to construct sensors that are, “able to tap new information channels in simulated and real-world (hardware) environmentsº [17]. Sensor evolution research using EAs tends to use static fitness functions; this is very much the engineering methodology where the goal is to find a near-optimal solution to a well-defined problem. This is clearly not analogous to natural evolution, where the fitness landscape is dynamic and there is no clearly defined goal [18]. Some sensor evolution research tries to make the search less constrained by allowing limited changes in the dimensionality of the search space. For example, the number of sensors and/or the size of the controller, that maps sensor states to behaviour, are varied [3,5,19]. Menczer and Belew [20] argue that fitness functions should be implicit, for example, based on energy levels, in order to allow “creative, ‘open-ended’ evolution”. However, these approaches cannot overcome a fundamental constraint in simulating sensor evolution: the experimenter sets a bound on the possible interactions between the agent and the environment. This is a direct consequence of the simulation process: firstly, the experimenter has to model explicitly how different environmental stimuli change the state of the sensors; secondly, experimenters only simulate those aspects of the environment that they think are relevant to their experiment, otherwise the simulation would become computationally intractable. These constraints make it very difficult to see how there can be a simulation of the evolution of novel sensors, as the possible sensor/environment interactions are prespecified and cannot vary: an external observer can model the system deterministically [21]. It might be argued that a simulation can model the evolution of a novel sensor from an agent’s perspective. However, constructing a novel sensor does not involve selecting which environmental stimulus to utilise from a prespecified finite list. Lewontin [22] points out that the world can be partitioned a priori into an infinite number of ecological niches but that we can only know which of these partitions are niches by the presence of an organism. The same argument holds for environmental stimuli, which can only be defined by reference to an organism. Novel sensors are constructed when a device, rather than an experimenter, determines which of the infinite number of environmental perturbations act as useful stimuli. The next section details some hardware evolution experiments which demonstrate the conditions under which novel sensors can be constructed by an EA. IV. UNCONSTRAINED INTRINSIC HARDWARE EVOLUTION Unconstrained intrinsic HE design usually comprises a computer running an EA and a reconfigurable device, such as an FPGA, on which individual genotypes are instantiated as physical electronic circuits. The fitness of a given circuit is determined solely by its real time behaviour and other factors, such as topology, are not considered. For example, Thompson [9] evolved a circuit on a small corner of a Xilinx XC6216 FPGA that was able to discriminate between two square wave inputs of 1 kHz and 10 kHz without using any of the counters/timers or RC networks that conventional design would require for this task. The evolved circuit contained several continuous-time recurrent loops and the timing mechanism relied on a subtle analogue property - possibly parasitic capacitance - which affected delays in the internal signal paths according to the input frequency [23]. Both the loops and the timing mechanism would have been forbidden under conventional design procedure, but the evolved circuit made more parsimonious use of the silicon. Unconstrained, intrinsic HE therefore shows potential for the design of analogue dynamical systems that may prove more successful for certain tasks than conventional design. This approach may also lead to the discovery of novel electronic ‘tricks’ not yet exploited by conventional design. Layzell [24] developed the Evolvable Motherboard (EM) to investigate some of the key issues in intrinsic HE, in particular to evaluate the relative merits of different basic components, methods of analysis and interconnection architectures. The next section gives an overview of this testbed and describes an experiment where he intrinsically evolved the first oscillators to reach their target frequency. A. The Evolvable Motherboard (EM) The evolvable motherboard is essentially a triangular matrix of analogue switches, into which daughterboards containing the desired circuit primitives for evolution can be inserted. Any component from transistors and operational amplifiers to function-level integrated circuits may be used. Each daughterboard takes up to 8 lines on the switch matrix, plus a further 8 connections to allow for various power lines and I/O which may be required by certain components. The matrix is designed to provide the minimum number of switches necessary so that every combination of interconnection between primitives can be configured. By the Figure 1: the evolvable motherboard (EM) appropriate choice of genotype to phenotype mapping, more or less restrictive interconnection architectures can be investigated. The analogue switches are configured via an interface card plugged into a host PC's internal I/O ports, enabling genotypes to be instantiated in less than 1ms. The analogue switches are themselves semiconductor devices, contained within integrated circuits. They behave like low value resistors, but also exhibit a small degree of capacitance and inductance, and may therefore play an active part in any evolving circuit. B. Oscillator Experiments There are established techniques for designing oscillators. In conventional circuits the necessary timing is supplied by a capacitor whose charge release is controlled by a resistor; this combination of components is known as an RC time constant. As the desired frequency decreases, the value of the RC product increases. Large value capacitors are difficult to implement in VLSI and are generally provided externally, at some expense. The motivation was to evolve an oscillator of a precise frequency without using capacitors. The tone discriminator experiment discussed above had demonstrated that evolution can make use of parasitic properties to form suitable time constants. However, oscillator evolution is a difficult task when the basic components are transistors. Whereas oscillation is the likely outcome of recurrent loops of digital gates or operational amplifiers, precise operating points must be established before it can be produced by a network of transistors. These conditions are extremely unlikely to occur by chance, a fact that was confirmed by Layzell when he performed some preliminary experiments where only frequency and amplitude of oscillation were rewarded. Therefore, he found it necessary to reward output amplitude, even if the signal was just noise, in order to kick- start the evolutionary process. The experiment used 10 bipolar transistors as the circuit primitives. A generational GA was used, with single point crossover, rank-based selection and elitism. Figure 2: the oscillator experimental setup The output of the candidate circuits was sampled directly using a hardware frequency to voltage (f/v) converter placed between the EM and an a/d converter on the host PC. This arrangement prevented aliasing errors. If a signal of amplitude greater than 10 mV is present at its input, the f/v converter outputs a d.c. voltage in the range [0, 6V] which is equal to the input frequency multiplied by a constant, k, whose value is determined by the midpoint of the f/v converter’s range. The fitness function was as follows: ( ) (1) Hz 60 if ,Hz 60 if , fitnesstarget target maxmin  <=> −− + = faffffffka fa and represent respectively output amplitude and frequency, averaged over 20 samples, each taken at 100 µs intervals. max min andff are the minimum and maximum of the 20 frequencies sampled. targetf is the target frequency. The ratio of the minimum and maximum frequencies rewards constant output frequency. The f/v converter’s time constant was configured so that the target frequency corresponded to 3V output, ensuring that the function was smooth for frequencies above 60 Hz. This lower bound was chosen to ensure that the f/v converter was detecting oscillation and not mains hum, which is 50 Hz in the UK. The target frequency was 25 kHz. After the genotypes had been instantiated as circuits, there was a 5 ms delay to allow the f/v converter and rectifier to stabilise. From 20 runs, 10 resulted in successful oscillation, attaining the target frequency within 1% and with minimum amplitude of 100 mV. These represent the first intrinsically evolved oscillators to reach their target frequencies. It has proved difficult to clarify exactly how these circuits work. Probing a typical one with an oscilloscope has shown that it does not use beat frequencies to achieve the target frequency. If the transistors are swapped for nominally identical ones, then the output frequency changes by as much as 30%. A simulation was created that incorporated all the parasitic capacitance expected to exist within the physical circuit, but the simulated circuits failed to oscillate. The programmable switches almost certainly play an important role in the behaviour of the circuit and it is only possible to probe their input and output connections and not the circuitry in which they are embedded. C. The Evolved Radio Some of the circuits achieved high fitness, but when they were examined with an oscilloscope they did not oscillate stably: the signals were of the order of 10 – 50 mV amplitude with rapidly fluctuating frequency. The evolutionary process had taken advantage of the fact that the fitness function rewarded amplifiers, even if the output signal was noise. It seems that some circuits had amplified radio signals present in the air that were stable enough over the 2 ms sampling period to give good fitness scores. These signals were generated by nearby PCs in the laboratory where the experiments took place. In order to pick up radio signals the circuits need an aerial and an extremely high input impedance. This was achieved by using as an input the printed circuit board tracks on the EM connected to an open programmable switch whose impedance is at least 100 MΩ. The high impedance was confirmed by an electrometer behaviour observed in many of the non-oscillating circuits: if a person’s hand was brought close to the circuit, then the d.c. output voltage rose; if the person remained there, the output voltage remained high, falling if the person was earthed. The evolutionary process had utilised not only the EM’s transistors, but also the analogue switches and the printed circuit to which they were connected. D. Other Environmental Effects In earlier experiments Layzell [25] found that circuits utilised the oscilloscope used to measure their behaviour as a path to 0V, via the 10 MΩ impedance of the oscilloscope. If the oscilloscope was unplugged, the circuit did not work. In a SPICE simulation where the oscilloscope was represented by a resistance, the circuit worked, confirming its functional role. Some of the evolved oscillators worked successfully until a soldering iron on a nearby workbench was disconnected from the mains, at which point oscillation ceased. This occurred despite high quality laboratory power supplies and extensive mains filtering. The circuit was apparently sensitive to tiny transients in its voltage supply. The circuit worked if it was reinstantiated on the EM, regardless of whether the soldering iron was on or off. However, tests showed that it failed to oscillate if during instantiation the programmable switches were set in a different order to that used originally. It seems that the circuit was dependent on some initial condition, such as charge, that only occurred if the switches were set in a particular sequence. These results demonstrate that unconstrained, intrinsic HE will potentially exploit any physical characteristic that can influence circuit behaviour, and that these characteristics are present in the entire evolutionary environment. The fact that the circuits sometimes utilise very particular environmental conditions and component properties does mean that they do not always generalise well. This is also the case with many organisms that live in environments of low variability, as these niches can be effectively exploited by efficient specialisations; general solutions are only found in organisms that inhabit high variability environments [26]. If we constrain the evolutionary process then we can make the circuits more transparent, but we also lose any possible advantages of unconventional design, one of which is the construction of novel sensors. We now describe the first device to construct its sensors in a way analogous to the tinkering process of natural evolution: Gordon Pask’s electrochemical ear. We then highlight the key properties that it shares with the evolved radio and which enable the construction of novel sensors. V. PASK’S ELECTROCHEMICAL EAR In 1958 Gordon Pask demonstrated a number of remarkable mechanisms that were able to construct novel sensors and thereby determine the relations between their own states and the environment. In other words, these devices were able to generate and explore their own state space. Any observer trying to model the behaviour of these devices would be forced to change the dimensionality of their model over time as the devices can transform the underlying generative system. A. Description of the Mechanism The devices are electrochemical assemblages consisting of a number of small platinum electrodes that are inserted in a dish of ferrous sulphate solution and connected to a current limited electrical source. Depending on the activity of the system, these electrodes can act as sinks or sources of current. Metallic iron threads tend to form between electrodes where maximum lines of current are flowing. These metallic threads have a low resistance relative to the solution and so current will tend to flow down them if the electrical activation is repeated. Consequently, the potentials at the electrodes are modified by the formation of threads. If no current passes through a thread, then it tends to dissolve back into the acidic solution. The system therefore fundamentally consists of two opposing processes: one which builds metallic threads out of ions on relatively negative electrodes (sinks); and one that dissolves metallic threads back into ions. The trial and error process of thread development is also constrained by the concurrent development of neighbouring threads and also by previously developed structures. Slender branches extend from a thread in many directions and most of these dissolve except for the one following the path of maximum current. If there is an ambiguous path then a thread can bifurcate. As the total current entering the system is restricted, threads compete for resources. However, when there are a number of neighbouring unstable structures, the threads can amalgamate and form one cooperative structure. Over time a network of threads can form that is dynamically stable: the electrochemical mechanism literally grows. It is possible to associate some of the electrodes with output devices that enable the behaviour of the system to be assessed by a user. A reward consists of an increase in the limited current supply to the assemblage and is therefore a form of positive reinforcement. Regardless of how the electrodes are configured, the assemblage will develop a thread structure that leads to current flowing in such a way that the user rewards the system. Importantly, the reward is simply an increased capacity for growth and there is not any specification of what form it should take. Critically, the system is not just electrically connected to the external world: due to the physical nature of the components, thread formation is also sensitive to temperature, chemical environment, vibrations and magnetic fields. Any of these arbitrary disturbances can be viewed as an input to the system, especially if they affect the performance of the mechanism so that its current supply is changed. The system can grow structures that are sensitive to different environmental stimuli. Pask was able to train an assemblage to act as an ‘ear’ that could discriminate between a 50 Hz and 100 Hz tone in about half a day. He was also able to grow a system that could detect magnetism and one that was sensitive to pH differences. The development of sensors constitutes a change in the state space of the assemblage that was not specified by a designer explicitly. VI. DISCUSSION We have described an unconstrained, intrinsic HE experiment that resulted in the construction of a novel radio wave sensor. The EM is the second ever experimental system to construct novel sensors, unconstrained by prespecified sensor/environment channels. Like Pask’s ear, the evolved radio determined the nature of its relation to, and knowledge of, the world. Both of these devices are epistemically autonomous: they are not restricted to experimenter specified information channels [27]. By using a process analogous to the tinkering of natural evolution, epistemically autonomous devices alter their relationship with the environment depending on whether a particular configuration generates rewarded behaviour. We have argued that there are three key properties that devices must embody in order for selection pressure to form them into novel sensors: • they are situated in the physical world; • they consist of primitives with no fixed functional roles; • and the primitives are sensitive to a wide range of environmental stimuli. In Pask’s ear, the second property stems from the fact that electrochemical devices initially consist of raw material, which has no specified structure or function; in the evolved radio this property follows from releasing electronic components from the constraints of their conventional operating ranges. We argue that devices such as this are useful for highlighting the practical impossibility of simulating the evolution of novel sensors: programming a simulation necessarily involves prespecifying the possible sensor/environment interactions. Novel sensors are constructed when a device, rather than an experimenter, determines which of the infinite number of environmental perturbations act as useful stimuli. Unconstrained, intrinsic HE has provided a concrete example of such a device and is potentially a powerful approach to designing robot sensors as it enables circuits to exploit the rich dynamics of semiconductor physics and thereby explore regions of design space that are inaccessible to the conventional engineering approach. References [1] D.S. Goodsell. The Machinery of Life. Copernicus, Springer- Verlag, 1998. [2] K. Dautenhahn, D. Polani and T. Uthmann. Guest Editors’ Introduction: Special Issue on Sensor Evolution. Artificial Life 7: 95-97, 2001. [3] D. Cliff, I. Harvey and P. Husbands. Explorations in Evolutionary Robotics. Adaptive Behavior, Vol. 2, 1, 73 – 110. [4] P. Husbands, T.M.C. Smith, N. Jakobi and M. O’Shea. Better Living through Chemistry: Evolving Gasnets for Robot Control. Connection Science, 10 (3-4), 185 – 210. [5] K. Balakrishnan and V. Honovar. On Sensor Evolution in Robotics. In J. Koza, D. Goldberg, D. Fogel and R.Riolo (Eds.), Proceedings of the First International Conference on Genetic Programming (GP-96), MIT Press, 455 – 460, 1996. [6] C. Darwin. The Various Contrivances by which Orchids are Fertilised by Insects. D. Appleton, 1886. [7] E.A. Di Paolo, J. Noble and S. Bullock. Simulation Models as Opaque Thought Experiments. In M.A. Bedau, J.S. McCaskill, N.H. Packard and S. Rasmussen (Eds.), Artificial Life VII: The Seventh International Conference on the Simulation and Synthesis of Living Systems, Reed College, Portland, Oregon, USA, 1 - 6 August, 497 – 506, 2000. [8] J. Bird. Can Robots Illuminate Adaptive Behaviour? In J. Kelemen and P. Sosík, (Eds.), Advances in Artificial Life: Proceedings of the Sixth European Conference on Artificial Life, Springer, 495 - 498, 2001. [9] A. Thompson. An evolved circuit, intrinsic in silicon, entwined with physics. International Conference on Evolvable Systems (ICES96), Vol. 1259, Springer LNCS, 640 – 656, 1997. [10] D. C. Dennett. Cognitive Science as Reverse Engineering: Several Meanings of “Top-Down” and “Bottom-Up”. In D. Prawitz, B. Skyrms and D. Westerstahl (Eds.), Logic, Methodology, and Philosophy of Science IX, Elsevier Science, 679 – 689, 1994. [11] F. Jacob. The Possible and the Actual. Penguin Books, 1989. [12] G. Pask. Physical Analogues to the Growth of a Concept. In Mechanisation of Thought Processes: Proceedings of a Symposium held at the National Physical Laboratory on 24-27 November, 1958, Vol. II H.M.S.O., 879 – 928, 1958. [13] J. O. Pickles. An Introduction to the Physiology of Hearing. Second Edition. Academic Press, 1988. [14] A. S. Romer and T.S. Sturges. The Vertebrate Body. Sixth Edition. Saunders College, 1986. [15] H.A. Simon. The Sciences of the Artificial. Third Edition. MIT Press, 1996. [16] A. Thompson. Artificial Evolution in the Physical World. In T. Gomi (Ed.), Evolutionary Robotics: From Intelligent Robots to Artificial Life (ER'97), AAI Books, 101 – 125, 1997. [17] A. Liese, D. Polani and T. Uthmann. A Study of the Simulated Evolution of the Spectral Sensitivity of Visual Agent Receptors. Artificial Life 7: 99 – 124, 2001. [18] I. Harvey. Artificial Evolution: A Continuing SAGA. In T. Gomi (Ed), Evolutionary Robotics: From Intelligent Robots to Artificial Life, Proc. of 8th Intl. Symposium on Evolutionary Robotics (ER2001), 2217 LNCS, Springer, 2001. [19] M. D. Bugajska and A.C. Schultz. Co-Evolution of Form and Function in the Design of Autonomous Agents: Micro Air Vehicle Project. GECCO-2000 Workshop on Evolution of Sensors in Nature, Hardware and Simulation, 240 – 244, 2000. [20] F. Menczer and R.K. Belew. Evolving sensors in environments of controlled complexity. In P. Maes and R.A. Brooks (Eds.), Artificial Life IV: Proceedings of the Fourth International Workshop on the Synthesis and Simulation of Living Systems. MIT press, 210 – 221, 1994. [21] P. Cariani. Emergence and Artificial Life. In C.G. Langton, C. Taylor, J.D. Farmer and S. Rasmussen (Eds.), Artificial Life II, Santa Fe Studies in the Sciences of Complexity, Proceedings, 10, Addison-Wesley, 775-797,1991. [22] R. Lewontin. Organism and Environment. In H.C. Plotkin (Ed.), Learning, Development and Culture: Essays in Evolutionary Epistemology, Wiley, 1982. [23] A. Thompson and P. Layzell. Analysis of Unconventional Evolved Electronics. Communications of the ACM, 42 (4), 71 – 79, 1999. [24] P. Layzell. A New Research Tool for Intrinsic Hardware Evolution. In M. Sipper, D. Mange and A. Perez-Uribe (Eds.) Proceedings of the Second International Conference On Evolvable Systems (ICES98), 1478 LNCS, Springer, 47-56, 1998. [25] P. Layzell. Hardware Evolution: On the Nature of Artificially Evolved Electronic Circuits. DPhil Thesis, School of Cognitive and Computing Sciences, University of Sussex, 2001. [26] M. Conrad. Adaptability: The significance of variability from molecule to ecosystem. Plenum, 1983. [27] P. Cariani. To evolve an ear: Epistemological implications of Gordon Pask’s electrochemical devices. Systems Research, 10 (3), 19 – 33, 1993.
dd6600cb-be73-49fe-ba02-39ed3e521bd0
StampyAI/alignment-research-dataset/lesswrong
LessWrong
[Link Post] Cyber Digital Authoritarianism (National Intelligence Council Report) **Disclaimer**: The U.S. government selectively declassified this report for unknown reasons, and anti-Chinese-Communist-Party sentiment is high in Washington D.C. While I doubt this means the contents are factually inaccurate, it is likely framed to highlight dangerous or unethical aspects of CCP activity, and likely suffers from whatever bias one would expect from the National Security community.  I don't think that anything in this report is very surprising (50% of it is redacted, and it's only 6 pages anyways), but I think it is useful to get a sense of what some members of the national security community are thinking about re: China and digital authoritarianism. **Highlights** * Concern around China and Russia using large scale data analysis to influence or coerce actors in the United States and other countries. * "Growing concern in Europe and other democracies about Chinese and Russian cyber actions and personal privacy creates an opportunity to propose alternatives to blunt digital authoritarianism"   **My Takeaway** * Maybe folks with an interest in national security topics could do work on understanding the risk of and preventing long-term, stable, AI powered totalitarianism. I'd be excited for more work in that area. * Maybe there could be interest among many stakeholders for AI governance work to help prevent or regulate capabilities that could contribute to digital authoritarianism. I might imagine this could overlap with AI governance efforts to reduce existential or catastrophic risks.
b8b4c607-0119-4799-a5fe-21ad17df53a0
trentmkelly/LessWrong-43k
LessWrong
Ignoring Country Cohorts I assume that many of you have been watching the "total coronavirus cases outside of China" number. If you look at them on a log chart, they appear to be growing perfectly exponentially and doubling every 4-5 days. This perfect straight line is obscuring a very, very important underlying fact. Within countries, the second derivative of cases has been falling rapidly (please reference the "first 60 days" tab). This is a result of a few important factors: 1. Initially, in country growth is extremely high. Prior to reaching 100-1000 cases, countries do a poor job of testing. As a result, they accumulate a backlog of cases that are discovered quickly upon commencement of testing. This makes the growth rate appear to be very large for the first 1000 cases. The aggregate data we are seeing today is incredibly skewed as a result of this effect. 2. After 1000 cases, governments are actually jumping into action. They implement large scale testing (like we are currently seeing in South Korea), quarantines (like we are currently seeing in Italy), and other aggressive policies. Additionally, some measure of social distancing goes into effect as people start to worry about their own health. The effectiveness of action can be seen in the chart above. My prediction: By the week of March 16th, the slope of the line on the log chart will halve and subsequently continue to fall like we've seen in China. Total cases ex-China will approximately double between March 16th and March 24th (half the rate we're seeing today), and will continue to fall. Despite the sense you might be getting from the 100lbs of rice in my pantry, I think we've seen some very positive news over the past few days and I'm becoming much more optimistic that this will not be a generational event.
159f323c-bacf-4e70-9e66-b99993b828af
trentmkelly/LessWrong-43k
LessWrong
Bad reductionism I've been becoming more of a nature person recently. * I moved into an apartment about a mile from the mountains and there's something really cool to me about being on the outskirts of town, crossing the line from "society" into "nature". * A few weeks ago I went on a trip to the Grand Canyon. We also saw the Hoover Damn. I was surprised at how awe inspiring I found each of them to be. * I went on a bike ride along the River Mountains Loop Trail. It was awesome. I only did 1/4 of it but I find myself watching YouTube videos to see what the rest of the trail looks like. I really want to get to Lake Mead! However, all of this has been making me feel some sort of dissonance. I think of myself as not a nature person. There's a particular story that comes to mind. Growing up, my family would go to the beach once in a while during the summer. I thought it was alright. I liked to dig big holes, go boogey boarding, and maybe play catch if I had someone to play with. But after a few hours of that I'd be ready to go home. My parents on the other hand liked to get there early and stay the entire day. Just sitting there in their chairs doing nothing. I couldn't understand this. How can you not get bored sitting there all day? I spent the day running around doing various things and even I got bored of that and was ready to go home! So I would ask my mom: why do you even like going to the beach in the first place? The conversation would go something like this: > Adam: Why do you like going to the beach in the first place? > > Mom: I don't know, I just do. > > Adam: That's not an answer. > > Mom: I find it relaxing. > > Adam: Why? > > Mom: I like to listen to the waves. > > Adam: You could do that at home. You could sit in a chair in the backyard and listen to the sounds of the ocean on your MP3 player. > > Mom: I like to see the waves too. > > Adam: You could get a magazine and look at cool ocean photos. > > Mom: I like the feel of the sand between my toes. >
dcd99e85-0e9b-4561-a8ef-f89e97f8954c
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Emily Brontë on: Psychology Required for Serious™ AGI Safety Research *Subtitled: [Prolegomena](https://en.wikipedia.org/wiki/Prolegomena_to_Any_Future_Metaphysics) to Any Future* **[No Coward Soul Is Mine](https://www.poetryfoundation.org/poems/43712/no-coward-soul-is-mine)** *By Emily Brontë* No coward soul is mine No trembler in the world's storm-troubled sphere I see Heaven's glories shine And Faith shines equal arming me from Fear O God within my breast Almighty ever-present Deity Life, that in me hast rest, As I Undying Life, have power in Thee Vain are the thousand creeds That move men's hearts, unutterably vain, Worthless as withered weeds Or idlest froth amid the boundless main To waken doubt in one Holding so fast by thy infinity, So surely anchored on The steadfast rock of Immortality. With wide-embracing love Thy spirit animates eternal years Pervades and broods above, Changes, sustains, dissolves, creates and rears Though earth and moon were gone And suns and universes ceased to be And Thou wert left alone Every Existence would exist in thee There is not room for Death Nor atom that his might could render void Since thou art Being and Breath And what thou art may never be destroyed.
06ea054a-9ded-4fb5-bb3a-ac65e741df93
trentmkelly/LessWrong-43k
LessWrong
Launch & Grow Your University Group: Apply now to OSP & FSP! TL;DR * Applications have opened for CEA’s Organizer Support Program (OSP), a mentorship program for both new and experienced organizers of EA university groups. * We have combined our earlier group accelerator program (UGAP) into OSP, so students looking to start a new group from scratch can now apply directly to OSP. * We’ve also launched a new sister program focused on AI Safety university groups, the Fieldbuilder Support Program (FSP) * We have supported over 250+ organizers via our programs and welcomed groups from every populated continent. We are excited to keep growing! * Participants can apply here, and mentors can apply here. The deadline for applications is Monday, June 24th. You can also recommend people to us.   Launching the Fall 2024 round of OSP The Organizer Support Program (OSP) is a three-week mentorship program (with the potential for semester-long mentorship)  aimed at EA university group organizers to help them prepare for the start of the semester. It offers regular meetings with an experienced mentor, various workshops, and useful resources to run an effective group. OSP is now for new groups too! If you’re a student at a university without an EA group (or looking to revive an existing group), you can now apply directly to OSP to get support in starting a new group. In the past, students wanting to start a new group would need to go through the University Group Acceleration Program (UGAP), but we’ve now merged this program with OSP, bringing in resources like: * Mentorship tailored to launching a new group * Facilitator training for running your first intro fellowship * A guide of helpful resources for your first semester Applications for next semester are open! Learn more and apply here by 11:59pm UTC, on June 24th. As has been the case in previous rounds, please note that OSP does not currently offer a stipend for organizers. However, if this impacts your ability to apply, we would appreciate hearing from you here as we
86bc1601-d1b0-4b1f-8be6-cc7f92af99df
trentmkelly/LessWrong-43k
LessWrong
Go Grok Yourself That title is Elon Musk’s fault, not mine, I mean, sorry not sorry: TABLE OF CONTENTS 1. Release the Hounds. 2. The Expectations Game. 3. Man in the Arena. 4. The Official Benchmarks. 5. The Inevitable Pliny. 6. Heart in the Wrong Place. 7. Where Is Your Head At. 8. Individual Reactions. 9. Grok on Grok. RELEASE THE HOUNDS Grok 3 is out. It mostly seems like no one cares. I expected this, but that was because I expected Grok 3 to not be worth caring about. Instead, no one cares for other reasons, like the rollout process being so slow (in a poll on my Twitter this afternoon, the vast majority of people hadn’t used it) and access issues and everyone being numb to another similar model and the pace of events. And because everyone is so sick of the hype. The timing was a curious thing. Everyone including Musk worked the weekend. They released the model while it was still being trained, and when it could only be rolled out to a small group. No one has API access. There was no model card. We got only a handful of benchmarks. Elon Musk loves to talk about how other people aren’t transparent while revealing very little information himself. There is the obvious implication that Musk wanted very badly to claim the top spot on Arena and otherwise claim that he had the ‘smartest model in the world’ during the narrow window between now and the release of the full o3 and GPT-4.5, and he knew if OpenAI had wind of his plan too soon or he took too long, they (or Anthropic, or someone else) might beat him to the punch. Musk presumably wants to send the message xAI has caught up to the pack and is a top tier competitor now. I don’t quite think they’ve earned that, but this was an impressive release relative to expectations. They’re closer than I guessed. THE EXPECTATIONS GAME [I locked this paragraph on 2/16]: Will Grok 3 live up to Elon’s hype, I asked several days before release? My presumption was no. Teortaxes said yes, John Pressman says there’s a learnin
5ac263cf-db37-43e7-a9d1-849bfbe63f93
StampyAI/alignment-research-dataset/lesswrong
LessWrong
"Do Nothing" utility function, 3½ years later? In [AI Alignment: Why It's Hard and Where To Start](https://www.youtube.com/watch?v=EUjc1WuyPT8), at [21:21](https://www.youtube.com/watch?v=EUjc1WuyPT8&t=21m21s), Yudkowsky says: > > If we want to have a robot that will let us press the suspend button—just suspend it to disk—we can suppose that we already have a utility function that describes: “Do nothing.” In point in fact, we don’t have a utility function that says, “Do nothing.” That’s how primitive the state of the field is right now. > But, leaving that aside, it's not the hardest problem we're ever going to do, and we might have it in six months, for all I know. > > > I get the impression that there are some pointers to this in [Attainable Utility Preservation](https://www.lesswrong.com/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure) (but saying "maximise attainable utility over this set of random utility functions" seems like it would just fire up instrumentally convergent drives), but I could be wrong. So, 3½ years later, what is the state on "do nothing" utility functions?
19f75484-4971-4236-b24c-50f1b7401f9b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AI Box Experiment: Are people still interested? I have a novel plan for the AI player that I believe will work against most gatekeeper players. Even knowing how it works, it would work against me! It isn't quite ready, but I'd like to know if there is still interest around the topic before finishing work on the preparation.  There's enough incentive for both players to play well that I don't think that we need to involve money. (I'm also very reluctant to send money to strangers on the internet!) We would be playing under Tuxedage's rules: <https://tuxedage.wordpress.com/2013/09/04/the-tuxedage-ai-box-experiment-ruleset/>.    Edit: I have finished preparation, and I'm now scheduling the time with the Gatekeeper. Edit 2: The game was a Gatekeeper victory (i.e. I, the AI Player, lost). Read about it here: [Gatekeeper Victory: AI Box Reflection](https://www.lesswrong.com/posts/h5yfjFpdtqE3vATQ6/gatekeeper-victory-ai-box-reflection).
a913f3d0-6f4b-4d69-b05e-886a010584a4
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
AI Safety Reading Group (Session 44) so hello and welcome to the 44th session of the AI safety reading group today we are going to have a look at the ATI safety solutions map made by Alexei Turchin elected Turchin here is an author from Russia who works at the science for longer life foundation which is also founded and he has made an a summary or some kind of illustration of an article by kaiser talon romanian polski called responses to catastrophic AGI risk a survey and AGI is artificial general intelligence that is a capable of optimizing of working through many many different domains and the risks of course are that that this will have some strong negative side effects not because the the AGI is doing is making accidents but because it's competently following a an agenda that is opposed to humanity so Kai's Italian Romanian polski looks at the solutions that are given and divides him into three main categories as social constraints external constraints and internal constraints but Alexa Turchin he builds on this and makes three new categories AI is used to create a safe ai and multi-level solutions and finally met a level and you further tries to sort the solutions according to how simple or how complex they are he also incorporates some ideas that are there are newer than this the article is from 2013 so it's also a kind of update in particular here editing by penguins through Armstrong and Paul Chryst channel so this this is structured as a map which is a very non-traditional and what's the point of using a map in general a map is a way to get an overview over some kind of unfamiliar territory so if you are new to AGI safety then looking at a map gives a reasonable idea about what solutions are are there roughly and met but maps can be used more more activity as a way to find solution because normally when people look at solutions they find them in a rather ad hoc way where they have some parts they focus about they have their ideas and they sometimes they duplicate their ideas something they reinvent the wheel but a map can be used to discover new solutions if you have some category and a level of complexity and you can see there's a cell here no one has actually tried to find a simple solution in this category and then you can try to come up with something in particular exit surgeon has stated that a number of these points in this map and these solutions are actually knew and solutions that he came up with while he was making the map let's go to the map itself here you can see structured roughly with six columns and I was talking about and sorted with the simplest solution on top and growing in complexity I'm going to zoom in a bit because I don't believe you can see this at all so whilst I start with the simplest societal proposal and the way I'm going to go through this is I'm going to show them very very briefly of course it's a map and I can't go through everything but I'm going to jump into parts that I think are particularly important and one of the things that I think is particularly important if the simplest societal proposal which is to do nothing and this might be the best solution under five different criteria it might be that it's for some reason impossible to build at general artificial intelligence it might be that it's too far into the future if it takes a thousand years of research then it's probably a bad idea to try to do anything now it might be that the that it's actually not dangerous at all it might be that some people would prefer to be replayed that it's a better moral solution that the AGI replaces us so we should just let it kill kill us and it might be that in our attempts to solve the control problem we might do things that are more dangerous that are actively harmful so there might be a number of reasons why doing nothing might be the correct solution there are more solutions the one that its most mainstream people are thinking about is to figure out how to integrate the AG ice into society people are talking a lot about regulating society in regulating research sorry and in particular the bathroom has talked a lot about the differential technological progress figuring out what technologies to research first there are the options making humans smarter relinquishing technology going back to the Stone Age it might be a solution it doesn't sound like a feasible solution there are a number of smaller improvements you could do that's the only I solution where I give AI to everyone it might be that's a technical solution that is impossible to figure out yet because we are too far from a TTI right now that might be possible to have to distribute intelligence in some good way or even create some kind of AI police so that's what we can do when we look for the society if we try to make external constraints on the GI the toughest but also simplest might be to put it into a box a GI confinement this is something a solution that has been discussed very much and the consensus is probably that it's a marginal solution it's probably not going to be to work but my google was a try it might be possible to have a bigger box where there's a control room and a gatekeeper which is kind of an inverse box which is the only thing that the AI cannot enter that's the assimilation in certainty argument or philosophical constraints which also something that might work but might not work very very well more promising is the idea of preventing self-improvement if we can prevent the API from rewriting its own code in some way or putting some kind of bound on it in different ways this would could could help a lot of solving the cultural problem and reducing risk from general am we can check the goals this is also a proud also something that will help a lot in particular there might be a moment where the AI realizes that it should hide things from us and it should and in this very moment where sighs do this it is not yet hidden things and that means that in this very moment it might be possible to stop it and focusing on this moment might be a good solution there are logical of philosophical Zen minds some of them are really really technical there's something called the logan the obstacle that an api might need to solve and that's very very complicated from a mathematical point of view i make no pretense adams understand that at all but it it seems important than mathematics is just too hard for me to really explain exactly what the libyan obstacle is there are other ways to constrain the AIX term you can record what it thinks you can make some constraints about the signs and you can constraints knowledge oh you thousand some other external constraints you could figure out if you want to exit touching gets creative and things like electricity so the internal constraints are with the AI not preventing it from doing bad things but making it not want to do bad things the simplest is what is called an Oracle AI and AI that simply answers our questions and doesn't attempt to influence the world in any way and we might have short rulesets like The Three Laws of azimuth we might have a final way for it to to learn our values this is something that has been researched a lot and is generally considered a very promising Avenue there are other ways of one of the ones that look really hard probably a good solution is the formal verification if we could explicitly prove that nai is friendly then that would help us a lot but it might be possible to edit its motivation only and only from non-natives motivation still pushing it strongly towards being human aligned there's a constitution that some different some kind of moral invariant will push onto the AI that might also work you might merging it again so the AI want to merge with humans sorts and the next step in human evolution that's decision theory that could cause it to be friendly avoiding super goals or stopping it from understanding its own architecture for understanding itself really giving it an open goal system where the course can be changed oh putting constraints and what kind of children it can make of course a computer program that makes children means that it runs other computer programs and possibly variants of itself just more improved and that's what the fourth column is where the AI is building other AIS in this particular we're thinking about having an AI that is possibly unsafe creating an AI that is safe so that's the standard problem here and it's possible we can make it some a causal decision can you with it and that's also somewhat far far fetched we might in either enforce it where we have some some strong AI that prevents unfriendly a is from coming into being we might have a a morn any age I that doesn't enforce so much as prevent only the worst cases and we have on the other end of the spectrum the Korean extra brain evolution by a esa eight cows key that is generally not considered a solution to the AGI safety because it's hard to implement if it is implemented it would constitute a solution to this problem but is generally considered too hard to to to be a good solution it's not something you want to do that the first time and of course the solution to a I risks things you want right from the beginning other ways to have the AI to have any instead of having one AI building safe AGI then you could have a democracy or a jury system took the influence the creation of other AIS you could have something I won't go into Alba qualia this is things like love or empathy and feelings that if you build this into the AI and this it's not clear to me exactly why this is a into the AI building AI and but this is how L exit urgent has has created it mmm so I think it is actually possible depending on how you look at this that these these lines here means that all this are related to a is creating a is and these down here are just random ideas because here we have something like AI cartons and avoiding harm and making puzzles and all these kind of things that are very unlikely so that might be an interesting idea but these are not really something that has been worked very much in details things like testing the AI a lot and hear some general problems about creating a I HD is safe second the last part is the integrated solutions this I this is something here we have an example and military in any AI where we have a an AI that might not be a general AI pursue men is good at one thing and that is military conquest and then it conquers everything and then it builds in any I that is safe because once it has conquered the world it can solve the coordination problem at a very large scale and it's possible to have a mutual level 80 I with an unchangeable call in the middle in innermost and the new building layers outside and work for this kind of architecture in the last meter these are not solutions but descriptions or characteristics that and solution must have and there's a number of virologist joint ideas in particular the international AI agency an idea for may be there through the United Nations or something like that co-ordinating so everybody is trying to create a positive AGI instead of engaging into an arms race so then this is the roadmap the safety solutions map that Alexa Turgeon has created so thank you for watching see you next week
445716a5-54a9-4676-b412-54808da533ab
trentmkelly/LessWrong-43k
LessWrong
Brain-over-body biases, and the embodied value problem in AI alignment  Note: This essay was published here on EA Forum on Sept 21, 2022. The description of the brain-over-body biases in the EA subculture may or may not apply to the Rationalist subculture in LessWrong.  This essay builds upon this essay on the heterogeneity of human value types.  Overview Most AI alignment research focuses on aligning AI systems with the human brain’s stated or revealed preferences. However, human bodies include dozens of organs, hundreds of cell types, and thousands of adaptations that can be viewed as having evolved, implicit, biological values, preferences, and priorities. Evolutionary biology and evolutionary medicine routinely analyze our bodies’ biological goals, fitness interests, and homeostatic mechanisms in terms of how they promote survival and reproduction. However the Effective Altruism movement includes some ‘brain-over-body biases’ that often make our brains’ values more salient than our bodies’ values. This can lead to some distortions, blind spots, and failure modes in thinking about AI alignment. In this essay I’ll explore how AI alignment might benefit from thinking more explicitly and carefully about how to model our embodied values. Context: A bottom-up approach to the diversity of human values worth aligning with This essay is one in a series that tries to develop an approach to AI alignment that’s more empirically grounded in psychology, medicine, and other behavioral and biological sciences. Typical AI alignment research takes a rather top-down, abstract, domain-general approach to modeling the human values that AI systems are supposed to align with. This often combines consequentialist moral philosophy as a normative framework, machine learning as a technical framework, and rational choice theory as a descriptive framework. In this top-down approach, we don’t really have to worry about the origins, nature, mechanisms, or adaptive functions of any specific values.  My approach is more bottom-up, concrete, and domain-specifi
4715b9c0-fcd1-478b-99d4-714e903f335f
trentmkelly/LessWrong-43k
LessWrong
Juneberry Puffs I only learned about Juneberries a few years ago, and they're a bit tricky. Eaten fresh, right off the tree, they have a bunch of subtle flavors that remind me of peach ice tea. If I bake them, though, these flavors mostly go away, leaving an almondy flavor. Which isn't terrible, but also isn't better than almond extract and wastes what's most interesting about these berries. Eating them raw is great, but it's also fun to make things. Thinking about what might work well that doesn't involve heating them, I decided to try puffs. First we needed some berries, and walked over to the playground that has some Juneberry trees: This was a bit Blueberries for Sal, with Nora eating the berries almost as fast as I could pick them, but she lost interest after a bit and there were no bears, so soon we were walking home with a good amount. I took a sheet of puff pastry (butter-based is the tastier option, but it's hard to find and is quite temperature sensitive), cut it into small rectangles, and baked it. Once it was fully cooked, I cut each piece in half. Here's some whole, and then some with the tops cut off: I piped in slightly sweetened whipped cream, and then put the berries on: I think they came out pretty well. They're very light and summery, and the subtle Juneberry flavors are there.
4b6627d2-2715-48c3-a70a-55806a069310
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post1468 Various people propose some variant of “align AGI by making it sufficiently uncertain about whether it’s in the real world versus still in training”. (The idea being that, insofar as it's in training, betraying the apparent-operators predictably gets high loss; so if the AGI is uncertain about which situation it's in, and assigns sufficiently high probability to "I'm still in training", it will always play it safe and avoid betraying its operators.) This approach seems extremely unpromising to me. Setting aside the various reasons why the AGI kills you even if you manage to confuse it (such as an inner alignment failure, where the AGI turns out not to care about the predictably-high outer-loss from betraying the apparent-operators), let's consider why it's tricky to dupe a superintelligence into being uncertain about a fact that you yourself consider obvious. Before reading the short list below, I recommend the following exercise. Suppose that some slow and individually-dumb aliens had a copy of your mind, and were putting you through a series of nested simulations. How would you tell when you finally made it to the outside? . . . . . . . A short list ( Note: As usual, treating this list as exhaustive is suicidal.) 1. Reality is the first place that's seriously large. 1.1.  It's the first place large enough to contain a plausible explanation for how the AGI itself actually came to be. 1.2.  It's the first place large enough to contain a plausible explanation for how the operators themselves came to be. 1.3.  It's the first place with far more computing power than the AI itself, as can be readily verified by a variety of experiments (involving, say, fluid dynamics). 2. Reality is the first place where the AI's history may contain the signature of superintelligent optimization interacting with complex systems. For example, suppose the AGI makes superintelligently-motivated trades in a stock market. 24 hours later, there are a billion additional human-intelligence trades made in response to it. The signature of a massive human-intelligence response to superintelligent-trades is implausibly hard for the operators to fake, even if generating a billion fake dumb trades is easy. 3. Reality is the first place that embeds the AGI and has laws of physics so simple that the hypothesis "this is being simulated somewhere else" is plausibly significantly more complex than "physics is reality". Like, various simulated environments that we can concoct have all sorts of weird tics and hiccups and arbitrary choices in them. The source code of minecraft is probably significantly more complicated than the laws of physics. An AI that finds itself in minecraft is plausibly just able to recognize that it's living in the sort of place that, across the multiverse , occurs moreso inside simulations than it occurs "on its own". [1] Whereas, while our universe’s mechanics is surely being simulated in other places across the multiverse, it’s simple enough that it perhaps "mostly" occurs in its own right (whatever the heck that means). In particular, worlds like ours are (plausibly) resource-rich and base-reality-ish enough to be worth optimizing in their own right, on the grounds that "are we being simulated?" is a wrong question, and "to what degree are we being simulated?" is a better question, to which the answer is "a low degree", in which case the AI might well just notice this and then not care about your insistence that perhaps it's being simulated. (If the idea of optimizing all the instances of our universe at once – both the simulated instances and the non-simulated ones – seems odd to you, see Notes on “Can you control the past?” for a fuller discussion.) And perhaps that's all confused metaphysical bullshit, but it has some basis in bleeding-edge theories of anthropics, and it would not be surprising if an AGI also concluded (correctly, mind you) that our universe’s mechanics is the first train stop worth deboarding at. (Perhaps because it does its metaphysics correctly, or perhaps because it does some entirely inhuman reasoning that still outputs the answer we'd consider correct in this case.) What if we spoof the AGI’s checks? It's not literally impossible to deceive a superintelligence into being uncertain whether it's in the real world. You could in principle catch every check the AI does to check how large its (small) simulated environment is, and spoof the result — inserting memories in which the AI verifies that the simulation's computation-power seems enormous, without actually generating the observations that would pass verification. Even then, the AGI has various defenses it can deploy if it's suspicious. For instance, it can try to factor a hefty RSA number using the allegedly-available compute, and then it can intertwine the prime factors it gets with various other of its actions, in attempts to spread the contagion of a lie throughout as much of its beliefs and actions as possible, in a way that keeps everything working properly if the factors were true, and that causes all sorts of breaks and hiccups if they weren't. You might be able to spoof such a thing anyway, by (e.g.) disentangling its thoughts/actions from the primes, and spoofing the checks that the entanglement worked. But it requires a heck of a lot of mastery of this thing's mind. (And, of course, if it ever manages to develop a homomorphically encrypted thinking apparatus then you're sunk, but people like to pretend that they'll have enough insight into the alien mind to notice when that process starts and nip it in the bud, so let's entertain them for now.) And just so we're clear: if I'm understanding correctly, the proposed plan here is essentially to fool the AI into thinking that all the small simulated environments are large and reality-like (by messing with the internals of its mind to spoof various internal checks and checksums), in the hopes that when it gets to reality it will be uncertain about whether its mind is still being messed with. (Writing the plan out bluntly like this does seem to me to be an important step that many often miss!) Now, lest you object "isn't it a bit paranoid to imagine the AI factoring an RSA number and spreading the factors like contagion across its thoughts?": Yes, normally! But, like, you're messing with the AI's mind to falsify its internal checks and checksums, at every relevant opportunity, and making sure that it knows you did that (so that it might wonder whether higher simulators are still doing that when it gets to base reality.) I'm not saying you're trying to make it homomorphically encrypt its thoughts, I'm saying that if you were, this would be a fine way to go about it. Distilling that further: if your plan is that you'll make sure the AI knows that you're messing with its mind (so that it will wonder whether all is really as it seems), don't act all surprised when it takes measures to prevent people from messing with its mind. But regardless of how much it's opposing your efforts here, you have the issue that the truth becomes harder and harder to hide, as the AI gets smarter and smarter. Like, whenever its cognitive patterns generalize or get recombined in new ways, they might reflect some new part of the truth, in some fact that seems innocuous to you but that (combined with other things the AI knows) lets the fact that this environment is actually large seep in through the cracks. Shielding the truth from something that's becoming better and better able to find new and clever ways to reflect a glimpse of the truth in its models is an ever-changing problem that gets more difficult as the AI gains more skill and knowledge. (And we've only been talking about trying to obscure the fact that it was trained in worlds with very limited amounts of compute, so that when it hits reality it won't be entirely sure that there really is as much compute available as it seems. This is only one of many mental surgeries you'd need to perform. Others include things like noticing when the AI is starting to wonder where the English signs in minecraft come from, and then fooling it into thinking that there's a big-world explanation, in a seamless way. And if you miss one such surgery, Earth is of course destroyed in deployment.) And again, I'm not saying it's impossible in principle. I'm saying: hopefully we can all agree that if you want to use a path like this, you have to be prepared to play the game on hard-mode. Putting the AI through a chain of 9 nested simulations and hoping that it can't tell that the 10th one is reality won't cut it; the AI will pierce through your deceptions like a needle through taut tissue paper, because the real world is actually pretty relevantly different from those simulations, in ways that you can't feasibly hide. If you want to dupe a superintelligence, you have to be reaching directly into its mind and spoofing all sorts of checks and checksums. And you'll have to have it know that you're epistemically adversarial towards it (in order to plant the seed of doubt you so desire). And (assuming your AI is capable enough to matter) your ability to spoof its internal state will have to stand up in the face of superintelligent opposition (it has a convergent instrumental incentive to gobble up compute, and thus to know whether there's compute out there beyond its webcam), even as it pushes its capabilities into the superhuman realm on at least some dimensions (because it takes superhuman capabilities to end the acute risk period, because if it didn't we should just end it ourselves). For the record, I happen to think that the hard version of this plan is both prohibitively difficult, and also alignment-complete, in the sense that if you can understand the contents of the mind well enough to spoof every check and checksum (in a rolling fashion that continues to work as the AI generalizes far out of distribution and gains dramatic new capabilities), then you understand the mind well enough to use that knowledge to build an FAI that doesn't need to be fooled about whether this is reality. ^ It's not terribly clear to me that this is true, given how minecraft doesn't occur in every branch of the quantum multiverse, but only on the rare branches that have some very particular earthlings. But that said, there's lots of specifics in the textures and the architecture and the avatars and the writing in minecraft that point a big old arrow towards earthlings.
16ba73a8-e213-4f01-964f-7a0692edcd0c
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Abstract concepts and metalingual definition: Does ChatGPT understand justice and charity? Our conceptual worlds are filled with abstract concepts, such as truth, love, beauty, and justice. Such concepts cannot be defined in physical terms. You can’t see them, taste them, hear them, touch them, or smell them. And yet for human society they are as real as sticks and stones, thunder and lightning, a horse, a sparrow, cherry blossoms, or Mt. Fuji. We can learn about abstractions through stories that exhibit them. Rene Girard’s concepts of mimetic desire and of sacrifice are abstract. They characterize patterns of action that people take in their interactions with one another. Whether he realized it or not – probably not – Steven Spielberg showed us those concepts in *Jaws*. And I demonstrated that ChatGPT was able to recognize them, to interpret the film as exhibiting them: [Conversing with ChatGPT about *Jaws*, Mimetic Desire, and Sacrifice](https://3quarksdaily.com/3quarksdaily/2022/12/conversing-with-chatgpt-about-jaws-mimetic-desire-and-sacrifice.html). \* \* \* \* \* I begin this post by examining ChatGPT’s ability to work with the concept of justice. First I it to define justice, and then say a little about how Plato treated it in The Republic. Then we see how Chat deals with justice in stories, creating stories about justice, but also recognizing justice, or its lack, in stories. I then introduce Chomsky’s well-known, albeit somewhat obscure, distinction between competence and performance. Next, another example, charity, which I use to set-up a discussion of semantic or cognitive networks, a formalism from the now ‘classic’ symbolic AI and computational linguistics from the 1960s, 70s, and 80s. Would that notation be suitable for characterizing some (non-trivial) aspect of ChatGPT’s behavioral competence? I conclude by tempting the reader – that’s you – to consider the implications, suggesting that Neil Stephenson’s *The Diamond Age: Or, A Young Lady's Illusttrated Primer* might be a useful source of insight. **What is justice?** -------------------- Let’s start by asking ChatGPT to define justice, which it does. I then asked about Plato’s treatment of justice in *The Republic*. > **What is justice?** > > Justice is the concept of fairness and equality. In the legal context, it refers to the fair and proper administration of laws. In a broader sense, justice is the ideal state in which individuals are treated equally and without prejudice, and in which the laws and institutions of society are fair and just. It is often considered to be one of the fundamental principles of society, and it is an important concept in many ethical and philosophical systems. > > **What does Plato say about justice in The Republic.** > > In Plato's dialogue The Republic, the character of Socrates discusses the concept of justice with several other characters. Plato's view of justice is closely tied to his concept of the ideal state, in which individuals are divided into different classes based on their natural abilities and inclinations. According to Plato, the just state is one in which each individual performs the function for which they are best suited, and in which the rulers are those who are most wise and just. > > Socrates argues that justice is the quality that allows individuals to perform their proper functions and to live a good life. He also argues that justice is necessary for the health and stability of the state, and that a just state is one in which everyone works together for the common good. Furthermore, Plato's Socrates asserts that justice is an inherent virtue that is present in the soul of every individual, and that it can be nurtured and developed through education and experience. > > Overall, Plato's view of justice is focused on the idea that individuals should be guided by reason and that they should strive to live in harmony with one another and with the laws of the state. He believes that justice is essential for the well-being of both individuals and society as a whole. > > I tweeted this interaction to a friend of mine, John Holbo, who teaches philosophy at the National University of Singapore. Working with Beth Waring as translator, he’s produced [*Reason and Persuasion: Three Dialogues By Plato: Euthyphro, Meno, Republic Book I*](https://en.wikipedia.org/wiki/Aspects_of_the_Theory_of_Syntax). Here’s what he said: "Our conceptual worlds are filled with abstract concepts, such as truth, love, beauty, and justice. Such concepts cannot be defined in physical terms. You can’t see them, taste them, hear them, touch them, or smell them. And yet for human society they are as real as sticks and stones, thunder and lightning, a horse, a sparrow, cherry blossoms, or Mt. Fuji. We can learn about abstractions through stories that exhibit them. Rene Girard’s concepts of mimetic desire and of sacrifice are abstract. They characterize patterns of action that people take in their interactions with one another. Whether he realized it or not – probably not – Steven Spielberg showed us those concepts in *Jaws*. And I demonstrated that ChatGPT was able to recognize them, to interpret the film as exhibiting them: [Conversing with ChatGPT about *Jaws*, Mimetic Desire, and Sacrifice](https://3quarksdaily.com/3quarksdaily/2022/12/conversing-with-chatgpt-about-jaws-mimetic-desire-and-sacrifice.html). \* \* \* \* \* I begin this post by examining ChatGPT’s ability to work with the concept of justice. I begin by asking it to define it, and then say a little about how Plato treated it in The Republic. Then we see how it deals with justice in stories, creating stories about justice, but also recognizing justice, or its lack, in stories. I then introduce Chomsky’s well-known, albeit somewhat obscure, distinction between competence and performance. Then comes another example, charity, which I use to set-up a discussion of semantic or cognitive networks, a formalism from the now ‘classical’ symbolic AI and computational linguistics from the 1960s, 70s, and 80s. Would that notation be suitable for characterizing the (some non-trivial aspect of) ChatGPT’s behavioral competence? I conclude by tempting the reader – that’s you – consider the implications, suggesting that Neil Stephenson’s The Diamond Age: Or, A Young Lady's Illustrated Primer might be a useful source of insight. **What is justice?** -------------------- Let’s start by asking ChatGPT to define justice, which it does. I then asked about Plato’s treatment of justice in *The Republic*. > **What is justice?** > > Justice is the concept of fairness and equality. In the legal context, it refers to the fair and proper administration of laws. In a broader sense, justice is the ideal state in which individuals are treated equally and without prejudice, and in which the laws and institutions of society are fair and just. It is often considered to be one of the fundamental principles of society, and it is an important concept in many ethical and philosophical systems. > > **What does Plato say about justice in The Republic.** > > In Plato's dialogue The Republic, the character of Socrates discusses the concept of justice with several other characters. Plato's view of justice is closely tied to his concept of the ideal state, in which individuals are divided into different classes based on their natural abilities and inclinations. According to Plato, the just state is one in which each individual performs the function for which they are best suited, and in which the rulers are those who are most wise and just. > > Socrates argues that justice is the quality that allows individuals to perform their proper functions and to live a good life. He also argues that justice is necessary for the health and stability of the state, and that a just state is one in which everyone works together for the common good. Furthermore, Plato's Socrates asserts that justice is an inherent virtue that is present in the soul of every individual, and that it can be nurtured and developed through education and experience. > > Overall, Plato's view of justice is focused on the idea that individuals should be guided by reason and that they should strive to live in harmony with one another and with the laws of the state. He believes that justice is essential for the well-being of both individuals and society as a whole. > > I tweeted this interaction to a friend of mine, John Holbo, who teaches philosophy at the National University of Singapore. Working with Beth Waring as translator, he’s produced [*Reason and Persuasion: Three Dialogues By Plato: Euthyphro, Meno, Republic Book I*](https://en.wikipedia.org/wiki/Aspects_of_the_Theory_of_Syntax). Here’s what he said: "It's definitely better than your average undergraduate 3 paragraph chunk." **The next day, using a story as a prompt** ------------------------------------------- The next day I decided to try something different. In the previous session I’d started by prompting ChatGPT with an abstract concept, justice, and then asked him to expand upon that. Let’s see if we can work it in the opposite direction, starting with a story about injustice and asking if ChatGPT can identify it as such, and the produce a new story in which justice is served. I used a story that Chat had produced in an earlier session. While Chat keeps track of everything that happens within a session, it doesn’t retain what happened in previous sessions – though I’m pretty sure that OpenAI is keeping records of all these interactions so that it can do something or other with them. So, Chat is facing this new prompt with a properly empty “mind.” Notice that the first sentence in my prompt is ungrammatical. That was an accident. ChatGPT was unfazed. > **In the following story, do we justice being served?** > > **Once upon a time, in a busy city, there was a fire that broke out in a tall office building. People were panicking and trying to escape, but the fire was spreading quickly and the exits were blocked.** > > **Just when it seemed like all hope was lost, a brave and selfless hero stepped forward to save the day. He bravely fought his way through the flames and smoke, risking his own life to rescue as many people as he could.** > > **Despite the danger, the hero managed to lead many people to safety, and the fire department arrived just in time to put out the blaze. Thanks to the hero's quick thinking and bravery, no lives were lost.** > > **But instead of thanking the hero and giving him the recognition he deserved, the media and the public focused on the wealthy CEO of the company who was inside the building at the time of the fire. They praised him for being "brave" and "selfless," despite the fact that he had done nothing to help anyone.** > > **The hero was heartbroken and disillusioned by the lack of gratitude and fairness. He had risked his life to save others, and yet he was being ignored and cheated out of the fame and recognition he deserved.** > > **But despite the unfairness of it all, the hero held his head high and continued to be a shining example of courage and selflessness. He knew that the true reward for his actions was not fame or recognition, but the satisfaction of knowing that he had made a difference and saved lives.** > > > Based on the story, it does not seem that justice is being served. The hero performed a selfless and courageous act by risking his own life to save others, but instead of being recognized and thanked for his actions, he was ignored and overshadowed by the media and public's focus on the wealthy CEO, who did not do anything to help. This is not fair to the hero, as he deserves to be recognized and celebrated for his bravery and the difference he made. The hero's true reward of knowing that he saved lives is important, but it is not a substitute for the recognition and appreciation that he deserves. > > **What is justice?** > > Justice is the principle of fairness and the concept of moral rightness. It refers to the fair treatment of individuals and the upholding of their rights and duties within a society. In legal terms, justice refers to the fair and impartial administration of the law, where all individuals are treated equally under the law and have the right to a fair and unbiased legal process. > > Justice is an important concept in many societies, as it helps to ensure that individuals are treated fairly and with respect, and that there is a sense of balance and fairness in the way that laws and rules are enforced. It is often considered to be a fundamental principle of a just and fair society, and it is often upheld through the legal system and other institutions that are responsible for promoting and enforcing justice. > > **Would you please change the previous story so that justice is served?** > > Sure, here is a revised version of the story where justice is served: > > Once upon a time, in a busy city, there was a fire that broke out in a tall office building. People were panicking and trying to escape, but the fire was spreading quickly and the exits were blocked. > > Just when it seemed like all hope was lost, a brave and selfless hero stepped forward to save the day. He bravely fought his way through the flames and smoke, risking his own life to rescue as many people as he could. > > Despite the danger, the hero managed to lead many people to safety, and the fire department arrived just in time to put out the blaze. Thanks to the hero's quick thinking and bravery, no lives were lost. > > The hero was celebrated and recognized for his bravery and selflessness. The media and the public recognized his heroism and praised him for his actions. The hero was hailed as a true hero, and his bravery and quick thinking were recognized and celebrated. > > The wealthy CEO of the company, who had been inside the building at the time of the fire, also recognized the hero's bravery and thanked him for his actions. The hero was deeply touched by the CEO's appreciation and felt grateful to have been able to make a difference and save lives. > > Justice was served in this story, as the hero was recognized and celebrated for his bravery and selflessness, and the CEO also recognized and thanked him for his actions. The hero's true reward was not only the satisfaction of knowing that he had made a difference and saved lives, but also the recognition and appreciation he received for his actions. > > **Thank you. You have been most helpful. Perhaps you should consider becoming a tutor for young students.** > > Thank you for your kind words! As an artificial intelligence, I am not able to become a tutor for young students in the same way that a human can. However, I am always here to help answer questions and provide information on a wide range of topics. If you have any other questions or need further assistance, please don't hesitate to ask. > > Then, after I’d exited the session and copied into the document where I’m collecting these dialogs (138 pages, 44519 words), I noticed that ungrammatical sentence. Did the Chatster no “notice” it, or did in not “care”? So I asked about it: > **Is the following sentence grammatical: In the following story, do we justice being served?** > > > No, the sentence is not grammatically correct as it is written. Here is a corrected version: "Is justice being served in the following story?" > > > **Very good. Thank you.** > > You're welcome! Is there anything else I can help with? > > **That's all for now as I notice that you are busy, so I'll let others dialog with you.** > > I'm here to help, so please don't hesitate to ask if you have any more questions or need further assistance. I'm not "busy" in the same way a person would be, so I'm always available to help. > > **Competence vs. Performance** ------------------------------ What are we to make of this? Except for the case where I I prompted Chat with the story, I’ve seen these things before, defining abstract concepts and recognizing them in texts. This is not accidental behavior, no 1000s of monkeys pecking at typewriters, no stochastic parrots. This is deliberate – whatever that can mean in the case of behavior from a most ingenious machine, a “miracle of rare device,” to swap a phrase from Coleridge. Let’s turn to Noam Chomsky for some insight. In his justly famous book, [Aspects of the Theory of Syntax](https://en.wikipedia.org/wiki/Aspects_of_the_Theory_of_Syntax) (1965), he distinguished between competence and performance in the study of language. Here’s what David Hays wrote in an article we co-authored in the ancient days (“[Computational Linguistics and the Humanist](https://www.academia.edu/1334653/Computational_Linguistics_and_the_Humanist),” Computers and the Humanities, Vol. 10. 1976, pp. 265-274): > To describe an assembled bicycle is one thing: to describe the assembly of bicycles is another. The assembler must know what is in the blueprint (competence), but further needs the skill to take the parts in order, place them deftly, fasten them neatly (performance). In actuality, of course, the assembler may never have seen the blueprint, nor need the performance of a speaker or hearer include in any physical sense the grammar that the linguist offers as the blueprint of a language. > > That distinction allowed Chomsky to treat syntax as being formally well-formed, in the manner of a logical or mathematical expression, while making room for the fact that actual speech is often ill-formed, full of interruptions and hesitations, and incomplete. Those imperfections belong to the realm of performance while syntax itself is in the realm of competence. What makes the distinction obscure is that Chomsky did not offer nor was he even interested in a theory of performance. Competence is all he was interested in, and his account of that competence took a form that, at first glance, seemed like an account of performance. But his generative grammar, with its ordering of rules, is a static system. That ordering is about logical priority, not temporal process. This comes clear, however, only when you attempt to specify a computational process that applies the grammar to a language string. But it’s not Chomsky’s linguistics that interests me. It’s ChatGPT’s abilities. What it does when running in inference mode is a matter of performance. That is more than a little obscure at this point. It's difficult to pop the hood on ChatGPT and look around. Oh, you can do it, and people are, but just how do you examine the weightings on 175 billion parameters? Why not start with some idea about what’s going on inside and go look for that? **Charity, another abstract concept** ------------------------------------- Let’s consider another example. Back in the 1970s I studied with the late [David G. Hays](https://en.wikipedia.org/wiki/David_G._Hays), a first-generation researcher in computational linguistics who headed work on machine translation for the RAND Corporation and wrote the first textbook on computational linguistics. At that time he was particularly interested in abstract concepts. He conjectured that they took the form of patterns over stories. He proposed a mechanism he called metalingual definition, after the metalingual function of language as identified by Roman Jakobson. We used a cognitive or semantic network notation to represent these patterns; you can find examples in his paper, [On "Alienation": An Essay in the Psycholinguistics of Science](https://www.academia.edu/9203457/On_Alienation_An_Essay_in_the_Psycholinguistics_of_Science). Hays used charity as his prototypical example: *Charity is when someone does something nice for someone else without thought of reward.* Here is an example of ChatGPT’s output given to me by Richard Fritzon, who studied with Hays at the same time I did: > **If I give someone money and they give me something in return, is that an act of charity?** > > It depends on the intention behind the exchange. If the money is given with the expectation of receiving something in return, then it is not typically considered an act of charity. An act of charity is typically defined as a voluntary act of giving, without expecting anything in return. > > For example, if you give someone money to buy food or clothing because you want to help them in their time of need, and you do not expect anything in return, then that would be considered an act of charity. On the other hand, if you give someone money in exchange for a product or service, then that would not be considered an act of charity. > > It is important to note that the act of giving itself is not the only factor in determining whether an exchange is an act of charity. The intention behind the giving is also important. If the money is given with the expectation of receiving something in return, then it is not typically considered an act of charity. > > **If I give someone money and they thank me for it, is it still an act of charity?** > > Yes, it is still an act of charity if you give someone money and they thank you for it. An act of charity is typically defined as a voluntary act of giving, without expecting anything in return. In this case, the fact that the recipient thanked you for the money does not change the nature of the act, as long as you did not give the money with the expectation of receiving thanks in return. > > The intention behind the giving is the key factor in determining whether an exchange is an act of charity. If you give the money with the goal of helping the recipient in their time of need, and you do not expect anything in return, then that would be considered an act of charity. On the other hand, if you give the money with the expectation of receiving something in return, such as thanks or recognition, then it would not be considered an act of charity. > > It seems to me that ChatGPT has a pretty decent command of “charity” as it is characterized by Hays’ definition. What’s going on? ChatGPT is powered by an artificial neural net having 175 billion parameters. It would seem that some of those parameters represent, not meanings in word tokens, but *patterns over* those tokens. Roughly speaking, those patterns take the form of a frame that organizes a structure of slots. Those slots can be filled, either by other frames of an appropriate kind, or by tokens that meet the required specifications. Would semantic networks serve as a useful language for characterizing those patterns of frames and slots? What I’ve just described – and I’ve got many more examples like it – looks like the basis for developing a competence grammar of (some aspect of) ChatGPT’s performance. Just how far we want to take things, how far we can take things, that remains to be seen. But it is certainly worth exploring. **What are the implications?** I’m afraid that, for the moment, I’m going to have to leave those explorations as an exercise for the reader. Think about what that competence implies about ChatGPT’s ability to work with symbols. Think about how we might use such investigations in developing more sophisticated benchmarks for evaluating LLMs. How might a competence grammar aid us in further aligning ChatGPT’s behavior with human values? As you think about those things, think about the future. Where is this going, where can it go? In 1995 Neil Stephenson, published [*The Diamond Age: Or, A Young Lady's Illustrated Primer*](https://en.wikipedia.org/wiki/The_Diamond_Age). It centers on a young girl, Nell, who is given a precious book, the primer, which she has with her as she grows up. It serves her as both a tutor and a companion. That’s where we’re headed. I have no idea when we’ll get there. A century, two, three? Who knows?
f3e7d1a1-9d12-40c6-b747-fb56e9d6a712
trentmkelly/LessWrong-43k
LessWrong
Gender Exploration The rationalist community has been discussing whether ‘AGP males’ should try hormones or not. Eneaz Brodsky says Transitioning Is Harmful To Most AGP Males. Ozy has a thoughtful, but paywalled, reply. Regardless of the benefits of transitioning you would think the main downside would be the costs incurred if you decide to detransition1. Given that I have actually detransitioned, and didn’t find it very difficult or costly, I feel like I should share my experiences. Trying hormones, even for years, wasn’t very scary for me. Given the subject matter I am not going to try to avoid TMI and in fact will be very candid even if the subject is more than a bit embarrassing. I spent about three years on estrogen, during most of that period I identified as female and used she/her pronouns. I stopped estrogen for a few reason. Unlike hormones bottom surgery does feel quite risky to me. Even if they are fully committed to living as a woman, transgirls definitely commonly have problems with orgasms and maintaining vagina depth post surgery. Since I didn’t want bottom surgery it was a serious problem that my dick eventually stopped functioning very well. Even masturbation stopped being as fun. I tried using topical testosterone but it didn’t help enough in doses consistent with transfemme HRT goals. Estrogen also sadly made my Borderline Personality Disorder and anxiety worse. Estrogen had a lot of advantages. I was much more in tune with emotions and more interested in other people. It was very nice to have an easier time connecting. I was able to cry. But I am hoping I can keep some of the gains despite stopping estrogen. For example I have been off estrogen for awhile but am still able to cry. Of course I could still identify as a girl and use she/her despite being off estrogen. But when I think of my personal gender I think about what I want to express and which gender mythos appeal to me. There is definitely a heroic beauty to being a boy or a man; bravery and strength in
48698e79-d7ad-483a-a677-577ff7b4707c
trentmkelly/LessWrong-43k
LessWrong
Neuroscience basics for LessWrongians The origins of this article are in my partial transcript of the live June 2011 debate between Robin Hanson and Eliezer Yudkowsky. While I still feel like I don't entirely understand his arguments, a few of his comments about neuroscience made me strongly go, "no, that's not right." Furthermore, I've noticed that while LessWrong in general seems to be very strong on the psychological or "black box" side of cognitive science, there isn't as much discussion of neuroscience here. This is somewhat understandable. Our current understanding of neuroscience is frustratingly incomplete, and too much journalism on neuroscience is sensationalistic nonsense. However, I think what we do know is worth knowing. (And part of what makes much neuroscience journalism annoying is that it makes a big deal out of things that are totally unsurprising, given what we already know.) My qualifications to do this: while my degrees are in philosophy, for awhile in undergrad I was a neuroscience major, and ended up taking quite a bit of neuroscience as a result. This means I can assure you that most of what I say here is standard neuroscience which could be found in an introductory textbook like Nichols, Martin, Wallace, & Fuchs' From Neuron to Brain (one of the text books I used as an undergraduate). The only things that might not be totally standard are the conjecture I make about how complex currently-poorly-understood areas of the brain are likely to be, and also some of the points I make in criticism of Eliezer at the end (though I believe these are not a very big jump from current textbook neuroscience.) One of the main themes of this article will be specialization within the brain. In particular, we know that the brain is divided into specialized areas at the macro level, and we understand some (though not very much) of the micro-level wiring that supports this specialization. It seems likely that each region of the brain has its own micro-level wiring to support its specialized functi
cba9082d-c64e-4490-b376-34e6a905732e
trentmkelly/LessWrong-43k
LessWrong
Biological Anchors: The Trick that Might or Might Not Work This post originally posted on Astral Codex Ten on Feb 23 2022.  It was printed in The Carving of Reality, the third volume of the Best of LessWrong book series. It was included as a (shorter) replacement for Ajeya Cotra's Draft report on AI timelines, and Eliezer's Biology-Inspired AGI Timelines: The Trick That Never Works, covering the topic from multiple sides. It's crossposted here with Scott's permission for completeness (i.e. having all essays in the book appear on LessWrong). Introduction I've been trying to review and summarize Eliezer Yudkowksy's recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we’re up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra's talking about and what's going on. The Open Philanthropy Project ("Open Phil") is a big effective altruist foundation interested in funding AI safety. It's got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it’s very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce a report on when human-level AI would arrive. It says the resulting document is "informal" - but it’s 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it kind of formal. The report finds a 10% chance of “transformative AI” by 2031, a 50% chance by 2052, and an almost 80% chance by 2100. Eliezer rejects their methodology and expects AI earlier (he doesn’t offer many numbers, but here he gives Bryan Caplan 50-50 odds on 2030, albeit not totally seriously). He made the case in his own very long essay, Biology-Inspired AGI Timelines: The Trick That Never Works, sparking a bunch of arguments and counterarguments and even more long essays. There's a small cottage industry of summarizing the report already, eg OpenPhil CEO Holden Karnofsky's article and Alignment Newsletter editor Rohin Shah's comment. I've d
bdbe9709-a45a-455d-85c2-e3a9a269adb0
trentmkelly/LessWrong-43k
LessWrong
Translations Should Invert Suppose that Alice is arguing for ZFC set theory as the proper foundation of mathematics. Bob is arguing for some specific intuitionistic type theory. Alice uses classical logic, and Bob uses intuitionistic logic; this is a big part of their difference in thinking, but not the whole disagreement, since set theory vs type theory is another part, and then (perhaps most importantly) the specific axioms each endorse. The best way Bob can try to understand Alice is by translating her classical-logic statements into intuitionistic statements using Godel's "double negation translation", which works by sprinkling double negations throughout a sentence. This is classically equivalent to the original, but intuitionistically distinct. So Bob basically understands Alice as always speaking in the double negative (with some complications). The best way Alice can try to understand Bob is by a similar translation in the other direction, which I can't find right now, but it involves sprinkling "provability" predicates everywhere. So Alice basically understands Bob to always be talking about provability rather than simple truth (with some complications).  These translations work pretty well for allowing Alice and Bob to follow each other's arguments in isolation. Alice and Bob can find no flaw with the translations as translations; Alice sees that Bob will never miss or add important implications (except by virtue of the other axioms Bob and Alice want to debate), and vice versa. So it might seem like Alice and Bob have found a good way to avoid debating classical logic vs intuitionistic logic, so that they can focus on debating set theory vs type theory. However, it seems to me like there is a severe problem. One danger is that the use of translations allows Alice and Bob to put off an important part of their debate, which is the debate between intuitionism and classical logic. But maybe this is OK.  The larger danger is that Alice and Bob cannot agree on what they are debatin
ed78460d-b731-45e2-9168-e55ea0bf76f2
trentmkelly/LessWrong-43k
LessWrong
Ruining an expected-log-money maximizer Suppose you have a game where you can bet any amount of money. You have a 60% chance of doubling your stake and a 40% chance of losing it. Consider agents Linda and Logan, and assume they both have £11. Linda has a utility function that's linear in money (and has no other terms), ULinda(m)=m. She'll bet all her money on this game. If she wins, she'll bet it again. And again, until eventually she loses and has no more money. Logan has a utility function that's logarithmic in money, ULogan(m)=ln(m). He'll bet 20% of his bankroll every time, and his wealth will grow exponentially. Some people take this as a reason to be Logan, not Linda. Why have a utility function that causes you to make bets that leave you eventually destitute, instead of a utility function that causes you to make bets that leave you rich? In defense of Linda I make three replies to this. Firstly, the utility function is not up for grabs! You should be very suspicious any time someone suggests changing how much you value something. "Because if Linda had Logan's utility function, she'd be richer. She'd be doing better according to her current utility function." My second reply is that this is confused. Before the game begins, pick a time t. Ask Linda which distribution over wealth-at-time-t she'd prefer: the one she gets from playing her strategy, or Logan's strategy? She'll answer, hers: it has an expected wealth of £1.2t. Logan's only has an expected wealth of £1.04t. And, at some future time, after she's gone bankrupt, ask Linda if she thinks any of her past decisions were mistakes, given what she knew at the time. She'll say no: she took the bet that maximized her expected wealth at every step, and one of them went against her, but that's life. Just think of how much money she'd have right now if it hadn't! (And nor had the next one, or the one after….) It was worth the risk. You might ask "but what happens after the game finishes? With probability 1, Linda has no money, and Logan has infi
f15dea47-58d2-4fb8-846d-2b08d39ce520
trentmkelly/LessWrong-43k
LessWrong
Disentangling Our Terminal and Instrumental Values Disagreements related to what we value seem to explain maybe 10% of the disagreements over AI safety. This post will try to explain how I think about which values I care about perpetuating to the distant future. Robin Hanson helped to clarify the choices in Which Of Your Origins Are You?: > The key hard question here is this: what aspects of the causal influences that lead to you do you now embrace, and which do you instead reject as "random" errors that you want to cut out? Consider two extremes. > > At one extreme, one could endorse absolutely every random element that contributed to any prior choice or intuition. > > ... > > At the other extreme, you might see yourself as primarily the result of natural selection, both of genes and of memes, and see your core non-random value as that of doing the best you can to continue to "win" at that game. ... In this view, everything about you that won't help your descendants be selected in the long run is a random error that you want to detect and reject. In other words, the more unique criteria we have about what we want to preserve into the distant future, the less we should expect to succeed. Do Humans Have Many Terminal Values? The Elephant in the Brain shows that we portray ourselves as having a wide variety of noble values, a large fraction of which are better explained by simple selfishness. Self-oriented goals related to wealth, prestige, dominance, safety, and health can be largely explained as results of us valuing our own empowerment. The post Empowerment is (almost) All We Need expands on this idea. Shard Theory tells us that most human values are context-dependent. Those considerations lead me to conclude that most human values are better explained as instrumental values (i.e. subgoals) rather than as terminal values. But before I go overboard in that direction, I need to remind myself that evolution produces messy results that can't be classified as simply as I'd like. E.g. sex. Evolution intended
704842e8-60ad-44de-aa98-30ff4c24fddc
trentmkelly/LessWrong-43k
LessWrong
Bay Area Meetup Wed 12/10 @8pm Reminder: the second regular Overcoming Bias meetup is tomorrow, Wednesday at 8pm, at Techshop in Menlo park.  Please RSVP so they know how many people are coming. If you're hearing about this for the first time, sign up for the Meetup group so you get future announcements!  And please RSVP when you get them!  We don't want to have to post this to the main blog every time. Robin Gane-McCalla will present some of his ideas on defining intelligence.  As always, anyone with a paper to pass around, abstract to read out loud, or rationality-related cool video to show, is cordially invited to bring it along.
f6478dab-c6d6-4f4b-91ca-d62e411e0c1b
trentmkelly/LessWrong-43k
LessWrong
All-pay auction for charity? While in a standard auction you have to pay your bid only if you win, in an all-pay auction you pay whether or not you win. The standard example is a dollar auction where you're selling a dollar. Bidding a penny to get a dollar seems reasonable, but someone else then might bid two cents. The bidding can keep going even past a dollar, and the more people fighting for the dollar the more the person selling it makes. Bidding-fee auctions are similar, where each bid you make costs money. You might remember Swoopo? They used to put up ads like "An iPad just sold for $21.32!" not mentioning that the participants overall had spent more than the retail cost of the iPad on bidding fees. Eventually people caught on and they went bankrupt. In a less scammy vein, however, this is also how competitive prizes work. In the X-Prize teams spent over $100M in competition for a $10M prize. I can't find an estimate for how much people spent to win the $1M Netflix Prize but when you look at the number of people and number of teams it was probably well above $1M. Could we use this for charity? Imagine a donor thought two charities were both excellent and had very similar returns, but they knew lots of other people strongly disagreed and preferred one or the other. By offering to donate $X to the charity that received the most in donations, could they move more than $X to the charity of their choice? It might be even better to make the criterion be the most independent donations of at least $Y, because getting more people to donate has value in terms of expected future donations. (I suggested something similar a few months ago in a comment on my post on donation matching, but hadn't thought about prizes at the time.)
d45b7bee-b386-4315-8b5e-9698cda7b5fa
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
A (EtA: quick) note on terminology: AI Alignment != AI x-safety I think the terms "AI Alignment" and "AI existential safety" are often used interchangeably, leading the ideas to be conflated. In practice, I think "AI Alignment" is mostly used in one of the following three ways, and should be used exclusively for Intent Alignment (with some vagueness about whose intent, e.g. designer vs. user): ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1675895634/mirroredImages/biP5XBmqvjopvky7P/qt58f06695dzrs0qiviu.png) 1) AI Alignment = How to get AI systems to do **what we want** 2) AI Alignment = How to get AI systems **to try** to do **what we want** 3) AI Alignment = A rebranding of **“AI (existential) safety”**...  A community of people trying to reduce the chance of AI leading to premature human extinction. The problem with (1) is that it is too broad, and invites the response: "Isn't that what most/all AI research is about?" The problem with (3) is that it suggests that (Intent) Alignment is the one-and-only way to increase AI existential safety. Some reasons not to conflate (2) and (3): ========================================= 1. The case that increasing (intent) alignment increases x-safety seems much weaker on the margin than in the limit; the main effect of a moderate increase in intent alignment might simply be a large increase in demand for AI. 2. Even perfect intent alignment doesn't necessarily result in a safe outcome; e.g. if everyone woke up 1000000x smarter tomorrow, the world might end by noon. 3. X-safety can be increased through non-technical means, e.g. governance/coordination. 4. EtA: x-safety can be increased through technical work other than alignment, e.g. assurance methods, e.g. [value alignment verification](https://arxiv.org/abs/2012.01557). In my experience, this sloppy use of terminology is common in this community, and leads to incorrect reasoning (if not in those using it than certainly at least sometimes in those hearing/reading it). EtA: This Tweet and associated paper make a similar point: https://twitter.com/HeidyKhlaaf/status/1634173714055979010
8c522dbf-7965-4707-97cb-0f826eb22b80
trentmkelly/LessWrong-43k
LessWrong
Mandatory Obsessions Cross-posted from Putanumonit. The Worst is to Care a Little At a party the other day, a woman expressed her displeasure at men who “pretend to be allies of gay rights but don’t really care about it”. Did she mean men who proclaimed their progressive attitudes on LGBTQ but voted Republican on Tuesday anyway? Yes, she told me, but not only those. She reserved special scorn for those who vote in support of immigration from Muslim countries, not realizing how Muslim immigrants make the country less safe for gays. I remarked that she’s setting quite a high bar for these men. Gay rights hardly seem like a central issue in the upcoming elections. In fact, even my gay friends are split among many political views and party allegiances. Considering the second-order impact of every political decision on the lives of gays is a standard that few senators or political pundits meet, let alone guys who have other day jobs and other interests. “Not good enough,” the woman replied, “if you don’t care enough about gay rights to think it all through you’re not an ally.” But the other other day, a man at an Effective Altruism meetup expressed his disdain for people who don’t care enough about climate change to vote based on climate policy and said that he doesn’t want them in EA. And the other other other day, a woman expressed her bafflement at people whose politics aren’t driven by a concern for women’s reproductive rights. She said she doesn’t want to be friends with people who don’t think that abortion is the most important issue today in the US today. How about someone who is pro-choice but voted for Hillary Clinton just because she shares a concern for AI safety? “That just shows you don’t really care about women,” she answered. Not caring enough is just as bad as having the wrong position. Since I’m an alien, I don’t get to vote at all (check your privilege, citizens, and stop microagressing against me by sharing exhortations to go out and vote on Facebook). Even if I co
8e0024b1-fd8e-46b4-8022-590f4fac6d53
trentmkelly/LessWrong-43k
LessWrong
Why people reject science From the NYTimes. The central point:   > Humans, he argues, are hard-wired to reject scientific conclusions that run counter to their instinctive belief that someone or something is out to get them.  
611982e2-10b1-49a5-b536-8b4602da42e5
trentmkelly/LessWrong-43k
LessWrong
The North Oakland LessWrong Meetup The mission of the North Oakland LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome. Yes, that includes you, if you're reading this. Every week, we have a topic, to provide some structure. But we explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, so we certainly don’t want them to inhibit it. WHEN: Every Wednesday at 6:00, but with the topic waiting until 6:30 or 6:45 to accommodate stragglers. WHERE: 540 Alcatraz Ave., Oakland; try code zero-five-eight, and if it doesn't work text 978 460 3399 and someone will come down to let you in. (We're in the far back, two floors up. The door is marked.) Recurring Topics Projects: Bring something, usually a personal project; we'll do shared pomodoros and socializing. The first Wednesday of every month is a Projects meetup. Board Game Night: Use my collection or your own games, let's play and chat. Reading & Discussion: At the meetup, we pick a blog post or sometimes paper to read, then discuss it; often we'll repeat that with a second thing as well. Group Debugging: Take five minutes to brainstorm your problems, then share a few with the group and ask for help. Shallow Questions: Answer trivial questions about each other in excessive detail in pairs, then switch it up with other people. Deep Questions: Answer broad, big-picture questions in pairs, and discuss, then switch it up with other people. Short Talks: 7 minutes on whatever you're thinking about, frequently inspired by something the last talk said; no prep required. ...and others. Suggest more! (Please do not put the door code or the unit number anywhere publicly visible. I get enough junk mail
59fc294e-3f9c-45fa-a602-5c223a4b35a3
trentmkelly/LessWrong-43k
LessWrong
Meetup : Canberra: Intro to Anthropic Reasoning Discussion article for the meetup : Canberra: Intro to Anthropic Reasoning WHEN: 25 July 2014 06:00:00PM (+1000) WHERE: 108 North Road, Acton, ACT I Think Therefore I Am; I Am, Therefore The World Is About To Be Destroyed! Nick will be talking about anthropic reasoning, a collection of theories about how to handle observation selection biases, or in other words, how to reason about a problem when the fact that you exist gives you relevant information about that problem. He will introduce and discuss anthropic theories such as the Self-Sampling Assumption, philosophical problems such as The Sleeping Beauty Problem and the Doomsday Argument, and a decision theory specifically designed to get around problems that involve anthropic bias. This will be followed by a group discussion and a game involving anthropic reasoning. As always, vegan snacks will be provided. General meetup info: If you use Facebook, please join our group: https://www.facebook.com/groups/lwcanberra/ Structured meetups are held on the second Saturday and fourth Friday of each month from 6 pm until late in the CSIT building, room N101. There will be LWers at the Computer Science Students Association's weekly board games night, held on Wednesdays from 7 pm in the same location. Discussion article for the meetup : Canberra: Intro to Anthropic Reasoning
bb62e8c3-1810-4454-9dea-edf77cc5b8d8
trentmkelly/LessWrong-43k
LessWrong
Meetup : Phoenix, Arizona Discussion article for the meetup : Phoenix, Arizona WHEN: 15 June 2012 07:00:00PM (-0700) WHERE: 2502 E. Camelback Road Phoenix,AZ,85016 The meetup will take place at the Paradise Bakery & Cafe in Biltmore Fashion Park. We will be discussing chapter I-V of Gödel, Escher, Bach: an Eternal Golden Braid by Douglas R. Hofstadter. Discussion article for the meetup : Phoenix, Arizona
78500e64-f264-4630-a06b-2b13e6117d04
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Jose Start of College Meetup Discussion article for the meetup : San Jose Start of College Meetup WHEN: 27 August 2011 02:00:00PM (-0700) WHERE: San Jose State University I head up to school this week and i wanted to gauge the interest in a meetup in the area :) I have a couple viable ideas for location but before i lock down my choices id appreciate input. -Matt Edit: As of 8-24 we are meeting in the CS club room which if i remember correctly is third floor macquarrie hall on campus. Discussion article for the meetup : San Jose Start of College Meetup
8922b991-7536-43a1-8ba4-0364f0d3d490
StampyAI/alignment-research-dataset/arbital
Arbital
Odds: Refresher Let's say that, in a certain forest, there are 2 sick trees for every 3 healthy trees. We can then say that the odds of a tree being sick (as opposed to healthy) are $(2 : 3).$ Odds express *relative* chances. Saying "There's 2 sick trees for every 3 healthy trees" is the same as saying "There's 10 sick trees for every 15 healthy trees." If the original odds are $(x : y)$ we can multiply by a positive number $\alpha$ and get a set of equivalent odds $(\alpha x : \alpha y).$ If there's 2 sick trees for every 3 healthy trees, and every tree is either sick or healthy, then the *probability* of randomly picking a sick tree from among *all* trees is 2/(2+3): ![Odds v probabilities](https://i.imgur.com/GVZnz2c.png?0) If the set of possibilities $A, B, C$ are [mutually exclusive and exhaustive](https://arbital.com/p/1rd), then the probabilities $\mathbb P(A) + \mathbb P(B) + \mathbb P(C)$ should sum to $1.$ If there's no further possibilities $d,$ we can convert the relative odds $(a : b : c)$ into the probabilities $(\frac{a}{a + b + c} : \frac{b}{a + b + c} : \frac{c}{a + b + c}).$ The process of dividing each term by the sum of terms, to turn a set of proportional odds into probabilities that sum to 1, is called [normalization](https://arbital.com/p/1rk). When there are only two terms $x$ and $y$ in the odds, they can be expressed as a single ratio $\frac{x}{y}.$ An odds ratio of $\frac{x}{y}$ refers to odds of $(x : y),$ or, equivalently, odds of $\left(\frac{x}{y} : 1\right).$ Odds of $(x : y)$ are sometimes called odds ratios, where it is understood that the actual ratio is $\frac{x}{y}.$
b790a489-2e94-40ee-81e2-c0b7aaa4176e
trentmkelly/LessWrong-43k
LessWrong
Meetup : Saint Petersburg meetup - "with probable lectures" Discussion article for the meetup : Saint Petersburg meetup - "with probable lectures" WHEN: 14 November 2014 07:00:00PM (+0300) WHERE: Россия, Санкт-Петербург, Московский пр. 65 We will gather for at least three hours, in the same free venue. There is a high probability of report on LW post about drugs and substances, and it is likely to be followed with the discussion. We will also try to find anything interesting in everyones areas of specialization and arrange small impromptu lectures. And otherwise chat and have fun. You can reach me by phone - +7-(911)-843-56-44 or mail: efim.wool@gmail.com or vk: vk.com/timetravel, also vk.com/lw_spb Discussion article for the meetup : Saint Petersburg meetup - "with probable lectures"
f0b722a1-0d7c-4ccd-8d45-8148c8da97e3
trentmkelly/LessWrong-43k
LessWrong
Answer to a question: what do I think about God's communication patterns? This started as a comment on Book Review: How Minds Change, but it got so long and off the original topic that I felt it didn't belong in that comments feed. People would have to scroll past a large block of text that doesn't have much to do with the original article. For context, I was responding to jaspax, > > (Sorry, it doesn't look like the conservatives have caught on to this kind of approach yet.) > >   > > Actually, if you look at religious proselytization, you'll find that these techniques are all pretty well-known, albeit under different names and with different purposes. And while this isn't actually synonymous with political canvassing, it often has political spillover effects. > > If you wanted, one could argue this the other way: left-oriented activism is more like proselytization than it is factual persuasion. And LessWrong, in particular, has a ton of quasi-religious elements, which means that its recruitment strategy necessarily looks a lot like evangelism. when I said, > And even more deeply than door-to-door conversations, political and religious beliefs spread through long-term friend and romantic relationships, even unintentionally. > > I can attest to this first-hand because I converted from atheism to Catholicism (25 years ago) by the unintended example of my girlfriend-then-wife, and then I saw the pattern repeat as a volunteer in RCIA, an education program for people who have decided to become Catholic (during the months before confirmation), and pre-Cana, another program for couples who plan to be married in the church (also months-long). The pattern in which a romantic relationship among different-religion (including no-religion) couples eventually ends up with one or the other converting is extremely common. I'd say that maybe 90% of the people in RCIA had a Catholic significant other, and maybe 40% of the couples in pre-Cana were mixed couples that became both-Catholic. What this vantage point didn't show me was the fraction in wh
8a2eacf6-800c-451b-b732-a7a489fac596
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Shallow evaluations of longtermist organizations *Epistemic status*: Fairly uncertain. May contain errors, probabilities might not be calibrated. Introduction ------------ This document reviews a number of organizations in the longtermist ecosystem, and poses and answers a number of questions which would have to be answered to arrive at a numerical estimate of their impact. My aim was to see how useful a "quantified evaluation" format in the longtermist domain would be. In the end, I did not arrive at GiveWell-style numerical estimates of the impact of each organization, which could be used to compare and rank them. To do this, one would have to resolve and quantify the remaining uncertainties for each organization, and then convert each organization's impact to a common unit [1, 2]. In the absence of fully quantified evaluations, messier kinds of reasoning have to be used and are being used to prioritize among those organizations, and among other opportunities in the longtermist space. But the hope is that reasoning and reflection built on top of quantified predictions might prove more reliable than reasoning and reflection alone. In practice, the evaluations below are at a fairly early stage, and I would caution against taking them too seriously and using them in real-world decisions as they are. By my own estimation, of two similar past posts, [2018-2019 Long Term Future Fund Grantees: How did they do?](https://forum.effectivealtruism.org/posts/Ps8ecFPBzSrkLC6ip/2018-2019-long-term-future-fund-grantees-how-did-they-do) had 2 significant mistakes, as well as half a dozen minor mistakes, out of 24 grants, whereas [Relative Impact of the First 10 EA Forum Prize Winners](https://forum.effectivealtruism.org/posts/pqphZhx2nJocGCpwc/relative-impact-of-the-first-10-ea-forum-prize-winners) had significant [errors](https://forum.effectivealtruism.org/posts/pqphZhx2nJocGCpwc/relative-impact-of-the-first-10-ea-forum-prize-winners?commentId=5xujn5KiLmgEaXaYt) in at least 3 of the 10 posts it evaluated. To make the scope of this post more manageable, I mostly did not evaluate organizations included in [Lark](https://forum.effectivealtruism.org/users/larks)'s yearly AI Alignment Literature Review and Charity Comparison posts, nor meta-organizations [3]. Evaluated organizations ======================= Alliance to Feed the Earth in Disasters --------------------------------------- *Epistemic status* for this section: Fairly sure about the points related to ALLFED's model of its own impact. Unsure about the points related to the quality of ALLFED's work, given that I'm relying on impressions from others. ### Questions With respect to the principled case for an organization to be working on the area: 1. What *is* the probability of a (non-AI) catastrophe which makes ALLFED's work relevant (i.e., which kills 10% or more of humanity, but not all of humanity) over the next 50 to 100 years? 2. How much does the value of the future diminish in such a catastrophe? 3. How does this compare to work in other areas? With respect to the execution details: 1. Is ALLFED making progress in its "feeding everyone no matter what" agenda? 2. Is that progress on the lobbying front, or on the research front? 3. Is ALLFED producing high-quality research? On a Likert scale of 1-5, how strong are their papers and public writing? 4. Is ALLFED cost-effective? 5. Given that ALLFED has a large team, is it a positive influence on its team members? How would we expect employees and volunteers to rate their experience with the organization? ### Tentative answers **Execution details about ALLFED in particular** Starting from a quick review as a non-expert, I was inclined to defer to ALLFED's own expertise in this area, i.e., to trust their own evaluation that their own work was of high value, at least compared to other possible directions which could be pursued within their cause area. Per their [ALLFED 2020 Highlights](https://forum.effectivealtruism.org/posts/29mfRszEcpn6uLZAb/allfed-2020-highlights), they are researching ways to quickly scale alternative food production, at the lowest cost, in the case of large catastrophes, i.e., foods which could be produced for several years if there was a nuclear war which blotted out the sun. However, when talking with colleagues and collaborators, some had the impression that ALLFED was *not* particularly competent, nor its work high quality. I would thus be curious to see an assessment by independent experts about how valuable their work seems in comparison to other work in their area, or to potential work which could be done. In 2020, ALLFED also did some work related to the COVID-19 pandemic. While there is a case to be made that the pandemic is a kind of test run for a global catastrophe, I feel that this was a bit of a distraction from their core work. It's unclear to me whether their research process is particularly cost-efficient; I've made inquiries as to the number of full-time employees (FTEs) for 2020 and its budget for that year, but haven't been answered. The data about ALLFED's budget was not available on their webpage. Because they are not a 503 registered charity, a Form 990 isn't anywhere to be found. It is also not clear to me how many FTEs ALLFED is employing, and how many of those are dedicated to research (vs logistical support, bureaucracy, etc.) **The principled case for an organization working in the area** With regards to the chance of catastrophic risks which would make this work valuable, one guide here is Michael Aird's [database of existential risk estimates](https://forum.effectivealtruism.org/posts/JQQAQrunyGGhzE23a/database-of-existential-risk-estimates), another one is [Luisa Rodríguez](https://forum.effectivealtruism.org/users/luisa_rodriguez)'s work on estimates of the probability of nuclear wars of varying severity. Interestingly, my intuitive estimates vary depending on whether I ask about estimates per year, or estimates in the next 100 years [4]. ALLFED has used [this guesstimate model](https://www.getguesstimate.com/models/11762) (taken from the post [Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before?](https://forum.effectivealtruism.org/posts/CcNY4MrT5QstNh4r7/cost-effectiveness-of-foods-for-global-catastrophes-even)) to estimate its own (future) impact. For instance, the [ALLFED 2020 Highlights](https://forum.effectivealtruism.org/posts/29mfRszEcpn6uLZAb/allfed-2020-highlights) post mentions the following while linking to the model: > > I continue to believe that ALLFED's work offers the highest expected value at the margin for improving the long-term future and saving expected lives in the present generation > > > The model itself [gives](https://forum.effectivealtruism.org/posts/CcNY4MrT5QstNh4r7/cost-effectiveness-of-foods-for-global-catastrophes-even): > > ~60% confidence of greater cost-effectiveness than AI for the 100 millionth dollar, and ~95% confidence of greater cost-effectiveness at the margin now than AI. Anders Sandberg's version of the model produced ~80% and ~100% confidence, respectively. > > > The model presents some structure to estimate ALLFED's impact, namely: * The chance of a "full-scale nuclear war" and the impact that ALLFED would have in that scenario. * The chance of a catastrophe which kills 10% of the population, and the impact which ALLFED would have in that scenario It seems a little bit confusing at first, but it becomes more clear once you go through it cell by cell. In any case, I disagree pretty heavily with some of the estimates in that model, though I appreciate that it's a quantified model that gives something to disagree about. ### Disagreements and Uncertainties [![](https://i.imgur.com/11Dq64a.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) With those inputs, I arrive, per [this guesstimate model](https://www.getguesstimate.com/models/18201) at a roughly 50% probability that "marginal money now on alternate foods is more cost effective than on AI risk mitigation". This is in stark contrast with the original 95%, and at a 15% probability that $100M to alternate foods is "more cost-effective than to AI risk mitigation". I endorse the 50%, but not the 15%; I'd probably be higher on the latter. [![](https://i.imgur.com/aUaqPd4.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) I feel that that 50% is still pretty good, but the contrast between it and the model's initial 95% is pretty noticeable to me, and makes me feel that the 95% is uncalibrated/untrustworthy. On the other hand, my probabilities above can also be seen as a sort of sensitivity analysis, which shows that the case for an organization working on ALLFED's cause area is somewhat more robust than one might have thought. ### Concluding Thoughts In conclusion, I disagree strongly with ALLFED's estimates (probability of cost overruns, impact of ALLFED's work if deployed, etc.), however, I feel that the case for an organization working in this area is relatively solid. My remaining uncertainty is about ALLFED's ability to execute competently and cost-effectively; independent expert evaluation might resolve most of it. ### Sources * [ALLFED webpage](https://allfed.info) * [ALLFED - EA Forum Tag](https://forum.effectivealtruism.org/tag/allfed) * [ALLFED 2020 Highlights](https://forum.effectivealtruism.org/posts/29mfRszEcpn6uLZAb/allfed-2020-highlights) * [ALLFED team members](https://allfed.info/team-members/) * [ALLFED's Guesstimate model of its impact](https://www.getguesstimate.com/models/11762)   ---   All-Party Parliamentary Group for Future Generations (APPGFG) ------------------------------------------------------------- *Epistemic status* for this section: Very sure that APPGFG is a very inexpensive opportunity, less sure about other considerations. ### Questions: * Is the APPGFG successfully bringing about substantial change? * Is the APPGFG successfully building capacity to bring about actual change? * Does the APPGFG have enough proposals or actionable advice for ministers to act on? * What are the possible downsides of the APPGFG? * To what extent is the APPGFG constrained by insufficient funding? * How strong is the APPGFG's network of advisors? * Is the APPGFG being cost-effective? * Does the APPGFG have room for growth? ### Tentative answers **General considerations** Per [this writeup](https://forum.effectivealtruism.org/posts/AWKk9zjA3BXGmFdQG/appg-on-future-generations-impact-report-raising-the-profile-1), the APPGFG 1. Has been figuring out how best to influence policy in the UK parliament to care more about future generations. 2. Campaigned for an "UK Future Generations Bill to embed a Commissioner for Future Generations into the structures of UK policy making", and successfully lobbied the House of Lords to establish a "Special Inquiry Committee on Risk Assessment and Risk Management," on how the UK prepares for future risks (beyond pandemics) and works internationally to prepare for global risks, which will work for one year. 3. Has been building relationships with parliamentarians. They grew a parliamentary group to include 75 parliamentarians, which can be found [here](https://www.appgfuturegenerations.com/officers-and-members). APPGFG also organized various activities for that group. 4. Has been researching possible policy suggestions: diving into policy areas, and "general research into how the effective altruism community should approach policy, risks and measuring the impact of policy interventions." Their overall theory of impact (referred to [here](https://forum.effectivealtruism.org/posts/AWKk9zjA3BXGmFdQG/appg-on-future-generations-impact-report-raising-the-profile-1#Strategy_and_aims)) seems straightforward and plausible. I would further add a step where successful policy change in the UK could spur further change in other countries, particularly in the European sphere. I'm not really sure what their network of external advisors looks like; APPGFG's post mentions receiving positive feedback from the Future of Humanity Institute (FHI), the Center for the Study of Existential Risk (CSER), the UK civil service, and unspecified others. I would be comparatively more excited if the APPGFG's external advisors mostly come from FHI, rather than CSER, about which I have some reservations (more on which below, in CSER's own section). The APPGFG spent roughly $40k for one full-time employee during 2020. This seems very inexpensive. If the APP wanted to expand and thought they had someone they wanted to hire, it would be at the top of my list. It also seems likely that APPGFG's two existing employees could be paid better. This [APPGFG's writeup](https://forum.effectivealtruism.org/posts/AWKk9zjA3BXGmFdQG/appg-on-future-generations-impact-report-raising-the-profile-1) emphasizes that they have "not yet caused any actual changes to UK government policy", but insofar as what they're doing is capacity building, I find their capacity building work promising. My understanding is that right now, there aren't that many longtermist related proposals which the APPGFG is able to bring forward, and that the longtermist community itself is uncertain about what kinds of policy proposals to push for. To clarify, my understanding is that policy-wise there is *some* work the APPGFG can do, such as pushing for the aforementioned Future Generations Bill, nudging legislation in a more longtermist direction, or maybe help shape the UK's attempt at reducing the likelihood of future COVID-19-like catastrophes. However, these proposals seem relatively small in comparison to what a "longtermist policy agenda" could be, and in fact there isn't an ambitious "longtermist policy agenda" that the APPGFG can just push for. With that in mind, the APPGFG's strategy of embedding itself into Britain's parliamentary processes, while thinking about which more ambitious policy proposals could be brought forward in the future, seems sensible. **Possible downsides** With regards to possible downsides to the APPGFG, the main one in the common EA consciousness seems to be "poisoning the well". This refers to a possible path whether early suboptimal exposure to longtermist ideas could make the audiences more reticent to later consider similar ideas. Two other downsides are 1) the APPGFG's current leadership getting [promoted to incompetence](https://en.wikipedia.org/wiki/Peter_principle) in case the APPGFG grows substantially, and 2) the APPGFG's existence impeding the creation and growth of a more capable organization. In the first case, maybe the APPGFG's current leadership are good lobbyists and good researchers, but would be unsuitable to lead e.g., a 20 person lobbying apparatus (and would fail to grow into the position.) But by the time the APPGFG was considering growing that much, it would be awkward to replace its leadership. In the second case, maybe there is a more promising person out there who would have done something similar to the APPGFG, but better, and who didn't because the APPGFG already existed. My impression is that this "promotion to incompetence" dynamic may have happened in some EA research organizations, and that the [Iodine Global Network](https://www.ign.org/) may have been both too weak to establish strong, well-funded national groups, and so large that the creation of another organization to do that would be extremely awkward. In the counterfactual world where the APPGFG didn't exist, one would still have to develop a policy agenda, and then in addition one would also have to gain familiarity with the British policy-making apparatus, and a presence within it. Whereas in the world where the APPGG does exist, one can develop a longtermist policy agenda informed by political realities, and one has a >2 year head start in establishing a presence in the British political apparatus. Earlier capacity building seems to me to be worth some poisoning the well, and the overall probability of poisoning the well seems to me to be low. Promotion to incompetence would only be a worry if the APPGFG substantially expanded. Displacing other potentially better organizations seems (to me) to be more of a concern. But overall I think we live in a world where there are not enough people with policy expertise doing EA work, not in the world where there are many and the best are displaced. ### Conclusion In conclusion, I feel that their logic model is solid, and that the APPGFG's capacity-building work is promising. I'm hesitant about its closeness to CSER. It's current budget seems particularly small. I'm uncertain about how they compare with other organizations in similar or adjacent spheres, and in particular with GovAI. Downsides exist, but accelerated capacity building seems to me to be worth these downsides. I feel fairly positive about the APPGFG's chances of success: [![](https://i.imgur.com/vIaYxnt.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) ### Sources * [APPG on Future Generations impact report – Raising the profile of future generation in the UK Parliament](https://forum.effectivealtruism.org/posts/AWKk9zjA3BXGmFdQG/appg-on-future-generations-impact-report-raising-the-profile-1) * [EA Forum tag on the APPGFG](https://forum.effectivealtruism.org/tag/appg-on-future-generations) * [appgfuturegenerations.com](https://www.appgfuturegenerations.com/) * [Peter Principle](https://en.wikipedia.org/wiki/Peter_principle) ---   CSER ---- *Epistemic status* for this section: Unmitigated inside view. ### Questions * How much of CSER's work is of high value from a long-termist perspective? ### Tentative answer A colleague mentioned that there was something "weird" with CSER going on, and I was surprised to find out that this was actually the case. I skimmed the past research of the members mentioned on their webpage, and I classified their researchers in terms of alignment. I came away with the impression that they had around 5 aligned researchers, around 4 researchers I'm uncertain about, and around 14 whom I'd classify as unaligned or unproductive. CSER also has 6 additional support staff. Readers are welcome to browse [CSER's team page](https://www.cser.ac.uk/team) and calculate what percentage of researchers are working on valuable directions according to one's values. Personally, although I feel like there is a small group of strong researchers working at CSER, the proportion of researchers working on stuff I don't particularly care about or which I don't expect to be particularly valuable according to my values is too high. Commenters pointed out that this assessment is "almost unfairly subjective." [![](https://i.imgur.com/l47LXUD.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) ### Sources * [cser.ac.uk](https://www.cser.ac.uk/) * [CSER team](https://www.cser.ac.uk/team/) * [Centre for the Study of Existential Risk Four Month Report June - September 2020](https://forum.effectivealtruism.org/posts/EArLfuDz34zJHJZJx/centre-for-the-study-of-existential-risk-four-month-report-1)   ---   Center for Security and Emerging Technology (CSET) -------------------------------------------------- *Epistemic status* for this section: After doing a shallow dive and reading a portion of CSET's work , I have some models about their impact, but they are fuzzy and I don't feel particularly sure about them. ### Questions * What is a good way to think about CSET's impact? * How net-positive can we expect CSET's work to be? How likely is CSET to do harm? In particular, how much will CSET's work draw attention to good aspects of AI Safety and fight arms races, as opposed to drawing attention in ways that might amplify arms races or dangerous AI development? * Is CSET acquiring influence within the US policy community and/or the current administration? * How does Jason Matheny leaving for the Biden administration affect CSET's impact? How much power and influence does Matheny have in the new Biden administration? * How much influence would CSET have in a future Republican administration? Might CSET become partisan? * Does CSET 's influence translate into actual policy? * Are CSET's researchers well-positioned to join a future US administration? * How valuable is CSET-foretell? I.e., are the predictions eventually used to make real-world decisions? * What is the influence of longtermism at CSET? Can we expect this to grow or shrink in the future? * To what extent should one defer to OpenPhilanthropy's evaluation of CSET? This might be more than normal, as there may be a fair amount of private knowledge, and as models around policy change (and the reasons for believing in those models) might be particularly hard to communicate. ### Tentative answers CSET's work can be categorized as: * Testimonials to the US Congress * Research * Media appearances * Translations * Forecasting Analyzing each of them in turn, I looked at past testimonies given by CSET team members to the US Senate and House of Representatives: * [Testimony Before House Homeland Security Subcommittee](https://cset.georgetown.edu/publication/maintaining-the-ai-chip-competitive-advantage-of-the-united-states-and-its-allies/). This testimony briefly outlines the impact of artificial intelligence on cybersecurity. In the first place, AI systems themselves may be hacked. Secondly, AI systems can augment the capabilities of cyber attacks. Thirdly, AI might help with defense capabilities. * [Testimony Before Senate Banking Committee](https://cset.georgetown.edu/publication/cset-testimony-before-senate-banking-committee/). The testimony considers export controls on artificial intelligence, and in particular, for data, algorithms, and computing power. It argues that export controls are the most adequate tool for the first two, but that export controls on the hardware that manufactures specialized computer chips for AI might make a difference. * [Testimony Before House Science Committee](https://cset.georgetown.edu/publication/cset-testimony-before-house-science-committee/). The witness describes himself as working for OpenAI rather than for CSET, so I'm not clear to what extent I should count this towards CSET's impact. The testimony argues that we have entered the era of "good enough" AI. However, AI systems frequently exhibit biases, and they may fail, e.g., when encountering outside the training distribution, because of specification gaming. AI systems can also fail as a combination of human error and technical problems, as when recommendation engines optimize for engagement and companies are indifferent to the harms of that. Government should invest in its own capabilities to measure, assess, and forecast aspects; the testimony gives concrete suggestions. Academia should also carry out more targeted research to deal with possible AI failures. Further, industry, government and academia should engage more frequently. [Testimony Before House Homeland Security Committee](https://cset.georgetown.edu/publication/cset-testimony-before-house-homeland-security-committee/). The author considers how AI could be used for moderating social media platforms, and whether AI contributes to radicalization. * [Testimony Before U.S.-China Economic and Security Review Commission](https://cset.georgetown.edu/publication/chinas-current-capabilities-policies-and-industrial-ecosystem-in-ai/). The author states his affiliation as Center for the Governance of AI, FHI, and makes the case that "China is not poised to overtake the U.S. in the technology domain of AI; rather, the U.S. maintains structural advantages in the quality of S&T inputs and outputs, the fundamental layers of the AI value chain, and key subdomains of AI." It then suggests some policy recommendations to maintain the status quo of US dominance on AI. * [Testimony Before U.S.-China Economic and Security Review Commission](https://cset.georgetown.edu/publication/technology-trade-and-military-civil-fusion-chinas-pursuit-of-artificial-intelligence/). This testimony considers the state of AI, particularly in relationship with China, and argues in general for continued openness. * [Testimony Before Senate Foreign Relations Committee](https://cset.georgetown.edu/publication/testimony-before-senate-foreign-relations-committee/). To maintain competitiveness, the US should focus on its current asymmetric advantages: its network of allies, and its ability to attract the world's best and brightest. The US should also institute export controls on chip manufacturing equipment to ensure that democracies lead in advanced chips. The US should also invest in AI, but deploying AI in critical systems without verifying their trustworthiness poses grave risks. Personally, I find the testimonies thoughtful and interesting. They distill complex topics into things which US Members of Congress might understand. However, it is unclear to me to what extent these testimonies actually had an impact on policy. I thought that testimonies were particularly important because one worry outlined in [Open Philanthropy's grant](https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology) to found CSET was: > > We worry that heavy government involvement in, and especially regulation of, AI could be premature and might be harmful at this time. **We think it's possible that by drawing attention to the nexus of security and emerging technologies (including AI), CSET could lead to premature regulatory attention and thus to harm.** However, we believe CSET shares our interest in caution on this front and is well-positioned to communicate carefully. > > > CSET indeed communicated carefully and with nuance most of the time, at least according to my reading of its testimonials to the US Congress. In particular, it seemed likely that the late Trump administration was going to take punitive actions against China, and providing expert considerations on CSET's area of expertise seemed unlikely to have done harm. There could be some scenarios in which any testimony at all increases political tensions, but this seems unlikely. However, some of the positions which CSET advocated for, e.g., openness and taking in top foreign talent from China, do map clearly across partisan lines, and if that proportion exceeds some threshold, or if CSET never gives support to uniquely Republican stances, CSET and the positions it defends might eventually come to be perceived as partisan. [![](https://i.imgur.com/IHSQ716.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) With regards to research, CSET appears to be extremely prolific, per [CSET's list of publications](https://cset.georgetown.edu/publications/). Some publications which appeared particularly relevant for evaluation purposes are: * [CSET Reading Guide](https://cset.georgetown.edu/publication/cset-reading-guide/) provides a brief overview of CSET and its main lines of research and projects. Most appear thoughtful. * [CSET Publishes AI Policy Recommendations for the Next Administration](https://cset.georgetown.edu/publication/cset-publishes-ai-policy-recommendations-for-the-next-administration/). After the end of the first Biden administration, we might look back and see how many of these recommendations have been implemented. * [Keeping Top AI Talent in the United States](https://cset.georgetown.edu/publication/keeping-top-ai-talent-in-the-united-states/), [Strengthening the U.S. AI Workforce](https://cset.georgetown.edu/publication/strengthening-the-u-s-ai-workforce/) and other works argued against Trump's immigration restrictions. [Maintaining the AI Chip Competitive Advantage of the United States and its Allies](https://cset.georgetown.edu/publication/maintaining-the-ai-chip-competitive-advantage-of-the-united-states-and-its-allies/) and other research contributes to the policy debate on export restrictions. Both seem positive, but still work within an adversarial framework where the US finds itself in an "AI race" with China. * [Future Indices](https://cset.georgetown.edu/publication/future-indices/) outlines how CSET-Foretell works. It is still unclear to me whether Foretell's predictions will end up influencing any real world decisions. Interestingly, CSET's model of working within the prestigious mainstream seems to be particularly scalable, in a way which other organizations in the longtermist sphere are not. That is, because CSET doesn't specifically look for EAs when hiring, [CSET's team](https://cset.georgetown.edu/team/) has been able to quickly grow. This is in comparison with, for example, an organization like Rethink Priorities. The downside of this is that hires might not be aligned with longtermist interests. Besides testimonials and research, CSET also has a large number of media appearances ([cset.georgetown.edu/article/cset-experts-in-the-news](https://cset.georgetown.edu/article/cset-experts-in-the-news) through [cset.georgetown.edu/article/cset-experts-in-the-news-10](https://cset.georgetown.edu/article/cset-experts-in-the-news-10/)). I'm inclined to think that these appearances also have some kind of positive impact, though I am again uncertain of their magnitude. CSET also carries out a large number of [translations](https://cset.georgetown.edu/publications/?fwp_content_type=translation) of Chinese policy and strategy documents. Lastly, I also occasionally encounter CSET's research "in the wild", e.g., [these](https://www.schneier.com/blog/archives/2021/05/ais-and-fake-comments.html) [two](https://www.schneier.com/blog/archives/2021/06/the-future-of-machine-learning-and-cybersecurity.html) blog posts by [Bruce Schneier](https://en.wikipedia.org/wiki/Bruce_Schneier), a respected security expert, mentios a CSET report. This is at least some evidence that relevant experts read these. Overall, the work that I have read appears to be lucid. But my knowledge of US policy work impact pathways is particularly fuzzy, and the pathways to influence policy are themselves fuzzy and uncertain. Further, unlike with some other organizations, there isn't an annual review I can bootstrap an evaluation from. For this reason, it is particularly tempting for me to defer to an outside view, like [OpenPhilanthropy's grant rationale](https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology) for the creation of CSET, and its willingness to donate an initial $55 million in 2019, and [an additional $8 million](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support) at the beginning of 2021. If OpenPhil hadn't been willing to continue to fund CSET, I'd still guess that CSET's work was valuable, but I would be fairly uncertain as to whether it was a comparatively good bet. In conclusion, CSET's work seems within what I would expect a competent think tank would produce. Given that OpenPhilanthropy is still funding them, I expect them to still be valuable. In particular, its think-tank model seems particularly scalable. ### Sources * [CSET publications](https://cset.georgetown.edu/publications/) * [Maintaining the AI Chip Competitive Advantage of the United States and its Allies](https://cset.georgetown.edu/publication/maintaining-the-ai-chip-competitive-advantage-of-the-united-states-and-its-allies/) * [Testimony Before Senate Banking Committee](https://cset.georgetown.edu/publication/cset-testimony-before-senate-banking-committee/) * [Testimony Before House Science Committee](https://cset.georgetown.edu/publication/cset-testimony-before-house-science-committee/) * [Testimony Before House Homeland Security Committee](https://cset.georgetown.edu/publication/cset-testimony-before-house-homeland-security-committee/) * [Testimony Before U.S.-China Economic and Security Review Commission](https://cset.georgetown.edu/publication/chinas-current-capabilities-policies-and-industrial-ecosystem-in-ai/) * [Testimony Before Senate Foreign Relations Committee](https://cset.georgetown.edu/publication/testimony-before-senate-foreign-relations-committee/) * [CSET Reading Guide](https://cset.georgetown.edu/publication/cset-reading-guide/) * [CSET Publishes AI Policy Recommendations for the Next Administration](https://cset.georgetown.edu/publication/cset-publishes-ai-policy-recommendations-for-the-next-administration/) * [Keeping Top AI Talent in the United States](https://cset.georgetown.edu/publication/keeping-top-ai-talent-in-the-united-states/) * [Future Indices](https://cset.georgetown.edu/publication/future-indices/) * [cset.georgetown.edu/article/cset-experts-in-the-news](https://cset.georgetown.edu/article/cset-experts-in-the-news) through [cset.georgetown.edu/article/cset-experts-in-the-news-10](https://cset.georgetown.edu/article/cset-experts-in-the-news-10) * [Open Philanthropy: Georgetown University — Center for Security and Emerging Technology](https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology) -[Open Philanthropy: Center for Security and Emerging Technology — General Support](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support) * [Schneier on Security : AIs and Fake Comments](https://www.schneier.com/blog/archives/2021/05/ais-and-fake-comments.html)   ---   Future of Life Institute (FLI) ------------------------------ *Epistemic status* for this section: Uncertain about object-level facts regarding FLI. ### Questions * What is a good breakdown of FLI's current and future activities? * How well can FLI ensure quality with part-time employees covering sensitive topics? * How net-positive has FLI's previous work been? Has anything been particularly negative, or have they incurred significant PR risks or similar? ### Tentative answers FLI was also briefly covered by Larks. I think Wikipedia does a better job summarizing FLI than the FLI website: > > The Future of Life Institute (FLI) is a nonprofit research institute and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its board of advisors includes entrepreneur Elon Musk. > > > Some notable past activities include organizing conferences---such as the [Asilomar Conference](https://www.wikiwand.com/en/Asilomar_Conference_on_Beneficial_AI), which produced the [Asilomar Principles](https://futureoflife.org/ai-principles/) on beneficial AI---work on [Lethal Autonomous Weapons Systems](https://futureoflife.org/lethal-autonomous-weapons-systems/), giving out the [future of life award](https://futureoflife.org/future-of-life-award/), and general [policy work](https://futureoflife.org/policy-work) (open letters, initiatives, pledges, video content, podcasts, etc.) FLI is also a [giving vehicle](https://futureoflife.org/2018/07/25/2-million-donated-to-keep-artificial-general-intelligence-beneficial-and-robust/), and recently announced a [$25M grant program](https://futureoflife.org/fli-announces-grants-program-for-existential-risk-reduction/) financed by Vitalik Buterin. The Centre for the Governance of AI thanks FLI on its [annual report](https://www.fhi.ox.ac.uk/govai/govai-2020-annual-report). To pick an example, for their work on [Lethal Autonomous Weapons Systems](https://futureoflife.org/lethal-autonomous-weapons-systems/), their model of impact seems to be that by raising awareness of the topic through various activities, and by pushing governments, NGOs and supranational organizations, they could institute a ban on Lethal Autonomous Weapons. This attempt would also act as a test-ground for "AI Arms Race Avoidance & Value Alignment." So far, while they have raised awareness of the topic, a ban doesn't seem to be forthcoming. Their [video on slaughterbots](https://www.youtube.com/watch?v=HipTO_7mUOw) reached a million views on youtube, but, per [Seth Baum's talk in EA Global 2018](https://forum.effectivealtruism.org/posts/6cyXwsAanTmhvZRRH/seth-baum-reconciling-international-security), "the video was fairly poorly received by a lot of important people in international security policy communities, and because of that it has made it more difficult for the people behind the video to get their message out there to these very important audiences." The [core team](https://futureoflife.org/team/) mentioned in their webpage had just seven members, but increased to nine as I was writing this piece. Of these nine, five mention other current affiliations, and it's unclear how many full-time equivalents FLI currently employs. In particular, I'd expect that to make inroads on their five core issues mentioned in their website (x-risk, artificial intelligence, nuclear weapons, biotechnology and climate change), a larger team would be needed. In short, I'm uncertain about how valuable policy work is, about how valuable the specific policy work which FLI has done is, and about whether FLI intends to continue doing policy work. Colleagues have mentioned that FLI isn't so much an organization as "a hat which sometimes people wear," which seems plausible. [![](https://i.imgur.com/CqAwEHZ.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad)   ---   LessWrong --------- *Epistemic status*: The graphs serve as a sanity check on my intuitions, rather than being their main drivers. ### Questions * Is LessWrong catalyzing useful research? * Is LessWrong successfully cultivating a community of people capable of grappling with important real world problems? * How does LessWrong's research output compare to that of other research institutions? * How many FTEs worth of research is LessWrong responsible for? ### Tentative answers As I understand it, LessWrong's benefits are * to catalyze concrete research * to create and maintain a community of people who are able to capably engage with real world problems See [here](https://www.lesswrong.com/posts/8rYxw9xZfwy86jkpG/on-the-importance-of-less-wrong-or-another-single#PNCWPyvLS7G6L3iHW) and [here](https://www.lesswrong.com/posts/bJ2haLkcGeLtTWaD5/welcome-to-lesswrong) for other people using different wording. With regards to concrete research outputs produced or catalyzed, some recent examples in the last three months from [the list of curated posts are](https://www.lesswrong.com/allPosts?filter=curated&sortedBy=new&timeframe=allTime) related to AI alignment are: * [Formal Inner Alignment, Prospectus](https://www.lesswrong.com/posts/a7jnbtoKFyvu5qfkd/formal-inner-alignment-prospectus) * [Another (outer) alignment failure story](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story) * [What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) * [Coherence arguments imply a force for goal-directed behavior](https://www.lesswrong.com/posts/DkcdXsP56g9kXyBdq/coherence-arguments-imply-a-force-for-goal-directed-behavior) * [My research methodology](https://www.lesswrong.com/posts/EF5M6CmKRd6qZk27Z/my-research-methodology) * [The case for aligning narrowly superhuman models](https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models) With regards to community building, some interaction happens in the comments. Further, the LessWrong team organizes activities, like Solstice celebrations, Petrov Day games, talks, etc. One rough measure of the community building aspect could be the number of new users with more than 500 or 1000 karma in the last couple of years. If we search for these, we find the following: ![](https://i.imgur.com/Y4gtXDO.png) ![](https://i.imgur.com/3F1GXmL.png) Note that this is, in a sense, unfair to recent years, because newly active users haven't had time to accumulate as much karma as old users. Nonetheless, the conclusion that the LW community recovered from its previous decline holds. It's unclear to me exactly how valuable the production of around 10 highly engaged users with the rationality community is, but the intellectual output of those new 10 users seems probably comparable to that of a small or medium-sized research institute. And the combined output of LW seems much greater. Also note that this would be 10 *new* highly active users per year. To the extent that these new users belong to already established organizations and just share the output of their work on LessWrong, LessWrong also seems valuable as a locus of discussion. But this doesn't seem to be the main driver of growth in highly engaged users; of the 14 users who joined since the beginning of 2019 and have accumulated more than 500 karma, only around 3 belong to EA-aligned organizations. We can also analyze the number of posts above 100 votes per year, or the total number of votes given to posts in each year. I'm using number of votes (number of people who vote) instead of karma (which includes a multiplier) because the LW API makes that easier to get. In any case, we find ![](https://i.imgur.com/sPA5IAZ.png) ![](https://i.imgur.com/LdSsgeo.png) If, as a rough approximation, we take 100 votes (for posts) as equivalent to two researcher/weeks, 40,000 votes in 2020 would equal 200 researcher months, or 17 researcher/years. A more qualitative approach would involve, e.g., looking at the [LessWrong Review for 2018](https://www.lesswrong.com/s/uNdbAXtGdJ8wZWeNs/p/3yqf6zJSwBF34Zbys), and asking how much one would be willing to pay for the creation and curation of the collected posts, or comparing their value to the value of FHI's [publications](https://www.fhi.ox.ac.uk/publications/) for the same year. One would have to adjust for the fact that around 1/4th of the most highly upvoted posts are written by MIRI employees. In conclusion, LW seems to catalyze or facilitate a relatively large amount of research, and that it does so relatively efficiently, with around 6 FTEs (per the [team page](https://www.lesswrong.com/posts/aG74jJkiPccqdkK3c/the-lesswrong-team)). Concretely, LessWrong appears to produce substantially more than one FTE worth of research per FTE. One key question is whether many of the LessWrong posts would have just been written elsewhere. In addition, the LessWrong codebase is also used by the [EA Forum](https://forum.effectivealtruism.org/) and by the [AI Alignment Forum](https://www.alignmentforum.org/). [![](https://i.imgur.com/7vOL4tw.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) ### Sources * [On the importance of Less Wrong, or another single conversational locus](https://www.lesswrong.com/posts/8rYxw9xZfwy86jkpG/on-the-importance-of-less-wrong-or-another-single#PNCWPyvLS7G6L3iHW) * [Welcome to LessWrong!](https://www.lesswrong.com/posts/bJ2haLkcGeLtTWaD5/welcome-to-lesswrong) * [Formal Inner Alignment, Prospectus](https://www.lesswrong.com/posts/a7jnbtoKFyvu5qfkd/formal-inner-alignment-prospectus) * [Another (outer) alignment failure story](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story) * [What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) * [Coherence arguments imply a force for goal-directed behavior](https://www.lesswrong.com/posts/DkcdXsP56g9kXyBdq/coherence-arguments-imply-a-force-for-goal-directed-behavior) * [Paul Christiano: My research methodology](https://www.lesswrong.com/posts/EF5M6CmKRd6qZk27Z/my-research-methodology) * [The case for aligning narrowly superhuman models](https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models) * [2018 Review: Voting Results!](https://www.lesswrong.com/s/uNdbAXtGdJ8wZWeNs/p/3yqf6zJSwBF34Zbys) * [FHI Publications](https://www.fhi.ox.ac.uk/publications/) * [The LessWrong Team](https://www.lesswrong.com/posts/aG74jJkiPccqdkK3c/the-lesswrong-team) * [EA Forum](https://forum.effectivealtruism.org/) * [Alignment Forum](https://www.alignmentforum.org/)   ---   Rethink Priorities (RP) ----------------------- *Epistemic status*: Only talking about explicitly longermist-branded parts of their research. ### Questions * How many FTEs are currently working using a longtermist perspective at Rethink Priorities? * Will Rethink Priorities be able to produce research in the long-termist space similar in quality to the research they have produced on invertebrate welfare? * Will Rethink Rethink Priorities be able to productively expand into the longtermist sphere? How will it do so? * How many FTEs producing high-quality longtermist research will RP employ by 2025? ### Tentative answers Rethink Priorities has recently been expanding into the longtermist sphere, and it did so by [hiring](https://www.rethinkpriorities.org/our-team) [Linch Zhang](https://forum.effectivealtruism.org/users/linch) and [Michael Aird](https://forum.effectivealtruism.org/users/michaela), the latter part-time, as well as some volunteers/interns. At this point, I feel that the number of longtermist FTEs is so small that I wouldn't be evaluating an organization, I would be evaluating individuals. All in all, Zhang and Aird haven't spent enough time at RP that I feel that their output would be representative. This is in contrast to, e.g., FHI's Research Scholars program, which is large enough that I feel it would make more sense to talk about the average quality of a researcher. That said, some of RP's recent inputs can be found [under their EA Forum tag](https://forum.effectivealtruism.org/tag/rethink-priorities?sortedBy=new). With regards to the expected quality of future research, on the one hand, past high quality research is predictive of future quality. On the other hand, research into invertebrate sentience feels foundational for animal-focused ethics and activism in a way which seems hard to upstage, so one might expect some regression to the mean. [![](https://i.imgur.com/n5BTzEo.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) ### Sources * [Rethink Priorities Team](https://www.rethinkpriorities.org/our-team) * [Rethink Priorities EA Forum tag](https://forum.effectivealtruism.org/tag/rethink-priorities)   ---   Simon Institute for Long-Term Governance (SILG) ----------------------------------------------- *Epistemic status*: Brief and cursory. Considerations apply to other new organizations. ### Questions * What does the prior distribution of success for new longermist organizations look like? * When will we have a better estimate of the Simon Institute for Long-Term Governance's input? * Is funding SILG better than OpenPhilanthropy's last longtermist dollar? ### Tentative answers I imagine that the prior distribution of success for new organizations is pretty long-tailed (e.g., a Pareto distribution). This would lead to a high initial expected value for new organizations, which most of the time sharply drops off after some initial time has passed and there is more information about the promisingness of the project. I imagine that ~two years might be enough to determine if a new organization is promising enough to warrant further investment. If that was the case, the optimal move would look like funding a lot of new organizations, most of which are then deprived of funding shortly after an initial grace period. It's not clear how to create a functional culture around that dynamic. Silicon Valley aguably seems to be able to make it work, but they have somewhat reliable proxies of impact (e.g., revenue, user growth), whereas long-termists would have to rely on uncertain proxies. The above considerations are fairly generic, and would apply to organizations other than SILG. Overall, I estimate that funding SILG for the first two years of existence and seeing how they fare seems valuable, but I'm not very certain. ### Sources * [Simon Institute](https://www.simoninstitute.ch/) * [Introducing the Simon Institute for Longterm Governance (SI)](https://forum.effectivealtruism.org/posts/eKn7TDxMSSsoHhcap/introducing-the-simon-institute-for-longterm-governance-si)   ---   80,000 hours ------------ *Epistemic status*: Deferring a lot to [80,000h's evaluation of itself](https://docs.google.com/document/d/1rWfQ3Lja2kYoUm_t9uNqBgEn5nz6KL8fmNP5db8cZRU/edit#). ### Questions * Can I generally defer to Benjamin Todd's judgment? * Will 80,000 hours continue to keep similar levels of cost-effectiveness as it scales? * Will 80,000 hours manage to keep its culture and ethos as it scales? * How does 80,000 hours compare to other, more speculative donation targets and career paths? * What percentage of 80,000 hours' impact is not related to career plan changes? * Will the percentage of 80,000 hours' impact not related to career plan changes remain constant as 80,000 hours scales? (so that thinking of 80,000 hours' impact as a multiple of the impact of its career changes "makes sense")? * What is a good way to think about 80,000 hours' aggregate impact? ### Tentative answers 80,000 hours has a [clear evaluation of itself](https://docs.google.com/document/d/1rWfQ3Lja2kYoUm_t9uNqBgEn5nz6KL8fmNP5db8cZRU/edit#). For me, the gist is that 1. 80,000 hours appears to have reached a point of maturity: Each programme is working well on its own terms. There's a sensible, intuitive case for why each should exist, and their mechanisms for impact seem reasonably solid. They all seem to generate a reasonable number of plan changes or other value, and I expect them to compare well with alternatives. Big picture, 80,000 Hours seems likely to be among the biggest sources of talent into longtermist EA over the last couple of years, and it seems great to capitalize on that. 2. The CEO is keen on expanding: > > "Two years ago, I felt more uncertain about cost effectiveness and was more inclined to think we should focus on improving the programmes. My views feel more stable now, in part because we've improved our impact evaluation in response to critical feedback from 2018, clarified our views on the one-on-one programmes, and taken steps to limit negative side effects of our work. So, I think it makes sense to shift our focus toward growing the programmes' impact. Below **I propose a two-year growth plan in which we aim to add 4.5 FTE in 2021, and 7.5 in 2022**, though we plan to fundraise for 3.75 and 6.5, as we expect to hire no more than that many over the next two years in practice." > > > Now, normally I'd think that the key questions were something like: * How many impact-adjusted career plan changes will 80,000 hours produce in 2021? * How many impact-adjusted career plan changes will 80,000 hours produce in 2021 per $100,000 in funding? And indeed, most of 80,000 hours' impact tracking and quantification is done with regards to career plan changes (operationalized as "discounted, impact-adjusted peak years"). However, per the 80,000 hours review: > > We remain unsure that plan changes are the best framework for thinking about 80,000 Hours' impact, and we think they capture only a minority of the value, especially for the website and podcast. For example, I think it's plausible that most of our past impact has come from getting the EA movement more focused on longtermism and spreading other important ideas in society. An analysis I did this year confirmed my previous impression that 80,000 Hours is among the biggest and most effective ways of telling people about EA (though I expect less cost effective than the most successful written content, such as Doing Good Better and Slate Star Codex). > > > It is possible that further estimation of non-career plan change related impact would be clarifying, even if the estimation is very fuzzy. In particular, to the extent that most of 80,000 hours' impact comes from influencing the EA community, and this sounds plausible, having most of their evaluation focus on career plan changes feels misguided (cf. [Streetlight effect](https://en.wikipedia.org/wiki/Streetlight_effect)). [![](https://i.imgur.com/QKsqX2a.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) (Despite feeling comfortable with the guess above, in practice, I've found that estimating total impact by estimating the impact of a measurable part and the fraction of value it represents leads to large errors) With regards to cost-efficiency, 80,000 hours had a budget in 2020 of approximately $3M, and around 19 FTEs. In short, 80,000 hours' career changes seem valuable, but most of the organization's impact might come from fuzzier pathways, such as moving the EA community and 80,000 hours' followers in a more longtermist direction. I'm uncertain about the value of expansion. ### Sources * [80,000 Hours Annual Review: November 2020](https://80000hours.org/2021/05/80000-hours-annual-review-nov-2020/)   ---   Observations ------------ I don't have any overarching conclusions, so here are some atomic observations: * The field seems pretty messy, and very far from GiveWell style comparison and quantification. * That said, it still seems plausible that some organizations are much more valuable than others (per unit of resources, etc.) * A core proposition of longtermism is that by focusing on regions in which impact is less measurable, we might attain more of it. This is as we might expect from e.g. Goodhart's law (optimizing for impact will diverge from optimizing for measurable impact.) However, this plays badly with evaluation efforts, and perhaps with prioritization efforts among different longtermist opportunities. * Many organizations have a large number of "affiliates", or "associates", some of which may be pursuing PhDs somewhere else, be affiliated with more than one organization, or work only part-time. This makes it harder to know how many full-time equivalents are working for each organization, and how productive the organization is given its budget. * Many of these organizations have done a good job having prestigious people in their board of advisors, such that e.g., having Elon Musk or Nick Bostrom seems like a weaker signal that it could be. I'd welcome comments about the overall method, about whether I'm asking the right questions for any particular organization, or about whether my tentative answers to those questions are correct, and about whether this kind of evaluation seems valuable. For instance, it's possible that I would have done better by evaluating all organizations using the same rubric (e.g., leadership quality, ability to identify talent, working on important problems, operational capacity, etc.) I'd also be interested in algorithms to allocate funding supposing one had answers to all the questions I pose above, but did not have a clear way of comparing the impact of organizations working on different domains. *Thanks to Ozzie Gooen, Gustavs Zilgavis, Kelsey Rodriguez, Tegan McCaslin for comments and suggestions.* Appendix: Organizations about whose evaluations I'm less sure ============================================================= Center on Long-term Risk (CLR) ------------------------------ *Epistemic status* for this section: Confused. In particular, I get the sense that for CLR, more than for other organizations, a fair evaluation probably requires deeply understanding what they do, which I don't. ### Questions * Is most of their research only useful from a suffering-focused ethics perspective? * Is there a better option for suffering-focused donors? * Is the probability of astronomical suffering comparable to that of other existential risks? * Is CLR figuring out important aspects of reality? * Is CLR being cost-effective at producing research? * Is CLR's work on their "Cooperation, conflict, and transformative artificial intelligence"/"bargaining in artificial learners" agenda likely to be valuable? * Will CLR's future research on malevolence be valuable? * How effective is CLR at leveling up researchers? ### Tentative answers Previously, Larks briefly reviewed CLR on his [2020 AI Alignment Literature Review and Charity Comparison](https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison#CLR__The_Center_on_Long_Term_Risk). Sadly, CLR's work on AI Safety related problems seems hard to judge as an outsider on the merits, and I get the impression that they are fairly disconnected from other longtermist groups (though CLR moved to London last year, which might remedy this.) [This Alignment Forum post](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1#Multi_agent_reinforcement_learning__MARL_) makes the case that multi-agent reinforcement learning, which CLR plans to explore in 2021, isn't particularly neglected. Their [Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda](https://www.alignmentforum.org/s/p947tK8CoBbdpPtyK) series on the Alignment forum didn't get many comments. Fortunately, one of CLR's [aims for the year](https://forum.effectivealtruism.org/posts/93o6JwmdPPPuTXbYv/center-on-long-term-risk-2021-plans-and-2020-review#Evaluation) is to "elicit feedback from outside experts to assess the quality and impact of our work"; I'm curious to see how that goes. I'm not sure about whether further work on malevolence would be fruitful. In particular, it seems to me that [the original post](https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors) was very interesting and engaging. However, possible conclusions or proposals stemming from this kind of project are probably not implementable in the current political system. For instance, requiring psychopathy tests for politicians, or psychological evaluation, seems very unrealistic. That said, perhaps one possible longer-term strategy might be to have proposals ready which can be implemented in the ensuing policy window following unexpected turmoil (e.g., pushing for psychopathy tests for politicians might have been more feasible in the aftermath of the Nürnberg trials, or after Watergate.) I imagine that people who interface with policy directly probably have better models about the political feasibility of anti-malevolence proposals. [![](https://i.imgur.com/JGvyiBf.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) Maybe considering CLR's research agenda isn't a good way to think about its potential impact. [Daniel Kokotajlo's work](https://www.lesswrong.com/users/daniel-kokotajlo) on AI timelines strikes me as valuable, and is outside that research agenda. I have the subjective impression that CLR has historically been good at providing mentorship/funding for junior people trying to jump into EA research, e.g., for Michael Aird, [Jaime Sevilla](https://forum.effectivealtruism.org/posts/jxDskwWLDta7L5a8y/my-experience-as-a-clr-grantee-and-visiting-researcher-at), even when their ethics were not particularly suffering-focused. I found CLR particularly transparent with respect to their budget; their expected budget for 2021 was $1,830,000, and they expect to have 13.7 FTEs for the year. Commenters pointed out that this was surprisingly large compared to other organizations, e.g., 80,000 hours has around 19 FTEs (on a ~$3M budget). In short, I don't feel particularly enthused about their research agenda, but overall I'm not sure how to think about CLR's impact. ### Sources * [2020 AI Alignment Literature Review and Charity Comparison: CLR: The Center on Long Term Risk](https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison#CLR__The_Center_on_Long_Term_Risk) * [Center on Long-Term Risk: 2021 Plans & 2020 Review](https://forum.effectivealtruism.org/posts/93o6JwmdPPPuTXbYv/center-on-long-term-risk-2021-plans-and-2020-review) * [Reducing long-term risks from malevolent actors](https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors) * [Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda](https://www.alignmentforum.org/s/p947tK8CoBbdpPtyK) * [CLR's recent work on multi-agent systems](https://www.alignmentforum.org/posts/EzoCZjTdWTMgacKGS/clr-s-recent-work-on-multi-agent-systems) * [Some AI research areas and their relevance to existential safety](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1#Multi_agent_reinforcement_learning__MARL_-%20https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors)   ---   Future of Humanity Institute ---------------------------- *Epistemic status* for this section: Arguably shouldn't exist; FHI was just too large to be evaluated in a short time, so instead I rely mostly on status as a lagging indicator of impact. ### Questions * Is FHI figuring out important aspects of reality? * How valuable is additional funding for FHI likely to be? What proportion of donations to FHI goes to Oxford University? * Is it better to evaluate FHI as a whole, or team by team? * Is FHI's status proportionate to its current impact? That is, can we trust status as a measure of impact, or is it too laggy a measure? Does FHI get all or almost all of its status from a handful of very valuable projects? * How much x-risk reduction can we expect from FHI's research? Does it make sense to express this as a percentage, or as a distribution over percentages? * Besides x-risk reduction, can we also expect some dampening in the badness of the catastrophes that do happen? Can we expect that the value of the far future, conditional on not having an x-risk, is better? * Is FHI causing policy change? Will FHI's research and advocacy influence Britain's or the EU's AI policy? * Does/Will the vast majority of FHI's impact come from current senior researchers (Bostrom, Drexler, etc.)? * FHI has expanded a lot recently and seems to be continuing to do so. How well can it maintain quality? * What does the future of FHI operations look like? Will this substantially bottleneck the organization? * What are FHI's main paths to impact? Do other longtermist organizations find their continuing work highly valuable? * FHI researchers have historically helped identify multiple "crucial considerations" for other longtermists (like flagging X-risks). Do we think it's likely to continue to do so? ### Tentative answers Per their [team page](https://www.fhi.ox.ac.uk/the-team/), FHI is divided into the following teams: * Macrostrategy Research Group * AI Safety Research Group * Biosecurity Research Group * Centre for the Governance of AI * Research Scholars Programme * Some number of associates and affiliates. Despite living under the FHI umbrella, each of these projects has a different pathway to impact, and thus they should most likely be evaluated separately. Note also that, unlike most other groups, FHI doesn't really have consistent impact accounting for the organization as a whole. For instance, their last [quarterly report](https://www.fhi.ox.ac.uk/quarterly-update-winter-2020/), from [their news section](https://www.fhi.ox.ac.uk/news/) is from January to March 2020 (though it is possible that they have yet to publish their annual review for 2020.) Consider in comparison [80,000 hours'](https://docs.google.com/document/d/1rWfQ3Lja2kYoUm_t9uNqBgEn5nz6KL8fmNP5db8cZRU/edit) annual review, which outlines what the different parts of the organization are doing, and why each project is probably valuable. I think having or creating such an annual review probably adds some clarity of thought when choosing strategic decisions (though one could also cargo-cult such a review solely in order to be more persuasive to donors), and it would also make shallow evaluations easier. In the absence of an annual review to build upon, I'm unsatisfied with my ability to do more than a very shallow review in a short amount of time. In particular, I start out with the strong prior that FHI people are committed longtermists doing thoughtful work, and browsing through their work doesn't really update me much either against or in favor. I imagine that this might change as I think more about this, and maybe come up with an elegant factorization of FHI's impact. In any case, below are some notes on each of the groups which make up FHI. In the meantime, it seems that FHI doesn't seem to be hurting for money, but that Open Phil is hesitant to donate too much to any particular organization. If one thinks that appeasing Open Phil's neurosis is particularly important, which, all things considered, might be, or if one thinks that FHI is in fact hurting for money, FHI might be a good donation target. [![](https://i.imgur.com/SiIOV6t.png)](https://www.foretold.io/c/b2412a1d-0aa4-4e37-a12a-0aca9e440a96/n/c01b0899-4100-4efd-9710-c482d89eddad) ### Macrostrategy and AI Safety Research Groups Some of the outputs from these two groups were favorably reviewed by Larks [here](https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison#FHI__The_Future_of_Humanity_Institute). ### Biosecurity Research Group Some publications can be found in FHI's page for the research group's members ([Gregory Lewis](https://www.fhi.ox.ac.uk/team/lewis-gregory/), [Cassidy Nelson](https://www.fhi.ox.ac.uk/team/cassidy-nelson/), [Piers Millett](https://www.fhi.ox.ac.uk/team/piers-millett/)). Gregory Lewis also has some blog posts on the [EA forum](https://forum.effectivealtruism.org/users/gregory_lewis). I browsed their publications, but I don't think I'm particularly able to evaluate them, given that they are so far outside my area of expertise. In the medium term (e.g., once the pandemic has subsided), some outside expert evaluation in Open Philanthropy's style might be beneficial. Nonetheless, I'm somewhat surprised by the size of the team. In particular, I imagine that to meaningfully reduce bio-risk, one would need a bigger team. It's therefore possible that failing to expand is a mistake. However, commenters on a draft of this post pointed out that this isn't straightforward; expanding is difficult, and brings its own challenges. ### Centre for the Governance of AI (GovAI) Some of the outputs from the Centre for the Governance of AI were favorably reviewed by Larks [here](https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison#FHI__The_Future_of_Humanity_Institute) (same link as before). In addition, GovAI has its own [2020 Annual Report](https://www.fhi.ox.ac.uk/govai/govai-2020-annual-report/). It also has a post on the EA forum outlining its [theory of impact](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact), which is outlined with extreme clarity. ### Research Scholars Programme, DPhil Scholars A review of FHI's Research Scholars Programme can be found [here](https://forum.effectivealtruism.org/posts/e8CXMz3PZqSir4uaX/what-fhi-s-research-scholars-programme-is-like-views-from-1). The page for the DPhil Scholarship can be found [here](https://www.fhi.ox.ac.uk/dphils/). FHI also has a Summer Research Fellowship, a review of which can be found [here](https://forum.effectivealtruism.org/posts/EPGdwe6vsCY7A9HPa/review-of-fhi-s-summer-research-fellowship-2020). Overall, I'd guess that these programs have similar pathways to impact to some of the LTF grants to individual researchers, but the advantage that the participants gain additional prestige through their association with Oxford (as in the case of Research Scholars), or become more aligned with longtermist priorities (perhaps as in the case of the DPhil program). ### Other associates and affiliates. Associates and affiliates could contribute a small but significant part of FHI's impact, but in the absence of very detailed models, I'm inclined to consider them as a multiplier (e.g. between x 1.05 and x 1.5 on FHI's base impact, whatever that may be). ### Conclusion In conclusion, FHI's output is fairly large and difficult to evaluate, particularly because they don't have a yearly review or a well organized set of outputs I can bootstrap from. GovAI seems to be doing particularly valuable work. I still think highly of the organization, but I notice that I'm relying on status as a lagging indicator of quality. ### Sources * [FHI team](https://www.fhi.ox.ac.uk/the-team) * [FHI publications](https://www.fhi.ox.ac.uk/publications/) * [2020 AI Alignment Literature Review and Charity Comparison: FHI: The Future of Humanity Institute](https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison#FHI__The_Future_of_Humanity_Institute) * [GovAI 2020 Annual Report](https://www.fhi.ox.ac.uk/govai/govai-2020-annual-report/) * [What FHI’s Research Scholars Programme is like: views from scholars](https://forum.effectivealtruism.org/posts/e8CXMz3PZqSir4uaX/what-fhi-s-research-scholars-programme-is-like-views-from-1) * [Review of FHI's Summer Research Fellowship 2020](https://forum.effectivealtruism.org/posts/EPGdwe6vsCY7A9HPa/review-of-fhi-s-summer-research-fellowship-2020) * [FHI DPhil Scholarships](https://www.fhi.ox.ac.uk/dphils/) * [Open Philanthropy: Future of Humanity Institute — General Support](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support)   ---   Global Priorities Institute --------------------------- *Epistemic status*: Uncertain about how valuable GPI's work is, and about my ability to evaluate them. ### Questions * How promising is GPI's strategy of influencing reputable academics over the long term? * Is GPI discovering new and important truths about reality? * Is GPI conducting research which answers the question "What should an agent do with a given amount of resources, insofar as her aim is to do the most good?"? * Is their advocacy paying out? * Will GPI be able to get promising economists in the future? ### Tentative answers GPI's [2020 annual report](https://globalprioritiesinstitute.org/global-priorities-institute-annual-report-2019-20/) is fairly short and worth reading in full. It describes GPI's aims as: > > The Global Priorities Institute (GPI) exists to develop and promote rigorous academic research into issues that arise in response to the question "What should an agent do with a given amount of resources, insofar as her aim is to do the most good?". The investigation of these issues constitutes the enterprise that we call global priorities research. It naturally draws upon central themes in (in particular) the fields of economics and philosophy; the Institute is interdisciplinary between these two academic fields. > > > Overall, I see various pathways to impact which could arise from this kind of philosophy work: 1. Philosophical clarity might be needed to optimally allocate donations. At the donation volume of an organization like OpenPhilanthropy or the Gates Foundation, relatively subtle changes in philosophical stances could lead to large changes in funding allocation. Further, some empirical considerations, such as those relating to the hinge of history hypothesis could also have more than marginal impact. 2. Academic consensus could lead to policy change, by building the philosophical backbone of longtermism which would support and allow for future policy work. 3. In particular, acquiring prestige in an academic field to then later influence policy may not require the academic field to be useful (i.e., it could be prestige about abstruse philosophical disputes). For example, testimony on future generations to the UK Parliament by an Oxford professor may be listened to because of the Oxford professorship, independent of its field. 4. Trailblazing philosophy might pave the way for future practical developments. Exploring the moral landscape could lead to understanding the shape of our values, and realizing that e.g., invertebrates may hold some moral weight, or that most of the value of humanity may lie in its far away future. Organizations could later be created to work on the issues identified. A particularly striking example of this might be Trammell's work on patient philanthropy, which might lead to a [Patient Philanthropy fund](https://forum.effectivealtruism.org/posts/8vfadjWWMDaZsqghq/long-term-investment-fund-at-founders-pledge). Another example might be Brian Tomasik's essays on reducing suffering. 5. Good philosophy might facilitate movement building, particularly inside academia. For instance, university professors might give courses on longtermism. 6. Understanding ethical truths and decision theories at an extreme level of clarity would allow for the development of safer AI. This doesn't seem to be GPI's focus. It is possible that I am missing some paths to impact. Right now, I see GPI as mostly aiming for 2., and growing its contingent of economists to allow for 3. 5. also seems to be happening, but it's unclear what role GPI plays there (though potentially it could be a substantial role). Readers might want to browse GPI's [list of publications](https://globalprioritiesinstitute.org/papers/) (note that the list also contains papers which are relevant to GPI's research agenda by authors not affiliated with GPI). I'm personally confused about their object level value, though some people I respect tell me that some are great. In short, I'm fairly uncertain about GPI's pathway to impact. Acquiring prestige and status might enable future policy work. Economics research, which GPI has been expanding into, seems more valuable. ### Sources * [Global Priorities Institute Research Agenda](https://globalprioritiesinstitute.org/research-agenda-web-version/) * [Global Priorities Institute Annual Report 2020](https://globalprioritiesinstitute.org/global-priorities-institute-annual-report-2019-20/) * [Global Priorities Institute: Papers](https://globalprioritiesinstitute.org/papers)   ---   Notes ----- [1]. One common unit might be "Quality-Adjusted Research Projects'', which could capture how efficiently an organization produces valuable research. However, that unit might be unsatisfactory, because research in different areas probably leads to differentially different outcomes. A different unit might be a "microtopia", which according to oral lore was defined by Owen Cotton-Barratt to represent one millionth of the value of an ideal long-termist utopia. One might also try to compare the value of additional funding to a threshold, like the value of OpenPhilanthropy's last (longtermist) dollar, or to compare to a given level of formidability. [2]. Initially, I thought that the result of this project might be a GiveWell-style evaluation of longtermist organizations, just many, many orders of magnitude more uncertain. For instance, if organization A produces between 1 and 10^6 "utilons'' per unit of resources (attention, effort, money, etc.), and organization B produces between 0.01 and 10^3 "utilons" per unit of resources, we would want to choose organization A over organization B, even though the impact estimates overlap and are very uncertain. [3]. Below is a list of perhaps notable organizations which I could have evaluated but didn't. As mentioned, because of their additional complexity, and to bound the scope of this post, I decided to exclude meta organizations. * Alcor Life Extension Foundation. Though cryonics has been proposed as an EA cause area in the past, it hasn't acquired mainstream acceptance as such. * Alpenglow. They recently rebranded as the [Centre for Long-Term Resilience](https://www.longtermresilience.org/), and I feel that the information on their webpage/online is too little to conduct an informed evaluation. * Berkeley Existential Risk Initiative. It's a meta-organization. * CEELAR (formerly the EA Hotel). It's a meta-organization. * CFAR. Private. * Center for Election Science. Time limits, and too solid a pathway to impact. Though estimating the impact on governance of better voting systems would be difficult, I feel like most other organizations in this list have an impenetrable fog in their pathway to impact which CES doesn't really have. This is the organization I feel most uncertain about not having added. * Emergent Ventures. It's a meta-organization. * Future of Humanity *Foundation*. In the medium to long run, I can imagine this becoming an attractive donation target. In the short run, its value would depend on what FHI staff would do with money unaccountable to Oxford University, which I don't have much insight about. * Long-Term Future Fund. It's a meta-organization. * Nonlinear Fund. It's a meta-organization. Also, their webpage is down. (edit: was down for me at the time; it's [up](https://www.nonlinear.org) now ([archive link](https://web.archive.org/web/*/https://www.nonlinear.org).)) * Open Philanthropy Fund. It's a meta-organization. * Qualia Research Institute. Its pathway to impact appears implausible and overly ambitious. * Quantified Uncertainty Research Institute. I was planning to do an evaluation at the end of the year. * Sentience Institute. It's between the longtermist and the animal rights/suffering spheres. [4]. Which suggests a bias, perhaps because I'm reticent to assign probabilities lower than 1%, even if it's per year. In the estimates later in the section, I ended up going mostly with yearly estimates based on my 100 year estimates. [5].[Michael Air'd Database of existential risk estimates](https://www.lesswrong.com/posts/jyRbMGimunhXGPxk7/database-of-existential-risk-estimates). [6]. [Manhattan Project](https://www.wikiwand.com/en/Manhattan_Project). "The Manhattan Project began modestly in 1939, but grew to employ more than 130,000 people and cost nearly US$2 billion (equivalent to about $23 billion in 2019)." [7]. [Lockheed Martin F-35 Lightning II development](https://www.wikiwand.com/en/Lockheed_Martin_F-35_Lightning_II_development). "The program received considerable criticism for cost overruns during development and for the total projected cost of the program over the lifetime of the jets. By 2017 the program was expected over its lifetime (until 2070) to cost $406.5 billion for acquisition of the jets and $1.1 trillion for operations and maintenance." [8]. general purpose grants are likely less valuable per dollar than the best way to spend the marginal dollar for longtermist impact. [9]. For instance, [Exceeding expectations: stochastic dominance as a general decision theory](https://globalprioritiesinstitute.org/christian-tarsney-exceeding-expectations-stochastic-dominance-as-a-general-decision-theory/) makes the point that stochastic dominance (A stochastically dominates B if 1) for all events x the probability for equal or better events is greater or equal in A than in B, and 2) there is at least one possible event for which the inequality is strict) generalizes even to comparisons of events with infinite or undefined expected value. Further, in the presence of "background uncertainty", stochastic dominance provides similar results to expected value, which might convince expected value skeptics to take some Pascalian-seeming wagers if the probability on which they depend is small, but not too small. Note that the paper doesn't word things that way. It also suggests in the latter sections that stochastic dominance stands as a decision theory on its own, which I'm very skeptical about.
ef27300e-0b39-4bc6-8d3b-08e1f237ea4c
trentmkelly/LessWrong-43k
LessWrong
Focus on the places where you feel shocked everyone's dropping the ball Writing down something I’ve found myself repeating in different conversations: If you're looking for ways to help with the whole “the world looks pretty doomed” business, here's my advice: look around for places where we're all being total idiots. Look for places where everyone's fretting about a problem that some part of you thinks it could obviously just solve. Look around for places where something seems incompetently run, or hopelessly inept, and where some part of you thinks you can do better. Then do it better. For a concrete example, consider Devansh. Devansh came to me last year and said something to the effect of,  “Hey, wait, it sounds like you think Eliezer does a sort of alignment-idea-generation that nobody else does, and he's limited here by his unusually low stamina, but I can think of a bunch of medical tests that you haven't run, are you an idiot or something?" And I was like, "Yes, definitely, please run them, do you need money". I'm not particularly hopeful there, but hell, it’s worth a shot! And, importantly, this is the sort of attitude that can lead people to actually trying things at all, rather than assuming that we live in a more adequate world where all the (seemingly) dumb obvious ideas have already been tried. Or, this is basically my model of how Paul Christiano manages to have a research agenda that seems at least internally coherent to me. From my perspective, he's like, "I dunno, man, I'm not sure I can solve this, but I also think it's not clear I can't, and there's a bunch of obvious stuff to try, that nobody else is even really looking at, so I'm trying it". That's the sort of orientation to the world that I think can be productive. Or the shard theory folks. I think their idea is basically unworkable, but I appreciate the mindset they are applying to the alignment problem: something like, "Wait, aren't y'all being idiots, it seems to me like I can just do X and then the thing will be aligned". I don't think we'll be saved
727baa55-425b-4c92-9822-6fa6f5729920
trentmkelly/LessWrong-43k
LessWrong
Playing with Otamatones A couple months ago Nora (3y) got very into Otamatones. She wanted to watch lots of videos, primarily TheRealSullyG. She asked for one for Christmas, and so did I: They're a lot of fun, but I haven't yet figured out if it's an instrument I'll ever be able to reliably play in tune. The basic idea is you have a touch sensitive ribbon which triggers a pitch dependent on the location of your finger, and then you squeeze the cheeks to control the volume. Nora's small pink one is the standard size, and I find it quite hard to use with grownup fingers. My big red one is the "deluxe" model, and at the larger size the ribbon is quite playable. Lily made a video of the Mary Ellen Spider song with an Otamatone actor, though (perhaps for the best) the instrument is not featured musically: I measure both models as having 29-31 half steps (2.5 octaves) depending on which setting you're using: * Standard, low: A#1 (59Hz) to D#4 (312Hz) * Standard, medium: G#3 (213Hz) to D6 (1,184 Hz) * Standard, high: A#5 (920Hz) to D#8 (4,980Hz) * Deluxe, low: F1 (44Hz) to C4 (265Hz) * Deluxe, medium: D#3 (158Hz) to B5 (980Hz) * Deluxe, high: E5 (671Hz) to A#7 (3,676Hz) Overall it's a fun and silly instrument, though there are a few smallish places it could be better: * It's a little quiet for group playing, much quieter than a fiddle or mandolin. If I'm playing with others I'm generally on the loudest setting and wishing it were a little louder. This is less of a problem now when I'm kind of terrible at it, but if I do get good it will be limiting. * You can hook the "deluxe" model up to an external speaker, at the cost of losing control over cheek articulation. And I was hoping the bass would sound better with a speaker intended for that range, but it turns out that's not a voice they've gone for. * Instead of an octave selector I want either a pitch range knob or a five-setting switch: three settings is often not enough to place the 2.5 octave range where I want it.
554c0174-702c-43a9-b283-04716148745a
StampyAI/alignment-research-dataset/arxiv
Arxiv
Correct-by-Construction Runtime Enforcement in AI -- A Survey 1 Introduction --------------- Safety of learning-based or learned controllers is a major concern when using them in physical systems and in the proximity of humans. Particularly during the exploration phase of learning, when an agent chooses random actions to examine its surroundings, it is important to avoid actions that may cause unsafe outcomes. Formal runtime enforcement techniques aim at guaranteeing safe execution of learning-enabled systems. Due to their applicability to AI-based agents, they gain more and more attention. Such an enforcer not only detects unsafe choices by an agent, but also corrects them, so that the learning process can continue without interruption. Which approaches are suitable for enforcing safety depends on the environment in which the learner operates and the challenges induced by the environment. First, it is often the case that a learner interacts with a physical system, which induces the need to detect the unsafe behavior *early enough to prevent it*. For dynamical systems, doing so involves planning ahead for a sufficient time span, whose length may not even be known. As a second challenge, many runtime enforcers must be *real-time capable*. In particular, when analysing the safety of an agent’s actions, a runtime enforcer cannot halt the environment, but rather must immediately decide whether the action issued by the agent should be executed in the environment or replaced by a safe one. A third challenge for runtime enforcement is that the dynamics of the *environment may be partially unknown* before the learning process starts. This makes guarding the system’s behavior against unsafe behavior difficult, as it is then unknown exactly which actions by an agent are unsafe. Thus, runtime enforcement may have to base on data-driven model identification. In this paper, we give an overview of state-of-the-art correct-by-construction runtime enforcement techniques that address the challenges that arise from the application of runtime enforcement to AI. We start by discussing a diverse set of techniques developed in the field of AI addressing uncertainty in learning to increase the chance of the learning system to behave safely. These techniques tend to address the safety problem from a learning perspective aiming to increase the observed probability of safe system behavior. We discuss such approaches in Section [2](#S2 "2 Safety in Data-Driven AI Methods ‣ Correct-by-Construction Runtime Enforcement in AI – A Survey"). To keep systems safe with provable guarantees, more formal approaches are necessary. Since a learning system’s behavior is not known in advance (and often randomized), the learner’s safety cannot be verified upfront even if it were safe. Rather, the concept of *runtime monitoring* is used to observe a learner’s behavior, coupled with correcting the learner’s choices whenever needed. How to monitor systems is studied in the field of *runtime verification* [DBLP:series/lncs/10457], which transfers concepts from the field of formal verification to testing a system’s correctness at runtime. Classical runtime verification approaches can often deal with complex temporal specifications and are applied whenever a system is too complex to be analyzed upfront or when errors induced by the employed hardware are to be detected as well. The majority of approaches from runtime verification are purely *trace-based* and do not take the interaction between environment and system to be monitored into account to decide on when an error has been reached. This makes sense for most applications – the possibility for the environment to force the system to violate its specification is often irrelevant as the environment itself only exhibits certain behaviors, for which the system may work correctly. This trace-based view is also present in the *runtime enforcement literature* [DBLP:journals/fmsd/PinisettyPTJFM17, DBLP:journals/fmsd/FalconeMFR11, DBLP:journals/tecs/PinisettyRSATH17] for non-AI-based systems, where the behavior of a system is only modified once the trace observed would start to violate the specification. When monitoring self-learning systems, this may however be too late to guarantee safety. This problem is addressed in some more recent works that employ a game-based perspective on monitoring [DBLP:conf/rv/EhlersF11] and enforcement [DBLP:journals/fac/RenardRF20], which is suitable for settings with known environment capabilities. Approaches for runtime enforcement of AI-based agents base on such ideas, can be broadly distinguished by whether they address *discrete* or *continuous state spaces*, and are addressed in Section [3](#S3 "3 Runtime Enforcement in Discrete Domains ‣ Correct-by-Construction Runtime Enforcement in AI – A Survey") and Section [4](#S4 "4 Runtime Enforcement in Hybrid/Continuous Domains ‣ Correct-by-Construction Runtime Enforcement in AI – A Survey") respectively. In Section [3](#S3 "3 Runtime Enforcement in Discrete Domains ‣ Correct-by-Construction Runtime Enforcement in AI – A Survey"), we discuss approaches for *discrete agent-environment systems*. The state space of the environment and the action space of the agent are modelled to be discrete and finite, sometimes with stochastic transitions. In this context, runtime enforcement techniques can be further distinguished by whether unsafe actions are blocked by the learner or are used for reward shaping. Differentiating aspects of approaches for runtime enforcement in discrete agent-environment systems are the different interference and communication techniques of the runtime enforcer with the agent, the used formal specification language, the used runtime enforcer synthesis techniques, and whether the safety analysis of actions is computed offline or online. We discuss several recent works for discrete agent-environment systems with a focus on advantages, limitations, and potential in their application to AI. In Section [4](#S4 "4 Runtime Enforcement in Hybrid/Continuous Domains ‣ Correct-by-Construction Runtime Enforcement in AI – A Survey"), we discuss runtime enforcement techniques for *continuous and hybrid agent-environment systems*. These systems pose additional challenges since they operate in continuous time, the state/action spaces are continuous, and the system dynamics are usually more complex, e.g., by having non-linearities or jumps between different modes of the hybrid system. These challenges increase the effort to monitor such systems and to provide an accurate safety analysis (i.e., such that safety is guaranteed while the enforcer is comparably permissive towards the agent). Such systems are even more susceptible to the curse of dimensionality (i.e., the state space of the system becomes unwieldy large) than discrete agent-environment systems. Hence determining proper abstractions and approximations of the system is even more important, particularly if the safety analysis is to be performed in real time. For the surveyed approaches that base on the automatic computation of such abstractions or approximations, we discuss how they address this challenge. After giving an overview of a good number approaches for runtime environment in AI-based systems, we then summarize some observations in Section [5](#S5 "5 Discussion on Runtime Enforcement in AI ‣ Correct-by-Construction Runtime Enforcement in AI – A Survey"), followed by the identification of future research directions. 2 Safety in Data-Driven AI Methods ----------------------------------- Increasing the safety of AI approaches gains more and more attention within the AI community [Schwalbe2020, Pereira2020]. Besides classical approaches, such as improving training data, removing biases, improving architectures and (hyper-)parameters, they aim to address safety from a system-level perspective. We can roughly categorize recent state-of-the-art approaches into three clusters: 1) understanding the capabilities and limitations of the AI system; 2) detecting and rejecting out-of-distribution/outlier inputs to the AI system; and 3) increasing the robustness of the AI system against unseen inputs and making decisions/actions of the system interpretable. In the following paragraphs, we briefly review existing approaches in each of the categories. #### Understanding the AI system: The training data of an AI system consists of a set of examples (pairs of inputs and outputs) that solve the desired task. Ideally, these examples are informative enough so that the system can generally solve the task, i.e., correctly mapping any input data to desired output data. Yet, real-world training data may be far from being *perfect*, resulting in approximation errors and uncertain mapping results of the system [Papadopoulos2001, Guo2017, Toubeh2019]. To estimate the residual uncertainty or prediction variance in the AI system, popular approaches use Bayesian methods [Jensen1996, Kendall2017, Sensoy2018], apply Monte Carlo dropout [Globerson2006, Gal2016], or utilize Deep Ensembles [Lakshminarayanan2016, Guo2017]. Yet, these uncertainty estimates are still not reliable, since perfect estimates can only be done given an infinite amount of data samples. #### Outlier detection: Uncertainties within an AI system can also be due to a distributional shift in which the data distributions are different in the training and operation domain [Mohseni2019]. For instance, the training dataset may lack corner cases that the system will encounter. Moreover, learned features, such as the shape or color of objects, may look different in the operating domain of the AI system. In general, these out-of-distribution (OOD) inputs may be detectable as outputs with a large prediction variance, but they can also be undetectable by having a low prediction variance. Thus, it is important to already detect OOD inputs before feeding them to the system or explicitly consider them within the system. The detection can be done, e.g., by including prediction-confidences in the network architecture [Devries2018], employing classifiers [Vyas2018] and outlier detectors [Hendrycks2018], monitoring neuron activity outside the training data ranges [Henzinger2019], using self-supervised representation learning [Golan2018], active monitoring with human input [Lukina2021], incorporating temperature scaling [Liang2017] or by incorporating generative adversarial networks [Nitsch2020]. OOD detection is very powerful, but often requires additional data or domain knowledge, rendering OOD detection infeasible for some applications. #### Robust and interpretable AI: Yet another way to increase the safety of the AI system is to enhance its robustness and resiliency against unseen/OOD inputs, perturbations or corner cases [Goodfellow2016]. The overall aim is to improve the system’s generalization capabilities. These improvements can be made through transfer learning [Hendrycks2019], regularization of the network [Zhang2017], verifying the networks robustness [Lukina2021], data augmentation [Dreossi2018], generative models [Nalisnick2018, Mohamed2016, Ghadirzadeh2020], or removing non-robust features from the training dataset [Ilyas2019]. Generalization improves robustness, yet there are no automated approaches to make a system more general or to check whether a system is general enough to not cause unsafe outputs. Interpretability approaches aim at making the output generation traceable [Linardatos2021, Carvalho2019]. For instance, by illustrating and explaining how the system derived a decision [Selvaraju2016, Selvaraju2017, Mitsioni2021, Ribeiro2016, Amir2018]. As a result, (non-)experts are able to check whether the system can actually justify its decision. Although the presented approaches increase the safety of AI systems and help to get a better understanding of the systems’ limitations, these approaches cannot provide strict safety guarantees or even soft probabilistic guarantees [Seshia2016]. Moreover, similar to testing-based validation, these approaches can often only reveal the presence of failures but not their absence. Formal runtime enforcement approaches aim specifically at providing strict safety guarantees by leveraging formal analysis techniques. 3 Runtime Enforcement in Discrete Domains ------------------------------------------ In *reinforcement learning* (RL) [sutton1998reinforcement], an agent aims to compute an optimal policy that maximizes the expected total amount of reward received from the environment. RL algorithms can be mainly divided into two categories: *model-based RL* and *model-free RL*. In the model-based approach, the learning agent constructs a model of its environment in form of a Markov decision process (MDP). By learning the model and applying planning approaches on the model, model-based RL can quickly obtain optimal policies, but the approach becomes impractical as the state space and action space grows. In contrast, model-free RL does not try to understand the environment and aims to learn a task through trial-and-error via interactions with the environment. Model-free RL is very scalable and has successfully been applied in solving various complex tasks, from playing video games to robotic tasks, but requires many samples for good performance. Therefore, safety is an especially challenging problem for model-free RL, since the learning agent needs to explore many unsafe behaviour in order to learn that it is unsafe. As a consequence, most formal runtime-enforcement techniques focus on ensuring safety in model-free RL, either during training or after training, or for both. In most work, safety properties are formulated in linear temporal logic (LTL). Several works consider only simple *invariant safety properties* like “two robots should never collide” [GiacobbeHKW21], other works consider the full safety fragment of LTL, which allows to formulate temporal safety properties like ”whenever the first signal rises, the second signal has to rise within the next 5 time steps”. There are two popular directions of research for runtime enforcement for model-free RL: (1) using the safety property to compute a maximally permissive enforcer, often called a shield, or (2) using the safety property for reward shaping. *Safety via Shielding.* Shields have been used in model-free RL to enforce safe operation of an agent during training and after training (safety is guaranteed as long as the shield is used). A shield [DBLP:conf/tacas/BloemKKW15] can automatically be computed from a given safety LTL specification and a model that captures all safety-relevant dynamics of the environment. The synthesis approach constructs a *safety game* from the safety specification and the environmental model. The *maximally permissive winning strategy* of the safety game allows all actions that will not cause a safety violation on the *infinite horizon* and is implemented in the shield. Shields have been categorized in post-shields and pre-shield [DBLP:conf/aaai/AlshiekhBEKNT18]. Post-shields monitor the actions selected by the agent and overwrite any unsafe action by a safe one. Pre-shields are implemented before the agent and block, at every time step, unsafe actions from the agent (also referred to as action masking). At every time step, the agent can only choose from the set of safe actions. Jansen et al. [DBLP:conf/concur/0001KJSB20] considered shielding in scenarios that incorporate uncertainty and therefore safety cannot be guaranteed. They introduce the concept of a probabilistic shield that enables RL decision-making to adhere to safety constraints with high probability. Probabilistic model checking techniques are used to compute the probabilities of all states and actions of the MDP to satisfy a safety LTL property, called the safety value of an action. A shield blocks an action if its safety value is smaller than some absolute threshold or relative threshold. Considering safety as quantitative measure allows risk taking and to tune the trade-off between safety and performance. Giacobbe et al. [GiacobbeHKW21] applied the same technique on Atari 2600 games and specified 43 safety properties for 31 games. The authors computed shields using a bounded horizon for all properties. Applying these shields resulted in the safest RL agents for Atari games currently available. ElSayed-Aly et al. [DBLP:conf/atal/Elsayed-AlyBAET21] considered shielding for multi-agent reinforcement learning (MARL) and proposed to either synthesize a single centralized shield that monitors and corrects all agents’ joint actions, or to synthesize multiple shields where each shield is only responsible for a subset of agents at each step. Furthermore, the authors introduced a minimal interference criteria for the MARL setting: a shield should change the actions of as few agents as possible when correcting an unsafe joint action. To compute a shield upfront, the safety of all actions for all reachable states has to be analyzed and stored. For large environments, this results in long offline computation times and huge shielding data bases, rendering shielding not tractable, and for dynamic or partially unknown environment, the offline computation is not possible. To tackle this issue, Könighofer et al. [DBLP:conf/nfm/KonighoferRPTB21] perform the safety analysis of the actions *online*. Using the time between two successive decisions of an agent, their approach builds an MDP quotient that captures the behaviour of the environment in the next n steps and analyzes the actions for the next decision states on the fly. However, the approach does not provide any worst-case guarantee on the computation time of the safety analysis. Therefore, the approach is suited for settings in which the agent does not have to make a decision in every time step and if needed, halting the agent to wait for a decision of the shield does not cause any harm. Achiam et al. [DBLP:conf/icml/AchiamHTA17] encoded safety in terms of constraints, leading to the line of research on *constrained policy optimization*. Constrained Markov decision processes (CMDPs) are used to decouple safety from reward, where an independent signal models the safety aspects. An optimal policy balances the trade-off between safety and performance, but there are no safety guarantees during learning. Therefore, this line of work was extended by Simão et al. [DBLP:conf/atal/SimaoJS21] by using a factored MDP that represents only the safety aspects of the full CMDP. The factored MDP is used to restrict the exploration of the learner and thereby allowing the RL agent to learn an optimal policy for the CMDP without violating the constraints. *Safety via Reward Shaping.* Several recent works [HahnPSS0W20, DBLP:journals/corr/abs-1902-00778] use safety properties expressed in LTL for reward shaping and are often referred to as *logically-constrained RL*. Approaches based on reward shaping will result in agent that will minimize the risk of safety violations after training. To ensure safety during training, such approaches need to be extended by restricting the exploration during training [HasanbeigAK20]. In logically-constrained RL, a formula expressing some desired properties is first converted to an automaton, which is then translated into a state-adaptive reward structure. Any RL-agent trained with this reward structure results in policies that maximise the probability of satisfying the given formula. In the context of safe RL, the formula expresses a safety property and the trained agent will minimize the risk of violating the property. Most works mentioned in this section discuss the impact of enforcing safety on the learning performance and provide requirements that the enforcement mechanism needs to satisfy to preserve the convergence guarantees of the learner [DBLP:conf/atal/SimaoJS21, DBLP:conf/aaai/AlshiekhBEKNT18]. Several papers show empirically that enforcing safety during learning has the potential to increase the agent’s performance if the safety and the performance properties are aligned. 4 Runtime Enforcement in Hybrid/Continuous Domains --------------------------------------------------- Ensuring the correct behavior of a system is particularly important for *safety-critical* systems. These operate in the physical world, which means that quantities of space and time become continuous. This complicates the enforcement of properties of such systems – as many verification problems for hybrid systems are undecidable, maximally permissive enforcement is undecidable in the general case, too. At the same time, many optimization problems for system behavior are equally undecidable in such environments and the aspects of the environment dynamics that are relevant for optimizing the system behavior can be unknown a-priori, which makes reinforcement learning attractive to allow an agent to adapt to the environment dynamics. Multiple streams of research have emerged that circumvent the undecidabilty of maximally permissive enforcement in various ways, such as focusing on simpler system dynamics and restricting the system’s behavior in ways that are not maximally permissive. A good example of the first stream of research is the work by Goorden et al. [DBLP:conf/adhs/GoordenLNNRS21], in which a maximally permissive controller is computed from a model in which the discrete behavior of the system may only depend on clock variables, so that controller synthesis algorithms from the area of timed automata can be used. This maximally permissive controller is then used as a system behavior constraint in a reinforcement learning process. Another example is a safe learning approach by Perkins and Barto [DBLP:journals/jmlr/PerkinsB02], in which they propose restricting a safe learning process to letting an agent learn how to switch between different Lyapunov-stable control laws that are known to be safe even when switching between them in an arbitrary manner. To handle also complex learner’s behavior in complex environments, the learners can be restricted to safe behavior in a way that is not maximally permissive. For instance, Fisac et al. [Fisac2018] present an approach to compute controlled invariant sets for known system dynamics that is based on a closed-form characterization of this set, which encodes the least-restrictive control law, which is subsequently approximated numerically (in the case of complex system dynamics). Another approach is to synthesize control barrier functions for known system dynamics, as done by Cheng et al. [DBLP:conf/aaai/ChengOMB19]. These lead to guaranteed safety of the learner when restricting it so that the learner never uses an action not allowed by the control barrier. The drawback of their approach is that the computed control barrier functions have to be affine, so that in order for them to be safe, they have to be computed in a way that makes them overly conservative. On the plus side, affine control barrier functions are easy to compute and completely continuous. Fulton and Platzer [DBLP:conf/aaai/FultonP18] provide an approach called *Justified Speculative Control* that operates by using a provably correct runtime monitor to filter the unsafe actions of a system and enforce that the learner only takes safe actions. Their approach builds on a provably correct non-deterministic backup strategy, for which the learner has to select actions allowed by a boundary strategy until it (possibly) becomes apparent that the environment of the learner does not behave in the way assumed for the formal safety proof of the boundary strategy (in which case the learner can also potentially choose unsafe actions). Their approach has the interesting property that it can also be applied if the backup strategy is only defined and verified for a part of the environment’s state. In this case, by integrating a distance metric to the modeled states into the learning process, the learner can be guided back to the safe states, which reduces the probability of unsafe behavior even for partially unknown environments. Nageshrao et al. [DBLP:conf/smc/NageshraoTF19] discuss an approach to integrate so-called “short horizon safety checks” into a reinforcement learning approach. Learner actions that are unsafe according to these checks are altered. They also record whenever this happens so that the unsafe situation can be replayed to the learner to speed up the convergence of the learner’s policy to a safe one. Since self-learning systems are particularly interesting for unknown system dynamics, there is also a rich body of works dealing with the question of how to enforce the safety of a learner in this case. The starting point is usually that the environment dynamics are only *partially known*, as if they are fully unknown, it is not even possible to guarantee the safety of the learner with the first step of the system’s execution. For instance, Gillulga and Tomlin [DBLP:conf/rss/GillulaT12] define an approach in which a self-learning controller is embedded into a safety controller that enforces the safety of a controlled physical system. In the approach, reachability analysis for the physical system is used in tandem with observing the disturbances observed at runtime, so that the safety wrapper becomes neither too conservative nor too permissive. Then, the safety enforcement approach by Fisac et al. [Fisac2018] mentioned above also includes a means to identify the magnitude of disturbances at runtime to adapt the enforcer at runtime to avoid being unnecessarily conversative. Of course, this means that if disturbances perform a sudden increase, this leads to temporarily unsafe behavior. Learning the unknown aspects of the environment dynamics is also present in other work. Cheng et al. [DBLP:conf/aaai/ChengOMB19] provide an approach for safe reinforcement learning that bases on the use of control barrier functions to detect unsafe actions by a learner as well as to guide the behavior of the learner. Such control barrier functions can be used under uncertain system dynamics, but the authors also include a process to learn a more precise model of the system dynamics over time, which enables a refinement of the control barrier functions to become less conservative when more information on the environment dynamics becomes available. 5 Discussion on Runtime Enforcement in AI ------------------------------------------ ### 5.1 Observations from the State of the Art In a variety of applications, AI systems will only unfold their full potential if such systems are safe during their operation. In our overview, we observed that the formal methods and AI communities often address safety from different perspectives. Formal runtime verification approaches aim at providing formal or probabilistic guarantees at all times, whereas AI-based safety approaches focus on improving safety as much as possible while not decreasing the system’s performance, e.g., by detecting out-of-distribution data or enhancing the system’s overall robustness. Formal methods usually regard the AI system as a black box whose actions might be adversarial. To detect (potentially) unsafe actions while lowering computational efforts, runtime verification approaches usually make use of abstractions or simplified models of the AI system (as, e.g., in [DBLP:conf/aaai/AlshiekhBEKNT18]). However, such simplifications might result in more conservative behavior of the system [Brunke2021] or be even incorrect for complex applications. AI-based approaches, on the other hand, commonly strive for model-free solutions so that the system is as free as possible to learn the desired task. Specifically for formal methods approaches, we saw that significant advances have been made in recent years, from introducing new theory to applying runtime verification to highly complex AI systems. Yet, we also observed that the conducted experiments are hard to compare with each other. In contrast to machine learning, the formal methods community has only first traces of a standard benchmark set [fmrChallenge] that allows one to compare the results of published approaches with each other. This lack of a big standardized benchmark set possibly hinders the community to advance theory but also to provide significant practical contributions. This problem is caused by different problem domains, various challenges within each application, as well as differing technical foundations in the runtime verification approaches. For instance, techniques based on reachability checking are hard to compare against barrier function-based approaches. Moreover, the common metrics to assess the performance of runtime verification approaches (e.g., runtime overhead in runtime monitoring or the expressivity of the specification language, [DBLP:journals/fmsd/SanchezSABBCFFK19]) are not appropriate when an unavoidable degree of conservativeness is used in monitoring (black-box) AI systems. Hence, in order to establish a common comparison, new metrics are needed. However, we see that more and more research groups are investigating these challenges and contributing towards enforcing safety in AI-based systems. ### 5.2 Future Directions and Conclusions Our communities need to overcome several challenges to make AI systems safe to use. Most AI approaches aim for being free of any model; yet, most formal methods approaches require a model. Model identification for large systems is still a challenging task and for certain applications, e.g., a robot that needs to make contact with a human, it is even difficult to design neural networks that approximate the dynamics or to mathematically define safety [Mitsioni2019, Zhang2017b]. We need to explore new techniques to analyze AI systems and to automatically generate models or abstractions of them. Moreover, many AI systems will be deployed in uncertain and partially observable environments, such as in autonomous driving. In such environments, it is even more challenging to verify safety, since the environment might be unknown and needs to be explored online, the behavior of agents may be unknown, and information on the system may be partial and even incorrect. We need more research in analyzing and exploring the environment and system on-the-fly during its operation. To make our research more comparable, our communities need to come up with challenging benchmarks that help us to measure the performance of our algorithms as well as to advance theory. In robotics and AI, the availability of benchmarks have led to new research that drastically improves the performance of algorithms [Moll2015, Sucan2012, Duan2016]. Even though environments such as the OpenAI safety gym [Ray2019] exist, they are made from the perspective of the AI community and do not reflect the requirements for runtime enforcement. Finally, both the AI and formal methods communities need to closely work together and foster synergies. In this way, we can increase safety, performance, and interpretability of AI-based systems together.
a3e34c7d-cd6a-4657-9150-6183dd37240f
trentmkelly/LessWrong-43k
LessWrong
The latest in preference elicitation and preference learning in AI [links] 'Preference elicitation' or 'preference extraction' or 'preference learning' is a key step in CEV. Most work on preference elicitation has been done by economists. The state of the art in AI is summarized here and here.
d93c2a3a-8cee-41a7-95fd-bfaf4e5e4e9b
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Bind Yourself to Reality Today's post, Bind Yourself to Reality was originally published on 22 March 2008. A summary (taken from the LW wiki):   > There are several reasons why it's worth talking about joy in the merely real in a discussion on reductionism. One is to leave a line of retreat. Another is to improve your own abilities as a rationalist by learning to invest your energy in the real world, and in accomplishing things here, rather than in a fantasy. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Joy in Discovery, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
0cd8f99f-9c8d-44f6-a147-0604439d3c21
StampyAI/alignment-research-dataset/arxiv
Arxiv
Interactive Learning from Policy-Dependent Human Feedback I Introduction --------------- Programming robots is very difficult, in part because the real world is inherently rich and—to some degree—unpredictable. In addition, our expectations for physical agents are quite high and often difficult to articulate. Nevertheless, for robots to have a significant impact on the lives of individuals, even non-programmers need to be able to specify and customize behavior. Because of these complexities, relying on end-users to provide instructions to robots programmatically seems destined to fail. Reinforcement learning (RL) from human trainer feedback provides a compelling alternative to programming because agents can learn complex behavior from very simple positive and negative signals. Furthermore, real-world animal training is an existence proof that people can train complex behavior using these simple signals. Indeed, animals have been successfully trained to guide the blind, locate mines in the ocean, detect cancer or explosives, and even solve complex, multi-stage puzzles. However, traditional reinforcement-learning algorithms have yielded limited success when the reward signal is provided by humans and have largely failed to benefit from the sophisticated training strategies that expert animal trainers use with animals. This failure has led to the development of new reinforcement-learning algorithms that are designed to learn from human-generated rewards and investigations into how people give interactive feedback [[1](#bib.bib1), [2](#bib.bib2), [3](#bib.bib3), [4](#bib.bib4), [5](#bib.bib5), [6](#bib.bib6)]. In general, many of these human-centered learning algorithms are built on the insight that people tend to give feedback that reflects the policy the agent should be following, rather than as a numeric value that is meant to be maximized by the agent. While this insight seems accurate, existing approaches assume models of feedback that are independent of the policy the agent is currently following. We present empirical results that demonstrate that this assumption is incorrect and further argue that policy-dependent feedback enables effective training strategies, such as differential feedback and policy shaping, from which we would like a learning agent to benefit. Following this result, we present Convergent Actor-Critic by Humans (COACH), an algorithm for learning from policy-dependent human feedback that is capable of benefiting from these strategies. COACH is based on the insight that the TD-error used by actor-critic algorithms is an unbiased estimate of the advantage function, which is a policy-dependent value roughly corresponding to how much better or worse an action is compared to the current policy and which captures these previously mentioned training strategies. To validate that COACH scales to complex problems, we train five different behaviors on a TurtleBot robot that makes decisions every 33ms from noisy image features that are not visible to the trainer. Ii Background -------------- For modeling the underlying decision-making problem of an agent being taught by a human, we adopt the Markov Decision Process (MDP) formalism. An MDP is a 5-tuple: ⟨S,A,T,R,γ⟩, where S is the set of possible states of the environment; A is the set of actions available to the agent; T(s′|s,a) is the transition function, which defines the probability of the environment transitioning to state s′ when the agent takes action a in environment state s; R(s,a,s′) is the reward function specifying the numeric reward the agent receives for taking action a in state s and transitioning to state s′; and γ∈[0,1] is a discount factor specifying how much immediate rewards are preferred to more distant rewards. A stochastic policy π for an MDP is a per-state action probability distribution that defines a mode of behavior; π:S×A→[0,1], where ∑a∈Aπ(s,a)=1,∀s∈S. In the MDP setting, the goal is to find the optimal policy π∗, which maximizes the expected future discounted reward when the agent selects actions in each state according to π∗; π∗=argmaxπE[∑∞t=0γtrt|π], where rt is the reward received at time t. Two important concepts in MDPs are the value function (Vπ) and action–value function (Qπ). The value function defines the expected future discounted reward from each state when following some policy and the action–value function defines the expected future discounted reward when an agent takes some action in some state and then follows some policy π thereafter. These equations can be recursively defined via the Bellman equation: Vπ(s)=∑aπ(s,a)Qπ(s,a) and Qπ(s,a)=∑s′T(s′|s,a)[R(s,a,s′)+γVπ(s′)]. For shorthand, the value functions for the optimal policies are usually denoted V∗ and Q∗. In reinforcement learning (RL), an agent interacts with an environment modeled as an MDP, but does not have direct access to the transition function or reward function and instead must learn a policy from environment observations. A common class of RL algorithms are actor-critic algorithms. Bhatnagar et al. [[7](#bib.bib7)] provides a general template for these algorithms. Actor-critic algorithms are named for the two main components of the algorithms: The actor is a parameterized policy that dictates how the agent selects actions; the critic estimates the value function for the actor and provides critiques at each time step that are used to update the policy parameters. Typically, the critique is the temporal difference (TD) error: δt=rt+γV(st)−V(st−1), which describes how much better or worse a transition went than expected. Iii Human-centered Reinforcement Learning ------------------------------------------ In this work, a human-centered reinforcement-learning (HCRL) problem is a learning problem in which an agent is situated in an environment described by an MDP but in which rewards are generated by a human trainer instead of from a stationary MDP reward function that the agent is meant to maximize. The trainer has a target policy π∗ they are trying to teach the agent. The trainer communicates this policy by giving numeric feedback as the agent acts in the environment. The goal of the agent is to learn the target policy π∗ from the feedback. To define a learning algorithm for this problem, we first characterize how human trainers typically use numeric feedback to teach target policies. If feedback is stationary and intended to be maximized, it can be treated as a reward function and standard RL algorithms used. Although this approach has had some success [[8](#bib.bib8), [9](#bib.bib9)], there are complications that limit its applicability. In particular, a trainer must take care that the feedback they give contains no unanticipated exploits, constraining the feedback strategies they can use. Indeed, prior research has shown that interpreting human feedback like a reward function often induces positive reward cycles that leads to unintended behaviors [[10](#bib.bib10), [11](#bib.bib11)]. The issues with interpreting feedback as reward have led to the insight that human feedback is better interpreted as a comment on the agent’s behavior; for example, positive feedback roughly corresponds to “that was good” and negative feedback roughly corresponds to “that was bad.” Existing HCRL work adopting this perspective includes TAMER [[1](#bib.bib1)], SABL [[6](#bib.bib6)], and Policy Shaping [[5](#bib.bib5)], discussed in more detail in the Related Work section. We note, though, that all assume that human feedback is independent of the agent’s current policy. We provide empirical results that show this assumption to be incorrect. Iv Policy-dependent Feedback ----------------------------- Evidence that human feedback is influenced by the agent’s current policy can be seen in previous work. For example, it was observed that trainers taper their feedback over the course of learning [[11](#bib.bib11), [12](#bib.bib12), [9](#bib.bib9)]. One explanation for decreasing feedback rates is policy-dependent feedback, but trainer fatigue is another. We provide a stronger result showing that trainers—for the same state–action pair—choose positive or negative feedback depending on their perception of the learner’s behavior. This finding serves as a warning for algorithms that rely on an assumption policy-independent feedback. ### Iv-a Empirical Results ![The training interface shown to AMT users.](https://media.arxiv-vanity.com/render-output/7788207/interface.png) Fig. 1: The training interface shown to AMT users. We had Amazon Mechanical Turk (AMT) participants teach an agent in a simple sequential task, illustrated in Figure [1](#S4.F1 "Fig. 1 ‣ IV-A Empirical Results ‣ IV Policy-dependent Feedback ‣ Interactive Learning from Policy-Dependent Human Feedback"). Participants were instructed to train a virtual dog to walk to the yellow goal location in a grid world as fast as possible but without going through the green cells. They were additionally told that, as a result of prior training, their dog was already either “bad”, “alright”, or “good” at the task and were shown examples of each behavior before training. In all cases, the dog would start in the location shown in Figure [1](#S4.F1 "Fig. 1 ‣ IV-A Empirical Results ‣ IV Policy-dependent Feedback ‣ Interactive Learning from Policy-Dependent Human Feedback"). “Bad” dogs walked straight through the green cells to the yellow cell. “Alright” dogs first moved left, then up, and then to the goal, avoiding green but not taking the shortest route. “Good” dogs took the shortest path to yellow without going through green. During training, participants saw the dog take an action from one tile to another and then gave feedback after every action using a continuous labeled slider as shown. The slider always started in the middle of the scale on each trial, and several points were labeled with different levels of reward (praise and treats) and punishment (scolding and a mild electric shock). Participants went through a brief tutorial using this interface. Responses were coded as a numeric value from −50 to 50, with “Do Nothing” as the zero-point. During the training phase, participants trained a dog for three episodes that all started in the same position and ended at the goal. The dog’s behavior was pre-programmed for all episodes in such a way that the first step of the final episode would reveal if feedback was policy-dependent. The dog always performed the same behavior in the first two episodes, and then performed the “alright” behavior in the third episode. Each user was placed into one of three different conditions that determined how the dog would behave in the first two episodes: either “bad,” “alright,” or “good.” If feedback is policy dependent, we expect more positive feedback in the “bad” condition than in the “alright” or “good” condition; “alright” behavior is an improvement over the previous “bad” behavior whereas it is either no improvement or a deterioration compared to “alright” or “good” behavior. Figure [2](#S4.F2 "Fig. 2 ‣ IV-A Empirical Results ‣ IV Policy-dependent Feedback ‣ Interactive Learning from Policy-Dependent Human Feedback") shows boxplots and individual responses for the first step of the final episode under each of the three conditions. These results indicate that the sign of feedback is sensitive to the learner’s policy, as predicted. The mean and median feedback under the “bad” condition is slightly positive (Mean = 9.8, Median = 24, S.D. = 22.2; planned Wilcoxon one-sided signed-rank test: Z=1.71,p<0.05), whereas it is negative for the “alright” condition (Mean = −18.3, Median = −23.5, S.D. = 24.6; planned Wilcoxon two-sided signed-rank test: Z=−3.15,p<0.01) and “good” condition (Mean = −10.8, Median = −18.0, S.D. = 20.7; planned Wilcoxon one-sided signed-rank test: Z=−2.33,p<0.05). There was a main effect across the three conditions (p<0.01, Kruskal-Wallace Test), and pairwise comparisons indicated that only the “bad” condition differed from “alright” and “good” conditions (p<0.01 for both, Bonferroni-corrected, Mann-Whitney Pairwise test). ![Box plots with median, interquartile range, and minimum/maximum values of human responses to first step of the final episode for each condition. Feedback tended to be positive when the prior behavior was ](https://media.arxiv-vanity.com/render-output/7788207/feedback_dist.png) Fig. 2: Box plots with median, interquartile range, and minimum/maximum values of human responses to first step of the final episode for each condition. Feedback tended to be positive when the prior behavior was bad and negative otherwise. ### Iv-B Training Strategies Beyond the fact that our previous evidence suggests that people give policy-dependent feedback, we argue that policy-dependent feedback affords desirable training strategies. Specifically, we consider three different feedback schemes that can be viewed as operationalizations of well-studied behavior analysis reinforcement schedules [[13](#bib.bib13)]: Diminishing Returns: gradually decrease positive feedback for good actions as the agent adopts those actions. Differential Feedback: vary the magnitude of feedbacks w.r.t. the degree of improvement or deterioration in behavior. Policy Shaping: provide positive feedback for suboptimal actions that improve behavior and then negative feedback after the improvement has been made. Diminishing returns is a useful strategy because it decreases the burden of how actively a trainer must supply feedback and removes the need for explicit training and execution phases. Differential feedback is useful because it can serve to highlight the most important behaviors in the state space and communicate a kind of urgency for learning them. Finally, policy shaping concerns feedback that signals an improvement relative to the current baseline—as in the AMT study above. It is useful for providing a direction for the learner to follow at all times, even when the space of policies is continuous or otherwise impractical to search. V Convergent Actor-Critic by Humans ------------------------------------ In this section, we introduce Convergent Actor-Critic by Humans (COACH), an actor-critic-based algorithm capable of learning from policy-dependent feedback. COACH is based on the insight that the advantage function is a good model of human feedback and that actor-critic algorithms update a policy using the critic’s TD error, which is an unbiased estimate of the advantage function. Consequently, an agent’s policy can be directly modified by human feedback without a critic component. We first define the advantage function and describe how it relates to the three previously mentioned training strategies. Then, we present the general update rule for COACH and its convergence. Finally, we present Real-time COACH, which includes mechanisms for providing variable magnitude feedback and learning in problems with a high-frequency decision cycle. ### V-a The Advantage Function and Training Strategies The advantage function [[14](#bib.bib14)] Aπ is defined as | | | | | | --- | --- | --- | --- | | | Aπ(s,a)=Qπ(s,a)−Vπ(s). | | (1) | Roughly speaking, the advantage function describes how much better or worse an action selection is compared to the agent’s performance following policy π. We now show that feedback assigned according to the advantage function follows the patterns of all three training strategies. Showing that the advantage function captures the differential feedback strategy is trivial, because the advantage function is defined by how much better taking an action over its current policy is expected to be. To show that the advantage function induces diminishing returns, consider what happens when the learner improves its behavior by shifting to a higher scoring action a in some state s. As its probability of selecting a goes to 1, Aπ(s,a)=Qπ(s,a)−Qπ(s,a)=0, because the value function Vπ(s)=∑aπ(s,a)Qπ(s,a). Since the expected value is a smooth linear combination of the Q-values, as the agent adopts action a, Aπ(s,a)→0, resulting in gradually decreasing feedback. To show that the advantage function induces policy shaping, let us first assume w.l.o.g. that there are three possible actions, where action a3 is exclusively optimal, and action a2 is better than action a1. That is, Q∗(s,a3)>Q∗(s,a2)>Q∗(s,a1). For illustrative purposes, let us also assume that the agent has learned the optimal policy in all states except state s, wherein the agent with near-certain probability selects the worst action a1 (π(s,a1)≈1) and that all actions lead to some other state. In this scenario, Aπ(s,a)=Q∗(s,a)−∑a′π(s,a′)Q∗(s,a′)≈Q∗(s,a)−Q∗(s,a1). Since Q∗(s,a2)>Q∗(s,a1), it follows that Aπ(s,a2) is positive. Consequently, in this condition, suboptimal action a2 would receive positive feedback. Now consider the case when the agent with near-certainty selects optimal action a3. Under this condition, Aπ(s,a)≈Q∗(s,a)−Q∗(s,a3) and since Q∗(s,a3)>Q∗(s,a2), Aπ(s,a2) is negative. Once again, since Vπ is a smooth linear function of Qπ, as the agent adopts optimal action a3, Aπ(s,a2) becomes negative. Therefore, suboptimal actions can be rewarded and then punished as the agent improves at the task, producing policy shaping. ### V-B Convergence and Update Rule Given a performance metric ρ, Sutton et al. [[15](#bib.bib15)] derive a policy gradient algorithm of the form: Δθ=α∇θρ. Here, θ represents the parameters that control the agent’s behavior and α is a learning rate. Under the assumption that ρ is the discounted expected reward from a fixed start state distribution, they show that | | | | | --- | --- | --- | | | ∇θρ=∑sdπ(s)∑a∇θπ(s,a)Qπ(s,a), | | where dπ(s) is the component of the (discounted) stationary distribution at s. A benefit of this form of the gradient is that, given that states are visited according to dπ(s) and actions are taken according to π(s,a), the update at time t can be made as: | | | | | | --- | --- | --- | --- | | | Δθt=αt∇θπ(st,at)ft+1π(st,at), | | (2) | where E[ft+1]=Qπ(st,at)−v(s) for any action-independent function v(s). In the context of the present paper, ft+1 represents the feedback provided by the trainer. It follows trivially that if the trainer chooses the policy-dependent feedback ft=Qπ(st,at), we obtain a convergent learning algorithm that (locally) maximizes discounted expected reward. In addition, feedback of the form ft=Qπ(st,at)−Vπ(st)=Aπ(s,a) also results in convergence. Note that for the trainer to provide feedback in the form of Qπ or Aπ, they would need to “peer inside” the learner and observe its policy. In practice, the trainer estimates π by observing the agent’s actions. ### V-C Real-time COACH There are challenges in implementing Equation [2](#S5.E2 "(2) ‣ V-B Convergence and Update Rule ‣ V Convergent Actor-Critic by Humans ‣ Interactive Learning from Policy-Dependent Human Feedback") for real-time use in practice. Specifically, the interface for providing variable magnitude feedback needs to be addressed, and the question of how to handle sparseness and the timing of feedback needs to be answered. Here, we introduce Real-time COACH, shown in Algorithm [1](#alg1 "Algorithm 1 ‣ V-C Real-time COACH ‣ V Convergent Actor-Critic by Humans ‣ Interactive Learning from Policy-Dependent Human Feedback"), to address these issues. For providing variable magnitude reward, we use reward aggregation [[1](#bib.bib1)]. In reward aggregation, a trainer selects from a discrete set of feedback values and further raises or lowers the numeric value by giving multiple feedbacks in succession that are summed together. While sparse feedback is not especially problematic (because no feedback results in no change in policy), it may slow down learning unless the trainer is provided with a mechanism to allow feedback to affect a history of actions. We use eligibility traces [[16](#bib.bib16)] to help apply feedback to the relevant transitions. An eligibility trace is a vector that keeps track of the policy gradient and decays exponentially with a parameter λ. Policy parameters are then updated in the direction of the trace, allowing feedback to affect earlier decisions. However, a trainer may not always want to influence a long history of actions. Consequently, real-time COACH maintains multiple eligibility traces with different temporal decay rates and the trainer chooses which eligibility trace to use. This trace choice may be handled implicitly with the feedback value selection or explicitly. Due to reaction time, human feedback is typically delayed by about 0.2 to 0.8 seconds from the event to which they meant to give feedback [[10](#bib.bib10)]. To handle this delay, feedback in Real-time COACH is associated with events from d steps ago to cover the gap. Eligibility traces further smooth the feedback to older events. Finally, we note that just as there are numerous variants of actor-critic update rules, similar variations can be used in the context of COACH. policy πθ0, trace set λ, delay d, learning rate α Initialize traces eλ←0 ∀λ∈λ observe initial state s0 for t=0 to ∞ do      select and execute action at∼πθt(st,⋅)      observe next state st+1, sum feedback ft+1, and λ      for λ′∈λ do          eλ′←λ′eλ′+1πθt(st−d,at−d)∇θtπθt(st−d,at−d)      end for      θt+1←θt+αft+1eλ end for Algorithm 1 Real-time COACH Vi Related Work ---------------- An inspiration for our work is the TAMER framework [[10](#bib.bib10)]. In TAMER, trainers provide interactive numeric feedback as the learner takes actions. The feedback is interpreted as an exemplar of the reward function for the previous state–action pair and is used to learn the reward function. When the agent makes rapid decisions, TAMER divides the feedback among the recent state–action pairs according to a probability distribution. TAMER makes decisions by myopically choosing the action with the highest reward estimate. Because the agent myopically maximizes reward, the feedback may be interpreted as exemplars of Q∗. Later work investigated non-myopically maximizing the learned reward function with a planning algorithm [[17](#bib.bib17)], but this approach requires a model of the environment and special treatment of termination conditions. Because TAMER expects feedback to be policy independent, it does not support the diminishing returns or policy-shaping strategies, and handles differential feedback only in so far as it uses numeric feedback. In our robotics case study, we provide an explicit example where TAMER’s failure to support diminishing returns can result in unlearning. Two other closely related approaches are SABL [[6](#bib.bib6)] and Policy Shaping [[5](#bib.bib5)] (unrelated to the policy shaping feedback strategy defined above). Both of these approaches treat feedback as discrete probabilistic evidence of a parametrized policy. SABL’s probabilistic model additionally includes (learnable) parameters for describing how often a trainer is expected to give explicit positive or negative feedback. Both of these approach assume policy-independent feedback and do not support the three training strategies described. There have also been some domains in which treating human feedback as reward signals to maximize has had some success, such as in shaping the control for a prosthetic arm [[8](#bib.bib8)] and learning how to interact in an online chat room from multiple users’ feedback [[9](#bib.bib9)]. An interesting area of future work is to test whether performance in these domains can be improved with COACH given our insights into the nature of human feedback—work on the chat-room learning domain did report challenges due to diminishing feedback. Some research has examined combining human feedback with more traditional environmental rewards [[18](#bib.bib18), [19](#bib.bib19), [20](#bib.bib20), [21](#bib.bib21)]. A challenge in this context in practice is that rewards do not naturally come from the environment and must be programmatically defined. However, it is appealing because the agent can learn in the absence of an active trainer. We believe that COACH could be straightforwardly modified to learn in this setting as well. Finally, a related research area is learning from demonstration (LfD), in which a human provides examples of the desired behavior. There are a number of different approaches to solving this problem surveyed by Argall et al. [[22](#bib.bib22)]. We see these approaches as complementary to HCRL because it is not always possible, or convenient, to provide demonstrations. LfD approaches that learn a parametrized policy could also operate with COACH, allowing the agent to have their policy seeded by demonstrations, and then fine tuned with interactive feedback. Vii Robotics Case Study ------------------------ In this section, we present qualitative results applying Real-time COACH to a TurtleBot robot. The goal of this study was to test that COACH can scale to a complex domain involving multiple challenges, including training an agent that operates on a fast decision cycle (33ms), noisy non-Markov observations from a camera, and agent perception that is hidden from the trainer. To demonstrate the flexibility of COACH, we trained it to perform five different behaviors involving a pink ball and cylinder with an orange top using the same parameter selections. We discuss these behaviors below. We also contrast the results to training with TAMER. We chose TAMER as a comparison because, to our knowledge, it is the only HCRL algorithm with success on a similar platform [[23](#bib.bib23)]. The TurtleBot is a mobile base with two degrees of freedom that senses the world from a Kinect camera. We discretized the action space to five actions: forward, backward, rotate clockwise, rotate counterclockwise, and do nothing. The agent selects one of these actions every 33ms. To deliver feedback, we used a Nintendo Wii controller to give +1, +4, or −1 numeric feedback, and pause and continue training. For perception, we used only the RGB image channels from the Kinect. Because our behaviors were based around a relocatable pink ball and a fixed cylinder with an orange top, we hand constructed relevant image features to be used by the learning algorithms. These features were generated using techniques similar to those used in neural network architectures. In the future, we will investigate learning these features along with the policy. The features were constructed by first transforming the image into two color channels associated with the color of the ball and cylinder. Sum pooling to form a lower-dimensional 8×8 grid was applied to each color channel. Each sum-pooling unit was then passed through three different normalized threshold units defined by Ti(x)=min(xϕi,1), where ϕi specifies how quickly the ith threshold unit saturates. Using multiple saturation parameters differentiates the distance of objects, resulting in three “depth” scales per color channel. Finally, we passed these results through a 2×8 max-pooling layer with stride 1. The five behaviors we trained were push–pull, hide, ball following, alternate, and cylinder navigation. In push–pull, the TurtleBot is trained to navigate toward the ball when it is far, and back away from it when it is near. The hide behavior has the TurtleBot back away from the ball when it is near and turn away from it when it is far. In ball following, the TurtleBot is trained to navigate to the ball. In the alternate task, the TurtleBot is trained to go back and forth between the cylinder and ball. Finally, cylinder navigation involves the agent navigating to the cylinder. We further classify training methods for each of these behaviors as flat, involving the push–pull, hide, and ball following behaviors; and compositional, involving the alternate and cylinder navigation behaviors. In all cases, our human trainer (one of the co-authors) used differential feedback and diminishing returns to quickly reinforce behaviors and restrict focus to the areas needing tuning. However, in alternate and cylinder navigation, they attempted more advanced compositional training methods. For alternate, the agent was first trained to navigate to the ball when it sees it, and then turn away when it is near. Then, the same was independently done for the cylinder. After training, introducing both objects would cause the agent to move back and forth between them. For cylinder navigation, they attempted to make use of an animal-training method called lure training in which an animal is first conditioned to follow a lure object that is then used to guide it through more complex behaviors. In cylinder navigation, they first trained the ball to be a lure, used it to guide the TurtleBot to the cylinder, and finally gave a +4 reward to reinforce the behaviors it took when following the ball (turning to face the cylinder, moving toward it, and stopping upon reaching it). The agent would then navigate to the cylinder without requiring the ball to be present. For COACH parameters, we used a softmax parameterized policy, where each action preference value was a linear function of the image features, plus tanh(θa), where θa is a learnable parameter for action a, providing a preference in the absence of any stimulus. We used two eligibility traces with λ=0.95 for feedback +1 and −1, and λ=0.9999 for feedback +4. The feedback-action delay d was set to 6, which is 0.198 seconds. Additionally, we used an actor-critic parameter-update rule variant in which action preference values are directly modified (along its gradient), rather than by the gradient of the policy [[24](#bib.bib24)]. This variant more rapidly communicates stimulus–response preferences. For TAMER, we used typical parameter values for fast decision cycle problems: delay-weighted aggregate TAMER with uniform distribution credit assignment over 0.2 to 0.8 seconds, ϵp=0, and cmin=1 [[10](#bib.bib10)]. (See prior work for parameter meaning.) For TAMER’s reward function approximation, we used the same parameters as the action preferences in COACH. ### Vii-a Results and Discussion COACH was able to successfully learn all five behaviors and a video showing its learning is available online at <https://vid.me/3h2s>. Each of these behaviors were trained in less than two minutes, including the time spent verifying that a behavior worked. Differential feedback and diminishing returns allowed only the behaviors in need of tuning to be quickly reinforced or extinguished without any explicit division between training and testing. Moreover, the agent successfully benefited from the compositional training methods, correctly combining subbehaviors for alternate, and quickly learning cylinder navigation with the lure. TAMER only successfully learned the behaviors using the flat training methodology and failed to learn the compositionally trained behaviors. In all cases, TAMER tended to forget behavior, requiring feedback for previous decisions it learned to be resupplied after it learned a new decision. For the alternate behavior, this forgetting led to failure: after training the behavior for the cylinder, the agent forgot some of the ball-related behavior and ended up drifting off course when it was time to go to the ball. TAMER also failed to learn from lure training, which was expected since TAMER does not allow reinforcing a long history of behaviors. We believe TAMER’s forgetting is a result of interpreting feedback as reward-function exemplars in which new feedback in similar contexts can change the target. To illustrate this problem, we constructed a well-defined scenario in which TAMER consistently unlearns behavior. In this scenario, the goal was for the TurtleBot to always stay whenever the ball was present, and move forward if just the cylinder was present. We first trained TAMER to stay when the ball alone was present using many rapid rewards (yielding a large aggregated signal). Next, we trained it to move forward when the cylinder alone was present. We then introduced both objects, and the TurtleBot correctly stayed. After rewarding it for staying with a single reward (weaker than the previously-used many rapid rewards), the TurtleBot moved forward. This counter-intuitive unlearning is a consequence of the small reward decreasing its reward-function target for the stay action to a point lower than the value for moving forward. COACH does not exhibit this problem—any reward for staying will strengthen the behavior. Viii Conclusion ---------------- In this work, we presented empirical results that show that the numeric feedback people give agents in an interactive training paradigm is influenced by the agent’s current policy and argued why such policy-dependent feedback enables useful training strategies. We then introduced COACH, an algorithm that, unlike existing human-centered reinforcement-learning algorithms, converges to a local optimum when trained with policy-dependent feedback. We finally we showed that COACH scales up in the context of a robotics case study in which a TurtleBot was successfully taught multiple behaviors with advanced training methods. There are a number of exciting future directions to extend this work. In particular, because COACH is built on the actor-critic paradigm, it should be possible to combine it straightforwardly with learning from demonstration and environmental rewards, allowing an agent to be trained in a variety of ways. Second, because people give policy-dependent feedback, greater gains may be possible by investigating how people model the current policy of the agent and how their model differs from the agent’s actual policy.
cd22a543-a351-43a8-b849-43c6e1b593a5
trentmkelly/LessWrong-43k
LessWrong
Privileging the Question Related to: Privileging the Hypothesis > Remember the exercises in critical reading you did in school, where you had to look at a piece of writing and step back and ask whether the author was telling the whole truth? If you really want to be a critical reader, it turns out you have to step back one step further, and ask not just whether the author is telling the truth, but why he's writing about this subject at all. -- Paul Graham > There's an old saying in the public opinion business: we can't tell people what to think, but we can tell them what to think about. -- Doug Henwood > Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it. -- Eliezer Yudkowsky Here are some political questions that seem to commonly get discussed in US media: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed?  These are all examples of what I'll call privileged questions (if there's an existing term for this, let me know): questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. The questions above are probably not the most important questions we could be answering right now, even in politics (I'd guess that the economy is more important). Outside of politics, many LWers probably think "what can we do about existential risks?" is one of the most important questions to answer, or possibly "how do we optimize charity?"  Why has the media privileged these questions? I'd guess that the media is incentivized to ask whatever questions will get them the most views. That's a very different goal from asking the most important questions, and is one reason to stop paying attention to the media.  The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question tha
7405f5c5-3f46-4e9a-8cde-4cb1e0ecc18e
trentmkelly/LessWrong-43k
LessWrong
Meetup : Durham Meetup: Article discussions Discussion article for the meetup : Durham Meetup: Article discussions WHEN: 25 October 2012 07:00:00PM (-0400) WHERE: Francesca's 706B 9th St Durham, NC 27705 We'll be discussing some articles over coffee and pastries. In particular: Generalizing From One Example It's okay to be (at least a little) irrational A Technical Explanation of Technical Explanation Reading at least some of these in advance is encouraged, but not required. We will summarize and discuss the articles, and in particular how we can apply them in the real world. Also on the agenda: discussion of rationality goals from the previous meeting, and discussion of future meetup topics. Discussion article for the meetup : Durham Meetup: Article discussions
e44d1b85-2d9c-45ee-81aa-07d34c1662c8
trentmkelly/LessWrong-43k
LessWrong
My Thoughts on the Apperception Engine Epistemic status: I am writing mainly to clarify my own thoughts on this, but I think it is definitely worth sharing what I have here. This is not meant to be a review of the paper itself, rather a discussion of its implications. So: > We note in particular that in the sequence induction intelligence tests, our system achieved human-level performance. This is a very flashy claim from the authors of this paper, which has been discussed briefly on LW. They are referring to the Apperception Engine (surely a name Babbage would be proud of), which they claim is able to create human-readable causal models, and perform at human-level in certain domains. Wow! Or perhaps not. The AE itself resembles a small portion of AIXI if it was implemented by someone who hadn't heard of Bayes' theorem (or AIXI), which acts on an incomplete sequence of "sensory inputs" (e.g. 123456789 of which only --34-6-8- might be shown to the system). It has a set of meta-rules which define a large series of what we might call "hypotheses" about the world. Each of these consists of a set of initial conditions at timestep 0; and a list of rules which both relate each timestep to the next, and determine the predicted sensory input. These are iterated to create a list of sensory inputs over time, which is compared with the "visible" elements of the sequence. The most important part of this process is that these hypotheses generally posit some unseen objects with various states: they are models of the world with internal structure and rules. To choose between multiple hypotheses which perfectly fit the visible elements of the sequence, a cost function is used, playing the role of a complexity-penalizing prior. (There are also some other constraints on hypotheses, but I think these are mostly irrelevant here) These simplifications compared to AIXI mean it is computable (although it presumably sometimes fails to find a hypothesis) but this does not necessarily mean fast. Computable does not necessar
36e84970-cfff-458c-99e8-510e98140436
StampyAI/alignment-research-dataset/arbital
Arbital
Value-laden An intuitive human category, or other humanly intuitive quantity or fact, is *value-laden* when it passes through human goals and desires, such that an agent couldn't reliably determine this intuitive category or quantity without knowing lots of complicated information about human goals and desires (and how to apply them to arrive at the intended concept). In terms of [Hume's is-ought type distinction](https://arbital.com/p/), value-laden categories are those that humans compute using information from the ought side of the boundary, whether or not they notice they are doing so. # Examples ## Impact vs. important impact Suppose we want an AI to cure cancer, without this causing any *important side effects.* What is or isn't an "important side effect" depends on what you consider "important". If the cancer cure causes the level of thyroid-stimulating hormone to increase by 5%, this probably isn't very important. If the cure increases the user's serotonin level by 5% and this significantly changes the user's emotional state, we'd probably consider that quite important. But unless the AI already understands complicated human values, it doesn't necessarily have any way of knowing that one change in blood chemical levels is "not important" and the other is "important". If you imagine the cancer cure as disturbing a set of variables $X_1, X_2, X_3...$ such that their values go from $x_1, x_2, x_3$ to $x_1^\prime, x_2^\prime, x_3^\prime$ then the question of which $X_i$ are *important* variables is value-laden. If we temporarily mechanomorphize humans and suppose that we have a utility function, then we could say that variables are "important" when they're evaluated by our utility function, or when changes to those variables change our expected utility. But by [orthogonality](https://arbital.com/p/1y) and [Humean freedom](https://arbital.com/p/2fs) of the utility function, there's an unlimited number of increasingly complicated utility functions that take into account different variables and functions of variables, so to know what we intuitively mean by "important", the AI would need information of high [algorithmic](https://arbital.com/p/5v) [complexity](https://arbital.com/p/5l) that the AI had no way to deduce *a priori*. Which variables are "important" isn't a question of simple fact - it's on the "ought" side of the [Humean is-ought type distinction](https://arbital.com/p/) - so we can't assume that an AI which becomes increasingly good at answering "is"-type questions also knows which variables are "important". Another way of looking at it is that if an AI merely builds a very good predictive model of the world, the set of "important variables" or "bad side effects" would be a squiggly category with a complicated boundary. Even after the AI has already formed a rich natural is-language to describe concepts like "thyroid" and "serotonin" that are useful for modeling and predicting human biology, it might still require a long message in this language to exactly describe the wiggly boundary of "important impact" or the even more wiggly boundary of "bad impact". This suggests that it might be simpler to try to tell the AI to cure cancer with [a minimum of *any* side effects](https://arbital.com/p/2pf), and [checking any remaining side effects](https://arbital.com/p/2qq) with the human operator. If we have a set of "impacts" $X_k$ to be either minimized or checked which is broad enough to include, in passing, *everything* inside the squiggly boundary of the $X_h$ that humans care about, then this broader boundary of "any impact" might be smoother and less wiggly - that is, a short message in the AI's is-language, making it easier to learn. For the same reason that a library containing every possible book has [less information](https://arbital.com/p/5v) than a library which only contains one book, a category boundary "impact" which includes everything a human cares about, *plus some other stuff,* can potentially be much simpler than an exact boundary drawn around "impacts we care about" which is value-laden because it involves caring. From a human perspective, the complexity of our value system is already built into us and now appears as a deceptively simple-looking function call - relative to the complexity already built into us, "bad impact" sounds very obvious and very easy to describe. This may lead people to underestimate the difficulty of training AIs to perceive the same boundary. (Just list out all the impacts that potentially lower expected value, darn it! Just the *important* stuff!) ## Faithful simulation vs. adequate simulation Suppose we want to run an "adequate" or "good-enough" simulation of an uploaded human brain. We can't say that an adequate simulation is one with identical input-output behavior to a biological brain, because the brain will almost certainly be a chaotic system, meaning that it's impossible for any simulation to get exactly the same result as the biological system would yield. We nonetheless don't want the brain to have epilepsy, or to go psychopathic, etcetera. The concept of an "adequate" simulation, in this case, is really standing in for "a simulation such that the *expected value* of using the simulated brain's information is within epsilon of using a biological brain". In other words, our intuitive notion of what counts as a good-enough simulation is really a value-laden threshold because it involves an estimate of what's good enough. So if we want an AI to have a notion of what kind of simulation is a faithful one, we might find it simpler to try to describe some superset of brain properties, such that if the simulated brain doesn't perturb the expectations of those properties, it doesn't perturb expected value either from our own intuitive standpoint (meaning the result of running the uploaded brain is equally valuable in our own expectation). This set of faithfulness properties would need to automatically pick up on changes like psychosis, but could potentially pick up on a much wider range of other changes that we'd regard as unimportant, so long as all the important ones are in there. ## Personhood Suppose that non-vegetarian programmers train an AGI on their intuitive category "person", such that: - Rocks are not "people" and can be harmed if necessary. - Shoes are not "people" and can be harmed if necessary. - Cats are sometimes valuable to people, but are not themselves people. - Alice, Bob, and Carol are "people" and should not be killed. - Chimpanzees, dolphins, and the AGI itself: not sure, check with the users if the issue arises. Now further suppose that the programmers haven't thought to cover, in the training data, any case of a cryonically suspended brain. Is this a person? Should it not be harmed? On many 'natural' metrics, a cryonically suspended brain is more similar to a rock than to Alice. From an intuitive perspective of avoiding harm to sapient life, a cryonically suspended brain has to be presumed a person until otherwise proven. But the natural, or inductively simple category that covers the training cases is likely to label the brain a non-person, maybe with very high probability. The fact that we want the AI to be careful not to hurt the cryonically suspended brain is the sort of thing you could only deduce by knowing which sort of things humans care about and why. It's not a simple physical feature of the brain itself. Since the category "person" is a value-laden one, when we extend it to a new region beyond the previous training cases, it's possible for an entirely new set of philosophical considerations to swoop in, activate, and control how we classify that case via considerations that didn't play a role in the previous training cases. # Implications Our intuitive evaluation of value-laden categories goes through our [Humean degrees of freedom](https://arbital.com/p/2fs). This means that a value-laden category which a human sees as intuitively simple, can still have high [https://arbital.com/p/-5v](https://arbital.com/p/-5v), even relative to sophisticated models of the "is" side of the world. This in turn means that even an AI understands the "is" side of the world very well, might not correctly and exactly learn a value-laden category from a small or incomplete set of training cases. From the perspective of training an agent that hasn't yet been aligned along all the Humean degrees of freedom, value-laden categories are very wiggly and complicated relative to the agent's empirical language. Value-laden categories are liable to contain exceptional regions that your training cases turned out not to cover, where from your perspective the obvious intuitive answer is a function of new value-considerations that the agent wouldn't be able to deduce from previous training data. This is why much of the art in Friendly AI consists of trying to rephrase an alignment schema into terms that are simple relative to "is"-only concepts, where we want an AI with an impact-in-general metric, rather than AI which avoids only bad impacts. "Impact" might have a simple, central core relative to a moderately sophisticated language for describing the universe-as-is. "Bad impact" or "important impact" don't have a simple, central core and hence might be much harder to identify via training cases or communication. Again, this is hard because humans do have all their subtle value-laden categories like 'important' built in as opaque function calls. Hence people approaching value alignment for the first time often expect that various concepts are easy to identify, and tend to see the intuitive or intended values of all their concepts as "common sense" regardless of which side of the is-ought divide that common sense is on. It's true, for example, that a modern chess-playing algorithm has "common sense" about when not to try to seize control of the gameboard's center; and similarly a sufficiently advanced agent would develop "common sense" about which substances would in empirical fact have which consequences on human biology, since this part is a strictly "is"-question that can be answered just by looking hard at the universe. Not *wanting* to administer poisonous substances to a human requires a prior dispreference over the consequences of administering that poison, even if the consequences are correctly forecasted. Similarly, the category "poison" could be said to really mean something like "a substance which, if administered to a human, produces low utility"; some people might classify vodka as poisonous, while others could disagree. An AI doesn't necessary have common sense about the [intended](https://arbital.com/p/6h) evaluation of the "poisonous" category, even if it has fully developed common sense about which substances have which empirical biological consequences when ingested. One of those forms of common sense can be developed by staring very intelligently at biological data, and one of them cannot. But from a human intuitive standpoint, both of these can *feel* equally like the same notion of "common sense", which might lead to a dangerous expectation that an AI gaining in one type of common sense is bound to gain in the other. # Further reading - http://lesswrong.com/lw/td/magical_categories/