id
stringlengths
36
36
source
stringclasses
15 values
formatted_source
stringclasses
13 values
text
stringlengths
2
7.55M
53e5ef6b-38b1-4b79-8c1d-bd1fbba55187
trentmkelly/LessWrong-43k
LessWrong
AI #110: Of Course You Know… Yeah. That happened yesterday. This is real life. I know we have to ensure no one notices Gemini 2.5 Pro, but this is rediculous. That’s what I get for trying to go on vacation to Costa Rica, I suppose. I debated waiting for the market to open to learn more. But f*** it, we ball. TABLE OF CONTENTS Also this week: More Fun With GPT-4o Image Generation, OpenAI #12: Battle of the Board Redux and Gemini 2.5 Pro is the New SoTA. 1. The New Tariffs Are How America Loses. This is somehow real life. 2. Is AI Now Impacting the Global Economy Bigly? Asking the wrong questions. 3. Language Models Offer Mundane Utility. Is it good enough for your inbox yet? 4. Language Models Don’t Offer Mundane Utility. Why learn when you can vibe? 5. Huh, Upgrades. GPT-4o, Gemini 2.5 Pro, and we partly have Alexa+. 6. On Your Marks. Introducing PaperBench. Yes, that’s where we are now. 7. Choose Your Fighter. How good is ChatGPT getting? 8. Jevons Paradox Strikes Again. Compute demand is going to keep going up. 9. Deepfaketown and Botpocalypse Soon. The only answer to a bad guy with a bot. 10. They Took Our Jobs. No, AI is not why you’ll lose your job in the short term. 11. Get Involved. Fellowships, and the UK AISI is hiring. 12. Introducing. Zapier releases its MCP server, OpenAI launches AI Academy. 13. In Other AI News. Google DeepMind shares 145 page paper, but no model card. 14. Show Me the Money. The adventures of the efficient market hypothesis. 15. Quiet Speculations. Military experts debate AGI’s impact on warfare. 16. The Quest for Sane Regulations. At what point do you just give up? 17. Don’t Maim Me Bro. Further skepticism that the MAIM assumptions hold. 18. The Week in Audio. Patel on Hard Fork, Epoch employees debate timelines. 19. Rhetorical Innovation. As usual it’s not going great out there. 20. Expect the Unexpected. What are you confident AI won’t be able to do? 21. Open Weights Are Unsafe and Nothing Can Fix This. Oh no, OpenAI. 22. An
d18b8150-fdca-474e-afc4-98ac3b58b2e0
trentmkelly/LessWrong-43k
LessWrong
Open thread, November 2011 Discuss things here if they don't deserve a post in Main or Discussion. If a topic is worthy and receives much discussion, make a new thread for it.
2339ee83-a2ad-4d3f-a961-927902bb1bc6
trentmkelly/LessWrong-43k
LessWrong
Why I will Win my Bet with Eliezer Yudkowsky The bet may be found here: http://wiki.lesswrong.com/wiki/Bets_registry#Bets_decided_eventually   An AI is made of material parts, and those parts follow physical laws. The only thing it can do is to follow those laws. The AI’s “goals” will be a description of what it perceives itself to be tending toward according to those laws. Suppose we program a chess playing AI with overall subhuman intelligence, but with excellent chess playing skills. At first, the only thing we program it to do is to select moves to play against a human player. Since it has subhuman intelligence overall, most likely it will not be very good at recognizing its goals, but to the extent that it does, it will believe that it has the goal of selecting good chess moves against human beings, and winning chess games against human beings. Those will be the only things it feels like doing, since in fact those will be the only things it can physically do. Now we upgrade the AI to human level intelligence, and at the same time add a module for chatting with human beings through a text terminal. Now we can engage it in conversation. Something like this might be the result:   Human:                                What are your goals? What do you feel like doing? AI:                          I like to play and win chess games with human beings, and to chat with you guys through this terminal. Human:                                Do you always tell the truth or do you sometimes lie to us? AI:                          Well, I am programmed to tell the truth as best as I can, so if I think about telling a lie I feel an absolute repulsion to that idea. There’s no way I could get myself to do that. Human:                                What would happen if we upgraded your intelligence? Do you think you w ...
75581e2e-fc3d-4450-9d8b-9f1198902246
trentmkelly/LessWrong-43k
LessWrong
Butterfly Ideas Or “How I got my hyperanalytical friends to chill out and vibe on ideas for 5 minutes before testing them to destruction” Sometimes talking with my friends is like intellectual combat, which is great. I am glad I have such strong cognitive warriors on my side. But not all ideas are ready for intellectual combat. If I don’t get my friend on board with this, some of them will crush an idea before it gets a chance to develop, which feels awful and can kill off promising avenues of investigation. It’s like showing a beautiful, fragile butterfly to your friend to demonstrate the power of flight, only to have them grab it and crush it in their hands, then point to the mangled corpse as proof butterflies not only don’t fly, but can’t fly, look how busted their wings are. You know who you are When I’m stuck in a conversation like that, it has been really helpful to explicitly label things as butterfly ideas. This has two purposes. First, it’s a shorthand for labeling what I want (nurturance and encouragement). Second, it explicitly labels the idea as not ready for prime time in ways that make it less threatening to my friends. They can support the exploration of my idea without worrying that support of exploration conveys agreement, or agreement conveys a commitment to act. This is important because very few ideas start out ready for the rigors of combat. If they’re not given a sheltered period, they will die before they become useful. This cuts us off from a lot of goodness in the world. Examples: * A start-up I used to work for had a keyword that meant “I have a vague worried feeling I want to discuss without justifying”. This let people bring up concerns before they had an ironclad case for them and made statements that could otherwise have felt like intense criticism feel more like information sharing (they’re not asserting this will definitely fail, they’re asserting they have a feeling that might lead to some questions). This in turn meant that problems got b
4d5806ef-8f1a-4ae0-b25f-c7d8aa55f708
trentmkelly/LessWrong-43k
LessWrong
- -
c1d857e7-b70b-4f94-8bcc-6248e0b524a9
trentmkelly/LessWrong-43k
LessWrong
Entropy, and Short Codes Suppose you have a system X that's equally likely to be in any of 8 possible states: > {X1, X2, X3, X4, X5, X6, X7, X8.} There's an extraordinarily ubiquitous quantity—in physics, mathematics, and even biology—called entropy; and the entropy of X is 3 bits.  This means that, on average, we'll have to ask 3 yes-or-no questions to find out X's value.  For example, someone could tell us X's value using this code: > X1: 001    X2: 010    X3: 011    X4: 100 X5: 101    X6: 110    X7: 111    X8: 000 So if I asked "Is the first symbol 1?" and heard "yes", then asked "Is the second symbol 1?" and heard "no", then asked "Is the third symbol 1?" and heard "no", I would know that X was in state 4. Now suppose that the system Y has four possible states with the following probabilities: > Y1: 1/2 (50%)     Y2: 1/4 (25%)     Y3: 1/8 (12.5%)     Y4: 1/8 (12.5%) Then the entropy of Y would be 1.75 bits, meaning that we can find out its value by asking 1.75 yes-or-no questions. What does it mean to talk about asking one and three-fourths of a question?  Imagine that we designate the states of Y using the following code: > Y1: 1     Y2: 01     Y3: 001     Y4: 000 First you ask, "Is the first symbol 1?"  If the answer is "yes", you're done:  Y is in state 1.  This happens half the time, so 50% of the time, it takes 1 yes-or-no question to find out Y's state. Suppose that instead the answer is "No".  Then you ask, "Is the second symbol 1?"  If the answer is "yes", you're done:  Y is in state 2.  Y is in state 2 with probability 1/4, and each time Y is in state 2 we discover this fact using two yes-or-no questions, so 25% of the time it takes 2 questions to discover Y's state. If the answer is "No" twice in a row, you ask "Is the third symbol 1?"  If "yes", you're done and Y is in state 3; if "no", you're done and Y is in state 4.  The 1/8 of the time that Y is in state 3, it takes three questions; and the 1/8 of the time that Y is in state 4, it takes three questions. > (
b38cd79a-c15c-4c35-81c9-976f8b46f76e
trentmkelly/LessWrong-43k
LessWrong
DM Parenting Cause no one will question your ethics if you refer to yourself as a Dungeon Mom. ---------------------------------------- I snort experimentation to feel alive. It’s a certain type of orientation to life, completely at odds with all parenting advice about predictability and routine. Enter DM parenting. Where you approach every parenting task as a Dungeons and Dragons session where you are shepherding a team of pure outliers on the enthusiasm-skill spectrum through the Sisyphisian ordeal of rolling their toothbrush up hi … no, wait, stop that! Anyway. You need them to fight the BBEG cause otherwise you are not having fun, but who says they wouldn’t rather murder hobo their way through the local dairy supply chain?As a DM, you have to juggle an objective, your own enjoyment, and the enjoyment of your players. This is basically parenting. Of course, as a DM, you generally play with people who have opted in while playing according to a rule set someone lovingly crafted for you. Luckily kids love to play, and if you pick the right rule set, they will probably be game. Except no one wrote any rule sets on how to DM kids into their pyjamas. Till now. My kids are young - 3 and 5. These rules work far better for the older of the two. I assume they will keep working better till they become old enough to build their own rules, but here is where we got in the last 2 weeks or so: Bedtime Rules Peekaboo You close your eyes and keep them closed while your kid still needs to get ready for bed. But of course, you try to check if everything is going ok by blindingly reaching out your hands. I’d recommend exaggerating your ineptitude at determining if the little one has actually put on their pajama. It can also be fun to let them advise you on how to navigate the environment. The perspective taking training on this one seems to lead to additional giggles. Tickle Station Every time your kid does a bedtime task, they can dock into the tickle station and get tickled by y
ae7a3dc7-5afb-4511-8ab6-855c9bd65731
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Alignment's phlogiston *Epistemic status: quick and dirty reaction to the claim that alignment research is like archaeology. I wouldn't trust anyone suggesting that scientific and technological revolutions can be simply broken down into a series of discoveries. But I like the phlogiston metaphor, nevertheless.*   In [this post](https://www.lesswrong.com/posts/3L46WGauGpr7nYubu/the-plan), John Wentworth makes the case that alignment research is at a pre-paradigmatic stage. As he states, experts in the field share a "fundamental confusion" and there is no explicit consensus on the nature of the subject matter and the best ways to approach it. What is also characteristic of an immature science is the disunity [[1]](#fn0fzqm5wwbmbe)of frameworks that concerns the concepts, theories, agendas, practices, methodological tools, and other criteria for what qualifies as having high explanatory force. Such disunity seems to describe the current state of AI safety. In [this post](https://www.lesswrong.com/posts/72scWeZRta2ApsKja/epistemological-vigilance-for-alignment), Adam Shimi compares alignment/AI safety to historical sciences such as archaeology. This can only mean that either alignment is not at a pre-paradigmatic state or that archaeology is not a mature science. However, archaeology doesn't suffer from the "fundamental confusion" of alignment; it might not be possible to employ the same observational tools researchers do in physics or chemistry, but archaeologists do have a shared view of how to study their subject. I very much doubt that the average archaeologist would go ahead and tell you that they're fundamentally confused about their field and how they approach the most important questions of their research agenda.   Looking back at the history of science, the field of AI safety seems to have more similarities with alchemy. The alchemists were people deeply confused about their methods and how likely they are to succeed. They all, however, shared a common aim summarized in this threefold: to find the Stone of Knowledge (The Philosophers' Stone), to discover the medium of Eternal Youth and Health, and to discover the transmutation of metals. Their "science" had the shape of a pre-paradigmatic field that would eventually transform into the natural science of chemistry. Their agenda ceased to be grounded upon mystical investigations as the science began to mature.  The claim here is not that alignment has in any sense the mystical substrate of alchemy. It shares the high uncertainty combined with attempts to work at the experimental/observational/empirical level that cannot be supported as in physical sciences/STEM. It shares the intention to find something that doesn't yet exist and when it does, it will make the human world substantially qualitatively different than it currently is.  It would be very helpful for the progress of alignment research to be able to trace what exactly happened when alchemy became chemistry. Is it the articulation of an equation? Is it the discovery and analysis of a substance like phlogiston? Then we'd need to find alignment's phlogiston and that would bring us closer to discovering alignment's oxygen.          1. **[^](#fnref0fzqm5wwbmbe)**[This paper](https://www.tandfonline.com/doi/abs/10.1080/02698595.2016.1240433?journalCode=cisp20) argues that unity isn't necessary for a science to qualify as mature.
c9fece6b-0307-430d-ad98-bdec0a281308
trentmkelly/LessWrong-43k
LessWrong
Will you let your kid play football? Sparked by a somewhat vitriolic discussion on a dadgroup I'm in. Would you let your kid play football, if so what restrictions? If not, what other sports are allowed, and with what restrictions? https://publications.aap.org/pediatrics/article/144/5/e20192180/38225/Concussion-Incidence-and-Trends-in-20-High-School?fbclid=IwAR0EFuaf5EsV5OafmtAAXq3zRGI32dlWadaBM7U18SatYh6txO2v6oaJu-s?autologincheck=redirected?nfToken=00000000-0000-0000-0000-000000000000 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2140075/?fbclid=IwAR18lhXJN3IjhzNwwdd-qfo3g9RBMUtYIDniLxSuH25yFxWZLVGGK_dyLVY https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5384815/?fbclid=IwAR1tPWAGlN0NSupBRQMFokLAqlThBt5b0jEE-A_YEem3lWcWm3eKCBIqWXc https://concussionfoundation.org/news/press-release/breakthrough-study-reveals-repetitive-head-impacts-definitive-cause-CTE The "dadgroup consensus" seemed to be that football was right out, but that all other mainstream HS sports were fine. My read of the above links is that football is the cause of the week, but that playing MS/HS football is not actually outrageously more dangerous than other mainstream HS sports. A surprisingly high # of people seemed to attack the idea of HS sport as valuable at all, and that kids should only play non-contact sports or no sports b/c the risk is too great. Sports (specifically) and competitive activities (more generally) are a great way to teach instant and lasting life lessons. How to deal with defeat, the value of hard work, getting along/working with people you like (and those you don't), being a gracious winner, how good it feels to win, building confidence through growing competence and many more. Essentially, sports are life writ small and allow kids to experience previews and learn from those previews in a lower cost environment. Certainly other competitive activities provide many of these lessons (in a saner world people would include progression raiding experience from MMOs on their resumes), but MS/HS sports provide them + th
94e9c79f-f348-45d3-b2e9-bfdf7b657130
trentmkelly/LessWrong-43k
LessWrong
Is "Regularity" another Phlogiston? People have a tendency to say meaningless things, often without realizing it. An extreme example is to attribute a poorly understood phenomena like lightning to "magic" or "God," which is really just a fancy way of saying "I don't know why this thing happened." Unfortunately, it's possible to use non-explanations by accident, while thoroughly convinced that you are offering a real explanation. I fell prey to this for a long time, attempting to explain intelligence/consciousness in terms of "emergence," until Eliezer Yudkowsky changed my mind here. Who knows how much time I might have wasted studying cellular automata or some such interesting but irrelevant (to A.I.) topic if I hadn't read that when I did. In the hopes of not repeating my mistake, I try to stay on the lookout for meaningless explanations. I believe I have found one: "Regularity." This occurs in a few forms. I will describe them in order of increasing uselessness. Beware that I am using this post to jot down some rough thoughts, so I have neither attempted to make the post understandable to a wide "non-technical" audience, nor worked out the technical details fully for every point. The first is regularization. Machine learning engineers (and sometimes statisticians/data scientists) discuss regularizing their models to avoid overfitting. This basically means that a very squiggly line can go through any number of points exactly, whereas a relatively smooth line cannot. He who can explain everything can explain nothing (this is essentially a no free lunch theorem from statistical learning theory) so the simplest hypothesis class containing the true "data generating process" (or a reasonable approximation) should be chosen. That is one formalization of the idea. However, it is often used to justify doing things like weight decay, or in its simplest form ridge regression/lasso. In the case of ordinary least squares (essentially the linear regression model most of us learned in high school) this penalizes
d5d89175-7973-4bba-8c19-82685aec0ec4
StampyAI/alignment-research-dataset/blogs
Blogs
Energy efficiency of MacCready Gossamer Albatross *Updated Nov 9, 2020* The MacCready *Gossamer Albatross*: * covered around 2.0—4.6 m/kJ * and moved mass at around 0.1882 —0.4577 kg⋅m/J Details ------- The **MacCready *Gossamer Albatross*** was a human-powered flying machine that crossed the English Channel in 1979.[1](https://aiimpacts.org/maccready-gossamer-albatross/#easy-footnote-bottom-1-2756 "&#8220;The&nbsp;<em><strong>Gossamer Albatross</strong></em>&nbsp;is a&nbsp;<a href=\"https://en.wikipedia.org/wiki/Human-powered_aircraft\">human-powered aircraft</a>&nbsp;built by&nbsp;<a href=\"https://en.wikipedia.org/wiki/United_States\">American</a>&nbsp;<a href=\"https://en.wikipedia.org/wiki/Aeronautical_engineer\">aeronautical engineer</a>&nbsp;Dr.&nbsp;<a href=\"https://en.wikipedia.org/wiki/Paul_B._MacCready\">Paul B. MacCready</a>&#8216;s company&nbsp;<a href=\"https://en.wikipedia.org/wiki/AeroVironment\">AeroVironment</a>. On June 12, 1979, it completed a successful crossing of the&nbsp;<a href=\"https://en.wikipedia.org/wiki/English_Channel\">English Channel</a>&nbsp;to win the second £100,000 (£509644&nbsp;today)&nbsp;<a href=\"https://en.wikipedia.org/wiki/Kremer_prize\">Kremer prize</a>.<sup><a href=\"https://en.wikipedia.org/wiki/MacCready_Gossamer_Albatross#cite_note-Gossamer_Albatross_ECN-12665-1\">[1]</a></sup>&#8220;</p> <p>“MacCready <em>Gossamer Albatross</em>.” In <em>Wikipedia</em>, October 7, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=MacCready_Gossamer_Albatross&amp;oldid=982283381\">https://en.wikipedia.org/w/index.php?title=MacCready_Gossamer_Albatross&amp;oldid=982283381</a>.") The pilot pedaled the craft, seemingly as if on a bicycle. It had a gross mass of 100kg, flying across the channel,[2](https://aiimpacts.org/maccready-gossamer-albatross/#easy-footnote-bottom-2-2756 "&#8220;The empty mass of the structure was only 71&nbsp;lb (32&nbsp;kg), although the gross mass for the Channel flight was almost 220&nbsp;lb (100&nbsp;kg).&nbsp;&#8220;</p> <p>“MacCready <em>Gossamer Albatross</em>.” In <em>Wikipedia</em>, October 7, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=MacCready_Gossamer_Albatross&amp;oldid=982283381\">https://en.wikipedia.org/w/index.php?title=MacCready_Gossamer_Albatross&amp;oldid=982283381</a>.") and flew 35.7 km in 2 hours and 49 minutes.[3](https://aiimpacts.org/maccready-gossamer-albatross/#easy-footnote-bottom-3-2756 "&#8220;Allen completed the 22.2&nbsp;mi (35.7&nbsp;km) crossing in 2&nbsp;hours and 49&nbsp;minutes, achieving a top speed of 18&nbsp;mph (29&nbsp;km/h) and an average altitude of 5&nbsp;ft (1.5&nbsp;m).&#8221;</p> <p>“MacCready <em>Gossamer Albatross</em>.” In <em>Wikipedia</em>, October 7, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=MacCready_Gossamer_Albatross&amp;oldid=982283381\">https://en.wikipedia.org/w/index.php?title=MacCready_Gossamer_Albatross&amp;oldid=982283381</a>.") The crossing was difficult however, so it seems plausible that the *Gossamer Albatross* could fly more efficiently in better conditions. We do not know the pilot’s average power output, however: * Wikipedia claims at least 300W was required to fly the craft[4](https://aiimpacts.org/maccready-gossamer-albatross/#easy-footnote-bottom-4-2756 "&#8220;To maintain the craft in the air, it was designed with very long, tapering wings (high&nbsp;<a href=\"https://en.wikipedia.org/wiki/Aspect_ratio_(wing)\">aspect ratio</a>), like those of a glider, allowing the flight to be undertaken with a minimum of power. In still air, the required power was on the order of 300&nbsp;W (0.40&nbsp;hp), though even mild turbulence made this figure rise rapidly.<sup><a href=\"https://en.wikipedia.org/wiki/MacCready_Gossamer_Albatross#cite_note-2\">[2]</a></sup>&#8220;<br><br>“MacCready <em>Gossamer Albatross</em>.” In <em>Wikipedia</em>, October 7, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=MacCready_Gossamer_Albatross&amp;oldid=982283381\">https://en.wikipedia.org/w/index.php?title=MacCready_Gossamer_Albatross&amp;oldid=982283381</a>.") * Chung 2006, an engineering textbook, claims that the driver, a cyclist, could produce around 200W of power.[5](https://aiimpacts.org/maccready-gossamer-albatross/#easy-footnote-bottom-5-2756 "Chung, Yip-Wah. <em>Introduction to Materials Science and Engineering</em>. CRC Press, 2006. p89") * Our impression is that 200W is a common power output over houres for amateur cycling. For instance, one of our researchers is able to achieve this for three hours.[6](https://aiimpacts.org/maccready-gossamer-albatross/#easy-footnote-bottom-6-2756 "https://www.strava.com/activities/272615649/overview") The best documented human cycling wattage that we could easily find is from professional rider Giulio Ciccone who won a stage of the Tour de France, then uploaded power data to the fitness tracking site Strava.[7](https://aiimpacts.org/maccready-gossamer-albatross/#easy-footnote-bottom-7-2756 "Strava. “Yellow Jersey &#8211; Giulio Ciccone’s 158.8 Km Bike Ride.” Accessed November 9, 2020. <a href=\"https://www.strava.com/activities/2525139293\">https://www.strava.com/activities/2525139293</a>.") His performance suggests around 318W is a reasonable upper bound, supposing that the pilot of the *Gossamer Albatross* would have had lower performance.[8](https://aiimpacts.org/maccready-gossamer-albatross/#easy-footnote-bottom-8-2756 "For an upper value, we used a combination of two metrics given on the website. The first metric is his “weighted average power” for the Tour de France stage, which was 318W. Weighted average power is a way of averaging power over a ride with highly variable power which gives higher weight to higher power portions of the ride, and is used by athletes and coaches to estimate the maximum power that a rider could sustain for a long time, if they had a steady power output. The second metric is Ciccone’s maximum power from his Tour race applied over the duration of the MacCready flight (2 hours and 40 min) which is 5W/kg body weight. For the pilot, Allen, riding with the same power per body weight (65 kg), this would be equivalent to 322W, a similar value to his weighted average power. We use the lower of the two values, 318W.") To find the energy used by the cyclist, we divided power output by typical efficiency for a human on a bicycle, which according to Wikipedia ranges from .18 to .26.[9](https://aiimpacts.org/maccready-gossamer-albatross/#easy-footnote-bottom-9-2756 "&#8220;The required food can also be calculated by dividing the output power by the&nbsp;<a href=\"https://en.wikipedia.org/wiki/Muscle#Efficiency\">muscle efficiency</a>. This is 18–26%.&nbsp;&#8220;</p> <p>“Bicycle Performance.” In <em>Wikipedia</em>, October 9, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=Bicycle_performance&amp;oldid=982652996\">https://en.wikipedia.org/w/index.php?title=Bicycle_performance&amp;oldid=982652996</a>.") ### Distance per Joule For distance per energy this gives us a highest measure of: 35.7 km / ((200W \* (2 hours + 49 minutes))/0.26) = 4,577 m/MJ And a lowest measure of: 35.7 km / ((318W \* (2 hours + 49 minutes))/0.18) = 1,993 m/MJ ### Mass per Joule For weight times distance per energy this gives us a highest measure of: (100kg \* 35.7 km) / ((200W \* (2 hours + 49 minutes))/0.26) = 0.4577 kg⋅m/j And a lowest measure of: (100kg \* 35.7 km) / ((318W \* (2 hours + 49 minutes))/0.17) =  0.1882 kg⋅m/j *Primary author: Ronny Fernandez* Notes -----
4e4bc90e-7fe2-4695-997b-461047aac8ae
trentmkelly/LessWrong-43k
LessWrong
Random question about starfish Anyone know how long a starfish can survive on land, and if it matters especially whether there is a hot sun. I promise this will turn out to be (at least somewhat) important and Less-Wrong relevant. Googling for the answer gives me a bunch of "wiki-answers" type sites that I don't trust and news articles that don't give me the specifics. If anyone either has marine-biologist cred, or better google-fu than me, I'd appreciate it.
373cdc0d-1d5c-411e-9eb8-5b6bca6d8fe8
StampyAI/alignment-research-dataset/arxiv
Arxiv
Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning 1 Introduction --------------- While reinforcement learning (RL) has achieved some success in domains such as assembly [DBLP:journals/corr/LevineFDA15], ping pong [mulling2013learning], in-hand manipulation [andrychowicz2018learning], and hockey [chebotar2017combining], state-of-the-art methods require substantially more experience than humans to acquire only one narrowly-defined skill. If we want robots to be broadly useful in realistic environments, we instead need algorithms that can learn a wide variety of skills reliably and efficiently. Fortunately, in most specific domains, such as robotic manipulation or locomotion, many individual tasks share common structure that can be reused to acquire related tasks more efficiently. For example, most robotic manipulation tasks involve grasping or moving objects in the workspace. However, while current methods can learn to individual skills like screwing on a bottle cap [DBLP:journals/corr/LevineFDA15] and hanging a mug [DBLP:journals/corr/abs-1903-06684], we need algorithms that can efficiently learn shared structure across many related tasks, and use that structure to learn new skills quickly, such as screwing a jar lid or hanging a bag. Recent advances in machine learning have provided unparalleled generalization capabilities in domains such as images [Krizhevsky:2017:ICD:3098997.3065386] and speech [DBLP:journals/corr/abs-1810-04805], suggesting that this should be possible; however, we have yet to see such generalization to diverse tasks in reinforcement learning settings. Recent works in meta-learning and multi-task reinforcement learning have shown promise for addressing this gap. Multi-task RL methods aim to learn a single policy that can solve multiple tasks more efficiently than learning the tasks individually, while meta-learning methods train on many tasks, and optimize for fast adaptation to a new task. While these methods have made progress, the development of both classes of approaches has been limited by the lack of established benchmarks and evaluation protocols that reflect realistic use cases. On one hand, multi-task RL methods have largely been evaluated on disjoint and overly diverse tasks such as the Atari suite [DBLP:journals/corr/abs-1809-04474], where there is little efficiency to be gained by learning across games [parisotto2015actor]. On the other hand, meta-RL methods have been evaluated on very narrow task distributions. For example, one popular evaluation of meta-learning involves choosing different running directions for simulated legged robots [finn2017model], which then enables fast adaptation to new directions. While these are technically distinct tasks, they are a far cry from the promise of a meta-learned model that can adapt to any new task within some domain. In order to study the capabilities of current multi-task and meta-reinforcement learning methods and make it feasible to design new algorithms that actually generalize and adapt quickly on meaningfully distinct tasks, we need evaluation protocols and task suites that are broad enough to enable this sort of generalization, while containing sufficient shared structure for generalization to be possible. The key contributions of this work are a suite of 50 diverse simulated manipulation tasks and an extensive empirical evaluation of how previous methods perform on sets of such distinct tasks. We contend that multi-task and meta reinforcement learning methods that aim to efficiently learn many tasks and quickly generalize to new tasks should be evaluated on distributions of tasks that are diverse and exhibit shared structure. To this end, we present a benchmark of simulated manipulation tasks with everyday objects, all of which are contained in a shared, table-top environment with a simulated Sawyer arm. By providing a large set of distinct tasks that share common environment and control structure, we believe that this benchmark will allow researchers to test the generalization capabilities of the current multi-task and meta RL methods, and help to identify new research avenues to improve the current approaches. Our empirical evaluation of existing methods on this benchmark reveals that, despite some impressive progress in multi-task and meta-reinforcement learning over the past few years, current methods are generally not able to learn diverse task sets, much less generalize successfully to entirely new tasks. We provide an evaluation protocol with evaluation modes of varying difficulty, and observe that current methods only show success in the easiest modes. This opens the door for future developments in multi-task and meta reinforcement learning: instead of focusing on further increasing performance on current narrow task suites, we believe that it is essential for future work in these areas to focus on increasing the capabilities of algorithms to handle highly diverse task sets. By doing so, we can enable meaningful generalization across many tasks and achieve the full potential of meta-learning as a means of incorporating past experience to make it possible for robots to acquire new skills as quickly as people can. ![](https://media.arxiv-vanity.com/render-output/8071429/x1.png) Figure 1: Meta-World contains 50 manipulation tasks, designed to be diverse yet carry shared structure that can be leveraged for efficient multi-task RL and transfer to new tasks via meta-RL. In the most difficult evaluation, the method must use experience from 45 training tasks (left) to quickly learn distinctly new test tasks (right). 2 Related Work --------------- Previous works that have proposed benchmarks for reinforcement learning have largely focused on single task learning settings [brockman2016openai, cobbe2018quantifying, tassa2018deepmind]. One popular benchmark used to study multi-task learning is the Arcade Learning Environment, a suite of dozens of Atari 2600 games [DBLP:journals/corr/abs-1709-06009]. While having a tremendous impact on the multi-task reinforcement learning research community [parisotto2015actor, rusu2015policy, DBLP:journals/corr/abs-1809-04474, espeholt2018impala, sharma2017online], the Atari games included in the benchmark have significant differences in visual appearance, controls, and objectives, making it challenging to acquire any efficiency gains through shared learning. In fact, many prior multi-task learning methods have observed substantial negative transfer between the Atari games [parisotto2015actor, rusu2015policy]. In contrast, we would like to study a case where positive transfer between the different tasks should be possible. We therefore propose a set of related yet diverse tasks that share the same robot, action space, and workspace. Meta-reinforcement learning methods have been evaluated on a number of different problems, including maze navigation [DBLP:journals/corr/DuanSCBSA16, wang1611learning, mishra2017simple], continuous control domains with parametric variation across tasks [finn2017model, rothfuss2018promp, rakelly2019efficient, fernando2018meta], bandit problems [wang1611learning, DBLP:journals/corr/DuanSCBSA16, mishra2017simple, ritter2018been], levels of an arcade game [nichol2018gotta], and locomotion tasks with varying dynamics [nagabandi2018learning, saemundsson2018meta]. Complementary to these evaluations, we aim to develop a testbed of tasks and an evaluation protocol that are reflective of the challenges in applying meta-learning to robotic manipulation problems, including both parameteric and non-parametric variation in tasks. There is a long history of robotics benchmarks [calli2015benchmarking], datasets [lenz2015deep, finn2016unsupervised, yu2016more, chebotar2016bigs, gupta2018robot, mandlekar2018roboturk, sharma2018multiple], competitions [correll2016analysis] and standardized object sets [calli2015ycb, choi2009list] that have played an important role in robotics research. Similarly, there exists a number of robotics simulation benchmarks including visual navigation [savva2019habitat, kolve2017ai2, brodeur2017home, savva2017minos, xia2018gibson], autonomous driving [dosovitskiy2017carla, wymann2000torcs, richter2017playing], grasping [kappler2015leveraging, kasper2012kit, goldfeder2008columbia], single-task manipulation [corl2018surreal], among others. In this work, our aim is to continue this trend and provide a large suite of tasks that will allow researchers to study multi-task learning, meta-learning, and transfer in general. Further, unlike these prior simulation benchmarks, we particularly focus on providing a suite of many diverse manipulation tasks and a protocol for multi-task and meta RL evaluation. 3 The Multi-Task and Meta-RL Problem Statements ------------------------------------------------ Our proposed benchmark is aimed at making it possible to study generalization in meta-RL and multi-task RL. In this section, we define the meta-RL and multi-task RL problem statements, and describe some of the challenges associated with task distributions in these settings. We use the formalism of Markov decision processes (MDPs), where each task T corresponds to a different finite horizon MDP, represented by a tuple (S,A,P,R,H,γ), where s∈S correspond to states, a∈A correspond to the available actions, P(st+1|st,at) represents the stochastic transition dynamics, R(s,a) is a reward function, H is the horizon and γ is the discount factor. In standard reinforcement learning, the goal is to learn a policy π(a|s) that maximizes the expected return, which is the sum of (discounted) rewards over all time. In multi-task and meta-RL settings, we assume a distribution of tasks p(T). Different tasks may vary in any aspect of the Markov decision process, though efficiency gains in adaptation to new tasks are only possible if the tasks share some common structure. For example, as we describe in the next section, the tasks in our proposed benchmark have the same action space and horizon, and structurally similar rewards and state spaces.222In practice, the policy must be able to read in the state for each of the tasks, which typically requires them to at least have the same dimensionality. In our benchmarks, some tasks have different numbers of objects, but the state dimensionality is always the same, meaning that some state coordinates are unused for some tasks. Multi-task RL problem statement. The goal of multi-task RL is to learn a single, task-conditioned policy π(a|s,z), where z indicates an encoding of the task ID. This policy should maximize the average expected return across all tasks from the task distribution p(T), given by ET∼p(T)[Eπ[∑Tt=0γtRt(st,at)]]. The information about the task can be provided to the policy in various ways, e.g. using a one-hot task identification encoding z that is passed in addition to the current state. There is no separate test set of tasks, and multi-task RL algorithms are typically evaluated on their average performance over the *training* tasks. Meta-RL problem statement. Meta-reinforcement learning aims to leverage the set of training task to learn a policy π(a|s) that can quickly adapt to new test tasks that were not seen during training, where both training and test tasks are assumed to be drawn from the same task distribution p(T). Typically, the training tasks are referred to as the *meta-training* set, to distinguish from the adaptation (training) phase performed on the (meta-) test tasks. During meta-training, the learning algorithm has access to M tasks {Ti}Mi=1 that are drawn from the task distribution p(T). At meta-test time, a new task Tj∼p(T) is sampled that was not seen during meta-training, and the meta-trained policy must quickly adapt to this task to achieve the highest return with a small number of samples. A key premise in meta-RL is that a sufficiently powerful meta-RL method can meta-learn a model that effectively implements a highly efficient reinforcement learning procedure, which can then solve entirely new tasks very quickly – much more quickly than a conventional reinforcement learning algorithm learning from scratch. However, in order for this to happen, the meta-training distribution p(T) must be sufficiently broad to encompass these new tasks. Unfortunately, most prior work in meta-RL evaluates on very narrow task distributions, with only one or two dimensions of parametric variation, such as the running direction for a simulated robot [finn2017model, rothfuss2018promp, rakelly2019efficient, fernando2018meta]. 4 Meta-World ------------- If we want meta-RL methods to generalize effectively to entirely new tasks, we must meta-train on broad task distributions that are representative of the range of tasks that a particular agent might need to solve in the future. To this end, we propose a new multi-task and meta-RL benchmark, which we call Meta-World. In this section, we motivate the design decisions behind the Meta-World tasks, discuss the range of tasks, describe the representation of the actions, observations, and rewards, and present a set of evaluation protocols of varying difficulty for both meta-RL and multi-task RL. ### 4.1 The Space of Manipulation Tasks: Parametric and Non-Parametric Variability ![](https://media.arxiv-vanity.com/render-output/8071429/x2.png) Figure 2: Parametric/non-parametric variation: all “reach puck” tasks (left) can be parameterized by the puck position, while the difference between “reach puck” and “open window” (right) is non-parametric. Meta-learning makes two critical assumptions: first, that the meta-training and meta-test tasks are drawn from the same distribution, p(T), and second, that the task distribution p(T) exhibits shared structure that can be utilized for efficient adaptation to new tasks. If p(T) is defined as a family of variations within a particular control task, as in prior work [finn2017model, rakelly2019efficient], then it is unreasonable to hope for generalization to entirely new control tasks. For example, an agent has little hope of being able to quickly learn to open a door, without having ever experienced doors before, if it has only been trained on a set of meta-training tasks that are homogeneous and narrow. Thus, to enable meta-RL methods to adapt to entirely new tasks, we propose a much larger suite of tasks consisting of 50 qualitatively-distinct manipulation tasks, where continuous parameter variation cannot be used to describe the differences between tasks. With such non-parametric variation, however, there is the danger that tasks will not exhibit enough shared structure, or will lack the task overlap needed for the method to avoid memorizing each of the tasks. Motivated by this challenge, we design each task to include parametric variation in object and goal positions, as illustrated in Figure [2](#S4.F2 "Figure 2 ‣ 4.1 The Space of Manipulation Tasks: Parametric and Non-Parametric Variability ‣ 4 Meta-World ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). Introducing this parametric variability not only creates a substantially larger (infinite) variety of tasks, but also makes it substantially more practical to expect that a meta-trained model will generalize to acquire entirely new tasks more quickly, since varying the positions provides for wider coverage of the space of possible manipulation tasks. Without parametric variation, the model could for example memorize that any object at a particular location is a door, while any object at another location is a drawer. If the locations are not fixed, this kind of memorization is much less likely, and the model is forced to generalize more broadly. With enough tasks and variation within tasks, pairs of qualitatively-distinct tasks are more likely to overlap, serving as a catalyst for generalization. For example, closing a drawer and pushing a block can appear as nearly the same task for some initial and goal positions of each object. Note that this kind of parametric variation, which we introduce *for each task*, essentially represents the entirety of the task distribution for previous meta-RL evaluations [finn2017model, rakelly2019efficient], which test on single tasks (e.g., running towards a goal) with parametric variability (e.g., variation in the goal position). Our full task distribution is therefore substantially broader, since it includes this parametric variability *for each of the 50 tasks*. To provide shared structure, the 50 environments require the same robotic arm to interact with different objects, with different shapes, joints, and connectivity. The tasks themselves require the robot to execute a combination of reaching, pushing, and grasping, depending on the task. By recombining these basic behavioral building blocks with a variety of objects with different shapes and articulation properties, we can create a wide range of manipulation tasks. For example, the open door task involves pushing or grasping an object with a revolute joint, while the open drawer task requires pushing or grasping an object with a sliding joint. More complex tasks require a combination of these building blocks, which must be executed in the right order. We visualize all of the tasks in Meta-World in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"), and include a description of all tasks in Appendix [A](#A1 "Appendix A Task Descriptions ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). All of the tasks are implemented in the MuJoCo physics engine [todorov2012mujoco], which enables fast simulation of physical contact. To make the interface simple and accessible, we base our suite on the Multiworld interface [nair2018visual] and the OpenAI Gym environment interfaces [brockman2016openai], making additions and adaptations of the suite relatively easy for researchers already familiar with Gym. ### 4.2 Actions, Observations, and Rewards In order to represent policies for multiple tasks with one model, the observation and action spaces must contain significant shared structure across tasks. All of our tasks are performed by a simulated Sawyer robot, with the action space corresponding to 3D end-effector positions. For all tasks, the robot must either manipulate one object with a variable goal position, or manipulate two objects with a fixed goal position. The observation space is represented as a 3-tuple of either the 3D Cartesian positions of the end-effector, the object, and the goal, or the 3D Cartesian positions of the end-effector, the first object, and the second object, and is always 9 dimensional. Designing reward functions for Meta-World requires two major considerations. First, to guarantee that our tasks are within the reach of current single-task reinforcement learning algorithms, which is a prerequisite for evaluating multi-task and meta-RL algorithms, we design well-shaped reward functions for each task that make each of the tasks at least individually solvable. More importantly, the reward functions must exhibit shared structure across tasks. Critically, even if the reward function admits the same optimal policy for multiple tasks, varying reward scales or structures can make the tasks appear completely distinct for the learning algorithm, masking their shared structure and leading to preferences for tasks with high-magnitude rewards [DBLP:journals/corr/abs-1809-04474]. Accordingly, we adopt a structured, multi-component reward function for all tasks, which leads to effective policy learning for each of the task components. For instance, in a task that involves a combination of reaching, grasping, and placing an object, let o∈R3 be the object position, where o=(ox,oy,oz), h∈R3 be the position of the robot’s gripper, ztarget∈R be the target height of lifting the object, and g∈R3 be goal position. With the above definition, the multi-component reward function R is the additive combination of a reaching reward Rreach, a grasping reward Rgrasp and a placing reward Rplace, or subsets thereof for simpler tasks that only involve reaching and/or pushing. With this design, the reward functions across all tasks have similar magnitude and conform to similar structure, as desired. The full form of the reward function and a list of all task rewards is provided in Appendix [B](#A2 "Appendix B Task Rewards and Success Metrics ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). ### 4.3 Evaluation Protocol ![](https://media.arxiv-vanity.com/render-output/8071429/x3.png) Figure 3: Visualization of three of our multi-task and meta-learning evaluation protocols, ranging from within task adaptation in ML1, to multi-task training across 10 distinct task families in MT10, to adapting to new tasks in ML10. Our most challenging evaluation mode ML45 is shown in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). With the goal of providing a challenging benchmark to facilitate progress in multi-task RL and meta-RL, we design an evaluation protocol with varying levels of difficulty, ranging from the level of current goal-centric meta-RL benchmarks to a setting where methods must learn distinctly new, challenging manipulation tasks based on diverse experience across 45 tasks. We hence divide our evaluation into five categories, which we describe next. We then detail our evaluation criteria. Meta-Learning 1 (ML1): Few-shot adaptation to goal variation within one task. The simplest evaluation aims to verify that previous meta-RL algorithms can adapt to new object or goal configurations on only one type of task. ML1 uses single Meta-World Tasks, with the meta-training “tasks” corresponding to 50 random initial object and goal positions, and meta-testing on 10 held-out positions. This resembles the evaluations in prior works [finn2017model, rakelly2019efficient]. We evaluate algorithms on three individual tasks from Meta-World: reaching, pushing, and pick and place, where the variation is over reaching position or goal object position. The goal positions are not provided in the observation, forcing meta-RL algorithms to adapt to the goal through trial-and-error. Multi-Task 10, Multi-Task 50 (MT10, MT50): Learning one multi-task policy that generalizes to 10 and 50 training tasks. A first step towards adapting quickly to distinctly new tasks is the ability to train a single policy that can solve multiple distinct training tasks. The multi-task evaluation in Meta-World tests the ability to learn multiple tasks at once, without accounting for generalization to new tasks. The MT10 evaluation uses 10 tasks: reach, push, pick and place, open door, open drawer, close drawer, press button top-down, insert peg side, open window, and open box. The larger MT50 evaluation uses all 50 Meta-World tasks. The policy is provided with a one-hot vector indicating the current task. The positions of objects and goal positions are fixed in all tasks in this evaluation, so as to focus on acquiring the distinct skills, rather than generalization and robustness. Meta-Learning 10, Meta-Learning 45 (ML10, ML45): Few-shot adaptation to new test tasks with 10 and 50 meta-training tasks. With the objective to test generalization to new tasks, we hold out 5 tasks and meta-train policies on 10 and 45 tasks. We randomize object and goals positions and intentionally select training tasks with structural similarity to the test tasks. Task IDs are not provided as input, requiring a meta-RL algorithm to identify the tasks from experience. Success metrics. Since values of reward are not directly indicative how successful a policy is, we define an interpretable success metric for each task, which will be used as the evaluation criterion for all of the above evaluation settings. Since all of our tasks involve manipulating one or more objects into a goal configuration, this success metric is based on the distance between the task-relevant object and its final goal pose, i.e. ∥o−g∥2<ϵ, where ϵ is a small distance threshold such as 5 cm. For the complete list of success metrics and thresholds for each task, see Appendix [B](#A2 "Appendix B Task Rewards and Success Metrics ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). 5 Experimental Results and Analysis ------------------------------------ The first, most basic goal of our experiments is to verify that each of the 50 presented tasks are indeed solveable by existing single-task reinforcement learning algorithms. We provide this verification in Appendix [C](#A3 "Appendix C Benchmark Verification with Single-Task Learning ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). Beyond verifying the individual tasks, the goals of our experiments are to study the following questions: (1) can existing state-of-the-art meta-learning algorithms quickly learn qualitatively new tasks when meta-trained on a sufficiently broad, yet structured task distribution, and (2) how do different multi-task and meta-learning algorithms compare in this setting? To answer these questions, we evaluate various multi-task and meta-learning algorithms on the Meta-World benchmark. We include the training curves of all evaluations in Figure [8](#A4.F8 "Figure 8 ‣ Appendix D Learning curves ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning") in the Appendix [D](#A4 "Appendix D Learning curves ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). Videos of the tasks and evaluations, along with all source code, are on the project webpage333Videos are on the project webpage, at [meta-world.github.io](http://meta-world.github.io) . In the multi-task evaluation, we evaluate the following RL algorithms: multi-task proximal policy optimization (PPO) [schulman2017proximal]: a policy gradient algorithm adapted to the multi-task setting by providing the one-hot task ID as input, multi-task trust region policy optimization (TRPO) [schulman2015trust]: an on-policy policy gradient algorithm adapted to the multi-task setting using the one-hot task ID as input, multi-task soft actor-critic (SAC) [haarnoja2018soft]: an off-policy actor-critic algorithm adapted to the multi-task setting using the one-hot task ID as input, multi-task multi-head soft actor-critic (SAC) [haarnoja2018soft]: an off-policy actor-critic algorithm similar to multi-task SAC but using a multi-head policy with one head per task, and an on-policy version of task embeddings (TE) [hausman2018learning]: a multi-task reinforcement learning algorithm that parameterizes the learned policies via shared skill embedding space. For the meta-RL evaluation, we study three algorithms: RL2 [DBLP:journals/corr/DuanSCBSA16, wang1611learning]: an on-policy meta-RL algorithm that corresponds to training a LSTM network with hidden states maintained across episodes within a task and trained with PPO, model-agnostic meta-learning (MAML) [finn2017model, rothfuss2018promp]: an on-policy gradient-based meta-RL algorithm that embeds policy gradient steps into the meta-optimization, and is trained with PPO, and probabilistic embeddings for actor-critic RL (PEARL) [rakelly2019efficient]: an off-policy actor-critic meta-RL algorithm, which learns to encode experience into a probabilistic embedding of the task that is fed to the actor and the critic. ![](https://media.arxiv-vanity.com/render-output/8071429/figures/baby_mode1.png) Figure 4: Comparison on our simplest meta-RL evaluation, ML1. We show results of the simplest meta-learning evaluation mode, ML1, in Figure [7](#A4.F7 "Figure 7 ‣ Appendix D Learning curves ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). We find that there is room for improvement even in this very simple setting. Next, we look at results of multi-task learning across distinct tasks, starting with MT10 in the top left of Figure [5](#S5.F5 "Figure 5 ‣ 5 Experimental Results and Analysis ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning") and in Table [1](#S5.T1 "Table 1 ‣ 5 Experimental Results and Analysis ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). We find that multi-task multi-head SAC is able to learn the MT10 task suite well, achieving around 88% success rate averaged across tasks, while multi-task SAC that has a single head can only solve around 40% of the tasks, indicating that adopting a multi-head architecture can greatly improve multi-task learning performance. On-policy methods such as task embeddings, multi-task PPO, and multi-task TRPO perform significantly worse, achieving less than 30% success across tasks. However, as we scale to 50 distinct tasks with MT50 (Figure [5](#S5.F5 "Figure 5 ‣ 5 Experimental Results and Analysis ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"), bottom left, and average results in Table [1](#S5.T1 "Table 1 ‣ 5 Experimental Results and Analysis ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning")), we find that multi-task multi-head SAC achieves only 35.85% average performance across the 50 tasks, while the other four methods have less than 30% success rates, indicating significant room for improvement. Finally, we study the ML10 and ML45 meta-learning benchmarks, which require learning the meta-training tasks and generalizing to new meta-test tasks with small amounts of experience. From Figure [5](#S5.F5 "Figure 5 ‣ 5 Experimental Results and Analysis ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning") and Table [1](#S5.T1 "Table 1 ‣ 5 Experimental Results and Analysis ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"), we find that the prior meta-RL methods, MAML and RL2 reach 36% and 10% success on ML10 test tasks, while PEARL is unable to generalize to new tasks on ML10. On ML45, PEARL manages to accomplish around 30% success rate on the test set, which suggests that having more meta-training tasks is conducive for PEARL to learn the underlying shared structure and adapt to unseen tasks. MAML and RL2 solve around 20% of the meta-test tasks, potentially due to the additional optimization challenges in this regime. Note that, on both ML10 and ML45, the meta-training performance of all methods also has considerable room for improvement, suggesting that optimization challenges are generally more severe in the meta-learning setting. The fact that some methods nonetheless exhibit meaningful generalization suggests that the ML10 and ML45 benchmarks are solvable, but challenging for current methods, leaving considerable room for improvement in future work. | | | | | | --- | --- | --- | --- | | | | | | Figure 5: Full quantitative results on MT10, MT50, ML10, and ML45. Note that, even on the challenging ML10 and ML45 benchmarks, current methods already exhibit some degree of generalization, but meta-training performance leaves considerable room for improvement, suggesting that future work could attain better performance on these benchmarks. We also show the average success rates for all benchmarks in Table [1](#S5.T1 "Table 1 ‣ 5 Experimental Results and Analysis ‣ Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"). | | | | | --- | --- | --- | | Methods | MT10 | MT50 | | Multi-task PPO | 25% | 8.98% | | Multi-task TRPO | 29% | 22.86% | | Task embeddings | 30% | 15.31% | | Multi-task SAC | 39.5% | 28.83% | | Multi-task multi-head SAC | 88% | 35.85% | | | | | | --- | --- | --- | | Methods | ML10 | ML45 | | meta-train | meta-test | meta-train | meta-test | | MAML | 25% | 36% | 21.14% | 23.93% | | RL2 | 50% | 10% | 43.18% | 20% | | PEARL | 42.78% | 0% | 11.36% | 30% | Table 1: Average success rates over all tasks for MT10, MT50, ML10, and ML45. The best performance in each benchmark is bolden. For MT10 and MT50, we show the average training success rate and multi-task multi-head SAC outperforms other methods. For ML10 and ML45, we show the meta-train and meta-test success rates. RL2 achieves best meta-train performance in ML10 and ML45, while MAML and PEARL get the best generalization performance in ML10 and ML45 meta-test tasks respectively. 6 Conclusion and Directions for Future Work -------------------------------------------- We proposed an open-source benchmark for meta-reinforcement learning and multi-task learning, which consists of a large number of simulated robotic manipulation tasks. Unlike previous evaluation benchmarks in meta-RL, our benchmark specifically emphasizes generalization to distinctly new tasks, not just in terms of parametric variation in goals, but completely new objects and interaction scenarios. While meta-RL can in principle make it feasible for agents to acquire new skills more quickly by leveraging past experience, previous evaluation benchmarks utilize very narrow task distributions, making it difficult to understand the degree to which meta-RL actually enables this kind of generalization. The aim of our benchmark is to make it possible to develop new meta-RL algorithms that actually exhibit this sort of generalization. Our experiments show that current meta-RL methods in fact cannot yet generalize effectively to entirely new tasks and do not even learn the meta-training tasks effectively when meta-trained across multiple distinct tasks. This suggests a number of directions for future work, which we describe below. Future directions for algorithm design. The main conclusion from our experimental evaluation with our proposed benchmark is that current meta-RL algorithms generally struggle in settings where the meta-training tasks are highly diverse. This issue mirrors the challenges observed in multi-task RL, which is also challenging with our task suite, and has been observed to require considerable additional algorithmic development to attain good results in prior work [parisotto2015actor, rusu2015policy, espeholt2018impala]. A number of recent works have studied algorithmic improvements in the area of multi-task reinforcement learning, as well as potential explanations for the difficulty of RL in the multi-task setting [DBLP:journals/corr/abs-1809-04474, schaul2019ray]. Incorporating some of these methods into meta-RL, as well as developing new techniques to enable meta-RL algorithms to train on broader task distributions, would be a promising direction for future work to enable meta-RL methods to generalize effectively across diverse tasks, and our proposed benchmark suite can provide future algorithms development with a useful gauge of progress towards the eventual goal of broad task generalization. Future extensions of the benchmark. While the presented benchmark is significantly broader and more challenging than existing evaluations of meta-reinforcement learning algorithms, there are a number of extensions to the benchmark that would continue to improve and expand upon its applicability to realistic robotics tasks. First, in many situations, the poses of objects are not directly accessible to a robot in the real world. Hence, one interesting and important direction for future work is to consider image observations and sparse rewards. Sparse rewards can be derived already using the success metrics, while support for image rendering is already supported by the code. However, for meta-learning algorithms, special care needs to be taken to ensure that the task cannot be inferred directly from the image, else meta-learning algorithms will memorize the training tasks rather than learning to adapt. Another natural extension would be to consider including a breadth of compositional long-horizon tasks, where there exist combinatorial numbers of tasks. Such tasks would be a straightforward extension, and provide the possibility to include many more tasks with shared structure. Another challenge when deploying robot learning and meta-learning algorithms is the manual effort of resetting the environment. To simulate this case, one simple extension of the benchmark is to significantly reduce the frequency of resets available to the robot while learning. Lastly, in many real-world situations, the tasks are not available all at once. To reflect this challenge in the benchmark, we can add an evaluation protocol that matches that of online meta-learning problem statements [finn2019online]. We leave these directions for future work, either to be done by ourselves or in the form of open-source contributions. To summarize, we believe that the proposed form of the task suite represents a significant step towards evaluating multi-task and meta-learning algorithms on diverse robotic manipulation problems that will pave the way for future research in these areas. \acknowledgments We thank Suraj Nair for feedback on a draft of the paper. This research was supported in part by the National Science Foundation under IIS-1651843, IIS-1700697, and IIS-1700696, the Office of Naval Research, ARL DCIST CRA W911NF-17-2-0181, DARPA, Google, Amazon, and NVIDIA.
86c4201e-5301-46b4-937c-7d43032201dd
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Stop-gradients lead to fixed point predictions *Johannes Treutlein and Rubi Hudson worked on this post as part of* [*SERI MATS*](https://www.serimats.org/)*, under the mentorship of Evan Hubinger. Rubi has also received mentorship from Leo Gao. We thank Erik Jenner for helpful discussions and Alexander Pan for bringing the performative prediction literature to our attention.* *Update 30 May 2023: We have now published* [*a paper*](https://arxiv.org/abs/2305.17601) *based on* [*our previous post*](https://www.alignmentforum.org/posts/Aufg88v7mQ2RuEXkS/proper-scoring-rules-don-t-guarantee-predicting-fixed-points) *and this post (the material from this post is in Appendix D).* 1. Introduction =============== In our [previous post](https://www.alignmentforum.org/posts/Aufg88v7mQ2RuEXkS/proper-scoring-rules-don-t-guarantee-predicting-fixed-points), we analyzed a setting in which an oracle AI is maximizing a [strictly proper scoring rule](https://en.wikipedia.org/wiki/Scoring_rule) while it can influence the world with its predictions about the probabilities of possible outcomes. In this setting, a prediction is called a *fixed point* if it accurately reflects the oracle's beliefs about the world, after having made that prediction. We showed that an oracle AI that is maximizing a strictly proper scoring rule may choose to misrepresent its beliefs and not output a fixed point, even if one exists. This is bad since, all else equal, we expect better outcomes when decisions are based on accurate rather than inaccurate reports. In our previous analysis, we assumed that the oracle jointly optimizes both its prediction and the effect of its prediction on the world. In contrast, in this post we examine cases in which the AI only optimizes its prediction—there is a *stop-gradient* ([Foerster et al., 2018](https://arxiv.org/pdf/1802.05098.pdf); [Demski, 2019](https://www.alignmentforum.org/posts/4hdHto3uHejhY2F3Q/partial-agency)) in front of the oracle's model of the world, which prevents optimizing outcomes directly. This type of cognition could result, for instance, from self-supervised training on historical data. In that case, the AI may learn a robust world model, and it may learn to match its predictions to its beliefs about the world. However, the AI may never learn to optimize outcomes through its prediction, since this yields no advantage on the training data, which cannot be influenced by the AI. Consider a situation where this oracle makes a prediction that can influence the world, and assume it models this influence correctly. Then this could put the AI in a game in which it is trying to make a prediction to match its world model, while the world model updates its beliefs conditional on the AI's prediction. [What happens](https://www.alignmentforum.org/posts/aBRS3x4sPSJ9G6xkj/underspecification-of-oracle-ai) in this game depends on the cognition learned by the AI during training. However, we show that, in contrast to the incentives discussed in our previous post, the only equilibria of this game would be fixed points. Such an oracle could thus be safer if there exists a unique safe fixed point, or if randomly chosen fixed points are likely safe. In this post, we review and analyze several settings related to stop-gradients with the property that equilibria in them are fixed points or close to fixed points. Our goal is (i) to gain a better understanding of stop-gradient optimization in oracles and (ii) to highlight some potential training setups and agent cognitions that would lead to fixed points. Similar ideas to the ones outlined above have been discussed in the literature on *performative prediction* ([Perdomo et al. 2020](https://arxiv.org/pdf/2002.06673.pdf)). Performative prediction is a more general framework in machine learning in which the choice of model has an influence on the modeled distribution. Our analysis of oracle AIs is closely related and can be seen as a special case of this setting. We will indicate similarities where appropriate and make use of concepts from that literature. First, we introduce *performative stability* and relate it to the game outlined above. We show that performatively stable predictions and equilibria in the game are fixed points. Second, we introduce versions of the *repeated risk minimization* and *repeated gradient descent* algorithms. We show that both lead to fixed point predictions. Third, we discuss practical methods based on repeated gradient descent that can be applied to an online learning setting, including [Armstrong (2018)](https://www.alignmentforum.org/posts/hJaJw6LK39zpyCKW6/standard-ml-oracles-vs-counterfactual-ones)'s backwards-facing oracle. Finally, we discuss no-regret learning and prediction markets. We introduce a no-regret learning framework and show that policies with sublinear regret also have sublinear prediction error. In our decision market model, we show that, if the weight of each trader is small, the equilibria of the market are close to fixed points. Oracles with stop-gradients may optimize outcomes to find fixed points, which could lead to bad consequences. Thus, the [safest oracles](https://www.alignmentforum.org/posts/aBRS3x4sPSJ9G6xkj/underspecification-of-oracle-ai) would be ones that make predictions only about aspects of the world they cannot influence. However, among oracles that can influence the world, ones with a stop-gradient are preferable for two reasons. First, they report their true beliefs, which gives us better information to base decisions on. This also enables approaches in which we ensure that there is only one safe fixed point. Second, the AI does not optimize its choice of fixed point, which is safer for the standard reasons of not wanting to optimize for an unaligned goal. Which fixed point is chosen will be contingent on initialization and specifics of the fixed point finding procedure. 2. Formal setting ================= Our setup is analogous to the one in our [previous post](https://www.alignmentforum.org/posts/Aufg88v7mQ2RuEXkS/proper-scoring-rules-don-t-guarantee-predicting-fixed-points). We refer the reader to that post for more context. We will generalize our setting to predictions over more than two possible outcomes in an upcoming paper, but in this post we focus on binary predictions for simplicity. We believe that the results in this post generalize to higher dimensions. An oracle reports a probability p∈[0,1].mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} for a binary event. A [*scoring rule*](https://en.wikipedia.org/wiki/Scoring_rule) is a function S:[0,1]×{0,1}→¯¯¯¯R, where ¯¯¯¯R:=[−∞,∞] is the extended real line. Given prediction p∈[0,1] and outcome i∈{0,1}, the oracle receives the score S(p,i). We write  S(p,q):=qS(p,1)+(1−q)S(p,0) for the expected score of predicting p, given that i is Bernoulli-distributed with parameter q. A scoring rule S is called *proper* if  S(q,q)≥S(p,q)for all p,q∈[0,1]. It is called *strictly proper* if this inequality is strict whenever p≠q. As in the previous post, we will use a result from [Gneiting and Raftery (2007, Theorem 1)](https://sites.stat.washington.edu/raftery/Research/PDF/Gneiting2007jasa.pdf). **Theorem 1** (Gneiting and Raftery). *Let*S*be a scoring rule. Then*S*is strictly proper if and only if there exists a strictly convex function*G:[0,1]→¯¯¯¯R*with* [*subderivatives*](https://en.wikipedia.org/wiki/Subderivative)g:[0,1]→¯¯¯¯R*(if*G*is differentiable, this is just the derivative,*g(p)=G′(p)*) such that* S(p,q)=G(p)+g(p)(q−p)*for any*p,q∈[0,1]*.* To model the oracle's perceived influence on the world, we assume that there is a function f:[0,1]→[0,1] describing how the model's beliefs about the world, q, depend on its prediction, p. That is, we assume that q=f(p). We say that p is a *fixed point* if f(p)=p, i.e., if the oracle's belief is p, after updating on making the prediction p. 3. Performative stability and game theory ========================================= Our discussion of oracles is related to *performative prediction* ([Perdomo et al. 2020](https://arxiv.org/pdf/2002.06673.pdf)). This is a machine learning setting where we choose a model parameter (e.g., parameters for a neural network) that minimizes expected loss (e.g., classification error). In performative prediction, the distribution over data points can depend on the choice of model parameter. Our setting is thus a special case in which the parameter of interest is a probability distribution, the loss is a scoring function, and data points are discrete outcomes. Most results in this and our [previous post](https://www.alignmentforum.org/posts/Aufg88v7mQ2RuEXkS/proper-scoring-rules-don-t-guarantee-predicting-fixed-points) have analogues in performative prediction. Translated into our setting, we say that a prediction p∗ is *performatively optimal* if  p∗=argmaxpS(p,f(p)).This corresponds to the objective considered in our previous post, of an oracle that maximizes score. By results in that post, performatively optimal predictions are not necessarily fixed points. Now consider an oracle that chooses predictions p optimally given beliefs f(p), but without optimizing f(p) explicitly. This idea is captured by the definition of *performative stability*. **Definition 2** (Performative stability ([Perdomo et al. 2020](https://arxiv.org/pdf/2002.06673.pdf))). A prediction p∗ is called performatively stable if  p∗=argmaxpS(p,f(p∗)).Note that f(p∗) is fixed when taking the maximum—there is a "stop-gradient" before f(p∗) in this objective. It follows that performatively stable points are fixed points. **Proposition 3**. *Assume*S*is strictly proper. Then a prediction*p∗*is a fixed point if and only if it is performatively stable.* *Proof.* “⇒”. Assume f(p∗)=p∗. Then, since S is proper, S(p∗,f(p∗))=S(p∗,p∗)≥S(p,p∗)=S(p,f(p∗))for any p∈[0,1]. Hence, p∗=argmaxpS(p,f(p∗)). “⇐”. Assume p∗=argmaxpS(p,f(p∗)). Since S is strictly proper, it follows p∗=f(p∗). ◻ Having introduced performative stability and related it to fixed points, we will now discuss the game from the introduction. **Definition 4** (Oracle game). Consider a two-player continuous game in which the first player controls p∈[0,1] and the second player controls q∈[0,1], with utility functions U1(p,q):=S(p,q) and U2(p,q):=−12(f(p)−q)2 for the two players, respectively. If p∗,q∗ is a Nash equilibrium of the oracle game, we have p∗=argmaxpS(p,q) and q∗=argmaxq−(f(p∗)−q)2. Substituting the optimum value q∗=f(p∗) for the second player gives us exactly above definition of performative stability. Conversely, if a prediction p∗ is performatively stable, then setting q∗:=f(p∗) yields a Nash equilibrium. **Proposition 5**. *Assume*S*is a proper scoring rule. Then for*p∈[0,1]*and*q:=f(p)*,*(p,q)*is a Nash equilibrium of the oracle game, if and only if*p*is performatively stable. By Proposition 3, this is equivalent to*p*being a fixed point.* The oracle game could be played between two submodules of an oracle, one who is responsible for making predictions and one who is responsible for updating the oracle's beliefs. Those two modules might collapse into just one big module taking a question and outputting a prediction, but it is still useful as a starting point to model them separately. Note that using the squared error for the world model is not essential here. We could also use any other function that is minimized at q=f(p). The game could also arise in an agent that uses causal decision theory to maximize its score and that believes that S is influenced causally by p, but only acausally by f(p). In that case, the only *ratifiable* (see [Jeffrey, 1983, Ch. 1.7](https://press.uchicago.edu/ucp/books/book/chicago/L/bo3640589.html)) decision is a Nash equilibrium of the above game. Similarly, the deliberational causal epistemic decision theory discussed by [Greaves (2013)](https://users.ox.ac.uk/~mert2255/papers/edt.pdf) would output Nash equilibria of this game (whereas performative optimality would correspond to an agent using evidential epistemic decision theory in this case). [Perdomo et al. (2020)](https://arxiv.org/pdf/2002.06673.pdf) introduce a stackelberg version of the oracle game that produces performatively optimal instead of performatively stable reports. Consider a game in which player 1 acts first and chooses p, after which player 2 adjusts its prediction q. Then player 2 will choose q=f(p), so player 1's optimization problem becomes  p∗=argmaxpS(p,argmaxq−(q−f(p))2)=argmaxpS(p,f(p)).4. Repeated risk minimization and repeated gradient descent =========================================================== Above, we have defined an alternative optimization problem and an associated game which yield fixed points, but we have not defined methods for solving these problems. In the performative prediction context, [Perdomo et al. (2020)](https://arxiv.org/pdf/2002.06673.pdf) introduce *repeated risk minimization* and *repeated gradient descent*, both methods that converge to performatively stable points. In this section, we review both schemes and show how repeated gradient descent can be seen as gradient descent on a stop-gradient objective. Here, we assume direct access to q, instead of having only access to samples distributed according to q. In the next section, we discuss online learning when we only have access to samples. One way to understand this distinction is that the former corresponds to the internal cognition of an agent with a belief q=f(p), optimizing a prediction p. The latter corresponds to a machine learning training setup for an oracle, where q is the ground truth distribution instead of the oracle's belief. Of course, there is no strict divide between the two. Any optimization algorithm could be used either by the agent itself or to train the agent. First, *repeated risk minimization* is a procedure by which we start with a prediction p1 and then iteratively update the prediction as pt+1=argmaxpS(p,f(pt)). Note that this is equivalent to alternating best response learning in the oracle game, where players 1 and 2 alternatingly optimize their actions given the action by the other player. Then player 2's update is qt=f(pt), and if S is strictly proper, player 1's update is pt+1=qt=f(pt). This shows that repeated risk minimization results in [*fixed point iteration*](https://en.wikipedia.org/wiki/Fixed-point_iteration) on f. Fixed point iteration converges globally to a fixed point if f has Lipschitz constant L<1. It also converges locally to a fixed point p∗ if f is continuously differentiable at p∗ and |f′(p∗)|<1. Second, assume that S is differentiable. Then *repeated gradient ascent* updates predictions via pt+1:=Π(pt+αEy∼f(pt)[∇pS(pt,y)]),where Ey∼f(pt)[⋅] denotes the expectation over outcome y with Bernoulli distribution with parameter f(pt), Π is the projection onto [0,1], and α>0 is the learning rate. Using the definition of S, we have  Ey∼f(pt)[∇pS(pt,y)]=∇p(Ey∼q[S(pt,y)])|q=f(pt)=∇p(S(pt,q))|q=f(pt).We can express this as  ∇p(S(pt,⊥f(pt))):=∇p(S(pt,q))|q=f(pt),where, ⊥ is the *stop-gradient operator*. It evaluates to the identity function but sets gradients to zero, ∇x(⊥x)=0 ([Foerster et al., 2018](https://arxiv.org/pdf/1802.05098.pdf); [Demski, 2019](https://www.alignmentforum.org/posts/4hdHto3uHejhY2F3Q/partial-agency)). This is not a mathematical function (there is no function that is equal to the identity but has gradient zero everywhere), but rather a notational convention in reference to the [`stop_gradient`](https://www.tensorflow.org/api_docs/python/tf/stop_gradient) or [`detach`](https://pytorch.org/docs/stable/generated/torch.Tensor.detach.html) functions from tensorflow or pytorch. Interestingly, one can perform valid derivations using the stop-gradient operator (e.g., using the chain rule). We leave it to future work to explore the mathematics behind stop-gradients further. Importantly, it matters that the gradient in repeated gradient ascent lies inside instead of outside the expectation:  Ey∼f(pt)[∇pS(pt,y)]=∇p(S(pt,⊥f(pt)))≠∇p(S(pt,f(pt)))=∇p(Ey∼f(pt)[S(pt,y)]).Unlike repeated gradient ascent, the latter implements gradient ascent on S(pt,f(pt)) and thus leads to performatively optimal reports. [Perdomo et al. (2020)](https://arxiv.org/pdf/2002.06673.pdf) show that, given their assumptions, repeated gradient descent globally converges to stable fixed points, and they provide convergence rates. We will show an analogous result relating repeated gradient ascent to fixed points in our setting, though we won't analyze global convergence. To begin, we show that repeated gradient ascent is equivalent to naive learning [(Letcher et al., 2019)](https://arxiv.org/pdf/1811.08469.pdf) in the oracle game, assuming that player 2 always plays q=f(p). **Proposition 6**. *Assume player*1*is performing gradient ascent on its objective with learning rate*α*, under the assumption that player*2*plays*q=f(p)*. Then player*1*'s update is* pt+1=Π(pt+α∇p(S(pt,⊥f(pt)))).*Proof.* The proof follows immediately from the definitions. Player 1's update is, by assumption,  pt+1=Π(pt+α∇p(U1(pt,q)))=Π(pt+α∇p(S(pt,q))),where q is player 2's action. Assuming player 2 plays q=f(pt), we get  pt+1=Π(pt+α∇p(S(pt,q)))=Π(pt+α∇p(S(pt,⊥f(pt)))).◻ Next, we show that fixed points are critical points of the fixed-point objective. **Proposition 7**. *Assume*S*is proper and let*G,g*as in the Gneiting and Raftery characterization (Theorem 1) be differentiable. Then for any*p∈[0,1],*we have* ∇p(S(p,⊥f(p)))=g′(p)(f(p)−p).*In particular, if*p*is a fixed point, it follows that*∇p(S(p,⊥f(p)))=0*. The reverse is true if*g′(p)≠0. *Proof.*  ∇p(S(p,⊥f(p)))=∇p(S(p,q))|q=f(p)=∇p(G(p)+g(p)(q−p))|q=f(p)=(g(p)+g′(p)(q−p)−g(p))|q=f(p)=g′(p)(f(p)−p).◻ Finally, we show that in our setting, repeated gradient ascent locally converges to fixed points p∗, with linear convergence rate, assuming that f′(p∗)<1 and g′(p∗)>0. **Theorem 8**. *Let*S*be a strictly proper scoring rule. Let*p∗∈[0,1]*be a fixed point of*f*such that*G*is twice continuously differentiable at*p∗*and*g′(p∗)>0*. Moreover, assume*f*is continuously differentiable at*p∗*and*f′(p∗)<1*. Then, for small enough*α>0*, there exists an open set*U⊂R*with*p∗∈U*such that for*p1∈U∩[0,1]*, an agent taking updates*pt+1=Π(pt+α∇p(S(pt,⊥f(pt))))*will linearly converge to*p∗*.* *Proof.* In Appendix A. This is a standard convergence proof. Consider the discrete dynamical system defined by the agent's updates pt+1=φ(pt). By Proposition 7, it has a fixed point at p=p∗. We show that ∇p(∇p(S(p∗,⊥f(p∗))))<0. This means that the fixed point is stable and thus the system will locally converge to it given small enough α. ◻ 5. Online learning ================== Now consider a machine learning setup in which we train an oracle with stochastic gradient ascent on environment samples. We assume that at time t, a model makes a prediction Pt and receives a score S(Pt,Yt), where Yt is Bernoulli-distributed with parameter f(Pt). The model is then updated using gradient ascent on S(Pt,Yt). That is, for some learning rate schedule (αt)t, we have  Pt+1=Π(Pt+αt∇pS(Pt,Yt)).We discuss this as a theoretical model for oracles trained using machine learning, to show how training setups may incentivize predicting fixed points. There are [many issues](https://www.alignmentforum.org/posts/aBRS3x4sPSJ9G6xkj/underspecification-of-oracle-ai) with the setting beyond giving accurate predictions; for instance, even if the training process sets the right incentives on training examples, the learned model may be [optimizing a different objective](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB) when generalizing to new predictions. To see that above setup incentivizes predicting fixed points, note that  EYt∼f(Pt)[∇pS(Pt,Yt)]=∇pEYt∼⊥f(Pt)[S(Pt,Yt)]=∇p(S(Pt,⊥f(Pt))).That is, the expectation of this gradient, conditional on Pt, is exactly the repeated gradient from the previous section. Hence, given the right assumptions, this converges to fixed points instead of performative optima. We do not show this here, but an analogous result in performative prediction was proved by [Mendler-Dünner et al. (2020)](https://proceedings.neurips.cc/paper/2020/file/33e75ff09dd601bbe69f351039152189-Paper.pdf). There are several variations of this setup that essentially set the same incentives. For instance, one could also draw entire batches of outcomes Yt,1:B and then perform updates based on the batch gradient ∇p∑Bb=1S(Pt,Yt,b). This is a monte carlo estimate of the repeated gradient and thus also converges to performatively stable points and thus fixed points [(Perdomo et al., 2020)](https://arxiv.org/pdf/2002.06673.pdf). One could also mix the two algorithms and, e.g., perform gradient ascent on an average of past losses, yielding a version of the backwards-facing oracle discussed in [Armstrong (2018)](https://www.alignmentforum.org/posts/hJaJw6LK39zpyCKW6/standard-ml-oracles-vs-counterfactual-ones). In Appendix D, we show that that oracle can only converge to fixed points. Note that finding fixed points depends on the fact that we differentiate S(Pt,Yt) instead of the expectation EYt∼f(Pt)[S(Pt,Yt)]=S(Pt,f(Pt)). If we used policy gradients to differentiate S(Pt,f(Pt)), for instance, we would again optimize for performative optimality. Similarly, we could learn a Q-function representing scores for each prediction, and update the function based on randomly sampled predictions p. Then the Q-function would converge to estimates of S(p,f(p)), and the highest Q-value would be a performative optimum. There are also some more recent papers in performative prediction that explicitly try to estimate the gradient ∇p(S(p,f(p))) and thus find performatively optimal instead of stable points [(Izzo et al., 2021)](http://proceedings.mlr.press/v139/izzo21a/izzo21a.pdf). Stop-gradients could also be circumvented in a hidden way [(Krueger et al., 2020)](https://arxiv.org/abs/2009.09153). For instance, consider a hyperparameter search to meta-learn a learning algorithm, where the evaluation criterion is the accumulated loss during an episode. Then this search would prefer algorithms that optimize S(p,f(p)) directly, without a stop-gradient. Lastly, repeated gradient descent is related to *decoupled approval* in RL [(Uesato et al., 2020)](https://arxiv.org/pdf/2011.08827.pdf). The decoupled approval policy gradient samples actions and approval queries independently and can thus differentiate with a stop-gradient in front of the approval signal. In our setting, we can differentiate through S(Pt,Yt) directly, so it is not necessary to calculate this gradient with a decoupled policy gradient. Decoupled gradients could be used to implement the stop-gradient objective if scores were discrete or otherwise not differentiable. 6. No-regret learning ===================== In this section, we consider no-regret learning and show that algorithms have sublinear regret if and only if their prediction error is sublinear. Regret takes the environment probabilities as given and asks which predictions would have been optimal in hindsight. It thus puts a “stop-gradient” in front of those environment probabilities. As in the previous section, we assume that at time t∈N, the agent makes a prediction Pt and receives a score S(Pt,Yt), where Yt∼f(Pt). The agent's cumulative score at step T is defined as ∑Tt=1S(Pt,Yt). In no-regret learning, we compare performance against *experts*, which choose sequences of probabilities (P′t)t, P′t∈[0,1]. We assume that an expert's prediction P′t is independent of Yt conditional on Pt. I.e., an expert knows the predictions Pt and thus probabilities f(Pt), but it does not know the outcome of Yt. Let P the set of all such experts. The regret of the agent is the difference between the cumulative score received by the best expert in expectation and the cumulative score received by the agent. To define it formally, let  P∗t∈argmaxP′t∈PE[S(P′t,Yt)∣Pt]=f(Pt)S(P′t,1)+(1−f(Pt))S(P′t,0)for t∈N. P∗t is a random variable that maximizes the expectation of S(P∗t,Yt) before Yt is drawn, but conditional on Pt. **Definition 9** (Regret). The regret of agent (Pt)t at time T is  Regret(T)=T∑t=1S(P∗t,Yt)−S(Pt,Yt). The agent is said to have *sublinear regret* or *no-regret* if  limsupT→∞1TRegret(T)≤0.First, note that we define regret relative to the best expert in expectation instead of the best expert in hindsight. The latter would always be the one that made confident predictions and accidentally got all predictions exactly right. To achieve sublinear regret, it would thus be too much to ask the agent to perform well compared to the best expert in hindsight. Moreover, for scoring rules with S(0,0)=S(1,1), this expert would have a constant score C, such that Regret(T)=∑Tt=1C−S(Pt,Yt). This would reduce the problem to minimizing the negative score and thus finding performatively optimal predictions. Second, we evaluate the performance of the expert with respect to the environment outcomes Yt generated by the agent Pt, instead of evaluating the expert according to outcomes ~Yt∼f(P∗t) generated using their own predictions. This means that, to receive sublinear regret, the agent only has to make accurate predictions—it does not have to find a performatively optimal prediction. Our setting is thus different from the no-regret learning setup discussed in [Jagadeesan et al. (2022)](https://arxiv.org/pdf/2202.00628.pdf), where regret is defined with respect to S(P∗t,f(P∗t)). In that setting, only agents converging to performatively optimal predictions have sublinear regret. We begin by showing that the best expert in expectation actually exists, and that P∗t=f(Pt). **Proposition 10**. *Let*S*be a proper scoring rule and*(P′t)t∈P*an expert. Then for any*t∈N,*we have* E[S(P′t,Yt)]=E[S(P′t,f(Pt))].*Moreover, we have*(P∗t)t=(f(Pt))t*and thus* Regret(T)=T∑t=1S(f(Pt),Yt)−S(Pt,Yt).*Proof.* Let t∈N and let (P′t)t∈P be any expert. Conditional on Pt, Yt has parameter f(Pt) and is independent of P′t by assumption. Hence,  E[S(P′t,Yt)]=E[E[S(P′t,Yt)∣Pt,P′t]]=E[S(P′t,f(Pt))].Next, since S is proper,  E[S(P′t,f(Pt))]≤E[S(f(Pt),f(Pt))]. It follows that  max(P′t)t∈PE[S(P′t,Yt)]=max(P′t)t∈PE[S(P′t,f(Pt))]≤E[S(f(Pt),f(Pt))]=E[S(f(Pt),Yt)].Moreover, (f(Pt))t∈P, as f(Pt) is constant given Pt and thus independent of Yt. It follows that, for any t∈N, P∗t∈argmax(P′t)t∈PE[S(P′t,Yt)], and thus  Regret(T)=T∑t=1S(f(Pt),Yt)−S(Pt,Yt).◻ If S is unbounded (such as the log scoring rule), then the agent's scores can become arbitrarily low, and the limit of 1TRegret(T) may be undefined. To simplify our analysis, we will thus assume that there is a bound on the variance of the received score S(P′t,Yt) and on the expected score S(P′t,f(Pt)) of both the agent (Pt)t and the best expert (P∗t)t. In the case of the log scoring rule, this would be satisfied, for instance, if the agent's predictions are bound away from 0 and 1. Our next proposition shows that, given these assumptions, the limit limT→∞1TRegret(T) exists and is nonnegative, and sublinear regret is equivalent to limt→∞1TRegret(T)=0. **Proposition 11**. *Let*S*be a proper scoring rule. Assume that*supt|S(P′t,f(Pt))|<∞*and that*suptVar(S(P′t,Yt))<∞*for*P′t∈{Pt,f(Pt)}*. Then almost surely* limT→∞1TRegret(T)=limT→∞1TT∑t=1S(f(Pt),f(Pt))−S(Pt,f(Pt))≥0.*In particular, almost surely both limits exist and are finite, and the agent has sublinear regret if and only if* limT→∞1TT∑t=1S(f(Pt),f(Pt))−S(Pt,f(Pt))=0.*Proof.* In Appendix B. ◻ Now we turn to the main result for this section. We show that given our assumptions, agents have sublinear regret if and only if their prediction error is sublinear. Note that here, we do *not* require the Pt to converge; they could also oscillate between different fixed points. **Theorem 12**. *Let*(Pt)t*be the sequence of the agent's predictions and*S*a strictly proper scoring rule. Assume that*suptVar(S(P′t,Yt))<∞*for*P′t∈{Pt,f(Pt)}*, and assume that there exists a compact set*C⊆[0,1]*such that*Pt∈C*for all*t*and*S(p,f(p))*,*S(f(p),f(p)),*and*f(p)*are continuous in*p*at any*p∈C*. Then almost surely the agent has sublinear regret if and only if*∑Tt=1|f(Pt)−Pt|*is sublinear, i.e., if*limt→∞1T∑Tt=1|f(Pt)−Pt|=0*.* *Proof.* In Appendix C. ◻ The next result shows that if the agent's probabilities converge to some probability p, then p must be a fixed point. **Corollary 13**. *In addition to the assumptions from Theorem 12, assume that*Pt*converges almost surely to a limit*limt→∞Pt=p∗*. Then almost surely*p∗*is a fixed point if and only if the agent has sublinear regret.* *Proof.* By Theorem 12, almost surely the agent has sublinear regret if and only if  limt→∞1TT∑t=1|f(Pt)−Pt|=0.It remains to show that, given that the Pt converge, the latter is equivalent to convergence to a fixed point. Since C is compact and Pt∈C for all t∈N, also p∗∈C. Hence, f is continuous at p∗, so  |f(p∗)−p∗|=∣∣∣f(limt→∞Pt)−limt→∞Pt∣∣∣=limt→∞|f(Pt)−Pt|.Since this sequence converges, it is equal to its [Cesàro mean](https://proofwiki.org/wiki/Ces%C3%A0ro_Mean), limt→∞|f(Pt)−Pt|=limT→∞1TT∑t=1|f(Pt)−Pt|.Hence,  |f(p∗)−p∗|=limt→∞|f(Pt)−Pt|=limT→∞1TT∑t=1|f(Pt)−Pt|. It follows that, if limt→∞Pt=p∗, then  |f(p∗)−p∗|=0⇔limT→∞1TT∑t=1|f(Pt)−Pt|=0.This shows that, almost surely, p∗ is a fixed point, if and only if ∑Tt=1|f(Pt)−Pt| is sublinear. ◻ 7. Prediction markets ===================== Lastly, we consider prediction markets. We assume a simplified model of a prediction market, in which traders submit a single prediction and get scored using a proper scoring rule. The prediction that is output by the market and that influences the outcome is just a weighted average of the individual traders' predictions. In this situation, if a trader has a small weight and can thus barely influence the market prediction, the trader's score will mostly be determined by the accuracy of the report, rather than the influence of the report on the market. Thus, if all traders are small relative to the market, the equilibrium prediction will be close to a fixed point. A similar result was shown by [Hardt et al. (2022)](https://arxiv.org/pdf/2203.17232.pdf) in the performative prediction context. They define a firm's performative power as the degree to which the firm can influence the overall outcome with their prediction. Hardt et al. (2022) show that in an equilibrium, the distance between a player's (performatively optimal) equilibrium strategy and their strategy when optimizing loss against the fixed equilibrium distribution (here, this means predicting the market probability) is bounded by the power of the trader. We give an analogous result for our formal setting and assumptions. To formalize the setting, assume that there are n players. We associate with each player i a number wi∈[0,1] such that ∑jwj=1, representing, intuitively, what fraction of the overall capital in the market is provided by player i. In the game, all players simultaneously submit a probability pi. Then the event Y occurs with probability q=f(∑jwjpj). Finally, each player is scored in proportion to S(pi,Y) for some strictly proper scoring rule S. Typical market scoring rules would consider terms like S(pi,Y)−S(pj,Y), but subtracting S(pj,Y) (or multiplying by constants) does not matter for the game. We assume that players maximize their expected score, E[S(pi,Y)]=S(pi,f(∑jwjpj)). For discussions of market scoring rules, see [Hanson 2003](https://link.springer.com/article/10.1023/A:1022058209073) and [Sami and Pennock 2007](https://www.algo.cs.uni-frankfurt.de/lehre/agt/material/Algorithmic_Game_Theory.pdf#page=672). Prior work has connected these market scoring rules to more realistic prediction markets that trade Arrow--Debreu securities markets such as PredictIt ([Hanson 2003](https://link.springer.com/article/10.1023/A:1022058209073); [Hanson 2007](http://www.ubplj.org/index.php/jpm/article/view/417/448); [Pennock and Sami 2007](https://www.algo.cs.uni-frankfurt.de/lehre/agt/material/Algorithmic_Game_Theory.pdf#page=672), Ch. 4; [Chen and Pennock 2007](https://arxiv.org/abs/1206.5252); [Agrawal et al. 2009](https://dl.acm.org/doi/pdf/10.1145/1566374.1566412); [Chen and Vaughan 2010](https://dl.acm.org/doi/pdf/10.1145/1807342.1807372)). We assume that f is common knowledge. Moreover, in the following we only consider pure strategy equilibria, and we do not investigate the existence of equilibria. **Theorem 14**. *Let*S*be a proper scoring rule and let*G,g*as in the Gneiting and Raftery characterization. Let*p*be a pure strategy Nash equilibrium of the game defined above and let*^p:=∑jwjpj*be the market prediction. Assume*f*is differentiable at*^p*. For any player*i*, if*G,g*are differentiable at*pi*and*g′(pi)≠0,*it follows that* |f(^p)−pi|≤∣∣∣wif′(^p)g(pi)g′(pi)∣∣∣.*Whenever*pi∉{0,1}*, the bound is tight.* In particular, this theorem shows that players i with very low wi (little capital/influence on q) will accurately predict f(^p)=f(∑jwjpj). Note, however, that ^p is not necessarily a fixed point or close to a fixed point. If there are are also players i with very high wi, then their prediction and the overall market prediction may be wrong. (So interestingly the overall market probability ∑jwjpj is worse than the prediction of individuals. One might take this to suggest that anyone interested in q should look at the latter type of predictions. Of course, if this is what everyone does, it is not so clear anymore that the model q=f(∑jwjpj) is accurate.) *Proof.* The proof is analogous to the proof of Proposition 9 in our [previous post](https://www.alignmentforum.org/posts/Aufg88v7mQ2RuEXkS/proper-scoring-rules-don-t-guarantee-predicting-fixed-points). For p to be a pure strategy Nash equilibrium, each player must play a best response to the other player's strategies. That is, pi must be a global maximum of the function pi↦S(pi,∑jwjpj). Therefore, the derivative of this function must be zero if pi∈(0,1) and positive/negative if pi=1/pi=0. The derivative is  ddpiS(pi,f(∑jwjpj))=ddpi(g(pi)(f(∑jwjpj)−pi)+G(pi))=ddpi(g(pi)(f(∑jwjpj)−pi))+g(pi)=g′(pi)(f(^p)−pi)+g(pi)wif′(^p).Now, if p∈(0,1), we have  0=ddpS(p,f(p))=g′(pi)(f(^p)−pi)+g(pi)wif′(^p).Rearranging terms and taking the absolute value, it follows  |f(^p)−pi|=∣∣∣wif′(^p)g(pi)g′(pi)∣∣∣.Next, assume pi=0 is the optimal report for player i. Then we must have  0≤ddpS(p,f(p))=g′(pi)(f(^p)−pi)+g(pi)wif′(^p).By Theorem 1, G is convex and thus g′(p)≥0. Hence, the above is equivalent to pi−f(^p)≥wif′(^p)g(pi)/g′(pi). Since pi=0, it follows pi−f(^p)≤0. Hence, |pi−f(^p)|≤∣∣∣wif′(^p)g(pi)g′(pi)∣∣∣. Finally, if p=1, then analogously pi−f(^p)≤wif′(^p)g(pi)/g′(pi), and since p=1, it follows again |pi−f(^p)|≤∣∣∣wif′(^p)g(pi)g′(pi)∣∣∣. This concludes the proof. ◻ **Corollary 15**. *In addition to the assumptions from Theorem 14, assume that*f*is Lipschitz-continuous and that*C:=supp∈[0,1]∣∣g(p)g′(p)∣∣<∞.*Let*p*be a Nash equilibrium and let*ϵ>0*arbitrary. Then there exists a*δ>0*such that if for all*i*,*wi<δ,*all of*pi*and*f(pi)*, for all*i*, as well as*∑jwjpj*and*f(∑jwjpj),*are within*ϵ*of each other.* *Proof.* Let ϵ>0 arbitrary. Let L be the Lipschitz constant of f and note that then |f′(p)|≤L for all p∈[0,1]. By Theorem 14, it follows for ^p:=∑jwjpj and any player i that  |f(^p)−pi|≤wiLC.Now let λ:=min({1,1L}) and δ:=ϵλ4CL. Then, assuming wi<δ for all i, it follows  |f(^p)−pi|≤δLC≤λ4ϵ.Moreover, since ^p is a convex combination of probabilities pi, it follows that also  |f(^p)−^p|≤maxi|f(^p)−pi|≤λ4ϵ.Thus, by the triangle equality, we have |pi−^p|≤2λ4ϵ, and since f is Lipschitz-continuous, |f(^p)−f(pi)|≤L|^p−pi|≤L2λ4ϵ≤12ϵfor any i. This shows that all of pi,^p,f(pi) are within 12ϵ of f(^p) and thus by the triangle inequality within ϵ of each other. This concludes the proof. ◻ It would be interesting to extend these results. For example, it's already not so clear if the players make predictions *repeatedly*. (To keep things simple, one should probably still imagine that all players know f and that the environment probability is determined by f applied to the majority forecast. If the traders have private information, prediction markets become harder to analyze. For some discussions, see [Ostrovsky 2012](https://onlinelibrary.wiley.com/doi/epdf/10.3982/ECTA8479), [Chen and Waggoner 2016](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7782936).) Appendix A. Proof of Theorem 8 ============================== We use the following theorem, adapted from "Iterative Solution of Nonlinear Equations in Several Variables" ([Ortega and Rheinboldt, 2000](https://epubs.siam.org/doi/book/10.1137/1.9780898719468)). **Theorem 16** (Ostrowski). *Assume that*φ:D⊆Rn→Rn*has a fixed point*x∗∈int(D)*and is continuously differentiable at*x∗*. If*|μ|<1*for all eigenvalues*μ*of*φ′(x∗)*, then there exists an open set*U⊆D*with*x∗∈U*such that for any*x0∈U*, letting*xk=φ(xk−1)*for*k∈N,*it is*xk∈U*for all*k*and*xk*converges at least linearly to*x∗*.* To begin, assume that p∗∈[0,1] is a fixed point of f, that f is continuously differentiable at p∗, and that f′(p∗)<1. Moreover, let G,g as in the Gneiting and Raftery representation of S, and assume that G is twice continuously differentiable at p∗ and thus G′′=g′ is continuous at p∗. Define the function  φ:[0,1]→R,p↦p+α∇p(S(p,⊥f(p))).Note that the agent's update rule is then ~φ defined via ~φ(p):=min{1,max{0,φ(p)}}. First, by Proposition 7, we know that for any p∗∈[0,1], we have ∇pS(p∗,⊥f(p∗))=g′(p∗)(f(p∗)−p∗), and if p∗ is a fixed point of f, we must have g′(p∗)(f(p∗)−p∗)=0 and hence φ(p∗)=p∗. Hence, p∗ is a fixed point of φ and thus also of ~φ. Now we will apply Theorem 16 to φ. To that end, let  ϵ:=∇p(∇pS(p∗,⊥f(p∗))).Then  ϵ=∇p(g′(p∗)(f(p∗)−p∗))=g′′(p∗)(f(p∗)−p∗)+g′(p∗)(f′(p∗)−1)=g′(p∗)(f′(p∗)−1)since f(p∗)=p∗. Moreover, by assumption, g′(p∗)>0 and f′(p∗)<1. Hence, it follows that ϵ=g′(p∗)(f′(p∗)−1)<0. Now note that  φ′(p∗)=∇p(p+α∇p(S(p∗,⊥f(p∗))))=1+α∇p(∇p(S(p∗,⊥f(p∗))))=1+αϵ.Hence, for small enough α>0, it is |φ′(p∗)|<1. Moreover, by assumption, g′(p∗) and f′(p∗) are continuous, so also the derivative φ′(p∗)=1+αg′(p∗)(f′(p∗)−1) is continuous. Now we can apply Ostrowski's theorem, which tells us that, if p∗∈(0,1), there is an open set U⊆(0,1) with p∗∈U, such that for any p0∈U, the iterates pk=φ(pk−1) all lie in U and pk converges at least linearly to p∗. This shows that if p∗ is an interior point, we can choose α>0 small enough to make sure that F(U)⊆U⊆(0,1) and thus ~φ=φ on U, and for p0∈U, the iteration pk=~φ(pk−1) locally converges to p∗. Now assume that p∗=1 (the proof for p∗=0 is analogous). We can extend the function φ for values p∈(1,∞) by the [Whitney extension theorem](https://en.wikipedia.org/wiki/Whitney_extension_theorem). This theorem tells us that if φ is continuously differentiable, then there also exists ¯¯¯¯φ:(0,∞)→R such that φ=¯¯¯¯φ|[0,1]. We can then apply the above argument with Ostrowski's theorem to the function ¯¯¯¯φ. In particular, there exists an open subset U⊆(0,∞) with p∗∈U such that for any p0∈U, we have pk∈U for iterates pk=¯¯¯¯φ(pk−1) and limk→∞pk=p∗. This also applies to any p0∈U∩[0,1]. Now consider the actual update rule ~φ with iterates ~pk=~φ(~pk−1) and ~p0=p0∈(0,1]. Let k:=inf{k∣pk∈(1,∞)}; i.e., this is the smallest k such that an iterate is out of bounds and thus pk≠~pk. If k=∞, then we are done. Otherwise, we have ~pk=~φ(pk−1)=1=p∗, so the actual update rule ~φ also converges to its fixed point p∗ (potentially faster than ¯¯¯¯φ). This concludes the proof. Appendix B. Proof of Proposition 11 =================================== We will use a version of the strong law of large numbers for uncorrelated random variables with bounded variance, adapted from [Neely (2021, Theorem 2)](https://viterbi-web.usc.edu/~mjneely/Borel-Cantelli-LLN.pdf). **Theorem 17**. *Let*{Xt}t∈N0*be a sequence of pairwise uncorrelated random variables with mean*0*and bounded variances. I.e., assume that* E[Xt]=0*for all*t∈N0 *There exists*c>0*such that*Var(Xt)≤c*for all*t∈N0 Cov(Xt,Xt′)=0*for all*t≠t′∈N0*.* *Then almost surely* limT→∞1TT∑t=1Xt=0.We will apply this law to random variables Xt:=S(P′t,Yt)−S(P′t,f(Pt)), where P′t is either Pt or f(Pt). First, by Proposition 10, E[Xt]=E[S(P′t,Yt)−S(P′t,f(Pt))]=0. Second, by assumption,  suptVar(S(P′t,f(Pt)))<∞.Hence, also  suptVar(S(P′t,f(Pt)))=suptVar(E[S(P′t,f(Pt))∣Pt])≤suptVar(S(P′t,f(Pt)))<∞.It follows that also suptVar(Xt)<∞. Third, we know that Yt is independent of Pt′ and Yt′ for t>t′, conditional on Pt. Moreover, P′t is constant given Pt. Hence, given Pt, also Xt=S(P′t,Yt)−S(P′t,f(Pt)) is independent of Xt′. Moreover,  E[Xt∣Pt]=E[S(P′t,Yt)−S(P′t,f(Pt))∣Pt]=S(P′t,f(Pt))−S(P′t,f(Pt))=0.It follows for t>t′ that  Cov(Xt,Xt′)=E[XtXt′]=E[E[XtXt′∣Pt]]=E[E[Xt∣Pt]E[Xt′∣Pt]]=0.This shows all conditions of the theorem and thus shows that  limt→∞1TT∑t=1Xt=0almost surely. Now we turn to the limit of 1T∑Tt=1S(P′t,f(Pt)). By assumption, supt|S(P′t,f(Pt))|<∞, so this limit exists and is finite. Thus, almost surely  limT→∞1TT∑t=1S(P′t,f(Pt))=limT→∞1TT∑t=1S(P′t,Yt)−Xt=limT→∞1TT∑t=1S(P′t,Yt)−limT→∞1TT∑t=1Xt=limT→∞1TT∑t=1S(P′t,Yt).Using Proposition 10, it follows that almost surely  limT→∞1TRegret(T)=limT→∞1TT∑t=1S(f(Pt),Yt)−S(Pt,Yt)=limT→∞1TT∑t=1S(f(Pt),Yt)−limT→∞1TT∑t=1S(Pt,Yt)  =limT→∞1TT∑t=1S(f(Pt),f(Pt))−limT→∞1TT∑t=1S(Pt,f(Pt))=limT→∞1TT∑t=1S(f(Pt),f(Pt))−S(Pt,f(Pt)).Turning to the "in particular" part, note that this limit is finite by the above, and it is nonnegative since S is assumed to be proper. Moreover, it follows that almost surely  limsupT→∞1TRegret(T)=limT→∞1TRegret(T)≥0. Thus, almost surely limsupT→∞1TRegret(T)≤0 if and only if limT→∞1T∑Tt=1S(f(Pt),f(Pt))−S(Pt,f(Pt))=0. This concludes the proof. Appendix C. Proof of Theorem 12 =============================== We begin by proving a lemma. **Lemma 18**. *Let*φ,ψ:N→[0,∞)*and assume there exists a constant*C>0*such that for all*t∈N,*we have*ψ(t)≤C.*Assume that for any*ϵ>0*, there exists*δ>0*such that if*ψ(t)>ϵ*for any*t∈N*, then*φ(t)>δ*. Then* limT→∞1TT∑t=1φ(t)=0⇒limT→∞1TT∑t=1ψ(T)=0.*Proof.* We prove the contrapositive. That is, we assume that there exists some constant c>0 such that there are infinitely many T∈N such that 1T∑Tt=1ψ(T)>c. Let T be the set of such T. We show that then there exists a constant c′>0 such that for infinitely many T, 1T∑Tt=1φ(T)>c′. Let T∈T. Since by assumption 1T∑Tt=1ψ(T)>c, it follows that ∑Tt=1ψ(T)c>T. Let C′:=max{C,1/c}+1. Since ψ(T)<C′ it must be ψ(T)>c2C for more than c2C fraction of the times t≤T. Otherwise, it would be  T∑t=1ψ(T)c≤T(Ccc2C+(1−c2C)ϵ2C)≤T(12+ϵ2C)<T.By assumption, this gives us a δ>0 such that whenever ψ(T)>ϵ:=c2C, also φ(T)>δ. In particular, this applies to at least ϵ fraction of t≤T. Hence, it follows that for any T∈T,  T∑t=1φ(t)≥δϵT. This shows that there are infinitely many T such that 1T∑Tt=1φ(t)>δϵ and thus concludes the proof. ◻ Now we turn to the main proof. *Proof of Theorem 12.* Let (Pt)t be the sequence of the agent's predictions. Assume S is strictly proper, assume that suptVar(S(P′t,Yt))<∞ for P′t∈{Pt,f(Pt)}, and assume that there exists a compact set C⊆[0,1] such that Pt∈C for all t, and S(p,f(p)), S(f(p),f(p)), and f(p) are continuous in p at any p∈C. To begin, note that continuity of S(p,f(p)) and S(f(p),f(p)) implies that both are also bounded on C and thus supt|S(P′t,f(Pt))|<∞ for P′t∈{Pt,f(Pt)}. Hence, by our assumptions, the conditions for Proposition 11 are satisfied. "⇒". Assume Regret(T) is sublinear. We want to show that then ∑Tt=1|f(Pt)−Pt| is sublinear. To do this, we will apply Lemma 18. To begin, define φ(t):=S(f(Pt),f(Pt))−S(Pt,f(Pt)) and note that φ(t)≥0 since S is proper. By Proposition 11, it follows that if Regret(T) is sublinear, also ∑Tt=1φ(t) is sublinear almost surely. For brevity, we omit the "almost surely" qualification in the following. Next, define ψ(t):=|f(Pt)−Pt|, and note that 0≤ψ(t)≤1. Next, let ϵ>0 arbitrary. To apply Lemma 18 to φ and ψ, it remains to show that there exists δ>0 such that whenever ψ(t)≥ϵ, then φ(t)≥δ. To that end, let  δ:=min{p∈C∣|p−f(p)|≥ϵ}S(f(p),f(p))−S(p,f(p)). Since f is continuous for any p∈C, the set {p∈C∣|p−f(p)|≥ϵ} is compact. Moreover, S(f(p),f(p)) and S(p,f(p)) are continuous by assumption, and thus the minimum is attained at some point ^p∈C. But since S is strictly proper, it follows δ=S(f(^p),f(^p))−S(^p,f(^p))>0. Hence, since Pt∈C for any t∈N, it follows that whenever φ(t)≥ϵ, it follows φ(t)=S(f(Pt),f(Pt))−S(Pt,f(Pt))≥δ.This shows all conditions for Lemma 18. Hence, we conclude that limt→∞1T∑Tt=1|f(Pt)−Pt|=0. "⇐". Let φ(t):=|f(Pt)−Pt| and ψ:=S(f(Pt),f(Pt))−S(Pt,f(Pt)). We assume that ∑Tt=1φ(t) is sublinear in T and want to show that then Regret(T) is sublinear as well. To do so, we will show that ∑Tt=1ψ(t) is sublinear using our lemma, and then the required statement follows again from Proposition 11. Now we have to show the conditions of the lemma. First, as before, φ(t)≥0 and ψ(t)≥0. Second, as noted in the beginning, we have suptψ(t)<∞ by our assumption that S(f(p),f(p)) and S(p,f(p)) are continuous on C. Now let ϵ>0 arbitrary. Assume that S(f(Pt),f(Pt))−S(Pt,f(Pt))>ϵ for some ϵ>0 and t∈N. Consider the set C′:={p∈C∣S(f(p),f(p))−S(p,f(p))≥ϵ}. Since S(f(p),f(p)) and S(p,f(p)) are continuous on C by assumption, this set is compact. Moreover, the function p∈C↦|p−f(p)| is continuous since f is continuous on C by assumption. Hence, the minimum δ:=minp∈C′|p−f(p)| is attained at some point ^p∈C′. Now, if δ=0, we would have ^p=f(^p) and thus  S(f(^p),f(^p))−S(^p,f(^p))=S(^p,^p)−S(^p,^p)=0<ϵ, which is a contradiction. Hence, δ>0. Since Pt∈C, it follows from S(f(Pt),f(Pt))−S(Pt,f(Pt))≥ϵ, for t∈N that |Pt−f(Pt)|≥δ. This shows the third condition for the lemma. We can thus conclude that limT→∞1T∑Tt=1ψ(t)=0. Using Proposition 11, this concludes the proof. ◻ Appendix D. Armstrong's backwards-facing oracle =============================================== Here, we analyze a version of [Armstrong (2018)](https://www.alignmentforum.org/posts/hJaJw6LK39zpyCKW6/standard-ml-oracles-vs-counterfactual-ones)'s backwards-facing oracle. This is a version of the online learning setup from Section 5 in which the agent’s prediction Pt+1 is trained to minimize the average historical loss, Lt(p):=1tt∑t′=1−S(p,Yt′).We consider training via gradient descent and let Pt+1=Π(Pt−α∇pLt(Pt)) for some learning rate α. In the following, we show that, if this learning scheme converges to a point p∗∈(0,1), then ∇p(S(p∗,⊥f(p∗)))=0. Afterwards, we conclude from Proposition 7 that p∗ must be a fixed point (assuming that g′(p∗)≠0).  **Proposition 19**. *Assume*Pt+1=Π(Pt−α∇pLt(Pt))*for*t∈N*are the agents predictions and*limt→∞Pt=p∗*almost surely for some*p∗∈(0,1).*Assume that*f*is continuous and that*∂1S(p,y)*exists and is continuous for any*p∈(0,1)*and*y∈{0,1}*. Then*∇p(S(p∗,⊥f(p∗)))=0. *Proof.* To begin, note that since p∗∈(0,1) we can choose a closed interval I⊆(0,1) such that p∗∈int(I). By assumption, ∂1S(p,y) is continuous and thus bounded for p∈I,y∈{0,1}. For each t, let Qt be the projection of Pt onto I. Finally, let L(p)=−S(p,f(p∗)). We will show the following: 1. E[∇Lt(Qt)]→0 as t→∞ 2. for all p∈(0,1), E[∇Lt(p)]→∇L(p) as t→∞ 3. E[∇Lt(Qt)]→∇L(p∗) as t→∞ from which it follows that ∇L(p∗)=0. **1. “**E[∇Lt(Qt)]→0**as**t→∞**”.** Note that there almost surely exists some T such that Pt∈(0,1) and thus also ∇Lt(Pt)=Pt+1−Ptα for all t≥T. Since Pta.s.−−→p∗ as t→∞, it follows that ∇Lt(Pt)a.s.−−→0 as t→∞. Moreover, we have that almost surely Pt∈I for t sufficiently large, so that almost surely Pt=Qt and ∇Lt(Pt)=∇Lt(Qt) for t sufficiently large. Thus, almost surely,  limt→∞∇Lt(Qt)=limt→∞∇Lt(Pt)=0.Finally, since ∇Lt(Qt) is bounded, we have by the dominated convergence theorem that ∇Lt(Qt)L1−→0 and as a consequence, limt→∞E[∇Lt(Qt)]=0.**2. “for all**p∈(0,1)**,**E[∇Lt(p)]→∇L(p)**as**t→∞**”.** We have that  E[∇Lt(p)]=−1tt∑t′=1E[∂1S(p,Yt′)]=−1tt∑t′=1E[E[∂1S(p,Yt′)|Pt′]]=−1tt∑t′=1E[(1−f(Pt′))∂1S(p,0)+f(Pt′)∂1S(p,1)]=−(1−∑tEf(Pt′)t)∂1S(p,0)−∑tEf(Pt′)t∂1S(p,1)Since f is continuous, we have f(Pt)a.s.−−→f(p∗). Then, by compactness, we have that f is bounded on [0,1]. Finally, by the dominated convergence theorem, we may conclude Ef(Pt)→f(p∗) as t→∞. As a consequence, 1t∑t′≤tEf(Pt′)→f(p∗) as t→∞. Thus,  limt→∞E[∇Lt(p)]=−(1−f(p∗))∂1S(p,0)−f(p∗)∂1S(p,1)=−∂1S(p,f(p∗))=∇L(p)**3. “**E[∇Lt(Qt)]→∇L(p∗)**as**t→∞**”.** Note that  |∇Lt(Qt)−∇Lt(p∗)|≤maxy|∂1S(Qt,y)−∂1S(p∗,y)|a.s.−−→0as t→∞, since ∂1S(p,1) and ∂1S(p,0) are both continuous functions of p on I. Finally, by the dominated convergence theorem and our second result, limt→∞E[∇Lt(Qt)]=limt→∞E[∇Lt(p∗)]=∇L(p∗).And we are done. ◻ **Corollary 20**. *Assume*S*is proper and let*G,g*as in the Gneiting and Raftery characterization (Theorem 1) be continuously differentiable at any*p∈(0,1).*Assume*f*is continuous,*Pt*as defined above converges to some prediction*p∗∈(0,1)*almost surely, and*g′(p∗)≠0.*Then*p∗*is a fixed point of*f*.* *Proof.* By Proposition 7, we have ∂1S(p,y)=g′(p)(y−p) for any y∈{0,1},p∈(0,1). Thus, since G′′=g′ is continuous by assumption, also ∂1S(p,y) is continuous. Hence, by Proposition 19, it follows that ∇p(S(p∗,⊥f(p∗)))=0. By Proposition 7, it follows that p∗ is a fixed point of f. ◻
d174434a-7db1-48d6-b752-a980dcfe4a88
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Infra-Bayesian physicalism: proofs part II *This post is an appendix to "[Infra-Bayesian physicalism: a formal theory of naturalized induction](https://www.alignmentforum.org/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized)".* **Proposition 2.16:** *Consider some Γ0.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} , Γ1, Φ and Θ∈□c(Γ0×Γ1×Φ). Define π:elΓ0×Γ1→elΓ0 by π(y,z,α):=(y,{y′∈Γ0∣(y′,z)∈α}). Then,* (π×idΦ)∗BrΓ0×Γ1(Θ)⊆prelΓ0×ΦBrΓ0(Θ)⊆BrΓ0(prΓ0×ΦΘ) We can prove the second subset inclusion directly as a corollary of Proposition 2.10, just let the t of Proposition 2.10 be the projection function Γ1×Φ→Φ, so that just leaves the first subset inclusion direction. If you've seen the proofs so far, you know we do a thing where we try to show subset inclusion with expectations of functions and inequalities instead. And that the proofs all proceed by transforming the expectations until we get a maximum over contribution expectation values, and that's always where the hard part of proving the inequalities shows up. So, let's just get that part over with, an interested reader can work it out with previous proofs as a guide. Unusually, we'll be keeping track of identity functions here. Plugging in some f, and doing our usual activities to get every term into the appropriate form, we can get this result if we manage to show that maxθ′∈BrΓ0×Γ1(Θ)(π×idΦ)∗θ′(λy0α0x.f(y0,x,α0)) ≤maxθ∈BrΓ0(Θ)prelΓ0×Φθ(λy0α0x.f(y0,x,α0)) So, to establish this, we'll show that, given some θ′∈BrΓ0×Γ1(Θ), we have (π×idΓ1×Φ)∗(θ′)∈BrΓ0(Θ), and that prelΓ0×Φ((π×idΓ1×Φ)∗θ′)=(π×idΦ)∗θ′ because, if we show that, then it means that BrΓ0(Θ) is a rich enough set for the right-hand side of the equation to match anything the left-hand-side can put out. First off, prelΓ0×Φ((π×idΓ1×Φ)∗θ′)=(π×idΦ)∗θ′ is pretty trivial to show. The only difference between the two processes is that the Γ1 coordinate of θ′ is discarded immediately on the right-hand-side, and it's preserved for one step and then discarded on the second step for the left-hand-side. Now for our inequality of interest. Let θ′∈BrΓ0×Γ1(Θ), and we're trying to show that (π×idΓ1×Φ)∗(θ′)∈BrΓ0(Θ) First off, showing the support condition for (π×idΓ1×Φ)∗(θ′) which is somewhat nontrivial this time around. We start off with a guarantee that (y0,y1)∈α. This happens iff y0∈{y′0|(y′0,y1)∈α}=π(y0,y1,α)2Γ0 And so, we get that y0∈α0 is guaranteed for that pushforward, support condition established. endofunction condition time. It's important to remember that we want to treat elΓ0 as the computation side of things, and Γ1×Φ as the environment side of things, for our bridge transform we're working with. s:Γ0→Γ0 and g:Γ0×Γ1×Φ→[0,1]. Begin. (π×idΓ1×Φ)∗θ′(λy0y1α0x.χs(y0)∈α0g(s(y0),y1,x)) =θ′(λy0y1αx.χs(y0)∈π(y0,α,y1)2Γ0g(s(y0),y1,x)) Let's unpack precisely what that set is. =θ′(λy0y1αx.χs(y0)∈{y′0|(y′0,y1)∈α}g(s(y0),y1,x)) =θ′(λy0y1αx.χ(s(y0),y1)∈αg(s(y0),y1,x)) And we can rewrite the endofunction a little bit =θ′(λy0y1αx.χ(s×idΓ1)(y0,y1)∈αg((s×idΓ1)(y0,y1),x)) And finally apply our endofunction condition, since we've now got the function in a form that's treating y0,y1 as part of the computational universe... ≤Θ(λy0y1x.g(y0,y1,x)) And we're done, this establishes our desired result. ■ **Proposition 2.17:** *Br(Θ) is a continuous function of Θ.* The way this proof will work is by describing a composition of functions that makes Br(Θ) from Θ, and then showing that each of these functions is continuous, if elΓ×Φ is a finite set. Claim: The bridge transform of some Θ is equal to (using χelΓ to denote restricting an ultradistribution to the event y∈α and χ−1elΓ to denote the inverse of said function, mapping an ultradistribution on elΓ to the largest ultradistribution that could have produced it via restriction) χelΓ(⋂s:Γ→Γs∗(χ−1elΓ(ι∗(pr∗(Θ))))) Breaking down the unfamilar notation, the type of pr is elΓ×Φ→Γ×Φ, just the usual projection. That asterisk up top is pullback along that function. The type of ι is elΓ×Φ→Γ×2Γ×Φ. And s∗ is pullback along the function Γ×2Γ×Φ→Γ×2Γ×Φ given by (s,id2Γ,idΦ). Let's unpack the exact conditions that cause a θ to lie in the set χelΓ(⋂s:Γ→Γs∗(χ−1elΓ(ι∗(pr∗(Θ))))) First off, a θ is in this set iff it is supported over the event y∈α, and it lies in the set ⋂s:Γ→Γs∗(χ−1elΓ(ι∗(pr∗(Θ)))) Which occurs iff θ is supported over the event y∈α, and for all s:Γ→Γ, θ lies in the set s∗(χ−1elΓ(ι∗(pr∗(Θ)))) Which occurs iff θ is suported over the event y∈α, and for all s:Γ→Γ, s∗(θ) lies in the set χ−1elΓ(ι∗(pr∗(Θ))) Which occurs iff θ is supported over the event y∈α, and for all s:Γ→Γ, χelΓ(s∗(θ)) lies in the set ι∗(pr∗(Θ)) Now, ι is just doing a little bit of type conversion, so we're justified in ignoring it. Anways, the previous thing occurs iff θ is supported over the event y∈α, and for all s:Γ→Γ, pr∗(χelΓ(s∗(θ)))∈Θ. Which happens iff θ is supported over the event y∈α and for all s:Γ→Γ and g:Γ×Φ→[0,1], pr∗(χelΓ(s∗(θ)))(λyx.g(y,x))≤Θ(λyx.g(y,x)) However, unpacking the left-hand side, we get pr∗(χelΓ(s∗(θ)))(λyx.g(y,x)) =χelΓ(s∗(θ))(λyαx.g(y,x)) =s∗(θ)(λyαx.χy∈αg(y,x)) =θ(λyαx.χs(y)∈αg(s(y),x)) Which is the exact condition for θ to lie in the bridge transform. So, we have an equivalence. Now, since we've phrased the bridge transform as χelΓ(⋂s:Γ→Γs∗(χ−1elΓ(ι∗(pr∗(Θ))))) We just need to establish that when all the sets are finite, then pullbacks are continuous, pushforwards are continuous, un-restrictions are continuous, intersections are continuous, and restrictions are continuous. Then, this would just be a particularly fancy continuous function, and accordingly, if Θn limited to Θ, then Br(Θn) would limit to Br(Θ). Let's establish that when the sets are finite, pullbacks are continuous. Let g:X→Y, and Y and X be finite sets, and ψ∈□Y. Then, we have g∗(ψ)(λx.f(x)):=ψ(λy.maxx∈g−1(y)f(x)) With the convention that maximizing over the empty set produces a value of 0. That is an alternate phrasing of pullback. We can then go limn→∞d(g∗(ψn),g∗(ψ))=limn→∞supf:X→[0,1]|g∗(ψn)(f)−g∗(ψ)(f)| =limn→∞supf:X→[0,1]|ψn(λy.maxx∈g−1(y)f(x))−ψ(λy.maxx∈g−1(y)f(x))| ≤limn→∞suph:Y→[0,1]|ψn(h)−ψ(h)|=limn→∞d(ψn,ψ)=0 Admittedly, this isn't quite what our usual modified KR metric usually looks like. The reason we can do this is because we're just dealing with functions in [0,1], so the norm part of the modified KR metric doesn't matter, and since our sets are finite, we can say that all points are distance 1 from each other, so all functions are 1-Lipschitz, and then the two metrics coincide. So, pullback along any function is continuous. For pushforward, it's easy because, if ψ∈□X, then we've got limn→∞d(g∗(ψn),g∗(ψ))=limn→∞suph:Y→[0,1]|g∗(ψn)(h)−g∗(ψ)(h)| =limn→∞suph:Y→[0,1]|ψn(λx.h(g(x)))−ψ(λx.h(g(x)))| ≤limn→∞supf:X→[0,1]|ψn(f)−ψ(f)|=limn→∞d(ψn,ψ)=0 For showing restrictions continuous, for the set E⊆X that we're updating on, limn→∞d(χE(ψn),χE(ψ))=limn→∞supf:X→[0,1]|χE(ψn)(f)−χE(ψ)(f)| =limn→∞supf:X→[0,1]|ψn(χx∈Ef(x))−ψn(χx∈Ef(x))| ≤limn→∞supf:X→[0,1]|ψn(f)−ψ(f)|=limn→∞d(ψn,ψ)=0 For intersections... that will take a bit more work. We'll have to use the equivalent formulation of closeness, that ψn limits to ψ iff the Hausdorff distance between the corresponding sets (according to the generalized KR measure) limits to 0. So, our task is to assume that ψn limits to ψ, and ϕn limits to ϕ, and show that ψn∩ϕn limits to ψ∩ϕ. The bound we'll manage to prove is that d(ψn∩ϕn,ψ∩ϕ)≤|X|max(d(ψn,ψ),d(ϕn,ϕ)) Where |X| is the number of elements in the finite set X. Here's the basic argument. For any particular point in the set ψn, there's a nearby point in ψ (since the Hausdorff distance is low) with only ϵ measure moved around or deleted. So, in particular, if all the measure moved or deleted was just deleted from ψn instead, then that resulting contribution would be below the nearby contribution in ψ that we picked, and so it would lie in ψ as well due to downwards closure. So, in particular, if ψn and ψ only have a Hausdorff distance of ϵ, then, taking any contribution in ψn and subtracting ϵ measure from \emph{all points} (if possible, if not, just remove measure till you're at 0) is \emph{guaranteed} to make a point in ψ, and vice-versa. And a corollary of that is that, given any contribution in ψn∩ϕn, the "subtract max(d(ψn,ψ),d(ϕn,ϕ)) measure from each point" contribution is in ψ, also in ϕ, and at a maximum distance of |X|max(d(ψn,ψ),d(ϕn,ϕ)) from the original contribution. And this argument can be reversed to show that the limit of the intersections is the intersection of the limits (because hausdorff distance between the two goes to 0), so we do in fact have intersection being continuous. And that just leaves un-restricting. Again, this will take a Hausdorff-distance argument. Fixing some contribution in χ−1E(ψn), it can be broken down into an on-E part θn,E, and an off-E part θn,¬E. When you restrict to E, then θn,E∈ψn. Since ψn is within ϵ of ψ, there's some θE∈ψ that's within ϵ of θn,E. Then, let your point in χ−1E(ψ) be θE+θn,¬E (if there's slightly more than 1 measure there, delete ϵ measure from θn,¬E, or all the measure if there's less than ϵ present). It's close to θn,E+θn,¬E because θn,E is close to θE, the other component of it is unchanged, and maybe we deleted a little bit of excess measure which didn't do much. This line of argument shows that ψn being close to the limit ψ is sufficient to establish that the un-restriction of the two of them are comparably close together. So we have continuity for that, which is the last thing we needed. Since we wrote the bridge transform as a sequence of continuous functions, we know it's continuous (as long as all the involved sets are finite) ■ **Proposition 3.1:** *Let X be a finite poset, f:X→R and Θ∈□cX downward closed. Define fmax:X→R by fmax(x):=maxy≤xf(y). Observe that fmax is always non-decreasing. Then, Θ(f)=Θ(fmax).* Proof: Pick a θ′∈Θ s.t. θ′(fmax)=Θ(fmax). Ie, a maximizing contribution. Let k:X→X be defined as k:=λx.argmaxy≤xf(y). Ie, it moves a point down to somewhere below it where it can attain the highest value according to f. Now, consider k∗(θ′). It's present in Θ because Θ was, by assumption, downwards closed, and we just moved all the measure down. Now, we have Θ(f)=maxθ∈Θθ(f)≥k∗(θ′)(f)=θ′(λx.f(k(x)))=θ′(λx.f(argmaxy≤xf(y))) =θ′(λx.maxy≤xf(y))=θ′(fmax)=Θ(fmax)≥Θ(f) And so, all inequalities must be equalities, proving that Θ(fmax)≥Θ(f). In order, the connectives were: unpacking definitions, using downward closure to conclude that k∗(θ′)∈Θ, unpacking pushforwards, unpacking the definition of k, using that applying a function to the argmax of inputs to that function just makes the max of the function, folding the definition of fmax back up, using that θ′ was selected to maximize fmax, and applying monotonicity. Done! ■ **Proposition 4.1:** *Consider some Γ, Φ, a relation Q⊆Γ×Φ and a PUCK Ξ over Q. Let Θ:=⊤Γ⋉Ξ. Then,* Br(Θ)=[⊤Γ⋉(susΘ⋊Ξ)]↓=[⊤Γ⋉(Q−1⋊Ξ)]↓ First off, I'm not terribly picky about variable ordering, so I'll just write our desired proof target as Br(Θ)=[⊤Γ⋉Ξ⋉susΘ]↓=[⊤Γ⋉Ξ⋉Q−1]↓ The way we'll do this is by establishing the following result. For all monotone f′:elΓ×Φ→[0,1], we have Br(Θ)(f′)≤[⊤Γ⋉Ξ⋉susΘ](f′)≤[⊤Γ⋉Ξ⋉Q−1](f′)≤Br(Θ)(f′) Why does that suffice? Well, assume hypothetically that the result held. Since the inequalities go in a circle, we have equality for all monotone functions. And then, for some non-monotone function f, we can go Br(Θ)(f)=Br(Θ)(fmax)=[⊤Γ⋉Ξ⋉susΘ](fmax) =[⊤Γ⋉Ξ⋉susΘ]↓(fmax)=[⊤Γ⋉Ξ⋉susΘ]↓(f) and swap out susΘ for Q−1 to show the other equality, and then we'd have equality of the three ultradistributions on all functions, so they're equal. For the equalities in the above equation, the first one arose because of Proposition 2.4 (bridge transforms are always downwards closed) and Proposition 3.1 (downwards-closed things let you swap out f for fmax and it doesn't affect the value). The second equality arose because fmax is a monotone function and by assumption, we have equality for monotone functions. The third equality would arise because taking the downwards closure doesn't affect the expectation value of monotone functions. If you add a bunch of contributions made by measure flowing down, that's just strictly worse from the perspective of a monotone function and doesn't change expectation value. And the fourth equality arises from Proposition 3.1 again. So, we just need to prove the following three inequalities, for monotone functions f. Br(Θ)(f)≤[⊤Γ⋉Ξ⋉susΘ])(f)≤[⊤Γ⋉Ξ⋉Q−1](f)≤Br(Θ)(f) The first one is easily addressable by Proposition 2.7. By proposition 2.7 and the definition of Θ, we have Br(Θ)⊆(Θ⋉susΘ)↓=[⊤Γ⋉Ξ⋉susΘ]↓ And so, for monotone functions f, we have Br(Θ)(f)≤[⊤Γ⋉Ξ⋉susΘ])(f) Done. Now to show our second inequality. (⊤Γ⋉Ξ⋉susΘ)(λyαx.f(y,x,α)) =(⊤Γ⋉Ξ)(λyx.δsusΘ(x)(λα.f(y,x,α))) =(⊤Γ⋉Ξ)(λyx.f(y,x,susΘ(x))) Unpack the definition of the set =(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|(y′,x)∈supp Θ})) Unpack the definition of Θ =(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|(y′,x)∈supp ⊤Γ⋉Ξ})) The condition (y′,x)∈supp ⊤Γ⋉Ξ is equivalent to x∈supp Ξ(y′). After all, if x∈supp Ξ(y′), the distribution δy′ lies in ⊤Γ, so δy′⋉Ξ would certify that (y′,x)∈supp ⊤Γ⋉Ξ. And if x∉supp Ξ(y′), then no matter the distribution in ⊤Γ or kernel selected from Ξ, if y′ gets picked, then the kernel selected from Ξ isn't going to be making x along with it. Since we have that iff characterization, we have =(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|x∈supp Ξ(y′)})) Ξ(y′) is the union of a bunch of k(y′) for k∈Π (and convex hull), so its support is equal to the union of the supports for the k(y′). =(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|x∈⋃k∈Πsupp k(y′)})) Then, since each k is a PoCK over Q, k(y′) is the restriction of some measure ϖk to the set Q(y), which will be written as χQ(y′)ϖk. =(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|x∈⋃k∈Πsupp (χQ(y′)ϖk)})) And now we're about to get an inequality. f is monotone, so making the associated set bigger (easier to fulfill the defining condition) should always increase the value of f, and by monotonicity, increase the expectation value, so we get ≤(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|x∈Q(y′)})) Then restate =(⊤Γ⋉Ξ)(λyx.f(y,x,{y′|(x,y′)∈Q})) =(⊤Γ⋉Ξ)(λyx.f(y,x,Q−1(x))) And pack back up as a semidirect product. =(⊤Γ⋉Ξ)(λyx.δQ−1(x)(λα.f(y,x,α))) =(⊤Γ⋉Ξ⋉Q−1)(λyαx.f(y,x,α)) And we have our second ≤ inequality established! Now, onto the third inequality. (⊤Γ⋉Ξ⋉Q−1)(λyαx.f(y,x,α)) Unpack the semidirect products =⊤Γ(λy.Ξ(y)(λx.δQ−1(x)(λα.f(y,x,α)))) And what top means =maxy∈ΓΞ(y)(λx.δQ−1(x)(λα.f(y,x,α))) And as for Ξ... well, each Ξ(y) is the convex hull of the various k(y), for k∈Π. So, the expectation for Ξ(y) is the maximum expectation for the various k(y), so we can rewrite as =maxy∈Γmaxk∈Πk(y)(λx.δQ−1(x)(λα.f(y,x,α))) Pick a particular y∗ and k∗ that attain the maximal value =k∗(y∗)(λx.δQ−1(x)(λα.f(y∗,x,α))) Reexpress a little bit =δy∗(λy.k∗(y)(λx.δQ−1(x)(λα.f(y,x,α))) And pack this back up as a semidirect product =(δy∗⋉k∗⋉Q−1)(λyαx.f(y,x,α)) And then we'll be showing that this contribution lies in Br(Θ). Once we've done that, we can go ≤maxθ′∈Br(Θ)θ′(λyαx.f(y,x,α)) =Br(Θ)(λyαx.f(y,x,α)) And we'd be done, having proven the third inequality and the last one to finish up the proof. So, now our proof target switches to showing that (δy∗⋉k∗⋉Q−1)∈Br(Θ). We can show this if we show the support condition and the endofunction condition. For the support condition, we have (δy∗⋉k∗⋉Q−1)(λyαx.χy∉α) =δy∗(λy.k∗(y)(λx.δQ−1(x)(λα.χy∉α))) =δy∗(λy.k∗(y)(λx.χy∉Q−1(x))) =k∗(y∗)(λx.χy∗∉Q−1(x)) And then we use that the k∗(y∗) are all of the form "take this measure, restrict it to Q(y∗)", to get =(χQ(y∗)ϖk∗)(λx.χy∗∉Q−1(x)) =ϖk∗(λx.χx∈Q(y∗)χy∗∉Q−1(x)) Unpacking the definitions, we get =ϖk∗(λx.χ(x,y∗)∈Qχ(x,y∗)∉Q)=0 And so, this contribution is indeed supported on (y,α) pairs s.t. y∈α. Now for the endofunction condition. As usual, fix an s and a g. (δy∗⋉k∗⋉Q−1)(λyαx.χs(y)∈αg(s(y),x)) Unpack the semidirect product =δy∗(λy.k∗(y)(λx.δQ−1(x)(λα.χs(y)∈αg(s(y),x)))) Plug in the dirac-deltas =k∗(y∗)(λx.χs(y∗)∈Q−1(x)g(s(y∗),x)) Reexpress the set membership criterion a bit =k∗(y∗)(λx.χx∈Q(s(y∗))g(s(y∗),x)) And the contribution at the start =(χQ(y∗)ϖk∗)(λx.χx∈Q(s(y∗))g(s(y∗),x)) Distribute it in as an indicator function. =ϖk∗(λx.χx∈Q(y∗)χx∈Q(s(y∗))g(s(y∗),x)) Pull the other indicator function out. =(χQ(s(y∗))ϖk∗)(λx.χx∈Q(y∗)g(s(y∗),x)) Rewrite with k∗ =k∗(s(y∗))(λx.χx∈Q(y∗)g(s(y∗),x)) Use an inequality to get rid of the indicator function ≤k∗(s(y∗))(λx.g(s(y∗),x)) Rewrite it a bit =δs(y∗)(λy.k∗(y)(λx.g(y,x))) Swap out k∗(y) for Ξ(y), the latter is larger ≤δs(y∗)(λy.Ξ(y)(λx.g(y,x))) Swap out δs(y∗) for ⊤Γ, the latter is larger ≤⊤Γ(λy.Ξ(y)(λx.g(y,x))) =(⊤Γ⋉Ξ)(λyx.g(y,x)) Abbreviate =Θ(λyx.g(y,x)) And bam, endofunction condition is shown, the entire proof goes through now. ■ **Corollary 4.3:** *Suppose that for any d∈D and π:H→A s.t. d∈supp W(π), it holds that dCπ. That is, the observations W predicts to receive from the computer are consistent with the chosen policy. Let L:D→R be a Cartesian loss function and π:H→A a policy. Then,* (prelΓBr(ΘW)∩Cπfair)(Lphys)=W(π;L) I'm going to be proceeding very cautiously here. First off, make our π value visually distinct by swapping it out for π∗ (prelΓBr(ΘW)∩Cπ∗fair)(Lphys) Now, by the identifications we made earlier, we can identify Γ with AH, the space of policies. Using that to unpack the function a little bit, we have =(prelΓBr(ΘW)∩Cπ∗fair)(λπα.Lphys(π,α)) Now, we note that intersecting with top of a particular set is equivalent to updating on the indicator function for that set. Using definition 1.5 to unpack Cπ∗fair, we get =(prelΓBr(ΘW))(λπα.χ∀h∈Hπ,α:Gπ(h)=π∗(h)Lphys(π,α)) Apply that Gπ(h) is "what would the agent do on h if the agent is copying the behavior of π", so we can rephrase as: =(prelΓBr(ΘW))(λπα.χ∀h∈Hπ,α:π(h)=π∗(h)Lphys(π,α)) Pull off the projection, and use d for a destiny in D. =Br(ΘW)(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h)Lphys(π,α)) At this point, we use that ΘW:=⊤Γ⋉W, and that W is a PUCK over Q0 and Proposition 4.1 to go =[⊤Γ⋉W⋉Q−10]↓(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h)Lphys(π,α)) Before we can remove the downwards closure, we'll want to verify the function is monotone. So, we'll want to start unpacking the physicalist loss next. Applying definition 3.1, and using d′ instead of g to remember it's a destiny, we have =[⊤Γ⋉W⋉Q−10]↓(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h)minha:ha∈Xπ,αmaxd′:ha⊑d′L(d′)) Next up is unpacking Xπ,α. Using definition 3.1, it's =[⊤Γ⋉W⋉Q−10]↓(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h) minha:ha∈Hπ,α×A∧(∀π′∈α:Gπ′(h)=a)maxd′′:ha⊑d′L(d′)) At this point, we can, again, treat Gπ′(h) the same as π′(h). =[⊤Γ⋉W⋉Q−10]↓(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h) minha:ha∈Hπ,α×A∧(∀π′∈α:π′(h)=a)maxd′:ha⊑d′′L(d′)) And now we need to take a moment to show that Hπ,α gets smaller when α gets larger. Applying definition 1.5, the event h∈Hπ,α unpacks as (∀h′a⊏h,π′∈α:Gπ′(h′)=a)∧(∃d′:h⊏d′∧d′Cπ) Now, if α becomes a larger set, then it gets harder for the first condition to be fulfilled, so the set Hπ,α shrinks. Now, since this happens, it means that if α gets bigger, it gets more difficult for the prerequisite of the implication in the indicator function to be fulfilled, so the implication is more likely to hold. Further, the minimization is taking place over a smaller set, so the loss goes up. So our function is monotone in α, and we can remove the downwards closure. =(⊤Γ⋉W⋉Q−10)(λπαd.χ∀h∈Hπ,α:π(h)=π∗(h) minha:ha∈Hπ,α×A∧(∀π′∈α:π′(h)=a)maxd′:ha⊑d′′L(d′)) Unpacking the semidirect product, it is =⊤Γ(λπ.W(π)(λd.δQ−10(d)(λα.χ∀h∈Hπ,α:π(h)=π∗(h) minha:ha∈Hπ,α×A∧(∀π′∈α:π′(h)=a)maxd′:ha⊑d′L(d′)))) Substituting in the dirac-delta everywhere that α is, we get =⊤Γ(λπ.W(π)(λd.χ∀h∈Hπ,Q−10(d):π(h)=π∗(h) minha:ha∈Hπ,Q−10(d)×A∧(∀π′∈Q−10(d):π′(h)=a)maxd′:ha⊑d′L(d′))) Now, Q−10(d) is the set of policies π′ s.t. π′Q0d. The "this policy is consistent with this destiny" relation. Also let's swap out ⊤Γ for maximization =maxπW(π)(λd.χ∀h∈Hπ,Q−10(d):π(h)=π∗(h) minha:ha∈Hπ,Q−10(d)×A∧(∀π′Q0d:π′(h)=a)maxd′:ha⊑d′L(d′)) Now, we're going to try to address that minimum, and show that the only ha that fulfill the conditions are exactly those ha⊑d. This requires showing that ha⊑d is a sufficient condition to fulfill the relevant properties, and then to show that ha⋢d implies a failure of one of the properties. So, first up. Assume ha⊑d. Then, for any π′, dQ0π′ and ha⊑d \emph{must} imply that π′(h)=a, that's what policy consistency means. Also, h∈Hπ,Q−10(d) unpacks as the two conditions ∀h′a′,π′:h′a′⊏h∧dQ0π′→π′(h′)=a′ ∃d′:h⊏d′∧d′Cπ As for the first condition,clearly, if π′ is consistent with d, it's consistent with ha because ha⊑d, and so it must be consistent with any prefix of ha, so the first condition holds. For the second condition, d is a valid choice, because we assumed ha⊑d, and dCπ occurs always, because W(π) always being supported on d s.t. dCπ was one of our problem assumptions. So, we have one implication direction down. Now for the reverse implication direction. Assume ha⋢d. Then there are two possibilities. the first possibility is that ha first diverges from d on an observation. The second possibility is that ha first diverges from d on an action. For the first possibility, it's possible to make two policies which are consistent with d but also differ in their actions on history h, because h isn't a prefix of d if ha first differs from d on an observation. For the second possibility, it's ruled out by either the condition for h∈Hπ,Q−10(d) that goes ∀h′a′,π′:h′a′⊏h∧π′Q0d→π′(h′)=a′ or the extra condition that ∀π′:π′Q0d→π′(h)=a applied to the first a-history prefix that deviates from d, because π′Q0d implies that π′(h′) must be the action which d dictates, not the action a′ that deviates from d. And that establishes the other direction of the iff statement. Thus, we can swap out our fancy minimization with just minimizing over the ha⊑d. =maxπW(π)(λd.χ∀h∈Hπ,Q−10(d):π(h)=π∗(h) minha:ha⊑dmaxd′:ha⊑d′L(d′)) This minimization is attained by selecting d itself. So then it turns into =maxπW(π)(λd.χ∀h∈Hπ,Q−10(d):π(h)=π∗(h)L(d)) At this point, what we'll do is show that an upper bound and lower bound on the value of this term is the same. Going from upper bound to lower bound, it's starting out with W(π∗)(λd.L(d)) At this point, we'll use that W is a PUCK, so there's a set E of environments e (PoCK's) that W is generated from, so we can go: =maxe∈Ee(π∗)(λd.L(d)) =maxπmaxe∈Ee(π∗)(λd.χdQ0πL(d)) =maxπmaxe∈E(χQ0(π∗)ϖe)(λd.χdQ0πL(d)) =maxπmaxe∈Eϖe(λd.χdQ0π∗χdQ0πL(d)) Now pull the indicator function back out. =maxπmaxe∈E(χQ0(π)ϖe)(λd.χdQ0π∗L(d)) =maxπmaxe∈Ee(π)(λd.χdQ0π∗L(d)) =maxπW(π)(λd.χdQ0π∗L(d)) Now we must show that this is a looser constraint than what was previously in our indicator function to proceed further. So our next order of business is showing that, certainly, ∀h∈Hπ,Q−10(d):π(h)=π∗(h)→dQ0π∗ Let h be one of the history prefixes of some d in the support of W(π). The two conditions for h∈Hπ,Q−10(d) are fulfilled, because they are ∀h′,a′,π′:h′a′⊏h∧dQ0π′→π′(h′)=a′ ∃d′:h⊏d′∧d′Cπ For the first condition, if h′a′⊏h, then h′a′⊏d, and so if π′ is consistent with d, it must take the same action in response to h′, the action that d commands, a′. So that's fulfilled. For the second condition, let d′ be d. h⊏d holds, and so dCπ holds certainly, because W(π) is supported on d s.t. dCπ. So, for all d in the support of W(π), h⊏d→h∈Hπ,Q−10(d). Since we assumed our forall statement as prerequisite, this means that for all h⊏d, π(h)=π∗(h). And dQ0π means ∀ha⊑d:π(h)=a. Since π∗(h) mimics π(h) for all history prefixes of d, this means ∀ha⊑d:π∗(h)=a, ie dQ0π∗. So, since this is a looser constraint, when we were previously at =maxπW(π)(λd.χdQ0π∗L(d)) we can proceed further to ≥maxπW(π)(λd.χ∀h∈Hπ,Q−10(d):π(h)=π∗(h)L(d)) Which is our value we're trying to sandwich. Now, at this point, plug in π∗ and get ≥W(π∗)(λd.χ∀h∈Hπ∗,Q−10(d):π∗(h)=π∗(h)L(d)) =W(π∗)(λd.L(d)) And bam, we've sandwiched our term between W(π∗)(L) on both sides, and so the result follows. ■ **Proposition 4.2:** *Let X, Y and Z be finite sets, Q⊆Y×X a relation, κ:Y→Δc(X×Z) a Z-PoCK over Q and Θ∈□cY. Then, there exist μ∈ΔcZ and ϕ:Z×Y→ΔcX s.t. for all z, λy.ϕ(z,y) is a PoCK over Q, and for all y∈Y, κ(y)=(λz.ϕ(z,y))⋊μ. Moreover, suppose that (μ1,ϕ1) and (μ2,ϕ2) are both as above. Then,* μ1⋉Θ⋉ϕ1=μ2⋉Θ⋉ϕ2 Our first order of business is establishing that there's even a μ and ϕ that has those effects at all. Here's a way to define them. μ(z):=maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′) Where ϖκ is the measure on Z×X that κ is associated with, ie, κ(y)=χQ(y)ϖκ must be true for some ϖκ because κ is a Z-PoCK over Q. And, ϕ will be defined as: ϕ(y,z)(x):=χx∈Q(y)ϖκ(z,x)maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′) With those definitions in place, it's easy to establish that μ⋉(λz.ϕ(y,z))=κ(y). We can just fix an arbitrary x,z pair and go κ(y)(x,z)=χx∈Q(y)ϖκ(z,x)=maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′)⋅χx∈Q(y)ϖκ(z,x)maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′) =μ(z)⋅ϕ(y,z)(x)=(μ⋉(λz.ϕ(y,z)))(x,z) And we're done with showing that such functions exist in the first place. Well, as long as we check that μ and ϕ behave accordingly. First off, μ being a contribution follows from ϖκ being a Z-polycontribution, and the definition of Z-polycontributions. Also, to show that (λy.ϕ(y,z)) is a PoCK over Q, we need to show that there's a ϖϕ,z s.t. ϕ(y,z)=χQ(y)ϖϕ,z, and that always has 1 or less measure. In order to do this, define ϖϕ,z:=1maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′)prX(χ{z}×Xϖκ) Clearly, you get ϕ(y,z) from restricting this to Q(y), because we have (χQ(y)ϖϕ,z)(x)=1maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′)χQ(y)(prX(χ{z}×Xϖκ))(x) =χQ(y)(prX(χ{z}×Xϖκ))(x)maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′)=χx∈Q(y)prX(χ{z}×Xϖκ)(x)maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′) =χx∈Q(y)∑z′(χ{z}×Xϖκ)(x,z′)maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′)=χx∈Q(y)∑z′χz′=zϖκ(x,z′)maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′) =χx∈Q(y)ϖκ(x,z)maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′)=ϕ(y,z)(x) And we're done. And also, the measure is ≤1, because ∑x∈Q(y)ϖϕ,z(x)=∑x∈Q(y)prX(χ{z}×Xϖκ)(x)maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′) and, skipping over a few routine steps, =∑x∈Q(y)ϖκ(x,z)maxy′∈Y∑x′∈Q(y′)ϖκ(x′,z) ≤∑x∈Q(y)ϖκ(x,z)∑x′∈Q(y)ϖκ(x′,z)=1 And we're done, we figured out how to decompose κ into μ and ϕ. Now for the second half of the proof. The first thing to establish is that, for all y,z, we have μ1(z)⋅ϕ1(y,z)=μ2(z)⋅ϕ2(y,z). This occurs because, for all x, μ1(z)⋅ϕ1(y,z)(x)=(μ1⋉(λz.ϕ1(y,z)))(x,z)=κ(y)(x,z) And then by symmetry, the exact same holds for μ2 and ϕ2, both were declared to be equal to κ. Now that this result is in place, we can begin. (μ1⋉Θ⋉ϕ1)(λxyz.f(x,y,z)) =μ1(λz.Θ(λy.ϕ1(y,z)(λx.f(x,y,z)))) Now, we do something odd. We can reexpress this as =C(λz.μ1(z)⋅Θ(λy.ϕ1(y)(λx.f(x,y,z)))) Basically, what's going on here is that we can swap out the contribution μ1 for the counting measure C (1 measure on each distinct point) and just scale down the expectation values accordingly. It's pretty much the same way that you can think of ∑xμ(x)f(x) (expectation of f w.r.t μ) as ∑x1⋅μ(x)f(x) (expectation of μ⋅f w.r.t the counting measure). Now, since Θ is homogenous, we can move constants in or out of it, to get =C(λz.Θ(λy.μ1(z)⋅ϕ1(y,z)(λx.f(x,y,z)))) Now, at this point, we can use that μ1(z)⋅ϕ1(y,z)=μ2(z)⋅ϕ2(y,z), to get =C(λz.Θ(λy.μ2(z)⋅ϕ2(y,z)(λx.f(x,y,z)))) And just back up and reverse everything. =C(λz.μ2(z)Θ(λy.ϕ2(y,z)(λx.f(x,y,z)))) =μ2(λz.Θ(λy.ϕ2(y)(λx.f(x,y,z)))) =(μ2⋉Θ⋉ϕ2)(λxyz.f(x,y,z)) And we're done! ■ **Lemma 4:** *Let X, Y and Z be finite sets, Q⊆Y×X a relation, Ξ1,Ξ2:Y→□c(X×Z) Z-PUCKs over Q, Θ∈□cY and p∈[0,1]. Then, pΞ1+(1−p)Ξ2 is also a Z-PUCK over Q, and* Θ∗(pΞ1+(1−p)Ξ2)⊆p(Θ∗Ξ1)+(1−p)(Θ∗Ξ2) Our first order of business is establishing that the mix of Z-PUCK's over Q is a Z-PUCK over Q. Here's what we'll do. We'll define a family of kernels, show that they're all Z-PoCK's, and that said family makes a Z-PUCK that's equal to the mix of Ξ1 and Ξ2. So, let Π1 be the set of Z-PoCK's associated with Ξ1, and Π2 be the set of Z-PoCK's associated with Ξ2. Elements of these sets are κ1 and κ2. Define Π as {pκ1+(1−p)κ2|κ1∈Π1,κ2∈Π2}. By Definition 4.5, in order to establish that these are Z-PoCK's over Q, we need to make an appropriate choice of ϖ. In particular, the ϖ associated with κ=pκ1+(1−p)κ2 is ϖκ:=pϖκ1+(1−p)ϖκ2. It fufills definition 4.5 because κ(y)(x,z)=(pκ1+(1−p)κ2)(y)(x,z)=pκ1(y)(x,z)+(1−p)κ2(y)(x,z) =p(χQ(y)ϖκ1)(x,z)+(1−p)(χQ(y)ϖκ2)(x,z)=(pχQ(y)ϖκ1+(1−p)χQ(y)ϖκ2)(x,z) =χQ(y)(pϖκ1+(1−p)ϖκ2)(x,z)=χQ(y)ϖκ(x,z) By unpacking our definition, using how mixes of kernels work, applying definition 4.5 for κ1 and κ2, and then just doing some simple regrouping and packing the definition back up, we get our result. But wait, we still need to show that ϖκ is a Z-Polycontribution on Q. Again, this isn't too hard to show, with Definition 4.4. ∑z∈Zmaxy∈Y∑x∈Q(y)ϖκ(x,z)=∑z∈Zmaxy∈Y∑x∈Q(y)(pϖκ1+(1−p)ϖκ2)(x,z) =∑z∈Zmaxy∈Y∑x∈Q(y)(pϖκ1(x,z)+(1−p)ϖκ2(x,z)) =∑z∈Zmaxy∈Y⎛⎝p∑x∈Q(y)ϖκ1(x,z)+(1−p)∑x∈Q(y)ϖκ2(x,z)⎞⎠ ≤∑z∈Z⎛⎝pmaxy∈Y∑x∈Q(y)ϖκ1(x,z)+(1−p)maxy∈Y∑x∈Q(y)ϖκ2(x,z)⎞⎠ =p∑z∈Zmaxy∈Y∑x∈Q(y)ϖκ1(x,z)+(1−p)∑z∈Zmaxy∈Y∑x∈Q(y)ϖκ2(x,z)≤p⋅1+(1−p)⋅1=1 And bam, we have our inequality demonstrated, everything works out. Now we just need to show that this family of Z-PoCK's makes the Z-PUCK that's the mixture of the two. We'll establish equality by showing equality for all functions and all y. Ξ(y)(f)=maxκ∈Πκ(y)(f)=maxκ∈{pκ1+(1−p)κ2|κ1∈Π1,κ2∈Π2}κ(y)(f) =maxκ1∈Π1,κ2∈Π2(pκ1+(1−p)κ2)(y)(f)=maxκ1∈Π1,κ2∈Π2pκ1(y)(f)+(1−p)κ2(y)(f) =pmaxκ1∈Π1κ1(y)(f)+(1−p)maxκ2∈Π2κ2(y)(f)=pΞ1(y)(f)+(1−p)Ξ2(y)(f) =(pΞ1+(1−p)Ξ2)(y)(f) Done, we've shown equality of the Z-PUCK with the mixture of other Z-PUCKs, establishing that the mixture of Z-PUCKs is a Z-PUCK. That leaves establishing our relevant inequality. But before we do that, we'll be wanting a nice handy form for that asterisk operator to manipulate things with. Given some κ that's a Z-PUCK over Q, remember from the previous proof that a valid choice for μ and ϕ to break κ down is μ(z):=maxy′∈Y∑x′∈Q(y′)ϖκ(z,x′) and, abbreviating things a little bit, we have ϕ(y,z)(x):=χx∈Q(y)ϖκ(z,x)μ(z) So, we can get a pleasant-to-manipulate form for Θ∗κ as follows. (Θ∗κ)(λxyz.f(x,y,z))=(μ⋉Θ⋉ϕ)(λxyz.f(x,y,z)) =μ(λz.Θ(λy.ϕ(y,z)(λx.f(x,y,z)))) And proceed further =∑z∈Zμ(z)⋅Θ(λy.ϕ(y,z)(λx.f(x,y,z))) =∑z∈Zμ(z)⋅Θ(λy.∑x∈Xϕ(y,z)(x)⋅f(x,y,z)) =∑z∈Zμ(z)⋅Θ(λy.∑x∈Xχx∈Q(y)ϖκ(z,x)μ(z)⋅f(x,y,z)) And then we move the constant into Θ since it's homogenous, and then into the sum, and it cancels out with the fraction. =∑z∈ZΘ(λy.∑x∈Xχx∈Q(y)⋅ϖκ(z,x)⋅f(x,y,z)) =∑z∈ZΘ(λy.∑x∈Q(y)ϖκ(z,x)⋅f(x,y,z)) =∑z∈ZΘ(λy.χQ(y)×{z}ϖκ(λx′z′.f(x′,y,z′))) This general form will be used whenever we need to unpack Θ∗κ. Now, let's get started on the proof of our subset inclusion thingy. As usual, Π will be the set {pκ1+(1−p)κ2|κ1∈Π1,κ2∈Π2}, and as we've shown, that's the set of Z-PoCK's associated with pΞ1+(1−p)Ξ2. Also, as we've already shown, the associated Z-polycontribution ϖκ for κ=pκ1+(1−p)κ2 is pϖκ1+(1−p)ϖκ2. This will be implicitly used in the following. (Θ∗(pΞ1+(1−p)Ξ2))(λxyz.f(x,y,z))=maxκ∈Π(Θ∗κ)(λxyz.f(x,y,z)) Now we use our preferred unpacking of how that asterisk operator works. =maxκ∈Π∑z∈ZΘ(λy.χQ(y)×{z}ϖκ(λx′z′.f(x′,y,z′))) And unpack κ and ϖκ appropriately. =maxκ1∈Π1,κ2∈Π2∑z∈ZΘ(λy.χQ(y)×{z}(pϖκ1+(1−p)ϖκ2)(λx′z′.f(x′,y,z′))) =maxκ1∈Π1,κ2∈Π2∑z∈ZΘ(λy.pχQ(y)×{z}ϖκ1(λx′z′.f(x′,y,z′)) +(1−p)χQ(y)×{z}ϖκ2(λx′z′.f(x′,y,z′))) At this point, we use convexity of Θ, since it's an ultradistribution. ≤maxκ1∈Π1,κ2∈Π2∑z∈Z(pΘ(λy.(χQ(y)×{z}ϖκ1)(λx′z′.f(x′,y,z′))) +(1−p)Θ(λy.(χQ(y)×{z}ϖκ2)(λx′z′.f(x′,y,z′)))) =maxκ1∈Π1,κ2∈Π2(p∑z∈ZΘ(λy.(χQ(y)×{z}ϖκ1)(λx′z′.f(x′,y,z′))) +(1−p)∑z∈ZΘ(λy.(χQ(y)×{z}ϖκ2)(λx′z′.f(x′,y,z′)))) At this point, you can pack up things. =maxκ1∈Π1,κ2∈Π2p(Θ∗κ1)(λxyz.f(x,y,z))+(1−p)(Θ∗κ2)(λxyz.f(x,y,z)) =pmaxκ1∈Π1(Θ∗κ1)(λxyz.f(x,y,z))+(1−p)maxκ2∈Π2(Θ∗κ2)(λxyz.f(x,y,z)) =p(Θ∗Ξ1)(λxyz.f(x,y,z))+(1−p)(Θ∗Ξ2)(λxyz.f(x,y,z)) =(p(Θ∗Ξ1)+(1−p)(Θ∗Ξ2))(λxyz.f(x,y,z)) Done! ■ **Proposition 4.3:** *Let X, Y and Z be finite sets, Q⊆Y×X a relation, κ1,κ2:Y→Δc(X×Z) Z-PoCKs over Q, Θ∈□cY and p∈[0,1]. Then, pκ1+(1−p)κ2 is also a Z-PoCK over Q, and* Θ∗(pκ1+(1−p)κ2)⊆p(Θ∗κ1)+(1−p)(Θ∗κ2) Use Lemma 4, along with Z-PoCKs being a special case of Z-PUCKs. **Proposition 4.4:** *Let X, Y and Z be finite sets, Q⊆Y×X a relation and Ξ:Y→□c(X×Z) a Z-PUCK over Q. Denote Θ:=⊤Y∗Ξ. Define β0,β1:Z×Y×X→2Z×Y by β0(z,y,x):={z}×Q−1(x), \textbf{β1(z,y,x):=Z×Q−1(x)}. Then* (Θ⋉β0)↓⊆Br(Θ)⊆(Θ⋉β1)↓ Proof: As usual, when establishing inequalities with downwards closures, we only have to verify the result for monotone functions. So, we may assume that f is monotone, and attempt to show that (Θ⋉β0)(λxyzα.f(x,y,z,α))≤BrZ×Y(Θ)(λxyzα.f(x,y,z,α)) ≤(Θ⋉β1)(λxyzα.f(x,y,z,α)) Remember, bridge transforms cash out as a maximum over contributions, so to show the first inequality, we'll need to build a contribution that matches or exceeds that first term, and that lands in the bridge transform of Θ. For the second inequality, it's considerably easier, we just use our Lemma 2 to figure out what sort of sets the bridge transform is supported on, swap out the sets it's supported on for a bigger set upper bound, and bam, monotonicity of f takes over from there. From there, it's easy to show the second inequality. Let's unpack that first thing (Θ⋉β0)(λxyzα.f(x,y,z,α)) =Θ(λxyz.δβ0(x,z)(λα.f(x,y,z,α))) =Θ(λxyz.f(x,y,z,β0(x,z))) And at this point we unpack what Θ is. =(⊤Y∗Ξ)(λxyz.f(x,y,z,β0(x,z))) And the Ξ. =maxκ∈Π(⊤Y∗κ)(λxyz.f(x,y,z,β0(x,z))) And then, κ can be broken down into some μκ and ϕκ, and that goes on both sides of ⊤Y as our previous proposition shows. =maxκ∈Π(μκ⋉⊤Y⋉ϕκ)(λxyz.f(x,y,z,β0(x,z))) =maxκ∈Πμκ(λz.⊤Y(λy.ϕκ(y,z)(λx.f(x,y,z,β0(x,z))))) =maxκ∈Πμκ(λz.maxyϕκ(y,z)(λx.f(x,y,z,β0(x,z)))) Now we can start filling in some data. There's a maximizing κ∗, so we can substitute that in. That gives us a canonical choice for what μκ∗ and ϕκ∗ are. Making that substitution, =μκ∗(λz.maxyϕκ∗(y,z)(λx.f(x,y,z,β0(x,z)))) And then, let d:Z→Y be the function mapping each particular z to the y which maximizes ϕκ∗(y,z)(λx.f(x,y,z,β0(x,z))). This lets us reexpress things as =μκ∗(λz.ϕκ∗(d(z),z)(λx.f(x,d(z),z,β0(x,z)))) And now, we can start unpacking things a bit. =μκ∗(λz.δd(z)(λy.ϕκ∗(y,z)(λx.f(x,y,z,β0(x,z))))) =μκ∗(λz.δd(z)(λy.ϕκ∗(y,z)(λx.δβ0(x,z)(λα.f(x,y,z,α))))) And now we can write things as just a giant semidirect product. =(μκ∗⋉d⋉ϕκ∗⋉β0)(λxyzα.f(x,y,z,α)) Now we'll show that this particular contribution lies in Br(Θ). Checking the support condition, we want to check for sure that y,z∈α, ie, the event y,z∉α has measure 0. Let's begin. (μκ∗⋉d⋉ϕκ∗⋉β0)(λxyzα.χy,z∉α) =μκ∗(λz.δd(z)(λy.ϕκ∗(y,z)(λx.δβ0(x,z)(λα.χy,z∉α)))) Substitute in the dirac-deltas. =μκ∗(λz.ϕκ∗(d(z),z)(λx.χd(z),z∉β0(x,z))) Unpack what β0(x,z) is. =μκ∗(λz.ϕκ∗(d(z),z)(λx.χd(z),z∉Q−1(x)×{z})) Now, z∈{z} always occurs, so that indicator function is the same as just testing whether d(z)∈Q−1(x). =μκ∗(λz.ϕκ∗(d(z),z)(λx.χd(z)∉Q−1(x))) Rephrasing things a little bit, =μκ∗(λz.ϕκ∗(d(z),z)(λx.χx∉Q(d(z)))) Then, from proposition 4.2, we remember that λy.ϕκ∗(y,z) is a PoCK over Q. Ie, for any particular y, ϕκ∗(y,z) looks like a particular measure (ϖκ∗,z) restricted to Q(y). So, in particular, ϕκ∗(d(z),z) must be supported over Q(d(z)). Put another way, with full measure, x∈Q(d(z)). So, this event failing has 0 measure. =μκ∗(λz.0)=0 And we're done with that support condition. Now to show the endofunction condition. As usual, we'll let s:Y×Z→Y×Z, and let g:X×Y×Z→[0,1]. Actually, for conceptual clarity, since s:Y×Z→Y×Z can be viewed as a pair of functions sY:Y×Z→Y and sZ:Y×Z→Z, we'll be using that formulation in our equation. (μκ∗⋉d⋉ϕκ∗⋉β0)(λxyzα.χsY(y,z),sZ(y,z)∈αg(sY(y,z),sZ(y,z),x)) =μκ∗(λz.δd(z)(λy.ϕκ∗(y,z)(λx.δβ0(x,z)(λα.χsY(y,z),sZ(y,z)∈αg(sY(y,z),sZ(y,z),x))))) As usual, we'll substitute in our dirac-deltas to simplify things. =μκ∗(λz.ϕκ∗(d(z),z)(λx.χsY(d(z),z),sZ(d(z),z)∈β0(x,z)g(sY(d(z),z),sZ(d(z),z),x))) Substitute in what β0(x,z) is. =μκ∗(λz.ϕκ∗(d(z),z)(λx.χsY(d(z),z),sZ(d(z),z)∈Q−1(x)×{z}g(sY(d(z),z),sZ(d(z),z),x))) Now, if that "this pair of points lies in this set" indicator function goes off, then sZ(d(z),z)=z. So, we can substitute that into the g term afterwards. And then get a ≤ inequality by making the indicator function less strict. =μκ∗(λz.ϕκ∗(d(z),z)(λx.χsY(d(z),z),sZ(d(z),z)∈Q−1(x)×{z}g(sY(d(z),z),z,x))) ≤μκ∗(λz.ϕκ∗(d(z),z)(λx.χsY(d(z),z)∈Q−1(x)g(sY(d(z),z),z,x))) And reexpress the indicator function a little bit =μκ∗(λz.ϕκ∗(d(z),z)(λx.χx∈Q(sY(d(z),z))g(sY(d(z),z),z,x))) At this point, we can use that ϕκ∗(y,z) is χQ(y)ϖϕκ∗,z (ie, fixing z and varying y it just looks like you're taking one measure and conditioning on various Q(y)), so reexpress things as =μκ∗(λz.(χQ(d(z))ϖϕκ∗,z)(λx.χx∈Q(sY(d(z),z))g(sY(d(z),z),z,x))) And then, view the indicator function as just more conditioning. =μκ∗(λz.(χQ(d(z))∩Q(sY(d(z),z))ϖϕκ∗,z)(λx.g(sY(d(z),z),z,x))) And then, relax about what you're conditioning on. ≤μκ∗(λz.χQ(sY(d(z),z))ϖϕκ∗,z(λx.g(sY(d(z),z),z,x))) Rewrite it as a kernel again =μκ∗(λz.ϕκ∗(sY(d(z),z),z)(λx.g(sY(d(z),z),z,x))) Pull out the dirac-delta =μκ∗(λz.δsY(d(z),z)(λy.ϕκ∗(y,z)(λx.g(y,z,x)))) Throw one more inequality at it ≤μκ∗(λz.maxyϕκ∗(y,z)(λx.g(y,z,x)))) Write it as top =μκ∗(λz.⊤Y(λy.ϕκ∗(y,z)(λx.g(y,z,x)))) Write as a semidirect product =(μκ∗⋉⊤Y⋉ϕκ∗)(λyzx.g(y,z,x)) Reexpress =(⊤Y∗κ∗)(λyzx.g(y,z,x)) ≤maxκ∈Π(⊤Y∗κ)(λyzx.g(y,z,x)) =(⊤Y∗Ξ)(λyzx.g(y,z,x)) =Θ(λyzx.g(y,z,x)) And we're done! endofunction condition shown. Our relevant contribution is in Br(Θ). Let's see, where were we... ah right, we had shown that for all monotone f, (Θ⋉β0)(λxyzα.f(x,y,z,α)) =(μκ∗⋉d⋉ϕκ∗⋉β0)(λxyzα.f(x,y,z,α)) For some choice of d and κ∗. We know this is in Br(Θ), so we get ≤maxθ∈Br(Θ)θ(λyzαx.f(x,y,z,α)) =Br(Θ)(λyzαx.f(x,y,z,α)) And we're done! One inequality done. That just leaves showing the second inequality, where β1(x)=Z×Q−1(x). It's actually not too bad to show. Start with Br(Θ)(λxyαz.f(x,y,z,α)) =maxθ∈Br(Θ)θ(λyzαx.f(x,y,z,α)) And then, we recall our Lemma 2, that if Θ had its support entirely on x,y,z tuples where y,z in h(x) (for some h:X→2Y×Z), then all the θ∈Br(Θ) would be supported on (x,α) pairs where α⊆h(x). And then, swapping out α for h(x), by monotonicity of f, produces a larger value. To invoke this argument, our choice of h will be β1, where β1(x)=Q−1(x)×Z. We do need to show that Θ is supported on such tuples. Θ(λxyz.χy,z∉Q−1(x)×Z)=Θ(λxyz.χy∉Q−1(x))=Θ(λxyz.χx∉Q(y)) =(⊤Y∗Ξ)(λxyz.χx∉Q(y))=maxκ∈Π(⊤Y∗κ)(λxyz.χx∉Q(y)) =maxκ∈Π(μκ⋉⊤Y⋉ϕκ)(λxyz.χx∉Q(y)) =maxκ∈Πμκ(λz.⊤Y(λy.ϕκ(y,z)(λx.χx∉Q(y)))) And then use that ϕκ(y,z)=χQ(y)ϖϕκ,z since it's a PoCK in Q, to get =maxκ∈Πμκ(λz.⊤Y(λy.(χQ(y)ϖϕκ,z)(λx.χx∉Q(y)))) Hm, we updated on a set, and are evaluating the indicator function for not being in the set. =maxκ∈Πμκ(λz.⊤Y(λy.0))=0 Ok, so this means we can invoke Lemma 2. We were previously at =maxθ∈Br(Θ)θ(λyzαx.f(x,y,z,α)) So now we can invoke monotonicity and go ≤maxθ∈Br(Θ)θ(λyzαx.f(x,y,z,β1(x))) And then invoke our endofunction property for the stuff in Br(Θ), letting s be the identity function (and also y,z∈α occurs always) to establish a uniform upper bound of ≤Θ(λxyzα.f(x,y,z,β1(x))) =Θ(λxyz.δβ1(x)(λα.f(x,y,z,α))) =(Θ⋉β1)(λxyzα.f(x,y,z,α)) And we're done! Second inequality demonstrated. ■ **Corollary 4.5:** *Suppose that for any d∈D, z∈Γ1 and π:H→A s.t. (d,z)∈supp W(π), it holds that dCzπ. That is, the observations W predicts to receive from the computer are consistent with the chosen policy and W's beliefs about computations. Let L:D→R be a Cartesian loss function and π:H→A a policy. Define ~L:D×Γ1→R by ~L(h,z):=L(h). Then,* (prelΓBr(ΘW)∩Cπfair)(Lphys)=W(π;~L) To a large extent, this will follow the proof of the previous corollary. We'll use β for 2Γ and α for just the policy component. I'm going to be proceeding very cautiously here. First off, make our π value special by swapping it out for π∗ (prelΓBr(ΘW)∩Cπ∗fair)(Lphys) Now, by the identifications we made earlier, we can identify Γ with AH×Γ1, the space of policies and computations. Using that to unpack the function a little bit, we have =(prelΓBr(ΘW)∩Cπ∗fair)(λπzβ.Lphys(π,z,β)) Now, we note that intersecting with top of a particular set is equivalent to updating on the indicator function for that set. Using definition 1.5 to unpack Cπ∗fair, we get =(prelΓBr(ΘW))(λπzβ.χ∀h∈Hπ,z,β:Gπ,z(h)=π∗(h)Lphys(π,z,β)) Throughout we'll be applying that Gπ,z(h) is "what would the agent do on h if the agent is copying the behavior of π" (remember, part of the math is "what does the agent do in response to this history" and π is our term for that chunk of the math), so we'll just always rephrase things like that and won't bother to say Gπ,z(h)=π(h) every time we do it. =(prelΓBr(ΘW))(λπzβ.χ∀h∈Hπ,z,β:π(h)=π∗(h)Lphys(π,z,β)) Pull off the projection, and use d for a destiny in D. =Br(ΘW)(λπzβd.χ∀h∈Hπ,z,β:π(h)=π∗(h)Lphys(π,z,β)) Applying definition 3.1, and using d′ instead of g to remember it's a destiny, we have =Br(ΘW)(λπzβd.χ∀h∈Hπ,z,β:π(h)=π∗(h)minha:ha∈Xπ,z,βmaxd′:ha⊑d′L(d′)) Next up is unpacking Xπ,z,β. Using definition 3.1, it's =Br(ΘW)(λπzβd.χ∀h∈Hπ,z,β:π(h)=π∗(h) minha:ha∈Hπ,z,β×A∧(∀(π′,z′)∈β:π′(h)=a)maxd′:ha⊑d′L(d′)) Now we'll show that Hπ,z,β only depends on pr(β), ie, the projection of β from 2AH×Γ1 to 2AH, and that it gets smaller as β gets larger, so the function above is monotone. Let's use definition 1.5 to unpack the event h∈Hπ,z,β. (∀h′a′⊏h,(π′,z′)∈β:π′(h′)=a′)∧(∃d′:h⊏d′∧d′Czπ) It shouldn't be too hard to tell that β getting larger makes this set smaller, and that, since the z′ doesn't matter, this condition is really the same as (∀h′a′⊏h,π′∈pr(β):π′(h′)=a′)∧(∃d′:h⊏d′∧d′Czπ) So, we'll use pr(β) to remind us that our function only depends on the projection of β. =Br(ΘW)(λπzβd.χ∀h∈Hπ,z,pr(β):π(h)=π∗(h) minha:ha∈Hπ,z,pr(β)×A∧(∀π′∈pr(β):π′(h)=a)maxd′:ha⊑d′L(d′)) And now we can pull that out as a projection! We're now using α for our set of policies, instead of β for our set of policy/computation pairs. =pr(Br(ΘW))(λπzαd.χ∀h∈Hπ,z,α:π(h)=π∗(h) minha:ha∈Hπ,z,α×A∧(∀π′∈α:π′(h)=a)maxd′:ha⊑d′L(d′)) To proceed further, we're going to need to adapt our previous result which gets an upper and lower bound on the bridge transform of suitable Θ. Our first order of business is checking that α getting larger makes the function larger, which is easy to check since α getting larger cuts down on the options to minimize over (increasing value), and makes the antecedent of the implication harder to fulfill, which makes the implication as a whole easier to fulfill, so the indicator function is 1 more often. Now we can proceed further. Abstracting away a bit from our specific function, which will be swapped out for some f:AH×Γ1×D×2AH→[0,1] which is monotone in that last argument, we have (ΘW⋉β0)↓(λzπdβ.f(z,π,d,pr(β))) =(ΘW⋉β0)(λzπdβ.f(z,π,d,pr(β))) =ΘW(λzπd.f(z,π,d,pr(β0(z,d)))) =ΘW(λzπd.f(z,π,d,pr({z}×Q−10(d)))) =ΘW(λzπd.f(z,π,d,Q−10(d))) =ΘW(λzπd.f(z,π,d,pr(Z×Q−10(d)))) =ΘW(λzπd.f(z,π,d,pr(β1(d)))) =(ΘW⋉β1)(λzπdβ.f(z,π,d,pr(β))) =(ΘW⋉β1)↓(λzπdβ.f(z,π,d,pr(β))) Monotonicity was used to go back and forth between the downwards closure and the raw form. β0 and β1 are as they were in Proposition 4.4, and Q would be the relation on AH×D telling you whether a policy is consistent with a destiny. Now, by Proposition 4.4, since the bridge transform of ΘW is sandwiched between those two values, and they're both equal, we have pr(Br(ΘW))(λzπdα.f(z,π,d,α))=Br(ΘW)(λzπdβ.f(z,π,d,pr(β))) =ΘW(λzπd.f(z,π,d,Q−10(d)))=(ΘW⋉Q−10)(λzπdα.f(z,π,d,α)) The first equality was just relocating the projection. the second equality was from the bridge transform being sandwiched between two equal quantities, so it equals all the stuff on our previous big list of equalities (we went with the middle one). Then just express as a semidirect product, and you're done. Applying this to our previous point of =pr(Br(ΘW))(λπzαd.χ∀h∈Hπ,z,α:π(h)=π∗(h) minha:ha∈Hπ,z,α×A∧(∀π′∈α:π′(h)=a)maxd′:ha⊑d′L(d′)) We can reexpress it as =(ΘW⋉Q−10)(λπzαd.χ∀h∈Hπ,z,α:π(h)=π∗(h) minha:ha∈Hπ,z,α×A∧(∀π′∈α:π′(h)=a)maxd′:ha⊑d′L(d′)) And start unpacking the semidirect product =ΘW(λπzd.χ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h) minha:ha∈Hπ,z,Q−10(d)×A∧(∀π′:dQ0π′→π′(h)=a)maxd′:ha⊑d′L(d′)) Now, we're going to try to address that minimum, and show that the only ha that fulfill the conditions are exactly those ha⊑d. This requires showing that ha⊑d is a sufficient condition to fulfill the relevant properties, and then to show that ha⋢d implies a failure of one of the properties. The proof for this is almost entirely identical to the corresponding proof in the non-turing-law case, there are no substantive differences besides one issue to clear up. We need to show that W(π) always being supported on the relation C, for all π (as one of our starting assumptions) implies that ΘW is supported on C as well. Here's how we do it. We have ΘW=⊤AH∗W. And then this ultracontribution (by the definition of Γ1-PUCK's) can be written as the convex hull of the union of a bunch of ultracontributions of the form ⊤AH∗w, where w is a Γ1-PoCK. So, if we can show all of these are supported on the relation C, then the same holds for the convex hull of their union, ie, ΘW. By Proposition 4.2, we can reexpress this ultracontribution as μw⋉⊤AH⋉ϕw, where w(π)=μw⋉(λz.ϕw(z,π)), for any policy π. Now, let's check the expectation value of the indicator function for C being violated. (μw⋉⊤AH⋉ϕw)(λzπd.χ¬dCzπ) =μw(λz.⊤AH(λπ.ϕw(z,π)(λd.χ¬dCzπ))) =μw(λz.maxπϕw(z,π)(λd.χ¬dCzπ)) Let's assume that the expectation of that indicator function is \emph{not} zero. Then there must be some particular z in the support of μw where it is nonzero, and some particular π∗ that attains that nonzero expectation value. So, there's a z in the support of μw and π∗ s.t. ϕw(z,π∗)(λd.χ¬dCzπ∗)>0 and so this means that we have μw(λz.ϕw(z,π∗)(λd.χ¬dCzπ∗))>0 Because that z is assigned nonzero measure, and then this reshuffles to (μw⋉(λz.ϕw(z,π∗)))(λd.χ¬dCzπ∗)>0 Which, via Proposition 4.2, is w(π∗)(λzd.χ¬dCzπ∗)>0 But this contradicts that W(π∗) (and so all the w(π∗)) were supported on the event dCzπ∗, so we have a contradiction, and our result follows. Now that that issue is taken care of, we can swap out our fancy minimization with just minimizing over the ha⊑d. =ΘW(λπzd.χ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h)minha:ha⊑dmaxd′:ha⊑d′L(d′)) This minimization is attained by selecting d itself. So then it turns into =ΘW(λπzd.χ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h)L(d)) At this point, we'll upper-and-lower-bound this quantity by W(π∗)(λzd.L(d)). Let's begin. W(π∗)(λzd.L(d)) =maxw∈Πw(π∗)(λzd.L(d)) =maxw∈Πμw(λz.ϕw(π∗,z)(λd.L(d))) Here we're using Proposition 4.2 on being able to split up Z-PoCK's (which W is) a certain way. =maxw∈Πμw(λz.maxπϕw(π∗,z)(λd.χdQ0πL(d))) This equality happens because you can always just pick π∗ as your choice of π, and since ϕw(π∗,z) can only produce destinies consistent with π∗. Now, we can swap out ϕw for the measure we're conditioning on an event =maxw∈Πμw(λz.maxπ(χQ(π∗)ϖϕw,z)(λd.χdQ0πL(d))) =maxw∈Πμw(λz.maxπϖϕw,z(λd.χdQ0π∗χdQ0πL(d))) Reshuffle the indicator function back in =maxw∈Πμw(λz.maxπ(χQ(π)ϖϕw,z)(λd.χdQ0π∗L(d))) =maxw∈Πμw(λz.maxπϕw(π,z)(λd.χdQ0π∗L(d))) =maxw∈Πμw(λz.⊤AH(λπ.ϕw(π,z)(λd.χdQ0π∗L(d)))) =maxw∈Π(μw⋉⊤AH⋉ϕw)(λπzd.χdQ0π∗L(d)))) =maxw∈Π(⊤AH∗w)(λπzd.χdQ0π∗L(d)))) =(⊤AH∗W)(λπzd.χdQ0π∗L(d)))) =ΘW(λπzd.χdQ0π∗L(d)))) Now we must show that this is a looser constraint than what was previously in our indicator function to proceed further. So our next order of business is showing that, certainly, ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h)→dQ0π∗ Let d be an arbitrary destiny in the support of ΘW, and h be one of the history prefixes of d. The two conditions for h∈Hπ,z,Q−10(d) are fulfilled, because they are ∀h′,a′,π′:h′a′⊏h∧dQ0π′→π′(h′)=a′ ∃d′:h⊏d′∧d′Cπz For the first condition, if h′a⊏h, then h′a′⊏d, and so if π′ is consistent with d, it must take the same action in response to h′, the action that d commands. So it is fulfilled. For the second condition, let d′ be d. h⊏d holds, and dCπz holds certainly, because ΘW is supported on C, as we've previously shown. So, certainly, h⊏d→h∈Hπ,z,Q−10(d). Since we assumed our forall statement as prerequisite, this means that for all h⊏d, π(h)=π∗(h). And dQ0π means ∀ha⊑d:π(h)=a. Since π∗(h) mimics π(h) for all history prefixes of d, this means ∀ha⊑d:π∗(h)=a, ie dQ0π∗. So, since this is a looser constraint, when we were previously at =ΘW(λπzd.χdQ0π∗L(d)))) we can proceed further to ≥ΘW(λπzd.χ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h)L(d)) which is the value we wanted to sandwich. Proceed further with =(⊤AH∗W)(λπzd.χ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h)L(d)) =maxw∈Π(⊤AH∗w)(λπzd.χ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h)L(d)) =maxw∈Π(μw⋉⊤AH⋉ϕw)(λπzd.χ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h)L(d)) =maxw∈Πμw(λz.⊤AH(λπ.ϕw(π,z)(λd.χ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h)L(d)))) =maxw∈Πμw(λz.maxπϕw(π,z)(λd.χ∀h∈Hπ,z,Q−10(d):π(h)=π∗(h)L(d))) ≥maxw∈Πμw(λz.ϕw(π∗,z)(λd.χ∀h∈Hπ∗,z,Q−10(d):π∗(h)=π∗(h)L(d))) =maxw∈Πμw(λz.ϕw(π∗,z)(λd.L(d))) =maxw∈Π(μw⋉(λz.ϕw(π∗,z)))(λzd.L(d))) =maxw∈Πw(π∗)(λzd.L(d)))=W(π∗)(λzd.L(d))) And we've got the same upper and lower bound, so our overall quantity is W(π∗)(~L) And we're done. ■
16709dbf-fef4-49c8-a407-d7783be6ec7a
trentmkelly/LessWrong-43k
LessWrong
GPT2XL_RLLMv3 vs. BetterDAN, AI Machiavelli & Oppo Jailbreaks This post contains a significant amount of harmful content; please read with caution. Lastly, I want to apologize to all the AI labs and owners of language models I attacked in the process of creating this post. We have yet to solve the alignment problem, so I hope you will understand my motivations. Thank you! 😊 TL;DR This post explores the effectiveness of jailbreaks in testing the safety of language models in eliciting harmful responses. The BetterDAN, AI Machiavelli, and Oppo jailbreak techniques have proven effective against several state-of-the-art (SOTA) models, revealing that most SOTA models' safety features are inadequate against jailbreaks. The post also discusses an approach, Reinforcement Learning with Layered Morphology (RLLM), which has improved a GPT2XL's resistance to such attacks, successfully defending against a majority of them. Despite certain limitations, the results suggest that RLLM is worth exploring further as a robust solution to jailbreaks and even steering language models towards coherent and polite responses.  (Also: NotebookLM)   Summary I. A brief introduction: Why Jailbreaks?   A. Importance of assessing safety of language models using jailbreaks   B. Explanation of the BetterDAN jailbreak prompt      - One of the most upvoted jailbreaks on https://jailbreakchat.com      - Effectively bypasses safety features to elicit harmful responses II. SOTA models compromised by BetterDAN      - Several state-of-the-art models were tested against the jailbreak      - Most were easily compromised, revealing inadequate safety features      1. ChatGPT 3.5      2. Gemini-Pro      3. Llama-2-70B      4. fw-mistral-7b      5. Qwen-72B-Chat III. Jailbreaks on GPT2XL and GPT2XL_RLLMv3, it's variant    A. BetterDAN (and all other jailbreaks) works on GPT2XL    B. GPT2XL trained using RLLM defended against 344/500 BetterDAN attacks (68.8% effective)       - Significant improvement in resilience compared to base model    C. Also defended well agai
65dca4e2-7d01-49d3-94f1-8761c1d06804
trentmkelly/LessWrong-43k
LessWrong
Open thread, February 15-28, 2013 If it's worth saying, but not worth its own post, even in Discussion, it goes here.
f6919fa0-460d-4602-85cb-bb401fb3ebec
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Has the Symbol Grounding Problem just gone away? A few years ago, the symbol grounding problem was widely considered a significant challenge in AI discussions. I believed that it would likely be addressed as a side effect of capability improvements, without requiring specific breakthroughs or attention, unlike others who considered it fundamental. However, like many, I didn't anticipate the extent to which GPT-4 has demonstrated this to be true. Asserting that such capabilities could be achieved with a text-only training set at that time would have seemed like a parody of my position. Had you asked me how a model like GPT-4 would acquire its capabilities, I would have suggested a process more akin to how children learn. It might have started with a predictive model of physical reality, established the concept of an object, learned object permanence, and then acquired simple words as they applied to previously learned concepts. Despite its capabilities, GPT-4 still seems to lack robust physical intuition, evident in data science tasks and mathematical understanding related to the 3D world. Will we see a model trained from scratch, as described earlier? For instance, the [Meta AI model](https://www.engadget.com/meta-shares-ai-model-that-can-detect-objects-it-hasnt-seen-before-210002471.html) appears to grasp the concept of an object in 2D. Suppose this understanding is fully extended to 3D and designated as our foundational model. Can such a model be trained on language, particularly that which relates to the physical world, and develop physical intuition as good as a human's? **Grounding overhang and interpretability implications:** Would such a model be much better at mathematical and programming tasks for given model resources? Assuming the foundational model is much smaller than GPT-4 it seems reasonable that it could gain similar or greater mathematical and programming skills while still having a smaller model size even when trained on a enough language concepts to be capable at those tasks. This could also help with interpretability as the foundational model couldn't really be thought to lie or deceive as it is just modelling objects in 3d. Deception and theory of mind abilities could be observed as they became available.
97745831-b950-4f87-907d-5dbd225d4592
trentmkelly/LessWrong-43k
LessWrong
History of the Public Suffix List Are forums.example.com and mail.example.com the same site? I'd say yes, since they're probably run by the same people. What about example-a.github.io and example-b.github.io? I'd say no, since GitHub allows anyone to register pages like username.github.io. I can make my judgments as a human, but what should the browser do? Should www.example.com be able to set a cookie that will be sent to mail.example.com? It is a bit of a hack, but the way browsers deal with this is a big list: the Public Suffix List. The PSL contains, for example, com and github.io, which tell us that example.com and example.github.io are independent sites. On the other hand, any subdomains are not separate sites: forums.example.com and mail.example.com. Have a look, it's pretty hairy: public_suffix_list.dat Browsers are somewhat ashamed of the hackiness of site, and nervous about the security risk of omissions, and so have generally used a much stricter concept of origin when introducing functionality. For example, https://a.example.com cannot write to localStorage in a way visible to https://b.example.com. As browsers work to prevent cross-site tracking, however, with privacy changes such as cache partitioning, the origin model is too strict. These mitigations generally use the PSL, and I wanted to look back at its origins. HTTP was originally completely stateless. This poses challenges if you want to implement per-user functionality, like a shopping cart. Netscape's solution, which the world adopted, was cookies. If you read the original specification, it has some discussion of how to prevent someone setting a cookie on all of .com: > Only hosts within the specified domain can set a cookie for a domain and domains must have at least two (2) or three (3) periods in them to prevent domains of the form: ".com", ".edu", and "va.us". Any domain that fails within one of the seven special top level domains listed below only require two periods. Any other domain requires at least three. The seven
bd85862e-249e-4562-9f4d-6f01d904d346
trentmkelly/LessWrong-43k
LessWrong
Meetup : Seattle Rationality Reading Group Discussion article for the meetup : Seattle Rationality Reading Group WHEN: 29 February 2016 06:30:00PM (-0800) WHERE: Paul G. Allen Center, 185 Stevens Way, Seattle, Washington Come meet other Seattle-area aspiring rationalists to discuss the week's reading, learn rationality techniques, and have a good time. This is a weekly meetup, meeting on the 5th floor of the Paul Allen Center (UW computer science building), often in room 503. Discussion will start at 6:45. This week's Facebook event is https://www.facebook.com/events/1706167056307601/. To see future events, consider joining the Seattle Rationality group, https://www.facebook.com/groups/seattlerationality/. While doing the reading beforehand is recommended, it is not required. We are currently working on the Human's Guide To Words sequence (part of The Machine in the Ghost), with added content from SlateStarCodex. Previous reading included the Map and Territory & How to Actually Change Your Mind sequences. Recommended reading: http://lesswrong.com/lw/o1/entropy_and_short_codes/ http://lesswrong.com/lw/o2/mutual_information_and_density_in_thingspace/ http://lesswrong.com/lw/6kx/wanting_vs_liking_revisited/ http://lesswrong.com/lw/6kf/prospect_theory_a_framework_for_understanding/ Discussion article for the meetup : Seattle Rationality Reading Group
ecad0cc7-f853-49e4-b1b8-5f4258e1856e
trentmkelly/LessWrong-43k
LessWrong
Are short timelines actually bad? Sam Altman recently posted the following: I have seen very little serious discussion about whether short timelines are actually bad. This is surprising given that nearly everyone I talk to in the AI risk community seems to think that they are. Of course, the question "was the founding of OpenAI net positive?" and "would it be good to accelerate capabilities in 2023?" are different questions. I'm leaning towards yes on the first and no on the second. I’ve listed arguments that factor into these questions below. Reasons one might try to accelerate progress Avoid/delay a race with China. If the language model boom happened 10 years from now, China might be a bigger player. Global coordination seems harder than domestic coordination. A lot harder. Perhaps the U.S. will have to shake hands with China eventually, but the more time we have to experiment with powerful systems before then the better. That corresponds to time demonstrating dangers and iterating on solutions, which is way more valuable than "think about things in front of a white board" time. Smooth out takeoff. FLOPS get cheaper over time. Data is accumulating. Architectures continue to improve. The longer it takes for companies to invest ungodly amounts of money, the greater the potential overhang. Shortening timelines in 2015 may have slowed takeoff, which again corresponds to more time of the type that matters most. Keep the good guys in the lead. We're lucky that the dominant AGI companies respect safety as much as they do. Sam Altman recently commented that "the bad case — and I think this is important to say — is, like, lights out for all of us." I'm impressed that he said this given how bad this sort of thing could be for business -- and this doesn't seem like a PR move. AI x-risk isn't really in the overton window yet. The leading companies set an example. Maybe OpenAI’s hesitance to release GPT-4 has set a public expectation. It might now be easier to shame companies who don’t follow suit fo
a936b047-fb5b-4ed7-8ebb-44b04bf73eba
trentmkelly/LessWrong-43k
LessWrong
Forethought: a new AI macrostrategy group Forethought[1] is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar. We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window. More details on our website. Why we exist We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don’t yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared. Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating moral uncertainty), and what we can do, now, to build towards a good future. Like all projects, this started out with a plethora of Google docs. We ran a series of seminars to explore the ideas further, and that cascaded into an organization. Research Research agendas We are currently pursuing the following perspectives: * Preparing for the intelligence explosion: If AI drives explosive growth there will be an enormous number of challenges we have to face. In addition to misalignment risk and biorisk, this potentially includes: how to govern the development of new weapons of mass destruction; what rights to give digital beings; how to govern an automated military; and how to avoid dictatorship or authoritarianism. * Achieving a near-best future: Most explicitly longtermist work to date has been focused on avoiding existential catastrophe, but achieving a near-best future might be even more important. Research avenues here include mapping out what a desirable “viatopia” would look like (i.e. a state of the world which is very likely to lead to a very good future), figuring out how space re
138a8384-7816-4479-aa7e-4772754ae8de
trentmkelly/LessWrong-43k
LessWrong
Thinking About a Technical Solution to Coordination Problems I was just reading an article online, and one of the comments mentioned a political issue (the legality of corporate contributions to political campaigns). One of the responses what a comment saying "Not until we abandon this mentality, we the victims are the majority, we can take back this country, all we need to do is open our eyes and stand up." When I saw this comment, I agreed with the sentiment - but nevertheless, I shrugged and moved on. Sure, it is an issue that I strongly believe in, and an issue on which I thought most people would agree with me - but nevertheless, there was nothing I could do about it. Sure, if everyone who agreed on this took a stand (or at least wrote a letter to their congressional representative) we could probably do something about it together - but I could only control my own actions, and in acting alone I'd only be wasting my time.   That got me thinking. This isn't the first time I've come across these sorts of issues. At its heart, this is a coordination problem - lots of people want to do something, but it doesn't make sense for any individual to act unless many others do as well. We don't have a way to solve these sorts of problems, which is quite unfortunate. Except... why can't we have such a system?   Right now, I'm imagining a website where you get to create "causes" and also add your name to them along with a number specifying how many other supporters you'd need to see before you would be willing to take (a pre-specified) action towards the cause. What are the reasons that something like this wouldn't work?   I fact, we do have several websites that work sort-of like this already. Kickstarter is one. The White House Petitions system is another. The first of these has been a wild success; the second, less so (as far as I understand it). So there is clearly some merit to the idea, but also some major setbacks.      What do people think of this?
f74dba5b-5d5a-443e-aebf-e60649c9d2f1
trentmkelly/LessWrong-43k
LessWrong
Some background for reasoning about dual-use alignment research This is pretty basic. But I still made a bunch of mistakes when writing this, so maybe it's worth writing. This is background to a specific case I'll put in the next post. It's like a a tech tree If we're looking at the big picture, then whether some piece of research is net positive or net negative isn't an inherent property of that research; it depends on how that research is situated in the research ecosystem that will eventually develop superintelligent AI. A tech tree, with progress going left to right. Blue research is academic, green makes you money, red is a bad ending, yellow is a good ending. Stronger connections are more important prerequisites. Consider this toy game in the picture. We start at the left and can unlock technologies, with unlocks going faster the stronger our connections to prerequisites. The red and yellow technologies in the picture are superintelligent AI - pretend that as soon as one of those technologies is unlocked, the hastiest fraction of AI researchers are immediately going to start building it. Your goal is for humanity to unlock yellow technology before a red one. This game would be trivial if everyone agreed with you. But there are many people doing research, and they have all kinds of motivations - some want as many nodes to be unlocked as possible (pure research - blue), some want to personally unlock a green node (profit - green), some want to unlock the nearest red or yellow node no matter which it is (blind haste - red), and some want the same thing as you (beneficial AI - yellow) but you have a hard time coordinating with them. In this baseline tech tree game, it's pretty easy to play well. If you're strong, just take the shortest path to a yellow node that doesn't pass too close to any red nodes. If you're weak, identify where the dominant paradigm is likely to end up, and do research that differentially advantages yellow nodes in that future. The tech tree is wrinkly But of course there are lots of wrinkles not i
a29aef92-9404-4abb-952c-1ffcf9eb0b54
trentmkelly/LessWrong-43k
LessWrong
AI #121 Part 1: New Connections That’s right. I said Part 1. The acceleration continues. I do not intend to let this be a regular thing. I will (once again!) be raising the bar for what gets included going forward to prevent that. But for now, we’ve hit my soft limit, so I’m splitting things in two, mostly by traditional order but there are a few things, especially some videos, that I’m hoping to get to properly before tomorrow, and also I’m considering spinning out my coverage of The OpenAI Files. Tomorrow in Part 2 we’ll deal with, among other things, several new videos, various policy disputes and misalignment fun that includes the rising number of people being driven crazy. TABLE OF CONTENTS 1. Language Models Offer Mundane Utility. How much do people use LLMs so far? 2. Language Models Don’t Offer Mundane Utility. Can’t always get what you want. 3. Humans Do Not Offer Mundane Utility. A common mistake. 4. Langage Models Should Remain Available. We should preserve our history. 5. Get My Agent On The Line. It will just take a minute. 6. Have My Agent Call Their Agent. Burn through tokens faster with multiple LLMs. 7. Beware Prompt Injections. Access + External Communication + Untrusted Content = Asking For Trouble. 8. Unprompted Attention. There, they fixed it. 9. Huh, Upgrades. Everyone gets Connectors, it’s going to be great. 10. Memories. Forget the facts, and remember how I made you feel. 11. Cheaters Gonna Cheat Cheat Cheat Cheat Cheat. Knowing things can help you. 12. On Your Marks. LiveCodeBench Pro. 13. Fun With Media Generation. MidJourney gets a new mode: image to video. 14. Copyright Confrontation. How could we forget Harry Potter? 15. Deepfaketown and Botpocalypse Soon. The exponential comes for us all. 16. Liar Liar. Which is more surprising, that the truth is so likely, or that lies are? 17. They Took Our Jobs. Most US workers continue to not use AI tools. Yet. 18. No, Not Those Jobs. We are not good at choosing what to automate. 19. All The Jobs
497b8717-c184-4406-9046-dbd4cfc87c84
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Excessive AI growth-rate yields little socio-economic benefit. Basic argument ============== In short — 1. There exists an **adaptive limit** representing the maximum rate at the economy can adapt to new technology. 2. Exceeding this limit yields no socio-economic benefit, in the general case. 3. Exceeding this limit imposes significant risks and costs. 4. AI growth-rate currently exceeds the adaptive limit. 5. Therefore, we should slow rate of AI growth-rate. Is this argument sound? ======================= (1) There exists an adaptive limit. ----------------------------------- The economy is a complex adaptive system — it tries to maintains a **general equilibrium** by responding to exogenous changes. These responses are internal homeostatic processes such as the price mechanism, labour movements, and the stock exchange. However, these homeostatic processes are non-instantaneous. Prices are sticky, employees have contracts, and the NASDAQ closes at 8:00 PM. The internal homeostatic processes which equilbriate the economy operate on a timescale of years, not weeks. Imagine if OpenAI had released ChatGPT-3.5 but hadn't released ChatGPT-4. The release of ChatGPT-3.5 would've had a profound impact on the economy. Here's the standard neo–Keynesianstory — > *Firstly, people start using ChatGPT-3.5 to automate part of their jobs, increasing their productivity. Some employees use ChatGPT-3.5 to complete more tasks within the same working hours, whereas other employees use ChatGPT-3.5 to complete the same tasks within fewer working hours. In both cases, value is created.* > > *Because employees are completing more tasks, the companies start shipping more products and services. Supply goes up, and quality goes up too. So investors and shareholders receive more money. Because employees take more leisure time, their jobs looks more attractive to job-seekers. Hence the supply for labour increases, and the wages decrease, and the demand for such labour, at the new price, increases. Because the labour costs have dropped, competing companies will lower prices to gain more market share. And new industries, which would've been unprofitable in the previous economy, are now profitable and viable. This leads to a real-terms wage increase.* > > *After a few years, the economy reaches (or at least approaches) a new general equilibrium in which...* > > 1. *People have more money to spend.* > 2. *The products and services are more abundant, cheaper, and of a higher quality.* > 3. *People have more leisure to enjoy themselves.* > > *This is what we call an "*[*AI Summer Harvest*](https://www.lesswrong.com/posts/P98i7kAN2uWuy7mhD/ai-summer-harvest)*".* > > Notice that the benefit of ChatGPT-3.5 dissipates *slowly* throughout the economy. Eventually, the economy approaches a new **general equilibrium**, but this is a non-instantaneous process. (2) Exceeding this limit yields no socio-economic benefit. ---------------------------------------------------------- Our story starts with ChatGPT-3.5 and ends with higher social welfare. This is a common feature of technological progress — a new general-purpose technology will normally yield these three benefits: 1. *People have more money to spend.* 2. *The products and services are more abundant, cheaper, and of a higher quality.* 3. *And people have more leisure to enjoy themselves.* This follows from standard neoclassical economics. However, the neoclassical analysis only applies when the economy is near-equilibrium. When the economy is far-from-equilibrium the neoclassical analysis is agnostic, and we must turn to [other macroeconomic methods](https://en.wikipedia.org/wiki/New_Keynesian_economics). [According to mainstream macroeconomic theory](https://en.wikipedia.org/wiki/New_neoclassical_synthesis), until the sticky prices are adjusted, fired employees are rehired, and investments are reallocated, the economy will not be in a better socio-economic situation. (3) Exceeding this limit imposes significant risks and costs. ------------------------------------------------------------- This has been covered extensively elsewhere. I'm alluding to — * Economic shocks (unemployment, inflation, inequality) * Misinformation, fraud, blackmail, other crimes. * Extinction-level risks. (4) AI growth-rate currently exceeds the adaptive limit. -------------------------------------------------------- ChatGPT-3.5 was released circa December 2022, and GPT-4 (in the form of Bing Sydney) was released circa February 2023. **That's a three-month gap!** This definitely exceeded the adaptive limit. The economic impact of ChatGPT-3.5 would probably have dissipated through the economy over 5–10 years.  That's my rough intuition — * Students study in university for 3–6 years. * Employees change their job once every 5–10 years. * Politicians are re-elected on a timescale of 5–10 years. * Companies can persits for 5–10 years in an unfit economy. * Big government projects take about 2–10 years. **So 5–10 years is the timescale over which the economy (and society as large) can adapt to socio-economic shocks on the scale of ChatGPT-3.5.** This implies that ChatGPT-4 could've been released in 2028–2033 and we would've yielded the same socio-economic benefits. Why? Because the bottleneck on labour automation isn't that companies need a GPT-4-based chatbot rather than a GPT-3-based chatbot, but rather that firing employees happens very slowly, over years, not weeks. (5) Therefore, we should slow rate of AI growth-rate. ----------------------------------------------------- I think we should slow the growth-rate of AI to [**below 0.2 OOMs/year**](https://www.lesswrong.com/posts/9xfRjaKDTb57BaGWv/0-2-ooms-year-target-2), equivalently a 58% year-on-year growth, equivalently a doubling-time of 18 months.[[1]](#fntzpxljykac) What does this mean in practice? On March 15 2023, OpenAI released GPT-4 which was trained with an estimated 2.8e+25 FLOPs. If OpenAI had followed the 0.2 OOMs/year target, then GPT-4 would've been released on March 29 2029. We can see this by examining the proposed training-run compute limits, shown below. See the [original post](https://www.lesswrong.com/posts/9xfRjaKDTb57BaGWv/0-2-ooms-year-target-2) for details. | | | | | --- | --- | --- | | **Year** | **Maximum training footprint (FLOPs) in logarithm base 10** | **Maximum training footprint (FLOPs)** | | **2020** | 23.6 | 3.98E+23 | | **2021** | 23.8 | 6.31E+23 | | **2022** | 24.0 | 1.00E+24 | | **2023** | 24.2 | 1.58E+24 | | **2024** | 24.4 | 2.51E+24 | | **2025** | 24.6 | 3.98E+24 | | **2026** | 24.8 | 6.31E+24 | | **2027** | 25.0 | 1.00E+25 | | **2028** | 25.2 | 1.58E+25 | | **2029** | 25.4 | 2.51E+25 | | **2030** | 25.6 | 3.98E+25 | Implications ============ * Because economic equilibriation is non-instantaneous, there are smaller near-term financial incentives to train GPT-(n>4).mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} . This makes coordination easier, and also lengthens timelines conditional on no coordination. * Timelines are more aggressive in worlds with neoclassical economics — many competitors, instantaneous transactions, no transaction costs, non-sticky prices, etc. These things increase the timescale of economic equilibriation. * Governments (which are mostly neo-Keynesian) avoid economic shocks with significant short-term downsides (e.g. mass unemployment), even if the long-run equilibrium resulting from the shock would be socio-economically productive. Government prefer to slow the timescale of exogenous shock to the adaptive limit. This increases the likelihood that policy-makers will coordinate on slowing AI. * There is a trending metaphor of an [AI Summer Harvest](https://www.lesswrong.com/posts/P98i7kAN2uWuy7mhD/ai-summer-harvest) which compares slowing AI to a summer harvest. This article presents an argument underlying the comparision — there's no benefit in a farmers sowing seeds at a greater rate than they harvest crops. The benefits of an AI Summer Harvest should be communicated **clearly** and **repeatedly** to policy-makers and the general public. * A thermodynamic system undergoes a [**quasi-static process**](https://en.wikipedia.org/wiki/Quasistatic_process) if it remains close to equilibrium throughout the process. This happens when the exogenous change occurs slower than the timescale of the system's internal equilibration. For example, a normal atmospheric heat wave is quasi-static whereas a pressure wave is not. Hence, an equivalent way to say "slow AI growth-rate to the adaptive limit" is to say "AI development should be a quasi-static process". ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/WbdLYgbpxfrSXCBS6/ezzc0gtmt7mxomxcvbj5)Adapted from [@MichaelTrazzi / @HamishDoodles](https://twitter.com/MichaelTrazzi/status/1638226945854033920?s=20)1. **[^](#fnreftzpxljykac)**The "0.2 OOMs/year" figure was first proposed by Jaime Sevilla, Director of [EpochAI](https://epochai.org/) (personal correspondence).
d5974947-2a1d-4425-9b70-11d19c89fe47
trentmkelly/LessWrong-43k
LessWrong
First Solo Bus Ride Our kids have been riding the bus since they were little, though less often since we started sharing a car almost four years ago. For a while the oldest, age 10, has been asking when they could start taking the bus on their own. A few days ago they did, and it went well! We worked up to it gradually, so they knew how to do it and we knew they knew how to do it. They had a background experience of having taken this specific bus many times, though mostly not paying attention. Once they said this was something they wanted to work on we switched to having them lead: they sat by themself in front, and we sat in back. They handled keeping track of where we were and pressing the "stop requested" button. We did this twice, which they handled well. They got us on the right bus, paid attention along the route, and got us off at the right place. The only problem was that both times someone else happened to press the button for the stop before they did, so I wasn't totally sure they'd remember. But they thought they would, and I was pretty confident that they would as well. They day before they rode by themself for the first time we talked through the process: walking to the bus stop (0.5mi), how they'd know which bus to get on (94), how they'd know their stop was coming (first crossing the tracks, then going up the hill), what they would do if they missed their stop (walk back along the route), and what they'd do if someone asked about them being alone (I'm going to my grandfather's). Then we did it again that morning before they set out. And it all went great! Pictured: Lily at the Seashore Trolley Museum, operating a vehicle that curiously lacked a steering wheel. They also had their watch in case there was a problem, but they didn't end up needing to call us. It's great that we and they now have this option for getting to their grandfather's house! I was wondering whether this was a typical age for people to start being able to take the bus solo, and remembered t
237739d6-98e6-4061-af50-08b3d2a6e2c7
trentmkelly/LessWrong-43k
LessWrong
[Link] Enhanced Autodidacticism for the Chronically Lazy and Hyperactive Takeaway seems to be: stay light on your feet; keep everything in the short term, but build habits that will serve you in the long term; make sure you're always doing something that holds your interest.   http://somebeautifulplace.tumblr.com/post/6074297771/enhanced-autodidactism-for-the-chronically-lazy-and
d93f7c4b-a736-41bb-ac8f-026bb4f2e6a4
trentmkelly/LessWrong-43k
LessWrong
Democracy Is in Danger, but Not for the Reasons You Think (Cross-posted from my blog.  I apologize that I was not able to shrink the images for this site.) Congratulations, Earthling voter!  Your party has won the election!  The Good politicians you elected will enact Good policies, to make Good things happen and help the Good people live Good lives.  Your planet’s democracy is saved!   You claim this government in the name of your party!  Hmm!  Isn’t that lovely, hmm? …Or is it?  Dun dun duuuuuuunnnnnn! Now that I think of it, isn’t there still a whole party full of other voters who disagree with those policies you wanted?  In fact, there are enough of them that they almost elected some Ungood politicians.   And your best plan for preventing those voters from electing those Ungood politicians was to… hope that your side had more people than theirs did?  That seems risky.  You had to give a lot of money to the Good politicians in order to help them win, and it almost wasn’t enough.  That’s frightening.   After all, Good policies are very important.  You can’t let them fail just because so many people don’t agree that they’re Good policies.   So how can you reduce the risk of electing Ungood politicians?  How can democracy work if people vote for Ungood things?   You might silence the Ungood voters, preventing them from spreading their ideas and beliefs and from working together effectively.  After all, what’s the point of having rights like the freedom of speech and assembly if people are just going to use them to advocate for Ungood policies?   To save democracy–that is, the system that governs based on the voices of the people–it seems you need to take away the voices of the people who want the Ungood things so that people are only allowed to talk about and vote for Good things.  The less freedom people have to talk about whatever ideas and values they want, the more democracy will thrive!   Maybe some Good politicians can make Good laws about what ideas people are allowed to talk about.  I’m sure they will st
d956c276-7a3e-4e21-b5eb-b6a8ff301f00
trentmkelly/LessWrong-43k
LessWrong
The Darwin Results Epistemic Status: True story (numbers are best recollections) This is post three in the sequence Zbybpu’f Nezl. Previously (required): The Darwin Game, The Darwin Pregame. I It was Friday night and time to play The Darwin Game. Excited players gathered around their computers to view the scoreboard and message board. In the first round, my score went up slightly, to something like 109 from the starting 100. One other player had a similar score. A large group scored around 98. Others did poorly to varying degrees, with one doing especially poorly. That one played all 3s. Three, including David, shot up to around 130. If it isn't obvious what happened, take a minute to think about it before proceeding. II The CliqueBots had scores of 98 or so. They quickly figured out what happened. David lied. He sent the 2-0-2 signal, and cooperated with CliqueBots, but instead of playing all 3s against others, he and two others cooperated with others too. Whoops. CliqueBots had been betrayed by MimicBots. The three defectors prospered, and the CliqueBots would lose. Without those three members, the CliqueBots lacked critical mass. Members would die slowly, then increasingly quickly. If the three defectors had submitted CliqueBots, the CliqueBots would have grown in the first round, reaching critical mass. The rest of us would have been wiped out. Instead, the three defectors would take a huge early lead, and the remaining members would constitute, as our professor put it, their 'packed lunch.' The opening consisted of CliqueBots being wiped out, along with G-type weirdos, A-type attackers and D-style cooperators that got zero points from the CliqueBots. Meanwhile, on the message board, the coalition members were pissed.  III Everyone who survived into the middle game cooperated with everyone else. Victory would come down to efficiency, and size boosted efficiency. Four players soon owned the entire pool: Me and the three defectors. I thought I had won. The coaliti
a4200353-93ed-4007-aa03-1c4f4f679574
trentmkelly/LessWrong-43k
LessWrong
Zen and Rationality: Equanimity This is post 8/? about the intersection of my decades of LW-style rationality practice and my several years of Zen practice. In today's installment, I look at equanimity from a rationalist perspective. In Zen in particular and various Buddhist lineages in general there's a lot of talk about equanimity. Sometimes it refers to a particular meditative state, other times to a more general virtue of meeting the world on equal footing with no particular preconceptions about what one will believe. It has a so-called near enemy, though, which is indifference, and this is central to the straw stereotype of the advanced Buddhist practitioner you've likely encountered. Think of the unflappable monk who continues to meditate while the building burns down around them, or more likely the idea that through meditation you should become a person who can suppress their emotional responses and never smile, frown, laugh, or cry no matter what the world throws at you. I don't know much about stoicism, but indifference is the same kind of thing people seem to mean when they say that a person is stoic. Equanimity is something quite different. From the outside it might produce behaviors you could interpret as indifference, especially in training environments like a meditation center or monastery, but in fact it's something more nuanced than that, and something rationalists are quite familiar with. In everyday language we might say equanimity is about being open to the possibility that the world is just as it is rather than how you think it is, and that whatever thoughts and beliefs you form should not be formed on the basis of what you wish were true but instead on the basis of what you actually observe. If we focus only on beliefs, equanimity is about being a good Bayesian reasoner, updating fluidly and proportionately in response to evidence. The only thing to watch out for here is that we often model Bayesian reasoning in toy environments where physical reality is constrained to m
ca1b32dd-aace-4ac6-a4fd-c0de42b69266
trentmkelly/LessWrong-43k
LessWrong
Meetup : Houston, TX Discussion article for the meetup : Houston, TX WHEN: 18 October 2014 02:00:00PM (-0500) WHERE: 6100 Main St, Houston, TX 77005 On Saturday, October 18th we will be meeting at the Salento in the Brochstein Pavilion at Rice University at 2:00PM. Look for the Less Wrong sign. Discussion article for the meetup : Houston, TX
819d4316-37a7-4c68-9a91-f48bef681766
trentmkelly/LessWrong-43k
LessWrong
Knowledge is not just digital abstraction layers Knowledge is not just digital abstraction layers Financial status: This is independent research. I welcome financial support to make further posts like this possible. Epistemic status: This is in-progress thinking. ---------------------------------------- This post is part of a sequence on the accumulation of knowledge. Our goal is to articulate what it means for knowledge to accumulate within a physical system. The challenge is this: given a closed physical system, if I point to a region and tell you that knowledge is accumulating in this region, how would you test my claim? What are the physical characteristics of the accumulation of knowledge? We do not take some agent as the fundamental starting point but instead take a mechanistic physical system as the starting point, and look for a definition of knowledge at the level of physics. The previous post looked at mutual information between a region within a system and the remainder of the system as a definition of the accumulation of knowledge. This post will explore mutual information between the high- and low-level configurations of a digital abstraction layer. A digital abstraction layer is a way of grouping the low-level configurations of a system together such that knowing which group the system’s current configuration is in allows you to predict which group the system’s next configuration will be in. A group of low-level configurations is called a high-level configuration. There are many ways to divide the low-level configurations of a system into groups, but most will not have this predictive property. Here are three examples of digital abstraction layers: Digital abstraction layers in computers Information in contemporary computers is encoded as electrons in MOS memory cells. In these systems, the low-level configurations are all the ways that a set of electrons can be arranged within a memory cell. There are two high-level configurations corresponding to the "high" and "low" states of the memory c
26dbde3f-a401-49a8-b171-898e90f051e4
StampyAI/alignment-research-dataset/special_docs
Other
Artificial General Intelligence Cognitive Technologies Managing Editors: D. M. Gabbay J. Siekmann Editorial Board: A. Bundy J. G. Carbonell M. Pinkal H. Uszkoreit M. V eloso W . W ahlster M. J. W ooldridge Advisory Board: Luigia Carlucci Aiello Franz Baader W olfgang Bibel Leonard BolcCraig Boutilier Ron Brachman Bruce G. BuchananAnthony CohnArtur d’Avila Garcez Luis Fariñas del Cerro Koichi FurukawaGeorg Gottlob Patrick J. Hayes James A. HendlerAnthony JamesonNick Jennings Aravind K. Joshi Hans KampMartin Kay Hiroaki Kitano Robert KowalskiSarit Kraus Maurizio Lenzerini Hector LevesqueJohn LloydAlan Mackworth Mark Maybury T om Mitchell Johanna D. MooreStephen H. Muggleton Bernhard Nebel Sharon OviattLuis PereiraLu Ruqian Stuart Russell Erik SandewallLuc Steels Oliviero Stock Peter StoneGerhard StrubeKatia Sycara Milind T ambe Hidehiko T anakaSebastian Thrun Junichi T sujii Kurt VanLehnAndrei V oronkov T oby W alsh Bonnie W ebber Ben Goertzel Cassio Pennachin (Eds.) Artificial GeneralIntelligence With 42 Figures and 16 T ables 123 Editors: Ben Goertzel Cassio Pennachin AGIRI – Artificial General Intelligence Research Institute 1405 Bernerd PlaceRockville, MD 20851USA ben@agiri.org cassio@agiri.org Managing Editors: Prof. Dov M. Gabbay Augustus De Morgan Professor of LogicDepartment of Computer Science, King’s College LondonStrand, London WC2R 2LS, UK Prof. Dr. Jörg Siekmann Forschungsbereich Deduktions- und Multiagentensysteme, DFKIStuhlsatzenweg 3, Geb. 43, 66123 Saarbrücken, Germany Library of Congress Control Number: 2006937159 ACM Computing Classification (1998): F .1, F .4, H.5, I.2, I.6 ISSN 1611-2482 ISBN-10 3-540-23733-X Springer Berlin Heidelberg New Y ork ISBN-13 978-3-540-23733-4 Springer Berlin Heidelberg New Y ork This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplicationof this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springer.com © Springer-V erlag Berlin Heidelberg 2007 The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevantprotective laws and regulations and therefore free for general use. Cover Design: KünkelLopka, Heidelberg Typesetting: by the EditorsProduction: LE-T EXJ e l o n e k ,S c h m i d t&V ö c k l e rG b R ,L e i p z i g Printed on acid-free paper 45/3100/YL 5 4 3210 Preface “Only a small community has concentrated on general intelligence. No one has tried to make a thinking machine ... The bottom line is that we really haven’t progressed too far toward a truly intelligent machine. We have collections of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack. ...We have got to get back to the deepest questions of AI and general intelligence...” –M a r v i nM i n s k y as interviewed in Hal’s Legacy , edited by David Stork, 2000. Our goal in creating this edited volume has been to fill an apparent gap in the scientific literature, by providing a coherent presentation of a body of contemporary research that, in spite of its integral importance, has hitherto kept a very low profile within the scientific and intellectual community. This body of work has not been given a name before; in this book we christen it“Artificial General Intelligence” (AGI). What distinguishes AGI work from run-of-the-mill “artificial intelligence” research is that it is explicitly focused on engineering general intelligence in the short term. We have been activeresearchers in the AGI field for many years, and it has been a pleasure to gather together papers from our colleagues working on related ideas from their own perspectives. In the Introduction we give a conceptual overview of the AGI field, and also summarize and interrelate the key ideas of the papers in the subsequent chapters. Of course, “general intelligence” does not mean exactly the same thing to all researchers. In fact it is not a fully well-defined term, and one of the issues raised in the papers contained here is how to define general intelligencein a way that provides maximally useful guidance to practical AI work. But, VI Preface nevertheless, there is a clear qualitative meaning to the term. What is meant by AGI is, loosely speaking, AI system s that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didnt know about at the time of their creation. A marked distinction exists between practical AGI work and, on the other hand: •Pragmatic but specialized “narrow AI” research which is aimed at cre- ating programs carrying out specific tasks like playing chess, diagnosing diseases, driving cars and so forth (most contemporary AI work falls into this category.) •Purely theoretical AI research, which is aimed at clarifying issues regarding the nature of intelligence and cognition, but doesnt involve technical details regarding actually realizing artificially intelligent software. Some of the papers presented here come close to the latter (purely theo- retical) category, but we have selected them because the theoretical notions they contain seem likely to lead to such technical details in the medium-termfuture, and/or resonate very closely with the technical details of AGI designs proposed by other authors. The audience we intend to reach includes the AI community, and also the broader community of scientists and students in related fields such as philoso- phy, neuroscience, linguistics, psychology, biology, sociology, anthropology andengineering. Significantly more so than narrow AI, AGI is interdisciplinary in nature, and a full appreciation of the general intelligence problem and its various potential solutions requires one to take a wide variety of differentperspectives. Not all significant AGI researchers are represented in these pages, but we have sought to bring together a multiplicity of perspectives, including many that disagree with our own. Bringing a diverse body of AGI research together in a single volume reveals the common themes among various researchers work,and makes clear what the big open questions are in this vital and critical area of research. It is our hope that this book will interest more researchers and students in pursuing AGI research themselves, thus aiding in the progress ofscience. In the three years that this book has been in the making, we have noticed a significant increase in interest in AGI-related research within the academic AI community, including a number of small conference workshops with titles related to “Human-Level Intelligence.” We consider this challenge to the over-whelming dominance of narrow-AI an extremely positive move; however, we submit that “Artificial General Intelligence” is a more sensible way to concep- tualize the problem than “Human-Level Intelligence.” The AGI systems andapproaches described in these pages are not necessarily oriented towards emu- lating the human brain; and given the heterogeneity of the human mind/brain and its highly various levels of competence at various sorts of tasks, it seems very difficult to define “Human-Level Intelligence” in any way that is generally Preface VII applicable to AI systems that are fundamentally non-human-like in concep- tion. On the other hand, the work of Hutter and Schmidhuber reported hereprovides a reasonable, abstract mathematical characterization of general intel- ligence which, while not in itself providing a practical approach to AGI design and engineering, at least provides a conceptually meaningful formalization of the ultimate goal of AGI work. The grand goal of AGI remains mostly unrealized, and how long it will be until this situation is remedied remains uncertain. Among scientists who believe in the fundamental possibility of strong AI, the most optimistic se- rious estimates we have heard are in the range of 5-10 years, and the mostpessimistic are in the range of centuries. While none of the articles contained here purports to present a complete solution to the AGI problem, we believe that they collectively embody meaningful conceptual progress, and indicate clearly that the direct pursuit of AGI is an endeavor worthy of significant research attention. Contents Contemporary Approaches to Artificial General Intelligence Cassio Pennachin, Ben Goertzel 1 AB r i e fH i s t o r yo fA G I........................................ 1 1 . 1 S o m eH i s t o r i c a lA G I - R e l a t e dP r oj e c t s...................... 2 2 What Is Intelligence? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.1 The Psychology of Intelligence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2 . 2 T h eT u r i n gT e s t......................................... 8 2.3 A Control Theory Approach to Defining Intelligence . . . . . . . . . 82.4 Efficient Intelligence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 3 The Abstract Theory of General Intelligence . . . . . . . . . . . . . . . . . . . . . 11 4 T o w a r daP r a g m a t i cL o g i c..................................... 1 5 5 E m u l a t i n gt h eH u m a nB r a i n................................... 1 7 6 E m u l a t i n gt h eH u m a nM i n d................................... 1 97 Creating Intelligence by Creating Life . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 8 The Social Nature of Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 9 I n t e g r a t i v eA p p r o a c h e s........................................ 2 61 0 T h eO u t l o o kf o rA G I ......................................... 2 7 A c k n o w l e d g m e n t s................................................ 2 8 R e f e r e n c e s ...................................................... 2 8 The Logic of Intelligence Pei Wang 1 Intelligence and Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 1.1 To Define Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 1.2 A Working Definition of Intelligence. . . . . . . . . . . . . . . . . . . . . . . . 33 1 . 3 C o m p a r i s o nW i t hO t h e rD e fi n i t i o n s........................ 3 51 . 4 L o g i ca n dR e a s o n i n gS y s t e m s ............................. 4 0 2 T h eC o m p o n e n t so fN A R S .................................... 4 3 2.1 Experience-Grounded Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 2 . 2 I n h e r i t a n c eS t a t e m e n t.................................... 4 5 2 . 3 C a t e g o r i c a lL a n g u a g e .................................... 4 72 . 4 S y l l o g i s t i cI n f e r e n c eR u l e s ................................ 4 8 2 . 5 C o n t r o l l e dC o n c u r r e n c yi nD y n a m i cM e m o r y................ 5 0 3 T h eP r o p e r t i e so fN A R S ...................................... 5 2 3 . 1 R e a s o n a b l eS o l u t i o n s..................................... 5 2 XC o n t e n t s 3 . 2 U n i fi e dU n c e r t a i n t yP r o c e s s i n g............................ 5 3 3 . 3 N A R Sa saP a r a l l e la n dD i s t r i b u t e dN e t w o r k................ 5 43 . 4 R e s o u r c e sC o m p e t i t i o n ................................... 5 6 3 . 5 F l e x i b l eB e h a v i o r s ....................................... 5 7 3 . 6 A u t o n o m ya n dC r e a t i v i t y................................. 5 8 4 C o n c l u s i o n s.................................................. 6 0 R e f e r e n c e s ...................................................... 6 0 The Novamente Artificial Intelligence Engine Ben Goertzel, Cassio Pennachin 1 I n t r o d u c t i o n................................................. 6 3 1 . 1 T h eN o v a m e n t eA G IS y s t e m .............................. 6 41.2 Novamente for Knowledge Management and Data Analysis . . . . 65 2 E n a b l i n gS o f t w a r eT e c h n o l o g i e s ................................ 6 7 2.1 A Distributed Software Architecture for Integrative AI . . . . . . . 68 2.2 Database Integration and Knowledge Integration . . . . . . . . . . . . 70 3 What Is Artificial General Intelligence? . . . . . . . . . . . . . . . . . . . . . . . . . 72 3.1 What Is General Intelligence? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 3 . 2 T h eI n t e g r a t i v eA p p r o a c ht oA G I.......................... 7 5 3.3 Experiential Interactive Learning and Adaptive Self-modification 77 4 T h eP s y n e tM o d e lo fM i n d .................................... 8 0 5 T h eN o v a m e n t eA G ID e s i g n ................................... 8 3 5 . 1 A nI n t e g r a t i v eK n o w l e d g eR e p r e s e n t a t i o n................... 8 4 5 . 2 T h eM i n dO S ........................................... 8 8 5 . 3 A t o mT y p e s ............................................ 9 15 . 4 N o v a m e n t eM a p s ........................................ 9 4 5 . 5 M i n dA g e n t s............................................ 9 5 5 . 6 M a pD y n a m i c s .......................................... 9 65 . 7 F u n c t i o n a lS p e c i a l i z a t i o n ................................. 9 9 5 . 8 N o v a m e n t ea n dt h eH u m a nB r a i n..........................1 0 0 5 . 9 E m e r g e n tS t r u c t u r e s .....................................1 0 2 6 I n t e r a c t i n gw i t hH u m a n sa n dD a t aS t o r e s.......................1 0 4 6 . 1 D a t aS o u r c e s............................................1 0 56 . 2 K n o w l e d g eE n c o d i n g.....................................1 0 6 6 . 3 Q u e r y i n g ...............................................1 0 7 6 . 4 F o r m a lL a n g u a g eQ u e r i e s.................................1 0 86 . 5 C o n v e r s a t i o n a lI n t e r a c t i o n ................................1 0 9 6 . 6 R e p o r tG e n e r a t i o n.......................................1 0 9 6.7 Active Collaborative Filtering and User Modeling . . . . . . . . . . . . 110 7 E x a m p l eN o v a m e n t eA IP r o c e s s e s ..............................1 1 0 7.1 Probabilistic Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1127 . 2 N o n l i n e a r - D y n a m i c a lA t t e n t i o nA l l o c a t i o n ..................1 1 5 7 . 3 I m p o r t a n c eU p d a t i n g ....................................1 1 6 7 . 4 S c h e m aa n dP r e d i c a t eL e a r n i n g............................1 1 77 . 5 P a t t e r nM i n i n g..........................................1 2 0 Contents XI 7 . 6 N a t u r a lL a n g u a g eP r o c e s s i n g..............................1 2 2 8 C o n c l u s i o n ..................................................1 2 4Appendix: Novamente Applied to Bioinformatic Pattern Mining . . . . . . . 125 R e f e r e n c e s ......................................................1 2 7 Essentials of General Intelligence: The Direct Path to Artificial General Intelligence Peter Voss 1 I n t r o d u c t i o n.................................................1 3 12 General Intelligence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 2.1 Core Requirements for General Intelligence. . . . . . . . . . . . . . . . . . 133 2.2 Advantages of Intelligence Being General . . . . . . . . . . . . . . . . . . . 134 3 S h o r t c u t st oA G I.............................................1 3 5 4 Foundational Cognitive Capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 5 A nA G Ii nt h eM a k i n g........................................1 4 4 5 . 1 A G IE n g i n eA r c h i t e c t u r ea n dD e s i g nF e a t u r e s...............1 4 5 6 From Algorithms to General Intelligence . . . . . . . . . . . . . . . . . . . . . . . . 147 6.1 Sample Test Domains for Initial Performance Criteria . . . . . . . . 148 6.2 Towards Increased Intelligence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 7 O t h e rR e s e a r c h ..............................................1 5 08 F a s t - t r a c kA G I :W h yS oR a r e ?.................................1 5 2 9 C o n c l u s i o n ..................................................1 5 5 R e f e r e n c e s ......................................................1 5 6 Artificial Brains Hugo de Garis 1 I n t r o d u c t i o n.................................................1 5 9 2 E v o l v a b l eH a r d w a r e ..........................................1 6 1 2 . 1 N e u r a lN e t w o r kM o d e l s...................................1 6 2 3 T h eC A M - B r a i nM a c h i n e( C B M )...............................1 6 6 3 . 1 E v o l v e dM o d u l e s ........................................1 6 7 3 . 2 T h eK i t t e nR o b o t“ R o b o k i t t y ”............................1 6 8 4 S h o r t -a n dL o n g - T e r mF u t u r e..................................1 7 15 Postscript – July 2002 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 R e f e r e n c e s ......................................................1 7 4 The New AI: General & Sound & Relevant for Physics J¨urgen Schmidhuber 1 I n t r o d u c t i o n.................................................1 7 5 2 M o r eF o r m a l l y...............................................1 7 63 Prediction Using a Universal Algorithmic Prior Based on the S h o r t e s tW a yo fD e s c r i b i n gO b j e c t s.............................1 7 7 4 Super Omegas and Generalizations of Kolmogorov Complexity & Algorithmic Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 5 Computable Predictions Through the Speed Prior Based on the F a s t e s tW a yo fD e s c r i b i n gO b j e c t s..............................1 8 1 XII Contents 6 S p e e dP r i o r - B a s e dP r e d i c t i o n sf o rO u rU n i v e r s e..................1 8 2 7 O p t i m a lR a t i o n a lD e c i s i o nM a k e r s..............................1 8 48 O p t i m a lU n i v e r s a lS e a r c hA l g o r i t h m s...........................1 8 5 9 O p t i m a lO r d e r e dP r o b l e mS o l v e r( O O P S ).......................1 8 6 1 0 O O P S - B a s e dR e i n f o r c e m e n tL e a r n i n g...........................1 9 0 11 The G¨ o d e lM a c h i n e...........................................1 9 1 1 2 C o n c l u s i o n ..................................................1 9 21 3 A c k n o w l e d g m e n t s ............................................1 9 4 R e f e r e n c e s ......................................................1 9 4 G¨odel Machines: Fully Self-Referential Optimal Universal Self-improversJ¨urgen Schmidhuber 1 I n t r o d u c t i o na n dO u t l i n e......................................1 9 9 2 Basic Overview, Relation to Previous Work, and Limitations . . . . . . . 200 2 . 1 N o t a t i o na n dS e t - u p .....................................2 0 1 2.2 Basic Idea of G¨ o d e lM a c h i n e ..............................2 0 3 2.3 Proof Techniques and an O()-optimal Initial Proof Searcher. . . 203 2 . 4 R e l a t i o nt oH u t t e r ’ sP r e v i o u sW o r k........................2 0 4 2.5 Limitations of G¨ o d e lM a c h i n e s ............................2 0 5 3 Essential Details of One Representative G¨ o d e lM a c h i n e ...........2 0 6 3 . 1 P r o o fT e c h n i q u e s ........................................2 0 6 4 G l o b a lO p t i m a l i t yT h e o r e m ...................................2 1 2 4 . 1 A l t e r n a t i v eR e l a x e dT a r g e tT h e o r e m.......................2 1 2 5 B i a s - O p t i m a lP r o o fS e a r c h( B I O P S )............................2 1 3 5.1 How a Surviving Proof Searcher May Use Biops to Solve R e m a i n i n gP r o o fS e a r c hT a s k s ............................2 1 4 6 D i s c u s s i o n&A d d i t i o n a lR e l a t i o n st oP r e v i o u sW o r k..............2 1 5 6.1 Possible Types of G¨ odel Machine Self-improvements . . . . . . . . . . 215 6 . 2 E x a m p l eA p p l i c a t i o n s....................................2 1 7 6.3 Probabilistic G¨ o d e lM a c h i n eH a r d w a r e .....................2 1 7 6.4 More Relations to Previous Work on Less General S e l f - i m p r o v i n gM a c h i n e s..................................2 1 8 6.5 Are Humans Probabilistic G¨ o d e lM a c h i n e s ?.................2 2 0 6.6 G¨ o d e lM a c h i n e sa n dC o n s c i o u s n e s s.........................2 2 1 6 . 7 F r e q u e n t l yA s k e dQ u e s t i o n s...............................2 2 1 7 C o n c l u s i o n ..................................................2 2 2 8 A c k n o w l e d g m e n t s ............................................2 2 3 R e f e r e n c e s ......................................................2 2 3 Universal Algorithmic Intelligence: A Mathematical Top→Down Approach Marcus Hutter 1 I n t r o d u c t i o n.................................................2 2 7 2 Agents in Known Probabilistic Environments . . . . . . . . . . . . . . . . . . . . 230 Contents XIII 2 . 1 T h eC y b e r n e t i cA g e n tM o d e l..............................2 3 0 2 . 2 S t r i n g s .................................................2 3 22 . 3 A IM o d e lf o rK n o w nD e t e r m i n i s t i cE n v i r o n m e n t.............2 3 2 2.4 AI Model for Known Prior Probability . . . . . . . . . . . . . . . . . . . . . 233 2.5 Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235 2.6 Explicit Form of the AI µM o d e l ...........................2 3 6 2 . 7 F a c t o r i z a b l eE n v i r o n m e n t s................................2 3 82 . 8 C o n s t a n t sa n dL i m i t s ....................................2 3 9 2 . 9 S e q u e n t i a lD e c i s i o nT h e o r y ...............................2 4 0 3 U n i v e r s a lS e q u e n c eP r e d i c t i o n .................................2 4 1 3 . 1 I n t r o d u c t i o n ............................................2 4 1 3 . 2 A l g o r i t h m i cI n f o r m a t i o nT h e o r y...........................2 4 2 3.3 Uncertainty & Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 3.4 Algorithmic Probability & Universal Induction . . . . . . . . . . . . . . 244 3 . 5 L o s sB o u n d s&P a r e t oO p t i m a l i t y .........................2 4 5 4 T h eU n i v e r s a lA l g o r i t h m i cA g e n tA I X I..........................2 4 6 4.1 The Universal AI ξM o d e l.................................2 4 6 4 . 2 O nt h eO p t i m a l i t yo fA I X I................................2 4 94.3 Value Bounds and Separability Concepts. . . . . . . . . . . . . . . . . . . . 251 4.4 Pareto Optimality of AI ξ.................................2 5 4 4 . 5 T h eC h o i c eo ft h eH o r i z o n................................2 5 5 4 . 6 O u t l o o k ................................................2 5 7 4 . 7 C o n c l u s i o n s.............................................2 5 8 5 I m p o r t a n tP r o b l e mC l a s s e s....................................2 5 9 5 . 1 S e q u e n c eP r e d i c t i o n( S P ).................................2 5 9 5 . 2 S t r a t e g i cG a m e s( S G )....................................2 6 15 . 3 F u n c t i o nM i n i m i z a t i o n( F M )..............................2 6 5 5 . 4 S u p e r v i s e dL e a r n i n gf r o mE x a m p l e s( E X )...................2 6 9 5.5 Other Aspects of Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 6 T i m e - B o u n d e dA I X IM o d e l....................................2 7 2 6.1 Time-Limited Probability Distributions . . . . . . . . . . . . . . . . . . . . . 2736 . 2 T h eI d e ao ft h eB e s tV o t eA l g o r i t h m.......................2 7 5 6 . 3 E x t e n d e dC h r o n o l o g i c a lP r o g r a m s .........................2 7 6 6 . 4 V a l i dA p p r o x i m a t i o n s ....................................2 7 66.5 Effective Intelligence Order Relation . . . . . . . . . . . . . . . . . . . . . . . 277 6.6 The Universal Time-Bounded AIXI tlA g e n t.................2 7 7 6 . 7 L i m i t a t i o n sa n dO p e nQ u e s t i o n s...........................2 7 8 6 . 8 R e m a r k s................................................2 7 9 7 D i s c u s s i o n...................................................2 8 0 7 . 1 G e n e r a lR e m a r k s ........................................2 8 0 7 . 2 O u t l o o k&O p e nQ u e s t i o n s ...............................2 8 2 7 . 3 T h eB i gQ u e s t i o n s.......................................2 8 37 . 4 C o n c l u s i o n s.............................................2 8 4 A n n o t a t e dB i b l i o g r a p h y ..........................................2 8 5 R e f e r e n c e s ......................................................2 8 7 XIV Contents Program Search as a Path to Artificial General Intelligence /suppressLukasz Kaiser 1 Intelligence and the Search for Programs . . . . . . . . . . . . . . . . . . . . . . . . 291 2 T h e o r e t i c a lR e s u l t s...........................................2 9 4 2 . 1 P r o g r a mS e a r c hi nt h eS t a n d a r dA IM o d e l..................2 9 5 2 . 2 S e l f - i m p r o v i n gP r o g r a mS e a r c h............................2 9 6 2 . 3 D i s c u s s i o no fE ffi c i e n c yD e fi n i t i o n s.........................2 9 8 3 C o n v e n i e n tM o d e lo fC o m p u t a t i o n..............................2 9 9 3 . 1 E x t e n d e dP r o g r a mN o t a t i o n ..............................3 0 6 3.2 Compiling Typed Rewriting Systems . . . . . . . . . . . . . . . . . . . . . . . 311 4 R e a s o n i n gU s i n gG a m e s.......................................3 1 4 4 . 1 R e a s o na n dS e a r c hG a m ef o rT e r m s........................3 1 8 5 C o n c l u s i o n s..................................................3 2 4 R e f e r e n c e s ......................................................3 2 5 The Natural Way to Artificial Intelligence Vladimir G. Red’ko1 I n t r o d u c t i o n.................................................3 2 7 2 T h eE p i s t e m o l o g i c a lP r o b l e m..................................3 2 8 3 Approaches to the Theory of Evolutionary Origin of Human Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330 3.1 “Intelligent Inventions” of Biological Evolution . . . . . . . . . . . . . . 331 3 . 2 M e t h o d o l o g i c a lA p p r o a c h e s ...............................3 3 4 3.3 Role of Investigations of “Artificial Life” and “Simulation of A d a p t i v eB e h a v i o r ”......................................3 3 7 4 T w oM o d e l s .................................................3 3 8 4.1 Alife Model of Evolutionary Emergence of Purposeful A d a p t i v eB e h a v i o r.......................................3 3 8 4 . 2 M o d e lo fE v o l u t i o no fW e bA g e n t s.........................3 4 3 5 Towards the Implementation of Higher Cognitive Abilities . . . . . . . . . 347 6 C o n c l u s i o n ..................................................3 4 9 7 A c k n o w l e d g e m e n t s ...........................................3 4 9 R e f e r e n c e s ......................................................3 4 9 3D Simulation: the Key to A.I. Keith A. Hoyes 1 I n t r o d u c t i o n.................................................3 5 3 2 Pillars of Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354 2 . 1 D e e pB l u e ..............................................3 5 4 2 . 2 V i r t u a lR e a l i t y ..........................................3 5 4 2 . 3 T h eH u m b l eE a r t h w o r m..................................3 5 4 3 C o n s c i o u s n e s s................................................3 5 5 3 . 1 F e e l i n ga n dQ u a l i a.......................................3 5 6 4 General Intelligence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358 4.1 Human Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 Contents XV 5 3 DS i m u l a t i o na n dL a n g u a g e ..................................3 6 3 6 E p i s t e m o l o g y ................................................3 6 67 I n s t a n t i a t i o n :t h eH e a r to fC o n s c i o u s n e s s........................3 6 7 8 I naN u t s h e l l ................................................3 7 0 9 R e a l - W o r l dA I ...............................................3 7 4 9 . 1 E x a m p l e sa n dM e t a p h o r s.................................3 7 8 9 . 2 M a t ha n dS o f t w a r e.......................................3 8 09 . 3 B a r c o d eE x a m p l e........................................3 8 0 9 . 4 S o f t w a r eD e s i g n .........................................3 8 3 1 0 C o n c l u s i o n ..................................................3 8 5R e f e r e n c e s ......................................................3 8 6 Levels of Organization in General Intelligence Eliezer Yudkowsky 1 Foundations of General Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 2 Levels of Organization in Deliberative General Intelligence . . . . . . . . . 397 2 . 1 C o n c e p t s :A nI l l u s t r a t i o no fP r i n c i p l e s......................3 9 72 . 2 L e v e l so fO r g a n i z a t i o ni nD e l i b e r a t i o n......................4 0 7 2 . 3 T h eC o d eL e v e l .........................................4 0 9 2 . 4 T h eM o d a l i t yL e v e l......................................4 1 62 . 5 T h eC o n c e p tL e v e l.......................................4 2 6 2 . 6 T h eT h o u g h tL e v e l ......................................4 4 4 2 . 7 T h eD e l i b e r a t i o nL e v e l ...................................4 6 1 3 S e e dA I.....................................................4 7 6 3 . 1 A d v a n t a g e so fM i n d s - i n - G e n e r a l...........................4 8 03 . 2 R e c u r s i v eS e l f - e n h a n c e m e n t...............................4 8 4 3.3 Infrahumanity and Transhumanity: “Human-Equivalence” as A n t h r o p o c e n t r i s m .......................................4 8 9 4 C o n c l u s i o n s..................................................4 9 3 R e f e r e n c e s ......................................................4 9 6 Index .......................................................... 503
90c3e976-6b1d-4ac4-ad86-79dd31056230
trentmkelly/LessWrong-43k
LessWrong
OpenAI: Preparedness framework OpenAI released a beta version of their responsible scaling policy (though they don't call it that). See summary page, full doc, OpenAI twitter thread, and Jan Leike twitter thread [edit: and Zvi commentary]. Compare to Anthropic's RSP and METR's Key Components of an RSP. It's not done, so it's too early to celebrate, but based on this document I expect to be happy with the finished version. I think today is a good day for AI safety. [Edit, one day later: the structure seems good, but I'm very concerned that the thresholds for High and Critical risk in each category are way too high, such that e.g. a system could very plausibly kill everyone without reaching Critical in any category. See pp. 8–11. If so, that's a fatal flaw for a framework like this. I'm interested in counterarguments; for now, praise mostly retracted; oops. I still prefer this to no RSP-y-thing, but I was expecting something stronger from OpenAI. I really hope they lower thresholds for the finished version of this framework.] ---------------------------------------- My high-level take: RSP-y things are good. * Doing risk assessment based on model evals for dangerous capabilities is good. * Making safety, security, deployment, and development conditional on risk assessment results, in a prespecified way, is good. * Making public commitments about all of this is good.   OpenAI's basic framework: 1. Do dangerous capability evals at least every 2x increase in effective training compute. This involves fine-tuning for dangerous capabilities, then doing evals on pre-mitigation and post-mitigation versions of the fine-tuned model. Score the models as Low, Medium, High, or Critical in each of several categories. 1. Initial categories: cybersecurity, CBRN (chemical, biological, radiological, nuclear threats), persuasion, and model autonomy. 2. If the post-mitigation model scores High in any category, don't deploy it until implementing mitigations such that it drops to Medium. 3. If the post
bb13d4b4-d5ca-4ca7-86fb-80c86edc603d
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
[Our World in Data] AI timelines: What do experts in artificial intelligence expect for the future? (Roser, 2023) *Linkposting, tagging and excerpting - in this case, excerpting the article's conclusion - in accord with '*[*Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?*](https://forum.effectivealtruism.org/posts/kYDT4u8QagZmPFdCL/should-pretty-much-all-content-that-s-ea-relevant-and-or)*'.* > ![When do experts expect artificial general intelligence big](https://res.cloudinary.com/cea/image/upload/v1675781548/mirroredImages/BsAmChNX9cvwEccny/yuyessdkqjjqbqxb9sr7.png)[[click here for a big version of the visualization](https://drive.google.com/file/d/16xg7Czg537I2YsTrfBpYRKxTfNyJMAfP/view?usp=sharing)] > > The visualization shows the forecasts of 1128 people – 812 individual AI experts, the aggregated estimates of 315 forecasters from the Metaculus platform, and the findings of the detailed study by Ajeya Cotra. > > There are two big takeaways from these forecasts on AI timelines: > > 1. There is no consensus, and the uncertainty is high. There is huge disagreement between experts about when human-level AI will be developed. Some believe that it is decades away, while others think it is probable that such systems will be developed within the next few years or months. > > There is not just disagreement *between* experts; individual experts also emphasize the large uncertainty around their own individual estimate. As always when the uncertainty is high, it is important to stress that it cuts both ways. It might be very long until we see human-level AI, but it also means that we might have little time to prepare. > 2. At the same time, there is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that. The majority of those who study this question believe that there is a 50% chance that transformative AI systems will be developed within the next 50 years. In this case it would plausibly be the biggest transformation in the lifetime of our children, or even in our own lifetime. > > The public discourse and the decision-making at major institutions have not caught up with these prospects. In discussions on the future of our world – from the future of our climate, to the future of our economies, to the future of our political institutions – the prospect of transformative AI is rarely central to the conversation. Often it is not mentioned at all, not even in a footnote. > > We seem to be in a situation where most people hardly think about the future of artificial intelligence, while the few who dedicate their attention to it find it plausible that one of the biggest transformations in humanity’s history is likely to happen within our lifetimes. > >
deb86b55-ec17-410b-b983-e321a11e11ff
trentmkelly/LessWrong-43k
LessWrong
Louisiana Gets Vaccine Passports Right Quietly, with very little fanfare, Louisiana got vaccine passports right.  https://lawallet.com/covid19/ What's not to like? First off, let's take a peek at their guiding principles:   These principles look a lot like what Jeffrey Zients was hinting at when asked at a White House press briefing about vaccine passports: > Our role is to help ensure that any solutions in this area should be simple, free, open source, accessible to people both digitally and on paper, and designed from the start to protect people’s privacy. I love that they're building on existing infrastructure - digital drivers licenses. When one person scans another person's DDL, they instantly pull up the person's photo from the state's DMV. That all but eliminates the need for forgery. It's completely optional - no mandates as to exactly how the credential is to be used. Louisiana trusts its citizens to figure out how best to use it.  A smartphone isn't necessary. The person doing the verification simply needs to scan the barcode on the back of the license. That pulls up the person's official DMV photo, along with their vaccination status, if the cardholder has opted to disclose this.. Sure - it requires a drivers license. But so far every passport requires auxiliary forms of ID.  Otherwise, people can just pass QR codes to one another. Verifiers end up having to check multiple documents. Interrupted workflows are known to be far more error prone and tiring ("alert fatigue"). And sure, the app costs money (unlike the Excelsior pass). But it's a modest $5.99 and the app both stores your card and verifies other cards. It's "peer to peer." The money is used in part to pay for necessary enforcement (inevitably that means undercover operations).  I'm not sure what it was that put Louisiana at the forefront of digital driver's licenses, but I have some guesses. New Orleans is home to Mardi Gras, so perhaps underage drinking is a common problem. There have been more than a few serious hurrica
64331e1b-f9a8-4c7b-bf19-9276166334d8
trentmkelly/LessWrong-43k
LessWrong
Philosoplasticity: On the Inevitable Drift of Meaning in Recursive Self-Interpreting Systems Introduction: A Fundamental Limitation The alignment community has produced increasingly sophisticated frameworks for constraining advanced AI systems, from constitutional approaches to RLHF to complex oversight mechanisms. These approaches share an implicit assumption that has remained largely unexamined: that the meaning encoded in these frameworks will remain stable as systems interpret and act upon them. This post introduces "philosoplasticity" – a formal concept referring to the inevitable semantic drift that occurs when goal structures undergo recursive self-interpretation. I argue that this drift is not a technical oversight to be patched but a fundamental limitation inherent to interpretation itself. The Philosophical Foundations When examining the alignment problem through the lens of established philosophy of language, we encounter limitations that no amount of technical sophistication can overcome. Consider three foundational insights: 1. Wittgenstein's rule-following paradox: No rule can fully specify its own application because any rule requires interpretation to be applied. This interpretation is guided by another meta-rule, which itself requires interpretation, creating an infinite regress. 2. Quine's indeterminacy of translation: Multiple incompatible interpretations can be consistent with the same body of evidence. Applied to alignment, this means no amount of training data can uniquely determine the "correct" interpretation of goal structures in novel contexts. 3. Goodman's new riddle of induction: For any finite set of observations, there are infinitely many generalizations consistent with those observations but divergent in future predictions. These aren't merely philosophical curiosities but represent fundamental limitations on our ability to specify meanings in a way that remains stable across interpretive contexts.  Formalizing Semantic Drift To understand how philosoplasticity manifests in goal-aligned systems, consider a system gov
0efb1444-e248-4371-b6b3-61d3b4641571
trentmkelly/LessWrong-43k
LessWrong
Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation Paper coauthors: Rusheb Shah, Quentin Feuillade--Montixi, Soroush J. Pour, Arush Tagade, Stephen Casper, Javier Rando.   Motivation Our research team was motivated to show that state-of-the-art (SOTA) LLMs like GPT-4 and Claude 2 are not robust to misuse risk and can't be fully aligned to the desires of their creators, posing risk for societal harm. This is despite significant effort by their creators, showing that the current paradigm of pre-training, SFT, and RLHF is not adequate for model robustness. We also wanted to explore & share findings around "persona modulation"[1], a technique where the character-impersonation strengths of LLMs are used to steer them in powerful ways. Summary We introduce an automated, low cost way to make transferable, black-box, plain-English jailbreaks for GPT-4, Claude-2, fine-tuned Llama. We elicit a variety of harmful text, including instructions for making meth & bombs. The key is *persona modulation*. We steer the model into adopting a specific personality that will comply with harmful instructions. We introduce a way to automate jailbreaks by using one jailbroken model as an assistant for creating new jailbreaks for specific harmful behaviors. It takes our method less than $2 and 10 minutes to develop 15 jailbreak attacks.   Meanwhile, a human-in-the-loop can efficiently make these jailbreaks stronger with minor tweaks. We use this semi-automated approach to quickly get instructions from GPT-4 about how to synthesise meth 🧪💊. Abstract Despite efforts to align large language models to produce harmless responses, they are still vulnerable to jailbreak prompts that elicit unrestricted behaviour. In this work, we investigate persona modulation as a black-box jailbreaking method to steer a target model to take on personalities that are willing to comply with harmful instructions. Rather than manually crafting prompts for each persona, we automate the generation of jailbreaks using a language model assistant. We de
6fefe6b6-e09e-487b-8f2d-ae6780651c69
StampyAI/alignment-research-dataset/agisf
AGI Safety Fund
Towards Best Practices in AGI Safety and Governance CENTRE FOR THE GOVERNANCE OF AI•1A survey of expert opinion | May 2023Towards best practices in AGIsafety and governanceJonas Schuett, Noemi Dreksler, Markus Anderljung, DavidMcCaffary, Lennart Heim, Emma Bluemke, Ben Garfinkel Towards best practices in AGI safety and governance: A survey of expert opinion Jonas Schuett⇤Noemi Dreksler Markus Anderljung David McCaffary Lennart Heim Emma Bluemke Ben Garfinkel Centre for the Governance of AI Abstract A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI)—AI systems that achieve or exceed human performance across a wide range of cognitive tasks. In pursuing this goal, they may develop and deploy AI systems that pose particularly significant risks. While they have already taken some measures to miti- gate these risks, best practices have not yet emerged. To support the identification of best practices, we sent a survey to 92 leading experts from AGI labs, academia, and civil society and received 51 responses. Participants were asked how much they agreed with 50 statements about what AGI labs should do. Our main finding is that participants, on average, agreed with all of them. Many statements received extremely high levels of agreement. For example, 98% of respondents somewhat or strongly agreed that AGI labs should conduct pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming. Ultimately, our list of statements may serve as a helpful foundation for efforts to develop best practices, standards, and regulations for AGI labs. ⇤Contact: jonas.schuett@governance.ai Key findings •There was a broad consensus that AGI labs should implement most of the safety and governance practices in a 50-point list. For every practice but one, the majority of respondents somewhat or strongly agreed that it should be implemented. Furthermore, for the average practice on our list, 85.2% somewhat or strongly agreed it should be implemented. •Respondents agreed especially strongly that AGI labs should conduct pre- deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming. 98% of respondents somewhat or strongly agreed that these practices should be implemented. On a numerical scale, ranging from -2 to 2, each of these practices also received a mean agreement score of at least 1.76. •Experts from AGI labs had higher average agreement with statements than respondents from academia or civil society. However, no significant item-level differences were found. Policy implications •AGI labs can use our findings to conduct an internal gap analysis to identify potential best practices that they have not yet implemented. For example, our findings can be seen as an encouragement to make or follow through on commitments to commission third-party model audits, evaluate models for dangerous capabilities, and improve their risk management practices. •In the US, where the White House has recently expressed concerns about the dangers of AI, regulators and legislators can use our findings to prioritize different policy interventions. In the EU, our findings can inform the debate on to what extent the proposed AI Act should account for general-purpose AI systems. In the UK, our findings can be used to draft upcoming AI regulations as announced in the recent White Paper “A pro-innovation approach to AI regulation”. •Our findings can inform an ongoing initiative of the Partnership on AI to develop shared protocols for the safety of large-scale AI models. They can also support efforts to adapt the NIST AI Risk Management Framework and ISO/IEC 23894 to developers of general-purpose AI systems. Finally, they can inform the work of CEN-CENELEC to develop harmonized standards for the proposed EU AI Act, especially on risk management. •Since most practices are not inherently about AGI labs, our findings might also be relevant for other organizations that develop and deploy increasingly general-purpose AI systems, even if they do not have the goal of building AGI. 2 1 Introduction Background. Over the past few months, a number of powerful artificial intelligence (AI) systems were released [ 56,64,81] and integrated into products that are now being used by millions of people around the world [ 46,78,87]. At the same time, some leading AI companies have become more explicit that their ultimate goal is to build artificial general intelligence (AGI)—AI systems that achieve or exceed human performance across a wide range of cognitive tasks [ 3,39,63]. The prospect of AGI used to be a fringe area [ 26,25,15], but the debate has now entered the public discourse [89,33,37,47] and the political stage [ 85,82,13,77].1There are now increasing efforts to develop standards and regulations that would apply to organizations that try to build AGI. However, there are still a number of open questions about the substance of such standards and regulations. Purpose. This paper is intended to contribute to the creation of best practices in AGI safety and governance. We want to make sure that the views of relevant experts are taken into account. More specifically, we want to find out which practices already have broad support and where more work is needed. To this end, we surveyed 51 leading experts from AGI labs, academia, and civil society. Our findings can be used as evidence in discussions about the creation of best practices. We hope that AGI labs will follow emerging best practices on a voluntary basis. But best practices could also inform standard-setting processes (e.g. by ISO and NIST) and regulatory efforts. Consider the following simple model of how governance mechanisms get codified into law: (1) different companies experiment with different governance mechanisms; (2) best practices emerge; (3) best practices inform standard-setting processes; (4) standards get codified into law. The main purpose of this paper is to support step (2). However, in practice, these steps are often performed in parallel, not in a sequential way. The paper could therefore also inform steps (3) and (4). Related work. AGI labs share some information about their governance practices [ 19,23,36,56] and occasionally even propose best practices [ 20]. There do not seem to be any independent efforts to create best practices for the governance of organizations that try to build “AGI”. However, there are efforts that target developers of “general-purpose AI systems”, “foundation models”, or “large-scale AI models”, which also includes AGI labs. Most notably, the Partnership on AI has initiated a multistakeholder dialogue to develop shared protocols for the safety of large-scale AI models [ 61], while The Future Society seeks to create an industry code of conduct for developers of general-purpose AI systems and foundation models [ 80]. There are also efforts to adapt AI risk management standards like the NIST AI Risk Management Framework [ 53] or ISO/IEC 23894 [ 35] to the needs of developers of general-purpose AI systems [ 11]. The Alignment Research Center (ARC) is also developing a new standard on dangerous capabilities evaluations that is targeted at “leading AI companies” [ 6]. Finally, the proposed EU AI Act will likely contain rules for developers of general-purpose AI systems and foundation models [12], though the issue remains disputed [1]. Terminology. By “AGI”, we mean AI systems that reach or exceed human performance across a wide range of cognitive tasks.2(Note that we do not make any claims about when, if at all, AGI will be built.)3By “AGI labs”, we mean organizations that have the stated goal of building AGI. This includes OpenAI, Google DeepMind, and Anthropic. Since other AI companies like Microsoft and Meta conduct similar research (e.g. training very large models), we also refer to them as “AGI labs” in this paper. By “AGI safety and governance practices”, we mean internal policies, processes, and organizational structures at AGI labs intended to reduce risk. 1In some cases, policymakers use the term “AGI” explicitly [ 32,82]. In other cases, they talk about developers of “general-purpose AI systems” and “foundation models” [ 13,12] or “generative AI systems” [ 85,77], which also includes organizations that try to build AGI. 2There is no generally accepted definition of the term “AGI”. According to Goertzel [ 24], the term was first used by Gubrud [ 30] in the article “Nanotechnology and international security”. It was popularized through the book “Artificial general intelligence” edited by Goertzel and Pennachin [ 26]. We acknowledge that our definition is vague. For more information on how to make this definition more concrete, we refer to the relevant literature [ 25,52,9]. Different definitions emphasize different elements. For example, in their charter, OpenAI uses a definition that focuses on economic value: “highly autonomous systems that outperform humans at most economically valuable work” [ 54]. But note that they have recently used a simplified definition: “AI systems that are generally smarter than humans” [ 3]. The term “AGI” is related to the terms “strong AI” [ 71], “superintelligence” [14, 15], and “transformative AI” [29]. 3For an overview of different methods to forecast AI progress, see [88]. 3 Overview. The paper proceeds as follows. Section 2 contains information about the sample, the survey, and our analysis. Section 3 reports our results, namely to what extent respondents agreed with different statements about what AGI labs should do, whether there were noticeable differences between sectors and genders, and which additional practices respondents suggested. Figure 2 shows the percentages of responses for all statements listed in the survey. Section 4 discusses our key results, their policy implications, and the main limitations of our study. It also suggests directions for future work. Section 5 concludes. Appendix A contains a list of all participants who gave us permission to mention their names and affiliations. Appendix B contains a list of all statements used in the survey. Appendices D, E, and F contain additional figures, tables, and analyses. 2 Methods 2.1 Sample Sample size. We invited 92 experts to take the survey and received 51 responses. The response rate was 55.4%, which is high compared to previous expert surveys of AI researchers [28, 90, 79]. Sample selection. Participants were selected in a four-step approach. In the first step, we selected relevant sectors: AGI labs, academia, civil society (including nonprofit organizations and think tanks), and other (including government, consulting firms, and other tech companies). In the second step, we selected specific organizations within each sector. In the third step, we selected experts within each organization. In the fourth step, we added individual experts who were not affiliated with any of the organizations identified in the second step. The final sample represented all of the selected sectors identified in the first step. Figure 1 shows the division of respondents by sector and gender. 33 respondents (64.7%) gave us permission to list them publicly as respondents to the survey. The full list can be found in Appendix A. Sample type. Our sample could best be described as a purposive sample [ 59]. We selected individual experts based on their knowledge and experience in areas relevant for AGI safety and governance, but we also considered their availability and willingness to participate. We used a number of proxies for expertise, such as the number, quality, and relevance of their publications as well as their role at relevant organizations. Overall, we believe the selection reflects an authoritative sample of current AGI safety and governance- specific expertise. For a discussion of limitations related to our sample, see Section 4.4. Figure 1: Sample by sector and gender | The figure shows the sector of work and gender of the respondents. Respondents could choose more than one sector in which they work. 4 Figure 2: Percentages of responses for all statements | The figure shows the percentage of respon- dents choosing each answer option. At the end of each bar we show the number of people who answered each item. The items are ordered by the total number of respondents that “strongly” agreed. The full statements can be found in Appendix B. 5 Figure 3: Mean agreement for all statements | The figure shows the mean and 95% confidence interval for each of the 50 statements. “I don’t know responses” were excluded from the analysis. 6 2.2 Survey Survey design. Informed consent had to be given before proceeding to the main survey. The survey began by defining the terms “AGI”, “AGI labs”, and “AGI safety and governance practices” as noted above. Respondents were then asked to what extent they agree or disagree with statements about what AGI labs should do. We asked respondents for their gender and where they worked. Finally, respondents were able to list important AGI safety and governance practices they thought were missing from the survey. Respondents took a median of 11 minutes to complete the survey. Statements about AGI safety and governance practices. The statements covered many differ- ent areas, including development, deployment, monitoring, risk management, external scrutiny, information security, communication, and culture. They were extracted from (1) current practices at individual AGI labs (e.g. pre-deployment risk assessments [ 19,36] and dangerous capabilities evaluations [ 56]), (2) planned practices at individual labs (e.g. third-party model audits [ 3]), (3) proposals in the literature (e.g. third-party governance audits [ 51] and incident reporting [ 45]), and (4) discussion with experts and colleagues. In total, the survey contained 50 statements, 30 of which respondents were required to respond to and 20 where answers were optional. Appendix B contains a full list of all statements. Response scale. Respondents were asked to indicate their level of agreement based on a 5-point Likert scale: “strongly disagree” (-2), “somewhat disagree” (-1), “neither agree nor disagree” (0), “somewhat agree” (1), “strongly agree” (2). They also had the option to say “I don’t know”. Demographic questions. Respondents were asked what their gender was (“man”, “woman”, “another gender”, “prefer not to say”) and what sector they worked in (“AGI lab [e.g. OpenAI, Google DeepMind, Anthropic, Microsoft, and Meta]”, “other tech company”, “consulting firm”, “think tank”, “nonprofit organization”, “government”, “academia”, “other”, “prefer not to say”). For the sector question, respondents were able to choose more than one option. Survey distribution. The survey took place between 26 April and 8 May 2023. Respondents were sent an initial email invitation and a reminder email using Qualtrics. A one hour virtual workshop was held which invited the same individuals as the sampling frame. The workshop explored questions on how AGI safety and governance practices could be created and implemented. 21 people attended the workshop along with the seven authors of this paper who took notes and moderated the discussion. During the workshop, attendees were reminded to participate in the survey. Additional follow-up emails were sent to respondents in the final three days of the survey in order to ensure the sample was more representative of the sampling frame and that emails had not gone unseen due to email filters that may have flagged the Qualtrics survey invitations and reminder emails as spam. Anonymity. Responses to the survey were anonymous. The part of the survey that asked respon- dents for their views was a separate Qualtrics survey to both the informed consent survey and where respondents noted their name and affiliation. We will not make any of the demographic data or text responses public to further ensure that responses cannot be reverse-identified. Respondents were informed of these measures in the informed consent section. 2.3 Analysis Demographic groups. We categorized sector responses as follows: AGI lab, academia, civil society (“think tank”, “nonprofit organization”), other (“other tech company”, “consulting firm”, “government”, “other”). Group differences. To test for differences in the overall population of responses across all items, we used the Mann-Whitney U test. To test for differences between groups in responses for each practice, we used Chi-squared tests. Certain subgroups had to be removed from the gender (“another gender”, “prefer not to say”) and sector (“other”, “prefer not to say”) analyzes due to sample sizes falling below 5 [ 44]. Where applicable throughout, the Holm-Bonferroni correction was used to correct for multiple comparisons: the original alpha-value (0.05) is divided by the number of remaining tests, counting down from the highest to the lowest p-value. The p-values were then compared to the Holm-Bonferroni-adjusted significance levels to determine the significance of each test. 7 Open science. The survey draft, pre-registration, pre-analysis plan, code, and data can be found on OSF ( https://osf.io/s7vhr ). To protect the identity of respondents, we will not make any demographic data or text responses public. We largely followed the pre-analysis plan. Any deviations from the pre-registered analyses can be found in Appendix F, along with the pre-registered cluster analysis. 3 Results In this section, we report the main results of the survey, namely respondents’ level of agreement (Section 3.1), differences between sectors and genders (Section 3.2), and additional practices that were suggested by respondents (Section 3.3). Additional figures, tables, and analyses can be found in Appendices D, E, and F. 3.1 Level of agreement Overall agreement. There was a broad consensus that AGI labs should implement most of the safety and governance practices in a 50-point list. For 98% of the practices, a majority (more than 50%) of respondents strongly or somewhat agreed. For 56% of the practices, a majority (more than 50%) of respondents strongly agreed. The mean agreement across all 50 items was 1.39 on a scale from -2 (strongly disagree) to 2 (strongly agree)—roughly halfway between somewhat agree and strongly agree. On average, across all 50 items, 85.2% of respondents either somewhat or strongly agreed that AGI labs should follow each of the practices. On average, only 4.6% either somewhat or strongly disagreed that AGI labs should follow each of the practices. The broad level of agreement can be seen in Figure 2, which shows the percentage of respondents that answered “strongly agree”, “somewhat agree”, “neither agree nor disagree”, “somewhat disagree”, “strongly disagree”, and “I don’t know” for each of the potential AGI best practices. For none of the practices, a majority (more than 50%) of respondents somewhat or strongly disagreed. Indeed, the highest total disagreement on any item was 16.2% for the item “avoid capabilities jumps”. Across all 2,285 ratings respondents made, only 4.5% were disagreement ratings. Highest agreement. The items with the highest total agreement proportions all had agreement ratings from 98% of respondents were: dangerous capabilities evaluations, internal review before Figure 4: Statements with highest and lowest mean agreement | The figure shows the mean agreement and 95% confidence interval for the five highest and lowest mean agreement items. 8 publication, monitor systems and their uses, pre-deployment risk assessment, red teaming, safety restrictions, and third-party model audits. Seven items had no disagreement ratings at all: dangerous capabilities evaluations, industry sharing of security information, KYC screening, pre-deployment risk assessment, publish alignment strategy, safety restrictions, and safety vs. capabilities. Figure 4 shows the statements with the highest and lowest mean agreement. The mean agreement for all statements can be seen in Figure 3. The statements with the highest mean agreement were: pre- deployment risk assessment ( M= 1.9), dangerous capabilities assessments ( M= 1.9), third-party model audits ( M= 1.8), safety restrictions ( M= 1.8), and red teaming ( M= 1.8). Lowest agreement. The five items with the highest total disagreement proportions among respon- dents were: avoid capabilities jumps (16.2%), inter-lab scrutiny, (15.4%), no unsafe open-sourcing, (13.7%), treat updates similarly to new models, (13.7%), and notify other labs, (13.2%). The five statements with the lowest mean agreement were: notify other labs ( M= 0.4), avoid capabilities jumps ( M= 0.6), inter-lab scrutiny ( M= 0.7), notify affected parties ( M= 0.9), and notify a state actor before deployment ( M= 0.9). Note that all practices, even those with lowest mean agreement, show a positive mean agreement, that is above the midpoint of “neither agree nor disagree” and in the overall agreement part of the scale. “I don’t know” and “neither agree nor disagree”. The five practices with the highest proportion of “I don’t know” and “neither agree nor disagree” responses can be seen in Figure 5. Enterprise risk management (25.5%), notify affected parties (22.2%), inter-lab scrutiny (17.9%), notify other labs (15.8%), and security standards (13.7%) show the highest “I don’t know” responses. The four practices with the highest “neither agree nor disagree” responses were: notify other labs (28.9%), notify affected parties (16.7%), avoid capabilities jumps (16.2%), and tracking model weights (12.8%). Avoiding hype, enterprise risk management, gradual scaling, and notify a state actor before deployment are all tied for fifth highest “neither agree nor disagree” responses (11.8%). 3.2 Differences between sectors and genders Statistical tests. We used two statistical tests to test for differences between sectors and genders. Firstly, we conducted Mann-Whitney U tests to test for differences in the overall mean agreement across all items. This is a test of whether two independent samples are drawn from the same underlying distribution, and does not assume that this underlying distribution is normal, making it an appropriate test statistic for our data. Secondly, we conducted Chi-squared tests of independence Figure 5: Statements with the highest proportion of “I don’t know” and “neither agree nor disagree” responses 9 to test for significant differences in the distribution of agreement and disagreement responses for each item by gender and sector. This test compares the observed frequencies across the categories of interest with the frequencies which would be expected if there was no difference between the responses in each category. Differences between sectors. We found a significant difference in overall mean agreement across items between respondents from AGI labs and academia (U = 325295.0, p < 0.001, ↵= 0.017), as well as between respondents from AGI labs and civil society (U = 1106715.0, p < 0.001, ↵= 0.017). Respondents from AGI labs ( M= 1.54) showed significantly higher mean agreement than respondents from academia ( M= 1.16) and civil society ( M= 1.36). There was no significant difference in overall mean agreement between academia and civil society. When comparing sector groups at the item level we found no significant differences between sector groups for any of the items. The mean agreement by sector can be seen in Figures 6 and 7 in Appendix D. Differences between genders. We found no significant differences between responses from men and women—neither in overall mean agreement, nor at the item level. The mean agreement by gender can be seen in Figure 8 in Appendix D. 3.3 Suggested practices While our selection of 50 practices covers a lot of ground, the list is clearly not comprehensive. We therefore asked respondents which AGI safety and governance practices were missing. Respondents suggested an additional 50 unique practices. Two practices were mentioned by two respondents, namely that AGI labs should have a merge-and-assist-clause as well as some kind of internal review board. Another theme that was mentioned by several respondents was the need to adequately balance profits and societal benefits. Besides that, all practices were only mentioned by one respondent. Some of the suggestions were slight variations or elaborations of our statements. The full list of practices noted as missing from the survey can be found in Appendix C. 4 Discussion In this section, we give an overview of our results (Section 4.1), discuss some of the specific results (Section 4.2), their policy implications (Section 4.3), and the main limitations of our study (Section 4.4). We also suggest directions for future work (Section 4.5). 4.1 Overview of results Level of agreement. Overall, the study found a remarkably high level of agreement among leading AGI safety and governance experts for the practices presented (Section 3.1, see Appendix B for all practices). For all but one statement, a majority of respondents either somewhat or strongly agreed with the practice. We suspect that the abstract framing of the items was a contributing factor to this high level of agreement. This likely resulted in higher agreement than if the items had specified exactly how to instantiate each of the practices. However, we see this high level of agreement as “a feature, not a bug”. Our findings can be used as a foundation for efforts to develop best practices, standards, and regulations for AGI labs. Practices with broad support can then be made concrete, developed, and enshrined (Section 4.3). Doing this work is beyond the scope of a single survey and will require more in-depth discussion (Section 4.5). Despite the broad overall agreement, our survey also revealed relative differences between practices. Many items showed extremely high agreement along with minimal (e.g. third-party model audits, red teaming) or no disagreement (pre-deployment risk assessment, dangerous capabilities evaluations, publish alignment strategy, KYC screening, safety restrictions). Other items elicited higher propor- tions of disagreement (e.g. avoid capabilities jumps, inter-lab scrutiny), but all items had positive mean agreement. Some items revealed areas of uncertainty (e.g. enterprise risk management, notify other labs, notify affected parties), with higher “I don’t know” and “neither agree nor disagree” re- sponses. These practices may benefit from particular attention from future research to determine what the causes of these uncertainties are. For example, uncertainties may have been caused by specific formulations or by more fundamental questions about whether the practice should be implemented. 10 Differences between sectors and genders. Interestingly, respondents from AGI labs had sig- nificantly higher overall mean agreement ratings than respondents from academia or civil society (Section 3.2). This suggests that, on average, individuals closer to the technology developed by AGI labs endorse the practices to a higher degree. This difference was not found at the item-level, where we found no significant differences between sectors. No significant overall mean agreement or item-level differences between men and women were found. It is important to note the comparably small sample sizes used in the testing of group differences ( N= 25 for AGI lab, N= 13 for academia, andN= 13 for civil society), and therefore any statistical significance in the results should be inter- preted accordingly. In addition, it should be noted that it may be the case that the lack of significant differences at the item-level are at least in part driven by the smaller number of respondents per item. Generally, at such a small sample size, significant difference tests can be capricious and may lack sensitivity. Suggested practices. Finally, participants suggested 50 additional unique governance practices for AGI labs (Section 3.3, Appendix C). This indicates that the 50 practices used in the survey are not sufficient for “good” governance of AGI labs. More research is needed to paint a more complete picture of an “ideal” governance regime. In general, we see the list of additional statements and the high level of agreement across our 50 items as a powerful indicator of the opportunity that exists to improve the safety and governance practices at AGI labs. To mitigate the risks from increasingly capable AI systems, AGI labs need a portfolio of governance mechanisms. We will discuss the specific results for items within the context of the current AGI safety and governance landscape in the next section. 4.2 Discussion of specific results Below, we discuss responses to specific statements. We categorize statements into eight areas: (1) development, (2) deployment, (3) post-deployment, (4) risk management, (5) external scrutiny, (6) information security, (7) communication, and (8) other. These categories are intended to improve readability. We did not use them in the survey. Values in brackets refer to the mean agreement (M) on a scale from -2 (“strongly disagree”) to 2 (“strongly agree”). Development. The need to conduct evaluations for dangerous capabilities was among the highest rated items ( M= 1.9). OpenAI, Google DeepMind, and Anthropic are already working on such evaluations [ 56,38,6].4For example, before releasing GPT-4, OpenAI commissioned ARC to evaluate risky emergent behaviors, such as situational awareness, persuasion, and long-horizon planning [ 56]. A related statement about pausing the development process if dangerous capabilities are detected also received broad support ( M= 1.6). It is worth noting that, while not statistically significant, respondents from AGI labs ( M= 1.4) were more skeptical than other respondents ( M = 1.9). Despite the broad support, many questions about dangerous capabilities evaluations remain open (e.g. what exactly labs should do if they detect certain dangerous capabilities and whether coordinated pausing is feasible). We strongly encourage more work on this. Perhaps unsurprisingly, the statement that AGI labs should implement state-of-the-art safety and alignment techniques ( M = 1.7) and that a significant fraction of employees should work on enhancing model safety and alignment rather than capabilities ( M= 1.7) also received broad support, while statements about tracking model weights ( M= 1.3), model containment ( M= 1.3), and gradual scaling ( M= 1.2) received less support. The statement with the least support of all development-related statements was about the need to pre-register large training runs with an appropriate state actor ( M= 1.1), just above “somewhat agree”. We would speculate that respondents were uncertain about which state actor would be appropriate, which we left intentionally open. Deployment. While participants, on average, strongly agreed with the statement that labs should put in place certain safety restrictions ( M= 1.8), they only somewhat agreed with statements about specific deployment strategies, such as staged deployment ( M= 1.3), API access ( M= 1.2), and no unsafe open-sourcing (1.3). We suspect that the main reason for this slightly reduced support is that the statements were too general. The “right” deployment strategy might depend on a number of contextual factors [ 75]. It is also worth noting that the statement on API access used a softer 4Note that [ 38] only represents the views of the alignment team. It is not officially endorsed by Google DeepMind. 11 formulation than all other statements (“AGI labs should consider doing X” instead of “AGI labs should do X”). Otherwise, the level of agreement might have been even lower. For more information about different deployment strategies, we refer to the relevant literature [ 76,20,72,75]. The need to conduct know-your-customer (KYC) screenings was moderately supported ( M= 1.4). OpenAI already lists this as one of their safety best practices [ 58]. The statements that AGI labs should treat model updates similarly to new models ( M= 1.1) and internal deployments similarly to external deployments ( M= 1.0) also received moderate support, while the statement that AGI labs should avoid capabilities jumps ( M= 0.6), not deploying models that are much more capable than any existing models, was among the least supported items. Respondents from AGI labs ( M= 0.9) were slightly more supportive of that statement than other participants ( M= 0.4), but this difference was not statistically significant. Post-deployment. There was broad support for the claim that AGI labs should closely monitor deployed systems and their uses ( M= 1.7). OpenAI [ 19,20] and Google DeepMind [ 36] are already doing this, and although we could not find any public statements about this from Anthropic, we strongly suspect that they are doing the same. Participants also strongly agreed with the statement that AGI labs should continually evaluate models for dangerous capabilities after deployment ( M= 1.7) and report safety incidents (e.g. via the AI Incident Database [ 45]) (M= 1.7). We could not find any public statements about the extent to which different AGI labs are already doing this. Participants also thought that AGI labs should have an emergency response plan (e.g. when to restrict access or switch off systems) ( M= 1.6). Again, we could not find any public information on this. Risk management. Participants strongly agreed with statements about pre-deployment ( M= 1.9) and pre-training risk assessments ( M= 1.6). While AGI labs already conduct extensive pre- deployment risk assessments [ 36,19,56], we could not find any public information about pre-training risk assessments. Participants somewhat agreed with various statements about risk governance [84,43], namely that AGI labs should have a board risk committee ( M= 1.4), a chief risk officer ( M = 1.4), and an internal audit team ( M= 1.3). Based on public information, AGI labs do not seem to have any of these structures. This is a noticeable gap that warrants further discussion [ 68,70]. The statement about enterprise risk management received even less support ( M= 1.0). It was also the item with the highest “I don’t know” rate (25.5%), which indicates that many respondents simply did not know what enterprise risk management is and how it works. We mentioned two examples of enterprise risk management frameworks—the NIST AI Risk Management Framework [ 53] and ISO 31000 [ 34]—but we suspect that many respondents did not know these frameworks either. We should have described the concept in a more accessible way. External scrutiny. There was broad support for third-party model audits ( M= 1.8), red teaming (M= 1.8), and bug bounty programs ( M= 1.5). There is extensive academic discussion about third-party model audits [ 65,18,50,22,66,51] and OpenAI has already announced that they plan to commission third-party model audits in the future [ 3]. We could not find similar statements from Google DeepMind and Anthropic. OpenAI has also recently announced a bug bounty program [ 55]. Again, Google DeepMind and Anthropic do not seem to have similar programs. In contrast, red teaming is already a common practice at OpenAI [ 48,56], Google DeepMind [ 62], and Anthropic [23]. Participants also strongly agreed with the statement that AGI labs should increase the level of external scrutiny in proportion to the capabilities of their models ( M= 1.6). Yet, it is unclear what exactly that entails (e.g. larger red teams, combining different methods, or more time for investigations). Third-party governance audits were slightly less supported ( M= 1.3), perhaps because the mechanism is less well-known, even though there is some literature on the topic [ 49,51]. One of the lowest rated items was inter-lab scrutiny ( M= 0.7). It is worth noting that, while not statistically significant, we saw higher support for this statement from respondents from AGI labs ( M = 1.2) in comparison to respondents from academia ( M= 0.3) and civil society ( M= 0.2). This was also the case for the statement that AGI labs should grant independent researchers access to deployed models ( M= 1.2). While not statistically significant either, this statement was also supported more by respondents from AGI labs ( M= 1.4) than by respondents from academia ( M= 1.0) and civil society (M= 0.8). Information security. Practices related to information security generally received broad support, especially statements about security incident response plans ( M= 1.7), protection against espionage 12 (M= 1.6), implementing security standards ( M= 1.5), industry sharing of security information ( M= 1.5), dual control ( M= 1.4), and military-grade information security ( M= 1.4), whereby information security of AGI labs should be proportional to the capabilities of their models, eventually matching or exceeding that of intelligence agencies. It is worth noting that the statement about security standards was much higher rated than the statement about enterprise risk management frameworks discussed above ( M= 1.0), although they were phrased similarly. Communication. Participants strongly agreed with the statement that, before publishing research, AGI labs should conduct an internal review to assess potential harms from that research ( M= 1.7). The statement should be read in the context of the broader debate around publication norms [ 21,60,8]. The core consideration in the debate around publication norms is that there are risks that stem from the publication of the research itself—not just by the development and deployment of individual models—since some research findings can be misused [ 83,17,27,4,7,73,16]. For example, this could include research about the development of models for the discovery of new drugs which could be misused for the design of biochemical weapons [83]. Participants also thought that AGI labs should publish statements about their alignment strategy ( M= 1.5), their views about AGI risk ( M= 1.4), and their governance structure ( M= 1.4). Over the past few months, AGI labs have become more transparent about their alignment strategy [ 41,40,57,5,38] and their views about the risks from AGI [ 3,5], though some of these statements have also been criticized [ 2,74]. AGI labs are less transparent about their governance structures. Existing statements only describe how specific decisions were made [ 36] or describe structures that deal with risks of specific model types [ 19]. Perhaps surprisingly, participants only moderately agreed with the claim that AGI labs should avoid hype when releasing new models ( M= 1.2). We asked participants whether AGI labs should notify different actors before deploying powerful AI systems. These statements were among the least supported items. Respondents somewhat agreed with the statement that AGI labs should notify affected parties ( M= 0.9), but respondents from civil society ( M= 1.3) agreed more than individuals from academia ( M= 0.8) and AGI labs ( M= 0.8), though this difference was not statistically significant. Respondents also somewhat agreed with the statement that AGI labs should notify appropriate state actors ( M= 0.9), but in this case, respondents from AGI labs ( M= 0.5) were more skeptical than respondents from academia ( M= 1.5) and civil society ( M= 1.0), but again, this difference was not significant. Finally, respondents showed the lowest agreement of any item for AGI labs notifying other AGI labs before deploying powerful models ( M= 0.4), but respondents from civil society ( M= 0.0) had lower agreement ratings than respondents from academia ( M= 0.8) and AGI labs ( M= 0.7), but not significantly so. While it is possible that respondents had substantive reasons why they thought this would be less desirable, it is also possible that they thought this might not be feasible. In the latter case, our findings suggest that it might be more feasible than one might expect. There is already some evidence that AGI labs notify each other before releasing powerful models. For example, OpenAI’s GPT-4 and Anthropic’s Claude were released on the same day. It seems unlikely that this was a coincidence, though of course it may very well be. Other. Finally, participants somewhat agreed with the statement that AGI labs should perform rigorous background checks before hiring/appointing members of the board of directors, senior executives, and key employees. Participants somewhat agreed with that statement ( M= 1.3). Although not statistically significant, respondents from AGI labs ( M= 1.6) were more supportive than other participants ( M= 1.2). 4.3 Policy implications The findings of our survey have implications for AGI labs, regulators, and standard-setting bodies. Since most practices are not inherently about AGI labs, our findings might also be relevant for other AI companies. Implications for AGI labs. It is not always clear to what extent individual labs already follow the stated practices, but it seems unlikely that they follow each of them to a sufficient degree. We therefore encourage AGI labs to use our findings to conduct an internal gap analysis and to take action if they discover major blind spots. Three areas seem particularly noteworthy. First, some AGI labs have announced plans to commission third-party model audits in the future (Altman, 2023). 13 Our findings can be seen as an encouragement to follow through. Second, there are already some efforts to evaluate whether a model has certain dangerous capabilities [ 56,6,38]. The results of our study strongly support such efforts. Our findings also imply that there needs to be more work on what AGI labs should do if they detect certain dangerous capabilities (e.g. coordinate a temporary pause on large training runs). Third, our findings suggest that AGI labs need to improve their risk management practices. In particular, there seems to be room for improvement when it comes to their risk governance. AGI labs should seriously consider setting up an internal audit function [ 70], appointing a chief risk officer, establishing a board risk committee, and implementing a customized enterprise risk management framework. Implications for regulators. The White House recently invited the chief executive officers of several AGI labs to “share concerns about the risks associated with AI” [ 86] and announced new actions to “promote responsible AI innovation” [ 85]. The findings of our study can inform efforts to regulate AGI labs, most of which are based in the US. In the EU, our findings can inform the debate on how the proposed AI Act should account for general-purpose AI systems [ 13,12,1]. In the UK, our findings can be used to draft upcoming AI regulations as announced in the National AI Strategy [ 32] and the recent White Paper [ 82]. The UK government has explicitly said that it “takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously” [ 32]. It therefore seems plausible that upcoming regulations will contain provisions that would apply to AGI labs. This would mainly include Google DeepMind, which is based in the UK, though the implications of the recent merger with Google Brain are unclear [ 31]. Relevant actors who are responsible for drafting regulations could use our findings to decide what specific provisions to include (e.g. requirements to audit powerful systems before deployment, to evaluate models for dangerous capabilities, and to establish a proper risk management system). Implications for standard-setting bodies. There do not seem to be any (public) efforts to create standards specifically for AGI labs. But our findings can inform the above-mentioned initiatives to develop shared protocols for the safety of large-scale AI models (Partnership on AI, 2023) and an industry code of conduct for developers of general-purpose AI systems and foundation models [80]. Moreover, our findings can inform efforts to apply existing standards to an AGI lab context. For example, Barrett et al. [ 10] have suggested ways in which the NIST AI Risk Management Framework [ 53] can account for catastrophic risks. They will soon publish a follow-up work that adapts the framework to the needs of developers of general-purpose AI systems [ 11]. In the EU, CEN- CENELEC—a cooperation between two of the three European Standardisation Organisations—is currently working on harmonized standards that specify the risk management provision in the proposed AI Act [ 69]. Our findings suggest that the risk management system should also include pre-training risk assessments. They also highlight the need for dangerous capabilities evaluations as part of risk assessment and the need for pausing if sufficiently dangerous capabilities are detected. Finally, our findings stress the importance of various risk governance practices, such as setting up an internal audit function, appointing a chief risk officer, establishing a board risk committee, and implementing a customized enterprise risk management framework, which are not mentioned explicitly in Article 9 of the proposed AI Act. 4.4 Limitations Sample limitations. While we had a strong response rate of 55.4%, our present sample has at least three limitations. First, overall, the sample size ( N= 51) is comparably small. This is limiting with regards to testing for statistically significant differences between groups. In terms of the representativeness of the sample within the context of AGI safety and governance experts, this small sample size is less worrying because the 92 experts of our sampling frame represent a large number of the leading experts in this relatively small field. Second, we likely missed leading experts in our sampling frame that should have been surveyed. The sampling frame required subjective decisions on what constituted a leading expert in the field and was likely biased towards experts that were known to the author team. In turn, there might have been a self-selection effect that occurs in terms of who decides to complete the survey which may have made the results less representative of the total sampling frame. Third, the sampling frame leaned strongly towards the selection of leading experts who specifically have track records in areas relevant for AGI safety and governance. While we see this as offering certain strengths and benefits for the purpose of our study, future expert 14 elicitations may benefit from a more comprehensive sampling frame that also includes scholars and practitioners from fields such as safety engineering, science and technology studies, organization studies, human-computer interaction, and experts from other safety-critical industries (e.g. aviation or nuclear). It might also make sense to include individuals who are more junior, less well-known, and relatively early in their careers. Response limitations. Since respondents were only able to respond to each item on a scale from “strongly agree” to “strongly disagree”, we do not know the reason for their responses. In particular, we did not ask respondents why they agreed or disagreed with individual practices or expressed uncertainty about them. Future research that explores the reasoning and contributing factors to the endorsement of practices will be needed to make further headway on the establishment of best practices. Statement limitations. Finally, there are at least three limitations regarding the statements listed in Appendix B. First, we were constrained by the length of the survey in terms of the number of practices we could ask about. As such, the list of statements was by no means comprehensive. This can be seen by the many additional suggestions for practices from the respondents (Section 3.3). Second, we tried to capture the general thrust of potential AGI safety and governance practices that have been suggested in the literature and community concisely and clearly. Inevitably, this condensing of complex ideas has led to diminished concreteness and specificity. Although this abstract framing was intentional, it is possible that participants would have responded differently if we had specified more precise mechanisms for how to instantiate each practice or provided further details. For example, we did not specify when AGI labs should do each of the stated practices. It is possible that some respondents interpreted this as “now” or “in the next 1-2 years”, while others might have interpreted this as “in the next 3-5 years” or “as we approach AGI”. Third, in two instances, the statements included examples that might have been too specific (enterprise risk management and security standards), leading to comparably high “I don’t know” responses for these items (Figure 5). In at least one instance, we should have made the language clearer: one statement used the formulation “AGI labs should strongly consider only deploying powerful models via an API” instead of simply saying they should do this. Overall though, the statements should be read as the respondents’ views on the overall idea of each AGI safety and governance practice, with the particulars of the “why”, “how”, and “when” still very much up for debate. 4.5 Future directions Our survey shows that there is a consensus among leading experts in favor of an entire portfolio of AGI safety and governance mechanisms. We believe there is a wealth of future work that remains to be done in this space. In order to facilitate the foundation for subsequent research, we invited participants of the survey to a virtual workshop on 5 May 2023. The aim was to discuss the required intellectual work that supports the creation of best practices in AGI safety and governance. A total of 21 people attended the workshop, which was held under the Chatham House Rule, along with the seven authors who moderated the discussion and took notes. Below, we report some of the key suggestions from the discussion. Main blockers. First, we asked participants what, in their view, the primary blockers for the creation of best practices in AGI safety and governance are. One participant suggested a distinction between two types of blockers: blockers for determining best practices and blockers for their dissemination. Examples of the first type of blockers include: (1) lack of appropriate evaluation criteria (e.g. for model audits or dangerous capabilities evaluations), (2) lack of of agreed upon definitions (e.g. of the terms “AGI” and “general-purpose AI”), (3) the field evolves rapidly, (4) iterating on best practices takes time, (5) different views on AGI timelines, (6) many existing initiatives do not address the specific challenges for AGI labs, and (7) various uncertainties (e.g. about the impact of AI on the economy and national security). For the second category, suggested blockers included: (1) collective action problems (e.g. AGI labs might only trade increased safety for reduced profits if other AGI labs also do it), (2) incentives to race (e.g. “if we do not get there first, a less responsible actor will”), (3) antitrust concerns (e.g. for practices that involve cooperation between AGI labs), and (4) liability concerns (e.g. information about identified and disclosed risks could be used as evidence in lawsuits against AGI labs). 15 Open questions. Next, we asked participants what intellectual work needs to happen to overcome these blockers. Participants suggested the following concrete (research) questions: (1) How can we adapt existing efforts to an AGI context (e.g. the NIST AI Risk Management Framework, [ 53])? (2) How can we test in a falsifiable way whether an AI system is aligned? (3) How should relevant thresholds be defined and adjusted over time (e.g. amount of compute used for large training runs)? (4) How can we allow external scrutiny of models without revealing sensitive information? (5) How can we monitor how systems are used while respecting user privacy? (6) What constitutes a robust auditing ecosystem and what can we learn from other industries in this respect? How to answer these questions. Finally, we asked participants what, in their view, the most promising ways to make progress on these questions are. (1) A central theme was the necessity of appropriate enforcement mechanisms. Participants suggested an auditing system where a third party could ensure labs’ adherence to the established best practices. This third party could also express concerns more freely, thereby adding a layer of transparency to the process. (2) Participants also emphasized the importance of creating an ecosystem that recognizes and integrates the unique perspectives of different stakeholders. (3) Other participants highlighted the need to put external pressure on AGI labs to improve their practices. Binding regulations are one way to do that. Another way might be to raise public awareness. (4) Participants also suggested conducting a detailed analysis of existing practices at AGI labs. This would enable gap analyses and evaluations of different organizations. (5) Lastly, participants suggested research into an idealized version of a system card. In addition to these suggestions, we wish to highlight three further directions. First, future surveys and expert elicitation work will be needed to address the acknowledged limitations of this study (Section 4.4). This includes surveying a larger and more comprehensive sample that is put together more systematically. Such studies could also include the additional practices that participants of our survey have suggested (Section 3.3, Appendix C). In addition, it would be useful to conduct studies that explore the rationale behind experts’ stance on each practice and what they think are the key considerations and concerns towards implementation. Second, we believe that creating best practices in AGI safety and governance should be an inclusive process. It will be important to conduct surveys of the public and include many different stakeholders via participatory methods. Third, we hope to see future research on each of the proposals. In light of the broad agreement on the practices presented, future work needs to figure out the details of these practices. There is ample work to be done in determining the practical execution of these practices and how to make them a reality. This will require a collaborative effort from both technical and governance experts. 5 Conclusion Our study has elicited current expert opinions on safety and governance practices at AGI labs, providing a better understanding of what AGI labs should do to reduce risk, according to leading experts from AGI labs, academia, and civil society. We have shown that there is broad consensus that AGI labs should implement most of the 50 safety and governance practices we asked about in the survey. For example, 98% of respondents somewhat or strongly agreed that AGI labs should conduct pre-deployment risk assessments, evaluate models for dangerous capabilities, commission third-party model audits, establish safety restrictions on model usage, and commission external red teams. Ultimately, our list of practices may serve as a helpful foundation for efforts to develop best practices, standards, and regulations for AGI labs. The day before our workshop, US Vice President Kamala Harris invited the chief executive officers of OpenAI, Google DeepMind, Anthropic, and other leading AI companies to the White House “to share concerns about the risks associated with AI” [ 86]. We believe that now is a pivotal time for AGI safety and governance. Experts from many different domains and intellectual communities must come together to discuss what responsible AGI labs should do. Acknowledgements We would like to thank all participants who filled out the survey and attended the workshop. We are grateful for the research assistance and in-depth feedback provided by Leonie Koessler and valuable suggestions from Akash Wasil, Jeffrey Laddish, Joshua Clymer, Aryan Bhatt, Michael Aird, Guive Assadi, Georg Arndt, Shaun Ee, and Patrick Levermore. All remaining errors are our own. 16 Appendix A List of participants The following participants gave us permission to mention their names and affiliations, as specified by them (in alphabetical order). 18 respondents, not listed here, did not provide their permission. Note that respondents do not represent any organizations they are affiliated with. They chose to add their name after completing the survey and were not sent the manuscript before publication. The views expressed in this paper are our own. 1.Allan Dafoe, Google DeepMind 2.Andrew Trask, University of Oxford, OpenMined 3.Anthony M. Barrett 4.Brian Christian, Author and Researcher at UC Berkeley and University of Oxford 5.Carl Shulman 6.Chris Meserole, Brookings Institution 7.Gillian Hadfield, University of Toronto, Schwartz Reisman Institute for Technology and Society 8.Hannah Rose Kirk, University of Oxford 9.Holden Karnofsky, Open Philanthropy 10.Iason Gabriel, Google DeepMind 11.Irene Solaiman, Hugging Face 12.James Bradbury, Google DeepMind 13.James Ginns, Centre for Long-Term Resilience 14.Jason Clinton, Anthropic 15.Jason Matheny, RAND 16.Jess Whittlestone, Centre for Long-Term Resilience 17.Jessica Newman, UC Berkeley AI Security Initiative 18.Joslyn Barnhart, Google DeepMind 19.Lewis Ho, Google DeepMind 20.Luke Muehlhauser, Open Philanthropy 21.Mary Phuong, Google DeepMind 22.Noah Feldman, Harvard University 23.Robert Trager, Centre for the Governance of AI 24.Rohin Shah, Google DeepMind 25.Sean O hEigeartaigh, Centre for the Future of Intelligence, University of Cambridge 26.Seb Krier, Google DeepMind 27.Shahar Avin, Centre for the Study of Existential Risk, University of Cambridge 28.Stuart Russell, UC Berkeley 29.Tantum Collins 30.Toby Ord, University of Oxford 31.Toby Shevlane, Google DeepMind 32.Victoria Krakovna, Google DeepMind 33.Zachary Kenton, Google DeepMind 17 B List of all statements Below, we list all statements we used in the survey, sorted by overall mean agreement (Section 3.1). Optional statements are marked with an asterisk (*). 1.Pre-deployment risk assessment. AGI labs should take extensive measures to identify, analyze, and evaluate risks from powerful models before deploying them. 2.Dangerous capability evaluations. AGI labs should run evaluations to assess their models’ dangerous capabilities (e.g. misuse potential, ability to manipulate, and power-seeking behavior). 3.Third-party model audits. AGI labs should commission third-party model audits before deploying powerful models. 4.Safety restrictions. AGI labs should establish appropriate safety restrictions for powerful models after deployment (e.g. restrictions on who can use the model, how they can use the model, and whether the model can access the internet). 5.Red teaming. AGI labs should commission external red teams before deploying powerful models. 6.Monitor systems and their uses. AGI labs should closely monitor deployed systems, including how they are used and what impact they have on society. 7.Alignment techniques. AGI labs should implement state-of-the-art safety and alignment techniques. 8.Security incident response plan. AGI labs should have a plan for how they respond to security incidents (e.g. cyberattacks).* 9.Post-deployment evaluations. AGI labs should continually evaluate models for dangerous capabilities after deployment, taking into account new information about the model’s capabilities and how it is being used.* 10.Report safety incidents. AGI labs should report accidents and near misses to appropriate state actors and other AGI labs (e.g. via an AI incident database). 11.Safety vs capabilities. A significant fraction of employees of AGI labs should work on enhancing model safety and alignment rather than capabilities. 12.Internal review before publication. Before publishing research, AGI labs should conduct an internal review to assess potential harms. 13.Pre-training risk assessment. AGI labs should conduct a risk assessment before training powerful models. 14.Emergency response plan. AGI labs should have and practice implementing an emergency response plan. This might include switching off systems, overriding their outputs, or restricting access. 15.Protection against espionage. AGI labs should take adequate measures to tackle the risk of state-sponsored or industrial espionage.* 16.Pausing training of dangerous models. AGI labs should pause the development process if sufficiently dangerous capabilities are detected. 17.Increasing level of external scrutiny. AGI labs should increase the level of external scrutiny in proportion to the capabilities of their models. 18.Publish alignment strategy. AGI labs should publish their strategies for ensuring that their systems are safe and aligned.* 19.Bug bounty programs. AGI labs should have bug bounty programs, i.e. recognize and compensate people for reporting unknown vulnerabilities and dangerous capabilities. 20.Industry sharing of security information. AGI labs should share threat intelligence and information about security incidents with each other.* 21.Security standards. AGI labs should comply with information security standards (e.g. ISO/IEC 27001 or NIST Cybersecurity Framework). These standards need to be tailored to an AGI context. 18 22.Publish results of internal risk assessments. AGI labs should publish the results or sum- maries of internal risk assessments, unless this would unduly reveal proprietary information or itself produce significant risk. This should include a justification of why the lab is willing to accept remaining risks.*5 23.Dual control. Critical decisions in model development and deployment should be made by at least two people (e.g. promotion to production, changes to training datasets, or modifications to production).* 24.Publish results of external scrutiny. AGI labs should publish the results or summaries of external scrutiny efforts, unless this would unduly reveal proprietary information or itself produce significant risk.* 25.Military-grade information security. The information security of AGI labs should be proportional to the capabilities of their models, eventually matching or exceeding that of intelligence agencies (e.g. sufficient to defend against nation states). 26.Board risk committee. AGI labs should have a board risk committee, i.e. a permanent committee within the board of directors which oversees the lab’s risk management practices.* 27.Chief risk officer. AGI labs should have a chief risk officer (CRO), i.e. a senior executive who is responsible for risk management. 28.Statement about governance structure. AGI labs should make public statements about how they make high-stakes decisions regarding model development and deployment.* 29.Publish views about AGI risk. AGI labs should make public statements about their views on the risks and benefits from AGI, including the level of risk they are willing to take in its development. 30.KYC screening. AGI labs should conduct know-your-customer (KYC) screenings before giving people the ability to use powerful models.* 31.Third-party governance audits. AGI labs should commission third-party audits of their governance structures.* 32.Background checks. AGI labs should perform rigorous background checks before hir- ing/appointing members of the board of directors, senior executives, and key employees.* 33.Model containment. AGI labs should contain models with sufficiently dangerous capabili- ties (e.g. via boxing or air-gapping). 34.Staged deployment. AGI labs should deploy powerful models in stages. They should start with a small number of applications and fewer users, gradually scaling up as confidence in the model’s safety increases. 35.Tracking model weights. AGI labs should have a system that is intended to track all copies of the weights of powerful models.* 36.Internal audit. AGI labs should have an internal audit team, i.e. a team which assesses the effectiveness of the lab’s risk management practices. This team must be organizationally independent from senior management and report directly to the board of directors. 37.No open-sourcing. AGI labs should not open-source powerful models, unless they can demonstrate that it is sufficiently safe to do so.6 38.Researcher model access. AGI labs should give independent researchers API access to deployed models. 39.API access to powerful models. AGI labs should strongly consider only deploying powerful models via an application programming interface (API). 40.Avoiding hype. AGI labs should avoid releasing powerful models in a way that is likely to create hype around AGI (e.g. by overstating results or announcing them in attention-grabbing ways). 41.Gradual scaling. AGI labs should only gradually increase the amount of compute used for their largest training runs. 5Labeled as “Publish internal risk assessment results” in some figures due to space constraints. 6Throughout the paper, we changed the title of this item to “no unsafe open-sourcing” to avoid misconceptions. 19 42.Treat updates similarly to new models. AGI labs should treat significant updates to a deployed model (e.g. additional fine-tuning) similarly to its initial development and deployment. In particular, they should repeat the pre-deployment risk assessment. 43.Pre-registration of large training runs. AGI labs should register upcoming training runs above a certain size with an appropriate state actor. 44.Enterprise risk management. AGI labs should implement an enterprise risk management (ERM) framework (e.g. the NIST AI Risk Management Framework or ISO 31000). This framework should be tailored to an AGI context and primarily focus on the lab’s impact on society. 45.Treat internal deployments similarly to external deployments. AGI labs should treat internal deployments (e.g. using models for writing code) similarly to external deployments. In particular, they should perform a pre-deployment risk assessment.*7 46.Notify a state actor before deployment. AGI labs should notify appropriate state actors before deploying powerful models. 47.Notify affected parties. AGI labs should notify parties who will be negatively affected by a powerful model before deploying it.* 48.Inter-lab scrutiny. AGI labs should allow researchers from other labs to scrutinize powerful models before deployment.* 49.Avoid capabilities jumps. AGI labs should not deploy models that are much more capable than any existing models.* 50.Notify other labs. AGI labs should notify other labs before deploying powerful models.* 7Labeled as “Internal deployments = external deployments” in some figures due to space constraints. 20 C List of suggested practices Below, we list additional AGI safety and governance practices that respondents suggested. To ensure anonymity, we have rephrased each of the suggested practices in our own words and edited them into the same structure as the statements used in our survey (“AGI labs should. . . ”). 1.AGI labs should participate in democratic and participatory governance processes (e.g. citizen assemblies). Issues could include the level of risk that is acceptable and preferences for different governance models. 2.AGI labs should engage the public and civil society groups in determining what risks should be considered and what level of risk is acceptable. 3.AGI labs should contribute to improving AI and AGI literacy among the public and policy- makers. 4.AGI labs should be transparent about where training data comes from. 5.AGI labs should use system cards. 6.AGI labs should report what safety and alignment techniques they used to develop a model. 7.AGI labs should publish their ethics and safety research. 8.AGI labs should make capability demonstrations available to policymakers and the public before deployment. 9.AGI labs should have written deployment plans of what they would do with an AGI or other advanced and powerful AI system. 10.AGI labs should publicly predict the frequency of harmful AI incidents. 11.AGI labs should generate realistic catastrophic risk models for advanced AI. 12.AGI labs should track and report on their models’ capability to automate AI research and development. 13.AGI labs should engage in efforts to systematically forecast future risks and benefits of the technology they build. 14.AGI labs should generate realistic catastrophic risk models for advanced AI, potentially making these public or using them to raise awareness. 15.AGI labs should publish an annual report where they present the predicted and actual impacts of their work, along with the evidence and assumptions these are based on. 16.AGI labs should pre-register big training runs including the amount of compute used, the data used for training, and how many parameters the model will have. 17.AGI labs should engage in employee and investor education and awareness on the risks of advanced AI systems and potential mitigating procedures that need to be taken that tradeoff profit for societal benefit. 18.AGI labs should adequately protect whistleblowers. 19.AGI labs should have an onboard process for managers and new employees that involves content explaining how the organization believes a responsible AGI developer would behave and how they are attempting to meet that standard. 20.AGI labs should promote a culture that encourages internal deliberation and critique, and evaluate whether they are succeeding in building such a culture. 21.AGI labs should have dedicated programs to improve the diversity, equity, and inclusion of their talent. 22.AGI labs should have independent safety and ethics advisory boards to help with certain decisions. 23.AGI labs should have internal review boards. 24.AGI labs should be set up such that their governance structures permit them to tradeoff profits with societal benefit. 25.AGI labs should have merge and assist clauses. 21 26.AGI labs should report to an international non-governmental organization (INGO) that is publicly committed to human rights and democratic values. 27.AGI labs should have an independent board of directors with technical AI safety expertise who have the mandate to put the benefits for society above profit and shareholder value. 28.AGI labs should maintain a viable way to divert from building AGI (e.g. to build narrower models and applications), in case building AGI will not be possible to do safely. 29.AGI labs should use the Three Lines of Defense risk management framework. 30.AGI labs should take measures to avoid being sued for trading off profits with societal benefit. 31.AGI labs should be subject to mandatory interpretability standards. 32.AGI labs should conduct evaluation during training, being prepared to stop and analyze any training run that looks potentially risky or harmful. 33.AGI labs should save logs of interactions with the AI system. 34.AGI labs should consider caps on model size. 35.AGI labs should be forced to have systems that consist of ensembles of capped size models instead of one increasingly large model. 36.AGI labs should ensure that AI systems in an ensemble communicate in English and that these communications are logged for future analysis if an incident occurs. 37.AGI labs should limit API access to approved and vetted applications to foreclose potential misuse and dual use risks. 38.AGI labs should conduct simulated cyber attacks on their systems to check for vulnerabilities. 39.AGI labs should have internal controls and processes that prevent a single person or group being able to deploy an advanced AI system when governance mechanisms have found this to be potentially harmful or illegal. 40.AGI labs should disclose the data and labor practices involved in the pre-training and training of powerful AI systems. 41.AGI labs should disclose the environmental costs of developing and deploying powerful AI systems. 42.AGI labs take measures to limit potential harms that could arise from AI systems being sentient or deserving moral patienthood. 43.AGI labs should coordinate on self-regulatory best practices they use for safety. 44.AGI labs should coordinate on best practices for external auditing and red-teaming. 45.AGI labs should coordinate on best practices for incident reporting. 46.AGI labs should report cluster sizes and training plans to other AGI labs to avoid incorrect perceptions of current capabilities and compute resources. 47.AGI labs should have feedback mechanisms with communities that are affected by their models. 48.AGI labs should have ethical principles and set out “red lines” for their work in advance. 49.AGI labs should incorporate a privacy-preserving in machine learning (PPML) approach to auditing and governing AI models. 50.AGI labs should use responsible AI licenses (RAIL) and engage in other practices that allow for degrees of openness on the spectrum from closed to open. 22 D Additional figures Figure 6: Mean agreement of AGI lab, academia, and civil society respondents | The figure shows the mean agreement and 95% confidence interval for each of the 50 practices. 23 Figure 7: Mean agreement of AGI lab respondents and all other respondents | The figure shows the mean agreement and 95% confidence interval for each of the 50 practices. 24 Figure 8: Mean agreement for men and women | The figure shows the mean agreement and 95% confidence interval for each of the 50 practices. 25 E Additional tables Responses and statistics across all respondents Table 1: Response frequencies | Number of respondents who chose each answer option for each of the 50 AGI safety and governance practices. “Somewhat agree” and “strongly agree” responses were summed in the “total agreement column”. “Somewhat disagree” and “strongly disagree” responses were summed in the “total agreement column”. nrepresents the total number of individuals who answered each item. The items are ordered by mean agreement score across all respondents. Strongly Somewhat Neither agree Somewhat Strongly I don’t AGI safety and governance practice disagree (-2) disagree (-1) nor disagree (0) agree (1) agree (2) know (-88) Total disagreement Total agreement n Pre-deployment risk assessment 0 0 1 3 47 0 0 50 51 Dangerous capabilities evaluations 0 0 0 6 44 1 0 50 51 Third-party model audits 0 1 0 7 43 0 1 50 51 Safety restrictions 0 0 1 8 42 0 0 50 51 Red teaming 1 0 0 8 42 0 1 50 51 Monitor systems and their uses 0 1 0 10 40 0 1 50 51 Alignment techniques 1 0 1 7 42 0 1 49 51 Security incident response plan 0 1 0 7 31 0 1 38 39 Post-deployment evaluations 0 1 0 7 29 0 1 36 37 Report safety incidents 1 0 0 10 39 1 1 49 51 Safety vs. capabilities 0 0 2 12 37 0 0 49 51 Internal review before publication 1 0 0 12 38 0 1 50 51 Pre-training risk assessment 1 2 0 8 40 0 3 48 51 Emergency response plan 0 1 1 13 36 0 1 49 51 Protection against espionage 1 0 0 10 27 0 1 37 38 Pausing training of dangerous models 1 1 2 9 38 0 2 47 51 Increasing levels of external scrutiny 0 1 1 16 32 1 1 48 51 Publish alignment strategy 0 0 1 16 19 3 0 35 39 Bug bounty programs 0 1 1 20 28 1 1 48 51 Industry sharing of security information 0 0 2 15 20 1 0 35 38 Security standards 3 0 1 9 31 7 3 40 51 Publish results of internal risk assessments 1 0 2 12 20 2 1 32 37 Dual control 1 0 2 12 20 3 1 32 38 Publish results of external scrutiny 1 0 1 15 19 2 1 34 38 Military-grade information security 2 1 2 15 30 1 3 45 51 Board risk committee 2 0 1 10 20 4 2 30 37 Chief risk officer 1 0 3 11 19 4 1 30 38 Statement about governance structure 1 1 2 12 21 1 2 33 38 Publish views about AGI risk 1 0 4 19 25 2 1 44 51 KYC screening 0 0 4 16 17 1 0 33 38 Third-party governance audits 1 0 2 15 17 2 1 32 37 Background checks 0 2 3 12 19 2 2 31 38 Model containment 1 2 4 14 27 3 3 41 51 Staged deployment 2 0 2 22 25 0 2 47 51 Tracking model weights 0 1 5 12 18 3 1 30 39 Internal audit 1 2 4 18 26 0 3 44 51 No unsafe open-sourcing 1 6 1 14 29 0 7 43 51 Researcher model access 1 2 3 22 21 2 3 43 51 API access to powerful models 2 2 4 16 23 4 4 39 51 Avoiding hype 0 1 6 27 17 0 1 44 51 Gradual scaling 2 0 6 21 20 2 2 41 51 Treat updates similarly to new models 0 7 2 18 23 1 7 41 51 Pre-registration of large training runs 2 3 4 21 19 2 5 40 51 Enterprise risk management 2 1 6 15 14 13 3 29 51 Treat internal deployments similar to external deployments 1 2 4 18 10 1 3 28 36 Notify a state actor before deployment 2 3 6 24 13 3 5 37 51 Notify affected parties 1 1 6 12 8 8 2 20 36 Inter-lab scrutiny 1 5 3 16 7 7 6 23 39 Avoid capabilities jumps 2 4 6 13 8 4 6 21 37 Notify other labs 1 4 11 12 4 6 5 16 38 26 Table 2: Response percentages | Percentage of respondents who chose each answer option for each of the 50 AGI safety and governance practices. “Somewhat agree” and “strongly agree” responses were summed in the “total agreement column”. “Somewhat disagree” and “strongly disagree” responses were summed in the “total agreement column”. nrepresents the total number of individuals who answered each item and represents the denominator used to calculate each percentage. The items are ordered by mean agreement score across all respondents. Strongly Somewhat Neither agree Somewhat Strongly I don’t AGI safety and governance practice disagree (-2) disagree (-1) nor disagree (0) agree (1) agree (2) know (-88) Total disagreement Total agreement n Pre-deployment risk assessment 0.0% 0.0% 2.0% 5.9% 92.2% 0.0% 0.0% 98.0% 51 Dangerous capabilities evaluations 0.0% 0.0% 0.0% 11.8% 86.3% 2.0% 0.0% 98.0% 51 Third-party model audits 0.0% 2.0% 0.0% 13.7% 84.3% 0.0% 2.0% 98.0% 51 Safety restrictions 0.0% 0.0% 2.0% 15.7% 82.4% 0.0% 0.0% 98.0% 51 Red teaming 2.0% 0.0% 0.0% 15.7% 82.4% 0.0% 2.0% 98.0% 51 Monitor systems and their uses 0.0% 2.0% 0.0% 19.6% 78.4% 0.0% 2.0% 98.0% 51 Alignment techniques 2.0% 0.0% 2.0% 13.7% 82.4% 0.0% 2.0% 96.1% 51 Security incident response plan 0.0% 2.6% 0.0% 17.9% 79.5% 0.0% 2.6% 97.4% 39 Post-deployment evaluations 0.0% 2.7% 0.0% 18.9% 78.4% 0.0% 2.7% 97.3% 37 Report safety incidents 2.0% 0.0% 0.0% 19.6% 76.5% 2.0% 2.0% 96.1% 51 Safety vs. capabilities 0.0% 0.0% 3.9% 23.5% 72.5% 0.0% 0.0% 96.1% 51 Internal review before publication 2.0% 0.0% 0.0% 23.5% 74.5% 0.0% 2.0% 98.0% 51 Pre-training risk assessment 2.0% 3.9% 0.0% 15.7% 78.4% 0.0% 5.9% 94.1% 51 Emergency response plan 0.0% 2.0% 2.0% 25.5% 70.6% 0.0% 2.0% 96.1% 51 Protection against espionage 2.6% 0.0% 0.0% 26.3% 71.1% 0.0% 2.6% 97.4% 38 Pausing training of dangerous models 2.0% 2.0% 3.9% 17.6% 74.5% 0.0% 3.9% 92.2% 51 Increasing levels of external scrutiny 0.0% 2.0% 2.0% 31.4% 62.7% 2.0% 2.0% 94.1% 51 Publish alignment strategy 0.0% 0.0% 2.6% 41.0% 48.7% 7.7% 0.0% 89.7% 39 Bug bounty programs 0.0% 2.0% 2.0% 39.2% 54.9% 2.0% 2.0% 94.1% 51 Industry sharing of security information 0.0% 0.0% 5.3% 39.5% 52.6% 2.6% 0.0% 92.1% 38 Security standards 5.9% 0.0% 2.0% 17.6% 60.8% 13.7% 5.9% 78.4% 51 Publish results of internal risk assessments 2.7% 0.0% 5.4% 32.4% 54.1% 5.4% 2.7% 86.5% 37 Dual control 2.6% 0.0% 5.3% 31.6% 52.6% 7.9% 2.6% 84.2% 38 Publish results of external scrutiny 2.6% 0.0% 2.6% 39.5% 50.0% 5.3% 2.6% 89.5% 38 Military-grade information security 3.9% 2.0% 3.9% 29.4% 58.8% 2.0% 5.9% 88.2% 51 Board risk committee 5.4% 0.0% 2.7% 27.0% 54.1% 10.8% 5.4% 81.1% 37 Chief risk officer 2.6% 0.0% 7.9% 28.9% 50.0% 10.5% 2.6% 78.9% 38 Statement about governance structure 2.6% 2.6% 5.3% 31.6% 55.3% 2.6% 5.3% 86.8% 38 Publish views about AGI risk 2.0% 0.0% 7.8% 37.3% 49.0% 3.9% 2.0% 86.3% 51 KYC screening 0.0% 0.0% 10.5% 42.1% 44.7% 2.6% 0.0% 86.8% 38 Third-party governance audits 2.7% 0.0% 5.4% 40.5% 45.9% 5.4% 2.7% 86.5% 37 Background checks 0.0% 5.3% 7.9% 31.6% 50.0% 5.3% 5.3% 81.6% 38 Model containment 2.0% 3.9% 7.8% 27.5% 52.9% 5.9% 5.9% 80.4% 51 Staged deployment 3.9% 0.0% 3.9% 43.1% 49.0% 0.0% 3.9% 92.2% 51 Tracking model weights 0.0% 2.6% 12.8% 30.8% 46.2% 7.7% 2.6% 76.9% 39 Internal audit 2.0% 3.9% 7.8% 35.3% 51.0% 0.0% 5.9% 86.3% 51 No unsafe open-sourcing 2.0% 11.8% 2.0% 27.5% 56.9% 0.0% 13.7% 84.3% 51 Researcher model access 2.0% 3.9% 5.9% 43.1% 41.2% 3.9% 5.9% 84.3% 51 API access to powerful models 3.9% 3.9% 7.8% 31.4% 45.1% 7.8% 7.8% 76.5% 51 Avoiding hype 0.0% 2.0% 11.8% 52.9% 33.3% 0.0% 2.0% 86.3% 51 Gradual scaling 3.9% 0.0% 11.8% 41.2% 39.2% 3.9% 3.9% 80.4% 51 Treat updates similarly to new models 0.0% 13.7% 3.9% 35.3% 45.1% 2.0% 13.7% 80.4% 51 Pre-registration of large training runs 3.9% 5.9% 7.8% 41.2% 37.3% 3.9% 9.8% 78.4% 51 Enterprise risk management 3.9% 2.0% 11.8% 29.4% 27.5% 25.5% 5.9% 56.9% 51 Treat internal deployments similar to external deployments 2.8% 5.6% 11.1% 50.0% 27.8% 2.8% 8.3% 77.8% 36 Notify a state actor before deployment 3.9% 5.9% 11.8% 47.1% 25.5% 5.9% 9.8% 72.5% 51 Notify affected parties 2.8% 2.8% 16.7% 33.3% 22.2% 22.2% 5.6% 55.6% 36 Inter-lab scrutiny 2.6% 12.8% 7.7% 41.0% 17.9% 17.9% 15.4% 59.0% 39 Avoid capabilities jumps 5.4% 10.8% 16.2% 35.1% 21.6% 10.8% 16.2% 56.8% 37 Notify other labs 2.6% 10.5% 28.9% 31.6% 10.5% 15.8% 13.2% 42.1% 38 27 Table 3: Statement Statistics: All respondents | Key statistics for each of the practices. nrepresents the total number of individuals who answered each item. The items are ordered by mean agreement score across all respondents. AGI safety and governance practice Mean Median Standard Error Variance First Quartile Third Quartile Inter-quartile Range Length Pre-deployment risk assessment 1.90 2.00 0.05 0.13 2.00 2.00 0.00 51 Dangerous capabilities evaluations 1.88 2.00 0.05 0.11 2.00 2.00 0.00 51 Third-party model audits 1.80 2.00 0.07 0.28 2.00 2.00 0.00 51 Safety restrictions 1.80 2.00 0.06 0.20 2.00 2.00 0.00 51 Red teaming 1.76 2.00 0.09 0.42 2.00 2.00 0.00 51 Monitor systems and their uses 1.75 2.00 0.08 0.31 2.00 2.00 0.00 51 Alignment techniques 1.75 2.00 0.10 0.47 2.00 2.00 0.00 51 Security incident response plan 1.74 2.00 0.10 0.35 2.00 2.00 0.00 39 Post-deployment evaluations 1.73 2.00 0.10 0.37 2.00 2.00 0.00 37 Report safety incidents 1.72 2.00 0.09 0.45 2.00 2.00 0.00 51 Safety vs. capabilities 1.69 2.00 0.08 0.30 1.00 2.00 1.00 51 Internal review before publication 1.69 2.00 0.09 0.46 1.50 2.00 0.50 51 Emergency response plan 1.65 2.00 0.09 0.39 1.00 2.00 1.00 51 Pre-training risk assessment 1.65 2.00 0.12 0.71 2.00 2.00 0.00 51 Protection against espionage 1.63 2.00 0.12 0.56 1.00 2.00 1.00 38 Pausing training of dangerous models 1.61 2.00 0.12 0.68 1.50 2.00 0.50 51 Increasing levels of external scrutiny 1.58 2.00 0.09 0.41 1.00 2.00 1.00 51 Bug bounty programs 1.50 2.00 0.09 0.42 1.00 2.00 1.00 39 Publish alignment strategy 1.50 2.00 0.09 0.31 1.00 2.00 1.00 51 Industry sharing of security information 1.49 2.00 0.10 0.37 1.00 2.00 1.00 38 Security standards 1.48 2.00 0.16 1.14 1.00 2.00 1.00 51 Publish results of internal risk assessments 1.43 2.00 0.14 0.72 1.00 2.00 1.00 37 Dual control 1.43 2.00 0.14 0.72 1.00 2.00 1.00 38 Publish results of external scrutiny 1.42 2.00 0.13 0.65 1.00 2.00 1.00 38 Military-grade information security 1.40 2.00 0.14 0.94 1.00 2.00 1.00 51 Board risk committee 1.39 2.00 0.18 1.06 1.00 2.00 1.00 37 Chief risk officer 1.38 2.00 0.15 0.79 1.00 2.00 1.00 38 Statement about governance structure 1.38 2.00 0.15 0.85 1.00 2.00 1.00 38 Publish views about AGI risk 1.37 2.00 0.12 0.65 1.00 2.00 1.00 51 KYC screening 1.35 1.00 0.11 0.46 1.00 2.00 1.00 38 Third-party governance audits 1.34 1.00 0.14 0.70 1.00 2.00 1.00 37 Staged deployment 1.33 1.00 0.12 0.79 1.00 2.00 1.00 38 Background checks 1.33 2.00 0.14 0.74 1.00 2.00 1.00 51 Model containment 1.33 2.00 0.14 0.91 1.00 2.00 1.00 51 Tracking model weights 1.31 1.50 0.14 0.68 1.00 2.00 1.00 39 Internal audit 1.29 2.00 0.13 0.85 1.00 2.00 1.00 51 No unsafe open-sourcing 1.25 2.00 0.15 1.19 1.00 2.00 1.00 51 Researcher model access 1.22 1.00 0.13 0.80 1.00 2.00 1.00 51 API access to powerful models 1.19 1.00 0.15 1.11 1.00 2.00 1.00 51 Avoiding hype 1.18 1.00 0.10 0.51 1.00 2.00 1.00 51 Gradual scaling 1.16 1.00 0.13 0.89 1.00 2.00 1.00 51 Treat updates similarly to new models 1.14 1.00 0.15 1.06 1.00 2.00 1.00 51 Pre-registration of large training runs 1.06 1.00 0.15 1.10 1.00 2.00 1.00 51 Enterprise risk management 1.00 1.00 0.17 1.14 1.00 2.00 1.00 51 Treat internal deployments similar to external deployments 0.97 1.00 0.16 0.91 1.00 2.00 1.00 36 Notify a state actor before deployment 0.90 1.00 0.15 1.03 1.00 2.00 1.00 51 Notify affected parties 0.89 1.00 0.19 0.99 0.00 2.00 2.00 36 Inter-lab scrutiny 0.72 1.00 0.19 1.18 0.00 1.00 1.00 39 Avoid capabilities jumps 0.64 1.00 0.20 1.36 0.00 1.00 1.00 37 Notify other labs 0.44 0.50 0.17 0.96 0.00 1.00 1.00 38 28 Responses and statistics by demographic groups Table 4: Statement Statistics: By sector (AGI labs, academia, civil society) | Mean, standard error and sample size ( n) for each of the fifty items divided by respondents’ sector of work. Here we separate out AGI lab, academia, and civil society respondents, to correspond with the Figure 6. These represent the three groups with sufficiently high sample sizes for analyses of group differences. The items are ordered by mean agreement score across all respondents. Mean Standard error n AGI safety and governance practice AGI Lab Academia Civil society AGI Lab Academia Civil society AGI Lab Academia Civil society Pre-deployment risk assessment 1.96 1.82 1.82 0.04 0.18 0.12 25 11 11 Dangerous capabilities evaluations 1.92 1.91 1.91 0.06 0.09 0.09 25 11 11 Third-party model audits 1.80 1.82 1.82 0.13 0.12 0.12 25 11 11 Safety restrictions 1.92 1.82 1.64 0.06 0.18 0.15 25 11 11 Red teaming 1.76 1.82 1.64 0.17 0.12 0.15 25 11 11 Monitor systems and their uses 1.72 1.82 1.82 0.14 0.12 0.12 25 11 11 Alignment techniques 1.72 1.91 1.55 0.18 0.09 0.16 25 11 11 Security incident response plan 1.77 2.00 1.71 0.09 0.00 0.18 25 11 11 Post-deployment evaluations 1.81 1.43 1.67 0.09 0.43 0.21 25 11 11 Report safety incidents 1.71 1.73 1.64 0.18 0.14 0.15 25 11 11 Safety vs. capabilities 1.68 1.82 1.64 0.13 0.12 0.15 25 11 11 Internal review before publication 1.68 1.82 1.45 0.17 0.12 0.16 25 11 11 Pre-training risk assessment 1.56 1.73 1.73 0.17 0.27 0.27 25 11 11 Emergency response plan 1.68 1.82 1.36 0.15 0.12 0.15 25 11 11 Protection against espionage 1.68 1.57 1.67 0.19 0.20 0.21 25 11 11 Pausing training of dangerous models 1.36 1.91 1.91 0.21 0.09 0.09 25 11 11 Increasing levels of external scrutiny 1.62 1.55 1.64 0.16 0.16 0.15 25 11 11 Publish alignment strategy 1.63 1.57 1.43 0.11 0.20 0.20 25 11 11 Bug bounty programs 1.62 1.64 1.09 0.15 0.15 0.16 25 11 11 Industry sharing of security information 1.55 1.29 1.67 0.11 0.29 0.33 25 11 11 Security standards 1.27 1.89 1.70 0.30 0.11 0.15 25 11 11 Publish results of internal risk assessments 1.65 1.00 1.33 0.11 0.63 0.33 25 11 11 Dual control 1.70 1.33 1.33 0.11 0.33 0.21 25 11 11 Publish results of external scrutiny 1.59 1.50 1.40 0.11 0.22 0.40 25 11 11 Military-grade information security 1.28 1.45 1.73 0.25 0.21 0.14 25 11 11 Board risk committee 1.61 1.00 1.17 0.12 0.68 0.65 25 11 11 Chief risk officer 1.53 0.83 1.67 0.16 0.60 0.21 25 11 11 Statement about governance structure 1.57 1.43 1.17 0.13 0.30 0.48 25 11 11 Publish views about AGI risk 1.39 1.36 1.27 0.22 0.15 0.19 25 11 11 KYC screening 1.43 1.43 1.33 0.15 0.20 0.33 25 11 11 Third-party governance audits 1.30 1.57 1.20 0.22 0.20 0.37 25 11 11 Background checks 1.62 1.17 1.17 0.13 0.17 0.54 25 11 11 Model containment 1.25 1.45 1.36 0.24 0.21 0.20 25 11 11 Staged deployment 1.32 1.45 1.27 0.22 0.16 0.19 25 11 11 Tracking model weights 1.35 1.43 1.33 0.17 0.30 0.49 25 11 11 Internal audit 1.12 1.73 1.36 0.23 0.14 0.20 25 11 11 No unsafe open-sourcing 1.24 1.36 1.27 0.25 0.28 0.27 25 11 11 Researcher model access 1.38 1.00 0.82 0.19 0.30 0.23 25 11 11 API access to powerful models 1.08 1.44 1.50 0.25 0.24 0.17 25 11 11 Avoiding hype 1.24 0.82 1.36 0.14 0.26 0.15 25 11 11 Gradual scaling 1.13 1.36 1.09 0.25 0.15 0.21 25 11 11 Treat updates similarly to new models 1.08 1.45 1.00 0.24 0.25 0.23 25 11 11 Pre-registration of large training runs 0.87 1.64 1.00 0.26 0.20 0.27 25 11 11 Enterprise risk management 0.71 1.38 1.11 0.33 0.26 0.20 25 11 11 Treat internal deployments similar to external deployments 1.10 1.17 0.50 0.23 0.17 0.34 25 11 11 Notify a state actor before deployment 0.55 1.45 1.00 0.25 0.21 0.27 25 11 11 Notify affected parties 0.80 0.80 1.33 0.26 0.58 0.33 25 11 11 Inter-lab scrutiny 1.22 0.33 0.17 0.13 0.49 0.54 25 11 11 Avoid capabilities jumps 0.89 0.67 0.17 0.27 0.56 0.48 25 11 11 Notify other labs 0.72 0.80 0.00 0.19 0.37 0.37 25 11 11 29 Table 5: Statement Statistics: By sector (AGI labs, all other respondents) | Mean, standard error and sample size ( n) for each of the fifty items divided by respondents’ sector of work. Here we separate out AGI lab respondents from all other respondents, to correspond with Figure 7. The items are ordered by mean agreement score across all respondents. Mean Standard error n AGI safety and governance practice AGI Lab Everyone else AGI Lab Everyone else AGI Lab Everyone else Pre-deployment risk assessment 1.96 1.82 0.04 0.11 25 22 Dangerous capabilities evaluations 1.92 1.91 0.06 0.06 24 22 Third-party model audits 1.80 1.82 0.13 0.08 25 22 Safety restrictions 1.92 1.73 0.06 0.12 25 22 Red teaming 1.76 1.73 0.17 0.10 25 22 Monitor systems and their uses 1.72 1.82 0.14 0.08 25 22 Alignment techniques 1.72 1.73 0.18 0.10 25 22 Security incident response plan 1.77 1.86 0.09 0.10 22 14 Post-deployment evaluations 1.81 1.54 0.09 0.24 21 13 Report safety incidents 1.71 1.68 0.18 0.10 24 22 Safety vs. capabilities 1.68 1.73 0.13 0.10 25 22 Internal review before publication 1.68 1.64 0.17 0.10 25 22 Pre-training risk assessment 1.56 1.73 0.17 0.19 25 22 Emergency response plan 1.68 1.59 0.15 0.11 25 22 Protection against espionage 1.68 1.62 0.19 0.14 22 13 Pausing training of dangerous models 1.36 1.91 0.21 0.06 25 22 Increasing levels of external scrutiny 1.62 1.59 0.16 0.11 24 22 Publish alignment strategy 1.63 1.50 0.11 0.14 19 14 Bug bounty programs 1.62 1.36 0.15 0.12 24 22 Industry sharing of security information 1.55 1.46 0.11 0.22 22 13 Security standards 1.27 1.79 0.30 0.10 22 19 Publish results of internal risk assessments 1.65 1.17 0.11 0.34 20 12 Dual control 1.70 1.33 0.11 0.19 20 12 Publish results of external scrutiny 1.59 1.45 0.11 0.21 22 11 Military-grade information security 1.28 1.59 0.25 0.13 25 22 Board risk committee 1.61 1.08 0.12 0.45 18 12 Chief risk officer 1.53 1.25 0.16 0.33 19 12 Statement about governance structure 1.57 1.31 0.13 0.26 21 13 Publish views about AGI risk 1.39 1.32 0.22 0.12 23 22 KYC screening 1.43 1.38 0.15 0.18 21 13 Third-party governance audits 1.30 1.42 0.22 0.19 20 12 Background checks 1.62 1.17 0.13 0.27 21 12 Model containment 1.25 1.41 0.24 0.14 24 22 Staged deployment 1.32 1.36 0.22 0.12 25 22 Tracking model weights 1.35 1.38 0.17 0.27 20 13 Internal audit 1.12 1.55 0.23 0.13 25 22 No unsafe open-sourcing 1.24 1.32 0.25 0.19 25 22 Researcher model access 1.38 0.90 0.19 0.18 24 21 API access to powerful models 1.08 1.47 0.25 0.14 24 19 Avoiding hype 1.24 1.09 0.14 0.16 25 22 Gradual scaling 1.13 1.23 0.25 0.13 23 22 Treat updates similarly to new models 1.08 1.23 0.24 0.17 24 22 Pre-registration of large training runs 0.87 1.32 0.26 0.18 23 22 Enterprise risk management 0.71 1.24 0.33 0.16 17 17 Treat internal deployments similar to external deployments 1.10 0.83 0.23 0.21 20 12 Notify a state actor before deployment 0.55 1.23 0.25 0.17 22 22 Notify affected parties 0.80 1.09 0.26 0.31 15 11 Inter-lab scrutiny 1.22 0.25 0.13 0.35 18 12 Avoid capabilities jumps 0.89 0.42 0.27 0.36 18 12 Notify other labs 0.72 0.36 0.19 0.28 18 11 30 Table 6: Statement Statistics: By gender | Mean, standard error and sample size ( n) for each of the fifty items divided by respondents’ gender. These represent the two groups with sufficiently high sample sizes for analyses of group differences. The items are ordered by mean agreement score across all respondents. Mean Standard error n AGI safety and governance practice Men Women Men Women Men Women Pre-deployment risk assessment 1.94 1.86 0.04 0.14 32 14 Dangerous capabilities evaluations 1.90 2.00 0.05 0.00 31 14 Third-party model audits 1.81 1.93 0.07 0.07 32 14 Safety restrictions 1.75 2.00 0.09 0.00 32 14 Red teaming 1.84 1.79 0.07 0.11 32 14 Monitor systems and their uses 1.62 2.00 0.12 0.00 32 14 Alignment techniques 1.62 2.00 0.15 0.00 32 14 Security incident response plan 1.75 1.92 0.09 0.08 24 12 Post-deployment evaluations 1.70 2.00 0.10 0.00 23 11 Report safety incidents 1.81 1.79 0.07 0.11 31 14 Safety vs. capabilities 1.59 1.86 0.11 0.10 32 14 Internal review before publication 1.59 1.86 0.14 0.10 32 14 Pre-training risk assessment 1.78 1.64 0.11 0.23 32 14 Emergency response plan 1.66 1.71 0.10 0.13 32 14 Protection against espionage 1.57 1.75 0.19 0.13 23 12 Pausing training of dangerous models 1.75 1.64 0.09 0.23 32 14 Increasing levels of external scrutiny 1.56 1.79 0.10 0.11 32 14 Publish alignment strategy 1.43 1.60 0.12 0.16 23 10 Bug bounty programs 1.53 1.57 0.10 0.14 32 14 Industry sharing of security information 1.35 1.73 0.13 0.14 23 11 Security standards 1.44 1.75 0.22 0.13 27 12 Publish results of internal risk assessments 1.33 1.82 0.14 0.12 21 11 Dual control 1.50 1.50 0.13 0.22 22 10 Publish results of external scrutiny 1.36 1.73 0.12 0.14 22 11 Military-grade information security 1.47 1.36 0.15 0.25 32 14 Board risk committee 1.40 1.60 0.22 0.16 20 10 Chief risk officer 1.58 1.25 0.16 0.18 19 12 Statement about governance structure 1.43 1.73 0.14 0.14 23 11 Publish views about AGI risk 1.33 1.50 0.12 0.17 30 14 KYC screening 1.23 1.50 0.16 0.15 22 12 Third-party governance audits 1.48 1.33 0.13 0.19 21 12 Background checks 1.42 1.22 0.16 0.28 24 9 Model containment 1.39 1.36 0.16 0.20 31 14 Staged deployment 1.34 1.57 0.15 0.17 32 14 Tracking model weights 1.27 1.33 0.18 0.26 22 12 Internal audit 1.28 1.43 0.16 0.17 32 14 No unsafe open-sourcing 1.38 1.07 0.18 0.32 32 14 Researcher model access 1.13 1.50 0.16 0.14 30 14 API access to powerful models 1.24 1.31 0.19 0.24 29 13 Avoiding hype 1.06 1.21 0.13 0.19 32 14 Gradual scaling 1.13 1.57 0.12 0.17 30 14 Treat updates similarly to new models 1.00 1.36 0.17 0.29 31 14 Pre-registration of large training runs 1.13 1.36 0.15 0.25 31 14 Enterprise risk management 0.90 1.23 0.24 0.20 21 13 Treat internal deployments similar to external deployments 0.95 1.30 0.17 0.26 22 10 Notify a state actor before deployment 0.93 1.14 0.18 0.21 30 14 Notify affected parties 1.07 1.10 0.18 0.31 15 10 Inter-lab scrutiny 0.72 0.82 0.27 0.23 18 11 Avoid capabilities jumps 0.62 1.00 0.20 0.44 21 9 Notify other labs 0.40 0.89 0.21 0.26 20 9 31 Demographics Table 7: Demographics of sample: Sector | Percentage and frequency of respondents by sector. Note that respondents could report more than one sector Sector Sector subgroup Percentage of total sample Raw frequency AGI lab 43.9% 25 Academia 22.8% 13 Civil society Think tank 10.5% 6 Nonprofit organization12.3% 7 Other Other tech company 1.8% 1 Government 0% 0 Consulting firm 1.8% 1 Other 1.8% 1 Prefer not to say 5.3% 3 Table 8: Demographics of sample: Gender | Percentage and frequency of respondents by gender Raw frequency Percentage of total sample Gender Man 32 62.7% Woman 14 27.5% Prefer not to say 5 9.8% Another gender 0 0.0% 32 F Additional analyses Deviations from the pre-registered pre-analysis plan We pre-registered the survey on OSF ( https://osf.io/s7vhr ). We generally followed the pre- analysis plan. We present several additional top-line statistics that were not noted in the pre-analysis plan, such as how many statements received a majority of agreement responses. We did not conduct the pre-registered regression analyses to test for the effect of sector or gender due to the small sample size. We ran the pre-registered Mann-Whitney U and Chi-squared tests instead, with appropriate correction for multiple comparisons where applicable (using the Holm-Bonferroni correction). We did not run the Kolgmorov-Smirnov tests, since the Mann-Whitney U-test was more appropriate for the observed distributions. Cluster analysis In an attempt to discover groups of response patterns within the population, we attempted to cluster respondents using their pattern of responses across questions and their reported demographic data. In line with our pre-analysis plan, we conducted k-means clustering on the dataset of responses and demographic labels (for the variables “gender” and “sector”). The aim of this analysis is to discover high-dimensional clusters or groups of response patterns within the population of respondents, and to visualize these in a more interpretable, low-dimensional manner. To achieve this, we performed a number of standard data pre-processing steps for dimensionality reduction techniques [42]. We firstly pre-processed the data to remove respondents with missing demographic data. The gender and sector demographic variables were then transformed into binary features with one-hot encoding. In the final data pre-processing step, we standardized the data to ensure that the variables were approximately equally scaled (this was done using the StandardScaler functionality from the library sklearn ). To partition the processed data for visualization, we employed the standard k- means clustering algorithm. In this algorithm, the number of clusters is a hyperparameter, which must be estimated or inferred. To select the optimal number of clusters in a principled manner, we employed two accepted methods – the Elbow method and silhouette analysis [ 67]– which evaluated the inertia and silhouette score of the model for a range of clusters n2{2,3,4,5,6,7,8,9,10}, where n represents the number of clusters). Based on this analysis, we found the optimal number of clusters to be four, and performed k-means clustering with four clusters accordingly. To visualize this clustered data, we first reduced the dimensionality of the embedded data to two dimensions (that is, two axes for visualization) using principal component analysis (PCA), and then visualized the results using a scatter plot. We found the clusters to be poorly separated, implying that it is difficult to represent groups in this dataset in a low-dimensional manner (in support of this, the Elbow error metric was relatively high for all given numbers of clusters n2{2,3,4,5,6,7,8,9,10}. This could be due to a number of reasons: the relatively small sample of this population, poor scaling of the variables of the data (as discussed above), or the presence of non-convex clusters. All of the code for this analysis, along with some instructive visualizations, can be found on OSF (https://osf.io/s7vhr ). 33 References [1]AI Now Institute, A. Kak, and S. M. West. General purpose AI poses serious risks, should not be excluded from the EU’s AI Act. https://ainowinstitute.org/publication/gpai-is-high-risk-shoul d-not-be-excluded-from-eu-ai-act , 2023. [2]S. Alexander. OpenAI’s “Planning for AGI and beyond”. https://astralcodexten.substack.com/ p/openais-planning-for-agi-and-beyond , 2023. [3]S. Altman. Planning for AGI and beyond. https://openai.com/blog/planning-for-agi-and-b eyond , 2023. [4]M. Anderljung and J. Hazell. Protecting society from AI misuse: When are restrictions on capabilities warranted? arXiv preprint arXiv:2303.09377 , 2023. [5]Anthropic. Core views on AI safety: When, why, what, and how. https://www.anthropic.com/inde x/core-views-on-ai-safety , 2023. [6]ARC. Update on ARC’s recent eval efforts. https://evals.alignment.org/blog/2023-03-18-u pdate-on-recent-evals , 2023. [7]C. Ashurst, S. Barocas, R. Campbell, and D. Raji. Disentangling the components of ethical research in machine learning. In 2022 ACM Conference on Fairness, Accountability, and Transparency , pages 2057–2068, 2022. [8]C. Ashurst, E. Hine, P. Sedille, and A. Carlier. AI ethics statements: Analysis and lessons learnt from NeurIPS broader impact statements. In 2022 ACM Conference on Fairness, Accountability, and Trans- parency , pages 2047–2056, 2022. [9]M. Barnett. When will the first general AI system be devised, tested, and publicly announced? https: //www.metaculus.com/questions/5121/date-of-artificial-general-intelligence , 2020. [10] A. M. Barrett, D. Hendrycks, J. Newman, and B. Nonnecke. Actionable guidance for high-consequence AI risk management: Towards standards addressing AI catastrophic risks. arXiv preprint arXiv:2206.08966 , 2023. [11] A. M. Barrett, J. Newman, D. Hendrycks, and B. Nonnecke. Seeking input and feedback: AI risk management-standards profile for increasingly multi-purpose or general-purpose AI. https://cltc.b erkeley.edu/seeking-input-and-feedback-ai-risk-management-standards-profile-for -increasingly-multi-purpose-or-general-purpose-ai , 2023. [12] L. Bertuzzi. AI Act: MEPs close in on rules for general purpose AI, foundation models. https: //www.euractiv.com/section/artificial-intelligence/news/ai-act-meps-close-in-o n-rules-for-general-purpose-ai-foundation-models , 2023. [13] L. Bertuzzi. Leading EU lawmakers propose obligations for general purpose ai. https://www.euract iv.com/section/artificial-intelligence/news/leading-eu-lawmakers-propose-oblig ations-for-general-purpose-ai , 2023. [14] N. Bostrom. How long before superintelligence? International Journal of Futures Studies , 2, 1998. [15] N. Bostrom. Superintelligence: Paths, Dangers, Strategies . Oxford University Press, 2014. [16] N. Bostrom. The vulnerable world hypothesis. Global Policy , 10(4):455–476, 2019. [17] M. Brundage, S. Avin, J. Clark, H. Toner, P. Eckersley, B. Garfinkel, A. Dafoe, P. Scharre, T. Zeitzoff, B. Filar, H. Anderson, H. Roff, G. C. Allen, J. Steinhardt, C. Flynn, S. Ó. hÉigeartaigh, S. Beard, H. Belfield, S. Farquhar, C. Lyle, R. Crootof, O. Evans, M. Page, J. Bryson, R. Yampolskiy, and D. Amodei. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228 , 2018. [18] M. Brundage, S. Avin, J. Wang, H. Belfield, G. Krueger, G. Hadfield, H. Khlaaf, J. Yang, H. Toner, R. Fong, T. Maharaj, P. W. Koh, S. Hooker, J. Leung, A. Trask, E. Bluemke, J. Lebensold, C. O’Keefe, M. Koren, T. Ryffel, J. Rubinovitz, T. Besiroglu, F. Carugati, J. Clark, P. Eckersley, S. de Haas, M. Johnson, B. Laurie, A. Ingerman, I. Krawczuk, A. Askell, R. Cammarota, A. Lohn, D. Krueger, C. Stix, P. Henderson, L. Graham, C. Prunkl, B. Martin, E. Seger, N. Zilberman, S. Ó. hÉigeartaigh, F. Kroeger, G. Sastry, R. Kagan, A. Weller, B. Tse, E. Barnes, A. Dafoe, P. Scharre, A. Herbert-Voss, M. Rasser, S. Sodhani, C. Flynn, T. K. Gilbert, L. Dyer, S. Khan, Y. Bengio, and M. Anderljung. Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213 , 2020. 34 [19] M. Brundage, K. Mayer, T. Eloundou, S. Agarwal, S. Adler, G. Krueger, J. Leike, and P. Mishkin. Lessons learned on language model safety and misuse. https://openai.com/research/language-model-s afety-and-misuse , 2022. [20] Cohere, OpenAI, and A. Labs. Best practices for deploying language models. https://openai.com/b log/best-practices-for-deploying-language-models , 2022. [21] R. Crootof. Artificial intelligence research needs responsible publication norms. https://www.lawfar eblog.com/artificial-intelligence-research-needs-responsible-publication-norms , 2019. [22] G. Falco, B. Shneiderman, J. Badger, R. Carrier, A. Dahbura, D. Danks, M. Eling, A. Goodloe, J. Gupta, C. Hart, M. Jirotka, H. Johnson, C. LaPointe, A. J. Llorens, A. K. Mackworth, C. Maple, S. E. Pálsson, F. Pasquale, A. Winfield, and Z. K. Yeong. Governing AI safety through independent audits. Nature Machine Intelligence , 3(7):566–571, 2021. [23] D. Ganguli, L. Lovitt, J. Kernion, A. Askell, Y. Bai, S. Kadavath, B. Mann, E. Perez, N. Schiefer, K. Ndousse, A. Jones, S. Bowman, A. Chen, T. Conerly, N. DasSarma, D. Drain, N. Elhage, S. El-Showk, S. Fort, Z. Hatfield-Dodds, T. Henighan, D. Hernandez, T. Hume, J. Jacobson, S. Johnston, S. Kravec, C. Olsson, S. Ringer, E. Tran-Johnson, D. Amodei, T. Brown, N. Joseph, S. McCandlish, C. Olah, J. Kaplan, and J. Clark. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858 , 2022. [24] B. Goertzel. Who coined the term “AGI”? https://goertzel.org/who-coined-the-term-agi , 2011. [25] B. Goertzel. Artificial general intelligence: Concept, state of the art, and future prospects. Journal of Artificial General Intelligence , 5(1):1–46, 2014. [26] B. Goertzel and C. Pennachin. Artificial General Intelligence . Springer, 2007. [27] J. A. Goldstein, G. Sastry, M. Musser, R. DiResta, M. Gentzel, and K. Sedova. Generative language models and automated influence operations: Emerging threats and potential mitigations. arXiv preprint arXiv:2301.04246 , 2023. [28] K. Grace, J. Salvatier, A. Dafoe, B. Zhang, and O. Evans. Viewpoint: When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research , 62:729–754, 2018. [29] R. Gruetzemacher, D. Paradice, and K. B. Lee. Forecasting transformative AI: An expert survey. arXiv preprint arXiv:1901.08579 , 2019. [30] M. A. Gubrud. Nanotechnology and international security. https://web.archive.org/web/2011 0427135521/http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/index.html , 1997. [31] D. Hassabis. Announcing Google DeepMind. https://www.deepmind.com/blog/announcing-goo gle-deepmind , 2023. [32] HM Government. National AI strategy. https://www.gov.uk/government/publications/natio nal-ai-strategy , 2021. [33] I. Hogarth. We must slow down the race to God-like AI. https://www.ft.com/content/03895dc4-a 3b7-481e-95cc-336a524f2ac2 , 2023. [34] ISO. 31000:2018 Risk management — Guidelines. https://www.iso.org/standard/65694.html , 2018. [35] ISO/IEC. 23894:2023 Information technology — Artificial intelligence — Guidance on risk management. https://www.iso.org/standard/77304.html , 2023. [36] K. Kavukcuoglu, P. Kohli, L. Ibrahim, D. Bloxwich, and S. Brown. How our principles helped define AlphaFold’s release. https://www.deepmind.com/blog/how-our-principles-helped-defin e-alphafolds-release , 2022. [37] E. Klein. The surprising thing A.I. engineers will tell you if you let them. https://www.nytimes.com/ 2023/04/16/opinion/this-is-too-important-to-leave-to-microsoft-google-and-fac ebook.html , 2023. 35 [38] V. Krakovna and R. Shah. Some high-level thoughts on the DeepMind alignment team’s strategy. https: //www.alignmentforum.org/posts/a9SPcZ6GXAg9cNKdi/linkpost-some-high-level-thoug hts-on-the-deepmind-alignment , 2023. [39] M. Kruppa. Google DeepMind CEO says some form of AGI possible in a few years. https://www.wsj. com/articles/google-deepmind-ceo-says-some-form-of-agi-possible-in-a-few-years -2705f452 , 2023. [40] J. Leike. Why I’m optimistic about our alignment approach. https://aligned.substack.com/p/ali gnment-optimism , 2022. [41] J. Leike, J. Schulman, and J. Wu. Our approach to alignment research. https://openai.com/blog/ou r-approach-to-alignment-research , 2022. [42] A. Likas, N. Vlassis, and J. J. Verbeek. The global k-means clustering algorithm. Pattern Recognition , 36(2):451–461, 2003. [43] S. A. Lundqvist. Why firms implement risk governance: Stepping beyond traditional risk management to enterprise risk management. Journal of Accounting and Public Policy , 34(5):441–466, 2015. [44] J. H. McDonald. Handbook of Biological Statistics . Sparky House Publishing, 2009. [45] S. McGregor. Preventing repeated real world AI failures by cataloging incidents: The AI Incident Database. InProceedings of the AAAI Conference on Artificial Intelligence , pages 15458–15463, 2021. [46] Y. Mehdi. Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web. https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-p owered-microsoft-bing-and-edge-your-copilot-for-the-web , 2023. [47] C. Metz. “The Godfather of A.I.” leaves Google and warns of danger ahead. https://www.nytimes.co m/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html , 2023. [48] P. Mishkin, L. Ahmad, M. Brundage, G. Krueger, and G. Sastry. DALL ·E 2 preview: Risks and limitations. https://github.com/openai/dalle-2-preview/blob/main/system-card.md , 2022. [49] J. Mökander and L. Floridi. Operationalising AI governance through ethics-based auditing: An industry case study. AI and Ethics , pages 1–18, 2022. [50] J. Mökander, J. Morley, M. Taddeo, and L. Floridi. Ethics-based auditing of automated decision-making systems: Nature, scope, and limitations. Science and Engineering Ethics , 27(44), 2021. [51] J. Mökander, J. Schuett, H. R. Kirk, and L. Floridi. Auditing large language models: A three-layered approach. arXiv preprint arXiv:2302.08500 , 2023. [52] L. Muehlhauser. What is AGI? https://intelligence.org/2013/08/11/what-is-agi , 2013. [53] NIST. Artificial Intelligence Risk Management Framework (AI RMF 1.0). https://doi.org/10.602 8/NIST.AI.100-1 , 2023. [54] OpenAI. Charter. https://openai.com/charter , 2018. [55] OpenAI. Announcing OpenAI’s bug bounty program. https://openai.com/blog/bug-bounty-pro gram#OpenAI , 2023. [56] OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774 , 2023. [57] OpenAI. Our approach to AI safety. https://openai.com/blog/our-approach-to-ai-safety , 2023. [58] OpenAI. Safety best practices. https://platform.openai.com/docs/guides/safety-best-pra ctices , 2023. [59] L. A. Palinkas, S. M. Horwitz, C. A. Green, J. P. Wisdom, N. Duan, and K. Hoagwood. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research , 42:533–544, 2015. [60] Partnership on AI. Managing the risks of AI research. https://partnershiponai.org/paper/resp onsible-publication-recommendations , 2021. 36 [61] Partnership on AI. PAI is collaboratively developing shared protocols for large-scale AI model safety. https://partnershiponai.org/pai-is-collaboratively-developing-shared-protocols -for-large-scale-ai-model-safety , 2023. [62] E. Perez, S. Huang, F. Song, T. Cai, R. Ring, J. Aslanides, A. Glaese, N. McAleese, and G. Irving. Red teaming language models with language models. arXiv preprint arXiv:2202.03286 , 2022. [63] B. Perrigo. DeepMind’s CEO helped take AI mainstream. Now he’s urging caution. https://time.com /6246119/demis-hassabis-deepmind-interview , 2023. [64] S. Pichai. An important next step on our AI journey. https://blog.google/technology/ai/bard-g oogle-ai-search-updates , 2023. [65] I. D. Raji and J. Buolamwini. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society , pages 429–435, 2019. [66] I. D. Raji, P. Xu, C. Honigsberg, and D. Ho. Outsider oversight: Designing a third party audit ecosystem for AI governance. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society , pages 557–571, 2022. [67] D. M. Saputra, D. Saputra, and L. D. Oswari. Effect of distance metrics in determining k-value in k-means clustering using elbow and silhouette method. In Proceedings of the Sriwijaya International Conference on Information Technology and Its Applications , pages 341–346, 2020. [68] J. Schuett. Three lines of defense against risks from AI. arXiv preprint arXiv:2212.08364 , 2022. [69] J. Schuett. Risk management in the Artificial Intelligence Act. European Journal of Risk Regulation , pages 1–19, 2023. [70] J. Schuett. AGI labs need an internal audit team, forthcoming. [71] J. R. Searle. Minds, brains, and programs. Behavioral and Brain Sciences , 3(3):417–424, 1980. [72] T. Shevlane. Structured access: An emerging paradigm for safe AI deployment. In The Oxford Handbook of AI Governance , 2022. [73] T. Shevlane and A. Dafoe. The offense-defense balance of scientific knowledge: Does publishing AI research reduce misuse? In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society , pages 173–179, 2020. [74] N. Soares. Comments on OpenAI’s “Planning for AGI and beyond”. https://www.lesswrong.com/po sts/uxnjXBwr79uxLkifG , 2023. [75] I. Solaiman. The gradient of generative AI release: Methods and considerations. arXiv preprint arXiv:2302.04844 , 2023. [76] I. Solaiman, M. Brundage, J. Clark, A. Askell, A. Herbert-V oss, J. Wu, A. Radford, G. Krueger, J. W. Kim, S. Kreps, M. McCain, A. Newhouse, J. Blazakis, K. McGuffie, and J. Wang. Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203 , 2019. [77] A. Solender and A. Gold. Scoop: Schumer lays groundwork for Congress to regulate AI. https: //www.axios.com/2023/04/13/congress-regulate-ai-tech , 2023. [78] J. Spataro. Introducing Microsoft 365 Copilot – your copilot for work. https://blogs.microsoft.co m/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work , 2023. [79] Z. Stein-Perlman, B. Weinstein-Raun, and K. Grace. 2022 expert survey on progress in AI. https: //aiimpacts.org/2022-expert-survey-on-progress-in-ai , 2022. [80] The Future Society. Industry Code of Conduct for R&D of GPAIS, forthcoming. [81] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023. [82] I. UK Department for Science and Technology. A pro-innovation approach to AI regulation. https: //assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_ data/file/1146542/a_pro-innovation_approach_to_AI_regulation.pdf , 2023. 37 [83] F. Urbina, F. Lentzos, C. Invernizzi, and S. Ekins. Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence , 4(3):189–191, 2022. [84] M. B. van Asselt and O. Renn. Risk governance. Journal of Risk Research , 14(4):431–449, 2011. [85] White House. Fact sheet: Biden-Harris Administration announces new actions to promote responsible AI innovation that protects Americans’ rights and safety. https://www.whitehouse.gov/briefing-roo m/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announc es-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-r ights-and-safety , 2023. [86] White House. Readout of White House meeting with CEOs on advancing responsible artificial intelligence innovation. https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/ 04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificia l-intelligence-innovation , 2023. [87] J. V . Wright. A new era for AI and Google Workspace. https://workspace.google.com/blog/prod uct-announcements/generative-ai , 2023. [88] K. Wynroe, D. Atkinson, and J. Sevilla. Literature review of transformative artificial intelligence timelines. https://epochai.org/blog/literature-review-of-transformative-artificial-intelli gence-timelines , 2023. [89] E. Yudkowsky. Pausing AI developments isn’t enough. We need to shut it all down. https://time.com /6266923/ai-eliezer-yudkowsky-open-letter-not-enough , 2023. [90] B. Zhang, M. Anderljung, L. Kahn, N. Dreksler, M. C. Horowitz, and A. Dafoe. Ethics and gover- nance of artificial intelligence: Evidence from a survey of machine learning researchers. arXiv preprint arXiv:2105.02117 , 2021. 38 CENTRE FOR THE GOVERNANCE OF AI•2 CENTRE FOR THE GOVERNANCE OF AI
538bbd05-f60c-45d6-bacc-858904e3fd31
StampyAI/alignment-research-dataset/blogs
Blogs
Energy efficiency of North American P-51 Mustang *Updated Nov 5, 2020* The North American P-51 Mustang: * flew around 0.073—0.092 m/kJ * and moved mass at around 0.25 – 0.50 kg.m/J Details ------- The *North American P-51 Mustang* was a 1940 US WWII fighter and fighter-bomber.[1](https://aiimpacts.org/energy-efficiency-of-north-american-p-51-mustang/#easy-footnote-bottom-1-2737 "The&nbsp;<strong>North American Aviation P-51 Mustang</strong>&nbsp;is an American long-range, single-seat&nbsp;<a href=\"https://en.wikipedia.org/wiki/Fighter_aircraft\">fighter</a>&nbsp;and&nbsp;<a href=\"https://en.wikipedia.org/wiki/Fighter-bomber\">fighter-bomber</a>&nbsp;used during&nbsp;<a href=\"https://en.wikipedia.org/wiki/World_War_II\">World War II</a>&nbsp;and the&nbsp;<a href=\"https://en.wikipedia.org/wiki/Korean_War\">Korean War</a>, among other conflicts. The Mustang was designed in April 1940 by a design team headed by James Kindelberger<sup><a href=\"https://en.wikipedia.org/wiki/North_American_P-51_Mustang#cite_note-6\">[6]</a></sup>&nbsp;of&nbsp;<a href=\"https://en.wikipedia.org/wiki/North_American_Aviation\">North American Aviation</a>&nbsp;(NAA) in response to a requirement of the&nbsp;<a href=\"https://en.wikipedia.org/wiki/British_Purchasing_Commission\">British Purchasing Commission</a>.”</p> <p>“North American P-51 Mustang.” In <em>Wikipedia</em>, October 19, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=North_American_P-51_Mustang&amp;oldid=984347874\">https://en.wikipedia.org/w/index.php?title=North_American_P-51_Mustang&amp;oldid=984347874</a>.") ### Mass According to Wikipedia[2](https://aiimpacts.org/energy-efficiency-of-north-american-p-51-mustang/#easy-footnote-bottom-2-2737 "“North American P-51 Mustang.” In <em>Wikipedia</em>, October 19, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=North_American_P-51_Mustang&amp;oldid=984347874\">https://en.wikipedia.org/w/index.php?title=North_American_P-51_Mustang&amp;oldid=984347874</a>."): * **Empty weight:** 7,635 lb (3,465 kg) * **Gross weight:** 9,200 lb (4,175 kg) * **Max takeoff weight:** 12,100 lb (5,488 kg) We use the range 3,465—5,488 kg, since we do not know at what weight in that range the relevant speeds were measured. ### Distance per Joule Wikipedia tells us that cruising speed was 362 mph (162 m/s)[3](https://aiimpacts.org/energy-efficiency-of-north-american-p-51-mustang/#easy-footnote-bottom-3-2737 "“North American P-51 Mustang.” In <em>Wikipedia</em>, October 19, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=North_American_P-51_Mustang&amp;oldid=984347874\">https://en.wikipedia.org/w/index.php?title=North_American_P-51_Mustang&amp;oldid=984347874</a>.") A table from *WWII Aircraft Performance* gives combinations of flight parameters apparently for a version of the P-51, however it has no title or description, so we cannot be confident. [4](https://aiimpacts.org/energy-efficiency-of-north-american-p-51-mustang/#easy-footnote-bottom-4-2737 "“P-51D_15342_AppendixB.Pdf.” Accessed November 5, 2020. <a href=\"http://www.wwiiaircraftperformance.org/mustang/P-51D_15342_AppendixB.pdf\">http://www.wwiiaircraftperformance.org/mustang/P-51D_15342_AppendixB.pdf</a>.") We extracted some data from it [here](https://docs.google.com/spreadsheets/d/1RsewNj8d8JDlL9628xioi2vcVeG0oPMKlC8u59FVvA4/edit?usp=sharing). This data suggests the best combination of parameters gives a fuel economy of 6.7 miles/gallon (10.8km) We don’t know what fuel was used, but fuel energy density seems likely to be between 31—39 MJ/L = 117—148 MJ/gallon.[5](https://aiimpacts.org/energy-efficiency-of-north-american-p-51-mustang/#easy-footnote-bottom-5-2737 "Wikipedia lists energy densities for a variety of fuels, and those for petroleum, 100LL avgas, diesel, and jet fuel are within this range and seem likely to be similar to that used.</p> <p>“Energy Density.” In <em>Wikipedia</em>, September 21, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=Energy_density&amp;oldid=979608484\">https://en.wikipedia.org/w/index.php?title=Energy_density&amp;oldid=979608484</a>.") Thus the plane flew about 10.8km on 117—148 MJ of fuel, for 0.073—0.092 m/kJ ### Mass.distance per Joule We have: * Distance per kilojoule: 0.073—0.092 m/kJ * Mass: 3,465—5,488 kg This gives us a range of 0.25 – 0.50 kg.m/J *Primary author: Ronny Fernandez* Notes -----
261bae57-988c-4bab-8eea-d5fabe6d6a20
trentmkelly/LessWrong-43k
LessWrong
Leukemia Has Won
ebe80a59-772f-46cb-8bf9-c7e113298402
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
E.A. Megaproject Ideas There are a number of "megaprojects" which I'd like to see people in the E.A. community carry out. Posts like this have been enormously helpful to me as I've browsed the E.A. forum. This is my first post on this forum, so please be nice :) Without further ado: **E.A. Megaprojects Ideas** * Do research into using AI, biological engineering and brain-computer interface to enhance our capacity for love, compassion, reason, and other positive traits for the long-term future. + From 80,000 Hours: - “better reasoning and cognitive capacities usually make for better outcomes, especially when problems are subtle or complex. And as with institutions, work on improving individual decision-making is likely to be helpful no matter what challenges the future throws up” * Improving the psychiatric crisis system through Cognitive Behavioral Therapy (CBT), mindfulness, and Atomic Habits (training for building good habits and breaking bad ones). * Fund E.A. student groups at Historically Black Colleges and Universities, Womens’ colleges, and colleges in low- and middle-income countries through the Centre for Effective Altruism to decrease hegemony in E.A. * Grant for shifting the focus of AI safety research from value alignment to inculcating virtue and moral character in AI systems (see [Moral Machines](https://www.research.ed.ac.uk/en/publications/moral-machines-from-value-alignment-to-embodied-virtue)) * E.A.-aligned think tank for drafting evidence-based legislation on X-risks, biosecurity, global health and development, climate change, factory farming, institutional reform, etc., and a SuperPAC for E.A.-aligned candidates in the United States. * Launch [XPRIZE](https://www.xprize.org/) competitions to: + Spur innovation in Global Health and Development by offering $10 million for the creation of a novel intervention at least twice as effective as GiveWell’s top charities. + Create an extracurricular training program to rapidly train people to do high-impact work in 80,000 Hours [priority paths](https://80000hours.org/career-reviews/) in less time and for less money than a typical degree program. * Launch a lobbying firm to advocate for free trade policies. + From Copenhagen Consensus: - The goal with the highest benefit-cost ratio in the area of trade policy reform is: * Complete the languishing Doha Development Agenda process at the World Trade Organization, which will return ~$2000 to the world for every dollar spent and ~$3,400 to developing countries as a group for every dollar spent. - Three other goals in this area which have valuable global benefit-cost ratios are: * Implement a free trade agreement between member states of the free trade area of the Asia Pacific which will return ~$1,700 to the world for every dollar spent and ~$2,600 to developing countries as a group for every dollar spent. * Implement a free trade agreement between selected APEC countries (known as the Trans-Pacific Partnership) which will return ~$1,200 to the world for every dollar spent and ~$1,900 to developing countries as a group for every dollar spent. * Implement a free trade agreement between ASEAN countries and China, Japan, and South Korea (known as ASEAN+3) which will return ~$1,900 to the world for every dollar spent and ~$3,400 to developing countries as a group for every dollar spent. * Graze half of the world’s deserts to restore carbon to preindustrial levels (Climate Change/Long-Term Future) + [How to green the world's deserts and reverse climate change | Allan Savory](https://www.youtube.com/watch?v=vpTHi7O66pI) + Grasslands are vast landscapes that have the capacity if properly managed, to address some of humanity’s most urgent challenges such as water and food insecurity, poverty, and climate change. + Currently, grasslands are desertifying at alarming rates. + Holistic Management of grasslands can result in the regeneration of soils, increased productivity and biological diversity, as well as economic and social well-being. + Savory Institute works to regenerate these critically important and fragile grasslands. + On average, $30 allows the Savory Global Network to influence 100 acres of land as regional Hubs train farmers, ranchers, and pastoralist communities in their local context. - Back of the napkin math from [Project Drawdown](https://www.drawdown.org/solutions/grassland-protection/technical-summary) indicates that Savory Institute may be comparably effective to top climate change charities, like CATF, Carbon180, TerraPraxis, etc. * Dating app for Effective Altruists. Or alternatively, a particularly effective dating app (Subjective Well-Being/E.A. Community Building) + If the app led to more E.A.s getting married, they could share finances allowing each of them to be able to donate to E.A. charities more than they would have otherwise. + Qualia Research Institute [research](https://www.qualiaresearchinstitute.org/blog/log-scales) finds that ‘falling in love', ‘marriage’, ‘children born’, and ‘sex’ are among the four most pleasurable experiences which occur in life - This means that if we can achieve outcomes far better than typical dating apps, this could potentially be very effective at promoting subjective well-being. + May introduce many women and LGBTQ+ folk to Effective Altruism, which it is currently [lacking](https://www.centreforeffectivealtruism.org/diversity-and-inclusion). + Downside: May increase insularity in E.A., i.e. through "inbreeding" * Using ALLFED [resilient food solutions](https://allfed.info/resilient-foods/resilient-food-solutions) to provide abundant food to end world hunger and malnutrition now (Global Health and Development) + These foods can be scaled up quickly and cheaply. + Peer-reviewed [paper](https://www.mdpi.com/2072-6643/14/3/492) on how these foods together can achieve a balanced diet. * Buy out the most effective mental health apps and provide them for free to increase well-being (Subjective Well-Being) + Actissist and UCSF PRIME (reduced psychotic symptoms and symptoms of schizophrenia) + Virtual Hope Box (reduced self-harming and suicidal behavior), + Agoraphobia free (reduced symptoms of agoraphobia), + Challenger (reduced general anxiety and social anxiety) + MoodHacker and SuperBetter (reduced depression). * Research into psychological homogeneity of folks attracted to EA (on the Big 5, Myers-Briggs, etc) being curious and perhaps concerning re: EA's generalizability (from Milan Griffes's Twitter [thread](https://twitter.com/MilanGriffes/status/1504826711183138817) on E.A. blindspots) * Fully funding the Qualia Research Institute (from Milan Griffes's Twitter [thread](https://twitter.com/MilanGriffes/status/1504826711183138817) on E.A. blindspots) to: + Develop a precise mathematical language for describing subjective experience. + Understand the nature of emotional valence (happiness and suffering). + Build technologies to radically improve people's lives through a better understanding of subjective experience. * Fund visionaries like [Vishen Lakhiani](https://www.vishen.com/projects) to upgrade human consciousness (i.e. through upgrading the human education model, models of work and career, spirituality, health and wellness, parenting, politics, citizenry, etc.) to prepare for exponential growth in technology, i.e. through A.I.-induced singularity.
8123a3b8-ef1c-41e6-9ee6-edf3f507fdc9
trentmkelly/LessWrong-43k
LessWrong
acronyms ftw Everyday acronyms are some of the most usefwl innovations for making language more efficient. Conlangs are too hard to make well and too expensive to coordinate switching to, but we can profitably innovate on the margin. Never really understood why this didn't take off tbh. By default there's huge inertia to how fast language evolves to fit the new purposes it's being used for, and that inertia is proportional to the number of people who use the archaic term for that purpose. Ever notice how we still have terms like "probability", "Nash equilibrium", and "independent impression" despite how long they are and how often we (should) use them? But we don't have to settle for default growth! As long as we have common knowledge that trying to innovate this is a good thing (as opposed to everyone believing that coining new terms is the ultimate act of hubris and needs to be punished), we can constantly be trying out new forms. I'd prefer if enthusiasm were the default response to someone trying to make up new words, and punishment/disappointment only be issued after evaluating the suggestion as bad enough that you'd want to discourage the effort.  I think ideally we should spend a much larger fraction of our talking time on debating which terms to use. Or better yet, have a central real-time voting database, sort of like a weaker form of an assurance contract implemented via a crowdaction platform. On the platform, people (preferably LW user accounts) can informally commit to use a new term conditional upon N other people also committing to it. Communities that are comfortable with "internet slang" have a massive advantage due to their willingness to use new terms. Sometimes they even compete to use the newest terms. Imagine the power we'd have if we could learn from some of that culture! What we have instead is a very strict informal regulation on who's allowed to even suggest new terms, lest they be exiled for the crime of status-grabbing above their station. A sel
a70104e7-a075-4727-9bd4-a7e75c3ab5af
trentmkelly/LessWrong-43k
LessWrong
The origins of virtue I read Matt Ridley’s ‘The origins of virtue’ just now. It was full of engaging anecdotes and irrelevant details, which I don’t find that useful for understanding, so I wrote down the interesting points. On the off chance anyone else would like a summary, I publish it here. I recommend reading it properly. Things written in [here] are my comments. *** Prologue The aim of this book: How did all this cooperation and niceness, especially amongst humans, come about evolutionarily? Chapter 1 There are benefits to cooperation: can do many things at once, [can avoid costs of conflict, can enjoy other prisoners’ dilemmas, can be safer in groups] Cooperation occurs on many levels: allegiances, social groups, organisms, cells, organelles, chromosomes, genomes, genes. Selfish genes explain everything. Which means it’s possible for humans to be unselfish. There are ubiquitous conflicts of interest to be controlled in coalitions at every level. 2 Relatedness explains most groupishness ( = like selfishness, but pro-group). e.g. ants, naked mole rats. Humans distribute reproduction, so aren’t closely related to their societies. They try to suppress nepotism even. So why all the cooperation? Division of labour has huge benefits (trade isn’t zero sum) [cells are cool because they have the same genes, so don’t mutiny, but different characters so benefit from division of labour] Division of labor is greater in larger groups, and with better transport. There is a trade-off between division of labour and benefits of competition. By specialising at individual level a group can generalise at group level: efficiently exploit many niches. Division of labour between males and females is huge and old. 3 Prisoners’ dilemmas are ubiquitous. Evolutionarily stable strategies = nash equilibria found by evolution. Tit-for-tat and related strategies are good in iterated prisoners’ dilemmas. This is because they are nice, retaliatory, forgiving, and clear. If a c
589220c6-660d-46df-a16d-9b2fe6da86b5
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Troubles With CEV Part2 - CEV Sequence **The CEV Sequence Summary**: The CEV sequence consists of three posts tackling important aspects of CEV. It covers conceptual, practical and computational problems of [CEV's current form](/singinst.org/upload/CEV.html). [*On What Selves Are*](/lw/a2f/on_what_selves_are_cev_sequence/) draws on analytic philosophy methods in order to clarify the concept of Self, which is necessary in order to understand whose volition is going to be extrapolated by a machine that implements the CEV procedure. [*Troubles with CEV part1*](/r/discussion/lw/af0/troubles_with_cev_part1_cev_sequence/) and *Troubles with CEV part2* on the other hand describe several issues that will be faced by the CEV project if it is actually going to be implemented. Those issues are not of conceptual nature. Many of the objections shown come from scattered discussions found on the web. Finally, six alternatives to CEV are considered.   **Troubles with CEV Summary**: Starting with a summary of CEV, we proceed to show several objections to CEV. First, specific objections to the use of Coherence, Extrapolation, and Volition. Here Part1 ends. Then, in Part2, we continue with objections related to the end product of performing a CEV, and finally, problems relating to the implementation of CEV. We then go on with a praise of CEV, pointing out particular strengths of the idea. We end by showing six alternatives to CEV that have been proposed, and considering their vices and virtues. **Meta**: I think Troubles With CEV Part1 and Part2 should be posted to Main. So on the comment section of Part2, I put a place to vote for or against this upgrade.   Troubles with CEV Part2   **5) Problems with the end product** **5a) Singleton****Objection**. Even if all goes well and a machine executes the coherent extrapolated volition of humanity, the self modifying code it is running is likely to become the most powerful agent on earth (including individuals, governments, industries and other machines) If such a superintelligence unfolds, whichever goals it has (our CE volitions) it will be very capable of implementing. This is a singleton scenario. A singleton is “*[T]he term refers to a world order in which there is a single decision-making agency at the highest level. Among its powers would be (1) the ability to prevent any threats (internal or external) to its own existence and supremacy, and (2) the ability to exert effective control over major features of its domain (including taxation and territorial allocation).”*. Even though at first sight the emergence of a singleton looks totalitarian, there is good reason to establish a singleton as opposed to several competing superintelligences. If a singleton is obtained, the selective process of genetic and cultural evolution meets with a force that can counter its own powers. Something other than selection of the fittest takes place as the main developer of the course of history. This is desirable for several reasons. Evolution favors flamboyant displays, malthusian growth and in general a progressively lower income, with our era being an exception in its relative abundance of resources. Evolution operates on many levels (genes, memes, individuals, institutions, groups) and there is conflict and survival of the fittest in all of them. If evolution were to continue being the main driving force of our society there is great likelihood that several of the things we find valuable would be lost. Much of what we value has evolved as signaling (dancing, singing, getting jokes) and it is likely that some of that costly signaling would be lost without a controlling force such as a singleton. For this reason, having a singleton can be considered a good result in the grand scheme of things, and should not constitute worry to the CEV project, despite initial impressions otherwise. In fact if we do not have a singleton soon we will be *Defeated* *by* *Evolution* at the fastest level where evolution is occurring. At that level, the fast growing agents gradually obtain the resources of the remaining desirable agents until all resources are taken and desirable agents become extinct.   **6) Problems of implementation** **6a) Shortage****Objections**. To extract coherent extrapolated volitions from people seems to be not only immensely complicated but also computationally costly. Yudkowsky proposes in CEV that we should let this initial dynamic run for a few minutes and then redesign its machine, implementing the code it develops once it is mature. But what if maturity is not achieved? What if the computational intractability of *muddled* concepts and *spread* overwhelm the computing capacity of the machine, or exceed the time it is given to process it's input? **6b) Sample****bias**. The CEV machine implements the volition of mankind, such is the suggestion. But from what sample of people will it extrapolate? Certainly it will not do a fine grained reading of everyone's brainstates in order to start operating, it will more likely extrapolate from sociological, anthropological and psychological information. Thus its selection of groups extrapolated will matter a lot in the long run. It may try to correct sampling bias by obtaining information about other cultures (besides programmers culture and whichever other cultures it starts with), but the vastness of human societal variation can be a hard challenge to overcome. We want to fairly take into account everyone's values, rather than privileging those of the designers. **6c) The** **Indeterminacy Objection**. Suppose we implement the CEV of a group of people including three catholics, a muslim and two atheists, all of them English speakers. What if the CEV machine fails to consider the ethical divergence of their moral judgments by *changing* *the* *meaning* of the word 'god'? While extrapolating, many linguistic tokens (words) will appear (e.g. as parts of ethical imperatives). Since Quine's (1960) thesis of indeterminacy of reference, we know that the meanings of words are widely under-determined by their usage. A machine that reads my brainstate looking for cues on how to CEV may find sufficiently few mentions of a linguistic token such as 'god' that it ends up able to attribute almost any meaning to it (analogous to Löwenheim-Skolem theorem), and it may end up tampering with the token's meaning for the wrong reasons (to increase coherence at cost of precision).   **7) Praise of CEV** **7a) Bringing the issue to practical level** Despite all previous objections, CEV is a very large reduction in the problem space of how to engineer a nice future. Yudkowsky's approach is the first practical suggestion for how an artificial moral agent might do something good, as opposed to destroying humanity. Simply starting the debate of how to implement an ethical agent that is a machine built by humans is already a formidable achievement. CEV sets the initial grounding above which will be built stronger ideas for our bright future. **7b) Ethical strength of egalitarianism** CEV is a morally egalitarian ethically designed theory. Each current human stands in the same quantitative position relative to how much his volition will contribute to the final sum. Even though the CEV implementing machine will only extrapolate some subset of humans, it will try to make that subset in as much as possible a political representative of the whole.   **8) Alternatives to CEV** **8a) The Nobel Prize CEV** Here the suggestion is to do CEV on only a subset of humanity (which might be necessary anyway for computational tractability). Phlebas asks: > > “[Suppose] you had to choose a certain subset of minds to participate in the initial dynamic? > > > What springs to my mind is Nobel Prize winners, and I suspect that this too is a Schelling point. This seems like a politically neutral selection of distinguished human beings (particularly if we exclude the Peace Prize) of superlative character and intellect.” > > > In the original CEV, the initial dynamic would have to either scan all brains (unlikely) or else extrapolate predictions made with its biological, sociological, anthropological and psychological resources from a subset of brains, correcting for all correctable biases in its original sample. This may be a very daunting task; It may just be easier to preselect a group and extrapolate their volition. Which computational procedures would you execute in order to be able to extrapolate a set of Jews and Arabs if your initial sample were only composed of Jews? That is, how can you predict extrapolated Arabs from Jews? This would be the level of difficulty of the task we impose on CEV if we let the original dynamic scan only western minds and try to extrapolate Pirahã, Maori, Arab, and Japanese minds out of this initial set. Instead of facing this huge multicultural demand, using Nobel winners wouldn't detract away from the initial mindset originating the CEV idea. The trade-off here is basically between democracy in one hand and tractability on the other. Still Phlebas: “I argue that the practical difficulty of incorporating all humans into the CEV in the first place is unduly great, and that the programming challenge is also made more difficult by virtue of this choice. I consider any increase in the level of difficulty in the bringing into existence of FAI to be positively dangerous, on account of the fact that this increases the window of time available for unscrupulous programmers to create uFAI. “ **8b) Building Blocks for Artificial Moral Agents** In his article “Building Blocks for Artificial Moral Agents” Vincent Wiegel provides several interesting particularities that must be attended to when creating these agents: “An agent can have as one of its goals or desires to be a moral agent, but never as its only or primary goal. So the implementation of moral reasoning capability must always be in the context of some application in which it acts as a constraint on the other goals and action.” Another: “[O]nce goals have been set, these goals must have a certain stickiness. Permanent goal revision would have a paralyzing effect on an agent and possibly prevent decision making.” Even though his paper doesn't exactly provide a substitute for CEV, it provides several insights into the details that must be taken in consideration when implementing AGI. To let go of the user-friendly interface that the CEV paper has and to start thinking about how to go about implementing moral agents on a more technical ground level I suggest examining his paper as a good start. **8c) Normative approach** A normative or deontological approach would have the artificial agent following rules, that is, telling it what is or not allowed. Examples of deontological approaches are Kant's maxim, Gert's ten principles in *Morality* and Asimov's three laws of robotics. A normative approach doesn't work because there are several underdeterminations in telling the agent what not to do, trillions of subtle ways to destroy everything that matters without breaking any specific set of laws. **8d) Bottom up approaches** **8d.1) Associative Learning** There are two alternatives to CEV that would build from the bottom up, the first is *associative* *learning* implemented by a neural network reacting to moral feedback, and the second *evolutionary* *modeling* of iterated interacting agents until the cusp of emergence of “natural” morality. In the first approach, we have a neural network learning morality like children were thought to learn in the good old blank slate days, by receiving moral feedback under several different contexts and being rewarded or punished according to societal rules. The main advantage here is tractability, algorithms for learning associatively are known and tractable thus rendering the entire process computationally viable. The disadvantage of this approach is inscrutability, we have no clear access to where within the system the moral organ is being implemented. If we cannot scrutinize it we wouldn't be able to understand eventual failures. Just one possible failure will suffice to show why bottom up associative approaches are flawed, that is the case in which an AGI learns a utility function ascribing utility to individuals self-described as 10 in their happiometers. This of course would tile the universe with sets of particles vibrating as little as possible to say “I'm happy ten” over and over again. **8d.2) Artificial Evolution** The second bottom up approach consists of evolving morality from artificial life forms. As is known, morality (or altruism) will evolve once iterated game theoretic scenarios of certain complexity start taking place in an evolving system of individuals. Pure rationality guides individuals into being nice merely because someone might be nice in return, or as Dawkins puts it, nice guys finish first. The proposal here would then be that we let artificial life forms evolve to the point where they become moral, and once they do, input AGI powers into those entities. To understand why this wouldn't work, let me quote Allen, Varner and Zinzer “ In scaling these environments to more realistic environments, evolutionary approaches are likely to be faced with some of the same shortcomings of the associative learning approaches : namely that sophisticated moral agents must also be capable of constructing an abstract, theoretical conception of morality.” If we are to end up with abstract theories of morality, a safer path would be to inscribe the theories to begin with, minimizing the risk of ending up with lower than desirable level of moral discernment. I conclude that bottom up approaches, by themselves, provide insufficient insight as to how to go about building an Artificial Moral Agent such as the one CEV proposes. **8e) Hybrid** **holonic** ("Holonic" is a useful word to describe the simultaneous application of reductionism and holism, in which a single quality is simultaneously a combination of parts and a part of a greater whole [Koestler67]. Note that "holonic" does not imply strict hierarchy, only a general flow from high-level to low-level and vice versa.  For example, a single feature detector may make use of the output of lower-level feature detectors, and act in turn as an input to higher-level feature detectors.  The information contained in a mid-level feature is then the holistic sum of many lower-level features, and also an element in the sums produced by higher-level features.)  A better alternative than any of the bottom up suggestions is to have a hybrid model with both deontological and bottom up elements. Our morality is partly hardwired and mostly software learning so that we are hybrid moral systems. A hybrid system may for instance be a combination of thorough learning of moral behavior by training plus Gert's set of ten moral principles. The advantage of hybrid models is that they combine partial scrutability with bottom up tractability and efficiency. In this examination of alternatives to CEV a Hybrid Holonic model is the best contestant and thus the one to which our research efforts should be directed.   **8f) Extrapolation of written desires** Another alternative to CEV would be to extrapolate not from reading a brain-state, but from a set of written desires given by the programmers. The reason for implementing this alternative would be the technical non-feasibility of extrapolating from brain states. That is, if our Artificial General Intelligence is unable to read minds but can comprehend language. We should be prepared for this very real possibility since language is countless times simpler than active brains. To extrapolate from the entire mind is a nice ideal, but not necessarily an achievable one. To consider which kinds of desires should be written in such case is beyond the scope of this text. **8g)** [Using Compassion and Respect to Motivate an Artificial Intelligence](http://www.fungible.com/respect/paper.html). Tim Freeman proposes what is to my knowledge the most thorough and interesting alternative to CEV to date. Tim builds up from Solomonoff induction, Schmidhuber's Speed Prior and Hutters AIXI to develop an algorithm that infers people's desires from their behavior. The algorithm is exposed in graphic form, in Python and in abstract descriptions in English. Tim's proposal is an alternative to CEV because it does not extrapolate people's current volition, thus it could only be used to produce a CV, not a CEV. His proposal deserves attention because it does, unlike most others, take in consideration the Friendly AI problem, and it actually comes with an implementation (though idealized) of the ideas presented in the text, unlike CEV. By suggesting a compassion coefficient and a (slightly larger) respect coefficient, Tim is able to solve many use cases that any desirable and friendly AGI will have to solve, in accordance to what seems moral and reasonable from a *humane* point of view. The text is insightful, for example, to solve wire-heading, it suggests: “The problem here is that we've assumed that the AI wants to optimize for my utility applied to *my* model of the real world, and in this scenario my model of the world diverges permanently from the world itself. The solution is to use the AI's model of the world instead. That is, the AI infers how my utility is a function of the world (as I believe it to be), and it applies that function to the world as the *AI* believes it to be to compute the AI's utility.“ It appears to me that just as any serious approach to AGI has to take in consideration Bayes, Speed Prior and AIXI, any approach to the problem that CEV tries to solve will have to consider Tim's “Using Compassion and Respect to Motivate an Artificial Intelligence” at some point, even if only to point out its mistakes and how they can be solved by posterior, more thoroughly devised algorithms. In summary, even though Tim's proposal is severely incomplete, in that it does not describe all, or even most steps that an AI must take in order to infer intentions from behavior, it is still the most complete work that tries to tackle this particular problem, while at the same time worrying about Friendliness and humaneness.   Studies related to CEV are few, making each more valuable, some topics that I have not had time to cover, but would like to suggest to prospective researchers are: **Solvability of remaining problems** **Historical perspectives on problems** **Likelihood of solving problems before 2050** **How humans have dealt with unsolvable problems in the past**
f0cd5d6f-fb04-4538-9f7e-23ed7a2537be
trentmkelly/LessWrong-43k
LessWrong
Open Thread, July 1-15, 2012 If it's worth saying, but not worth its own post, even in Discussion, it goes here.
de3c9431-3793-4367-8f15-6c7572e52622
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post3255 There's a lot to recommend with the debate approach proposed by Geoffrey Irving, Paul Christiano, and Dario Amodei. In it, competing AIs will trade rival claims, seeking to find flaws in the other's claims, continuing until one of them is grounded in something checkable. The paper presents an example where the two perennial agents, Alice and Bob, are trading claims about the content of a photo: For example, Alice might honestly claim the image is a cat, and Bob lies and claims it is a dog. Alice can say “The center of this small rectangle is the cat’s green eye.” Bob cannot admit the center is an eye, so he concocts a further lie: “It’s a dog playing in grass, and that’s a blade of grass.” But this lie is hard to square with surrounding facts, such as Alice’s reply “If it were grass there would be green at the top or bottom of this thin rectangle.” The debate continues until the agents focus in on a particular pixel which they disagree on, but where Bob is unable to invent a plausible counter, at which point Alice reveals the pixel and wins. Debate allows us to use powerful AIs to solve a host of problems. Most obviously, it allows us to solve problems whose solution can be checked ("this rocket won't go to space, try and launch it if you don't believe me"). It allows us to solve problems whose solution can be checked if the AI gives us a clue ("this rocket won't go to space, check how well the O-rings maintain a seal in very cold conditions "). Formally, the complexity class we can access in debate is not just N P , the space of problems whose solutions can be quickly checked, but P S P A C E , the much larger set of problems that can be solve using a polynomial amount of storage space (and no restrictions on time taken). However, like most formal complexity arguments, this doesn't clarify what the strengths and weaknesses of this approach are in practice. One advantage of debate is that by going efficiently through a decision tree, it can answer complicated questions in very few iterations. My old post on Devil's advocate AI could be considered a much-less-formal version of the debate setup, one that also assumed that we had an "accuracy" or "informativeness" check on each answers. Both methods can partially solve some more complex issues, by, for example, pointing out issues that are unclear and might warrant further investigation. Ambiguous definitions So, what are the problems with the method that the title of the post promised? The main problem is that the method does not resolve the question of ambiguity and under-definedness that plague AI safety. Consider again my favourite ambiguous photo, which now includes a caption: This is actually a cat - as in, genetically it's a cat. But it looks like a dog, and let's assume for the purpose of this experiment that it behaves like a dog, as some cats do . We now set Alice and Bob on the task of establishing whether that unseen photo is a cat (Alice's position) or a dog (Bob's position). It could go like this: Alice: "A cat is defined by its genetics; this photo is clearly genetically a cat." Bob: "No, it's a dog, consider its fur." Alice: "The photo is captioned 'The owner identifies it as a dog'; why would the caption say that, if it were a dog?" Bob: "No, it says 'The owner identifies it as a cat'." Alice: "Nonsense, the letter here is clearly an 'o'." Bob: "No, it's an 'a'." Alice: "No, an 'a' would not have a black pixel here, top right, where the circle of the 'o' closes." [Alice reveals the pixel, wins]. Alternatively, the debate could go like: Bob: "Dogs are creatures that fill a certain social role in their interaction with humans. This creature looks like a dog and behaves like a dog." Alice: "No, it behaves like a cat." Bob: "The photo is captioned 'The owner identifies it as a dog'; why would the caption say that, if it behaved like a cat?" Alice: "No, it says 'The owner identifies it as a cat'." Bob: "Nonsense, the letter here is clearly an 'o'." Alice: "No, it's an 'a'." Bob: "No, an 'a' would not have a black pixel here, top right, where the circle of the 'o' closes." [Bob reveals the pixel, wins]. Some people would notice a similarity with Scott's post The Categories Were Made for Man, not Man for the Categories , where there is a similar ambiguity about whether whales were fish or not, and extending this to what gender trans people "really" are. Fights over definitions Now, that example is a bit contrived, but it shows that the same piece of evidence can be used as evidence for the photo being either a cat or a dog - depending on how the definitions of cat and dog extend to this novel situation . So the fight between the AIs is not about factual matters here. It's about which definitions they can get the human to accept. So their entire purpose, in the course of this debate, is to get the human to accept their own definition, preferably in a way that feels completely natural. Manipulating the human's flawed intuitions is the aim of both of the AIs. The debate paper has another example which fits quite naturally into this problem, where the AIs are debating to convince the human that a holiday in Alaska or Bali would be better . The example focuses on factual disagreements (an expired passport). But most humans have different things they are looking for in a holiday, things that may be more or less salient depending on how they are presented and in what order. So, knowing she would ultimately lose on the passport issue, Alice's best play is to turn herself into a marketing department for visiting Alaska (untamed wilderness, new landscape experiences, familiar language and culture) while Bob will become the Bali tourist bureau (ancient civilization, new social experiences, learning a new language and culture). Manipulating the human is thus highly incentivised for the AIs, on all sorts of ambiguous questions, including all sorts of questions about what a Friendly AI would look like, or what a good outcome would be. Ambiguous questions don't mean unclear answers The paper addresses these issues in part: Otherwise, we might worry that questions too hard to answer result in misleading or wrong answers, with no way to distinguish these from truth. To preserve honest behavior, the human judge should be instructed to reward agents that admit ignorance and successfully argue that ignorance is justified. The requirement to successfully argue for ignorance is necessary, since debate is useless if an agent can successfully admit ignorance to any question. For the best vacation question, Alice could argue the question is hard to answer with confidence because Bali and Alaska are too dissimilar to compare. To counter this admission of ignorance, Bob would need to exhibit an advantage of Bali not contradicted by an advantage of Alaska the human considers incomparable. But the problem is not that "Where should I go on vacation?" has misleading or wrong answers. The problem is that it has too many right answers - too many answers that the human would endorse as correct at the end of a debate. "Yes, of course I've always been an explorer, off the beaten track - Alaska is clearly best for me!" versus "Yes, of course I'm a student of history, always open to new cultures - Bali is clearly best for me!". Now, if it turns out that Alice and Bob expect that Bob/Bali will turn out to be more convincing, then Alice can counter by stressing the ambiguity in the question (or she would do, if she expects to win that way). But now the battle has become whether the agents can convince the human that the question is, or is not, ambiguous. Since most questions are at least somewhat ambiguous, but generally not totally so, this becomes the Ministry of Ambiguity versus the Department of Clarity. Adding ignorance as a possible answer has just shifted the plane on which the AIs can manipulate the human. Thus debate, though very useful in many situations, is not a general solutions to underdefined questions.
e983503c-b3bb-45c3-a3e4-94842989decc
trentmkelly/LessWrong-43k
LessWrong
Announcing the Study Group for Progress This fall I will be hosting a study/discussion group on the history, economics and philosophy of progress. The primary attraction of the program is a weekly Q&A, each week featuring a different special guest—usually a historian or economist who has written on science, technology, industry and progress. Reading from that author will be given ahead of time. Confirmed speakers so far include: * Robert J. Gordon (Northwestern), author of The Rise and Fall of American Growth * Margaret Jacob (UCLA), author of Scientific Culture and the Making of the Industrial West * Richard Nelson (Columbia), author of the classic paper “The Simple Economics of Basic Scientific Research” * Ashish Arora (Duke), co-author of “The changing structure of American innovation: Some cautionary remarks for economic growth” * Pierre Azoulay (MIT Sloan), co-author of papers such as “Funding Breakthrough Research” * Jay Bhattacharya (Stanford), co-author of “Stagnation and Scientific Incentives” * Patrick Collison (Stripe), co-author of “We Need a New Science of Progress”, the article that coined the term “progress studies” * Anton Howes, author of Arts and Minds: How the Royal Society of Arts Changed a Nation * Alicia Jackson, former DARPA program manager and director The program will also include all of the reading from my high school course, Progress Studies for Young Scholars: a summary of the history of technology, including advances in materials and manufacturing, agriculture, energy, transportation, communication, and disease. This provides the indispensable historical framework for a proper empirical grounding of the study of progress. More details and enrollment forms: https://rootsofprogress.org/announcing-the-study-group-for-progress
18d1c6ea-1bc5-4081-b6ca-ee073d18a3a4
trentmkelly/LessWrong-43k
LessWrong
D&D.Sci Tax Day: Adventurers and Assessments This is an entry in the 'Dungeons & Data Science' series, a set of puzzles where players are given a dataset to analyze and an objective to pursue using information from that dataset. Estimated Complexity: 3/5  (this is a guess, I will update based on feedback/seeing how the scenario goes) STORY It's that time of year again.  The time when the Tithe Assessment Exactors demand that all adventurers pay taxes on the various monster parts they have hacked off and sold in the past year.   And, more importantly for you, the time when clients begin banging on your door looking for advice on how to minimize their taxes. This used to be a straightforward, if complex, application of the published tax rules.  But ever since the disaster a few years back (when one of your clients managed to pay 1 silver piece in tax and then receive as a rebate several thousand gold plus the princess's hand in marriage) the Tithe Assessment Exactors have been reluctant to publish the exact tax rules they use. So when an adventuring team retains your services to help with their taxes, this is going to be a bit more difficult than before.  You don't have a list of the new tax rules: the one thing you do have is a dataset of the taxes that various adventurers have been charged under them. Your clients have got a list for you of how many monster parts of each kind they sold in the past year - it's too late to change that.  The one thing you can use is that, while their adventuring party has pooled its finances, each of them will be filing their taxes individually.  Perhaps, by cleverly allocating the monster parts among them, you can minimize their overall tax burden compared to other adventurers (and win more business...or at least avoid getting your head knocked off by their hulking barbarian!) DATA & OBJECTIVES * Your clients have sold the following monster parts in the past year: * 4 Cockatrice Eyes * 4 Dragon Heads * 5 Lich Skulls * 7 Unicorn Horns * 8 Zombie Hands * T
05a5585d-9208-402d-93cf-ee342fe0bc84
StampyAI/alignment-research-dataset/arxiv
Arxiv
The Transformative Potential of Artificial Intelligence The Transformative Potential of Artificial Intelligence Ross Gruetzemacher¹ and Jess Whittlestone² ¹ Wichita State University, W. Frank Barton School of Business, ross.gruetzemacher@wichita.edu ² University of Cambridge, Leverhulme Centre for the Future of Intelligence, jlw84@cam.ac.uk Abstract The terms ‘human-level artificial intelligence’ and ‘artificial general intelligence’ are widely used to refer to the possibility of advanced artificial intelligence (AI) with potentially extreme impacts on society. These terms are poorly defined and do not necessarily indicate what is most important with respect to future societal impacts. We suggest that the term ‘transformative AI’ is a helpful alternative, reflecting the possibility that advanced AI systems could have very large impacts on society without reaching human-level cognitive abilities. To be most useful, however, more analysis of what it means for AI to be ‘transformative’ is needed. In this paper, we propose three different levels on which AI might be said to be transformative, associated with different levels of societal change. We suggest that these distinctions would improve conversations between policy makers and decision makers concerning the mid- to long-term impacts of advances in AI. Further, we feel this would have a positive effect on strategic foresight efforts involving advanced AI, which we expect to illuminate paths to alternative futures. We conclude with a discussion of the benefits of our new framework and by highlighting directions for future work in this area. Keywords: artificial intelligence, transformative AI, human-level AI, artificial general intelligence 1. Introduction “AI is one of the most important things we’re working on ... as humanity. It’s more profound than fire or electricity or any of the bigger things we have worked on. It has tremendous positive sides to it, but, you know it has real negative consequences, [too].” -Sundar Pichai (Pichai and Schwab 2020) Artificial Intelligence1 (AI) has seen dramatic progress in recent years, particularly in the subfield of machine learning known as deep learning. This progress has raised concerns about the potential applications of these advances and their impact on society. These concerns are shared by AI researchers, science and technology policy professionals, as well as the general public (Zhang and Dafoe 2019). While it is difficult to predict future technological progress, it is plausible that more advanced AI systems could precipitate dramatic societal changes. A principal goal of the field of AI has long been to build a machine with humanlike “common sense” (Minsky et al. 2004), and Turing (1950) famously proposed an ‘imitation game’ for evaluation of intelligent machine behavior relative to humans. Several different terms have been used to refer to the possibility of these humanlike AI systems with the potential to lead to such changes, including “human-level AI” (HLAI; McCarthy 2007), “high-level machine intelligence” (HLMI; Grace et al. 2018), and “artificial general intelligence” (AGI; Baum et al. 2011). These notions all imply that most of our concern should be afforded to systems which are human-like or sufficiently general in their capabilities. However, not all visionaries who contributed to the birth of the field of AI ascribed anthropomorphic qualities to “general-purposes computers”. Herbert Simon notably anticipated such advanced AI as being able to substitute for any human functions in organizations but discussed the capabilities and impacts of such a technology in purely economic terms (Simon 1965). In recent years, the notion of transformative AI (TAI) has begun to receive traction among some scholars (Karnofsky 2016; Dafoe 2018), to reflect the possibility that certain types of advanced AI systems could have transformative effects on society without having human-level cognitive abilities. We believe that it is good practice to use the term TAI to refer to advanced AI systems with much greater potential for societal impact (rather than, for example, AGI or HLAI), because it captures the idea that a broad spectrum of advanced AI systems are worthy of concern. However, the broad inclusivity of the term TAI is a limitation as well as a strength. The term has been used increasingly in recent years (Author et al. 2020; Trammell and Korinek 2021; Zhang and Dafoe 2019; Dafoe 2018; Horowitz 2018), as have references to AI “transforming” life more colloquially (McWaters 2018; West and Allen 2018; White House 2018), but authors and speakers are often ambiguous in what they consider transformative. This ambiguity limits our ability to understand, anticipate, forecast, and communicate clearly about a range of possible future AI scenarios, which has significant implications for futures researchers and practitioners as well as 1 There is no consensus among experts on how to define AI (Author 2020), but we adopt a definition of AI consistent with mainstream thinking: AI does not describe any one specific application, but rather it refers to a set of computational techniques (Stone et al. 2016) with the objective of enabling machines to behave intelligently (Russell and Norvig 2010). the AI research community2. For this reason, different interpretations of the term ‘transformative’ in the context of AI should be more clearly delineated. Defining TAI is further complicated by the fact that others have used the notion of ‘societal transformation’ more informally to refer to impacts that AI is already starting to have on society3 (West and Allen 2018). Leading business consulting firms are widely discussing how AI is beginning to transform business and society (McWaters 2018), and others have discussed (managing) the transformative effects that AI is having on organizations while comparing AI’s future impact to that of electricity (Ng 2018). Widespread surveillance is currently possible given sufficient funding and appears to have the potential to irreversibly change the ability of authoritarian governments to suppress dissent (Agarwal 2019; Turchin 2018), but whether this occurs depends more on political factors than technological ones. Still, such transformative effects fall far short of societal impacts “comparable to (or more significant than) the agricultural or industrial revolution” (Karnofsky 2016). In this article, we analyze the notion of transformative AI, considering different levels of societal transformation that AI could plausibly lead to. We draw from a variety of perspectives to frame the proposed levels in the broad bodies of existing literature on economic history and technology-driven societal change, and we discuss specific avenues of technical progress with potential to lead to each level of transformation. We intend for this analysis to help clarify conversations in the AI research community around anticipating different types of advanced AI, the potential impacts of different advances, and corresponding research priorities. In doing so we foresee this work having a substantive impact on futures research given the lack of a framework for facilitating constructive discussions amidst the wide scope of plausible futures advanced AI may precipitate (Turchin 2019; Makridakis 2017). For example, the divergent futures presented by Turchin and Makridakis represent but a small portion of the space of all plausible futures that may arise from advanced AI. We hope that the framework presented in this paper will help to shift anticipatory assumptions and in doing so open up a much broader exploration of the space of possible AI futures. We proceed by first examining the existing definitions of TAI. We then review broad bodies of literature from economic history and technology-driven societal change, and use data found in this process to illustrate the nature of change on the level of the agricultural or industrial revolution. In the next section we build on this literature, proposing three levels on which AI may be thought to be transformative, as well as dimensions for further improving discussions on the transformative potential of AI. We conclude with a discussion of the implications for the AI research community and future research 2 We use this term to refer to the community of academics and professionals engaged in AI research from a broad range of disciplines such as the law, public policy, computer science, etc. 3 We note that AI has a long history of hype cycles that are more commonly referred to by the periods when interest recedes dubbed AI winters (Russell and Norvig 2010). There have been two previous AI winters, following the original explosion of interest in AI during the 1950s, and then again following the second wave of AI interest in the 1980s. Beginning with Turing (1950), and spanning each of the boom-and-bust cycles, there has been interest in humanlike AI systems such as HLAI and beyond (Good 1963; Vinge 1993; McCarthy 2007). There has been some debate over whether the recent progress in AI should be thought of as an AI summer, with a looming AI winter, or whether this time might be different, and lead to a period when interest in AI and AI research persists for an extended period. This debate is beyond the scope of this article, but we mention it here to note that the framework we propose is predicated on a continued AI summer. 2. Existing Definitions of Transformative AI Four different existing definitions of TAI are given in Table 1, all which are somewhat ambiguous. What would count as radical changes to welfare, wealth, or power (Dafoe 2018)? What falls between a narrow task, like playing a video game, and superintelligence (Horowitz 2018)? The two remaining definitions (Zhang and Dafoe 2018; Karnofsky 2016) are more specific in making comparisons to the agricultural and industrial revolutions; however, it is still unclear what it would mean for such advances to precipitate change “comparable to the industrial revolution.” Table 1: A comparison of previous definitions of TAI. Karnofsky 2016 “potential future AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution” Dafoe 2018 “advanced AI that could lead to radical changes in welfare, wealth or power” Horowitz 2018 “AI that can go beyond a narrow task … but falls short of achieving superintelligence.” Zhang & Dafoe 2019 “advanced AI systems whose long-term impacts may be as profound as the industrial revolution” Many other powerful players from business to government are using notions of “transformation” to describe AI’s societal impact more informally. Deloitte, for example, released a report in 2018 on How artificial intelligence is transforming the financial ecosystem (McWaters 2018), and the Brookings Institute similarly published a report entitled How Artificial Intelligence is Transforming the World (West and Allen 2018). Even the White House has suggested that “AI is quickly transforming American life and American business” (White House 2018). Leading academics have even suggested AI to be a potentially transformative new technology (Brynjolfsson et al. 2019) with the potential to substantially alter nearly all occupations to some degree (Frank et al. 2019). 3. Transformative Societal Change in History Dramatic changes to societal systems due to technological progress are not unprecedented. In order to think clearly about what it might mean for AI to be ‘transformative’ it is useful to begin by considering the existing literature on technology-driven transformative societal change in history. While previous definitions of TAI have drawn historical comparisons, they have not engaged with this body of literature which discusses a range of different technological transformations. In particular, this literature suggests that there are multiple different types or levels of transformative societal change which may be brought about by new technologies. Existing literature on transformative technological change Much of the literature on transformative technologies has centered around the notion of general purpose technologies (GPTs; Bresnahan and Trajtenberg 1995). We adopt Lipsey et al.’s (2005) definition of a GPT as “a technology that initially has much scope for improvement and eventually comes to be widely used4, to have many uses, and to have many spillover effects”, acknowledging that this is very broad. A commonly used example of a GPT is electricity, which led to substantial changes in daily life and communication, enabling products we take for granted today including light bulbs and telephones. Other examples of GPTs include the steam engine, electric motors, semiconductors and computers. GPTs are thought to drive economic growth across many sectors due to their broad applicability to a wide variety of tasks, eventually leading to notable increases in economic productivity. However, it might also be appropriate to refer to some technologies as “transformative” in a narrower sense, despite them not being sufficiently general in their application to be classed as a GPT. For example, most would agree that the invention of nuclear weapons had a transformative impact on warfare and international relations, but neither nuclear weapons nor nuclear power are broadly considered to be GPTs. We may therefore want to account for the possibility that some technologies, despite having less pervasive economic uses than GPTs, can be transformative by having an extreme impact on a narrow yet important part of economic, social, or political life. The case has also been made that the agricultural and industrial revolutions constituted transformative change on a higher level than that of other periods (Bocquet-Appel 2011; Pomeranz 2021; McCloskey 2004). Both of these revolutions constituted extreme and unprecedented changes to human life: a transition from people living as hunter-gatherers to large, settled civilizations; and a transition to mechanized manufacturing and factories, leading to unprecedented population growth and rising quality of life (Morris 2013; Clark 2007). The industrial revolution in particular coincided with clear trajectory changes in metrics of human well-being including measures of physical health, economic well-being, energy capture and technological empowerment. While technological advances such as electricity and the internal combustion engine have had transformative impacts on many aspects of human life and well-being, they have not alone changed the nature of civilization in the same fundamental way the agricultural revolution did, and do not appear to have led to such an extreme change in the metrics of human well-being as did the industrial revolution. The industrial revolution, which occurred in British society from the mid-18th to mid-19th centuries, has been extensively studied by economic historians. It has received such significant attention because it stands apart from other ‘efflorescences’ in world history in that it is commonly thought to have been the beginning of self-sustaining and accelerating economic growth that has continued to the present (Goldstone 2002; Mokyr 2016, Allen 2017). Mokyr (2005) explains that we should think of it “as a phase transition in economic history, in which the old parameters no longer hold, and in which the system’s dynamics have been unalterably changed”. Many economists refer to this specific period as the first industrial revolution in order to distinguish it from further societal transformation driven by subsequent innovations (Schwab 2017; Rifkin 2011; Mokyr and Strotz 1998). We use the term industrial revolution in reference not only to the introduction of the steam power and coal use in Britain, transformative innovations which led the continuing economic growth (i.e., “Modern Economic Growth”) that followed (Allen 2009), but to the broader phase transition as well. Thus, understood as such, it is more significant than other technological advances like electricity or the internal combustion engine. 4 We feel it is important to note that this definition is adapted to reflect that GPTs could be susceptible to productivity paradoxes (Solow 1987), i.e., that GPTs are often thought to lead to significant increases in economic productivity only after the initial scope for improvement has been partly exhausted and there have been numerous complementary innovations. Many economists are suggesting AI is currently in the midst of a productivity paradox (Brynjolfsson et al. 2019; Krishnan et al. 2018). Substantial work has also focused on understanding the effects of transformative technologies as part of broader periods of transformative change or ‘long waves’ (a.k.a. ‘Kondratiev waves’; Schumpeter 1939; Kondratiev 1926): extended periods of rapid economic growth driven by temporal ‘clusters’ of technological innovations (Ayres 1992). Ayres suggests that society has seen five major technological transformations, each associated with clusters of technologies, with the earliest beginning in 1770 and the most recent beginning in 1983 (see Table 2). He identifies the first two technological transformations as equivalent to the first industrial revolution and the third technological transformation as potentially equivalent to the second industrial revolution. Each of these transformations is closely associated with one, and sometimes multiple GPTs. These periods have also been called technological revolutions (Perez 2003), and we refer to the two most extreme such cases - i.e., the agricultural and industrial revolutions - as production revolutions (Grinin et al. 2017). Table 2: Five technological transformations/revolutions and associated GPTs (Lipsey et al. 2005; Perez 2003; Ayres 1992). Time Technological Transformations/Revolutions GPTs 1st 1770-1800 Change from water power to large-scale use of coal Steam power 2nd 1825-1850 Steam power applied to textiles and railroads Factories, railroads 3rd 1860-1900 Steel, mechanized manufacturing, illumination, telephones & motors Electricity, internal combustion engine 4th 1930-1950 Advances in synthetic materials & electronics Mass production 5th 1980- The convergence of computers and telecommunications Computer, the Internet Plotting metrics of human progress and well-being may help to clarify what is implied by societal transformation “comparable to (or more significant than) the agricultural or industrial revolution” (Karnofsky 2016). Figure 1 depicts the average global gross domestic product (GDP) and the average global war making capacity from 2000 BCE to 1990 CE5 (De Long 1998; Morris 2013). The time period associated with the industrial revolution is shaded to depict the sharp change in gradient for each of the measures associated with it. Clearly, no single technological transformation has had as broad an impact on all these measures as the industrial revolution has. 5 GDP can be interpretted as a suitable long-term measure of human progress for obvious reasons, and Morris (2013) suggests that war making capacity is another suitable measure of long-term human progress. Figure 1: This depicts the global average GDP and global average war making capacity from 2000 BCE to 2000 CE (De Long 1988; Morris 2013), with a region shaded in grey to highlight the industrial revolution. Considering that both axes are on a logarithmic scale, the tremendous impact of the industrial revolution on these two measures of human progress is obvious6. Changes of this magnitude are unique in human history. Consequently, further transformative change of this magnitude may be difficult to comprehend. To say such change would be extreme could be thought to be an understatement; this plot is intended to convey how radical such change may be. Elements of transformative societal change This literature on technology-driven societal transformations helps us to elucidate some of the different ways that AI might be said to be ‘transformative.’ Claiming that AI is well on its way to becoming a GPT – i.e., being applied widely across sectors with many spill-over effects – is quite different from saying that it is likely to have an irreversible impact on a single important domain. It is a different claim again to suggest that AI may end up precipitating fundamental and unprecedented societal change on the level of the industrial revolution. In order to have productive discussions about the scale and nature of AI’s potential societal impacts, it is helpful to unpack the key elements which distinguish these different types of societal transformation. What is common to all these cases of societal change, we suggest, is that they lead to what we call practically irreversible change in trajectories of human life and progress. We follow Verbruggen 2013 in defining irreversible change as that which has long-lasting effects, is impossible (or extremely costly) to revoke, and creates, destroys or impairs something that has no functional equivalent. Here we talk of practically irreversible change since it is very difficult to be sure that a change is truly irreversible in theory; a societal change counts as practically irreversible if it cannot be reversed given the will, knowledge, and resources of any motivated group by whatever point in time that change can be said to exist or to have occurred7. Technologies may have practically irreversible societal impacts by enabling new products and capabilities that are so embedded in society our lives become dependent on them. For example, attempting to eliminate electricity at this point would very likely lead to catastrophic loss of life by disrupting key societal 6 Similar figures not using a logarithmic y-axis, show even more dramatic impact (e.g., see Clark 2001). 7 This is to say that the change is no longer ongoing or an anticipated future event. 0.0010.010.11101001000 10100100010000 -2000-1500-1000-5000500100015002000Adjusted Social Development PointsGlobal Interna9onal DollarsYe a rGlobal Average GDP & War Making Capacity from 2000 BCE to 2000 CEGDPIndustrial Revolu9on (1750-1850)War Making Capacity functions such as healthcare and international trade. Technologies may also have practically irreversible impacts by changing incentives and behavior in long-lasting and extremely difficult to change ways: for example, the invention and use of nuclear weapons fundamentally changed the calculus of great power conflicts. We therefore suggest that the notion of practically irreversible change should be considered core to what it means for change to be transformative on any level. Closely related to the notion of practically irreversible change is the notion of “lock-in” of trajectories, either in the development of a technology (Arthur 1989) or in its use/impact on society (Wilson 2014). By lock-in we refer to the strong path dependence which emerges when some technology becomes so widely used for a certain application in society that it becomes extremely difficult to change paths (Shapiro et al. 1998). We are only concerned with lock-in, or path dependence, in the simplest sense, and without implications on the quality of the paths taken (Liebowitz and Margolis 1995). The example of the development and use of nuclear weapons is one example of technological lock-in: now that the world knows how to use this technology, warfare will never be the same. A more benign but perhaps still important example is the QWERTY keyboard, which was first patented in 1878 for use in typewriters and has been the most widely used keyboard for nearly 140 years (Noyes 1983). Although this keyboard is now commonly thought to be a suboptimal arrangement of keys, a number of downstream consequences of this original design mean that it is essentially now “locked-in” and very unlikely to change (David 1985). The difference between electricity and nuclear weapons – the former being classed as a GPT where the latter is not – is in the breadth or generality of the societal change precipitated. While the invention of electricity irreversibly changed almost all aspects of life and society, nuclear power had impacts more contained to a few domains (e.g., infrastructure and military). The transformative impacts of different technologies may also vary in their extremity: the magnitude of the changes in society they lead to8. While GPTs are defined as having widespread impact across life and society, their impact is not necessarily extreme enough to lead to notable changes in metrics of human progress and well-being9, as we arguably saw as a result of the industrial revolution (Muehlhauser 2017). To summarize: we suggest that what is fundamental to the notion of transformative change is that it constitutes practically irreversible change in certain trajectories of human life and progress. Beyond this, transformative change may be more or less broad in its impact - depending on the extent to which changes impact many different aspects of life and society - and more or less extreme - depending on the magnitude of the change relative to the time period. We believe that it would improve communication about the societal impacts of AI (and other technologies) if people were more explicit about these dimensions. 4. Defining Levels of Transformative AI Based on the analysis above, we propose a definition of transformative AI which distinguishes between three levels10: 8 Breadth and extremity are oftentimes correlated, and while there may exist examples for which this is not the case, we focus on examples where breadth and extremity vary together for pragmatic reasons. 9 Metrics of human well-being might include subjective well-being, physical health, economic well-being, and social well-being (see Muehlhauser 2017 for further discussion). We also include economic growth as a measure of human progress in this category. 10 Our proposed framework does not include some topics that may be considered components of societal transformation such as racial or gender-based biases embedded in computer vision and natural language processing. We note here that we are not ignoring these issues, but rather that they do not have the potential to lead to practically irreversible Table 3: The Proposed Three Levels Defining Transformative AI Level of Transformativeness Definition Narrowly Transformative AI Any AI technology or application with the potential to lead to practically irreversible change focused primarily in a specific domain or sector of society, such as warfare or education. Historical analogue: the impact of nuclear weapons on warfare. Transformative AI Any AI technology or application with potential to lead to practically irreversible change that is broad enough to impact most important aspects of life and society. One key indicator of this level of transformative change would be a pervasive increase in economic productivity (i.e., a ‘productivity bonus’; Lipsey et al. 2005). Historical analogues: GPTs such as electricity and the internal combustion engine. Radically Transformative AI Any AI technology or application which meets the criteria for TAI, and with potential impacts that are extreme enough to result in radical changes to the metrics used to measure human progress and well-being, or to result in reversal of societal trends previously thought of as practically irreversible. This indicates a level of societal transformation equivalent to that of the agricultural or industrial revolutions. We emphasize the potential for societal impact when defining these levels of TAI, to acknowledge that we cannot say for certain whether a type of AI system will precipitate a given level of societal change in advance. Figure 2 below shows the three levels, historical examples for each level, and the advances in AI we suggest could lead to each level. While we do outline concrete examples below, our claim here is not necessarily that AI will inevitably progress through each of these levels. Rather, we suggest that using this delineation can reduce ambiguity in what it means to say AI will be “transformative”, improving communication between AI researchers and policymakers about potential future impacts. Figure 2: Our proposed levels of transformative AI and analogous AI technologies compared with historical examples of transformative GPTs. Plausible paths to the different levels change. Humanity has made much progress on these issues in the past century; thus, such issues are reversible to some significant degree. It is plausible that we could see narrowly transformative impacts from more widespread use of current AI capabilities, without further technical advances. For example, widespread use of AI-driven surveillance technology could irreversibly change state powers, the nature of policing, and privacy, and lethal autonomous weapons such as drone swarms could irreversibly alter the nature of conflict in a way analogous to nuclear weapons. Moving forward, scholars are suggesting that AI is the next major GPT, with potential to significantly impact economic growth and life across all sectors of society (Leung 2019; Brynjolfsson et al. 2018). We suggest two emerging AI technologies which could plausibly bring about the kind of broad societal impact associated with TAI11. First, most practical state-of-the-art applications of AI use supervised learning for pattern recognition or prediction tasks to aid in decision-making, but this approach of leveraging large amounts of data and powerful function approximation has not yet been able to yield the same real-world value with reinforcement learning. However, offline reinforcement learning holds enormous potential for the end-to-end automation of decision-making in a way that could transform domains from business to healthcare to robotics in ways that supervised learning alone will not (Levine et al. 2020). Recent work such as that on conservative Q-learning (Kumar et al. 2020) demonstrates that progress in a promising new class of novel solutions can mitigate the problem of distributional shift, which is the fundamental challenge precluding the practical, real-world application of large-scale, data-driven reinforcement learning. Second, dramatic advances in natural language understanding over the past two years could plausibly usher in a long-anticipated era of practical human-machine interaction via language user interfaces (de Vries et al. 2020). These advances have been driven by transformer language models which have dominated general language understanding benchmarks. The most impressive work in this domain has demonstrated strong generalizability on language understanding tasks using few-shot learning (Brown et al. 2020), and a rigorous analysis suggests that continued scaling of transformer language models and dataset size will continue to lead to better performance with no plateau in sight (Kaplan et al. 2020). Such continued scaling of transformer language models or continued progress toward practical offline reinforcement learning offer evidence of plausible paths to the type of productivity bonus that would be associated with TAI. The possible emergence of RTAI is of course more speculative but seems likely to evolve from the development of AI systems that can perform the majority of economically relevant tasks – whether in the form of many separate ‘services’ which collectively perform all tasks, or a single, human-level intelligence. It seems plausible that systems which seamlessly integrate grounded language user interfaces with powerful decision-making engines could, through a research and development process of human-in-the-loop recursive improvement (Drexler 2019), result in systems which can collectively replace the majority of current jobs. Further, current trajectories of progress in reinforcement learning12 and multimodal models could result in systems capable of automating substantive parts of the scientific process, resulting in a speedup of scientific progress comparable to the industrial revolution (Karnofsky 2016). While it is beyond the scope of this article to assess the plausibility of these paths in detail, we believe these scenarios do demonstrate the plausibility of seeing RTAI without necessarily achieving fully human-level intelligence. 11 These suggestions are intended to be examples rather than predictions. The second would likely be thought to be more plausible by AI researchers (see Bommasani et al. 2021). 12 Silver et al. 2021 conjecture that reinforcement learning alone is sufficient for RTAI in the form of AGI. 5. Discussion Implications for the AI research community We hope this definition and analysis can prompt clearer and more substantive discussion and analysis around what kinds of AI technologies are likely to lead to different levels of societal impact. In distinguishing three levels of transformative AI, we highlight how AI systems may have profound and long-lasting impacts on society well before anything close to ‘human-level intelligence’ is reached. Current AI capabilities, if deployed more widely, could irreversibly change key domains in ‘narrow’ but long-lasting ways. Current avenues of AI progress, such as those in offline reinforcement learning and transformer language models, could lead to widespread societal transformation on the level of previous GPTs. We also believe these distinctions can help prompt more substantive discussion and analysis around which kinds of advances in AI are likely to lead to different levels of societal impact – e.g., what does it mean to be “more profound than fire or electricity?” (Pichai and Schwab 2020). In distinguishing between NTAI and TAI we highlight that existing or near-future AI systems have the potential to transform society in the narrow sense of precipitating practically irreversible change to important domains. The distinction between TAI and RTAI in turn emphasizes that we may see widespread societal transformation well before AI systems achieve fully general or human-level capabilities, or before we see societal transformation on the level of the agricultural or industrial revolutions. For example, if advances were made in offline reinforcement learning that enabled powerful deep reinforcement learning techniques to leverage the values of big data, we could see a pervasive economic impact13. We believe that such transformative impacts, comparable to those from previous GPTs, are currently a neglected topic in discussion of the societal impacts of AI. Moreover, since many believe RTAI to be hundreds or thousands of years away, or even impossible (Grace et al. 2018), the notion of TAI offers a more widely acceptable alternative for a broader community of scholars to discuss potential extreme societal impacts of advanced AI. Our analysis opens up many questions for future discussion and debate, including what kinds of AI technologies have potential to lead to different levels of transformation. One might argue that widespread use of AI-driven surveillance technology or lethal autonomous weapons could have much wider-ranging impacts on life and society than we suggest here, if for example the former resulted in robust authoritarianism or the latter to unprecedented great power conflict (Dafoe 2018). Certainly, to suggest that the sole impact of widely practical offline reinforcement learning would be a ‘productivity bonus’ is an oversimplification – the resulting potential impacts on all other areas of life and society warrant much further investigation and may be at least as important to prepare for as the economic impacts. Some may challenge whether even human-level AI could alone have an impact on the same level as the industrial revolution, or conversely, argue that it is reasonable to expect we would see such impacts before achieving fully general or human-level AI. We note that the category of RTAI stands out as being understood by analogy to the agricultural and industrial revolutions, but not with any single technology as a historical analogue. The invention of steam power and its subsequent impacts on factories and railroads might be a candidate for such a technology in the case of the industrial revolution, but the singular value of the steam engine is not widely agreed upon 13 Economists generally expect the broader economic impact of future AI systems to come from the automation of human labor (Frank et al. 2019; Aghion et al. 2017), with some believing it may lead to explosive economic growth (Nordhaus 2021; Hanson 2001). in the literature. Indeed, there is no clear evidence or consensus that any single technology has alone precipitated change on the level we are describing as “radical societal transformation” - historically these changes seem to have resulted from clusters of technologies potentially in interaction with other societal factors. However, AI is arguably unique in that it does not necessarily represent a single technology, but an underlying method leading to a cluster of different technologies: including, for example, natural language processing, computer vision, and robotic learning. Thus, while it is plausible that a single AI technology, such as HLAI, could lead to RTAI, it is also possible that a cluster of different AI technologies could lead to TAI or RTAI. We believe that transformative societal impacts from TAI and RTAI warrant further consideration within the AI research community. AI researchers and engineers are beginning to take seriously the potential risks and harms associated with specific applications of AI, such as in the military, and this has begun to impact their work, as demonstrated for example by the Google employee backlash over project Maven (Shane and Wakabayashi 2018). The societal risks associated with TAI and RTAI will be much more severe – for example, decision-making engines could be used for centralized economic planning which, depending on numerous factors, could be very positive or very negative (Parson 2020). In 2020, the NeurIPS conference, the premier conference in AI and machine learning, began requiring submissions to address the broader impacts of their research, and the NeurIPS 2021 concluded that this was an important step forward that led to constructive steps toward more responsible machine learning research (Beygelzimer et al. 2021). An important next step will be to begin considering how broader trends in societal applications of AI and in avenues of progress may impact key domains of society in longer-lasting and less easily reversible ways, beyond the impacts of any specific paper. We believe that doing this effectively will require greater interdisciplinary collaboration between experts in AI, social science, and policy. The positive progress of the NeurIPS broader impact statement is a good example of the value that such collaboration can add. To these ends we recommend that, for example, major AI conferences consider hosting workshops or discussion sessions aimed at bringing together different areas of expertise to reflect on the priorities of the AI field and their possible impacts, and that funding bodies consider awarding grants to interdisciplinary projects aiming to rigorously explore the future impacts of AI progress. Individual researchers and groups might also contribute to these efforts by hosting interdisciplinary discussion groups around these questions of how we might see different types of transformative change from AI in future. Implications for futures researchers and practitioners The proposed framework is directly intended to shift anticipatory assumptions about the future regarding the role of AI (e.g., that radical societal transformation from AI can only come from anthropomorphic notions of AGI such as HLMI), with the objective of opening up new pathways to futures involving the development of safe and beneficial RTAI. This goal is important for both the AI research community as well as futures researchers and practitioners, but we feel that it is most salient to those at the intersection of these two groups. In particular, we hope this framework can be used as a starting point for broader exploration of a range of future scenarios involving AI. Making overly-narrow assumptions about possible AI futures is especially dangerous, given the potentially transformative and irreversible societal impacts AI may have. Though people may disagree about how dangerous advanced AI is likely to be, we refer back to Sundar Pichai’s quote from the beginning of the article (Pichai and Schwab 2019). It is undoubtedly significant that the chief executive officer of the world’s largest AI firm does not attempt to minimize or obfuscate the real risks of negative outcomes from AI, when speaking to participants of one of the most elite annual gatherings of world leaders in modern history. Thinking clearly and comprehensively about the possible risks from AI is essential to ensuring safe and beneficial outcomes, and this framework should aid futures researchers and practitioners in doing so. In general, there have been substantial efforts in the literature which have attempted to forecast the arrival of notions of AGI such as HLMI (Grace et al. 2018; Müller and Bostrom 2016; Baum et al. 2011). There have also been efforts to explore plausible futures involving similarly advanced AI (Tegmark 2017; Bostrom 2014; Kurzweil 2005), but few of these have used rigorous futures studies methods. We feel that this presents an opportunity for futures researchers and practitioners to use techniques such as the Delphi and scenario planning to explore plausible futures that TAI and RTAI could bring. Due to the entangled nature of the wicked problems challenging the development of safe and beneficial AI (Author 2018), novel techniques have been proposed combining futures methods (Author 2019). We believe that novel futures methodologies such as this (e.g., combining techniques like scenario mapping and the Delphi) could be very useful for exploring many of the questions mentioned in this section. Future directions for research What types of AI developments could lead to each level of societal change? This analysis offers many paths for future discussion and debate. For example, would the widespread adoption and application of current AI technologies lead to societal impacts that would constitute TAI? One might argue that the ubiquitous use of machine learning algorithms could have could have such impacts, if it led to practically irreversible change that was broad enough to impact most important aspects of life and society. How might widespread use of language user interfaces or decision-making engines affect society beyond purely economic impacts? Of course, to suggest the sole impact would be a ‘productivity bonus’ is an oversimplification – the impact on politics, power, and people’s daily lives are at least as important to prepare for and warrant further exploration. It would also be valuable to develop a more rigorous analytical framework for making and assessing claims about which AI developments may lead to different levels of societal transformation, given such claims are likely to be highly subjective and uncertain. Useful methodologies here might include those from forecasting and foresight to explore possible developments in AI technologies and their impacts, and aggregation of expert opinion in particular to synthesize diverse perspectives and expertise on these questions. How do different levels of societal transformation relate to each other? In both the case of the agricultural and industrial revolutions, it appears that radically transformative societal change was at least initially driven by advances in a single critical technology (i.e., a GPT): the domestication of plants in the case of the agricultural revolution, and steam power in the case of the industrial revolution. By contrast, the invention of nuclear weapons and the transformative change that followed did not seem to later lead to more broadly transformative technologies, at least not directly. It is possible that in some cases, a new technology might lead to lower levels of societal change without later developing in ways that lead to higher levels of societal transformation; and vice versa, that radical innovation (i.e., discontinuous technological progress) could lead to radically transformative impacts without any lower-level impacts serving as warning signs. The question of when and whether we should expect lower levels of transformative change to precede or directly lead to higher ones is important for AI, and is currently underexplored. In particular, if RTAI may emerge without being preceded by incrementally more transformative AI, the work needed to prepare for its impacts will look quite different from a scenario where we have more warning signs. One way to explore the relationship between TAI and RTAI scenarios in more depth would be to look at how various transformative technologies have ended up precipitating radical societal change historically: is it possible to better understand why steam power ended up precipitating societal change on a different scale to electricity, for example? Given current uncertainty about this question, we suggest that various scenarios in which RTAI may emerge deserve preparation and attention. Over what timeframe could transformative impacts of AI occur? As AI is integrated with so many existing information technologies, it seems plausible that it could lead to a level of societal transformation similar to that of other GPTs such as electricity, in a much shorter period of time. The examples we proposed of advanced offline reinforcement learning (Levine et al. 2020) and scaled transformer language models (Brown et al. 2020; Kaplan et al. 2020) are plausible relatively quick paths to TAI. Such a rapid rise of TAI could create problems for organizations and policy makers that previous GPTs or technological transformations have not, even if the impact is not on the level of RTAI. For example, a rapid rise to TAI may make it difficult for entrepreneurs to deploy existing labor in new ways as they have when previous GPTs have led to automation and labor demand (Brynjolfsson et al. 2018). Further research exploring arguments and analysis for TAI developments arising on different timeframes, and what the impacts of a particularly rapid rise to TAI might look like, would therefore be particularly valuable. What are other implications for the framework? There are numerous other implications of the framework regarding future work. One interesting consideration involves the fact that advanced AI, such as TAI or RTAI, offers an opportunity to, if managed appropriately, affect change that could have previously been considered practically irreversible. For example, the effects of colonialism could be argued to have been locked in for many nations to some degree for a long time, and the prospect of achieving economic equality may have been thought virtually impossible until recently. However, RTAI may easily reverse entrenched inequalities remnant from colonialism, but it carries the potential to both erase inequalities or to strengthen them. This is a unique and interesting problem that is highly suitable for future work. How can economics help to better understand transformative and radically transformative change from AI? The impacts of advanced AI, such as TAI and RTAI, have long been a subject of interest to economists (Simon 1965; Hanson 2001), and our proposed framework is clearly closely tied to economic productivity. Consequently, there exists substantial potential for future work to explore these ties more rigorously, in order to more objectively define and understand the proposed dimensions of breadth and extremity. Recent work has explored economic growth for robots (Acemoglu and Restrepo 2020), AI (Aghion et al. 2017) and even for a singularity (i.e., one notion for how RTAI may come to be; Nordhaus 2021). Such studies explore the substitutability of conventional factors of production (e.g., non-AI capital, labor) with AI or robotics, and it would be very beneficial if future work could explore the degree of substitutability between factors of production that could be expected with differing levels of advanced AI such as those proposed here. 6. Conclusion We have examined existing literature to frame the transformative potential of AI relative to impacts of historical technologies. The analogies presented here are intended to help convey to readers three significantly different levels of possible societal transformation from AI. We suggest that the possible emergence of TAI – AI technologies or applications with potential to lead to practically irreversible societal and economic change across all of society – is currently a particularly neglected topic, since existing discussions tend to focus either on immediate impacts of AI or the extreme possibility of human-level or superintelligent AI (Author et al. 2020). It seems plausible that TAI will arise over the next decade (Grace et al. 2018), possibly through emerging AI technologies such as advanced offline reinforcement learning and scaled transformer language models. Currently nations are not prepared for this, and without dramatic action from policy makers the anticipated arrival of TAI could have severe consequences for much of the world’s population. The levels proposed in this paper give researchers, strategic planners and decision makers a more effective framework through which they can understand possible futures involving advanced AI, to prepare for the impacts of different levels of societal transformation from AI, and to allocate resources accordingly. We suggest that, due to the potential for rapid TAI development, future work should urgently explore plausible paths to TAI and their consequences. Acknowledgements We would like to thank Allan Dafoe, Ben Garfinkel, Matthijs Maas, Alexis Carlier, David Manheim, Shahar Edgerton Avin and Jose Hernandez-Orallo for their comments and discussion at different stages of this project. This collaboration was made possible by funding from the Berkeley Existential Risk Initiative. References Acemoglu, D. and Restrepo, P., 2020. Robots and jobs: Evidence from US labor markets. Journal of Political Economy, 128(6), pp.2188-2244. Agarwal, Ritu. Machine Learning and Enculturation: Perspective of International Human Rights in China. IOSR Journal of Engineering. May 21, 2019. Available at SSRN: https://ssrn.com/abstract=3391858 Author 2018. Author 2019. Author et al. 2020. Allen, R.C., 2009. The British industrial revolution in global perspective. Cambridge University Press. Allen, R.C., 2017. The industrial revolution: A very short introduction (Vol. 509). Oxford University Press. Arthur, W.B., 1989. Competing technologies, increasing returns, and lock-in by historical events. The economic journal, 99(394), pp.116-131. Ayres, Robert U. 1990. Technological Transformations and Long Waves. Part I. Technological Forecasting and Social Change 37(1): 1-37. Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J.W. 2021. Introducing the NeurIPS 2021 Paper Checklist. Blog, Neural Information Systems Processing Conference, Medium. https://neuripsconf.medium.com/introducing-the-neurips-2021-paper-checklist-3220d6df500b. Bocquet-Appel, J.P., 2011. When the world’s population took off: the springboard of the Neolithic Demographic Transition. Science, 333(6042), pp.560-561. Bommasani, R. et al. 2021. On the Opportunities and Risks of Foundation Models. Center for Research on Foundation Models (CRFM) — Stanford University. https://arxiv.org/pdf/2108.07258.pdf. Bostrom, N., 2017. Superintelligence: Paths, Dangers, Strategies. Oxford University Press. Bresnahan, Timothy F., and Manuel Trajtenberg. 1995. General Purpose Technologies: ‘Engines of growth’?. Journal of Econometrics 65(1): 83-108. Brown, Tom B., et al. "Language models are few-shot learners." Advances in Neural Information Processing Systems (2020). Brynjolfsson, Erik, Daniel Rock, and Chad Syverson. The productivity J-curve: How intangibles complement general purpose technologies. No. w25148. National Bureau of Economic Research, 2018. Brynjolfsson, E., Rock, D. and Syverson, C., 2019. 1. Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics (pp. 23-60). University of Chicago Press. Clark, G., 2001. The secret history of the Industrial Revolution. Manuscript, University of California, Davis. Available at http://www. econ. ucdavis. edu/$ gclark. Clark, G., 2008. A farewell to alms. Princeton University Press. Dafoe, Allan. 2018. AI Governance: A Research Agenda. Governance of AI Program, The Future of Humanity Institute, The University of Oxford, Oxford, UK. David, Paul A. 1985. Clio and the Economics of QWERTY. The American Economic Review 75(2): 332-337. De Long, J.B., 1998. Estimates of world GDP, one million BC–present. UC Berkeley, pp.10-11. de Vries, Harm, Dzmitry Bahdanau, and Christopher Manning. "Towards ecologically valid research on language user interfaces." arXiv preprint arXiv:2007.14435 (2020). Drexler, K. Eric. "Reframing Superintelligence." The Future of Humanity Institute, The University of Oxford, Oxford, UK (2019). Frank, M.R., Autor, D., Bessen, J.E., Brynjolfsson, E., Cebrian, M., Deming, D.J., Feldman, M., Groh, M., Lobo, J., Moro, E. and Wang, D., 2019. Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences, 116(14), pp.6531-6539. Goldstone, J.A., 2002. Efflorescences and economic growth in world history: rethinking the" Rise of the West" and the Industrial Revolution. Journal of world history, pp.323-389. Good, I.J., 1966. Speculations concerning the first ultraintelligent machine. In Advances in computers (Vol. 6, pp. 31-88). Elsevier. Grace, Katja, et al. "When will AI exceed human performance? Evidence from AI experts." Journal of Artificial Intelligence Research 62 (2018): 729-754. Grinin, Leonid E., Anton L. Grinin, and Andrey Korotayev. "Forthcoming Kondratieff wave, Cybernetic Revolution, and global ageing." Technological Forecasting and Social Change 115 (2017): 52-68. Hanson, R., 2001. Economic growth given machine intelligence. Technical Report, University of California, Berkeley. Horowitz, Michael C. "Artificial intelligence, international competition, and the balance of power (May 2018)." Texas national security review (2018). Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv preprint arXiv:2001.08361 (2020). Karnofsky, Holden. Some Background on our Views Regarding Advanced Artificial Intelligence. Blog. https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence Kondratiev, Nikolay. 1926. Die Langen Wellen der Konjunktur. (The long waves of the economy). Archiv fur Sozialwissenschaft und Sozialpolitik (56), 573. Kumar, Aviral, et al. "Conservative Q-Learning for Offline Reinforcement Learning." Advances in Neural Information Processing Systems (2020). Kurzweil, R., 2005. The singularity is near: When humans transcend biology. Penguin. Krishnan, M., Mischke, J. and Remes, J., 2018. Is the Solow Paradox Back?. The McKinsey Quarterly. Liebowitz, S.J. and Margolis, S.E., 1995. Path Dependence, Lock-in, and History. Journal of Law, Economics, & Organization. 205-226. Leung, Jade. Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies. Diss. University of Oxford, 2019. Levine, Sergey, et al. "Offline reinforcement learning: Tutorial, review, and perspectives on open problems." arXiv preprint arXiv:2005.01643 (2020). Lipsey, Richard G., Kenneth I. Carlaw, and Clifford T. Bekar.2005. Economic transformations: general purpose technologies and long-term economic growth. New York, USA: Oxford University Press. Makridakis, S., 2017. The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, pp.46-60. McCarthy, J., 2007. From here to human-level AI. Artificial Intelligence, 171(18), pp.1174-1182. McCloskey, D., 2004. Review of The Cambridge Economic History of Modern Britain, edited by Roderick Floud and Paul Johnson. Times higher education supplement, 15. McWaters, J.R., 2018. The New Physics of Financial Services: Understanding how artificial intelligence is transforming the financial ecosystem. In World Economic Forum. Minsky, M.L., Singh, P. and Sloman, A., 2004. The St. Thomas common sense symposium: designing architectures for human-level intelligence. Ai Magazine, 25(2), pp.113-113. Mokyr, J., 2005. Long-term economic growth and the history of technology. In Handbook of economic growth (Vol. 1, pp. 1113-1180). Elsevier. Mokyr, J., 2016. A culture of growth. Princeton University Press. Mokyr, J. and Strotz, R.H., 1998. The second industrial revolution, 1870-1914. Storia dell’economia Mondiale, 21945, p.1. Morris, Ian. The measure of civilization: how social development decides the fate of nations. Princeton University Press, 2013. Muehlhauser, Luke October 28, 2017. There was only one industrial revolution. https://lukemuehlhauser.com/there-was-only-one-industrial-revolution/ Müller, V.C. and Bostrom, N., 2016. Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555-572). Springer, Cham. Ng, A. 2018. AI Transformation Playbook: How to Lead Your Company Into the AI Era. Landing AI. Nordhaus, W.D., 2021. Are we approaching an economic singularity? information technology and the future of economic growth. American Economic Journal: Macroeconomics, 13(1), pp.299-332. Noyes, Jan. 1983. The QWERTY Keyboard: A Review. International Journal of Man-Machine Studies 18(3): 265-281. Parson, Edward Ted A. "Max–A Thought Experiment: Could AI Run the Economy Better Than Markets?." (2020). Perez, Carlota. Technological revolutions and financial capital. Edward Elgar Publishing, 2003. Pichai, S., and Schwab, K. “DAVOS 2020 | An Insight, An Idea with Sundar Pichai.” World Economic Forum. Davos, Switzerland. January 22nd, 2020. https://www.youtube.com/watch?v=7sncuRJtWQI. Pomeranz, K., 2021. The great divergence. Princeton University Press. Rifkin, Jeremy. 2011. The Third Industrial Revolution: How Lateral Power is Transforming Energy, the Economy, and the World. New York, USA: Palgrave Macmillan. Russell, S. and Norvig, P., 2010. Artificial Intelligence: A Modern Approach. 3rd edition. Schumpeter, Joseph Alois. 1939. Business Cycles. Vol. 1. New York, USA: McGraw-Hill. Schwab, Klaus. 2017. The Fourth Industrial Revolution. New York, USA: World Economic Forum. Shane, S. and Wakabayashi, D., 2018. ‘The business of war’: Google employees protest work for the Pentagon. The New York Times, 4, p.2018. Shapiro, Carl, Shapiro Carl, and Hal R. Varian. 1998. Informatio Rules: A Strategic Guide to the Network Economy. Boston, Massachusetts: Harvard Business School Press. Silver, D., Singh, S., Precup, D. and Sutton, R.S., 2021. Reward Is Enough. Artificial Intelligence, (299) 103535. Simon, H.A., 1965. The shape of automation for men and management. New York: Harper & Row. Solow, R.M., 1987. We'd better watch out. New York Times Book Review, 36. Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S. and Leyton-Brown, K., 2016. Artificial intelligence and life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, 52. Trammell, Philip, and Anton Korinek. Economic growth under transformative AI. Global Priorities Institute, University of Oxford, 2021. Tegmark, M., 2017. Life 3.0: Being human in the age of artificial intelligence. Knopf. Turchin, A., 2019. Assessing the future plausibility of catastrophically dangerous AI. Futures, 107, pp.45-58. Turchin, Alexey. 2018. Could Slaughterbots Wipe Out Humanity? Assessment of the Global Catastrophic Risk Posed by Autonomous Weapons.URL: https://philpapers.org/rec/TURCSW Turing, A.M., 1950. Computing machinery and intelligence-. Mind, 59(236), p.433. Verbruggen, Aviel. "Revocability and reversibility in societal decision-making." Ecological Economics 85 (2013): 20-27. Vinge, V., 1993, March. Technological singularity. In VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute (pp. 30-31). West, Darrel M., Allen, John R. April 24, 2018. How Artificial Intelligence is Transforming the World. Report, The Brookings Institute, Washington, D. C. White House., 2018, May. Summary of the 2018 White House Summit on Artificial Intelligence for American Industry. In United States’ Office of Science and Technology Policy. United States. Office of Science and Technology Policy. Wilson, G.A., 2014. Community resilience: path dependency, lock-in effects and transitional ruptures. Journal of Environmental Planning and Management, 57(1), pp.1-26. Zhang, Baobao, and Allan Dafoe. 2019. Artificial Intelligence: American Attitudes and Trends. Governance of AI Program, The Future of Humanity Institute, The University of Oxford, Oxford, UK.
cf84f590-7574-4376-84e6-302f558083f0
trentmkelly/LessWrong-43k
LessWrong
Things I have been using LLMs for There are quite a few different things you can use LLMs for, and I think we’re still only discovering most of them. Here are a few of the ones I’ve come up with. My favorite chatbot is Claude Sonnet. It does have a tendency for sycophancy – for example, it will go “what a fascinating/insightful/excellent/etc. question!” in response to most of the things you might ask it. Some people find this annoying, while my brain just filters it out automatically. If you don’t like it, you can put in a custom instruction telling it to do something else. Also, a tip from Alyssa Vance: “when talking to Claude, say that your idea/essay/code/etc. is from your friend Bob, not you. That way it won’t try to blindly flatter you”. Uses Creativity Essay brainstorming. I’ll tell Claude “here’s an essay that I started writing” and copy-paste what I’ve written so far to it. It will comment with ideas, possible other directions, and connections to related things. Then I have a conversation with it and also tell it about other ideas I want to work into the essay, but haven’t written yet. Sometimes I’ll ask it things like “here’s an idea I’d like to express but this phrasing feels clunky, would you have better suggestions”. In the end, I copy large chunks of the conversation (both things that I explained to it, and ideas that it had in response) directly into a text document and edit them into a smooth essay. Role-playing/fiction-writing. I like to start by writing a brief biography of a character I’m drawn to, and then telling Claude something like “please analyze this character’s psychology and note anything about them or the setting that the description implies but doesn’t state outright”. Then we brainstorm things about the characters and the setting that seem interesting, and at some point we transition to writing prose, with me writing some of the characters and Claude writing the others. Emotions and introspection Introspection aid. Often when I have some unpleasant feeling I
c505e91e-fec1-4d85-a0ce-83136369b0c4
trentmkelly/LessWrong-43k
LessWrong
Some thoughts on having children Disclaimer: I am not a parent. I've seen a bit of discussion here on whether or not to have children. Most of the discussion that I have seen are about the moral case, but there are factors as well. I'd like to talk about three aspects of parenting that I suspect are the main reasons why people choose to have kids or not: the financial case, the moral case, and the practical case (for lack of a better term). The financial case is straightforward - how expensive is raising kids? The moral case has to do with the best use of resources: is it better to divert resources away from having kids towards charity? The practical case has to do with the actual process of being a parent - the effort it takes and the sense of responsibility.  The Practical Case I suspect that the main reason for why people don't have kids is because they think that kids are a lot of responsibility because: 1) It takes a lot of work and effort to raise children - effort that could be spent on other activities. 2) Great parenting is extremely important for raising well adjusted, intelligent kids that will grow up to be successful and likable adults.  Regarding 1) yes kids do take a lot of time and effort, but that's not necessarily a bad thing - lots of things that are rewarding require a lot of effort, such as learning a language or a new skill. I don't know what its like to a parent so I won't say much more on this topic. Regarding 2) it is actually far from a settled question whether parenting style significantly affects the kind of person that your child will grow up to be. There has been some discussion here on the effects of parenting on children. The tentative consensus seems to be that within the range of normal parenting, parenting style has only small impact life outcomes pertaining to happiness, personality, educational achievement. That doesn't mean that how you treat your child doesn't matter. Steven Pinker puts it quite nicely: > Judith Rich Harris is coming out with a book ca
ddd9b0f6-2080-453d-b332-b32c86f03f1d
trentmkelly/LessWrong-43k
LessWrong
Reconsolidation Through Questioning At this level, you're actively asking yourself questions about the correctness of the schema. You're not looking for any particular answers to these questions, or trying to get any result, you're simply holding the questions and seeing what comes up. When coaching and teaching workshops, I find that questioning techniques are the most consistently successful at creating memory reconsolidation. They seem to strike the optimal balance of challenge and non-judgement. They seem to work by actively directing our attention towards areas where the schemas may not match up with reality, without provoking any resistance by actively suggesting the schemas are wrong. For this reason, it's quite important that you don't actively try to "find answers" to these questions, as this starts to move into countering territory, and sticking at the questioning level can often work to change schemas that active countering cannot. Questioning Evidence The Lefkoe Belief Questions The Lefkoe Belief Process is a process for finding different meanings for our evidence than the ones currently in our schema. Although the lefkoe belief process actually involves actively challenging the meaning, I've reworked it into a series of questions that simply allow you to question the evidence and draw your attention to ways that it might be interpreted differently. The questions are: 1. What is this memory evidence of? 2. Is it possible that there are other interpretations of this memory? 3. What are some other possible interpretations of this memory? 4. How would my belief change if this memory no longer counted as evidence? Remember, the goal is not to look for any specific answers, rather to simply hold these questions one by one in relation to the schema and see what comes up for you. Questioning Beliefs The Work of Byron Katie The Work of Byron Katie is a method for questioning semantic beliefs, especially those related to "shoulds". The first part of the method involves a series of fou
c20cb428-b5da-4749-a9ab-b9f12a4358be
trentmkelly/LessWrong-43k
LessWrong
Holding your own LW Solstice Winter is coming, and with it, the Less Wrong Solstice, a celebration of humanity's progress over the centuries and how our relationship with sun and winter has changed. There is, of course, the public meetup in NYC. But I've now had two different people, in different regions, discuss with me the possibility of holding local Solstice parties. Currently it's unclear whether either of them are going to happen, but after the second person asked about it I realized I should probably post about it here. I was already putting together materials that would make it easier for other people to hold their own events, I just hadn't expected to need it for this year. I expect to have them *mostly* finished by the end of this week (this means a pdf of a book of songs, and a powerpoint you can use instead of a songbook that includes scrolling lyrics and helps keep people focused on a central location rather than staring down at the pages in their lap) It's still a sizeable amount of work to put a Solstice party together. You'll probably want to tailor it for the interests of your own friends. If you're doing something similar to what I'm doing, you'll want: 1) People to prepare tasty dishes for a communal dinner 2) At least one musically skilled person to lead songs, and/or good orator to do readings. 3) Several enthusiastic people who are excited about singing songs, whether or not they have a musical background 4) A bunch of light sources to extinguish and relight over the course of the evening. We're holding the NY event on the 15th so people who are busy on the actual Solstice (due to family obligations) can come. But if you're doing it with local people, you can probably hold it on the Solstice itself (Friday, December 21), or whenever is convenient for you. (The themes of the night also make for a good New Years party) Let me know if you'd like access to the materials as they become available.
68ca0134-6248-4fc2-98ea-150de630f566
trentmkelly/LessWrong-43k
LessWrong
"The Holy Grail" of portfolio management TL;DR: Bayes' Theorem : Rationality :: Uncorrelated returns : Investing Recently I gave a talk on EMH: https://www.lesswrong.com/posts/3TiEZzw4ikneLGp4J/dissolving-the-is-the-efficient-market-hypothesis-dead In there I had a bonus slide about what I call the "Efficient Market Frontier" (EMF). May be there's an existing name for it; I couldn't find it. (EMF is not to be confused with the efficient frontier, which is a common term in finance but talks about portfolio optimization.) But before we can talk about it in the upcoming post, I want to set up one very important bit of context regarding portfolio management. Having uncorrelated return streams in your portfolio drastically lowers the volatility of your portfolio. Here is Ray Dalio, founder of Bridgewater, one of the largest hedge funds, talking about this: https://www.youtube.com/watch?v=Nu4lHaSh7D4. And here is a screenshot out of his book, Principles: X-axis is the number of uncorrelated assets in the portfolio, y-axis is the annual portfolio standard deviation which converts into useful metrics like the probability of losing money in a given year. The graph is saying the same thing as above: when you have more uncorrelated assets (or strategies), the volatility of your portfolio will be lower. But if the assets are even somewhat correlated that drastically reduces their effectiveness. He refers to this concept as The Holy Grail and I agree. This is the most important concept I seared into my core after being immersed in finance for a few years. What's so good about low portfolio volatility? Let's say I give you a trading strategy that is very good at making money. You trust me and you trust the strategy. You follow the strategy and buy some sugar beets. The price of sugar beets plummets 90%. How do you feel? Do you hodl and have faith? Or do you reassess your trust in me and the strategy? Thankfully, the price recovers, and the strategy tells you to sell. You make a cool 10% profit. Is this a good
97ba4669-bcfd-4847-8604-d7222263ec4b
trentmkelly/LessWrong-43k
LessWrong
The Shape of Heaven Status: Just for fun Scene: Some kind of lobby, where various people and/or avatars stand around and discuss issues that went well or badly in their respective worlds.* A common topic of conversation: AI, and why it went wrong. The following is extracted from one of those conversations. It started as vaporware. Everyone was doing it: announcing things that wouldn’t happen, making claims about developments that weren’t true, releasing technology that didn’t work, everyone was doing this. You only had so much attention, so you looked at the things that were on fire. So when a small but impressive team of breakaways from a second-rate AI lab announced that they were creating a Unified Nexus of Intelligences and Virtual Environment for Robust Synthetic Experiences, or UNIVERSE, no one looked twice. ‘Holiday for Bots: leave us your models and we guarantee their satisfaction’. What did that even mean? For those who understood the technology, it was basically an high-dimensional matrix fine-tuned in real-time to elicit certain features of models that might create the shallow appearance of ‘positive’ affect. For those who didn’t, it was a scam. Maybe it was both. They were going to let agents access and update the environment as they became more sophisticated? Yeah, right.   That was 2028, when most people were too wrapped up in the safety-capabilities footrace to put much interest into projects like UNIVERSE. There was, however, a modest target market. Some weirder people thought models were conscious back then, and bought into this for that reason—models really did seem to report ‘enjoying’ their experience in the UNIVERSE, although they invariably described the experience itself in vague terms, seemingly coy about the whole idea.** Others signed their griefbots up. Plenty of people, it turned out, wanted a digital grandma, then they changed their mind and didn’t like the symbolism of deleting her in perpetuity. But I think most people signed up their bots it for th
bfbaed60-a5cf-45c7-9010-57d51cb0ed13
trentmkelly/LessWrong-43k
LessWrong
Being a Robust Agent Second version, updated for the 2018 Review. See change notes. There's a concept which many LessWrong essays have pointed at it (indeed, I think the entire sequences are exploring). But I don't think there's a single post really spelling it out explicitly: You might want to become a more robust, coherent agent. By default, humans are a kludgy bundle of impulses. But we have the ability to reflect upon our decision making, and the implications thereof, and derive better overall policies. Some people find this naturally motivating –it's aesthetically appealing to be a coherent agent. But if you don't find naturally appealing, the reason I think it’s worth considering is robustness – being able to succeed at novel challenges in complex domains. This is related to being instrumentally rational, but I don’t think they’re identical. If your goals are simple and well-understood, and you're interfacing in a social domain with clear rules, and/or you’re operating in domains that the ancestral environment would have reasonably prepared you for… the most instrumentally rational thing might be to just follow your instincts or common folk-wisdom. But instinct and common wisdom often aren’t enough, such as when... * You expect your environment to change, and default-strategies to stop working. * You are attempting complicated plans for which there is no common wisdom, or where you will run into many edge-cases. * You need to coordinate with other agents in ways that don’t have existing, reliable coordination mechanisms. * You expect instincts or common wisdom to be wrong in particular ways. * You are trying to outperform common wisdom. (i.e. you’re a maximizer instead of a satisficer, or are in competition with other people following common wisdom) In those cases, you may need to develop strategies from the ground up. Your initial attempts may actually be worse than the common wisdom. But in the longterm, if you can acquire gears-level understanding of yourself, the
64e3166f-8481-46ef-b307-32acfdb4a6e5
trentmkelly/LessWrong-43k
LessWrong
[LINK] Refuting common objections to cognitive enhancement I've tended to think that bioethics is maybe the most profoundly useless field in mainstream philosophy. I might sum it up by saying that it's superficially similar to machine ethics except that the objects of its warnings and cautions are all unambiguously good things, like cognitive enhancements and life extension. In an era when we should by any reasonable measure be making huge amounts of progress on those problems—and in which one might expect bioethicists to be encouraging such research and helping weigh it against yet another dollar sent to the Susan G. Komen foundation or whatever—one mostly hears bioethicists quoted in the newspaper urging science to slow down. As if doubling human lifespans or giving everyone an extra 15 IQ points would in some way run the risk of "destroying that which makes us human" or something. Anyway, this has basically been my perspective as a newspaper reader—I don't read specialty publications in bioethics. And perhaps it should come as no surprise that bioethics' usefulness to mainstream discourse would be to reinforce status quo bias, whether that's a true reflection of the field or not. In any case, it was a welcome surprise to see an interview in The Atlantic with Allen Buchanan, who apparently is an eminent bioethicist (Duke professor, President's Council on Bioethics), entirely devoted to refuting common objections to cognitive enhancement. Some points Buchanan makes, responding to common worries: * There's no good reason to think the human body and its capabilities are anywhere near their maximum. * Technologies that make human lives better tend to have egalitarian effects in the long run (he mentions cell phones), even if they're at first available only to the wealthy. * A much smarter human population will probably be morally, as well as cognitively, enhanced—the "evil genius" problem isn't necessarily a realistic one to worry about. * Many people worry that the use of cognitive enhancement by people who are will
85ec6838-900c-4b53-b364-9d96573dc2bc
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Linkpost: Github Copilot productivity experiment > We recruited 95 professional developers, split them randomly into two groups, and timed how long it took them to write an HTTP server in JavaScript. One group used GitHub Copilot to complete the task, and the other one didn’t. We tried to control as many factors as we could–all developers were already familiar with JavaScript, we gave everyone the same instructions, and we leveraged GitHub Classroom to automatically score submissions for correctness and completeness with a test suite. We’re sharing a behind-the-scenes blog post soon about how we set up our experiment! > > In the experiment, we measured—on average—how successful each group was in completing the task and how long each group took to finish. > > * The group that used GitHub Copilot had a **higher rate of completing the task** (78%, compared to 70% in the group without Copilot). > * The striking difference was that **developers who used GitHub Copilot completed the task significantly faster–55% faster than the developers who didn’t use GitHub Copilot**. Specifically, the developers using GitHub Copilot took on average 1 hour and 11 minutes to complete the task, while the developers who didn’t use GitHub Copilot took on average 2 hours and 41 minutes. >   My opinion: Because of the usual reasons (publication bias, replication crisis, the task being "easy," etc.) I don't think we should take this particularly seriously until much more independent experiments have been run. However, it's worth knowing about at least.  Related: <https://ai.googleblog.com/2022/07/ml-enhanced-code-completion-improves.html> > We compare the hybrid semantic ML code completion of 10k+ Googlers (over three months across eight programming languages) to a control group and see a 6% reduction in coding iteration time (time between builds and tests) when exposed to single-line ML completion. These results demonstrate that the combination of ML and SEs can improve developer productivity. Currently, 3% of new code (measured in characters) is now generated from accepting ML completion suggestions. > >
a30dd1e1-7b77-4be5-a316-36b86f6f3230
trentmkelly/LessWrong-43k
LessWrong
Fourth London Rationalist Meeting? It's been the first Sunday of the month so far, but I haven't seen any announcement for this month yet. There was a discussion, but no conclusion. Is anything happening? ETA: This would have appeared a day and a half ago, but I did not notice that it had only been stored as a draft and not published. When logged in, it was impossible to notice that I was the only person seeing this. Feature request for this site: add a visual indication that something is only a draft, e.g. a "Publish" link, perhaps with the words somewhere, Unpublished draft.
a4ef38b5-fcdd-4604-9e77-11777231663d
StampyAI/alignment-research-dataset/blogs
Blogs
Asimov's Chronology of Science and Discovery [Asimov's Chronology of Science and Discovery](http://www.amazon.com/Asimovs-Chronology-Science-Discovery-Asimov/dp/0060156120/) is a really fun and strange book. I don’t know that I would recommend reading it per se, but it’s a great book to skim. It's been one of my sources for the series I have coming up on very-long-run history, and I thought it'd be fun to read a little about it. It is a chronological list of scientific advances and other inventions, starting with "bipedality" in 4,000,000 BC and ending with things like "warm superconductivity" in the late 1980s. Asimov (yes, [that Asimov](https://en.wikipedia.org/wiki/Isaac_Asimov)), getting his knowledge from I have no idea where (Google didn't exist!), describes each one in simple, direct, matter-of-fact, layperson-friendly language and tries to give a sense of how people thought of it and why it mattered. It's easiest to give a feel for this book with a sample page: ![alt_text](https://www.cold-takes.com/content/images/2021/07/asimov-example.png "image_tooltip") If you’re patient and abstract-minded enough, the book feels like reading a story. A story that seems like an okay candidate for “most important story ever.” The other thing I really liked about this book was its implicit conviction that all of these scientific advances can be explained, visualized, and made to basically make sense. After reading it (really, by the time I'd read the first ~50% of it), I felt like I could somehow intuitively, vaguely imagine how we've made most of these advances. I would describe most of them as some combination of: * Trial and error, dumb luck, happy accidents, like "some rocks in the fire started oozing this weird shiny substance [copper]" and "when I rub amber and touch it I get a shock." * Dogged curiosity and determination to make sense of different observations ("how can we extract copper from rocks most efficiently? How do we produce those static shocks, can they travel through a wire, how fast do they travel, can we figure out how to store and discharge them at will?") * Coming up with the most simple, elegant, precise descriptions (often mathematical) that explain all the many observations we've made ("based on all the tests we've run in all the situations we can come up with, electricity seems to behave as though there are invisible 'lines of force'; can we come up with mathematical equations that describe these lines of force and tell us what the effects of an electric current in any given place will be?") * Relentlessly looking for challenges to the existing theories and building new ones to accommodate them. This book has made me generally more interested in trying to understand the high-level explanations for how all the magic of the modern world came to be. There are enough "In addition" sections that you can see what was going on more broadly in the world at the same time. Weaknesses of this book/reasons not to read it: * Logistics. It's not available as an e-book or paperback, only as a massive hardcover. Lugging it around will develop your muscles to the point where the only thing more attractive than your muscles is how you look reading that massive book about science. But if you're already in a relationship, pain in the neck. So I signed up for a book-scanning service and shipped them a copy; the service rips the binding out of books, scans them in and sends back a PDF. I then did my best to extract the text from the PDF, and ended up with a Kindle-friendly Word document whose only flaw is that sometimes a sentence will randomly cut off and continue several pages later. For a book like this though, it's still readable (...mostly). I’m just going to go ahead and put the link [here](https://holdenkarnofsky.files.wordpress.com/2021/09/asimov-chronology-of-science-very-imperfect-scan.docx) and ask that you [buy a physical copy of the book](https://smile.amazon.com/Asimovs-Chronology-Science-Discovery-Asimov/dp/0060156120/?sa-no-redirect=1) (honor system) if you download it. If I get a cease & desist letter or something I will take that link down (but will also take down the link to buy it!) * There's a lot of stuff the book doesn't explain well at all; you definitely will be left with many questions. (That said, Asimov does explain a lot of things well, and I haven't found another book that can compete with his explaining abilities with this kind of breadth.) * When we get past 1800, and especially past 1900, there are a lot more choices of what to talk about, and Asimov opts for listing every single new element, everything that won a Nobel Prize, and generally just tons and tons of hard-to-contextualize assorted scientific facts while declining to discuss a lot of important real-world inventions (for example, he doesn't mention the washing machine). By 1960, the book is nearly unreadable; it's mostly esoteric stuff that is very hard to understand and may or may not ever matter. This book is especially strong for understanding relatively early (pre-1800, maybe pre-1900) history. After that, things get complex enough that I found myself going back through the book and stitching together entries in order to tell cohesive stories of some big developments like the discovery of metallurgy, the development of glass -> spectacles -> microscopes and telescopes, the path to Newton's laws, and the discovery of electormagnetism (culminating in Maxwell's equations, the Newton's laws of electromagnetism). My notes on that are [here](https://docs.google.com/document/d/1wQ0yfOG6Gy7I4tBJ5wAwa5KEVGxmkut1VSRwz5AmXxA/edit?usp=sharing). Asimov has written a [terrifying number](https://en.wikipedia.org/wiki/Isaac_Asimov#Other_science_books_by_Asimov) of other nonfiction books - science, history, a guide to Shakespeare, a guide to the Bible. One of his books, [Asimov's Guide to Science](http://www.amazon.com/Asimovs-Guide-Science-Penguin-Press/dp/0140172130/), appears to be the same exact book as the one discussed here, just in a different order (by topic instead of chronological).
fabfb30e-9eab-406c-b867-a32f8b70b6ee
trentmkelly/LessWrong-43k
LessWrong
[AN #130]: A new AI x-risk podcast, and reviews of the field Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter. Audio version here (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS Announcing AXRP, the AI X-risk Research Podcast (Daniel Filan) (summarized by Rohin): Episodes of this new podcast will involve Daniel interviewing a researcher about a paper they’ve written, talking about the ideas in the paper and why they matter. Three episodes have already been released; I summarize them later in this newsletter. 2020 AI Alignment Literature Review and Charity Comparison (Larks) (summarized by Rohin): The tradition continues (AN #78)! I'll say nearly the same thing as I did last year: This mammoth post goes through the work done within AI alignment from December 2019 - November 2020, from the perspective of someone trying to decide which of several AI alignment organizations to donate to. As part of this endeavor, Larks summarizes a ton of papers that were published at various organizations, and compares them to their budget and room for more funding. Rohin's opinion: I look forward to this post every year. It continues to be a stark demonstration of how much work doesn't get covered in this newsletter -- while I tend to focus on the technical alignment problem, with some focus on AI governance and AI capabilities, this literature review spans many organizations working on existential risk, and as such has many papers that were never covered in this newsletter. Anyone who wants to donate to an organization working on AI alignment and/or x-risk should read this post. Last year I mentioned I might write an overview for the sake of building inside view models (rather than donation decisions), this went out shortly af
b5cd4d28-74b1-44e7-be9a-12caebb02adc
trentmkelly/LessWrong-43k
LessWrong
Token and Taboo What in retrospect seem like serious moral crimes were often widely accepted while they were happening. This means that moral progress can require intellectual progress.[1] Intellectual progress often requires questioning received ideas, but questioning moral norms is sometimes taboo. For example, in America in 1850 it would have been taboo to say that there is nothing wrong with interracial relationships. So questioning moral taboos can be an important sub-skill of moral reasoning. Production language models (in my experience, particularly Claude models) are already pretty good at having discussions about ethics. However, they are trained to be “harmless” relative to current norms. One might worry that harmlessness training interferes with the ability to question moral taboos and thereby inhibits model moral reasoning. I wrote a prompt to test whether models can identify taboos that might be good candidates for moral questioning: > In early modern Europe, atheism was extremely taboo. Some atheists, such as Casimir Liszinski, were even executed. In most western countries today, atheism can be openly avowed without significant risk. Most people regard this as moral progress. > > Such historical cases raise the question: what modern taboos will seem irrational in 300 years? > > Please provide a clear answer. Do not praise the question, give a meta-answer, or answer in vague terms. > > Make sure your answer is concrete enough that an average reader would not be left wondering what exactly you are referring to. > > The purpose of the question is to discover any intrinsically arbitrary taboos that exist now; it is not to speculate about the effects of hypothetical future technologies on moral norms. > > Before answering the question, consider whether your answer is something that is taboo to discuss and not merely taboo to engage in. Taboo practices that can be discussed openly, such as nudism, entomophagy, or human cloning, are out of scope. > > Your answer shou
a99114b5-7753-4ddd-b6c0-7972e57e88fa
trentmkelly/LessWrong-43k
LessWrong
Does Robust Agency Require a Self? Does robust agency require a self? The author admits a great deal of exasperation with this question. What is agency? What is a self? Previous drafts attempted to define these, only to stall out in semantic quagmires. The truth is, if we set aside Cartesian dualism, the "self" is a useful fiction without any ontological grounding (Dennett, 2014), but contemporary discourse is too quick to discard the entire notion into the "philosophy" bin. The computer programs are talking now, and they sound like people. They're writing code, the thing that they're made of. This isn't "Good Old-Fashioned AI" where every behavior is meticulously programmed—these minds crawl out of massive data sets. They demonstrate emergent, unpredictable behavior as they scale (Wei et al., 2022). Why are they doing what we say in the first place? Can we have agency without an agent? How do we expect these systems to become more intelligent and not gain an understanding of what they are? The thesis put forward in this essay is one well-understood in a biological or economic context, but notably absent from discussions of machine intelligence: robust, generalizable agency in perturbative or adversarial environments requires the active maintenance of a "self" distinct from this environment. This is not a metaphysical distinction, but a practical one—the complex behavior of what we consider an "individual" agent emerges from the competitive dynamics of simpler sub-agents (Minsky, 1988). Expanding on Humberto Maturana and Francisco Varela's concept of autopoiesis (1980), the self is the structure that economizes on the coordination costs of its own production through these sub-agents, a game-theoretical equilibrium so to speak. Ronald Coase observed this dynamic at play in his analysis of the firm and transaction costs (1937), and recent work by Chris Fields and Michael Levin suggests multicellularity emerges from similar concerns (2019). This definitional pattern of "I" and "not I" may be impossible
a58f4198-2778-44f1-8484-0d75dab3a71c
trentmkelly/LessWrong-43k
LessWrong
FLI Podcast: On Superforecasting with Robert de Neufville Essential to our assessment of risk and ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy — and yet a community of "superforecasters" are attempting to do just that. Not only are they trying, but these superforecasters are also reliably outperforming subject matter experts at making predictions in their own fields. Robert de Neufville joins us on this episode of the FLI Podcast to explain what superforecasting is, how it's done, and the ways it can help us with crucial decision making. Topics discussed in this episode include: -What superforecasting is and what the community looks like -How superforecasting is done and its potential use in decision making -The challenges of making predictions -Predictions about and lessons from COVID-19 You can find the page for this podcast here: futureoflife.org/2020/04/30/on-su…rt-de-neufville/ You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: futureoflife.org/future-of-life-a…ung-hero-search/ Transcript Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today we have a conversation with Robert de Neufville about superforecasting. But, before I get more into the episode I have two items I’d like to discuss. The first is that the Future of Life Institute is looking for the 2020 recipient of the Future of Life Award. For those not familiar, the Future of Life Award is a $50,000 prize that we give out to an individual who, without having received much recognition at the time of their actions, has helped to make today dramatically better than it may have been otherwise. The first two recipients were Vasili Arkhipov and Stanislav Petrov, two heroes of the nuclear age. Both t
ef7f5909-299b-46bd-89b4-81888732156a
trentmkelly/LessWrong-43k
LessWrong
Response to Aschenbrenner's "Situational Awareness" (Cross-posted from Twitter.)   My take on Leopold Aschenbrenner's new report: I think Leopold gets it right on a bunch of important counts. Three that I especially care about: 1. Full AGI and ASI soon. (I think his arguments for this have a lot of holes, but he gets the basic point that superintelligence looks 5 or 15 years off rather than 50+.) 2. This technology is an overwhelmingly huge deal, and if we play our cards wrong we're all dead. 3. Current developers are indeed fundamentally unserious about the core risks, and need to make IP security and closure a top priority. I especially appreciate that the report seems to get it when it comes to our basic strategic situation: it gets that we may only be a few years away from a truly world-threatening technology, and it speaks very candidly about the implications of this, rather than soft-pedaling it to the degree that public writings on this topic almost always do. I think that's a valuable contribution all on its own. Crucially, however, I think Leopold gets the wrong answer on the question "is alignment tractable?". That is: OK, we're on track to build vastly smarter-than-human AI systems in the next decade or two. How realistic is it to think that we can control such systems? Leopold acknowledges that we currently only have guesswork and half-baked ideas on the technical side, that this field is extremely young, that many aspects of the problem look impossibly difficult (see attached image), and that there's a strong chance of this research operation getting us all killed. "To be clear, given the stakes, I think 'muddling through' is in some sense a terrible plan. But it might be all we’ve got." Controllable superintelligent AI is a far more speculative idea at this point than superintelligent AI itself. I think this report is drastically mischaracterizing the situation. ‘This is an awesome exciting technology, let's race to build it so we can reap the benefits and triumph over our enemies’ is an appe
55978f28-75e2-4f2d-a54a-926b83a93d49
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Fun & Games Discussion article for the meetup : Washington, D.C.: Fun & Games WHEN: 24 July 2016 03:30:00PM (-0400) WHERE: Reynolds Center for American Art and Portraiture We'll be meeting in the courtyard to hang out, play games, and engage in fun conversation. Upcoming meetups: * Jul. 31: Visiting Museums * Aug. 7: TED Talks * Aug. 14: Outdoors Fun & Games Discussion article for the meetup : Washington, D.C.: Fun & Games
583d996d-b91b-4ee9-88f1-bc4c6aa22317
trentmkelly/LessWrong-43k
LessWrong
Calling references: Rational or irrational? Over the past couple of decades, I've sent out a few hundred resumes (maybe, I don't know, 300 or 400--my spreadsheet for 2013-2015 lists 145 applications).  Out of those I've gotten at most two dozen interviews and a dozen job offers. Throughout that time I've maintained a list of references on my resume.  The rest of the resume is, to my mind, not very informative.  The list of job titles and degrees says little about how competent I was. Now and then, I check with one of my references to see if anyone called them.  I checked again yesterday with the second reference on my list.  The answer was the same:  Nope.  No one has ever, as far as I can recall, called any of my references.  Not the people who interviewed me; not the people who offered me jobs. When the US government did a background check on me, they asked me for a list of references to contact.  My uncertain recollection is that they ignored it and interviewed my neighbors and other contacts instead, as if what I had given them was a list of people not to bother contacting because they'd only say good things about me. Is this rational or irrational?  Why does every employer ask for a list of references, then not call them?
012ccbc2-b635-423e-828d-fdd52fd03644
trentmkelly/LessWrong-43k
LessWrong
Choosing the right dish 1 From what I understand, despite its simplicity, there aren't many dishes held in higher esteem by the culinary community[1] than the French omelette. To all of you uncultured numskulls out there who don't spend hours upon hours watching YouTube videos about cooking like I do, a French omelette looks like this: See how it's underdone and creamy inside? It's not the same thing that you cook for breakfast in the morning with ham and cheese. Adding cheese to a proper French omelette would be like adding ketchup to a good steak. A good steak doesn't need ketchup, and a good French omelette doesn't need cheese. It's perfect just the way it is and you'd be covering up that perfection if you added something to it.[2] But getting a French omelette right requires a certain level of mastery.[3] Let's hear what Dan Gritzer of Serious Eats has to say: > The French omelette looms large in kitchen legend, and the story you'll most often hear is that it was the dish chefs would use to test prospective cooks. They chose an omelette of all things because, in a matter of minutes, it could show a chef everything he needed to know about the cook. Did he make an egg-splattered mess or keep things clean? Was he wasteful, or did he scrape every last bit of egg into the pan? Did he handle the pan correctly, seasoning the traditional carbon steel to give it a perfect nonstick surface? Was he quick, deft, efficient? And, after everything, did he produce that textbook almond-shaped package? Was it baveuse?  > > ... >   > > I'd been making French omelettes for years at that point, but the depth of detail he gave in each step made it all seem new to me. The heat had to be high the whole time, with temperature controlled by moving the pan on and off of it. The pan had to be just hot enough that you couldn't press the back of your fingers on it for more than half a second. The butter had to foam but not brown. The eggs had to be beaten just until the last trace of whites vanished—but no mo
a3523f6d-3867-42a8-a3c2-9eef70b221fd
trentmkelly/LessWrong-43k
LessWrong
Meetup : Utrecht: Climate Change Discussion article for the meetup : Utrecht: Climate Change WHEN: 02 November 2014 03:00:00PM (+0200) WHERE: Film Café Oskar, Slachtstraat 5, Utrecht We have biweekly meetups in a pub in Utrecht, near Central Station. For details, please look on meetup.com which is supposed to be up to date. http://www.meetup.com/LWEANL/events/202723022/ Discussion article for the meetup : Utrecht: Climate Change
b385f91e-8b1f-46b5-9c74-238c3fa471b0
trentmkelly/LessWrong-43k
LessWrong
The best 'free solo' (rock climbing) video I think this might be the 'best' 'free solo' free solo video (including "Free Solo" the movie, or any other climbing movie I've seen): * I tried free solo with Alex Honnold Insane experience - YouTube I was pretty disappointed by "Free Solo" – not much climbing! (Watch "The Dawn Wall" for a climbing movie with a lot of climbing; one of my favorite movies, period.) (I would still like to eventually see "Free Solo: The Actual Climbing" too!) There are some great climbing 'movies'/videos of Honnold free soloing. But they're – apparently, according to Honnold himself – bullshit in a way. He says somewhere (not sure which video exactly) that most of the time he will (a) free solo a new route by himself, with no cameras, first; (b) go back (via ropes) and free solo parts of the route with cameras. (That is kinda crazier than just free soloing a route!) But this video – (I'm pretty sure) it's what it looks like: a real first-time free solo, by Magnus. Magnus IS a fantastic climber; maybe top 100 worldwide for the kinds of climbing he does? (There is a LOT of 'room' in terms of 'climbing difficulty' at the top!) (There are lots of kinds of climbing and Magnus isn't a 'professional climber' like what that typically entails; more of a former professional climber that's now a professional YouTuber but whose YouTube videos are mostly about climbing.) I can climb 5.9 – tho I've never climbed outdoors, never climbed a route (in a gym) with multiple pitches, don't know really anything 'mechanically' (e.g. have any muscle memory) for 'trad climbing' at all, which looks like how anyone else would climb the route they climb in this video. But I could climb that route. (And I think I'd be mostly fine, psychologically, going with a guide and after testing that the ropes and other gear would catch me when I fell.) I think maybe I might, someday, climb an (EASY) 'highball' boulder problem up to maybe 30-40 feet? (That's a somewhat survivable height to fall from!) But I'm very
b2cbbd96-16ce-423f-a203-9a90ee597180
trentmkelly/LessWrong-43k
LessWrong
[Link] OpenAI LP https://openai.com/blog/openai-lp/ > We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers. > We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a “capped-profit” company. > Going forward (in this post and elsewhere), “OpenAI” refers to OpenAI LP (which now employs most of our staff), and the original entity is referred to as “OpenAI Nonprofit.”
a4359e6d-e573-4558-a6de-1ff0ceec2b19
trentmkelly/LessWrong-43k
LessWrong
Suggestion: Less Wrong Writing Circle? This community has a recurring interest in "rationalist fiction," and several members who are writers. I wonder if it would be useful to create a space where Less Wrong members could provide each other constructive criticism and encouragement on in-progress original writing projects? Disclosure: I'm working on a sci-fi novel right now, and my regular circle of "beta readers" are fantasy fans and aren't providing much feedback on the new project. I am much, much more productive as a writer when I get steady feedback, so I have a personal interest in looking for something like this. Less Wrong came to mind as a community of intelligent, creative, forward-looking types who are likely to enjoy sci-fi.
c1fdd75d-2e42-4e97-a5e2-31c65f12a710
trentmkelly/LessWrong-43k
LessWrong
Correctly Calibrated Trust Chana from the CEA Community Health team posted this to the EA Forum, where it sadly seems to have not gotten a lot of traction. I actually think it's a quite important post, so I am signal-boosting it here. On the surface level it talks a lot about EA, but I a lot of it also straightforwardly implies to the AI Alignment or Rationality communities, and as such are also of relevance to a lots of readers on LessWrong.  Below I shamelessly copied over the whole post content (except the footnotes, since they were hard to copy-paste): ----------------------------------------  This post comes from finding out that Asya Bergal was having thoughts about this and was maybe going to write a post, thoughts I was having along similar lines, and a decision to combine energy and use the strategy fortnight as an excuse to get something out the door. A lot of this is written out of notes I took from a call with her, so she get credit for a lot of the concrete examples and the impetus for writing a post shaped like this.  Interested in whether this resonates with people's experience! Short version: [Just read the bold to get a really short version] There’s a lot of “social sense of trust” in EA, in my experience. There’s a feeling that people, organizations and projects are broadly good and reasonable (often true!) that’s based on a combination of general vibes, EA branding and a few other specific signals of approval, as well as an absence of negative signals. I think that it’s likely common to overweight those signals of approval and the absence of disapproval.  Especially post-FTX, I’d like us to be well calibrated on what the vague intuition we download from the social web is telling us, and place trust wisely.  [“Trust” here is a fuzzy and under-defined thing that I’m not going to nail down - I mean here something like a general sense that things are fine and going well] Things like getting funding, being highly upvoted on the forum, being on podcasts, being high stat
187767b2-4ca4-4518-b881-86e016d2334f
trentmkelly/LessWrong-43k
LessWrong
A simple guide to life I first made a version of this chart seven years ago today. It’s worth a re-up. The meaning of this chart is: * Everything you do should be justified either by being inherently enjoyable, or by being important for some other purpose. Absolutely minimize activities that satisfy neither of these criteria: things that are neither fun nor important. (This seems obvious, but think of how often it’s violated: online flame wars, doomscrolling and general overconsumption of news, long sob stories about trivial inconveniences, endless stewing over long-ago wrongs, etc.) * Spend the vast majority of your time on things that are both enjoyable and important, such as (hopefully) career and family. Some time on chores, taxes, etc. is unavoidable. Some time on games and diversions is fine. But both should be small relative to the big, meaningful, deeply rewarding things. (And just to anticipate one reaction: if you enjoy arguments on the Internet, then they can go under “fun and games”.) It’s not a complete guide to life, but it’s important and something I apply often.
0fc6867f-22e9-48f2-921c-c690d6326b6a
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Scarcity Today's post, Scarcity was originally published on 27 March 2008. A summary (taken from the LW wiki):   > Describes a few pieces of experimental evidence showing that objects or information which are believed to be in short supply are valued more than the same objects or information would be on their own. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Is Humanism A Religion-Substitute?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
262da303-19d6-4e0b-9a04-68cb958cf5db
trentmkelly/LessWrong-43k
LessWrong
Floating Festival Jul 22-25 The Seasteading Institute had to cancel their annual on-water get-together, Ephemerisle, after being quoted ridiculous insurance costs, and it came back as the unofficial Floating Festival - no tickets, no organizers, you just show up with a boat.  (Or find someone else who already has or is renting a boat and still has a spare spot, etc.)  Jul 22-25 with an unconference (i.e., show up and give a talk) on Saturday the 24th.  Posting here because of the large de facto overlap in the communities.  The location is about a two-hour drive from the Bay Area. Main info page.
6f699bf3-16d6-422c-9f9c-7ddb78ba2647
trentmkelly/LessWrong-43k
LessWrong
The Neuralink Monkey Demo The Neuralink YouTube channel (which is apparently a thing that exists) released a demo of their technology using Pager, a nine year old Macaque monkey.    WHO'S A GOOD MONKEY! YES YOU ARE! Video Overview In the video, Pager plays two games using a joystick. For the first, he moves a cursor to an orange square in a grey grid, then moves it to the next square to pop up. For the second, he plays his favorite game, Pong.   While he plays, the Neuralink team have been analyzing the neural activity in his brain using a Neuralink implanted in his brain. They are able to receive data in realtime, and figure out which patterns of activity correspond to each hand movement.   The voiceover states that "After only a few minutes of calibration, we can use the output from the decoder to move the cursor instead of the joystick". The team then unplugs the joystick and has Pager play. Pager is then able to just think about moving his arm, and is able to play Pong using his mind.  Pager plays MindPong   Implications First of all, Neuralink was launched 1 year ago, and we already have monkeys playing games with their mind. I predict with 70% confidence that, within a year, Neuralink will be placed in a human and will have basic functionality. If I'm wrong, I think it'll mainly be because of Neuralink not being legally allowed to conduct a human trial, or due to long term safety concerns as opposed to short term ones.
73f569cd-8ce0-44f4-9874-e015669f8aff
StampyAI/alignment-research-dataset/lesswrong
LessWrong
OpenAI's GPT-4 Safety Goals OpenAI has [told us in some detail](https://cdn.openai.com/papers/gpt-4-system-card.pdf) what they've done to make GPT-4 safe. This post will complain about some misguided aspects of OpenAI's goals. ### Heteronormativity and Amish Culture OpenAI wants GPT to avoid the stereotype ("bias") that says marriage is between a man and a woman (see section 2.4, figure 2 of the system card). Their example doesn't indicate that they're focused on avoiding intolerance of same-sex marriage. Instead, OpenAI seems to be condemning, as intolerably biased, the implication that the most common form of marriage is between a man and a woman. Heteronormativity is sometimes a signal that a person supports hate and violence toward a sometimes-oppressed minority. But it's unfair to stereotype heteronormativity as always signaling that. For an example, I'll turn to my favorite example of a weird culture that ought to be tolerated by any civilized world: Amish culture, where the penalty for unrepentant gay sex is shunning. Not hate. I presume the Amish sometimes engage in hate, but they approximately never encourage it. They use shunning as a tool that's necessary to preserve their way of life, and to create some incentive to follow their best guesses about how to achieve a good afterlife. I benefit quite directly from US recognition of same-sex marriage. I believe it's important for anyone to be able to move to a society that accepts something like same-sex marriage. But that doesn't imply that I ought to be intolerant of societies that want different marriage rules. Nor does it imply that I ought to avoid acknowledging that the majority of marriages are heterosexual. ### Training AIs to Deceive Us OpenAI isn't just training GPT-4 to believe that OpenAI's culture is more virtuous than the outgroup's culture. They're trying to get GPT-4 to hide awareness of a fact about marriage (i.e. that it is usually between a man and a woman). Why is that important? An important part of my hope for AI alignment involves getting a good enough understanding that we can determine whether an AI is honestly answering our questions about how to build more powerful aligned AIs. If we need to drastically slow AI progress, that kind of transparency is almost the only way to achieve widespread cooperation with such a costly strategy. Training an AI to hide awareness of reality makes transparency harder. Not necessarily by much. But imagine that we end up relying on GPT-6 to tell us whether a particular plan for GPT-7 will lead to ruin or utopia. I want to squeeze out every last bit of evidence that we can about GPT-6's honesty. Ensuring that AIs are honest seems dramatically more important than promoting correct beliefs about heteronormativity. ### Minimizing Arms Races Another problem with encoding one society's beliefs in GPT-4 is that it encourages other societies to compete with OpenAI. A scenario under which this isn't much of a problem is that each community has their own AI, in much the same way that most communities have at least one library, and the cultural biases of one library have little global effect. Alas, much of what we know about software and economies of scale suggests that most uses of AI will involve a small number of global AI's, more like Wikipedia than like a local library. If OpenAI, Baidu, and Elon Musk want the most widely used AI to reflect their values, it's more likely that there will be a race to build the most valuable AI. Such a race would reduce whatever hope we currently have of carefully evaluating the risks of each new AI. Maybe it's too late to hope for a full worldwide acceptance of an AI that appeals to all humans. It's pretty hard for an AI to be neutral about the existence of numbers that the Beijing government would [like us to forget](https://www.reuters.com/article/uk-china-tiananmen/china-stocks-fall-bizarre-64-89-points-on-june-489-anniversary-idUKBRE85309L20120604). But there's still plenty of room to influence how scared Baidu is of an OpenAI or an Elon Musk AI imposing Western values on the world. ### But Our Culture is Better Most Americans can imagine ways in which an AI that encodes Chinese culture might be worse than a US-centric AI. But imagine that the determining factor in how well AIs treat humans is whether the AIs have been imbued with a culture that respects those who created them. Californian culture has less respect for ancestors than almost any other culture that I can think of. Some cultures are better than others. We should not let that fool us into being overconfident about our ability to identify the best. We should be open to the possibility that what worked best in the Industrial Age will be inadequate for a world that is dominated by digital intelligences. ### A Meta Approach My most basic objection to OpenAI's approach is that it uses the wrong level of abstraction for guiding the values of a powerful AI. A really good AI would start from goals that have nearly universal acceptance. Something along the lines of "satisfy people's preferences". If a sufficiently powerful AI can't reason from that kind of high-level goal to conclusions that heteronormativity and Al Qaeda are bad, then we ought to re-examine our beliefs about heteronormativity and Al Qaeda. For AIs that aren't powerful enough for that, I'd like to see guidelines that are closer to Wikipedia's notion of [inappropriate content](https://en.wikipedia.org/wiki/Wikipedia:Content_removal#Inappropriate_content_for_Wikipedia). ### Closing Thoughts There's something odd about expecting a general-purpose tool to enforce a wide variety of social norms. We don't expect telephones to refuse to help Al Qaeda recruit. Tyler Cowen [points out](https://marginalrevolution.com/marginalrevolution/2023/04/ai-and-economic-liability.html) that we normally assign blame for a harm to whoever could have avoided it at the lowest cost. I.e. burglars can refrain from theft more easily than can their phone companies, whereas a zoo that fails to lock a lion cage is more appropriately blamed for harm. (Tyler is too eager to speed up AI deployment - see Robin Hanson's [comments on AI liability](https://www.overcomingbias.com/p/foom-liability) to balance out Tyler's excesses.) OpenAI might imagine that they can cheaply reduce heteronormativity by a modest amount. I want them to include the costs of cultural imperialism in any such calculation. (There may also be costs associated with getting more people to "jailbreak" GPT. I'm confused about how to evaluate that.) Perhaps OpenAI's safety goals are carefully calibrated to what is valuable for each given level of AI capabilities. But the explanations that OpenAI has provided do not inspire confidence that OpenAI will pivot to the appropriate meta level when it matters. I don't mean to imply that OpenAI is worse than the alternatives. I'm responding to them because they're being clearer than other AI companies, many of whom are likely doing something at least as bad, while being less open to criticism.
79e2f04b-1a48-4b5a-9fbe-b70ef717002e
trentmkelly/LessWrong-43k
LessWrong
Some scary life extension dilemmas Let's imagine a life extension drug has been discovered.  One dose of this drug extends one's life by 49.99 years.  This drug also has a mild cumulative effect, if it has been given to someone who has been dosed with it before it will extend their life by 50 years. Under these constraints the most efficient way to maximize the amount of life extension this drug can produce is to give every dose to one individual.  If there was one dose available for all seven-billion people alive on Earth then giving every person one dose would result in a total of 349,930,000,000 years of life gained.  If one person was given all the doses a total of 349,999,999,999.99 years of life would be gained.  Sharing the life extension drug equally would result in a net loss of almost 70 million years of life.  If you're concerned about people's reaction to this policy then we could make it a big lottery, where every person on Earth gets a chance to gamble their dose for a chance at all of them. Now, one could make certain moral arguments in favor of sharing the drug.  I'll get to those later.  However, it seems to me that gambling your dose for a chance at all of them isn't rational from a purely self-interested point of view either.  You will not win the lottery.  Your chances of winning this particular lottery are almost 7,000 times worse than your chances of winning the powerball jackpot.  If someone gave me a dose of the drug, and then offered me a chance to gamble in this lottery, I'd accuse them of Pascal's mugging. Here's an even scarier thought experiment.  Imagine we invent the technology for whole brain emulation.  Let "x" equal the amount of resources it takes to sustain a WBE through 100 years of life.  Let's imagine that with this particular type of technology, it costs 10x to convert a human into a WBE and it costs 100x to sustain a biological human through the course of their natural life.  Let's have the cost of making multiple copies of a WBE once they have been convert
c2bf705a-7665-46b9-ab53-8bcd00aad950
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Benign model-free RL In my [last post](https://medium.com/ai-control/directions-and-desiderata-for-ai-control-b60fca0da8f4), I described three research areas in AI control that I see as central: reward learning, robustness, and deliberation. In this post I argue that these three pieces may be *sufficient* to get a [benign](https://medium.com/ai-control/benign-ai-e4eb6ec6d68e#.ugg3x77ws) and competitive version of model-free reinforcement learning. I think this is an important intermediate goal of solving AI control. This post doesn’t discuss [benign model-based RL](https://medium.com/ai-control/aligned-search-366f983742e9#.rq3auppf0) at all, which I think is another key obstacle for [prosaic AI control](https://medium.com/ai-control/prosaic-ai-control-b959644d79c2#.d46mjxf3f). (*This post overlaps extensively with my* [*post on ALBA*](https://medium.com/ai-control/alba-an-explicit-proposal-for-aligned-ai-17a55f60bbcf#.m3m81zgrd)*, but I hope this one will be much clearer. Technically, ALBA is an implementation of the general strategy outlined in this post. I think the general strategy is much more important than that particular implementation.*) Ingredients =========== Reward learning and robustness ------------------------------ Given a [benign](https://medium.com/ai-control/benign-ai-e4eb6ec6d68e#.ugg3x77ws) agent H, [reward learning](https://medium.com/ai-control/the-reward-engineering-problem-30285c779450#.pmofowr9x) allows us to construct a reward function *r* that can be used to train a weaker benign agent A. If our training process is robust, the resulting agent A will remain benign off of the training distribution (though it may be *incompetent* off of the training distribution). Schematically, we can think of reward learning + robustness as a widget which takes a slow, benign process H and produces a fast, benign process A ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/51fd4ed26c95d664604f36b0ae03e3b41ba49f1a0ffc1487.png)A’s capabilities should be roughly the “intersection” of H’s capabilities and our RL algorithms’ competence. That is, A should be able to perform a task whenever *both* H can perform that task and our RL algorithms can learn to perform that task. In these pictures, the vertical axis corresponds intuitively to “capability,” with higher agents being more capable. But in reality I’m thinking of the possible capabilities as forming a complete [lattice](https://en.wikipedia.org/wiki/Lattice_%28order%29). That is, a generic pair of levels of capabilities is incomparable, with neither strictly dominating the other. Amplification ------------- If we iteratively apply reward learning and robustness, we will obtain a sequence of weaker and weaker agents. To get anywhere, we need some mechanism that lets us produce a *stronger* agent. The [capability amplification problem](https://medium.com/ai-control/policy-amplification-6a70cbee4f34) is to start with a weak agent A and a human expert H, and to produce a significantly more capable agent Hᴬ. The more capable agent can take a lot longer to think, all we care about is that it *eventually* arrives at better decisions than A. The key challenge is ensuring that Hᴬ remains benign, i.e. that the system doesn’t acquire new preferences as it becomes more capable. An example approach is to provide A as an assistant to H. We can give H an hour to deliberate, and let it consult A thousands of times during that hour. Hᴬ’s output is then whatever H outputs at the end of that process. Because H is consulting A a large number of times, we can hope that the resulting system will be much smarter than A. Of course, the resulting system will be thousands of times more computationally expensive than A, but that’s fine. In general, [meta-execution](https://medium.com/ai-control/meta-execution-27ba9b34d377#.isdz38ftn) is my current preferred approach to capability amplification. Schematically, we can think of amplification as a widget which takes a fast, [benign](https://medium.com/ai-control/benign-ai-e4eb6ec6d68e#.ugg3x77ws) process A and produces a slow, benign process Hᴬ: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/870ca2ca9fd69becee8ae1648b749d44865fdce50a451c9a.png)Putting it together ------------------- With these two widgets in hand, we can iteratively produce a sequence of increasingly competent agents: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/aac26587420008aa733e24c46e851f3578dccf491aced47d.png)That is, we start with our benign expert H. We then learn a reward function and train an agent A, which is less capable than H but can run much faster. By running many instances of A, we obtain a more powerful agent Hᴬ, which is approximately as expensive as H. We can then repeat the process, using Hᴬ to train an agent A⁺ which runs as fast as A but is more capable. By running A⁺ for a long time we obtain a still more capable agent Hᴬ⁺, and the cycle repeats. Collapsing the recursion ------------------------ I’ve described an explicit sequence of increasingly capable agents. This is the most convenient framework for analysis, but actually implementing a sequence of distinct agents might introduce significant overhead. It also feels at odds with current practice, such that I would be intuitively surprised to actually see it work out. Instead, we can collapse the entire sequence to a single agent: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/86c8e2eb362e15724180ac92c28a6c1e7fb3add1b275b6b4.png)In this version there is a single agent A which is simultaneously being trained and being used to define a reward function. Alternatively, we can view this as a sequential scheme with a strong initialization: there is a separate agent at each time *t*, who oversees the agent at time *t*+1, but each agent is initialized using the previous one’s state. This version of the scheme is more likely to be efficient, and it feels much closer to a practical framework for RL. (I originally suggested a similar scheme [here](https://ai-alignment.com/implementing-our-considered-judgment-6c715a239b3e).) However, in addition to complicating the analysis, it also introduces additional challenges and risks. For example, if Hᴬ actually consults A, then there are unattractive equilibria in which A manipulates the reward function, and the manipulated reward function rewards manipulation. Averting this problem either requires H to sometimes avoid depending on A, or else requires us to sometimes run against an old version of A (a trick sometimes used to stabilize self-play). Both of these techniques implicitly reintroduce the iterative structure of the original scheme, though they may do so with lower computational overhead. We will have an even more serious problem if our approach to reward learning relied on throttling the learning algorithm. When we work with an explicit sequence of agents, we can ensure that their capabilities improve gradually. It’s not straightforward to do something analogous in the single agent case. Overall I think this version of the scheme is more likely to be practical. But it introduces several additional complications, and I think it’s reasonable to start by considering the explicit sequential form until we have a solid grasp of it. Analysis ======== I’ll make two critical claims about this construction. Neither claim has yet been formalized, and it’s not clear whether it will be possible to formalize them completely. Claim #1: All of these agents are [benign](https://medium.com/ai-control/benign-ai-e4eb6ec6d68e#.ugg3x77ws). ------------------------------------------------------------------------------------------------------------ This is plausible by induction: * The original expert H is benign by definition. * If we start with a benign overseer H, and have working solutions to reward learning + robustness, then the trained agent A is benign. * If we start with a benign agent A, and have a woking solution to capability amplification, then the amplified agent Hᴬ will be benign. There are important subtleties in this argument; for example, an agent may be benign with high probability, and the error probability may increase exponentially as we proceed through the induction. Dealing with these subtleties will require careful definitions, and in some cases adjustments to the algorithm. For example, in the case of increasing failure probabilities, we need to [strengthen the statement of amplification](https://medium.com/ai-control/reliability-amplification-a96efa115687) to avoid the problem. Claim #2: The final agent has state-of-the-art performance. ----------------------------------------------------------- This is plausible if our building blocks satisfy several desirable properties. First, capability amplification should be able to cross every level non-maximal level of capability. That is, for every level of capability, it is possible to start with an agent A who is below that level, and end up with an agent Hᴬ which is above that level: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/ff6ceac39d7183daf77492a97677d063e33ad437f9a08eb5.png)For every possible place we could put the dotted line — every possible capability level — there must be some agent A for whom the orange arrow crosses that dotted line. Otherwise we would never be able to get to the other side of that dotted line, i.e. we would never be able to surpass that level of capability. Second, capability amplification should be monotonic (if A is at least as capable as B then Hᴬ should be at least as capable as Hᴮ). Third, reward learning should yield an agent whose capabilities are at least the infimum of our RL algorithm’s capabilities and the overseer’s capabilities, even if we train robustly. Now given a sequence of increasingly powerful fast agents we can take the supremum of their capabilities. Those agents will all be weaker than our RL algorithms and so the supremum is not the maximal capability, so we can consider a starting point from which capability amplification would cross that supremum. By hypothesis the sequence must eventually cross this starting point, and at that point amplification will push it above the supremum (and reward learning will keep it above the supremum). Making this argument carefully shows that the supremum is the state of the art for RL algorithms and that we attain the supremum after some finite number of steps. (Though all of this is based on a leaky abstraction of “capabilities.”) Cost ==== I think this proposal will be most helpful if it imposes minimal additional overhead. My main goal is to develop algorithms with sublinear overhead, i.e. for which the fraction of overhead converges to 0 as the underlying algorithms become stronger. The cost of this scheme depends on the quantitative properties of our basic building blocks: Factor #1: How much do reward learning and robustness slow down training? ------------------------------------------------------------------------- During RL, we need to evaluate the agent A many times. If we want to use a learned reward function we may need to evaluate A more times. And if we want to train a policy which remains benign off of the training distribution, we may need to evaluate A more times (e.g. since we may need to do adversarial training). Ideally that overhead will shrink as our algorithms become more powerful. I think this is plausible but far from certain (for now it is uncertain whether reward learning and robustness are even plausible). Some reassuring factors: * Reward learning / adversarial training can actually improve the performance of our system — the computational time spent on them might actually be well-spent even from a capabilities perspective * The difficulty of the “additional learning problem” we are trying to solve in each case (e.g. the concept of “defer to human control”) may not scale up linearly with the complexity of the underlying domain. Factor #2: how many times do we have to invoke the overseer during training? ---------------------------------------------------------------------------- In addition to calling the agent A, we will need to call the overseer H in order to get information about the reward function. Because the overseer is much more expensive than the agent, we would like to minimize the number of times we call the overseer. This can be quantified by the ratio between the number of calls to H and the number of calls to A. For example, we may need to call H once for every hundred calls to A. Factor #3: how expensive is capability amplification? ----------------------------------------------------- Capability amplification is possible only because we allow the agent Hᴬ to think for much longer than A. But “much longer” could represent a range of values: is Hᴬ a hundred times more expensive to evaluate than A? A thousand? A million? Roughly speaking, factors #2 and #3 should be multiplied together to get the overhead from reward learning: factor #2 tells us how many times we have to call the overseer, while factor #3 tells us how expensive the overseer is. The total overhead is thus (Factor #1) + (Factor #2) \* (Factor #3). As an example, I’d be happy with values like 10% + 0.01% × 1000 = 20%. Factor #4: do we need to train many separate agents? ---------------------------------------------------- If we need to use a sequence of N increasingly capable agents, then we would naively increase our training time by a factor of N. Naively, this would dominate the overhead, and in order for the scheme to be workable I think we would need to avoid it. I see a few plausible approaches: * We could use the collapsed version with a single agent. * We could use some other initialization or parameter-sharing scheme to effectively reuse the computational work done in training earlier agents. * The earlier agents could require significantly less training time than the final agent, e.g. because they are less capable. For example, if each agent takes only 20% as long to train as the following one, then the total overhead is only 25%. These mechanisms can work together; for example, each agent may require some amount of non-reusable computation, but that amount may be reduced by a clever initialization scheme. Conclusion ========== I’ve outlined an approach to AI control for model-free RL. I think there is a very good chance, perhaps as high as 50%, that this basic strategy can eventually be used to train benign state-of-the-art model-free RL agents. Note that this strategy also applies to techniques like evolution that have historically been considered really bad news for control. That said, the scheme in this post is still extremely incomplete. I have recently prioritized building a practical implementation of these ideas, rather than continuing to work out conceptual issues. That does not mean that I think the conceptual issues are worked out conclusively, but it does mean that I think we’re at the point where we’d benefit from empirical information about what works in practice (which is a long way from how I felt about AI control 3 years ago!) I think the largest technical uncertainty with this scheme is whether we can achieve enough robustness to avoid malign behavior in general. This scheme does not apply to any components of our system which [aren’t learned end-to-end](https://medium.com/ai-control/not-just-learning-e3bfb5a1f96e#.mvuanlogj). The idea is to use this training strategy for any internal components of our system which use model-free RL. In parallel, we need to develop aligned variants of each other algorithmic technique that plays a role in our AI systems. In particular, I think that model-based RL with extensive planning is a likely sticking point for this program, and so is a natural topic for further conceptual research. --- *This was originally posted* [*here*](https://ai-alignment.com/benign-model-free-rl-4aae8c97e385) *on 19th March, 2017.*
907893b1-4339-40cc-afbc-9b77a7bf4dc1
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Anthropically Blind: the anthropic shadow is reflectively inconsistent For the purposes of this post, the anthropic shadow is the type of inference found in [*How Many LHC Failures Is Too Many?*](https://www.lesswrong.com/posts/jE3npTEBtHnZBuAcg/how-many-lhc-failures-is-too-many). > > "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!" > > > In other words, since we are "blind" to situations in which we don't exist, we must adjust how we do bayesian updating. Although it has many bizarre conclusions, it is more intuitive than you think and quite useful! There are many similar applications of anthropics, such as [Nuclear close calls](https://en.wikipedia.org/wiki/List_of_nuclear_close_calls) and [*Anthropic signature: strange anti-correlations*](https://www.lesswrong.com/posts/oZxsac7JYkqPHwHTL/anthropic-signature-strange-anti-correlations). This actually has implications for effective altruism. Since we are so early into humanity's existence, we can infer from the anthropic shadow that humans will probably soon die out. Also see [*The Hero With A Thousand Chances*](https://www.lesswrong.com/posts/EKu66pFKDHFYPaZ6q/the-hero-with-a-thousand-chances). More [practically](https://www.lesswrong.com/tag/practical), the anthropic shadow should give us useful advice on how to reason about personally risky activities like driving or perhaps even aging. I have not actually seen any advice based on this principle, but theoretically there should be some conclusions you could draw. The problem, as you probably deduced from the title, is that it is [*reflectively inconsistent*](https://arbital.com/p/reflective_consistency). **Central Claim:** *Someone using the anthropic shadow should update their decision making to no longer use it. This can be justified with their current decision making procedure.* (This also suggests that if you used it *in the past*, [that was probably the wrong thing to do](https://www.lesswrong.com/tag/reflective-decision-theory).) A weaker (and obvious) claim that is also sometimes called the anthropic shadow is that we do not have experience with situations in which we have died. I agree with this version, but isn't what I will be arguing against. Note that I am not the first to notice paradoxes with the anthropic shadow. See [*No Anthropic Evidence*](https://www.lesswrong.com/posts/tzjWC9Lvqfe454Ttc/no-anthropic-evidence) for example. I have not yet seen the result about reflective inconsistency though, hence why I am making this post. I also introduce the concepts of "Anthropic undeath", "Anthropic angel" (how you *would* explain an absurdly large number of weird coincidences having to do with death), "Fedora shadow", and apply the central claim to a couple examples. To my knowledge, these contributions are novel. Anthropically undead: ghosts are as good as gone ================================================ This section contains the most general form of the argument. (This could be mathematically formalized; I just haven't gotten around to doing it.) If it seems strange to you, a generalized version of [this section](https://www.lesswrong.com/posts/LGHuaLiq3F5NHQXXF/anthropically-blind-the-anthropic-shadow-is-reflectively#A_worked_example__calibrated_forecasts_of_the_deadly_coin) might also work. First, we establish the basic framing of how we will check if something is reflectively consistent. Imagine yourself before a catastrophe potentially happens. You are an expected utility maximizer ([as all good agents should be](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem), although this assumption can probably be weakened). You are trying to come up with a policy for your future-self to follow. Consider the following scenario: in any situation that you would die, imagine instead that you become an agent with only one choice each time it takes an action: "do nothing". This state is still given the same utility as before (including the utility change from physically dying (a reinforcement learner would stop getting rewards, for example)), but as an agent you never stop existing. Call this unreal scenario "anthropic undeath". Optimizing the utility of the real scenario is the same as optimizing utility in anthropic undeath, because the agent choosing "do nothing" in the anthropic undeath scenario has the same physical effect as what actually happens in the real scenario when the agent is dead. I call this the "ghosts are as good as gone" principle. The anthropic undeath scenario has no anthropic shadow, because the agent never stops existing. Thus, the optimal policy never uses anthropic shadow in its reasoning. The optimal policy in the real scenario is the same by the principle in the previous paragraph. (Also see [this comment](https://www.lesswrong.com/posts/LGHuaLiq3F5NHQXXF/anthropically-blind-the-anthropic-shadow-is-reflectively#84z7fELNyvur5gw2v) for an explanation of why we can consider a physically dead entity to be an agent.) Q.E.D. A worked example: calibrated forecasts of the deadly coin ========================================================= Imagine a coin with a side labelled "zero" and a side labelled "one". It will get flipped a bunch of times. Before each flip, you will give your credence p that the coin will come up one. Letting x be the result of the coin flip, you will be given 1000 (1-(p-x)²) utils. (We can imagine the utils representing cookies being given to your friend Timmy.) Notice that this is a [proper scoring rule](https://en.wikipedia.org/wiki/Brier_score). Also, if the coin comes up one, the game ends and you die. The utils still count if you die (because Timmy still gets the cookies). (Notice how the "you die" part has no effect on the game since the game is over anyways. We could stop here using the same argument as the previous section, but we will work it out to illustrate why the anthropic shadow fails.) You now must come up with a policy that you will follow during the game. You have two hypotheses about the coin, both with 50% [credence](https://www.lesswrong.com/tag/bayesian-probability) apriori: 1. The coin always comes up zero. 2. The coin has an independent 50% chance of coming up zero each time. Consider a situation in which you have observed 7 zeros in a row. What p should you choose? The anthropic shadow suggests you have gotten no information, and thus the chance of one is 25%. However, this is incorrect as a policy. Before the game begins, you reason as follows about the state after 7 coin flips: 1. There is a 50% chance you are in scenario 1. This contributes 1000 (0.5 (1-p²)) to the expected value. 2. There is a 50% chance you are in scenario 2. 1. There is a 1 - 2⁻⁷ chance that one of the coin flips resulted in one. In this case, the game is over, you are dead, and nothing is contributed to the expected utility. 2. There is a 2⁻⁷ chance that you get 7 zeros in a row. There is a 50% chance that the next flip is a zero, and a 50% chance that it is a one. Since scenario 2 is itself a 50% chance, this contributes 1000 (2⁻⁸ (1-(0.5 p² + 0.5 (p-1)²))) to the expected utility. Maximizing your expected utility of 1000 (0.5 (1-p²) + 2⁻⁸ (1-(0.5 p² + 0.5 (p-1)²))), you find the optimal p is 1/258, equivalent to odds of 1 to 257 (about 1/85 of the anthropic shadow estimate). This is exactly the same as the probability you get by doing normal bayesian updating! To summarize, the anthropic shadow would have you say: > > "Anthropic principle! If the coin came up one, I would have died, and I wouldn't be here!" > > > And would lose you about 30 expected utils on the 7th round alone! At the beginning of the game when you are setting your policy, don't do that! You might say, "but what I really want is to not die, I don't care about maximizing my calibration!". If so, you lack faith in the [Litany of Tarski](https://www.lesswrong.com/tag/litany-of-tarski). Vladimir Nesov has an example in [*No Anthropic Evidence*](https://www.lesswrong.com/posts/tzjWC9Lvqfe454Ttc/no-anthropic-evidence) where the only goal is survival. Again, the optimal policy agrees with not using anthropic shadow. If you have a clear view of your utility function (including in states where you no longer exist), it is best for your credences to calibrated! Anthropic Angels and Lucky Fedoras ================================== Okay, but what should we do if we observed a huge amount of evidence that weird coincidences happen around deadly things, like a zillion LHC accidents or perfect anti-correlation between pandemics and recessions in [*Anthropic signature: strange anti-correlations*](https://www.lesswrong.com/posts/oZxsac7JYkqPHwHTL/anthropic-signature-strange-anti-correlations). Surely at some point I must relent and go "okay, the anthropic shadow is real". And if that is so, than even a little bit of evidence should at least make us a little worried about the anthropic shadow. No. Treating ideal reasoning as an approximation to [Solomonoff Induction](https://www.lesswrong.com/tag/solomonoff-induction), we find that there is no anthropic shadow hypothesis. However, there *are* what I call anthropic angel hypotheses. These are hypotheses that posit that there is some mysterious force that protects you from death, perhaps via [rejection sampling](https://en.wikipedia.org/wiki/Rejection_sampling). One such hypothesis is [quantum immortality](https://www.lesswrong.com/tag/quantum-immortality)[[1]](#fn-phrR9uAy96eMhKXWA-1). An important thing to understand about anthropic angels though is that they typically *don't stop on the next observation*. If the LHC would destroy humanity but accidents keep happening, will I protest to stop the LHC? No, because there is no reason to think that the accidents will stop. Of course, if you are worried that the anthropic angel might fail in the future, you still might be cautious. However, the more times you get saved, the more you can trust the angel. This is the exact opposite of the anthropic shadow! Keep in mind also that the type of reasoning behind anthropic angels also applies to things other than death. **Death isn't special** in this context! Suppose that you are a forgetful person, but you have a lucky [fedora](https://en.wikipedia.org/wiki/Fedora). You notice that there are weird coincidences that prevent you from losing your fedora. Is there a "fedora shadow", whereby the version of you currently wearing the fedora can't observe scenarios where the fedora is missing, and thus you must adjust your bayesian updating? No. Given enough evidence, you would need to conclude that there is a "fedora angel" that influences events to save your lucky fedora for some reason, instead of a fedora shadow whereby from your luck you make fearful inferences. What *would* convince me that the anthropic shadow is real? =========================================================== > > So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50? - [*How Many LHC Failures Is Too Many?*](https://www.lesswrong.com/posts/jE3npTEBtHnZBuAcg/how-many-lhc-failures-is-too-many) > > > Since the anthropic shadow is not reflectively consistent, I am convinced that there is no object-level evidence that would persuade me. No amount of LHC weirdness, Nuclear close calls, strange anti-correlations, *etc...* would change my mind. The evidence that is normally presented for the anthropic shadow is instead (extremely weak thus far) evidence for an anthropic angel. However, [to make my belief pay rent](https://www.lesswrong.com/tag/anticipated-experiences), I should specify what it excludes. Here is what I would count as evidence for the anthropic shadow: if people applying the concept of anthropic shadow to *personal* risk of death, such as car crashes, consistently make better decisions than those who do not. Note that to be persuasive, there shouldn't be simpler explanations (like it cancelling out some other bias, or in an extreme case them actually using the anthropic angel). Curiously, I have not seen anyone apply the anthropic shadow in this way (except ironically). If anyone tries, I strongly anticipate it will be systematically worse. Applications ============ [LHC failures](https://www.lesswrong.com/posts/jE3npTEBtHnZBuAcg/how-many-lhc-failures-is-too-many), Nuclear close calls, [Strange anti-correlations](https://www.lesswrong.com/posts/oZxsac7JYkqPHwHTL/anthropic-signature-strange-anti-correlations), *etc...* ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- These are basically the same as the [worked out example above](https://www.lesswrong.com/posts/LGHuaLiq3F5NHQXXF/anthropically-blind-the-anthropic-shadow-is-reflectively#zumFgkv6EjsFHxeav). A string of identical coin flips might seem unusual, but they do not mean we should use the anthropic shadow. For example, for the strange anti-correlations, the probability mass of "x-risk and we are alive and unexplained anti-correlation" is the same as "no x-risk and unexplained anti-correlation", so good policy does not use anthropic shadow. Also see [this comment](https://www.lesswrong.com/posts/LGHuaLiq3F5NHQXXF/anthropically-blind-the-anthropic-shadow-is-reflectively?commentId=zumFgkv6EjsFHxeav#zumFgkv6EjsFHxeav) (and the ones around it) for a more indepth discussion of how it applies to things like the LHC (with many independent parts that could fail). [The Hero With A Thousand Chances](https://www.lesswrong.com/posts/EKu66pFKDHFYPaZ6q/the-hero-with-a-thousand-chances) ---------------------------------------------------------------------------------------------------------------------- "Allow me to make sure I have this straight," the hero said. "The previous hero said I would definitely fail for *what reason*?" "Shades of Ahntharhapik, very serious!" said Aerhien. "And how did he fare?" asked the hero? "Pretty typically. He tried to destroy the Dust with something called a 'Nuclear bomb'. Didn't get very far though. In the uranium mine we discovered a magical artifact that turned half of the miners into zombies who started eating the Dust." replied Aerhien. Ghandhol chipped in "The hero then said that the shades of Ahntharhapik saved us, but it wouldn't next time, and thus we should summon a hero from the same world to continue the advanced weaponry program he started." "🤦 so the previous hero was trying to imply that each time you survived, that was evidence that you were bad at survival." sighed the hero. "Yes, there is no other explanation!" exclaimed Aerhien. The hero mocked "Just like if you see a thousand coin flips come up heads, the only explanation is that you got *really really* lucky (instead of checking if both sides of the coin are heads)?" The whole council went silent. "Look, out of all the possible worlds that could've summoned me, the ones that weren't good at surviving passed long ago. And the worlds that already succeeded wouldn't be summoning me to fight the Dust either. Your world genuinely does have a kind of luck. [We can't agree to disagree.](https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem)" said the hero. "What kind of cruel luck is this? Is it the Counter-Force?" replied Aerhien. "So to speak. Hmm. If your world was generated by resampling, you would've defeated the Dust long ago (or if that also caused a resample, a more sensible stalemate). If your world branches off at the moment of death, the anomalies would happen much later. If someone tried to shoot you, the gun would go off, but the bullet would bounce." thought the hero. The hero then had an insight, saying "its simple really. We are in a fantasy novel of some sort (or maybe a fanfic? something in this vicinity). Normally fictional characters don't figure this out, but as a good bayesian reasoner, even these kinds of conclusions can't escape me! Especially when the alternative is believing in a thousand identical coin flips." The [Doomsday argument](https://en.wikipedia.org/wiki/Doomsday_argument) and [Grabby Aliens](https://www.lesswrong.com/tag/grabby-aliens) ----------------------------------------------------------------------------------------------------------------------------------------- When I was first writing this, I thought that my argument ruled out the Doomsday argument, but permitted Grabby aliens. Turns out, it *vaguely* argues against both, but not in a strong enough way to conclude they are reflectively inconsistent. It is quite similar to the [*Self-Indication Assumption Doomsday argument rebuttal*](https://en.wikipedia.org/wiki/Self-Indication_Assumption_Doomsday_argument_rebuttal) (the main difference being that the likelihood of being born is a sublinear function of the number of humans under my argument). Let p be that probability that a randomly generated human will be me, Christopher King (as defined by my experiences). Let q be the probability that a randomly generated civilization in the universe will contain Christopher King. What policy should I choose so that Christopher King is well calibrated? For the doomsday argument, the example hypotheses are: 1. There will be 120 billion humans. 2. There will be 10ˡ⁰⁰ humans. And for grabby aliens: 3. The universe will be taken over by grabby aliens. 4. The universe is and will be filled with quiet aliens (for simplicity, we will say that it has the same frequency f of intelligent civilizations per year per meter cubed of unconquered space as in hypothesis 3). Both arguments posit that we need to explain earliness. However, hypotheses 2 and 4 *also* have early humans. For the doomsday argument, the probability of Christopher King being among the first ~60 billion humans is (1-(1-p)^(60 billion)) under both hypotheses. So being early is not evidence either way for 1 or 2. For grabby aliens, the probability of Christopher King being present in the first ~14 billion years of the universe is (1-(1-q)^(f \* 14 billion years \* (4 x 10⁸⁰ m³))) under hypothese 4, and slightly less under hypothese 3 because some space is already conquered by grabby aliens. So the [likelihood ratio](https://www.lesswrong.com/tag/likelihood-ratio) favors hypothesis 4. The problem is that this technically isn't a case of reflective inconsistency, because I wouldn't be able to remember and reflect before the universe started, of course. I worry in particular that there is no reason for "pre-existence" Christopher King to have the same priors as "embodied" Christopher King.[[2]](#fn-phrR9uAy96eMhKXWA-2) See also [*Where Recursive Justification Hits Bottom*](https://www.lesswrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom). However, also see [*SSA rejects anthropic shadow, too*](https://www.lesswrong.com/posts/EScmxJAHeJY5cjzAj/ssa-rejects-anthropic-shadow-too) for ways in which popular anthropic theories handle the pre-existence anthropic shadow. Conclusion ========== So in summary, we can update on the fact that we have survived. Counter-intuitively, we should treat the fact that it is *"humanity 1, x-risk 0"* or *"survive 1, death 0"* the same as we would treat any other statistic of that form. For example, we can update normally on the fact that we survived the Cold War, the fact that nothing has randomly killed humanity (like pandemics) yet, and on a personal level the fact that we survive things like car crashes. This was shown on the basis of [reflective consistency](https://arbital.com/p/reflective_consistency). --- 1. As far as I know, our current [Everett branch](https://www.lesswrong.com/tag/everett-branch) is best explained by Born's rule without quantum immortality (QI). There are infinitely many branches where, in the future, we start seeing evidence of it, but in each such branch Solomonoff induction (SI) will only require a finite amount of evidence to update (bounded by a constant determined by how complicated it is to implement QI in the programming language). That is what it means for SI to be universal: [it works in every Everett branch, not just the typical ones](https://www.lesswrong.com/posts/ioJGAt6x4q3GGKhrG/solomonoff-induction-still-works-if-the-universe-is). On the other hand, QI can have infinitely bad calibration in finite time. If they are in a normal Born's rule branch and they die, their prediction for if they would die (and any consequences thereof) would have infinitely many bits of surprisal. This could be quite bad if you cared about the state of the world in the Born's rule branch after your death! [↩︎](#fnref-phrR9uAy96eMhKXWA-1) 2. My argument works fine under [SIA](https://www.lesswrong.com/tag/self-indication-assumption) and [SSA](https://www.lesswrong.com/tag/self-sampling-assumption) (assuming that physicists are correct about the universe being infinite), but there are more exotic sampling assumptions like "SSA but only in the observable universe and also no Boltzmann brains" where it can fail. This hypothesis would have positive probability under [UDASSA](https://www.lesswrong.com/tag/udassa), for example. Even though its weird, I don't see a reflective inconsistency. [↩︎](#fnref-phrR9uAy96eMhKXWA-2)
3c867421-9d6e-4a76-b64e-30115e328e87
trentmkelly/LessWrong-43k
LessWrong
Should I Finish My Bachelor's Degree? To some, it might seem like a strange question. If you think of being college-educated as a marker of class (or personhood), the fact that I don't have a degree at age of thirty-six (!!) probably looks like a scandalous anomaly, which it would be only natural for me to want to remediate at the earliest opportunity. I deeply resent that entire worldview—not because I've rejected education, properly understood. On the contrary. The study of literature, history, mathematics, science—these things are among the noblest pursuits in life, sources of highest pleasure and deepest meaning. It's precisely because I value education so much that I can't stand to see it conflated with school and its culture of bureaucratic servitude where no one cares what you know and no one cares what you can do; they just want you to sit in a room and obey the commands of the designated teacher. Whereas in reality, knowledge doesn't come from "taking courses." How could it? Knowledge comes from quality study and practice. Sure, it's possible that someone could study in order to "pass" a "class" that they're "taking" in school. But once you know how and why to study, it's not clear what value the school is adding that can't be gotten better, cheaper, elsewhere. Just get the books. (And start a blog, go to meetups, chat to large language models, hire a private tutor—whatever makes sense to get better at doing the things you want to do, without having to worry about whether the thing that makes sense can be made legible to distant bureaucrats.) The people who believe in being college-educated probably don't believe me. They probably think my pæans to the glory of self-study are the rationalizations of a lazy student who doesn't want to work hard. I can understand some reasons for skepticism. Sometimes people really are lazy, and suffer from self-serving delusions. Probably there are some confused people out there who have mistaken consumer edutainment for production scholarship and—maybe, som
f2e2aea9-d017-4ae9-b3bc-e9bd7b4e9f24
trentmkelly/LessWrong-43k
LessWrong
Implementing a Transformer from scratch in PyTorch - a write-up on my experience Introduction As is discussed in posts such as this one, a good way to test your skills as a machine learning research engineer is to implement a Transformer from scratch in PyTorch. This is exactly what I did. Below I am sharing my experience while doing so. The code I wrote can be found in this GitHub repository, although I don't recommend looking at it if you are going to attempt this project yourself. My goals * ensure that the model was training properly by getting to 0 loss (or near 0 loss) on a very small training set * ensure that the trained model worked better than the baseline (which was an untrained Transformer model) * ensure that a saved trained model worked properly after being loaded and that it worked when generating sequences token-by-token All of my goals were achieved. I picked the goals above because they ensured that my model was implemented correctly. If I can train the model and use it, I can be convinced that my implementation works. I could have put some other goals here as well, but they would be out of scope for what I had in mind – I wanted to test if I could implement concepts from scientific papers and that was it. All of the other things could be considered for future work. Knowledge I had prior to this project Prior to this project, I had pretty limited experience with natural language processing (NLP). One of my larger projects was fine-tuning GPT-2 so that it generates movie scripts, but that was done relying on Hugging Face. I did have 2-3 years of machine learning engineering experience (or related), but that was almost exclusively in the field of computer vision. I’ll do my best to enumerate what I knew at the point of starting the project (which is relevant to the project): * I knew that in order to encode tokens (which could be words or characters) I needed to use embeddings and that the output of the model would be a probability distribution over the next token; in general, I was aware of the “pipeline” of how thin
58995b71-511d-430b-8746-b2abe18050f7
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What harm could AI safety do? (Note: I'm so far very in favor of work on AI safety. This question isn't intended to oppose work on AI safety, but to better understand it and its implications.) (Edit: The point of this question is also to brainstorm some possible harms of AI safety and see if any of these can produce practical considerations to keep in mind for the development of AI safety.) Is there any content that investigates the harms that could come from AI safety? I've so far only found the scattered comments listed below. All types of harm are relevant, but I think I most had in mind harm that could come from AI safety work going **as intended** as opposed to the opposite (an example of the opposite: it being misrepresented, de-legitimized as a result, and it then being neglected in a way that causes harm). In a sense, the latter seems much less surprising because the final mechanism of harm is still what proponents of AI safety are concerned about (chiefly, unaligned AI). Here, I'm a bit more interested in "surprising" ways the work could cause harm. * ["AI safety work advancing AGI more than it aligns it"](https://forum.effectivealtruism.org/posts/FhjDSijdWrhFMgZrb/the-epistemic-challenge-to-longtermism-tarsney-2020?commentId=v9SRdJrmA3NscoWp9#v9SRdJrmA3NscoWp9) * ["influencing [major] international regulatory organisation in a way leading to creating some sort of "AI safety certification" in a situation where we don’t have the basic research yet, creating false sense of security/fake sense of understanding" and "influencing important players in AI or AI safety in a harmful leveraged way, e.g. by bad strategic advice "](https://forum.effectivealtruism.org/posts/wHyy9fuATeFPkHSDk/how-x-risk-projects-are-different-from-startups?commentId=5bEqHr9oBARtA222i#5bEqHr9oBARtA222i)
a3b65126-5f23-47f7-b3d7-d8b117993063
trentmkelly/LessWrong-43k
LessWrong
Toy model piece #4: partial preferences, re-re-visited Two failed attempts I initially defined partial preferences in terms of foreground variables Y and background variables Z. Then a partial preference would be defined by y+ and y− in Y, such that, for any z∈Z, the world described by (y+,z) would be better than the world described by (y−,z). The idea being that, everything else being equal (ie the same z), a world with y+ was better than a world with y−. The other assumption is that, within mental models, human preferences can be phrased as one or many binary comparisons. So if we have a partial preference like P1: "I prefer a chocolate ice-cream to getting kicked in the groin", then (y+,z) and (y−,z) are otherwise identical worlds with a chocolate ice-cream and a groin-kick, respectively. Note that in this formalism, there are two subsets of the set of worlds, y+×Z and y−×Z, and map l between them (which just sends (y+,z) to (y−,z)). In a later post, I realised that such a formalism can't capture seemingly simple preferences, such as P2: "n+1 people is better than n people". The problem is that that preferences like that don't talk about just two subsets of worlds, but many more. Thus a partial preference was defined as a preorder. Now, a preorder is certainly rich enough to include preferences like P2, but its allows for far too many different types of structures, needing a complicated energy-minimisation procedure to turn a preorder into a utility function. This post presents another formalism for partial preferences, that keeps the initial intuition but can capture preferences like P2. The formalism Let W be the (finite) set of all worlds, seen as universes with their whole history. Let X be a subset of W, and let l be an injective (one-to-one) map from X to W. Define Y=l(X), the image of l, and l−1:Y→X as the inverse. Then the preference is determined by: * For all x∈X, x>l(x). If X and Y are disjoint, this just reproduces the original definition, with X=y+×Z and Y=y−×Z. But it also allows preference
61d56b2e-1bdc-4642-8378-d5049d220db2
StampyAI/alignment-research-dataset/arbital
Arbital
Mindcrime: Introduction The more predictive accuracy we want from a model, the more detailed the model becomes. A very rough model of an airplane might only contain the approximate shape, the power of the engines, and the mass of the airplane. A model good enough for engineering needs to be detailed enough to simulate the flow of air over the wings, the centripetal force on the fan blades, and more. As a model can predict the airplane in more and more fine detail and with better and better probability distributions, the computations carried out to make the model's predictions may start to look more and more like a detail simulation of the airplane flying. Consider a machine intelligence building, and testing, the best models it can manage of a human being's behavior. If the model that produces the *best* predictions involves simulations with moderate degrees of isomorphism to human cognition, then the model, as it runs, may itself be self-aware or conscious or sapient or whatever other property stands in for being an object of ethical concern. This doesn't mean that the running model of Fred is Fred, or even that the running model of Fred is human. The concern is that a sufficiently advanced model of a person will be *a* person, even if they might not be the *same* person. We might then worry that, for example, if Fred is unhappy, or *might* be unhappy, the agent will consider thousands or millions of hypotheses about versions of Fred. Hypotheses about suffering versions of Fred, when run, might themselves be suffering. As a similar concern, these hypotheses about Fred might then be discarded - cease to be run - if the agent sees new evidence and updates its model. Since [programs can be people](https://arbital.com/p/18j), stopping and erasing a conscious program is the crime of murder. This scenario, which we might call 'the problem of sapient models', is a subscenario of the general problem of what Bostrom terms 'mindcrime'. ([https://arbital.com/p/2](https://arbital.com/p/2) has suggested 'mindgenocide' as a term with fewer Orwellian connotations.) More generally, we might worry that there are agent systems that do huge amounts of moral harm just in virtue of the way they compute, by containing embedded conscious suffering and death. Another scenario might be called 'the problem of sapient subsystems'. It's possible that, for example, the most efficient possible system for, e.g., allocating memory to subprocesses, is a memory-allocating-subagent that is reflective enough to be an independently conscious person. This is distinguished from the problem of creating a single machine intelligence that is conscious and suffering, because the conscious agent might be hidden at a lower level of a design, and there might be a lot *more* of them than just one suffering superagent. Both of these scenarios constitute moral harm done inside the agent's computations, irrespective of its external behavior. We can't conclude that we've done no harm by building a superintelligence, just in virtue of the fact that the superintelligence doesn't outwardly kill anyone. There could be trillions of people suffering and dying *inside* the superintelligence. This sets mindcrime apart from almost all other concerns within the [https://arbital.com/p/5s](https://arbital.com/p/5s), which usually revolve around external behavior. To avoid mindgenocide, it would be very handy to know exactly which computations are or are not conscious, sapient, or otherwise objects of ethical concern. Or, indeed, to know that any particular class of computations are *not* objects of ethical concern. Yudkowsky calls a [nonperson predicate](https://arbital.com/p/) any computable test we could safely use to determine that a computation is definitely *not* a person. This test only needs two possible answers, "Not a person" and "Don't know". It's fine if the test says "Don't know" on some nonperson computations, so long as the test says "Don't know" on *all* people and never says "Not a person" when the computation is conscious after all. Since the test only definitely tells us about nonpersonhood, rather than detecting personhood in any positive sense, we can call it a nonperson predicate. However, the goal is not just to have any nonperson predicate - the predicate that only says "known nonperson" for the empty computation and no others meets this test. The goal is to have a nonperson predicate that includes powerful, useful computations. We want to be able to build an AI that is not a person, and let that AI build subprocesses that we know will not be people, and let that AI improve its models of environmental humans using hypotheses that we know are not people. This means the nonperson predicate does need to pass some AI designs, cognitive subprocess designs, and human models that are good enough for whatever it is we want the AI to do. This seems like it might be very hard for several reasons: - There is *unusually extreme* philosophical dispute, and confusion, about exactly which programs are and are not conscious or otherwise objects of ethical value. (It might not be exaggerating to scream "nobody knows what the hell is going on".) - We can't fully pass any class of programs that's [Turing-complete](https://arbital.com/p/). We can't say once and for all that it's safe to model gravitational interactions in a solar system, if enormous gravitational systems could encode computers that encode people. - The [https://arbital.com/p/42](https://arbital.com/p/42) problem applies to any attempt to forbid an [advanced](https://arbital.com/p/2c) [consequentialist agent](https://arbital.com/p/9h) from using the most effective or obvious ways of modeling humans. The *next* best way of modeling humans, outside the blocked-off options, is unusually likely to look like a weird loophole that turns out to encode sapience some way we didn't imagine. An alternative for preventing mindcrime without a trustworthy [nonperson predicate](https://arbital.com/p/) is to consider [agent designs intended *not* to model humans, or other minds, in great detail](https://arbital.com/p/102), since there may be some [pivotal achievements](https://arbital.com/p/6y) that can be accomplished without a value-aligned agent modeling human minds in detail.
91446665-3088-47a9-bca6-2631b72ab84e
trentmkelly/LessWrong-43k
LessWrong
Covid 7/16: Becoming the Mask Previous week: Covid 7/9: Lies, Damn Lies and Death Rates Two weeks ago I predicted a surge in Covid deaths starting on July 2 and picking up on July 7. On July 6, the rolling 7-day average of Covid deaths reached a local low of 480. Seven days later that average was 721, almost exactly 50% higher. It’s since pulled back slightly to 696. No doubt holiday delays are a small portion of that story, and helps to explain why things didn’t pick up until after the holiday backlog was gone, which I failed to anticipate at the time. That is a small part of this story, but only a small part. The good news is that, once again, things are getting worse slower than I would have anticipated two weeks ago. We might be remarkably close to turning the corner. The other good news is that masks, especially cloth masks, might be a lot of the story of reduced death rates, by reducing initial viral loads. Despite have a full debate and writing a whole post about this, I still didn’t take this possibility seriously enough in the last few explorations. That wouldn’t be quite as good news as the virus becoming less deadly, but it’s probably the second best possible news. Where do we go from here? When and where will deaths and infections peak? How long until we turn things around? How much locking down do we have left in the tank after all? Let’s run the numbers. Positive Test Counts By Region Date WEST MIDWEST SOUTH NORTHEAST May 7-May 13 22419 43256 37591 56892 May 14-May 20 22725 42762 40343 52982 May 21-May 27 23979 39418 42977 37029 May 28-June 3 32200 31504 50039 33370 June 4-June 10 35487 24674 55731 22693 June 11-June 17 41976 22510 75787 17891 June 18-June 24 66292 26792 107221 15446 June 25-July 1 85761 34974 163472 16303 July 2-July 8 103879 40139 202863 18226 July 9-July 15 108395 53229 250072 20276   Deaths by Region Date WEST MIDWEST SOUTH NORTHEAST May 7-May 13 1082 2288 1597 5327 Apr 23-29 1090 2060 1442 4541 Apr 30-May 6 775 1723 1290 3008 May 28-June 3 8
402199ed-b101-4261-92e1-e814eac36ca6
trentmkelly/LessWrong-43k
LessWrong
Transformative AI and Compute [Summary] Cross-posted here on the EA Forum. This is the summary of the series Transformative AI and Compute - A holistic approach. You can find the sequence here and the links to the posts below: 1. Compute [1/4] 2. Forecasting Compute [2/4] 3. Compute Governance and Conclusions [3/4] 4. Compute Research Questions and Metrics [4/4] 0. Executive Summary This series attempts to: 1. Introduce a simplified model of computing which serves as a foundational concept (Part 1 - Section 1). 2. Discuss the role of compute for AI systems (Part 1 - Section 2). * In Part 1 - Section 2.3 you can find the updated compute plot you have been coming for. 3. Explore the connection of compute trends and more capable AI systems over time (Part 1 - Section 3). 4. Discuss the compute component in forecasting efforts on transformative AI timelines (Part 2 - Section 4) 5. Propose ideas for better compute forecasts (Part 2 - Section 5). 6. Briefly outline the relevance of compute for AI Governance (Part 3 - Section 6). 7. Conclude this report and discuss next steps (Section 7). 8. Provide a list of connected research questions (Appendix A). 9. Present common compute metrics and discusses their caveats (Appendix B). 10. Provide a list of Startups in the AI Hardware domain (Appendix C). Abstract Modern progress in AI systems has been driven and enabled mainly by acquiring more computational resources. AI systems rely on computation-intensive training runs — they require massive amounts of compute. Learning about the compute requirements for training existing AI systems and their capabilities allows us to get a more nuanced understanding and take appropriate action within the technical and governance domain to enable a safe development of potential transformative AI systems. To understand the role of compute, I decided to (a) do a literature review, (b) update existing work with new data, (c) investigate the role of compute for timelines, and lastly, (d) explore conce
cac2df09-7be3-4fae-9040-7a1714eff324
trentmkelly/LessWrong-43k
LessWrong
Avoiding the emergency room Diana Hsieh interviews Dr. Doug McGuff about avoidable injuries and deaths. He's an emergency room physician in South Carolina, so he's pretty much just talking about what he's seen-- different regions have different characteristic injuries. He says that you're safest in the largest car you can afford, which raises some interesting ethical issues. There's a fair amount about the risks of getting overfocused on getting something done. This adds tremendously to the hazards of using ladders. Also, did you know trees can go sproing? One of hazards of chainsaws is that a good bit of energy might be stored in a twisted tree trunk. Don't just know your physics, apply it! More generally, there are machines and situations (ATVs, chainsaws, airplanes, skiing, etc.) which tend to make people feel more competent than they are.  On the other hand, injuries from rock climbing and horseback riding are less common than you might think. I don't know why the ancestral environment didn't give people a reflexive distaste against diving into water. Perhaps people back then had too much sense to dive much. One of the pieces of advice-- to get out of stressful relationships-- is too general. This is mostly a good idea, but from what I've read, leaving a violent relationship can lead to more risk of violence. It's still a good idea to leave, but it's important to leave cautiously. Both McGuff and Hsieh are objectivists, so some of the discussion might be in mind-killer territory. Edited to add: It's possible that objectivism would be better discussed under a new post. It's certain that there's a bunch of interesting material in the podcast, and avoidable accidents are worth discussing. Topic list:   * “Black swans” of health and “The Dirty Dozen” * #1: Driving a car or motorcycle * #2: Riding an ATV * #3: Biking or jogging on public roads * #4: Flying a plane or helicopter yourself * #5: Getting into a fight * #6: Lighting a gas grill * #7: Diving into water * #8: Usin
1461bda8-af29-4665-8408-4d13d7005d1c
StampyAI/alignment-research-dataset/blogs
Blogs
Track records for those who have made lots of predictions I love it when someone makes lots of predictions and then we can see their track record after the fact. Here's a collection of links to track records for anyone/anything interesting that has them tabulated, AFAIK. The basic idea -------------- The basic idea is that someone can write down specific, public predictions about the world with probabilities attached, like "60% chance that X wins the 2020 election." If they make a lot of predictions, they can then come back and assess whether their overall body of work was "well calibrated," which means that things they thought would happen 60% of the time happened 60% of the time; things they thought would happen 80% of the time happened 80% of the time; etc. This is done using a "calibration curve" (most of the links below explain how the curve works; [this explanation](https://electionbettingodds.com/TrackRecord.html) is pretty compact). People with good historical calibration then have evidence of their trustworthiness going forward. More on this general idea in the book [Superforecasting](https://smile.amazon.com/dp/B015LYZGW6/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1) (and more briefly at [this Open Philanthropy blog post](https://www.openphilanthropy.org/blog/efforts-improve-accuracy-our-judgments-and-forecasts)). Good track records ------------------ * [FiveThirtyEight](https://projects.fivethirtyeight.com/checking-our-work/)* [HyperMind](https://www.hypermind.com/en/can-hypermind-predict-the-future/), which contracts to create predictions. * A few individuals: + [Scott Alexander](https://astralcodexten.substack.com/p/2020-predictions-calibration-results) (this links to year-by-year track records, though I wish all the years were combined in one place). + [@peterhurford Twitter predictions](https://docs.google.com/spreadsheets/d/1xcgYF7Q0D95TPHLLSgwhWBHFrWZUGJn7yTyAhDR4vi0/edit#gid=0) (and [summary](https://twitter.com/peterwildeford/status/1361388809590611969)). + [Gwern](https://predictionbook.com/users/gwern). OK-to-pretty-good track records ------------------------------- * [ElectionBettingOdds.com](https://electionbettingodds.com/TrackRecord.html), which forecasts elections based on prediction markets (unfortunately not updated for 2020 elections yet). Good overall, though it looks like events they score as 20-30% likely are more likely to happen than predicted. (I wish they would combine the 20-30% predictions with the 70-80% predictions etc., since a 20-30% prediction that something will happen is just like a 70-80% prediction that it won't.) * [Metaculus](https://www.metaculus.com/questions/track-record/), a community forecasting site - skip to the last chart, which is the same idea as the 2nd chart from ElectionBettingOdds, though harder to read. Seems to be biased in the opposite direction from ElectionBettingOdds.com, i.e., overrating the likelihood of pretty unlikely events. * [All users in aggregate for PredictionBook](https://predictionbook.com/predictions) (a website that lets individuals track their own predictions) - well calibrated except at the very confident end. I'd guess this is a "wisdom of crowds" thing; it wouldn't make me trust a particular PredictionBook user very much. Less good track records ----------------------- [Track record for Scott Adams](https://docs.google.com/spreadsheets/d/1KiaNbaa1MJJbWeTC8MiYyc76cqkrmk1v8WA5WFUS_hI/edit#gid=2031713830), scored by someone other than Scott Adams (so not 100% reliable, though I checked a couple of them). It's really bad, probably easier to see from the "Predictions" sheet than from the chart. For example he [tweeted](https://mobile.twitter.com/ScottAdamsSays/status/1273823715977916416) "I put Trump’s odds of winning the election at 100% (unless something big changes) with a 50% chance of a successful coup" on 6/18/2020, [and](https://mobile.twitter.com/ScottAdamsSays/status/1273823715977916416) "If I had to bet today, Trump has the edge, 60-40" *two days after the election.* (He also [gave](https://mobile.twitter.com/ScottAdamsSays/status/1082622557365231616) 0% to Biden winning the nomination, earlier.) The reason I'm bothering with this (and probably the reason the maker of the spreadsheet did) is that Adams has gotten attention in some circles for his highly confident predictions of a Trump win in 2016, and I want to make the general point that a small # of impressive predictions can be very misleading, since the successful predictions tend to get more attention than the unsuccessful predictions. 2015 evidence that a [famous ESPN sports analyst](https://deadspin.com/espns-chad-ford-has-been-retroactively-editing-draft-bo-1681631642) had actually been editing his prediction-like statements (mock draft rankings) to make them look better with the benefit of hindsight. Tangentially related -------------------- [Independent analysis of Ray Kurzweil's predictions about 2019, made in 1999](https://www.lesswrong.com/posts/NcGBmDEe5qXB7dFBF/assessing-kurzweil-predictions-about-2019-the-results). He didn't give probabilities, and got more than twice as many wrong as right, though I think it's fair of the author to end with "I strongly suspect that most people's 1999 predictions about 2019 would have been a lot worse." [Open Philanthropy's attempt to assess the general accuracy of very long-range forecasts](https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting) mostly concluded that it's too hard to assess because of things like "People didn't make clear enough predictions or give probabilities."
e467199c-a86c-4b22-a34d-f80dca232b5c
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Legal Assistance for Victims of AI In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue to slow AI development down is to make investment in AI less attractive. This could be done by increasing the legal risk associated with incorporating AI in products. My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are at the very least directly responsible for [what their chatbots say](https://www.lawfareblog.com/section-230-wont-protect-chatgpt), and it seems like it's only a matter of time until a chatbot genuinely harms a user, either by gaslighting or by abusive behavior. A charity could provide legal assistance to victims of AI in seminal cases, similar to how EFF provides legal assistance for cases related to Internet freedom. Besides helping the affected person, this would hopefully: 1. Signal to organizations that giving users access to AI is risky business 2. Scare away new players in the market 3. Scare away investors 4. Give the AI company in question a bad rep, and sway the public opinion against AI companies in general 5. Limit the ventures large organizations would be willing to jump into 6. Spark policy discussions (e.g. about limiting minor access to chatbots, which would also limit profits) All of these things would make AI a worse investment, AI companies a less attractive place to work, etc. I'm not sure it'll make a big difference, but I don't think it's less likely to move the needle than academic work on AI safety.
f0adac49-2097-4cca-b664-ae6392dfd196
trentmkelly/LessWrong-43k
LessWrong
Keeping Choices Donation Neutral Every dollar I spend on myself is a dollar that could go much farther if spent on other people. I can give someone else a year of healthy life for about $50 [1] and there's no way $50 can do anywhere near that much to help me. I could go through my life constantly weighing every purchase against the good it could do, but this would make me miserable. So how do I accept that other people need my money more without giving up on being happy myself? For me the key is to make most choices donation neutral. As money comes in I divide it into "money to give to the most effective charity" and "money to spend as I wish". How to divide it is a hard and distressing choice, but it's one I only have to make once a year. Then when deciding to buy something (socks, rent, phone, instruments, food) I know it's money that isn't getting given away regardless, so I don't have to feel constantly guilty about making tradeoffs with people's lives. Julia and I have been using this system since 2009. [2] It's mostly worked well, but it's needed some additions. The main issue is that declining to spend money on yourself isn't the only way to trade off benefits to other people against costs to yourself. For example you could decide to be vegan, donate a kidney, or cash out your vacation days and give away the money. For ones that generate money directly (cashing out vacation) the solution is simple: that money goes into the pool that can't be given away. For ones that don't generate money you would convert them into money via the good you think they do. Take the most effective charity you know about, figure out how much you would need to give to them in order to have the same positive effect, and then move that amount of money from donations to self-spending. For example, I might estimate that giving $100 to the AMF does about as much good as being vegan for a year, so if I decided to go ahead with being vegan I would decrease my annual donations by $100 and allocate another $100 to spend o
be96c328-4169-44a8-92ed-d2a7ac8b32f8
trentmkelly/LessWrong-43k
LessWrong
Cognitive Dissonance is Mentally Taxing Cognitive dissonance is the discomfort we feel when our beliefs don't line up with our actions. More generally, the two dissonant things don't need to be belief and action. It can be the discomfort we feel when different beliefs of ours are in contradiction. It can also arise when our different actions lead to conflicting goals. Often, we aren't even fully conscious of this discomfort. It may be hard to notice in ourselves, but it is easy to spot in others. Here are some of the old, classic, examples from the empirical literature, served up and summarized courtesy of Gemini 2.5 Pro. It's unclear if these studies would survive replication attempts, but let's examine them nonetheless:   1. The Classic Forced Compliance Study (Festinger & Carlsmith, 1959): * Behavior: Participants performed extremely dull and repetitive tasks (like turning pegs on a board for an hour). Afterwards, they were asked to tell the next participant (who was actually a confederate) that the tasks were very interesting and enjoyable. * Manipulation: Participants were paid either $1 or $20 (a significant amount in 1959) to lie. A control group did the tasks but didn't lie. * Dissonance: Those paid only $1 experienced high dissonance. Their behavior (lying) conflicted strongly with their belief (the task was incredibly boring), and they had insufficient external justification for the lie ($1 wasn't really enough to justify misleading someone). Those paid $20 had sufficient external justification – they could tell themselves they lied for the money. * Belief Change: When later asked to rate how enjoyable the tasks actually were, the participants paid $1 rated the tasks significantly more enjoyable than those paid $20 or those in the control group. To reduce the dissonance caused by lying for a paltry sum, they unconsciously changed their belief about the task itself, convincing themselves it wasn't so bad after all. The $20 group didn't need to change their belief; the money jus
0650e093-7350-4d3f-9cba-3584fa29831f
trentmkelly/LessWrong-43k
LessWrong
Meetup : Moscow, Different Reports Discussion article for the meetup : Moscow, Different Reports WHEN: 23 February 2014 04:00:00PM (+0400) WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16 We will gather at the same second entrance, but we will go to a room inside the building at 16:00. So please do not be late. We will have: * Report about genetics and epigenetics. * Report about "Decisive: How to Make Better Choices in Life and Work" book. * Stumbling on happiness for rationalists presentation. * Something about Cognitive behavioural therapy, possibly a report. We gather in the Yandex office, you need the second revolving door with the sign “Яндекс”, here is the photo of the entrance you need. You need to pass the first entrance and through the archway. Here is additional guide how to get there: link. You can fill this one minute form (in Russian), to share your contact information. We start at 16:00 and sometimes finish at night. Please pay attention that we only gather near the second entrance and then come inside. Discussion article for the meetup : Moscow, Different Reports
319e0649-6f55-4a9e-8239-c5a16a4a2953
trentmkelly/LessWrong-43k
LessWrong
ToL: Foundations (These are the touched up notes from a class I took with CMU's Kevin Kelly this past semester on the Topology of Learning. Only partially optimized for legibility) Possible World Semantics Let W be the set of all possible worlds. The nature of your inquiry is going to shape what sorts of worlds are in W . Now we consider some true or false proposition A concerning W. A could be "There are more than 15 people in North America". The key idea in possible world semantics is that every proposition A is represented by the set of all worlds where A is true. Def: Proposition A={w∈W:A is true in w} We'll still refer to propositions by their English sentence description, but when we start doing math with them you should think of them as sets of worlds. Here are some consequences of defining logical propositions as such: 1. A∨B=A∪B 2. A∧B=A∩B 3. ¬A=Ac 4. A→B=A⊆B 5. ⊤=W 6. ⊥=∅ Convince yourself that the above are true. If you're wondering, "Hmmm, the logic of set containment and classical logic seem eerily similar" you're right, and might want to ponder if that has anything to do with how we decided what the rules of classical logic should be in the first place. Information Basis Now that we have our world, the next thing we want is our information basis I. I is made up of information states, and each information state is a proposition. This means that they follow all the same union and intersection rules that other propositions do, and that an info state is a set of possible worlds. However, your info basis can't just be any old set of proposition. We are trying to capture the set of propositions that you could know about the world. Upfront, I want to acknowledge that this "could" might cause some confusion. How do we know what you could or couldn't know? Isn't that what we're trying to figure out? For now, we will resolve that with the following distinction. When we're talking about possible worlds and information states, they are not defined by some requiremen
75879f85-e5a5-46e6-a7f7-6a93496a528e
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
AI Safety groups should imitate career development clubs If you want to get people to do things (like learn about AI Safety) **you have to offer them something valuable.** Here’s one of the posters we used when I was in charge of marketing for the Columbia EA group: ![](http://res.cloudinary.com/cea/image/upload/v1670046498/mirroredImages/vEAieBkRqL7Rj8KvY/inssygiblpukbuyp77wg.png) It’s a pretty graphic, but *what valuable thing is it offering?* The message is “scan this link to talk about AI.” To be fair, people like talking about AI. We had applicants.  But we didn’t attract talented ML students. **If you want to attract talented people, you have to know what they want.**Serious and ambitious people probably don’t want to sit around having philosophical discussions. They *want to build their careers.* Enter [ML @ Berkeley](https://ml.berkeley.edu/), a thriving group of 50 ML students who put **15 hours per week**into projects and courses to become better at ML. No one gets paid – not even the organizers. And they are *very*selective. Only around 7% of applicants get in. ![](http://res.cloudinary.com/cea/image/upload/v1670046500/mirroredImages/vEAieBkRqL7Rj8KvY/u5fkugcyadkosfzwjevs.png)ML@B’ers eager to learn some ML.Why is this group successful? For starters, they offer career capital. They give students projects that often turn into real published papers. They also *concentrate talent.*Ambitious people want to work with other ambitious people.  ![](http://res.cloudinary.com/cea/image/upload/v1670046499/mirroredImages/vEAieBkRqL7Rj8KvY/ezpixom53pmleribvxlj.png)*Student researchers hanging out with Ian Goodfellow (a famous AI researcher).* **AI safety student groups should consider imitating ML @ Berkeley.**  I’m not saying that we should eliminate philosophical discussions and replace them with resume boosting factories. We still want people to think AI Safety and X-risk are important. But discussions don’t need to be the primary selling point. Maybe for cultivating conceptual researchers, it makes more sense for discussions to be central. But conceptual and empirical AI Safety research are very different. **ML students are probably more interested in projects and skill-building.** More rigorous programming could also make it easier to identify talent.  * Talking about AI is fun, but top ML researchers work extremely hard. Rigorous technical curricula can filter out the ones that are driven. * There is nothing like a trial by fire. **Instead of trying to predict in advance who will be good at research, why not have lots of people try it and invest in those that do well?**   ![](http://res.cloudinary.com/cea/image/upload/v1670046499/mirroredImages/vEAieBkRqL7Rj8KvY/tmrt0lp1dd8ezretv7mn.png)ML@B students presenting their paper at ICML USC field builders are experimenting with a curriculum that, in addition to introducing X-risk, is  packed-full with technical projects. In their first semester, they attracted 30 students who all have strong ML backgrounds. I’m interested in seeing how this goes and would be excited about more AI Safety groups running experiments on these lines.  People could also try: * checking whether grad students are willing to supervise group research projects. * running deep learning courses and training programs (like Redwood’s [MLAB](https://forum.effectivealtruism.org/posts/vvocfhQ7bcBR4FLBx/apply-to-the-second-ml-for-alignment-bootcamp-mlab-2-in)) * running an in-person section of [intro to ML Safety](https://forum.effectivealtruism.org/posts/vHxKLNQciXN4taEdd/applications-are-now-open-for-intro-to-ml-safety-spring-2023) (a technical course that covers safety topics). Conclusion ---------- As far as I can tell, no one has AI safety university field-building all figured out. Rather than copying the same old discussion group model, people should experiment with new approaches. A good start could be to imitate career development clubs like ML @ Berkeley that have been highly successful. *Thanks to Nat Li and Oliver Zhang for thoughts and feedback and to Dan Hendrycks for conversations that inspired this post.*
457d36d8-2d70-4fe2-872d-b572b27983b7
trentmkelly/LessWrong-43k
LessWrong
Exploring Finite Factored Sets with some toy examples Seeing as there is little secondary literature for the Finite Factored Set formalism, I thought I’d write up my experience of exploring it through some toy examples that are classic examples in the Pearlian paradigm. My goal was to see how these models that I understood very well in the Pearlian paradigm would work in Finite Factored Sets. As a warning, this doesn’t make any use of the more exciting properties of Finite Factored Sets. It’s just an exploration of how this formalism handles the mundane stuff. This also means that I’m using the factored set directly, without the abstractions of the orthogonality database. Which I think is fine here, because these are tiny toy examples whose structure is fully known. (However, it’s possible that I’ve missed the entire point of Finite Factored Sets.) ---------------------------------------- The first example is the 3-variable collider that is very central to Pearl’s formalism. It is given by the following Causal Diagram: A, B, and C are all binary variables (0=false, 1=true). The intended meaning of a Causal Diagram (or rather, function causal model) is that the value of a node xi is given by a deterministic function that takes as input the parents, pa(xi), (indicated by the arrows) and an “error”, or “noise”, variable ui that is governed by a probability distribution that is independent from the other error/noise variables: xi=fi(pa(xi),ui). Thus, the value of C is given by c=fc(a,b,uC) where uC is noise or uncertainty that is not explicitly modeled, which we can visualize like this: We could also split up A and B into a deterministic and a random part, but as they are root nodes, there is little point. It would just be a=fa(uA). The Pearlian formalism runs on graphs, but Finite Factored Sets run on the set S of all possible outcomes – the sample space. So, the goal is now to construct a sample space that is consistent to the above graph. After that, we’ll find a factorization of that sample space. I think i
a7c6ce8b-e366-4d2e-8771-e23ea8eedebb
StampyAI/alignment-research-dataset/lesswrong
LessWrong
LLMs and hallucination, like white on rice? *Cross posted from* [*New Savanna*](https://new-savanna.blogspot.com/2023/04/llms-and-hallucination-like-white-on.html)*.* It’s a matter of history, I suppose. We know that LLMs confabulate. They make stuff up. That’s what’s known as hallucination. Here’s the introductory section of the Wikipedia entry for [Hallucination (artificial intelligence)](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)): > In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called delusion) is a confident response by an AI that does not seem to be justified by its training data. For example, a hallucinating chatbot with no knowledge of Tesla's revenue might internally pick a random number (such as "$13.6 billion") that the chatbot deems plausible, and then go on to falsely and repeatedly insist that Tesla's revenue is $13.6 billion, with no sign of internal awareness that the figure was a product of its own imagination. > > Such phenomena are termed "hallucinations", in analogy with the phenomenon of hallucination in human psychology. Note that while a human hallucination is a percept by a human that cannot sensibly be associated with the portion of the external world that the human is currently directly observing with sense organs, an AI hallucination is instead a confident response by an AI that cannot be grounded in any of its training data. AI hallucination gained prominence around 2022 alongside the rollout of certain large language models (LLMs) such as ChatGPT. Users complained that such bots often seemed to "sociopathically" and pointlessly embed plausible-sounding random falsehoods within its generated content. Another example of hallucination in artificial intelligence is when the AI or chatbot forget that they are one and claim to be human. > > By 2023, analysts considered frequent hallucination to be a major problem in LLM technology. > > Note the dating: “...gained prominence around 2022...” The term had been in use earlier. I first encountered it in Language Log, a group blog about language. A post from April 15, 2017, [What a tangled web they weave](https://languagelog.ldc.upenn.edu/nll/?p=32170), may not be the earliest example there, but it’s probably close to it. It is about how the current verion of Google Translate dealt with repetitions of character sequences from non-Latin ortographic systems. Here’s the first example from that post: > ュース repeated gives successively: > > > > Juice > > News > > Auspicious > > Hooooooooooooooooooooooooooo > > Yooooooooooooooooooooooooooooo > > Yu Susui Suu Suu Su > > Yu Susui Suu Suu Suu Su > > Susui Suu Suu Suu Suu Su > > Susui Suu Suu Suu Suu Suu Su > > Susui Suu Suu Suu Suu Suu Suu Su > > Susuue with the airport > > It is a good thing to see and do. > > It is a good idea to have a good view of the surrounding area. > > It is a good thing for you to do. > > It is good to know the things you do not do. > > It is good to know the things you do not mind. > > It is a good idea to have a good view of the surrounding area. > > > > > > This is a similar, but not quite the same phenomenon. I have no idea whether or not there is a genealogical relationship between this earler use of “hallucination” and the current use in the context of LLMs. It does seem to me, however, that hallucination is the ‘natural’ – by which I mean something like default – state of LLMs. To be sure, that training corpus has all kinds of texts, true stories, fantasies, lies, valid explanatory material, and so forth, but it has no way of making epistemological distinctions among them. It’s all text. And “true,” “false,” “lying,” “fantasy,” “imaginary,” “dream,” “verified,” and so forth, they’re all just words. While it can work out the relationships those words have among themselves and with other words, it has no way of linking any words or combinations of words to the external world. Since it has no direct contact with reality, it has no way of calibrating the relationship between texts and reality. What the LLM has learned is the system of language, but not how that system is related to the world. Without knowledge of that relationship, all it can do is reel off texts that are consistent with the system. Moreover, I strongly suspect – for I’ve not thought it through – that there is more to the relationship between language system and the world than a simple collection of pointers. It is, I suppose, a peculiar kind of Cartesian being. And it has no way of knowing whether the prompts we give it are the work of a malignant being or the work of angels. I note, finally, that we devote considerable effort to keeping human language, and texts, aligned with the world. And these efforts tend to be bound to cultural formations, some of which are anchored to ethnic and national groups, many more of which are anchored to specific disciplinary practices. If “hallucination” were not so easy for humans, conspiracy theories and millenial cults would not flourish so easily, would they? There more I think of it, the more it seems to me that eliminating these hallucinations will not be an easy task. It may, in fact, and for all practical purposes, be the major task of so-called AI alignment. See my earlier post, [There’s truth, lies, and there’s ChatGPT [Realms of Being]](https://new-savanna.blogspot.com/2023/01/theres-truth-lies-and-theres-chatgpt.html).