Dataset columns:
* id: string, length 36
* source: string, 15 classes
* formatted_source: string, 13 classes
* text: string, length 2 to 7.55M
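The sources named in the rows below are Hugging Face datasets, so rows like these can be pulled and inspected with the `datasets` library. This is a minimal sketch: the dataset path and split are assumptions, and the upstream datasets may expose a different column set than the combined dump summarized above.

```python
# Minimal sketch: pull one of the source datasets named in the rows below and
# inspect its columns. The dataset path and split are assumptions; the combined
# dump summarized above may live elsewhere and the upstream columns may differ.
from datasets import load_dataset

ds = load_dataset("trentmkelly/LessWrong-43k", split="train")  # assumed path/split
print(ds.column_names)   # expected to include "text"
print(len(ds))

for row in ds.select(range(3)):   # peek at the first few records
    print(row["text"][:120].replace("\n", " "), "...")
```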
d5e96077-9960-4a40-947e-8ef7c3a8b31c
trentmkelly/LessWrong-43k
LessWrong
Meetup : LessWrong Australia - Online Hangout Discussion article for the meetup : LessWrong Australia - Online Hangout WHEN: 13 July 2014 06:30:00PM (+1000) WHERE: Australia From the people who brought you the LessWrong Australia mega-meetup, we now bring you an Australia-wide mega online hangout. The first hangout will be on Sunday the 13th from 6:30pm till midnight (Sydney time). If you want to check in, feel free to do so! The link is here: https://plus.google.com/hangouts/_/gxswklalufwfqamex4yhkanevqa (subject to change depending on whether Google likes us or not). Discussion article for the meetup : LessWrong Australia - Online Hangout
a131c122-535e-4171-bd5c-abeac87fd07c
trentmkelly/LessWrong-43k
LessWrong
A Primer on the Symmetry Theory of Valence Crossposted from opentheory.net STV is Qualia Research Institute‘s candidate for a universal theory of valence, first proposed in Principia Qualia (2016). The following is a brief discussion of why existing theories are unsatisfying, what STV says, and key milestones so far. ---------------------------------------- I. Suffering is a puzzle We know suffering when we feel it — but what is it? What would a satisfying answer for this even look like? The psychological default model of suffering is “suffering is caused by not getting what you want.” This is the model that evolution has primed us toward. Empirically, it appears false (1)(2). The Buddhist critique suggests that most suffering actually comes from holding this as our model of suffering. My co-founder Romeo Stevens suggests that we create a huge amount of unpleasantness by identifying with the sensations we want and making a commitment to ‘dukkha’ ourselves until we get them. When this fails to produce happiness, we take our failure as evidence we simply need to be more skillful in controlling our sensations, to work harder to get what we want, to suffer more until we reach our goal — whereas in reality there is no reasonable way we can force our sensations to be “stable, controllable, and satisfying” all the time. As Romeo puts it, “The mind is like a child that thinks that if it just finds the right flavor of cake it can live off of it with no stomach aches or other negative results.” Buddhism itself is a brilliant internal psychology of suffering (1)(2), but has strict limits: it’s dogmatically silent on the influence of external factors on suffering, such as health, relationships, or anything having to do with the brain. The Aristotelian model of suffering & well-being identifies a set of baseline conditions and virtues for human happiness, with suffering being due to deviations from these conditions. Modern psychology and psychiatry are tacitly built on this model, with one popular version being S
cab50511-299e-4b5e-b9b4-8fd5713018ae
trentmkelly/LessWrong-43k
LessWrong
June Outreach Thread Please share any outreach you have done to convey rationality- and effective altruism-themed ideas broadly, whether recent or not, that you have not yet shared in previous Outreach threads. The goal of this thread is to organize information about outreach and to provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Sharing can also help inspire others to emulate some aspects of these good deeds through social proof and network effects.
8ce9d738-409c-4fdd-803a-dd73a3487d5e
StampyAI/alignment-research-dataset/blogs
Blogs
July 2021 Newsletter #### MIRI updates * MIRI researcher Evan Hubinger discusses learned optimization, interpretability, and homogeneity in takeoff speeds [on the Inside View podcast](https://www.lesswrong.com/posts/NFfZsWrzALPdw54NL). * Scott Garrabrant releases part three of "[Finite Factored Sets](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr)", on [conditional orthogonality](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr/p/hA6z9s72KZDYpuFhq). * UC Berkeley's Daniel Filan provides examples of conditional orthogonality in finite factored sets: [1](https://www.lesswrong.com/posts/qGjCt4Xq83MBaygPx/a-simple-example-of-conditional-orthogonality-in-finite), [2](https://www.lesswrong.com/posts/GFGNwCwkffBevyXR2/a-second-example-of-conditional-orthogonality-in-finite). * Abram Demski proposes [factoring the alignment problem](https://www.lesswrong.com/posts/vayxfTSQEDtwhPGpW) into "outer alignment" / "on-distribution alignment", "inner robustness" / "capability robustness", and "objective robustness" / "inner alignment". * MIRI senior researcher Eliezer Yudkowsky [summarizes](https://twitter.com/ESYudkowsky/status/1405580521237745665) "the real core of the argument for 'AGI risk' (AGI ruin)" as "appreciating the power of intelligence enough to realize that getting superhuman intelligence wrong, *on the first try*, will kill you *on that first try*, not let you learn and try again". #### News and links * From DeepMind: "[generally capable agents emerge from open-ended play](https://www.lesswrong.com/posts/mTGrrX8SZJ2tQDuqz/deepmind-generally-capable-agents-emerge-from-open-ended)". * DeepMind’s safety team summarizes their work to date on [causal influence diagrams](https://www.lesswrong.com/posts/Cd7Hw492RqooYgQAS). * [Another (outer) alignment failure story](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji) is [similar to](https://www.lesswrong.com/posts/7qhtuQLCCvmwCPfXK/ama-paul-christiano-alignment-researcher?commentId=oj4rm8937fyJzFwjL) Paul Christiano's best guess at how AI might cause human extinction. * Christiano discusses a "special case of alignment: solve alignment [when decisions are 'low stakes'](https://www.lesswrong.com/posts/TPan9sQFuPP6jgEJo)". * Andrew Critch argues that power dynamics are "[a blind spot or blurry spot](https://www.lesswrong.com/posts/WjsyEBHgSstgfXTvm/power-dynamics-as-a-blind-spot-or-blurry-spot-in-our)" in the collective world-modeling of the effective altruism and rationality communities, "especially around AI". The post [July 2021 Newsletter](https://intelligence.org/2021/08/03/july-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
02f3190a-8513-414b-a74e-cbe2c61c18c0
trentmkelly/LessWrong-43k
LessWrong
Humanity becomes more untilitarian with time I would think there'd be evolutionary pressure to focus more and more on having descendants. What's actually happened so far is that people do more for signalling and fun and limit the number of their children. Is this just a blip, and the Mormons (perhaps with a simplified religion) will inherit the earth?  
c497b9ed-be23-4830-b3f0-a8a466b22a57
trentmkelly/LessWrong-43k
LessWrong
Meetup : Chicago: Discuss Thinking, Fast and Slow Discussion article for the meetup : Chicago: Discuss Thinking, Fast and Slow WHEN: 03 August 2013 03:00:00PM (-0500) WHERE: Corner Bakery, 360 N. Michigan Ave., Chicago, IL We'll be discussing the beginning of Kahneman's Thinking, Fast and Slow as part of a series of meetups for this book. Discussion article for the meetup : Chicago: Discuss Thinking, Fast and Slow
b9b7b562-7945-477d-b7c8-4cb5dce8a20f
StampyAI/alignment-research-dataset/blogs
Blogs
2011 Summer Matching Challenge Success! Thanks to the effort of our donors, the 2011 Summer Singularity Challenge has been met! All $125,000 contributed will be matched dollar for dollar by our matching backers, raising a total of $250,000 to fund the Machine Intelligence Research Institute's operations. We reached our goal two days early, near midnight of August 29th. On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Your dollars make the difference. Here's to a better future for the human species. The post [2011 Summer Matching Challenge Success!](https://intelligence.org/2011/09/01/2011-summer-matching-challenge-success/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
e36e0aa0-65f0-4bda-94af-93a61aaba3ec
trentmkelly/LessWrong-43k
LessWrong
[LINK] stats.stackexchange.com question about Shalizi's Bayesian Backward Arrow of Time paper Link to the Question I haven't gotten an answer on this yet and I set up a bounty; I figured I'd link it here too in case any stats/physics people care to take a crack at it.
bdeb4e27-057c-4343-81b3-92e60ce5ff51
StampyAI/alignment-research-dataset/lesswrong
LessWrong
[Book] Interpretable Machine Learning: A Guide for Making Black Box Models Explainable This is a book by [Christoph Molnar](https://christophm.github.io/) about interpretability. It covers a lot of traditional, non AI safety interpretability. Some recommendations for chapters to read are in an excerpt from our summary of it in [the interpretability starter resources](https://github.com/apartresearch/interpretability-starter): > An amazing and up-to-date introduction to traditional interpretability research. We recommend reading the [taxonomy of interpretability](https://christophm.github.io/interpretable-ml-book/taxonomy-of-interpretability-methods.html) and about the specific methods of [PDP](https://christophm.github.io/interpretable-ml-book/pdp.html), [ALE](https://christophm.github.io/interpretable-ml-book/ale.html), [ICE](https://christophm.github.io/interpretable-ml-book/ice.html), [LIME](https://christophm.github.io/interpretable-ml-book/lime.html), [Shapley values](https://christophm.github.io/interpretable-ml-book/shapley.html), and [SHAP](https://christophm.github.io/interpretable-ml-book/shap.html) . Also read the chapter on [neural network interpretation](https://christophm.github.io/interpretable-ml-book/neural-networks.html) such as [saliency maps](https://christophm.github.io/interpretable-ml-book/pixel-attribution.html) and [adversarial examples](https://christophm.github.io/interpretable-ml-book/adversarial.html). He has also published a ["Common pitfalls of interpretability" post](https://mindfulmodeler.substack.com/p/8-pitfalls-to-avoid-when-interpreting) (and its [paper](https://arxiv.org/pdf/2007.04131.pdf)) that is recommended reading. > > **Why share it here?** It is a continuously-updated, high quality book on interpretability as seen from outside AI safety which seems very relevant to understanding what the field in general looks like. It has [78 authors](https://github.com/christophM/interpretable-ml-book/graphs/contributors) and seems very canon. The first edition is from Apr 11, 2019 while the second edition (latest version) is from [Mar 04, 2022](https://github.com/christophM/interpretable-ml-book/releases). It was last updated on the Oct 13, 2022. From the introduction of the book: > Machine learning has great potential for improving products, processes and research. But **computers usually do not explain their predictions** which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable. > > After exploring the concepts of interpretability, you will learn about simple, **interpretable models** such as decision trees, decision rules and linear regression. The focus of the book is on model-agnostic methods for **interpreting black box models** such as feature importance and accumulated local effects, and explaining individual predictions with Shapley values and LIME. In addition, the book presents methods specific to deep neural networks. > > All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable. > >
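As an illustrative sketch (not taken from the book), here is roughly what two of the recommended model-agnostic methods look like in scikit-learn: permutation-based feature importance and partial dependence / ICE curves. The synthetic data, model choice, and feature indices are assumptions made purely for the sake of a runnable example.

```python
# Illustrative sketch (not from the book): two of the model-agnostic methods the
# summary recommends -- permutation feature importance and partial dependence /
# ICE curves -- applied to a synthetic regression problem with scikit-learn.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt the score?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, mean in enumerate(imp.importances_mean):
    print(f"feature {i}: permutation importance {mean:.3f}")

# Partial dependence plus ICE curves for the first two features; kind="both"
# overlays the average PDP on the per-sample ICE lines.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1], kind="both")
plt.show()
```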
8e283747-97ee-411f-9067-663504f2aae7
trentmkelly/LessWrong-43k
LessWrong
Meetup : London Games Meetup 09/03 [VENUE CHANGE: PENDEREL'S OAK!], + Social 16/02 Discussion article for the meetup : London Games Meetup 09/03, + Socials 02/03 and 16/02 WHEN: 09 March 2014 02:00:00PM (+0000) WHERE: 283-288 High Holborn, City of London, WC1V 7HP LessWrong London's next non-social gathering is going to be on the 9th of March and is going to be a Games Meetup at a new location - The Penderel's Oak pub located about 5-10 minutes away from our usual spot in the middle between Chancery Lane and Holborn stations (I'd recommend looking at the map to get a better idea of the location) Thanks to Phil we have a wide range of choices.The main ones are Resistance, Coup and Zendo. Alternatively, we will be able to play Ingenious, Go, Diplomacy (only if people insist on it) or card games. We are also having socials on the 16th of March as the Meetups are currently a weekly event. If you want more information about the meetups or anything else come by our google group or alternatively to our facebook group. If you have trouble finding us - feel free to call or text me on 07425168803. Discussion article for the meetup : London Games Meetup 09/03, + Socials 02/03 and 16/02
d7815ffe-6b0c-4465-a3bf-0c34dfca1017
trentmkelly/LessWrong-43k
LessWrong
Gaining knowledge at a price 1. In our lives we often pay a price for knowledge (a different price for each circumstance), whether it be something negative happening to us or missing a perceived valuable opportunity. Sometimes we don't recoup the cost of that knowledge during our lifetime; other times we gain it back manifold (sometimes it's an unconscious purchase, sometimes even against one's will). 2. Sometimes it's a fully thought-out transaction, although we often forget later on the reasoning behind our choices (only focusing on the circumstance and the outcome, forgetting how valuable the experience it provided is). For instance, those experiences may have guided us away from certain bad paths in our lives, but we never account for the value of the absence of said paths, because they are no longer part of the calculations of where to go since we discard them right away. 3. For example, as we grow older, we might think, 'I haven't encountered anything like that since, so I didn't gain much from that experience, therefore it wasn't worth it.' But you haven't encountered it much because you have knowledge about it, and you might have learned something that allows you to instinctively prevent it from showing up. And so we take for granted all the times we make the correct choice (whether big or small), but we often forget that we learned it once, and possibly at a certain price.
3fff849f-1f4b-4cca-a179-740fcc7f2250
trentmkelly/LessWrong-43k
LessWrong
Datasets that change the odds you exist 1. It’s October 1962. The Cuban missile crisis just happened, thankfully without apocalyptic nuclear war. But still: * Apocalyptic nuclear war easily could have happened. * Crises as serious as the Cuban missile crisis clearly aren’t that rare, since one just happened. You estimate (like President Kennedy) that there was a 25% chance the Cuban missile crisis could have escalated to nuclear war. And you estimate that there’s a 4% chance of an equally severe crisis happening each year (around 4 per century). Put together, these numbers suggest there’s a 1% chance that each year might bring nuclear war. Small but terrifying. But then 62 years tick by without nuclear war. If a button has a 1% chance of activating and you press it 62 times, the odds are almost 50/50 that it would activate. So should you revise your estimate to something lower than 1%? 2. There are two schools of thought. The first school reasons as follows: * Call the yearly chance of nuclear war W. * This W is a “hidden variable”. You can’t observe it but you can make a guess. * But the higher W is, the less likely that you’d survive 62 years without nuclear war. * So after 62 years, higher values of W are less plausible than they were before, and lower values more plausible. So you should lower your best estimate of W. Meanwhile, the second school reasons like this: * Wait, wait, wait—hold on. * If there had been nuclear war, you wouldn’t be here to calculate these probabilities. * It can’t be right to use data when the data can only ever pull you in one direction. * So you should ignore the data. Or at least give it much less weight. Who’s right? 3. Here’s another scenario: Say there’s a universe. In this universe, there are lots of planets. On each planet there’s some probability that life will evolve and become conscious and notice that it exists. You’re not sure what that probability is, but your best guess is that it’s really small. But hey, wait a second, you’re a life-for
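A minimal sketch of the two calculations the post leans on, assuming the numbers given there (a 1% yearly chance and 62 elapsed years); the uniform prior over the hidden yearly probability W, capped at 5%, is an assumption for illustration.

```python
# Sketch of the two calculations in the post (illustrative, not the author's code):
# (1) the chance a 1%-per-year event fires at least once in 62 years, and
# (2) the "first school" Bayesian update of the hidden yearly probability W
#     after observing 62 years without nuclear war.
import numpy as np

# (1) pressing a 1% button 62 times
p_at_least_once = 1 - 0.99 ** 62
print(f"P(at least one war in 62 years | W = 1%) = {p_at_least_once:.2f}")  # ~0.46

# (2) treat W as a hidden variable with a uniform prior on [0, 0.05] (an assumed
# range) and update on the likelihood of 62 war-free years, (1 - W)^62.
w = np.linspace(0.0, 0.05, 1001)
prior = np.ones_like(w)
prior /= prior.sum()
posterior = prior * (1 - w) ** 62
posterior /= posterior.sum()

print(f"E[W] under the prior:          {np.sum(w * prior):.3%}")
print(f"E[W] after 62 war-free years:  {np.sum(w * posterior):.3%}")  # pulled downward
```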
cbe5ad17-7ba8-42eb-b037-43e9d3cd20ee
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
An audio version of the alignment problem from a deep learning perspective by Richard Ngo et al. Hello everyone! For anyone interested in an audio version of [this amazing research](https://arxiv.org/pdf/2209.00626.pdf), I created [an audio/voiceover](https://www.whitehatstoic.com/p/the-alignment-problem-from-a-deep#details) version of it. Enjoy! Credits go to the original authors; I just love how clear and concise this paper is. Read the full research paper here: <https://arxiv.org/pdf/2209.00626.pdf>
3aa63539-bff4-44e5-9baf-76cac209e7c1
trentmkelly/LessWrong-43k
LessWrong
[Cryonics News] Australian cryonics startup: Stasis Systems Australia update Potentially of interest to my fellow antipodean LessWrongers. Stasis Systems Australia is a company seeking to start a cryonics facility in Australia. Their website is pretty sparse, but they just sent out a mailing list update on how they are going, and it doesn't appear that the information therein is located in any news section of the website, so I thought I'd post it here.   > Hello, > > We at  Stasis Systems Australia Ltd are happy to report our plans to build a cryonics facility in Australia are progressing well. > > You may remember contacting us or attending our online meeting last year, but here's a quick reminder of what we're doing. > > We are a group of Australasian cryonicists putting together a non-for-profit organisation to build and run the first cryonic storage facility in the southern hemisphere. > > We're proud to have WA-based Marta Sandberg on the board of directors as an advisor, as she has a wealth of knowledge and experience from her ongoing role as a director of the  well-established Cryonics Institute. > > We are now officially incorporated as a not-for-profit company, and one investor away from the magic number of ten that will trigger the next stage of the project - selecting a piece of land and starting construction! > > We have had productive discussions with the NSW Department of Health, and are developing positive relationships with the Cryonics Institute, Alcor, and KrioRus. > > We think what we’re doing is worthwhile and in the long term will be of great benefit to the Australasian community.  If you’d like to get involved either as an investor or a volunteer, that would be fantastic.  We’d love help with articles for the website, search engine optimisation, web graphics, or any other skill you have that we might need. > > We would especially appreciate you passing this update on to anyone you know who might be interested. > >   > > Best regards, > > The Stasis Systems Australia team > > www.StasisSystemsAustralia.com
bfd7bb8b-2029-44e8-b0dc-d04743da101d
trentmkelly/LessWrong-43k
LessWrong
Digital Dinner Signup For house dinner we need a way of coordinating who is going to cook what days and marking who will/won't be at dinner. For the past ~7y we've used a piece of paper on the fridge with a row for each day. This is not a bad system, and it's generally worked well, but it has a few downsides: * Some of our housemates live downstairs, and coming up to read or modify the sheet is a bit annoying. * One of our housemates will soon move to a nearby house, making this even harder. * Same issue if you are traveling or want to make a change while you're out. * Some housemates are "default present" while others are "default absent", and it's annoying remembering which. * We're adding a new housemate who will only be here some weeks. So we're trying out a new system, a digital one. Like with any post you make right when you start a new system there's some risk that it isn't a good one and you don't know yet, but that beats never describing it at all. We now have one Google Calendar event for each day of the week, each marked as recurring weekly. Each housemate is invited, and can RSVP to individual instances or the repeated event. This lets you communicate "I'll be gone this Friday", or "I'm not going to make Fridays", or "I'm not going to make Fridays, but I'll be here this Friday". To cook you edit the title to add "Name cooking" and if you're bringing a guest you can either invite them to the instance of the event or add their name to the title. It's also helpful to see cooking and eating plans at a glance, so I made a web page that summarizes the current state. You log in with Google, give it read access to your calendar, and it summarizes events named "Dinner": The page is here and you're welcome to look at the source, but it won't work for you unless you live here. If you want to tweak and use for your house, though, go ahead! There are comments in the HTML source saying how. Overall this is almost the way I like it, except that login only lasts for an hou
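The author's summary page is a logged-in web app, but the same idea can be sketched against the Google Calendar API. This is a rough, assumption-laden sketch rather than the author's code: it assumes you already hold authorized API credentials in `creds`, that the shared calendar is the default one, and that the relevant events are titled "Dinner" as in the post.

```python
# Rough sketch of the same idea as the author's summary page (not their code):
# list this week's events named "Dinner" and show who has RSVP'd. Assumes you
# already have authorized Google Calendar API credentials in `creds`.
import datetime
from googleapiclient.discovery import build

def week_of_dinners(creds, calendar_id="primary"):
    service = build("calendar", "v3", credentials=creds)
    now = datetime.datetime.utcnow()
    time_min = now.isoformat() + "Z"
    time_max = (now + datetime.timedelta(days=7)).isoformat() + "Z"
    events = service.events().list(
        calendarId=calendar_id,
        timeMin=time_min,
        timeMax=time_max,
        singleEvents=True,        # expand the weekly recurring events
        orderBy="startTime",
    ).execute()
    for event in events.get("items", []):
        if "Dinner" not in event.get("summary", ""):
            continue
        start = event["start"].get("dateTime", event["start"].get("date"))
        rsvps = {
            a.get("email"): a.get("responseStatus")
            for a in event.get("attendees", [])
        }
        print(start, event["summary"], rsvps)
```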
ef629cde-5a37-4f16-96ff-ca0e5a0968b1
trentmkelly/LessWrong-43k
LessWrong
Extended analogy between humans, corporations, and AIs. There are three main ways to try to understand and reason about powerful future AGI agents: 1. Using formal models designed to predict the behavior of powerful general agents, such as expected utility maximization and variants thereof (explored in game theory and decision theory). 2. Comparing & contrasting powerful future AGI agents with the weak, not-so-general, not-so-agentic AIs that actually exist today. 3. Comparing & contrasting powerful future AGI agents with currently-existing powerful general agents, such as humans and human organizations. I think it's valuable to try all three approaches. Today I'm exploring strategy #3, building an extended analogy between: * A prototypical human corporation that has a lofty humanitarian mission but also faces market pressures and incentives. * A prototypical human working there, who thinks of themselves as a good person and independent thinker with lofty altruistic goals, but also faces the usual peer pressures and incentives. * AGI agents being trained in our scenario — trained by a training process that mostly rewards strong performance on a wide range of difficult and challenging tasks, but also attempts to train in various goals and principles (those described in the Spec). (For context, we at the AI Futures Project are working on a scenario forecast in which "Agent-3," an autonomous AI researcher, is trained in 2027)
The Analogy

| Agent | Human corporation with a lofty humanitarian mission | Human who claims to be a good person with altruistic goals | AGI trained in our scenario |
|---|---|---|---|
| Not-so-local modification process | The Market | Evolution by natural selection | The parent company iterating on different models, architectures, training setups, etc. |
| | ??? (…nevermind about this) | Genes | Code |
| Local modification process | Responding to incentives over the span of several years as the organization grows and changes | In-lifetime learning, dopamine rewiring your brain, etc. | Training process, the reward function, stochastic gradient descent |
de112c10-3bb7-4907-9f56-d946589ceab5
trentmkelly/LessWrong-43k
LessWrong
Philosophical considerations of cessation of brain activity I'm unfamiliar with the philosophies of personal identity. Which theories would postulate that a total interruption of consciousness/neural activity (e.g., a coma), but where the brain itself is completely undamaged, would be "death", in the sense that the person before the coma wouldn't be able to feel what happens after it?  The reason is that I need to make a decision about elective surgery under general anesthesia imminently. I'm concerned about the possibility that from my current perspective I will die as I'm put under, even though from everyone else's perspective I'll wake up all the same, as I would be "rebooted" into a new "session" of consciousness and my current session won't be able to access/experience what happens in the new one the same way I can feel what happens to me 5 minutes from now. Of course this may happen every night during sleep. However, the risk is much greater under general anesthesia because of the much more complete loss of activity and information processing, much like a coma; e.g. even during the deepest stage of sleep perhaps only 1 brain hemisphere sleeps at a time. Hence a coma is a much better proxy for this question: if sleep is okay anesthesia may not be, but if a coma's okay it definitely is too. I realize LWers are broadly on board with cryonics and thus unconcerned with this, but I'd still like to know which specific theories are more in line with my intuitive concerns. 
8e4af29c-cdf1-4a5b-ba46-4d2e0a47cd0c
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Generalizing POWER to multi-agent games ### Acknowledgements: This article is a writeup of a research project conducted through the [SERI](https://cisac.fsi.stanford.edu/stanford-existential-risks-initiative/content/stanford-existential-risks-initiative) program under the mentorship of [Alex Turner](https://www.lesswrong.com/users/turntrout). I ([Jacob Stavrianos](https://www.lesswrong.com/users/midco)) would like to thank Alex for turning a messy collection of ideas into legitimate research, as well as the wonderful researchers at SERI for guiding the project and putting me in touch with the broader X-risk community. Motivation/Overview ------------------- In the single-agent setting, [Seeking Power is Often Robustly Instrumental in MDPs](https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/6DuJxY8X45Sco4bS2) showed that optimal policies tend to choose actions which pursue "power" (reasonably formalized). In the multi-agent setting, the [Catastrophic Convergence Conjecture](https://www.lesswrong.com/posts/w6BtMqKRLxG9bNLMr/the-catastrophic-convergence-conjecture) presented intuitions that "most agents" will "fight over resources" when they get "sufficiently advanced." However, it wasn't clear how to formalize that intuition. This post synthesizes single-agent power dynamics (which we believe is now somewhat well-understood in the MDP setting) with the multi-agent setting. The multi-agent setting is important for AI alignment, since we want to reason clearly about when AI agents disempower humans. Assuming constant-sum games (i.e. maximal misalignment between agents), this post presents a result which echoes the intuitions in the Catastrophic Convergence Conjecture post: as agents become "more advanced", "power" becomes increasingly scarce & constant-sum. An illustrative example ----------------------- You're working on a project with a team of your peers. In particular, your actions affect the final deliverable, but so do those of your teammates. Say that each member of the team (including you) has some goal for the deliverable, which we can express as a reward function over the set of outcomes. How well (in terms of your reward function) can you expect to do? It depends on your teammates' actions. Let's first ask "given my opponent's actions, what's the highest expected reward I can attain?" ### Case 1: Everyone plays nice We can start by imagining the case where everyone does exactly what you'd want them to do. Mathematically, this allows you to obtain the globally maximal reward; or "the best possible reward assuming you can choose everyone else's actions". Intuitively, this looks like your team sitting you down for a meeting, asking what you want them to do for the project, and carrying out orders without fail. As expected, this case is 'the best you can hope for" in a formal sense. ### Case 2: Everyone plays mean Now, imagine the case where everyone does exactly what you *don't* want them to do. Mathematically, this is the worst possible case; every other choice of teammates' actions is at least as good as this one. Intuitively, this case is pretty terrible for you. Imagine the previous case, but instead of following orders your team actively sabotages them. Alternatively, imagine that your team spends the meeting breaking your knees and your laptop. ### Case 3: Somewhere in between However, scenarios where your team is perfectly aligned either with or against you are rare. More typically, we model people as maximizing their own reward, with imperfect correlation between reward functions. 
Interpreting our example as a multi-player game, we can consider the case where the players' strategies form a Nash equilibrium: every person's action is optimal for themselves given the actions of the rest of their team. This case is both relatively general and structured enough to make claims about; we will use it as a guiding example for the formalism below.
POWER, and why it matters ------------------------- Many attempts have been made to classify AI [robustly instrumental goals](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf), with the goals of understanding why they emerge given seemingly-unrelated utilities and ultimately to counterbalance (either implicitly or explicitly) undesirable robust instrumental subgoals. [One promising such attempt](https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/6DuJxY8X45Sco4bS2) is based on POWER (the technical term is all-caps to distinguish from normal use of the word): consider an agent with some space of actions, which receives rewards depending on the chosen actions (formally, an agent in an MDP). Then, POWER is roughly "ability to achieve a wide variety of goals". [It's been shown](https://arxiv.org/abs/1912.01683) that POWER is robustly instrumental given certain conditions on the environment, but currently no formalism exists describing power of different agents interacting with each other. Since we'll be working with POWER for the rest of this post, we need a solid definition to build off of. We present a simplified version of the original definition: *Consider a scenario in which an agent has a set of actions $a \in A$ and a distribution $D$ of reward functions $r : A \to \mathbb{R}$. Then, we define the POWER of that agent as* $\mathrm{POWER}_D := \mathbb{E}_{r \sim D}\left[\max_a r(a)\right]$. As an example, we can rewrite the project example from earlier in terms of POWER. Let your goal for the project be chosen from some distribution $D$ (maybe you want it done nicely, or fast, or to feature some cool thing that you did, etc). Then, your $\mathrm{POWER}_D$ is the maximum extent to which you can accomplish that goal, in expectation. However, this model of power can't account for the actions of other agents in the environment (what about what your teammates do? Didn't we already show that it matters a lot?). To say more about the example, we'll need a generalization of POWER. Multi-agent POWER ----------------- We now consider a more realistic scenario: not only are you an agent with a notion of reward and POWER, but *so is everyone else*, all playing the same multiplayer game. We can even revisit the project example and go through the cases for your teammates' actions in terms of POWER: * In Case 1, your team works to maximize your reward in every case, which (with some assumptions) maximizes your POWER over the space of all choices of teammate actions. * In Case 2, your team works to *minimize* your reward in every case, which analogously minimizes your POWER. * In Case 3, we have a Nash equilibrium of the game used to define multi-agent POWER. In particular, each player's action is a best-response to the actions of every other player. We'll see a parallel between this best-response property and the $\max_{a \in A}$ term in the definition of POWER pop up in the discussion of constant-sum games. ### Bayesian games To extend our formal definition of power to the multi-agent case, we'll need to define a type of multiplayer normal-form game called a [Bayesian game](https://en.wikipedia.org/wiki/Bayesian_game). 
We describe them below: * At the beginning of the game, each of $n$ players is assigned a type $t_i \in T_i$ from a joint type distribution $t = (t_i) \sim \Omega$. The distribution $\Omega$ is common knowledge. * The players then (independently, **not** sequentially) choose actions $a_i \in A_i$, resulting in an *action profile* $a = (a_i)$. * Player $i$ then receives reward $r_i(t_i, a)$ (crucially, a player's reward can depend on their type). Strategies (technically, mixed strategies) in a Bayesian game are given by functions $\sigma_i : T_i \to \Delta A_i$. Thus, even given a fixed strategy profile $\sigma$, any notion of "expected reward of an action" will have to account for uncertainty in other players' types. We do so by defining *interim expected utility* for player $i$ as follows: $f_i(t_i, a_i, \sigma_{-i}) := \mathbb{E}[r_i(t_i, a)]$ where the expectation is taken over the following: * the posterior distribution over opponents' types $t_{-i} \mid t_i$ - in other words, what types you expect other players to have, given your type. * random choice of opponents' actions $a_{-i} \sim \sigma_{-i}(t_{-i})$ - even if you know someone's type, they might implement a mixed strategy which stochastically selects actions. Further, we can define a (Bayesian) Nash Equilibrium to be a strategy profile where each player's strategy is a best response to opponents' strategies in terms of interim expected utility. ### Formal definition of multi-agent POWER We can now define POWER in terms of a Bayesian game: *Fix a strategy profile $\sigma$. We define player $i$'s POWER as* $\mathrm{POWER}(i, \sigma) := \mathbb{E}_{t_i}\left[\max_{a_i} f_i(t_i, a_i, \sigma_{-i})\right]$. Intuitively, POWER is maximum (expected) reward given a distribution of possible goals. The difference from the single-agent case is that your reward is now influenced by other players' actions (by taking an expectation over opponents' strategy). Properties of constant-sum games -------------------------------- As both a preliminary result and a reference point for intuition, we consider the special case of *zero-sum games:* A zero-sum game is a game in which for every possible outcome of the game, the sum of each player's reward is zero. For Bayesian games, this means that for all type profiles $t = (t_i)$ and action profiles $a$, we have $\sum_i r_i(t_i, a) = 0$. Similarly, a *constant-sum game* is a game satisfying $\sum_i r_i(t_i, a) = c$ for any choices of $t, a$. As a simple example, consider chess, a two-player adversarial game. We let the reward profile be constant, given by "1 if you win, -1 if you lose" (assume black wins in a tie). This game is clearly zero-sum, since exactly one player will win and lose. We could ask the same "how well can you do?" question as before, but the upper bound of winning is trivial. Instead, we ask "how well can both players simultaneously do?" Clearly, you can't both simultaneously win. However, we can imagine scenarios where both players have the *power* to win: in a chess game between two beginners, the optimal strategy for either player will easily win the game. As it turns out, this argument generalizes (we'll even prove it): in a constant-sum game, the sum of each player's POWER is $\geq c$, with equality iff each player responds optimally for all their possible goals ("types"). This condition is equivalent to a Bayesian Nash Equilibrium of the game. Importantly, this idea suggests a general principle of multi-agent POWER I'll call *power-scarcity*: in multi-agent games, gaining POWER tends to come at the expense of another player losing POWER. Future research will focus on understanding this phenomenon further and relating it to "how aligned the agents are" in terms of their reward functions. 
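As a toy numerical illustration of the definitions above (an assumption-filled sketch, not code from the authors): a two-player Bayesian game with independent uniform types and randomly drawn rewards, computing interim expected utility and POWER against a fixed opponent strategy.

```python
# Toy numerical sketch of the definitions above (assumptions throughout, not the
# authors' code): a 2-player Bayesian game with independent uniform types and
# randomly drawn rewards; interim expected utility f_i and POWER(i, sigma).
import numpy as np

n_types, n_actions = 2, 2
rng = np.random.default_rng(0)

# r[i][t_i, a_0, a_1] = player i's reward given their own type and the action profile.
r = [rng.uniform(0, 1, size=(n_types, n_actions, n_actions)) for _ in range(2)]

# A mixed strategy maps a player's type to a distribution over their actions.
# Here both players play uniformly at random regardless of type.
sigma = [np.full((n_types, n_actions), 1.0 / n_actions) for _ in range(2)]

def interim_utility(i, t_i, a_i, sigma_opp):
    """f_i(t_i, a_i, sigma_{-i}): expectation over the opponent's type and action."""
    total = 0.0
    for t_j in range(n_types):                     # independent uniform types
        for a_j in range(n_actions):
            a = (a_i, a_j) if i == 0 else (a_j, a_i)
            total += (1.0 / n_types) * sigma_opp[t_j, a_j] * r[i][(t_i, *a)]
    return total

def power(i, sigma):
    """POWER(i, sigma) = E_{t_i}[ max_{a_i} f_i(t_i, a_i, sigma_{-i}) ]."""
    best = [max(interim_utility(i, t_i, a_i, sigma[1 - i]) for a_i in range(n_actions))
            for t_i in range(n_types)]
    return float(np.mean(best))                    # uniform over the player's own types

for i in range(2):
    print(f"POWER of player {i} against a uniformly random opponent: {power(i, sigma):.3f}")
```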
**Claim: Consider a Bayesian constant-sum game with some strategy profile $\sigma$. Then, $\sum_i \mathrm{POWER}(i, \sigma) \geq c$, with equality iff $\sigma$ is a Nash Equilibrium.** Intuition: By definition, $\sigma$ *isn't* a Nash Equilibrium iff some player $i$'s strategy $\sigma_i$ isn't a best response. In this case, we see that player $i$ has the power to play optimally, but the other players also have the power to capitalize off of player $i$'s mistake (since the game is constant-sum). Thus, the lost reward is "double-counted" in terms of POWER; if no such double-counting exists, then the sum of POWER is just the expected sum of reward, which is $c$ by definition of a constant-sum game. **Rigorous proof:** We prove the following for general strategy profiles $\sigma$: $$\sum_i \mathrm{POWER}(i, \sigma) = \sum_i \mathbb{E}_{t_i}\left[\max_{a_i} f_i(t_i, a_i, \sigma_{-i})\right] \geq \sum_i \mathbb{E}_{t_i}\mathbb{E}_{a_i \sim \sigma_i}\left[f_i(t_i, a_i, \sigma_{-i})\right] = \sum_i \mathbb{E}_{t_i}\mathbb{E}_{a \sim \sigma}\left[r_i(t_i, a)\right] = \mathbb{E}_{t}\mathbb{E}_{a \sim \sigma}\left[\sum_i r_i(t_i, a)\right] = \mathbb{E}_{t}\mathbb{E}_{a \sim \sigma}\left[c\right] = c$$ Now, we claim that the inequality in the second step is an equality iff $\sigma$ is a Nash Equilibrium. To see this, note that for each $i$, we have $\max_{a_i} f_i(t_i, a_i, \sigma_{-i}) \geq \mathbb{E}_{a_i \sim \sigma_i}\left[f_i(t_i, a_i, \sigma_{-i})\right]$, with equality iff $\sigma_i$ is a best response to $\sigma_{-i}$. Thus, the sum of these inequalities for each player is an equality iff each $\sigma_i$ is a best response, which is the definition of a Nash Equilibrium. □ Final notes ----------- To wrap up, I'll elaborate on the implications of this theorem, as well as some areas of further exploration on power-scarcity: * It initially seems unintuitive that as players' strategies improve, their collective POWER tends to decrease. The proximate cause of this effect is something like "as your strategy improves, other players lose the power to capitalize off of your mistakes". More work is probably needed to get a clearer picture of this dynamic. * We suspect that if all players have identical rewards, then the sum of POWER is equal to the sum of best-case POWER for each player. This gives the appearance of a spectrum with [aligned rewards (common payoff), maximal sum power] on one end and [anti-aligned rewards (constant-sum), constant sum power] on the other. Further research might look into an interpolation between these two extremes, possibly characterized by a correlation metric between reward functions. + We also plan to generalize POWER to Bayesian stochastic games to account for sequential decision making. Thus, any such metric for comparing reward functions would have to be consistent with such a generalization. * POWER-scarcity results in terms of Nash Equilibria suggest the following dynamic: as agents get smarter and take available opportunities, POWER becomes increasingly scarce. This matches the intuitions presented in [the Catastrophic Convergence Conjecture](https://www.lesswrong.com/posts/w6BtMqKRLxG9bNLMr/the-catastrophic-convergence-conjecture), where agents don’t fight over resources until they get sufficiently “advanced.”
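As a quick numerical check of the claim proved above (an illustrative sketch, not the authors' code): in zero-sum matching pennies the constant is c = 0, the POWERs sum to exactly 0 at the mixed Nash equilibrium, and they sum to something strictly positive at any non-equilibrium profile, such as both players always playing heads.

```python
# Numerical check of the claim (illustrative sketch, not the authors' code):
# in zero-sum matching pennies (c = 0), the POWERs sum to exactly 0 at the
# mixed Nash equilibrium and to something strictly positive otherwise.
import numpy as np

# r[i][a_0, a_1]: player 0 wins on a match, player 1 wins on a mismatch.
r0 = np.array([[1.0, -1.0], [-1.0, 1.0]])
r = [r0, -r0]                                   # zero-sum: rewards always sum to 0

def power(i, sigma):
    """POWER(i, sigma) with a single (trivial) type: max over a_i of E_{a_j ~ sigma_j} r_i."""
    j = 1 - i
    expected = [sum(sigma[j][a_j] * (r[i][a_i, a_j] if i == 0 else r[i][a_j, a_i])
                    for a_j in range(2))
                for a_i in range(2)]
    return max(expected)

nash = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]        # the mixed equilibrium
naive = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]       # both always play "heads"

for name, sigma in [("Nash equilibrium", nash), ("both play heads", naive)]:
    total = power(0, sigma) + power(1, sigma)
    print(f"{name}: sum of POWER = {total:.2f}  (claim: >= 0, = 0 only at equilibrium)")
```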
05edb93c-70c3-4b28-a59f-381935bbfabd
trentmkelly/LessWrong-43k
LessWrong
Human trials for the Marburg vaccine: funding opportunity? According to the Independent, scientists at Oxford have developed a potential vaccine for Marburg. However, they have been unable to run human trials due to lack of funding. Are any institutional or high net worth funders in the EA community looking at this opportunity? In the event that the current Marburg outbreak gets out of control, a few weeks saved on vaccine approval could save thousands of lives.
6a4cace0-d557-4e4a-bffd-efd3a11602d3
trentmkelly/LessWrong-43k
LessWrong
Meetup : London rationalish meetup - 2016-03-20 Discussion article for the meetup : London rationalish meetup - 2016-03-20 WHEN: 20 March 2016 02:00:00PM (+0000) WHERE: Shakespeare's Head, 64-68 Kingsway, London WC2B 6AH I'm late posting the event this week, but that's because I was distracted, not because it isn't happening. This meetup will be social discussion in a pub, with no set topic. If there's a topic you want to talk about, feel free to bring it. The pub is the Shakespeare's Head in Holborn. There will be some way to identify us. The event on facebook is visible even if you don't have a facebook account. Any last-minute updates will go there. ---------------------------------------- We're a fortnightly London-based meetup for members of the rationalist diaspora. The diaspora includes, but is not limited to, LessWrong, Slate Star Codex, rationalist tumblrsphere, and parts of the Effective Altruism movement. You don't have to identify as a rationalist to attend: basically, if you think we seem like interesting people you'd like to hang out with, welcome! You are invited. You do not need to think you are clever enough, or interesting enough, or similar enough to the rest of us, to attend. You are invited. People start showing up around two, and there are almost always people around until after six, but feel free to come and go at whatever time. Discussion article for the meetup : London rationalish meetup - 2016-03-20
765224fd-11c4-4254-b431-39db56bfdca9
trentmkelly/LessWrong-43k
LessWrong
Against Street Epistemology According to https://streetepistemology.com/publications/street_epistemology_the_basics , street epistemology is a "conversational technique" which is intended to be "a more productive and positive alternative to debates and arguments." Street epistemologists assume a role similar to Socrates in Plato's dialogues, asking questions of their interlocutor to try to create a realisation of ignorance in them. The goal of street epistemology is to find incoherences in people's beliefs, and to convince them of the value of "scepticism." A street epistemologist tries to remain calm and pleasant throughout the entire interaction, and to build rapport at the beginning in order to make their interlocutor comfortable with the exchange. After introductions and rapport are established, they can ask their interlocutor to identify a belief and give an approximate level of confidence in it (on a scale of 1 to 10). The early stages of the conversation, after identifying the belief, are devoted to making the belief clear and precise so that there is as little ambiguity as possible, and less wiggle room if and when incoherences are found. Terms are defined, clarifying questions are asked and answered. To confirm that the belief is understood, before trying to undermine it, the street epistemologist will try to give a paraphrase of the view that his interlocutor finds charitable and acceptable. Having pinpointed what the claim is, the street epistemologist then asks which methods the interlocutor used to arrive at their confidence level in this belief. This is the very first question in what might be called the cross-examination stage, and it reveals what sort of incoherence is being sought in these conversations. Street epistemology is all about finding poorly articulated or unarticulated spots in people's epistemological views. Once the interlocutor has given a few answers and it comes time to dive into them, the website recommends focusing only on "one or two" of the methods listed, id
b6e08cb8-4d0a-4581-becb-257bc49e447d
trentmkelly/LessWrong-43k
LessWrong
Why empiricists should believe in AI risk Empiricists are people who believe empirical information (from experiments and observational studies) is far more useful and carries far more weight than speculating about possibilities using pure reasoning. Why should they believe in AI risk? I present the Empiricist's Paradox: * There is strong empirical evidence that relying on non-empirical reasoners (e.g. superforecasters) works better than simply assuming a 0% chance whenever there is no empirical data and calling yourself an "empiricist." Actual empiricists should support AI safety because the median superforecaster sees a 2.1% chance of an AI catastrophe (killing 1 in 10 people).[1] There is empirical evidence that predictions superforecasters assign a roughly 2% chance come true roughly 2% of the time.[2] A 2% chance of AI catastrophe justifies large spending relative to military spending (see our Statement on AI Inconsistency).   1. ^ The predictions were for 2100, but the predictions were made before ChatGPT was released. 2. ^ Someone asked for a source for this :/ I should have done more research. I think https://goodjudgment.com/wp-content/uploads/2022/10/Superforecaster-Accuracy.pdf#page=4 sort of suggests roughly 2%. The observed frequency is a little bit higher than the forecast probability, because superforecasters slightly underestimate low-probability events.
754ac7e2-a3e3-4558-8479-8a15c0196952
trentmkelly/LessWrong-43k
LessWrong
Migraine hallucinations, phenomenology, and cognition I have several times in my life experienced migraine hallucinations. I call them that because they look exactly like what other people report under that name. I'll come back to those. If I look at someone, and hold up my hand so as to block my view of their head, I do not experience looking at a headless person. I experience looking at a normal person, whose head I cannot see, because there is something else in the way. Why is this? One can instantly talk about Bayesian estimation, prior experience, training of neural nets, constant conjunction, and so on. However, a real explanation must also account for situations in which this filling-in does not occur. One ordinary example is the pictures here. I see these as headless men, not ordinary men whose heads I cannot see. Migraine hallucinations provide a more interesting example. If you've ever had one, you might already know what I'm going to say, but I do not know if this experience is the same for everyone. If I superimpose the hallucination on someone's head, they seem to have no head. I don't mean that I cannot see their head (although indeed I can't), but that I seem to be looking at a headless person. If I superimpose it on a part of their head, it is as if that part does not exist. Whatever the blind spot covers, my brain does not fill it in. Whatever my hand covers, my brain does fill in, not at the level of the image (I don't confabulate an image of their face), but at some higher level. I know in both cases that they have a head. But at some level below knowing, the experience in one case is that they have no head, and in the other, that they do. My knowledge that they have a head does nothing to alter the sensation that they do not. It is quite disconcerting to look at myself in a mirror and see half my head missing. Those who have never had such hallucinations might try experimenting with their ordinary blind spots. I am not sure it will be the same. The brain has had more practice filling those in
371693a4-5436-4d16-835c-f6907b3f5c15
trentmkelly/LessWrong-43k
LessWrong
My thoughts on nanotechnology strategy research as an EA cause area This is a cross-post from the Effective Altruism Forum. Two-sentence summary: Advanced nanotechnology might arrive in the next couple of decades (my wild guess: there’s a 1-2% chance in the absence of transformative AI) and could have very positive or very negative implications for existential risk. There has been relatively little high-quality thinking on how to make the arrival of advanced nanotechnology go well, and I think there should be more work in this area (very tentatively, I suggest we want 2-3 people spending at least 50% of their time on this by 3 years from now). Context: This post reflects my current views as someone with a relevant PhD who has thought about this topic on and off for roughly the past 20 months (something like 9 months FTE). Note that some of the framings and definitions provided in this post are quite tentative, in the sense that I’m not at all sure that they will continue to seem like the most useful framings and definitions in the future. Some other parts of this post are also very tentative, and are hopefully appropriately flagged as such. Key points * I define advanced nanotechnology as any highly advanced future technology, including atomically precise manufacturing (APM), that uses nanoscale machinery to finely image and control processes at the nanoscale, and is capable of mechanically assembling small molecules into a wide range of cheap, high-performance products at a very high rate (note that my definition of advanced nanotechnology is only loosely related to what people tend to mean by the term “nanotechnology”). (more) * If developed, advanced nanotechnology could increase existential risk, for example by making destructive capabilities widely accessible, by allowing the development of weapons that pose a higher existential risk, or by accelerating AI development; or it could decrease existential risk, for example by causing the world’s most destructive weapons to be replaced by weapons that pose a lower existential
4e268cef-dd7c-4812-8fd9-d2b06a5230e1
trentmkelly/LessWrong-43k
LessWrong
Idea selection THE PROBLEMATIC IDEA In our culture today there is a strong trend of disengagement with the views of people who are accused publicly of being “problematic.” This trend refers mostly to how disagreement manifests on social media, and it has been dubbed cancel culture.  A quote, video, or photograph is given as evidence of the person’s problematic nature, and the groups of people who want this viewpoint eliminated will collectively disengage with both the person and the idea.  This brings to mind a sort of social cleansing wherein both the problematic ideas and the “problematic people” are separated from the rest and refused entry to the discussion. Associating with the problematic person risks being a proponent of their idea, so both the person and the idea are banned as a unit. The criticism of problematic ideas is often superficial, and ideally this is done as quickly as possible via the ‘shut down.’ This ‘shut down’ is always expressed with violent imagery that evokes images of obliteration beyond repair. Ideas are not discussed; they are ‘destroyed,’ as if a bomb has been dropped on them.  The 'problematic person' has no salvageable ideas or arguments. If they are an artist, their art should not be viewed; if a director, their films not watched. All belonging to them is poisonous and should be placed into a box and hidden away in a dark place, never to be brought out again into the light.  IDEA SELECTION At play here is a free market element, which is that nobody is owed engagement, just like companies are not entitled to business. The ideas are not openly discussed because that would risk transmitting them. Yet at the same time, they do not trust the free market to effectively weed out ‘problematic’ or ‘weak’ ideas.  Perhaps in defense of cancel culture, the market has never done this well. Just examine the popularity of superstitious beliefs in the Modern world. The market does not naturally select the ideas that are useful, moral, or 'true'. Instead, ca
533e7342-4add-4160-9759-3d4430701d12
trentmkelly/LessWrong-43k
LessWrong
Marx and the Machine “The means of labour passes through different metamorphoses whose culmination is the machine, or rather, an automatic system of machinery… set in motion by an automaton, a moving power that moves itself; this automaton consisting of numerous mechanical and intellectual organs… It is the machine which possesses skill and strength in place of the worker, is itself the virtuoso, with a soul of its own… The science which compels the inanimate limbs of the machinery, by their construction, to act purposefully, as an automaton, does not exist in the worker's consciousness, but rather acts upon him through the machine as an alien power, as the power of the machine itself.” — Karl Marx, from “The Fragment on Machines”   Karl Marx’s thought is both sufficiently ambiguous and sufficiently insightful to have launched an entire industry of interpreters.  But as a rough and ready sketch, Marx saw economics and politics as downstream of technology.  Viewing the progress of the industrial revolution, Marx foresaw the development of increasingly powerful technologies of automation.  Automation would unleash abundance as machines replaced labor; in turn, this would cause the collapse of capitalism and its surrounding political structures.  Marx was vague on the mechanics of this transition but also deeply confident in its inevitability.  With capitalism dead and technologically-induced abundance, we would enter utopia.  Freed from wage labor, we would unleash our full human potential for science and creativity and enter into a world where money, government, and class would not exist. Strikingly, this is almost exactly the view of many AI optimists (particularly of the “money won’t matter after AGI” variety).  One needs to swap out a little of the verbiage, but the two lines of thinking run in parallel.  Marx’s “automaton consisting of numerous mechanical and intellectual organs … with a soul of its own” is about as close as one can come to a description of AGI within the language of
b91bf3ad-84a7-40d9-bfd5-09693ea67bbd
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Interpreting Neural Networks through the Polytope Lens Sid Black\*, Lee Sharkey\*, Leo Grinsztajn, Eric Winsor, Dan Braun, Jacob Merizian, Kip Parker, Carlos Ramón Guevara, Beren Millidge, Gabriel Alfour, Connor Leahy \*equal contribution   Research from [Conjecture](https://conjecture.dev/).   *This post benefited from feedback from many staff at Conjecture including Adam Shimi, Nicholas Kees Dupuis, Dan Clothiaux, Kyle McDonell. Additionally, the post also benefited from inputs from Jessica Cooper, Eliezer Yudkowsky, Neel Nanda, Andrei Alexandru, Ethan Perez, Jan Hendrik Kirchner, Chris Olah, Nelson Elhage, David Lindner, Evan R Murphy, Tom McGrath, Martin Wattenberg, Johannes Treutlein, Spencer Becker-Kahn, Leo Gao, John Wentworth, and Paul Christiano and from discussions with many other colleagues working on interpretability.*
Summary
=======
Mechanistic interpretability aims to explain what a neural network has learned at a nuts-and-bolts level. What are the fundamental primitives of neural network representations? What basic objects should we use to describe the operation of neural networks mechanistically? Previous mechanistic descriptions have used individual neurons or their linear combinations to understand the representations a network has learned. But there are clues that neurons and their linear combinations are not the correct fundamental units of description - directions cannot describe how neural networks use nonlinearities to structure their representations. Moreover, many instances of individual neurons and their combinations are polysemantic (i.e. they have multiple unrelated meanings). Polysemanticity makes interpreting the network in terms of neurons or directions challenging since we can no longer assign a specific feature to a neural unit.

In order to find a basic unit of description that doesn’t suffer from these problems, we zoom in beyond just directions to study the way that piecewise linear activation functions (such as ReLU) partition the activation space into numerous discrete polytopes. We call this perspective the ‘polytope lens’. Although this view introduces new challenges, we think they are surmountable and that more careful consideration of the impact of nonlinearities is necessary in order to build better high-level abstractions for a mechanistic understanding of neural networks. The polytope lens makes concrete predictions about the behavior of neural networks, which we evaluate through experiments on both convolutional image classifiers and language models. Specifically, we show that polytopes can be used to identify monosemantic regions of activation space (while directions are not in general monosemantic) and that the density of polytope boundaries reflects semantic boundaries. We also outline a vision for what mechanistic interpretability might look like through the polytope lens.

Introduction
============

How should we carve a neural network at the joints? Traditionally, mechanistic descriptions of neural circuits have been posed in terms of neurons, or linear combinations of neurons also known as ‘directions’. Describing networks in terms of these neurons and directions has let us understand a surprising amount about what they’ve learned ([Cammarata et al., 2020](https://distill.pub/2020/circuits/)). But these descriptions often possess undesirable properties - such as polysemanticity and inability to account for nonlinearity - which suggest to us that they don’t always carve a network at its joints. If not neurons or directions, then what should be the fundamental unit of a mechanistic description of what a neural network has learned?
Ideally, we would want a description in terms of some object that throws away unnecessary details about the internal structure of a neural network while simultaneously retaining what’s important. In other words, we’d like a less [*‘leaky’ abstraction*](https://en.wikipedia.org/wiki/Leaky_abstraction) for describing a neural network’s mechanisms. We propose that a particular kind of mathematical object – a ‘polytope’ – might serve us well in mechanistic descriptions of neural networks with piecewise-linear activations[[1]](#fnyp6wk11agmm). We believe they might let us build less leaky abstractions than individual neurons and directions alone, while still permitting mechanistic understandings of neural networks of comparable length and complexity.

To help explain how the polytope lens could underlie mechanistic descriptions of neural networks, we first look at the problems that arise when using individual neurons (both biological and artificial) and then when using directions as the basic units of description, and suggest how this perspective offers a potential solution.

Are individual neurons the fundamental unit of neural networks?
---------------------------------------------------------------

Studying the function of single neurons has a long history. The dominant view in neuroscience for approximately one hundred years was the ‘neuron doctrine’ ([Yuste, 2015](https://www.nature.com/articles/nrn3962)). The neuron doctrine contended that the way to understand neural networks is to understand the responses of individual neurons and their role in larger neural circuits. This led to significant successes in the study of biological neural circuits, most famously in the visual system. Early and important discoveries within this paradigm included cells in the frog retina that detect small patches of motion (fly detectors) ([Lettvin et al., 1959](https://hearingbrain.org/docs/letvin_ieee_1959.pdf)); cells in the visual cortex with small receptive fields that detect edges ([Hubel and Wiesel, 1962](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1359523/)); cells in the higher visual system that detect objects as complex as faces ([Sergent et al., 1992](https://academic.oup.com/brain/article-abstract/115/1/15/295727?redirectedFrom=fulltext&login=false)); and evidence that even highly abstract multimodal concepts are represented in single neurons ([Quiroga et al., 2005](https://www.nature.com/articles/nature03687); [Quiroga et al., 2009](https://www.sciencedirect.com/science/article/pii/S0960982209013773)).

Given their historic usefulness in the study of *biological* neural networks, individual neurons are a natural first place to start when interpreting *artificial* neural networks. Such an approach has led to significant progress. Many studies have suggested that it is possible to identify single neurons that respond to single features ([Szegedy et al., 2014](https://arxiv.org/pdf/1312.6199.pdf), [Zhou et al., 2015](https://arxiv.org/abs/1412.6856), [Bau et al., 2017](http://netdissect.csail.mit.edu/final-network-dissection.pdf), [Olah et al., 2017](https://distill.pub/2017/feature-visualization/)). Analysis of small neural circuits has also been done by inspecting individual neurons ([Cammarata et al., 2020](https://distill.pub/2020/circuits/curve-detectors/), [Goh et al., 2021](https://distill.pub/2021/multimodal-neurons/)).
Mathematically, it’s not immediately obvious why individual neurons would learn to represent individual features given that, at least in linear networks, the weights and activations can be represented in any desired basis. One suggestion for why this would happen is the ‘privileged basis’ hypothesis ([Elhage et al., 2021](https://transformer-circuits.pub/2021/framework/index.html#def-privileged-basis); [Elhage et al., 2022](https://transformer-circuits.pub/2022/toy_model/index.html)). This hypothesis states that element-wise nonlinear activation functions encourage functionally independent input features to align with individual neurons rather than directions. Despite both historical success and the privileged basis hypothesis, it turns out that in many circumstances networks learn features that don't perfectly align with individual neurons. Instead, there have been some suggestions that networks learn to align their represented features with directions ([Olah et al., 2018](https://distill.pub/2018/building-blocks/), [Saxena and Cunningham, 2019](https://stat.columbia.edu/~cunningham/pdf/SaxenaCONB2019.pdf)).

Are directions the fundamental unit of neural networks?
-------------------------------------------------------

One of the main reasons to prefer directions over individual neurons as the functional unit of neural networks is that neurons often appear to respond to multiple, seemingly unrelated things. This phenomenon is called polysemanticity[[2]](#fnfj8lun8q4lh). [Nguyen et al., (2016)](https://arxiv.org/pdf/1602.03616.pdf) (supplement) and [Olah et al., (2017)](https://distill.pub/2017/feature-visualization/) were perhaps the first to explicitly identify neurons that represent multiple unrelated features in convolutional image classifiers. Polysemantic neurons have also been found in large language models ([Geva et al., 2020](https://arxiv.org/pdf/2012.14913.pdf)) and multimodal networks ([Goh et al., 2021](https://distill.pub/2021/multimodal-neurons/)), and in the brain ([Tanabe, 2013](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3688279/)). They are usually found by looking at the dataset examples that maximally activate specific neurons and noticing that there are multiple distinct groups of features represented in the examples. Below are a few examples of polysemantic neurons from a convolutional image classifier (InceptionV1[[3]](#fno89s50smuz)) and a large language model, GPT2-Medium.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/ujwl46m0pgeahjumbbee)

Figure: An example of a polysemantic neuron in InceptionV1 (layer inception5a, neuron 233) which seems to respond to a mix of dog noses and metal poles (and maybe boats).

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/wksyvxbbn83bjawxlcqw)

Figure: An example of a polysemantic neuron in GPT2-Medium. The text highlights represent the activation magnitude - the redder the text, the larger the activation. We can see that this neuron seems to react strongly to commas in lists, but also to diminutive adjectives (‘small’, ‘lame’, ‘tired’) and some prepositions (‘of’, ‘in’, ‘by’), among other features.

One explanation for polysemantic neurons is that networks spread the representation of features out over multiple neurons.
By using dimensionality reduction methods, it’s often possible to find directions (linear combinations of neurons) that encode single features, adding credence to the idea that directions are the functional unit of neural networks ([Olah et al., 2018](https://distill.pub/2018/building-blocks/#d-footnote-4), [Saxena and Cunningham, 2019](https://stat.columbia.edu/~cunningham/pdf/SaxenaCONB2019.pdf), [Mante et al., 2013](https://www.nature.com/articles/nature12742)). This chimes with the ‘features-as-directions perspective’ ([Elhage et al., 2022](https://transformer-circuits.pub/2022/toy_model/index.html#motivation)). Under this perspective, the magnitude of neural activations loosely encodes ‘intensity’ or ‘uncertainty’ or ‘strength of representation’, whereas the direction encodes the semantic aspects of the representation[[4]](#fnesps52fntne). If there are fewer features than neurons (or an equal number of both), then each feature can be encoded by one orthogonal direction. To decode, we could simply determine which linear combination of neurons encodes each feature. However, if there are more features than neurons, then features must be encoded in non-orthogonal directions and can interfere with (or [alias](https://en.wikipedia.org/wiki/Aliasing) - see Appendix D) one another. In this case, the features are sometimes said to be represented in ‘superposition’ ([Elhage et al., 2022](https://transformer-circuits.pub/2022/toy_model/index.html))[[5]](#fn9itzt8xhtlf). In superposition, networks encode more features than they have orthogonal basis vectors. This introduces a problem for a naive version of the features-as-directions hypothesis: Necessarily, some feature directions will be polysemantic! If we assume that representations are purely linear, then it’s hard to see how networks could represent features in non-orthogonal directions without interference degrading their performance. Neural networks use nonlinearities to handle this issue. [Elhage et al (2022)](https://transformer-circuits.pub/2022/toy_model/index.html) argue that a Rectified Linear Unit (ReLU) activation does this through thresholding: If the interference terms are small enough not to exceed the activation threshold, then interference is ‘silenced’! For example, suppose neuron A is polysemantic and represents a cat ear, a car wheel, and a clock face, and neuron B represents a dog nose, a dumbbell, and a car wheel. When neurons A and B activate together, they can cause a downstream car neuron to activate without activating neurons that represent any of their other meanings, so long as those other neurons’ pre-activations stay below threshold. Beyond enabling polysemanticity, nonlinearities introduce a second problem for the features-as-directions viewpoint. The directions in each layer, caused by a direction in an earlier layer, are no longer invariant to scaling, as they would be in a fully linear network. If we scale the activations in a particular layer of a fully linear network by some scalar multiple, we expect the class prediction to remain the same - as this is equivalent to scaling the output logits. However, if we scale the activations in a particular layer of a *non*-linear network, some neurons in later layers may ‘activate’ or ‘deactivate’ (i.e. their preactivation goes above or below threshold). In other words, scaling directions in one layer can change the direction (and hence the features represented) in later layers!
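To make the thresholding argument concrete, here is a minimal numpy sketch. The feature directions, threshold, and scales are made up for illustration rather than taken from any real model; the point is only to show how a ReLU threshold can silence small interference terms, and how scaling the same direction changes which downstream neurons end up above threshold.

```python
import numpy as np

# Two hypothetical feature directions in a 5-dimensional activation space.
# They are deliberately non-orthogonal (cosine similarity 0.3), so reading out
# one feature picks up a small "interference" term from the other.
f1 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
f2 = np.array([0.3, np.sqrt(1 - 0.3**2), 0.0, 0.0, 0.0])

def detector(x, direction, bias=-0.5):
    """A downstream ReLU 'feature detector' that reads along one direction."""
    return max(0.0, float(direction @ x) + bias)

# Only feature 1 is present: the interference term on detector 2 (0.3) stays
# below the threshold (0.5), so the ReLU silences it.
x = 1.0 * f1
print(detector(x, f1), detector(x, f2))              # -> 0.5, 0.0

# Scale the *same* direction by 3x: the interference now clears the threshold,
# and detector 2 fires even though feature 2 was never present.
print(detector(3.0 * x, f1), detector(3.0 * x, f2))  # -> 2.5, ~0.4
```

The same direction at a different magnitude therefore activates a different set of downstream neurons - which is exactly the scaling problem illustrated in the figure below.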
![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/kjxt8tl1larln9qjed5g)Figure: Scaling the activations in a layer causes semantic changes later in the network despite no change in activation direction in the scaled layer. The image on the right represents the input image.On the one hand, we should expect scaling the activation to change the direction in later layers. On the other, this poses a challenge to the features-as-directions view; scaling all representations relative to each other shouldn’t change their meaning except by changing their ‘intensity’. The naive version of the features-as-directions hypothesis requires the addition of something like a ‘distribution of validity’ within which directions represent the correct feature and outside of which they don’t. Unfortunately, the features-as-directions view doesn’t tell us what this distribution is. We’d like to know what the distribution is in order to know when our models might exhibit unpredictable out-of-distribution behavior.  Despite these two limitations (polysemanticity and failure to be invariant to scale), the features-as-directions view has enabled much progress in understanding circuits of some neural networks, even permitting [Cammarata et al. (2021)](https://distill.pub/2020/circuits/curve-circuits/) to reverse engineer some circuits and reconstruct them by hand. So the view represents at least a substantial piece of the interpretability puzzle - and it seems true that *some* directions carry a clear semantic meaning. Another reason to believe that the features-as-directions viewpoint is sensible is that, as we scale the hidden activations, neighbouring categories are quite often (but not always) semantically related. For instance, when we scale up the hidden layer activations for the cougar image, the network misclassifies it as a cheetah, which is still a big cat!  Instead of radically overhauling the features-as-directions view, perhaps it only needs some modifications to account for the effects of nonlinearities, namely: * Invariances - We have shown that directions are not invariant to scaling. We want a modification that captures invariances in neural networks. For instance, we want something that points the way to ‘semantic invariances’ by identifying monosemantic components of neural networks even when subjected to certain geometric transformations (like scaling). * On/off-distribution - The features-as-directions view appears to be correct only when the scale of activations is within some permitted distribution. We want a way to talk about when activations are off-distribution with more clarity, which will hopefully let us identify regions of activation space where the behavior of our models becomes less predictable. To find an object that meets our needs, we turn to some recent developments in deep learning theory - a set of ideas that we call the ‘polytope lens’.  The Polytope Lens ================= Let’s consider an MLP-only network which uses piecewise linear activation functions, such as ReLU[[6]](#fn662ir63ay26). In the first layer, each neuron partitions the input data space in two with a single hyperplane: On one side, the neuron is “on” (activated) and on the other side it’s “off”.  On one side of the boundary, the input vector is multiplied by the weights for that neuron, which is just that neuron’s row of the weight matrix. On the other side, the input is instead projected to 0, as though that row of weight matrix were set to zero. 
We can therefore view the layer as implementing a different affine transformation on either side of the partition. For a mathematical description, see Appendix C.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/fyruakial8maojftbpbd)*Figure: Affine transformations in the activated / unactivated regions of one neuron (assuming the three other neurons are activated)*The orientation of the plane defining the partition is defined by the row of the weight matrix and the height of the plane is defined by the neuron’s bias term. The example we illustrate here is for a 2-dimensional input space, but of course neural networks typically have inputs that are much higher dimensional. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/uy5vvkehakihehkglrxc)Figure: Polytope boundaries are defined by the weights and bias of a neuron. The weights determine the orientation of the (hyper-) plane and the bias determines its height.Considering all N neurons in layer 1 together, the input space is partitioned N times into a number of convex shapes called polytopes (which may be unbounded on some sides). Each polytope has a different affine transformation according to whether each neuron is above or below its activation threshold. This means we can entirely replace this layer by a set of affine transformations, one for each polytope.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/kftqf40q1dkiu2gowhwl)*Figure: Four polytopes corresponding to four different affine transformations defined by two neurons in layer 1.*As we add layers on top of layer 1, we add more neurons and, thus, more ways to partition the input space into polytopes, each with their own affine transformation. Thus, neural networks cut up the network’s input space into regions (polytopes) that each get transformed by a different set of affine transformations. Adding subsequent layers permits partition boundaries that *bend* when they intersect with the partition boundaries of earlier layers ([Hanin and Rolnick, 2019b](https://arxiv.org/pdf/1906.00904.pdf)). The boundaries bend in different ways depending on the weights of the neurons in later layers that activate or deactivate.  Each polytope can thus be analyzed as a fully linear subnetwork composed of a single affine transformation. Within each of these subnetworks, we would expect to see a set of interpretable directions that are scale invariant within each polytope. But the same directions in a different subnetwork might yield different interpretations. However, we should expect nearby polytope regions (subnetworks) to share similar affine transformations, and therefore similar semantics. We’ll discuss this further in the next section. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/elxirsvfculsfkjg0hwa)*Image from*[*Hanin & Rolnick, 2019a*](http://proceedings.mlr.press/v97/hanin19a/hanin19a.pdf)The polytope lens draws on some recent work in deep learning theory, which views neural networks as **max-affine spline operators** (MASOs) ([Balestriero and Baraniuk, 2018](http://proceedings.mlr.press/v80/balestriero18b.html)). For a mathematical description of the above perspective, see Appendix C.  The picture painted above is, of course, a simplified model of a far higher dimensional reality. 
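As a quick sanity check on this claim, here is a minimal numpy sketch - a tiny, randomly-initialized MLP used purely for illustration - that builds the effective affine transformation for the polytope containing a given input and confirms that it reproduces the network’s output:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-hidden-layer ReLU MLP with random (illustrative) weights.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
W3, b3 = rng.normal(size=(3, 8)), rng.normal(size=3)

def forward(x):
    h1 = np.maximum(0.0, W1 @ x + b1)
    h2 = np.maximum(0.0, W2 @ h1 + b2)
    return W3 @ h2 + b3

def polytope_affine_map(x):
    """Effective (A, c) such that forward(y) == A @ y + c for all y in x's polytope."""
    # Diagonal 0/1 masks recording which neurons are above threshold at x.
    D1 = np.diag((W1 @ x + b1 > 0).astype(float))
    h1 = D1 @ (W1 @ x + b1)
    D2 = np.diag((W2 @ h1 + b2 > 0).astype(float))
    # Compose the masked linear layers into a single affine transformation.
    A = W3 @ D2 @ W2 @ D1 @ W1
    c = W3 @ (D2 @ (W2 @ (D1 @ b1) + b2)) + b3
    return A, c

x = rng.normal(size=4)
A, c = polytope_affine_map(x)
print(np.allclose(forward(x), A @ x + c))  # True: within its polytope, the net is affine
```

Every input that shares the same activation pattern (i.e. lies in the same polytope) is mapped by exactly this pair `(A, c)`.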
When we add more neurons, we get a lot more hyperplanes and, correspondingly, a lot more polytopes! Here is a two dimensional slice of the polytopes in the 40768-dimensional input space of inception5a, with boundaries defined by all the subsequent layers:   ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/mhxi6twrrg3l2hznzmt0)Figure: This figure depicts the polytope boundaries that intersect with a two-dimensional slice through the 832 \* 7 \* 7 = 40768-dimensional input space of InceptionV1 layer inception5a. The slice was defined using the activation vectors caused by three images, one of a banana, a coffee cup, and a projector.  The boundaries are defined using all neurons from inception5a to the classification logits. There are many polytopes in high dimensional space. If we instead used a lower layer, e.g. inception3a, then there would be many, many more polytope boundaries.In fact, as we add neurons, the number of polytopes the input space is partitioned into grows exponentially[[7]](#fnnxrcvy6eqr). Such large numbers of polytopes become quite hard to talk about! Fortunately, each polytope can be given a unique code, which we call a ‘**spline code**’, defined in the following way: Consider the sequence of layers from L to L+K. These layers define a set of polytope boundaries in the input space to layer L. A polytope’s spline code is simply a binary vector of length M (where M is the total number of neurons in layers L to L+K) with a 1 where the polytope causes a neuron to activate above threshold and 0 otherwise. Notice that we can define a code for any sequence of layers; if we define a spline code from layer L to L+K, the codes correspond to the polytopes that partition layer L’s input space. There is therefore a duality to spline codes: Not only are they a name for the region of input activation space contained within each polytope, but they can also be viewed as labels for pathways through layers L to L+K.    ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/ljsugpo55crvffeuf8br)Figure: How spline codes are constructed in an MLP with ReLU activation functions. Activations in a set of layers are binarised according to whether each neuron is above or below threshold. (Partly adapted from [Hanin & Rolnick, 2019a](http://proceedings.mlr.press/v97/hanin19a/hanin19a.pdf))At least for deep ReLU networks, polytopes provide a mathematically correct description of how the input space is partitioned, unlike the naive version of the features-as-directions view which ignores the nonlinearities. However, polytopes are far more difficult to reason about than directions. They will need to give us greater predictive power to be worth the cost. Polytopes as the atoms of neural networks & polytope regions as their molecules ------------------------------------------------------------------------------- In the previous section, we discussed how it’s possible (in theory) to replace an entire ReLU network with each polytope’s affine transformation. Hence, polytopes provide a complete description of the input-output map of the network. Any inputs that belong to the same polytope are subject to the same affine transformation. In other words, the transformation implemented by the network is *invariant within a polytope*. But the invariance goes even further than individual polytopes; nearby polytopes implement similar transformations. 
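Continuing in the same toy-MLP style (weights and inputs are again made up purely for illustration), a spline code is just the binarized activation pattern, and the Hamming distance between two codes counts how many neurons switch on or off between two points in a layer’s input space:

```python
import numpy as np

rng = np.random.default_rng(1)
# Weights of two illustrative ReLU layers, L and L+1.
W1, b1 = rng.normal(size=(16, 10)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 16)), rng.normal(size=16)

def spline_code(x):
    """Binary vector: 1 where a neuron in layers L..L+1 is above threshold, else 0."""
    pre1 = W1 @ x + b1
    pre2 = W2 @ np.maximum(0.0, pre1) + b2
    return np.concatenate([pre1 > 0, pre2 > 0]).astype(int)

def hamming(code_a, code_b):
    """Number of neurons whose on/off status differs between two polytopes."""
    return int(np.sum(code_a != code_b))

x = rng.normal(size=10)
# A nearby input typically flips only a few neurons; an unrelated input flips many.
print(hamming(spline_code(x), spline_code(x + 0.01 * rng.normal(size=10))))
print(hamming(spline_code(x), spline_code(rng.normal(size=10))))
```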
To see why, consider two polytopes that share a boundary. Their spline codes differ by *only one* neuron somewhere in the network turning on or off - in other words, the pathway taken by the activations through the network is identical except for the activation status of one neuron. Therefore, assuming the weights of some neurons aren’t unusually large, polytopes that have similar spline codes implement similar transformations in expectation[[8]](#fn7bn8gghu3b9). Hamming distance in the space of spline codes thus corresponds to expected distance in transformation space.  It’s easy to see how this might be useful for semantics: If a network needs two similar-meaning inputs to be transformed similarly, all it needs to do is to project the inputs to nearby polytopes in hidden activation space. Here, the fundamental unit of semantics in the network, which we might call a feature, is a group of nearby polytopes that implement similar transformations. Notice that the addition of polytopes only modifies the features-as-directions view without replacing it entirely: Vectors in nearby polytopes usually share high cosine similarity, so ‘similar directions’ will correlate with ‘nearby polytopes’. Moreover, within a polytope the two views are identical. This lets us make a few testable predictions about the relationship between semantics and polytope boundaries: * Prediction 1: *Polysemantic directions overlap with multiple monosemantic polytope regions.* + The polytope lens makes a prediction about how polysemanticity is implemented in neural networks: The multiple meanings of the polysemantic direction will correspond to monosemantic regions that have nonzero inner product with that direction. * Prediction 2 : *Polytope boundaries reflect semantic boundaries* + Networks will learn to place more polytope boundaries between inputs of different classes than between the same classes. More generally, networks will learn to have regions denser with polytope boundaries between distinct features than between similar features. * Prediction 3: *Polytopes define when feature-directions are on- and off-distribution.* + Scaling hidden activation vectors eventually causes the prediction made by a classifier to change. It should be unsurprising that scaling the activations vectors of a nonlinear network well outside their typical distribution causes the semantics of directions to break. But neither the features-as-directions perspective nor the superposition hypothesis suggest what this distribution actually is. The polytope lens predicts that polytope boundaries define this distribution. Specifically, the class prediction made by the network should tend to change when the activation vector crosses a region of dense polytope boundaries. We find that evidence supports predictions 1 and 2, and prediction 3 appears to be only partially supported by evidence.  ### Prediction 1: Polysemantic directions overlap with multiple monosemantic polytope regions Our approach to understanding polysemantic directions is to instead begin by identifying something in a network that *is* monosemantic and work our way out from there, rather than starting with polysemantic directions and trying to figure out how they work. So, what *is* monosemantic in a neural network?  Neural networks implement approximately smooth functions, which means that small enough regions of activation space implement similar transformations. If similar representations are transformed in similar ways, it is likely that they “mean” similar things. 
This implies that small enough regions of activation space should be monosemantic, and indeed - this is why techniques like nearest-neighbor search work at all. To verify this claim, here we collect activations from a) the channel dimension of InceptionV1 and b) various MLP layers in GPT2 and cluster them using HDBSCAN, a hierarchical clustering technique[[9]](#fnxxlcfaiw1g9). We observe that the majority of clusters found are monosemantic in both networks. For example, we observe clusters corresponding to specific types of animal in inception4c, and clusters responding to DNA strings and to specific emotional states in the later layers of GPT2-small. See Appendix E for more examples. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/wwapqtkullrer4k59ep8)*Examples of clusters of activations in the output of the first branch of the 4c layer of InceptionV1. For each cluster, we plot the images and hyperpixel corresponding to the activations. Clusters were computed with HDBSCAN on the activations for one spatial dimension, and randomly chosen among clusters containing enough images.*![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/tmvg14vh7c1vmowuy0vm)Figure: Dataset examples of clusters in the pre-activations of the MLP in various layers of GPT2-small. Clusters were computed using HDBSCAN on a random sample of [the pile](https://pile.eleuther.ai/)’s test set. Each token in the test set is treated as a separate point for clustering, and the specific token that has been clustered has been highlighted in red in each instance. We observe clusters responding both to specific tokens and to semantic concepts (typically, but not exclusively, in the later layers). Instead of finding monosemantic regions by clustering activations, it’s also possible to find them by clustering spline codes. This is mildly surprising, since we’ve ostensibly removed all information about absolute magnitude - and yet it’s still possible to group similar-meaning examples together. However, a single spline code implicitly defines a set of linear constraints. These constraints, in turn, describe a set of bounding hyperplanes which confine the set of possible activations to a small region in space. Thus, much of the information about the magnitude is still retained after binarization. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/tuvh0p0qjrlddknape7l)Figure: Dataset examples of clusters in the pre-activations of the MLP in various layers of GPT2-small. Clusters were computed using HDBSCAN on a random sample of [the pile](https://pile.eleuther.ai/)’s test set. The distance matrix for clustering in the above examples was computed using Hamming distance on the binarized spline codes. Each token in the test set is treated as a separate point for clustering, and the specific token that has been clustered has been highlighted in red in each instance. We observe specific clusters in earlier layers that appear to be related to detokenization - i.e. grouping “http” and “https” together. Clusters in later layers tend to respond to higher-level semantics - synonyms for groups of patients in medical trials, for example. We were interested in seeing if we would observe a similar effect with direction vectors found using dimensionality reduction techniques such as PCA or NMF.
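Before turning to that, here is a rough sketch of the clustering procedure described above. The file name and hyperparameters are hypothetical stand-ins rather than our exact pipeline, but the shape of the computation is the same: collect activations, optionally binarize them into spline codes, and cluster with HDBSCAN.

```python
import numpy as np
import hdbscan                                   # pip install hdbscan
from scipy.spatial.distance import pdist, squareform

# Stand-in for a (num_tokens, d_mlp) array of MLP activations collected from
# one layer over a dataset sample (the collection code itself is omitted).
acts = np.load("layer4_mlp_activations.npy")     # hypothetical file

# Cluster the raw activations directly.
act_labels = hdbscan.HDBSCAN(min_cluster_size=30).fit_predict(acts)

# Or cluster the binarized spline codes, using a precomputed Hamming-distance
# matrix as in the figure above.
codes = (acts > 0).astype(np.uint8)
dists = squareform(pdist(codes, metric="hamming")).astype(np.float64)
code_labels = hdbscan.HDBSCAN(min_cluster_size=30, metric="precomputed").fit_predict(dists)

# Label -1 marks noise; each remaining cluster is inspected by looking at the
# dataset examples (tokens or images) assigned to it.
print(np.unique(act_labels, return_counts=True))
```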
In theory, directions found with such dimensionality reduction techniques should be those which explain the highest proportions of variance in the hidden space, and we would thus expect them to be amongst the most semantically consistent (monosemantic) ones. In a “strong” version of the polytope lens, we might expect that even these directions, which we would expect to be monosemantic, cross many polytope boundaries, potentially causing them to have different semantics at different magnitudes. However, the polytope lens does not preclude linear features - meaningful single directions are still possible in the latent space of a network with nonlinearities. To frame this in terms of paths through the network - it may be that there are linear features that are shared by all or most sets of paths. To test this, we took the activations for a set of examples from a hidden layer (in this case, layer 4) of GPT2-small, and binarized them to get their spline codes. We then clustered the codes using HDBSCAN, with the same parameters as earlier experiments. Separately, we ran NMF on the raw activations (with 64 components) to find a set of directions. For each NMF vector, we measure the cosine similarity between it and each activation sample that we clustered, and plot the histograms in the plots below. The colours represent the cluster label to which each activation has been assigned; we gave each cluster a semantic label by looking at the set of corresponding input samples. Since there are many clusters with extremely small cosine similarities that we are not interested in, we manually restrict the x-axis for each plot and display only the points with the largest similarities. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/h5sotxs9j6zlmla4oopp)*Cosine similarities with respect to NMF direction 49. Activations taken from the MLP in layer 4 of GPT2-Small, using data from The Pile's test set. The dataset examples with the highest cosine similarities are shown and coloured by their cluster label (ignoring the smallest clusters).* It turns out that the directions found using NMF *do* appear to be largely monosemantic - so both of the models we examined do seem to use features associated with directions to some extent, even if the basis directions still appear highly polysemantic. Using the same procedure, we can also find these monosemantic directions in InceptionV1:  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/fkfzyhmhllzcwtnidedt)![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/texh7vxiikauipzq76vf) The above experiments suggest that there do exist feature directions which are coherent across all polytopes in some specific layer - meaning that the affine transformations implemented across the set of polytopes are at least somewhat similar. ### Prediction 2: Polytope boundaries reflect semantic boundaries Why should we expect polytope boundaries to reflect semantic boundaries? One geometric intuition underlying this idea is that nonlinearities are needed to silence interference between non-orthogonal features in superposition. Polytope boundaries should therefore be placed between non-orthogonal feature directions so that activations in one feature direction don’t activate the other when they shouldn’t.
Another intuition is that neural networks are often used in situations where outputs are not linearly separable functions of the inputs, such as image classification. To solve such tasks, neural networks fold and squeeze the input data manifold into a shape that is linearly separable in subsequent layers ([Keup and Helias, 2022](https://arxiv.org/pdf/2203.11355.pdf)). Affine transformations on their own cannot improve linear separability - but since a ReLU activation maps negative values to zero, it can be thought of as making a fold in the data distribution, with the position of the fold being controlled by the previous transformation’s weights. Several ReLU neurons in combination can also act to expose inner class boundaries - making classification in later layers possible where it wasn’t in earlier ones - by “folding” regions of the distribution into new, unoccupied dimensions (see the figure below for a 1D geometric interpretation). For this reason we may expect to see a concentration of ReLU hyperplanes around such distributions, as the network acts to encode features for later layers. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/pxyzii77l2hzwktbtoka)Figure: How a ReLU neuron can act to expose a previously non-separable class boundary, reproduced from [Keup and Helias, (2022)](https://arxiv.org/pdf/2203.11355.pdf). The solid black line is a ReLU hyperplane, and the dashed line represents a potential decision boundary. In higher dimensions, this effect requires several ReLU hyperplanes to act in conjunction. Images of different classes will have many different features. Therefore, according to the polytope lens, activations caused by images from different classes should be separated by regions of denser polytope boundaries than those caused by images from the same class. Can we see this by looking at heat map visualizations of polytope density? Unfortunately, the network has too many neurons (and thus too many boundaries) to observe any differences directly. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/g0qkodv6u3zfljocqntv)Figure: Heat maps of polytope density in a 2-D slice through the 40,768-dimensional input space of layer inception5a. The 2-D slice was made such that the activation vectors of three images lie on the plane. We then calculated the spline codes (using layers inception5a to the output) for every point in a 4,096 x 4,096 grid, computed the Hamming distance between codes in adjacent pixels, and applied a Gaussian smoothing. Observe that the densest region is the part of the image separating the three inputs. Method adapted from [Novak et al., (2018)](https://arxiv.org/pdf/1802.08760.pdf), who calculated similar images for small networks. But when we measure the polytope densities directly (by dividing the Hamming distance between two activation vectors’ spline codes by the Euclidean distance between them), it indeed turns out to be the case that activations caused by images of different classes are separated by regions denser in polytope boundaries than activations caused by images of the same class: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/uvb1ahqeudbq0apt387n)Figure: The average normalized polytope boundary density between the activation vectors caused by images of the same or different classes. The left plot is for a trained network; the right an untrained network.
Since images of different classes will also produce distant activations, we should consider the density of polytope boundaries rather than the absolute number of polytope boundaries between the activations produced by different images. To calculate the polytope boundary density between two points, we simply divide the Hamming distance between their spline codes by the Euclidean distance between them. The polytope densities are normalized by dividing by the average polytope density between all pairs of vectors (both intra- and inter-class). Only for the trained network is the intra-class polytope density lower. This difference increases as we move higher in the network (Figure below). The error bars are 99% bootstrapped confidence intervals. A single asterisk indicates a statistically significant difference according to Welch's t-test (t(1873.3)=-14.7; p=3.8e-46). Note the y-axis begins at 80%; the difference is small, but significant. We see a similar story when we interpolate (instead of simply measuring the total distances) between two images of the same or different classes (Appendix A). The intra- and inter-class difference is small, but significant. The difference gets more robust as we look at higher layers. The polytope lens predicts this because activations in lower layers represent low-level features, which are less informative about image class than features in higher layers. For example, two images of dogs might be composed of very different sets of lines and curves, but both images will contain fur, a dog face, and a tail. Because more class-irrelevant features are represented in lower layers, the percentage of polytope boundaries that separate class-relevant features is smaller there than in higher layers. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/ew1gyr3kyjwa8vk1j5k4)Figure: The difference between the normalized polytope density between intra- and inter-class images gets larger in layers closer to the output. ### Prediction 3: Polytopes define when feature-directions are on- and off-distribution One of the responses to the activation-scaling experiments that we’ve encountered is that we’re being unfair to the networks: We shouldn’t expect their semantics to remain intact so far outside of their typical distribution. We agree! That there exists such a distribution of validity is, in fact, a central motivation for looking at networks through the polytope lens. The features-as-directions hypothesis doesn’t by itself make claims about the existence of a distribution of semantic validity because it assumes that representations are linear and therefore globally valid. The polytope lens predicts that scaling an activation vector will change the semantics of a given direction only when it crosses many polytope boundaries. It makes this prediction because the larger the distance between two polytopes, the more different (in expectation) is the transformation implemented by them. Polytope boundaries thus suggest a way to identify the distribution of semantic validity. Is this the case empirically? Partially. When we plot the local polytope density in the region near the scaled vector, we see that there is a characteristic peak between the activation vector and the origin. This peak occurs even for activation directions defined by Gaussian noise, but is absent in untrained networks (Appendix B).
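A rough sketch of this measurement (using an illustrative toy network rather than InceptionV1, with made-up layer sizes and scales) is to scale a fixed activation vector, recompute the spline code at each scale, and record how many boundaries are crossed per unit Euclidean distance moved, alongside the class prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative stand-ins for the layers above the hidden layer being scaled.
W1, b1 = rng.normal(size=(64, 32)), rng.normal(size=64)
W2, b2 = rng.normal(size=(10, 64)), rng.normal(size=10)

def spline_code(h):
    return (W1 @ h + b1 > 0).astype(int)

def logits(h):
    return W2 @ np.maximum(0.0, W1 @ h + b1) + b2

h = rng.normal(size=32)                 # the activation vector to be scaled
scales = np.linspace(0.05, 3.0, 200)

densities, preds = [], []
prev_code = spline_code(scales[0] * h)
for s_prev, s in zip(scales[:-1], scales[1:]):
    code = spline_code(s * h)
    step = (s - s_prev) * np.linalg.norm(h)              # Euclidean distance moved
    densities.append(np.sum(code != prev_code) / step)   # boundaries crossed per unit distance
    preds.append(int(np.argmax(logits(s * h))))
    prev_code = code
# (plotting `densities` and `preds` against `scales` is omitted)
```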
There appears to be a ‘shell’ of densely packed polytope boundaries surrounding the origin in every direction we looked. We’re not completely sure why polytope boundaries tend to lie in a shell, though we suspect that it’s likely related to the fact that, in high dimensional spaces, most of the hypervolume of a hypersphere is close to the surface. Scaling up the activation, we see that the vector crosses a decreasing number of polytope boundaries. This is what you’d expect of polytope boundaries that lie near the origin and extend to infinity; as a result, polytopes further from the origin will be made from boundaries that become increasingly close to being parallel. Therefore a vector crosses fewer polytope boundaries as it scales away from the center. We nevertheless see plenty of class changes in regions that are distant from the origin and have low polytope density. This wasn’t exactly what the polytope lens predicted, which was that dense polytope boundaries would be located where there were class changes. Instead we observed dense polytope boundaries as we scale *down* the activation vector and not as we scale it up. It appears that polytope boundaries only demarcate the inner bound of the distribution where a given direction means the same thing. That class changes can be observed for large-magnitude activation vectors despite a low polytope boundary density might simply reflect that it’s easier for large-magnitude activations to move large distances when the transformations they undergo are small. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/x5vqsmq1nisqmm0misyl)Figure: The polytope density (black line) overlaying the class logits (coloured lines) for two images where the activation in a hidden layer - inception3a - is scaled. Polytope density peaks around halfway between the unscaled activation vector and the origin. So polytope boundaries reflect - to some extent - the semantics learned by the network; they capture transformational invariances in the network, reflect feature boundaries, and seem to demarcate the inner bound of where feature-directions should be considered on- or off-distribution. They also seem to be involved in "encoding" features from raw data. Polytopes thus have many excellent properties for describing what is going on inside neural networks - but, as we will discuss in the next section, it's not clear how to harness polytopes to create [**Decomposable**](https://transformer-circuits.pub/drafts/toy_model_v2/index.html) descriptions of the features in a network. Whilst studying neural networks through their polytope regions is a more "complete" description in some sense, it does not (so far) let us understand network representations in terms of features that can be understood independently. Discussion ========== Our effort to account for nonlinearities in neural networks has forced us to consider not just the direction of neural activations, but also their scale. This is because nonlinearities behave differently at different activation scales. Polytopes offer a way to think about how networks use nonlinearities to implement different transformations at different activation scales. But with many neurons come exponentially many polytopes. Spline codes present a scalable way to talk about the exponential number of polytopes in neural networks since we can talk about "groups" or "clusters" of spline codes instead of individual codes.
Unfortunately, accounting for nonlinearities in this way has cost us rather a lot. Instead of dealing with globally valid feature directions, we now deal with only locally valid feature directions in activation space. By studying the structure of spline codes rather than the structure of activations, polytopes offer us the ability to identify regions of activation space that have roughly similar semantics. Are the costs worth the gains? The short answer is that we’re not sure. The polytope lens is a way to view neural networks that puts nonlinearities front and center; but if neural networks use primarily linear representations (as hypothesized by [Elhage et al., 2022](https://transformer-circuits.pub/2022/toy_model/index.html)), then such a nonlinearity-focused perspective could potentially offer relatively little compared to a purely linear perspective, since the abstraction of a globally valid feature direction will not be particularly leaky. The lesson we take from observations of superposition and polysemanticity is that networks are often not operating in the linear regime; these observations suggest that networks are making nontrivial use of their nonlinearities to suppress interference from polysemantic directions. This is also suggested by the empirical performance of large networks, which substantially exceeds that of equivalent purely linear models. It therefore appears that we need a way to account for how different regions of activation space interact differently with nonlinearities and how this affects the semantics of the network’s representations. We ultimately think that mechanistic descriptions of networks with superposition which take nonlinearity into account will look somewhat different from previous mechanistic descriptions that tended to assume linearity ([Elhage et al., 2022](https://transformer-circuits.pub/2022/toy_model/index.html)). The polytope lens might represent an important component of such descriptions, but we’re in no way certain. If it were, what might mechanistic descriptions of neural networks look like through the polytope lens? We think a potentially important idea for describing what neural networks have learned might be ‘**representational flow**’ between polytope regions. The input space of a layer may have regions that are semantically similar yet spatially distant, and the job of the network is to learn how to project these spatially distant points to similar regions of output space. For example, the two images of cats in panel (a) of the figure below are distant in input space yet semantically similar in output space; the network performs **representational convergence** between representations in the input and output spaces. Representational convergence may also happen between arbitrary layers, such as between the input space and layer L if the inputs happen to share features that are represented in that layer’s semantic space (panel b). The converse is also possible: A network implements **representational divergence** if spatially similar inputs are semantically different from the perspective of the network at layer L (panel c). In order to implement representational *convergence*, different polytopes need to have affine transformations that project them to similar parts of the space in later layers. Conversely, representational *divergence* requires transformations that project nearby regions of activation space to distant regions in the spaces of later layers.
Networks achieve both of these things by having the right affine transformations associated with the polytope regions involved. Nonlinearities mean that vectors that have an identical direction but different scales can take different pathways through the network. The benefit of thinking about networks in terms of representational flow is that it therefore allows us to talk about the effects of nonlinearities on activation directions of different scales.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/ah3uyhvf0gglumxcpcxj) *Figure: Representational flow between polytope regions might be a useful notion in mechanistic descriptions of neural networks.* Recent work on superposition by [Elhage et al., (2022)](https://transformer-circuits.pub/2022/toy_model/index.html) argues that models with superposition will only be understood if we can find a sparse overcomplete basis (or if we remove superposition altogether, an option we don’t consider here). Finding this basis seems like a crucial step toward understanding, but we don’t think it’s the full story. Even if we could describe a layer’s input features in terms of a sparse overcomplete basis, each combination of those sparse feature directions will have different patterns of interference which each interact differently with the nonlinearities. Thus, the elements of the sparse basis that are active will vary depending on the input vector; we therefore haven’t found a way around the issue that nonlinearities force us to use local, rather than global, bases. Consequently, for most combinations it’s hard to predict exactly which activations will be above threshold without calculating the interference terms and observing empirically which are above or below threshold; this is a problem for mechanistic interpretability, where we’d like to be able to mentally model a network’s behavior without actually running it. Therefore, a sparse overcomplete basis by itself wouldn’t let us avoid accounting for nonlinearities in neural networks. Introducing assumptions about the input distribution such that interference terms are always negligibly small might however let us make predictions about a network’s behavior without adding schemes, like polytopes, that attempt to account for nonlinearities.  Our work is more closely related to the search for an overcomplete basis than it might initially appear. Clustering activations can be thought of as finding a k-sparse set of features in the activations where k=1 (when k is the number of active elements). In other words, finding N clusters is equivalent to finding an overcomplete basis with N basis directions, only one of which can be active at any one time. This clearly isn’t optimal for finding decomposable descriptions of neural networks; ideally we’d let more features be active at a time i.e. we’d like to let k>1, but with clustering k=1. But clustering isn’t completely senseless - If every combination of sparse overcomplete basis vectors interacts with nonlinearities in a different way, then every combination *behaves* like a different feature. Fortunately, even if it were true that every combination of sparse overcomplete features interacted with nonlinearities in a different way, their interactions almost definitely have statistical and geometric structure, which we might be able to understand. Overcomplete basis features will be one component of that structure, but they don’t account for scale; polytopes do. 
A path toward understanding superposition in neural networks might be an approach that describes it in terms of an overcomplete basis *and* in terms of polytopes. A potential future research direction might therefore be to find overcomplete bases in spline codes rather than simply clustering them. This might be one way to decompose the structure of representational flow into modules that account for both activation directions and activation scale. Many other questions remain unaddressed in this post. We think they will be important to answer before the polytope lens can be used in as many circumstances as the features-as-directions perspective has been.  * **Fuzzy polytope boundaries with other activations** - For the sake of simplicity, we’ve been assuming that the networks discussed in the article so far have used piecewise linear activation functions such as ReLU. But many networks today, including large language models, often use smooth activations such as GELU and softmax, which means that their polytopes won’t really be polytopes - their edges will be curvy or even ‘blurred’. Some prior work exists that extends the polytope lens to such activations ([Balestriero & Baraniuk, 2018](https://arxiv.org/abs/1810.09274)). See Appendix C for further discussion. * **How do we extend the polytope lens to transformers?** Specifically, how should we talk about polytopes when attention between embedding vectors makes activations (and hence polytopes) interact multiplicatively across sequence positions? * **How do adversarial examples fit into this picture?** Are adversarial examples adversarial because they perturb the input such that it crosses many polytope boundaries (polytope ridges)? And can we use this potential insight in order to make networks less susceptible to such attacks? Related work ============ Interpreting polytopes, single neurons, or directions ----------------------------------------------------- The geometric interpretation of ReLU networks was, to our knowledge, first laid out by [Nair and Hinton, (2010)](https://www.cs.toronto.edu/~fritz/absps/reluICML.pdf), who note that each unit corresponds to a hyperplane through the input space, and that N units in concert can create 2^N regions (what we call polytopes), each of which can be viewed as a separate linear model. [Pascanu et al., (2014)](https://arxiv.org/pdf/1312.6098.pdf) undertook a more detailed theoretical analysis of the number of these linear regions in ReLU models. The fact that each of these regions could be identified as a unique *code*, which can then be used for interpretability analysis and clustering, was explored by [Srivastava et al., (2014)](https://arxiv.org/abs/1410.1165), who studied a small MNIST network by clustering the codes at its final layer.  That these regions take the form of convex polytopes is also not a novel concept, and has been explored in a number of prior works ([Balestriero & Baraniuk, 2018a](https://arxiv.org/abs/1805.06576), [Novak et al., 2018](https://arxiv.org/pdf/1802.08760.pdf), [Hanin & Rolnick, 2019a](https://arxiv.org/pdf/1906.00904.pdf), [Rolnick & Kording, 2019](http://proceedings.mlr.press/v119/rolnick20a/rolnick20a.pdf), [Xu et al., 2021](https://openreview.net/forum?id=EQjwT2-Vaba)).
In this writeup, we have relied particularly heavily on conceptualizing DNNs as compositions of *max-affine spline operators*, as introduced in [Balestriero & Baraniuk, (2018a)](https://arxiv.org/abs/1805.06576), and expanded upon in a series of further works ([Balestriero & Baraniuk, 2018b](https://arxiv.org/pdf/1810.09274.pdf), [Balestriero et al., 2019](https://arxiv.org/pdf/1905.08443.pdf)). However, in much of the wider interpretability field – particularly in papers focused on interpretability in language models – this point of view has gone largely unnoticed, and interpretation efforts have tended to try to identify the role of single neurons or linear combinations of neurons (directions). Interpretable neurons have been noted fairly widely in various works focusing on vision models ([Szegedy et al., 2014](https://arxiv.org/pdf/1312.6199.pdf), [Bau et al., 2017](https://arxiv.org/pdf/1704.05796.pdf)). Interpretable directions were also a central focus of the [Circuits Thread, (Olah et al., 2020)](https://distill.pub/2020/circuits/), where they used knowledge built up from interpreting neurons in early layers of InceptionV1 to hand-code curve detectors that, when substituted for the curve detectors in the original network, induced minimal performance loss. Interpretable single neurons have also been found in language models ([Geva et al., 2020](https://arxiv.org/abs/2012.14913), [Durrani et al., 2020](https://arxiv.org/pdf/2010.02695.pdf), [Dai et al., 2021](https://arxiv.org/abs/2104.08696), [Elhage et al., 2022](https://youtu.be/DA3AA1w5TC0?t=1044)), although monosemantic neurons seem comparatively less common in this class of model. [*An Interpretability Illusion for BERT* (Bolukbasi et al., 2021)](https://arxiv.org/abs/2104.07143) highlighted the fact that the patterns one might see when inspecting the top-k activations of some neuron may cause us to spuriously interpret it as encoding a single, simple concept, when in fact it is encoding something far more complex. They also noted that many directions in activation space that were thought to be globally interpretable may only be locally valid. Polysemanticity and Superposition --------------------------------- The earliest mention of polysemanticity we could find in machine learning literature was from [Nguyen et al., (2016)](https://arxiv.org/pdf/1602.03616.pdf). In their paper they identify the concept of *multifaceted* neurons. That is, neurons which fire in response to many different types of features. In this work, we define *polysemantic* neurons as neurons which fire in response to many different *unrelated* features, and Nguyen et al. identify an example of this in their supplementary material (Figure S5). Work by Olah et al., [*Feature Visualization*](https://distill.pub/2017/feature-visualization/), identified another way to elicit polysemantic interpretations and helped to popularize the idea. They note that, as well as there being neurons which represent a single coherent concept, “... *there are also neurons that represent strange mixtures of ideas. Below, a neuron responds to two types of animal faces, and also to car bodies. Examples like these suggest that neurons are not necessarily the right semantic units for understanding neural nets*.” *Image from* [*Olah et al., 2017*](https://distill.pub/2017/feature-visualization/)*, depicting a polysemantic neuron.*
Even before this, the possibility that individual neurons could respond to multiple features was discussed in some early connectionist literature, including [Hinton, (1981)](http://www.cs.toronto.edu/~hinton/absps/shape81.pdf). In neuroscience, polysemanticity is usually called ‘mixed selectivity’. Neuroscience has only in the last decade or two developed the tools required to identify and study mixed selectivity. Since then, it has been the subject of increasing attention, especially its role in motor- and decision-neuroscience ([Churchland et al., 2007](https://journals.physiology.org/doi/full/10.1152/jn.00095.2007); [Rigotti et al., 2013](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4412347/); [Mante et al., 2013](https://www.nature.com/articles/nature12742)). For a review of mixed selectivity in neuroscience, see [Fusi et al., (2016)](https://earlkmiller.mit.edu/wp-content/uploads/2016/02/Fusi-Miller-Rigotti-CONB-2016.pdf). Recent work from [Elhage et al., (2022)](https://transformer-circuits.pub/2022/toy_model/index.html#discussion) sheds light on a phenomenon that they term “superposition”. Superposition occurs when a neural network represents more features than it has dimensions, and the mapping from features to orthogonal basis directions can no longer be bijective. This phenomenon is related to, but not the same as, polysemanticity; it may be a cause of some of the polysemantic neurons we see in practice. They investigate toy models with non-linearities placed at the output layer, and show that superposition is a real phenomenon that can cause both mono- and polysemantic neurons to form. They also describe a simple example of computation being performed on features in superposition. Finally, they reveal that superposition can cause a different type of polytope to form - in their toy model, features are organized into geometric structures that appear to be a result of a repulsive force between feature directions which acts to reduce interference between features. It’s worth emphasizing that the polytopes discussed in their work aren’t the same kind as in ours: For one, our polytopes lie in activation space whereas theirs lie in the model weights. Perhaps a more fundamental divergence between Elhage et al.’s model and ours is the assumption of linearity - the idea that features are represented by a single direction in activation space. As we explained in earlier sections, we believe that assuming linearity will yield only partial mechanistic understanding of nonlinear networks. While globally valid feature directions would simplify analysis, in practice we struggle to see how assuming linear representations offers a way around nonlinearity. Appendix ======== A. Polytope density while interpolating between activations caused by images ---------------------------------------------------------------------------- ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/bbuppi0bcszso1z8c24v)Figure: The polytope density for each class during a spherical interpolation between the activations caused by images of different classes in inception3a. The polytope codes were computed from the activations at layer 3a to the output layer. The polytope density was estimated by sampling 150 random points around each position during the interpolation and computing the number of polytopes passed through versus the Euclidean distance. The interpolation path passes through multiple other classes.
We see that the polytope density is highest in the intermediate region where there is much class change between intermediate classes. This trend is relatively weak, however. This provides tentative evidence in favor of the relationship between polytope density and semantic change in representation.![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/wbhvkjmby9oqrgadz2l2)Figure: Mean polytope density (averaged over 200 image interpolations) by spherically interpolating between different class examples on each layer. Dotted lines represent the mean interpolation between images of the same class and solid lines represent the mean interpolation between images of different classes. The shaded regions represent the standard error of the mean. The polytope codes were computed from the embedding at the labeled layer to the output space. For lower layers, where the ImageNet class labels are less informative of the semantic features, we see that the polytope density exhibits a double-dip phenomenon where it increases when about halfway interpolated, indicating that interpolation leaves the manifold of typical activations. For later layers, the polytope density decreases and the curve flattens during interpolation, implying that at these layers there are more monosemantic polytopes and the class labels are more representative of the feature distribution. B. Scaling activation vectors and plotting polytope density ----------------------------------------------------------- ### Untrained network ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/ouzhycrf0m3glph2iqgz)Note that the unchanging class in the untrained network is due to a phenomenon that resembles ‘rank collapse’: Even though the input and early activations are different, the activations of the untrained network converge on the same output. We believe this might be due to a quirk of our variant of InceptionV1 (perhaps its batchnorm), but we haven’t investigated why exactly this happens. ### With Gaussian noise activations ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/p7flxebbapn9r3cdieto)  C. Mathematical account of neural networks as max affine spline operators (MASOs) --------------------------------------------------------------------------------- In this section we give an account of some recent theory from [Balestriero and Baraniuk, (2018)](http://proceedings.mlr.press/v80/balestriero18b.html) that links deep neural networks to approximation theory via spline functions and operators. More specifically, the authors describe deep neural networks with piecewise linear activation functions (like ReLU) as compositions of *max-affine spline operators* (MASOs), where each layer represents a single MASO. A MASO is an *operator* composed of a set of individual *max-affine spline* functions (MASs), one for each neuron in a given nonlinear layer. We won’t go too deep into spline approximation theory here, but you can think of a spline function approximation in general as consisting of a partition of the input space into R regions, with a simple local mapping in each region.
The *affine* part means that this mapping consists of an affine transformation of the input in a given region:  [![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/sz62rk3dottcxkihrhuy)](https://www.codecogs.com/eqnedit.php?latex=a_r%20x%20%2B%20b_r%20%5C%3B%20%5Ctext%7Bfor%7D%20%5C%3B%20r%20%3D%201%2C%20...%2C%20R#0) The *max* part means that, instead of needing to specify the *partition* *region* of our input variable in order to determine the output, we can simply take the maximum value when we apply the entire set of affine transformations for each region: [![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/hnspr8hnm0unyeiwyjjb)](https://www.codecogs.com/eqnedit.php?latex=z(x)%20%3D%20%5Cunderset%7Br%3D1%2C%20...%2C%20R%7D%7B%5Cmax%7Da_rx%2Bb_r#0) A visual example is helpful to understand why this works. Suppose we have a spline approximation function with R=4 regions: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/ci44ywted9u4m79bv9mz) Each red line represents a single spline with a corresponding affine transformation (ar, br), and the dotted light blue line represents the maximum value of all the affine transformations at each x location. We can see that it follows an approximation of the convex function (in dark blue). A single ReLU unit can be expressed as a special case of *max-affine spline* with R=2 regions: [![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/uiblqdqb1xzxprvc2eg9)](https://www.codecogs.com/eqnedit.php?latex=relu(x)%20%3D%20%5Cunderset%7Br%3D1%2C%202%7D%7B%5Cmax%7D%20%5C%3B%20a_rx%2Bb_r#0) Where  (a1,b1)=(0,0) and (a2,b2)=(Wi,bi), which are the weight and bias vectors for a given neuron. An entire ReLU layer can then be seen simply as a concatenation of d of these R=2  MASs, where d is the width of the layer – this is our MASO. This becomes slightly more complicated for smooth activation functions like GELU and Swish. But, fortunately, in a [later paper](https://arxiv.org/pdf/1810.09274.pdf) the same authors extend their framework to just such functions. In summary - smooth activation functions must be represented with a *probabilistic* spline code rather than a one-hot binary code. The corresponding affine transformation at the input point is then a linear interpolation of the entire set of affine transformations, weighted by the input point’s probability of belonging to each region. D. Note on Terminology of Superposition, Interference, and Aliasing ------------------------------------------------------------------- The concepts referred to by the terms ‘superposition’ and ‘interference’ [Elhage et al., (2022)](https://transformer-circuits.pub/2022/toy_model/index.html) have parallel names in other literature. We provide this footnote with the hope of inspiring links between mechanistic interpretability and related results in signal processing, systems theory, approximation theory, physics, and other fields.  The [superposition principle](https://en.wikipedia.org/wiki/Superposition_principle) in the theory of linear systems refers to the fact that states of or solutions to a linear system may be added together to yield another state or solution. For example, solutions to linear wave equations may be summed to yield another solution. 
D. Note on Terminology of Superposition, Interference, and Aliasing
-------------------------------------------------------------------

The concepts referred to by the terms ‘superposition’ and ‘interference’ [Elhage et al., (2022)](https://transformer-circuits.pub/2022/toy_model/index.html) have parallel names in other literature. We provide this footnote with the hope of inspiring links between mechanistic interpretability and related results in signal processing, systems theory, approximation theory, physics, and other fields.

The [superposition principle](https://en.wikipedia.org/wiki/Superposition_principle) in the theory of linear systems refers to the fact that states of or solutions to a linear system may be added together to yield another state or solution. For example, solutions to linear wave equations may be summed to yield another solution. In this sense, superposition tells us that we can mathematically deduce the action of a system on any input from its action on a set of orthogonal basis vectors. This usage clashes with its usage in the mechanistic interpretability literature so far, where it has often been used to refer to systems without such a decomposition property. ‘Interference’ generally refers to superposition applied to linear waves. Specifically, the components of two waves interfere with each other, but orthogonal components within a wave do not. The notion of ‘superposition’ and ‘interference’ as used in [Elhage et al., (2022)](https://transformer-circuits.pub/2022/toy_model/index.html), where different features fail to be completely independent and inhibit correct measurements, is similar to the idea of [aliasing](https://en.wikipedia.org/wiki/Aliasing) in other fields. The term 'aliasing' originates in signal processing. In that context, aliasing arose from the indistinguishability of waves of different frequencies under discrete sampling schemes. Aliasing has come to refer more generally to the phenomenon in which a set of desired quantities (e.g. features) fails to be orthogonal with respect to a measurement basis. If we wish to determine the value of *n* features from *k << n* measurements, some sets of feature values may yield the same measurements. In the case of sampling waves, high-frequency waves may appear the same as low-frequency waves. In the case of approximating functions from *k* sample points, high-degree polynomials may take the same values on those *k* points (see [ATAP Chapter 4](https://people.maths.ox.ac.uk/trefethen/ATAP/ATAPfirst6chapters.pdf) for a discussion in the case of Chebyshev interpolation). In image processing, [anti-aliasing](https://en.wikipedia.org/wiki/Spatial_anti-aliasing) is used to deal with visual artifacts that come from high-frequency components being indistinguishable from lower frequency components.

Quantum mechanics uses the conventions we have described. A quantum system with two possible classical states |0> and |1> has its quantum state described as an orthogonal superposition of the form a|0>+b|1> where *a* and *b* are complex numbers. The two classical states do not ‘interfere’ with each other. Rather, two quantum states may be added together, with components along the same basis vector interfering (orthogonal components do not). Interference and superposition in this context are not referring to entanglement. Just as we may represent (|0>+|1>)/√2 as a superposition of the states |0> and |1>, we may also represent the state |0> as a superposition of the states (|0>+|1>)/√2 and (|0>−|1>)/√2. The important detail regarding ‘superposition’ is the additivity, not the particular choice of classical states for our representation. The [quantum harmonic oscillator](https://en.wikipedia.org/wiki/Quantum_superposition) has eigenstates (orthogonal basis vectors for the system) described by [Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials). If we approximate the Hermite polynomials with an asymptotic approximation, we will observe aliasing due to the failure of our approximation to be perfectly orthogonal.
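As a small numerical illustration of this sense of aliasing (a toy sketch of our own, not an experiment from this post), the snippet below stores n = 6 features along k = 3 measurement directions; because the directions cannot all be orthogonal, reading out one feature picks up interference from the others, and two different feature vectors can be indistinguishable under the measurement:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3                      # n features squeezed into k < n dimensions
D = rng.normal(size=(k, n))
D /= np.linalg.norm(D, axis=0)   # one unit-norm direction per feature (columns of D)

def measure(features):
    """Project a feature vector into the k-dimensional measurement space."""
    return D @ features

def readout(measurement):
    """Naive linear readout of each feature: dot with its own direction.
    With non-orthogonal directions, active features bleed into inactive ones."""
    return D.T @ measurement

x = np.zeros(n)
x[0] = 1.0                                 # only feature 0 is active
print(np.round(readout(measure(x)), 2))    # nonzero 'interference' on the other features

# Aliasing: a different feature vector can give (numerically) the same measurement,
# because D has a nontrivial null space when k < n.
null_direction = np.linalg.svd(D)[2][-1]   # a direction D maps (almost) to zero
x_alias = x + null_direction
print(np.allclose(measure(x), measure(x_alias)))  # True: indistinguishable under D
```

This is the same tradeoff between capacity and orthogonality mentioned in footnote 5 below.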
E. Examples of Text Clusters from GPT2-Small
--------------------------------------------

Spline code clusters (computed with codes from layer L -> output):
------------------------------------------------------------------

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/k719s8yvjou2wta6buhg)A cluster responding to figure and table references in latex documents. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/mho7vnm9wlnepkgzcife)A cluster responding to decimal points in numbers.![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/xsvtk6xtjx5durzgpacs)A cluster responding to words followed by commas (or conjunctive pronouns?).![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/bncajcafhkpozbsk1n75)A cluster responding to spans of time.

Activation clusters:
--------------------

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/l9flnllcxdeoqqm7c4fk)A 'detokenization' cluster that responds both to the word “is” and its contraction.![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/i20lt8izludhekiow1qh)A cluster responding to dates.![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/pandyd4sgurzhjddzjn0)A cluster responding to forward slashes in file paths.![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/eDicGjD9yte6FLSie/wtxcj62kx4gvffyc1vds)A cluster responding to multiples of ten (verbal and numeric).

1. **[^](#fnrefyp6wk11agmm)**And, with some relaxations to ‘soft’ polytopes, the polytope lens might also let us mechanistically describe neural networks with activations such as GELU and Swish. Some prior work exists that extends the polytope lens to such activations ([Balestriero & Baraniuk, 2018](https://arxiv.org/abs/1810.09274)). See Appendix C for further discussion. 2. **[^](#fnreffj8lun8q4lh)** In neuroscience, polysemanticity is called mixed selectivity ([Fusi et al., 2016](https://earlkmiller.mit.edu/wp-content/uploads/2016/02/Fusi-Miller-Rigotti-CONB-2016.pdf)). 3. **[^](#fnrefo89s50smuz)**We chose InceptionV1 since it has served as a kind of ‘model system’ in previous mechanistic interpretability work. But the Pytorch [implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/googlenet.py) of the InceptionV1 architecture (also known as GoogLeNet), it transpires, differs from the original. The original had no batch norm, whereas the Pytorch version does. 4. **[^](#fnrefesps52fntne)**Similar encoding methods have been widely observed in neuroscience, where they are called “population coding”. Population codes have been found or hypothesized to exist in many neural regions and especially the [motor cortex](https://www.ini.uzh.ch/~kiper/georgopoulos.pdf). 5. **[^](#fnref9itzt8xhtlf)**The idea that non-orthogonal representations interfere with each other has a long history in machine learning, starting with the study of the memory capacity of associative memories such as Hopfield networks, which face the same underlying tradeoff between information capacity and orthogonality ([Hopfield, 1982](https://www.pnas.org/doi/pdf/10.1073/pnas.79.8.2554); [Abu-Mostafa & St.
Jacques, 1985](https://authors.library.caltech.edu/7008/1/ABUieeetit85.pdf)). When features are encoded in non-orthogonal directions, the activation of one feature coactivates all feature directions sharing a non-zero dot product with it, leading to interference. 6. **[^](#fnref662ir63ay26)**The arguments we make in support of the Polytope Lens  also apply to other activation functions such as GELU. But for simplicity we stick to piecewise linear activation functions because it’s easier to think geometrically in terms of straight lines rather than curvy ones. 7. **[^](#fnrefnxrcvy6eqr)**Although exponential, it’s not as many as one would naively expect - see [Hanin & Rolnick, (2019b)](https://arxiv.org/abs/1906.00904). 8. **[^](#fnref7bn8gghu3b9)**This could be quantified, for instance, as the Frobenius norm of the difference matrix between the implied weight matrices of the affine transformations implemented in each polytope. 9. **[^](#fnrefxxlcfaiw1g9)**While we use HDBSCAN in this work, the specific algorithm isn't important. Any clustering algorithm that groups together any sufficiently nearby activations or codes should yield monosemantic clusters.
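To ground footnotes 8 and 9 in something runnable, here is a toy sketch (entirely our own, with arbitrary layer sizes) of how one could read off the polytope code of an input in a small ReLU MLP, recover the affine map the network implements inside that polytope, and compare two polytopes via the Frobenius norm of the difference between their implied weight matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 16, 3
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def polytope_code(x):
    """Binary code recording which hidden ReLUs are active for input x."""
    return (W1 @ x + b1 > 0).astype(float)

def implied_affine_map(code):
    """Inside one polytope the network is exactly affine: y = A x + c,
    with A and c determined by which ReLUs the code switches on."""
    A = W2 @ (code[:, None] * W1)
    c = W2 @ (code * b1) + b2
    return A, c

x1, x2 = rng.normal(size=d_in), rng.normal(size=d_in)
A1, c1 = implied_affine_map(polytope_code(x1))
A2, c2 = implied_affine_map(polytope_code(x2))

# Sanity check: the affine map reproduces the network's output at x1.
assert np.allclose(A1 @ x1 + c1, W2 @ np.maximum(W1 @ x1 + b1, 0) + b2)

# A crude 'how different are two polytopes' measure (cf. footnote 8):
print(np.linalg.norm(A1 - A2, ord="fro"))
```

In this framing, clustering nearby codes (with HDBSCAN or any similar algorithm, as footnote 9 notes) amounts to grouping inputs whose codes, and hence implied affine maps, are similar.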
bf57761b-98fb-4bd2-9a52-0cb792d3b122
trentmkelly/LessWrong-43k
LessWrong
The Structure of the Guild of the Rose This is a link post for The Structure of the Guild, part one of a series of ongoing retrospectives on the progress of the Guild of the Rose, an organization for rationalist education and community.
d738798c-718f-4494-8ced-4063ba062819
trentmkelly/LessWrong-43k
LessWrong
What data generated that thought? In Techniques of the Selling Writer, Dwight W. Swain gives advice on receiving advice: > George Abercroft is an action writer. "Start with a fight!" is his motto. And for him, it works. > > But Fred Friggenheimer's witch-cult yarn, as he conceives it, puts heavy emphasis on atmosphere. The fight he tries to stick in like a clove in a ham at the beginning, following George's rule, destroys the mood - and the story. > > Even with your own rules, indeed, you must be careful. Because somehow, subtly, they may not apply to this explicit situation. [...] > > How do you tell whether a rule is good or not, in terms of a specific problem? > > Answer: Find out the reason the rule came into being. What idea or principle stands behind it? [...] > > Take George's rule about starting every story with a fight. It's born of George's markets - men's magazines in which the emphasis is on fast, violent action, with blood on page one an absolute must. > > If Fred only realized that fact, he'd ignore George's rule when he himself writes a mood-geared story. One way to reduce damage done by cached thoughts is to cultivate a habit of asking questions about the origin of the thought. Do you remember where you heard the thought? Did it come from someone practicing good epistemic hygiene, or do they just unthinkingly pass on anything they hear? If somebody offered advice based on their own experiences, how representative is their experience? What kinds of experiences have they had that prompted that advice? Are there alternative ways of interpreting those experiences? Or if you're the one offering advice, which you came up with yourself, what situation led you to come up with it? How generalizable is it? So far I have mostly been framing this as a way to notice flaws in seemingly good advice. But there's also an opposite angle: finding gems in seemingly worthless information. All outcomes are correlated with causes; most statements are evidence of something. Michael Vassar once gav
6e81dccf-31c0-49e5-82ce-5a627298add5
trentmkelly/LessWrong-43k
LessWrong
Should rationalists put much thought into tipping and/or voting? It seems to me that tipping involves rather little money, and voting involves rather little time. For the latter, I'm assuming that you follow politics enough anyway that you have at least one candidate you prefer. I'm willing to bet that there are people who spend more time in a year on whether voting is rational than they would if they just went and voted and ignored the arguments. What are the biggest wins you've gotten in terms of time and/or money from thinking about what you're doing?
3578e3ab-f990-4c7e-8012-7a71bd9d7567
trentmkelly/LessWrong-43k
LessWrong
R&D is a Huge Externality, So Why Do Markets Do So Much of it? Discovering new technologies is the only way to get long-term economic growth. Rote expansions of existing technologies and machines inevitably hit a ceiling: replacing and repairing the existing infrastructure and capital becomes so expensive that there is no income left over for building extra copies. The only way out of this is to come up with new technologies which create more income for the same investment, thus restarting the feedback loop between income growth and investment. So R&D is extremely valuable. But most of the gains from R&D accrue to external parties. William Nordhaus estimates that firms recover maybe 2% of the value they create by developing new technologies. The rest of the value goes to other firms who copy their ideas and customers who get new products at lower prices. Firms don’t care much about the benefits that accrue to others, so they invest much less in R&D than the rest of us would like them to. Governments, on the other hand, collect much more of the benefits from new technologies. They get to tax the entire economy, so when benefits spill over across firms and consumers, they still come out ahead. They don’t collect on international spillovers, but for large economies at the frontier of technological growth, like the US, they internalize a large chunk of the value from R&D, much more than 2%. All of this is a setup for a classic externalities problem. There’s some big benefit to society that private decision makers don’t internalize, so we should rely on governments to subsidize R&D closer to its socially optimal level. But in fact, the private sector spends ~4x more than the public sector on R&D: $463 billion vs $138 billion a year. One explanation for this might be that the extra $138 billion is all that was needed to bump up private spending to the social optimum, but this doesn’t seem to hold up in the data. One piece of evidence that we are still far off the socially optimal spending on R&D comes from a simple accounting of the
4d9b6776-3351-45df-9a4c-38b8da12fbf9
StampyAI/alignment-research-dataset/lesswrong
LessWrong
A mostly critical review of infra-Bayesianism

Introduction
============

I wrote this post towards the end of my three-and-a-half-month-long [SERI MATS](https://www.serimats.org/) fellowship. I didn't get even close to the point where I could say that I understand infra-Bayesianism on a really detailed level (according to Vanessa there are only three people in the world who fully understand the [infra-Bayesian sequence](https://www.lesswrong.com/posts/zB4f7QqKhBHa5b37a/introduction-to-the-infra-bayesianism-sequence)). Still, I spent three months reading and thinking about infra-Bayesianism, so I ought to be able to say something useful to newcomers. The imaginary audience of this post is myself half a year ago, when I was just thinking about applying to Vanessa's mentorship but knew almost nothing about infra-Bayesianism or the general research direction it fits into. The non-imaginary intended audience is people who are in a similar situation now, just considering whether they should dive into infra-Bayesianism. My review is mostly critical of the infra-Bayesian approach, and my main advice is that if you decide that you are interested in the sort of questions infra-Bayesianism tries to solve, then it's more useful to try it yourself first in your own way, instead of starting by spending months getting bogged down in the details of [Basic infra-measure theory](https://www.lesswrong.com/posts/YAa4qcMyoucRS2Ykr/basic-inframeasure-theory) that might or might not lead closer to solutions. Still, I want to make it clear that my criticism is not aimed at Vanessa herself, as she chose questions that she found important, then created a theory that made some progress towards answering those questions. I have somewhat different intuitions than Vanessa over how important certain questions are and how promising certain research directions are, but I support her continuing her work and I thank her for answering my annoying questions throughout the three months.

Personal meta-note
==================

I applied to the infra-Bayesian stream in SERI MATS because I have a pure mathematics background, so I figured that this is the alignment agenda that is closest to my area of expertise. I met some other people too, also with a pure math background, who got convinced that alignment is important and then started spending their time on understanding infra-Bayesianism, because it's the most mathematical alignment proposal. Although paying attention to our comparative advantages is important, in retrospect I don't believe this is a very good way to select research topics. I feel that I was like the man who only has a hammer and is desperately looking for nails, and I think that many people who tried or try to get into infra-Bayesianism are doing so in a similar mindset, and I don't think that's a good approach. It's important to note that I think this criticism doesn't apply to Vanessa herself; my impression is that she honestly believes this line of mathematical research to be the best way forward to alignment, and if she believed that some programming work in prosaic alignment, or the more philosophical and less mathematical parts of conceptual research, were more important, then she would do that instead. But this post is mainly aimed at newer researchers considering getting into infra-Bayesianism, and I believe this criticism might very well apply to many of them.
Motivations behind the learning theoretical agenda
==================================================

According to my best understanding, this is the pitch behind Vanessa Kosoy's learning theoretical alignment agenda: Humanity is developing increasingly powerful AI systems without a clear understanding of what kind of goals the AIs might develop during training, how to detect what an AI is optimizing for, and how to distinguish relatively safe goal-less tools from goal-oriented optimizers. Vanessa's research fits into the general effort of trying to get a better model of what possible forms "optimization", "agency" and "goals" can take, so we can have a better chance of identifying them in the AI systems we are concerned about, and have better ideas on which training paradigm might lead to which kind of behavior. The behavior of an actual neural net is hard to describe mathematically, and I expect that even if we see a paradigm shift away from neural nets, the behavior of future, even more powerful designs will not be any easier to describe. However, it seems that successful systems are often an approximation of some ideal solution to the problem, which is often easier to understand than the messy real-life process. We know disappointingly little about the internal workings of AlphaZero's neural net or Kasparov's brain, but we can model their play as an approximation of the optimal [minimax algorithm](https://www.idtech.com/blog/minimax-algorithm-in-chess-checkers-tic-tac-toe) of chess. The minimax algorithm is computationally intractable, but mathematically simple enough that we can prove some statements about it. For example, if we want to know how AlphaZero or Kasparov will respond to the opening of a scholar's mate, we can prove that minimax won't get mated and then hope that Kasparov's and AlphaZero's algorithms are close enough approximations of minimax that this observation generalizes to them. On the other hand, it would be much harder to predict their response to the opening by studying their algorithms directly (by looking at their neurons). Similarly, we can expect a real-life powerful AI system to be an approximation of an Idealized AGI. Based on the previous example and a few other cases, it is plausible that we can understand the behavior of an Idealized AGI better than any actual powerful AI system. So it makes sense to study the theory of Idealized AGI first, then hope that the real-life systems will be close-enough approximations of the ideal that our observations about the Idealized AGI give some useful insights about the real-world AIs. Unfortunately, we don't even know yet what would be a good model for such an Idealized AGI. The most famous candidate is Hutter's [AIXI.](https://arxiv.org/abs/cs/0004001) A condensed description of AIXI: An agent interacts with an environment that is supposed to be computable, that is, the environment can be modelled as a Turing machine. The agent doesn't know which environment it is interacting with, but it has a prior distribution over all computable environments using Solomonoff induction (explained later in the IBP part of my post). Then the agent acts in a way that minimizes its expected loss until a large time horizon T, based on these priors. This is a nice and simple model, and because it considers all computable environments, the agent is pretty general. However, it has several very serious weaknesses.
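Before turning to those weaknesses, here is a deliberately crude, computable caricature of the decision rule just described (my own illustration, not Hutter's construction): a finite list of toy environments stands in for "all computable environments", the weights stand in for the Solomonoff prior, and the agent exhaustively searches action plans up to a horizon. Real AIXI also updates on observations between actions, which this sketch ignores.

```python
import itertools

def aixi_caricature(hypotheses, prior_weights, actions, horizon):
    """Pick the first action of the plan with the lowest prior-weighted expected loss.

    hypotheses   : list of toy 'environments', each a function from an action plan
                   to the total loss the agent would suffer in that environment
    prior_weights: simplicity-style weights, e.g. 2 ** -description_length
    """
    best_plan, best_loss = None, float("inf")
    for plan in itertools.product(actions, repeat=horizon):
        expected_loss = sum(w * env(plan) for w, env in zip(prior_weights, hypotheses))
        if expected_loss < best_loss:
            best_plan, best_loss = plan, expected_loss
    return best_plan[0]

# Two toy environments: in one, pressing button 1 is costly; in the other it pays off.
envs = [lambda plan: 2 * plan.count(1),   # punishes pressing button 1
        lambda plan: -plan.count(1)]      # rewards pressing button 1 (negative loss)
print(aixi_caricature(envs, prior_weights=[0.25, 0.75], actions=[0, 1], horizon=3))
```

Even in this finite caricature, the nested loop over every plan and every hypothesis makes the computational blow-up apparent; the real construction replaces the finite list with all computable environments weighted by the Solomonoff prior.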
Now we don't care about it needing unimaginable compute, an uncomputable prior and an extremely slow learning rate; these are okay for now, since we are looking for an Idealized AGI after all. But even granting all that, its decision process has some serious shortcomings, as discussed in the [Embedded agency sequence](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version). Infra-Bayesianism was created to address some of these problems.

Is this actually useful?
========================

I have my doubts about the usefulness of this approach. In principle, I think I agree that capable intelligent agents can be modeled as approximations of the Idealized Agents we are studying. But a crucial question is how close they get to these idealized versions by the time they pose an existential threat. My current best guess is: not close enough for our current investigation into infra-Bayesianism to make a difference. Napoleon was pretty good at manipulating people and taking over countries. Edward Teller was pretty good at building superweapons. I imagine that the first AIs that will be able to defeat humanity if they want, or bring us into existential security if we align them well, will be approximately as capable as Napoleon and Teller together, but with immense self-replication capacities. Was Napoleon an approximation of a Bayesian expected utility maximizer? Well, he had some goals that he followed somewhat consistently, and he sometimes changed his mind in the face of new evidence, but this is a pretty weak sense of "approximation". Was Napoleon more of an approximation of an infra-Bayesian optimizer than a Bayesian one? Maybe, sort of? Some cornerstones of infra-Bayesianism, like [Knightian uncertainty](https://en.wikipedia.org/wiki/Knightian_uncertainty) and preparing for the worst-case scenario, seem to be relatively important elements of human decision making. But would it have helped the British generals in predicting Napoleon's movements if they had had a better mathematical understanding of the difference between Bayesian expected utility maximization and infra-Bayesianism? Would this understanding have helped Napoleon's schoolteacher in raising him to be a peaceful citizen instead of a megalomaniac conqueror? I don't think that in the "Napoleon alignment" and "Napoleon control" problem, any mathematical formulation of optimization would have been very useful other than some general heuristics of "has some goals, has some beliefs, usually acts according to these". I think that when transformative AI arrives, it will be a strange and alien mind, and aligning it will likely be a much harder task than raising Napoleon well. But I don't think it will be very close to any Idealized model; it will just be a heuristic kludgery-machine, just like we are, only operating with an alien type of kludgery. That's why I am skeptical that a better mathematical understanding of an Idealized agent would help us align an AI any more than Napoleon's teacher would have been helped by it. What if we deliberately build the first transformative AI in a way that relies on more understandable optimization principles instead of the inscrutable kludgery that deep learning currently is? If I understand correctly, that would be a big part of Vanessa's plan. I'm skeptical that this would be possible; I expect that the inscrutable training processes will create a transformative AI before we can devise a training scheme that we actually understand.
I can easily imagine a paradigm shift away from deep learning, but I would be profoundly surprised if the new thing was some nice and understandable Naive Bayes Classifier But This Time More Powerful And Also Possibly Infra-Bayesian. Another hope she mentions is that we could develop the mathematical understanding of deep learning to a level where we realize in which ways it really is an approximation of the Idealized AGI. I'm generally very much in favor of getting a better mathematical understanding of deep learning, but I am skeptical that we could connect it to this agenda. My strong guess is that even if we got a very successful mathematical theory of the human brain ([predictive processing](https://en.wikipedia.org/wiki/Predictive_coding) maybe?), the result would still be very different from any Idealized model, and I wouldn't expect our research about infra-Bayesianism to be useful in stopping Napoleon. My guess about deep learning is similar. What else can we do then to understand an alien mind that doesn't even exist yet? Good question, and if one doesn't have hope with any other approaches, then it can make sense to go back to trying to work out a better mathematical formulation of an Idealized AGI and hope that it will be more relevant than in the case of Napoleon, or hope for a long AI winter and then work on producing a paradigm shift in the direction of more understandable models. Personally, I plan to look into other approaches first.

Does Infra-Bayesianism actually solve the problems of embedded agency?
======================================================================

Okay, assume for now that you want to work on a mathematical formulation of agency that handles the questions of [embedded agency](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version) well. Should you study infra-Bayesianism or try to develop your own solutions? I think the answer somewhat depends on which question of embedded agency you care about, as I will detail below. But my general takeaway is that there are surprisingly few actual results in infra-Bayesianism yet, and there are some important questions it doesn't even try to address. Clarifying the "surprising" part: In retrospect, the sparsity of results is not surprising at all, since the problems are hard, and basically only two people worked on it for a few years. Compared to that, the output is actually pretty impressive. Also, the questions of embedded agency are diverse, and Vanessa never claimed that one theory could solve all of them. Still, when I started looking into infra-Bayesianism, and read the problem-statements at the beginning of the posts, and then stared at the following imposing wall of mathematical formulas, I presumed that the developed formalism solves a large part of the problems involved, although there are still some open questions. In reality, we have lots of definitions, theory-building and some conjectures; we very rarely have anything concrete like "Yes, this is a nice property we could prove about infra-Bayesian agents that isn't true of, or that we couldn't prove about, AIXI." Again, this is not a criticism of Vanessa's work: she started developing an approach for addressing the open questions of embedded agency, she came up with some interesting ideas, but formalizing them required lots of technical theory-building, and she didn't get to many impressive results yet.
But if you are just a new researcher who wants to tackle the problems of embedded agency, this means that, as far as I can tell, there is not much reason to think that infra-Bayesianism is an especially promising direction, other than Vanessa having a hunch that this is the right way to go. I think Vanessa also agrees that we don't have strong evidence yet that infra-Bayesianism will turn out to be an especially fruitful framework, and she would be happy if someone found something better (although currently she believes infra-Bayesianism is the best approach). So if you are a new researcher who is interested in embedded agency, I think I would advise reading the [Embedded agency post](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version) (EDIT: Since then, Vanessa published a [summary](https://www.lesswrong.com/posts/ZwshvqiqCvXPsZEct/the-learning-theoretic-agenda-status-2023) of the questions and directions she is interested in; that's also a good starting point) and looking for some related work, choosing a problem you find important, then trying to develop your own framework. I imagine that the most likely result is that you don't get too far, but I think there is a non-negligible chance that you can develop a framework in which progress is easier and you get stronger results earlier than in infra-Bayesianism. After all, given that infra-Bayesianism didn't really deliver much evidence yet, that's just your best guess against another smart person's (Vanessa's) best guess. And if you fail to create a good theory yourself, as you most likely will, you can still look into Vanessa's work after that. In fact, I think you will get a better understanding of the motivations behind infra-Bayesianism if you try it yourself for a while rather than just jumping head-first into the IB formalism. I somewhat regret that I realized this too late and didn't at the very least spend a few weeks trying to answer the problems in my own way.

Specific issues of AIXI and how well infra-Bayesianism handles them
===================================================================

AIXI's prior is uncomputable and AIXI would require tremendous sample complexity to learn anything
--------------------------------------------------------------------------------------------------

Infra-Bayesianism doesn't even try to address this problem yet; it faces the same issue: if we want the agent to be general enough to handle a wide variety of situations well, then its prior needs to contain lots of hypotheses, which makes learning very slow, and makes the prior uncomputable if we go general enough. It is possible that solving the non-realizability problem (see below) makes it easier to create a reasonable prior, but it's by no means obvious. This is not really a question of embedded agency, and is not among the first steps Vanessa wants to consider (we are looking at an Idealized AGI, remember?), and I only included it in the list because it's a natural first thought when we are discussing "the shortcomings of AIXI".

Traps
-----

There might be things in the environment that kill the agent or cause irreversible harm, and the agent can't learn the truth about them without trying them.
Thus, with certain priors, AIXI can spend its whole life always taking action 0, because it's afraid that any other action would
kill itself, and it has no way of learning whether this hypothesis is true, because it never dares to try anything other than action 0. Alternatively, it can have a prior which makes it confidently try all sorts of things and kill itself on short notice. I feel that this is the most serious shortcoming of AIXI as a realistic model of an intelligent agent in the world. Unfortunately, infra-Bayesianism was not created to address this, and we would need to develop a completely separate theory for that. Vanessa has a few plans about this too, but they are still at the level of unexplored ideas.

[Non-realizability](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version#3_1__Realizability)
---------------------------------------------------------------------------------------------------------------------------

A classical learning agent can be expected to do well in environments that are included in its hypothesis class (assuming there are no traps). But an agent can't fully model an environment that's bigger than itself (in particular an environment that includes that agent itself or other agents of similar complexity), so its hypothesis class must be limited, and in reality, it very well might encounter an environment that is not in its hypothesis class. We call an environment that is not in the hypothesis class non-realizable. For a classical learning agent we have no general guarantee on its performance in a non-realizable environment. In particular, an agent playing a game against another agent very similar to itself is a non-realizable setting, and we don't know much about the behavior of classical learning agents in such a game. This is called the grain of truth problem. Infra-Bayesianism was developed to handle the problem of non-realizability. In my other, [more technical post](https://www.lesswrong.com/posts/q6dQpSfNHCYzKb2mf/performance-guarantees-in-classical-learning-theory-and), I explain the infra-Bayesian approach to non-realizability in more detail. My general conclusion is that infra-Bayesianism in fact seems to be an improvement over classical learning theory, but it's unclear how big of an improvement it is, and it's very unclear (at least to me) how to move forward. About games among infra-Bayesian agents we still have only the most preliminary results. I stand by my recommendation that people should try to look for solutions on their own first, and just look into the infra-Bayesian framework later. But if someone is specifically interested in the non-realizability problem, then maybe I recommend a shorter working-alone time, because infra-Bayesianism really might have interesting insights here.

[Newcomb](https://en.wikipedia.org/wiki/Newcomb%27s_paradox)-like problems
--------------------------------------------------------------------------

The infra-Bayesian framework really seems well-equipped to handle these! Later, I think Infra-Bayesian Physicalism handles them even more naturally, but even good old ordinary infra-Bayesianism is pretty good for this. Hooray! Thomas Larssen has a nice [write-up](https://www.lesswrong.com/posts/DMoiZDYZzqknfvoHh/infra-bayesianism-distillation-realizability-and-decision) explaining this, including a counterexample where infra-Bayesianism actually fails in a Newcomb problem, but it seems to be a relatively minor problem that can be addressed with the introduction of a little randomness.
Motivations of Infra-Bayesian Physicalism
=========================================

[Infra-Bayesian Physicalism](https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized) (IBP) is a major research direction inside Vanessa's agenda, and the one I have the most mixed feelings about. My understanding is that IBP tries to tackle three main questions. I find them more interesting and more plausibly relevant to alignment than the previously listed problems of embedded agency, so I will write about them in more detail, then try to address whether IBP is a good framework to handle them.

How should we think about Occam's razor and anthropics?
-------------------------------------------------------

[Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor) seems to be an important cornerstone of scientific thinking, but when we are saying we should use the simplest hypothesis consistent with the data, it's not obvious how to define "simplest". This question is equivalent to asking where a Bayesian agent should get its priors from. Occam's razor says that simple hypotheses should have higher a priori probabilities than complicated ones, but we should still define "simple". Which hypothesis of free fall is simpler: "Objects fall as they do because they have a constant gravitational acceleration g" or "Objects fall as they do because there is a witch who makes it so"? The most widely accepted solution is [Solomonoff induction](https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference): for every environment, one should look at the length of the shortest program on a universal Turing machine that produces this environment. This length K is the Kolmogorov complexity. Then the prior probability of being in that environment should be approximately 1/2^K. (Actually, it's summing this up for all programs producing the sequence, but the main term corresponds to the shortest program.) This gives a nice answer to the previous question about free fall: if we want to express the "witch hypothesis" as a program that outputs the results of a bunch of experiments we observed, we need to hardcode for every experiment how high we dropped the ball from and how long it took to land. (Also, we need to use a few superfluous bits to specify that all of this happened because of a witch.) On the other hand, to express the "gravitational acceleration hypothesis" as a program that outputs the result of experiments, we just need to specify the constant g at the beginning, write down the function (1/2)gt^2, then hardcode for all experiments the falling time, and now the program can output both the falling time and the height for each experiment, and the description length was just half as long as that of the "witch hypothesis". (If you haven't encountered this concept before, take some time to think about it; when I first read [this explanation](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor) in the Sequences, it felt revelatory to me.) AIXI, the most widely accepted general model of intelligence, and the starting point of the whole Embedded agency inquiry, is nothing else but a Bayesian expected utility maximizer based on the Solomonoff prior.
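To make the free-fall comparison a bit more concrete, here is a toy sketch (my own illustration, with made-up numbers, using raw string length as a very crude stand-in for program length): the "gravity program" only needs the constant g, the formula, and the fall times, while the "witch program" has to hardcode every (height, time) pair, so its length grows much faster with the number of experiments.

```python
g = 9.81
heights = [1.0, 2.5, 5.0, 10.0, 20.0, 45.0, 80.0, 125.0]   # drop heights in meters
times = [round((2 * h / g) ** 0.5, 3) for h in heights]     # observed fall times

# "Gravity hypothesis": store g and the formula once, plus only the fall times;
# the heights can be recomputed as h = (1/2) * g * t**2.
gravity_program = f"g={g}; h=0.5*g*t**2; times={times}"

# "Witch hypothesis": no compressing regularity, so every (height, time) pair
# (plus the superfluous witch) has to be hardcoded.
witch_program = f"witch did it; data={list(zip(heights, times))}"

print(len(gravity_program), len(witch_program))   # the gravity program is much shorter
```

Of course, real Kolmogorov complexity is defined over programs on a universal Turing machine, not over Python strings; this is only meant to convey the intuition.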
There are some problems with this interpretation of Occam's razor, however. The first is that there is no such thing as a unique, canonical universal Turing machine. This means that it's possible to construct a universal Turing machine such that [Russell's teapot](https://en.wikipedia.org/wiki/Russell%27s_teapot) is a very short description length fundamental object, so it has a high prior for existence. And as we have no way to gather evidence in any direction about a hypothetical teapot in the asteroid belt, we can live our lives believing with high probability in Russell's teapot. I don't have a good solution to this problem, neither does infra-Bayesian physicalism, and I suspect that there might not be any objective solution to this other than biting the bullet that it's maybe not that big of a problem if we have false beliefs about teapots in the asteroid belt that we will never observe. (Also, if you just construct a universal Turing machine in any sensible way, without specifically creating it with the purpose of giving short description length to Russell's teapot, then probably no such problems will emerge.) The problem with Solomonoff induction that IBP tries to solve is something different: what do we mean by "environments" on which we use a simplicity prior? AIXI's answer is that it looks at the description complexity of its own string of observations: this is reasonable, that's the input you have after all, what else could you do? On the other hand, this pretty much breaks the whole heuristic behind Occam's razor. When scientists explore the laws of physics, they slowly figure out how to connect different phenomena to each other, reducing the number and length of different laws necessary to describe our universe. There is a reasonable hope that if we understood physics well enough, we could reduce the description length of our whole universe to a handful of equations governing reality, and a few fundamental constants describing the initial conditions of the universe. This would be a pretty short description length, which means that a Bayesian updater starting with a simplicity prior would converge to learn the true law pretty fast while observing physics. But this nice, short description length "universe from third person perspective" is not how AIXI thinks of its environment. It has a simplicity prior over the programs that can output its own observations. Here, the true hypothesis has a much, much longer description complexity: "This sequence of bits is produced by the universe described by these simple equations, and inside this universe, you can find this sequence in this particular Everett-branch, on that particular planet, at this particular moment in history, as the sensory observations of that particular agent." This description needs to specify the [bridge rule](https://www.lesswrong.com/posts/ethRJh2E7mSSjzCay/building-phenomenological-bridges) picking out the agent in the universe, which makes it very long. This is pretty bad. First, it's not the simplicity of this description that scientists are thinking about when they are using Occam's razor. Second, because of the long description, the sample complexity (number of observations) necessary to learn the truth becomes huge, and for a long while the program "Output [hardcoding of the sequence of observations so far] then output 0 forever" will be shorter than the true program, so the agent will believe the first one to be more likely. Third, because the simplicity prior is over the whole description including the bridge rule, the agent will have a strong presumption towards believing that it is in a central, short description length place in the universe.
Third, because the simplicity prior is over the whole description including the bridge rule, the agent will have a strong presumption towards believing that it is in a central, short-description-length place in the universe. This contradicts the [Copernican principle](https://en.wikipedia.org/wiki/Copernican_principle).

On the other hand, if your prior is over third-person descriptions of the universe, how do you update on seeing sensory observations? An intuitive answer seems to be "Use Bayes' theorem on the evidence that there exists an agent in the universe observing the things I observed so far." But what does "an agent" mean in this context? Also, you don't only have access to your sensory observations but to your thoughts too, so maybe the evidence you condition on should be more like "There exists an agent that observed the things I saw, and thought and acted the same way I did". But how do we define all of that? And should we weigh a hypothesis differently if, according to the hypothesis, there is more than one agent in the universe who fits the observations? What if the universe is infinite in one way or another, and every object, mind and observation exists somewhere with probability 1?

For me, it is still an open question how we should handle these questions of Occam's razor and anthropics, and IBP was created as an interesting attempt to answer them. I don't necessarily believe that this is very relevant to alignment in itself: a transformative AI will be good at the technically relevant parts of science because it will be trained to be so, and I don't think it matters much whether it uses an actual, good formalization of Occam's razor or just messy heuristics of simplicity, just like our scientists do. Still, I think it's a very interesting question in general, and I am considering thinking about it more in the future. And there is one aspect of these questions that I believe to be more plausibly relevant to AI alignment:

Simulation arguments
--------------------

[Paul Christiano](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/) developed the idea that AIXI would be susceptible to [acausal attackers](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign). The idea in a nutshell is that because AIXI uses a simplicity prior over its own input, it has a strong presumption that it is in a short-description-length place in the universe, which can make it believe that it's in a special kind of simulation. I don't want to go into details here, because others have already written that up and it's not very relevant to my current post, but it's a very clever argument (although it needs some questionable but not implausible assumptions). I don't find it very realistic that this situation arises in this particular form, as I don't expect we will build anything actually close to a Bayesian reasoner with a Solomonoff prior. (Paul also writes "I don't expect that humanity will ever do anything like this. This is all in the "interesting speculation" regime.")

But in general, I think it's worth thinking about simulation arguments. Going back to a previous analogy, I don't think that Napoleon was thinking very much about simulations, and I don't think he would ever have based important decisions on them, but at least there are some smart humans who do take the general simulation hypothesis seriously, and there are probably at least a few people who actually base some decisions on it. (Examples in the comments are welcome!)
I find it possible that there is a "correct way" to reason about simulation arguments and anthropics in general, and a smart enough AI will figure it out and base some important decisions on it if it concludes that there is a chance it lives in a simulation. If that's the case, it would be nice to figure out this "correct way" ourselves, so we can better prepare for how it might influence the AI's behavior. It's also plausible that different, otherwise highly capable thinking-structures can come to different conclusions about the simulation hypothesis. For example, one of the purported advantages of IBP is that it is less susceptible than AIXI to believing itself to be in a simulation. If that's the case, then it is not clear what kind of thinking process the first transformative AI will use; for example, I have no idea what an AGI arising from the current deep learning paradigm would think about the simulation hypothesis. But potentially, if we understand the questions of Occam's razor, anthropics and simulations better ourselves, we can have better ideas about what to change in the architecture or training of a young mind to steer it towards our preferred conclusions on these questions.

I'm not actually sure, by the way, in which direction I would want to steer the AI's thinking. Vanessa thinks about this in terms of making sure that the AI is not influenced by simulation worries. I agree that if we have an otherwise aligned AI, I wouldn't want it to make unpredictable decisions based on a simulation argument. On the other hand, if the situation is desperate enough, and we somehow figured out what kind of thinking-structures take the simulation argument more seriously and how to steer our models in that direction, I would probably do it. My best guess is that if an AI assigns a non-negligible likelihood to being in a simulation, and is the kind of thinking-structure that takes this kind of concern seriously, then it will take some not-too-costly steps to avoid doing things that might anger the simulators. As we humans seem to be the main characters of the story so far, it seems plausible that killing off or enslaving humanity would displease the simulators. This would mean that if leaving Earth to the humans is not too costly for the AI (as in the case of a paperclip maximizer, for whom Earth is not more valuable than any other chunk of raw material in the universe), then it might let humanity survive here, just to guard against the anger of potential simulators. Sure, this still leads to humanity losing most of the universe, but I think mankind living on Earth for a few more million years is already enough to explore lots of interesting experiences, activities and life histories, and for me this outcome feels at least 50% as good as conquering the galaxies and fulfilling our whole [cosmic potential](https://nickbostrom.com/astronomical/waste). Obviously, this is very speculative, and I'd rather not resort to this solution. But if we figure out what kind of models and training make an AI more susceptible to the simulation argument, this possibility is worth keeping in mind. A [Hail Mary approach](https://nickbostrom.com/papers/porosity.pdf), in every sense of the phrase.

Anyway, I find simulation arguments plausibly important to study; I think Paul's acausal attacker argument is a nice illustration of the kind of logic that can lead to believing in simulators, and IBP is an interesting attempt to create a system that's less likely to reach that conclusion.
[Ontological crisis](https://www.lesswrong.com/posts/TA7kDYZGjMcSCH75C/ai-ontology-crises-an-informal-typology)
---------------------------------------------------------------------------------------------------------------

This concern is seemingly unrelated to the previous two, but IBP also makes a nice attempt to handle it. We often talk about utility functions, but for a long time I never considered what the function's domain is supposed to be. The possible states of the world? How would you assess those? You only have models of the world; you need to define your utility in terms of the concepts you have in your model, since you don't have direct access to the underlying territory.

What happens, then, when an agent refines its model of the world? Let's say it starts with a caveman-level world model where trees, stones, humans etc. are ontological objects. And let's assume it is well-aligned: it wants good things to happen to the humans. But then, to improve its abilities, it reads a chemistry textbook and learns that things are made of atoms. There are no such things as humans, only collections of atoms! There is no such thing as happiness, just some particular formation of atoms swirling through a brain-shaped chunk of atoms! None of its encoded values are well-defined in the new model now! The world is just a wasteland of cold matter devoid of meaning!

(This would actually be a pretty good outcome, as the AI just stops functioning because its value function completely loses meaning. The worse scenario is if the AI tries to do some kind of translation of values from its old world model to the new one, but the translation isn't perfect, so its alignment with humans breaks and it starts maximizing diamonds, because that was the only valuable object it could translate into an atomic description.)

I don't expect this to actually happen. If we solve the hard problem of successfully encoding the complex values of the human mind, or at least the complicated concept of "corrigibility", into the AI's values in its first ontology (which is probably already alien to a human mind), then I don't expect this to break when the AI switches to another alien ontology. Still, I think it's useful to think about the ontological crisis as a test case of the more general problem of how to reliably translate concepts and values between different ontologies, like ours and the AI's. (Note: I need to look more into John's [Natural Abstraction Hypothesis](https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro), which seems relevant here.)

Interestingly, a shortcut solution to the ontological crisis falls out naturally from the framework IBP develops for handling the mostly unrelated Occam's razor question. I want to flag this as a positive sign that IBP might be on the right track towards something, even if I'm uncomfortable with its conclusion here: it's generally a good sign that we are getting closer to the truth when a theory developed to solve one thing happens to offer a persuasive solution to another, unrelated thing too.

Is infra-Bayesian Physicalism a good solution to these problems?
================================================================

Newcomb's problem and the five-and-ten problem
----------------------------------------------

IBP was created with acausal decision theory and action counterfactuals in mind, so it is well-equipped to handle Newcomb-like scenarios, and the paradox of the five-and-ten problem is dissolved for an IBP agent.
As I previously explained, I'm not very convinced of the importance of these questions, but it's a good thing that IBP handles them well.

Occam's razor and anthropics
----------------------------

IBP uses third-person descriptions of the world, which is definitely a point in its favor, as it conforms to the Copernican principle. Does the correct hypothesis about the world actually have a short description length, as we would hope (something like "a few rules and a few initial conditions")? Not really, as the hypothesis also needs to include a prior over what the outputs of different computations are (you don't know the results of all computations from the beginning! you just have guesses about them until you actually calculate them!), and it also needs a prior over the interaction between the computational world and the physical world. This makes sense, but it makes the theory hard for me to process, and the description of the hypothesis the agent needs to learn certainly won't be short. On the other hand, Vanessa argues that having a prior over the results of computations is actually inevitable for any agent, so this is not really a disadvantage compared to AIXI, and at least we get rid of the long-description bridge rule. I spent some time trying to wrap my head around this, but I'm still very confused. I find this question interesting enough that I will probably return to it, and if I reach a better understanding, I will come back and edit this part.

Anthropics is hard. For now, my tentative conclusion is that IBP takes a promising shot at this problem, although we don't really have proofs of anything yet, and I'm not fully convinced that there is no hard-to-detect sleight of hand hidden somewhere in the argument.

Simulation arguments
--------------------

Because IBP takes a third-person view and adheres to the Copernican principle, unlike AIXI, it doesn't fall prey to Paul's [acausal attackers](https://www.lesswrong.com/posts/YbahERfcjTu7LZNQ6/summary-of-the-acausal-attack-issue-for-aixi), because it doesn't assign a high prior to being in a simulation that is run in a central, short-description-length place in a universe. This is an interesting point in favor of the theory that different thinking-structures (like AIXI and IBP) might have different built-in presumptions about the simulation hypothesis.

On the other hand, the fact that IBP doesn't fall for this specific trap doesn't mean there is no other argument that could convince it that it's in a simulation. After all, the acausal attacker argument is also something I probably wouldn't have come up with on my own, and comparatively little thought has gone so far into searching for similar tricky failure modes in IBP. I also feel that the fact that acausal thinking comes very naturally to IBP can open up vulnerabilities towards giving concessions to non-existent entities that are even weirder than the simulation hypothesis. But I don't actually know of any examples, and IBP might manage to handle all of these gracefully; I'm just superstitiously worried about any agent that thinks too deeply about acausality, because I don't understand it well enough.

Ontological crisis
------------------

IBP's proposed solution is that your utility function shouldn't be defined in terms of objects in your current model, as you might lose those after reading a chemistry textbook. Instead, you should have a way of determining which computations are "run by the universe".
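Here is a minimal toy sketch of the contrast (my own illustration, not IBP's actual formalism): a utility function keyed on the objects of the current ontology silently loses its referents when the model is refined, while a utility function keyed on which computations are instantiated keeps the same domain no matter how the physical model is described.

```python
# Utility keyed on ontology objects: breaks when the ontology is refined.
caveman_world = {"human_alice": {"happy": True}, "tree_1": {}}
atomic_world = {f"atom_{i}": {} for i in range(10)}  # no "humans" in this ontology

def object_level_utility(world: dict) -> float:
    """Reward happy humans -- but only if 'humans' exist in the ontology at all."""
    humans = [props for name, props in world.items() if name.startswith("human")]
    return sum(1.0 for props in humans if props.get("happy"))

print(object_level_utility(caveman_world))  # 1.0
print(object_level_utility(atomic_world))   # 0 -- the value silently evaporates

# Utility keyed on which computations the universe runs: the domain is a fixed set
# of programs, independent of how finely the physical world happens to be modeled.
VALUED_COMPUTATIONS = {"simulate_alice_having_a_nice_day"}

def computational_utility(computations_running: set) -> float:
    return sum(1.0 for c in computations_running if c in VALUED_COMPUTATIONS)

# Both a coarse model and a refined model can certify the same computation as
# running, so refining the world model never invalidates the utility function.
print(computational_utility({"simulate_alice_having_a_nice_day", "weather_on_venus"}))  # 1.0
```

Of course, real IBP does this with a much heavier formal apparatus; the sketch only shows why moving the utility function's domain from ontology objects to computations sidesteps the refinement problem.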
It's not obvious at all what a good definition of "being run by the universe" would be, but it turns out that this step is also necessary for the basic setup of IBP that was developed for formalizing Occam's razor. If you have that, you can have a utility function over which computations you find valuable to be run by the universe. As you refine your world model, the domain of your utility function doesn't change, and if you made sure in a less refined model that a computation is running, then you don't lose this fact when you refine your model.

The obvious downside is that we need to accept the controversial philosophical statement "Everything valuable can be expressed in terms of which computations are running in the universe". This is a defensible position: qualia might be substrate-independent, in which case if the AI takes the computations that my brain would do during an enjoyable activity and runs them on a computer, or creates a collection of billiard balls whose motion is described by computations equivalent to my thoughts, it should be just the same. I'm uncomfortable with this position, especially since it implies that even the timing of computations doesn't matter: the IBP agent definitely considers it fine to simulate a person's life with each moment simulated a thousand years apart on different computers, and even the order in which the life events are simulated doesn't have to match their chronological order in the life history. There are some constraints on which kinds of life moments the AI can reliably simulate without previously computing the other life events that led to that exact situation (including the human having memories of those previous events during that moment), but I feel that the AI still has considerable liberty in changing the order of the events. Although in practice I don't see a good reason why it would do so, this thought experiment makes it harder for me to bite the bullet of computationalism. Also, I think most people are even less sympathetic to the idea that "everything valuable is just a computation", so it would pretty strongly go against my democratic intuitions to hardcode this principle into a world-conquering AI.

The most serious objections
---------------------------

The main problem with IBP identified so far is the so-called monotonicity principle. When we define the utility function of an IBP agent over which computations should be run in the universe, we can't give any computation less than 0 utility. This is just a mathematical fact about how IBP can think about utilities. Given a chance to create Hell and torment people for eons, while getting an ice cream out of this business, an IBP agent takes the deal, because eating the ice cream has positive value, and torments don't have negative value, so it's a positive deal overall. (EDIT: I explain this in more detail [here](https://www.lesswrong.com/posts/yykNvq257zBLDNmJo/infra-bayesianism-naturally-leads-to-the-monotonicity), along with showing that not just IBP but even standard infra-Bayesianism leads to the monotonicity principle if we accept one relatively natural assumption.)

Vanessa is very much aware that this is a serious problem, but the best solution I've heard so far is that the available universe is plausibly finite, so creating a Hell-region has the opportunity cost of not creating a Heaven-region in its place instead.
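To spell out the ice-cream-and-Hell trade in toy form (again my own sketch of the qualitative point, with made-up numbers, not the actual IBP machinery): if the utility function can only say how much non-negative value each running computation contributes, then any bundle that adds a small good thing on top of an arbitrarily bad thing still looks like an improvement over doing nothing.

```python
# Per-computation weights, constrained to be >= 0: the qualitative content of
# the monotonicity principle. The worst we can say about Hell is "worth nothing".
utility_of_computation = {
    "eating_ice_cream": 1.0,
    "simulated_hell_for_eons": 0.0,
    "standard_utopia_region": 1000.0,
}

def bundle_utility(computations_run: set) -> float:
    # Unknown computations default to 0, so nothing can ever count against a plan.
    return sum(utility_of_computation.get(c, 0.0) for c in computations_run)

do_nothing = set()
hell_plus_ice_cream = {"simulated_hell_for_eons", "eating_ice_cream"}

print(bundle_utility(do_nothing))           # 0
print(bundle_utility(hell_plus_ice_cream))  # 1.0 -- the deal looks strictly better

# The opportunity-cost reply: if running Hell uses up resources that could have
# run a utopia-region instead, the comparison changes.
utopia_instead = {"standard_utopia_region"}
print(bundle_utility(hell_plus_ice_cream) < bundle_utility(utopia_instead))  # True
```

In this toy picture, the opportunity cost of the resources involved is the only consideration that can ever speak against creating Hell, which is exactly what the next paragraphs worry about.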
Still, an IBP agent has no objection to enslaving the population of Earth in horrible labor camps to speed up the creation of universe-conquering von Neumann probes a little, so it can reach a bigger region of the light cone and tile it with happy computations billions of years from now. I'm selfish enough to be worried by this possibility. Also, I'm not convinced enough by the opportunity cost argument; there can be lots of pitfalls. "Well, I already simulated most normal pleasurable experiences, so there are strongly diminishing returns to creating new standard utopia-regions. However, I haven't yet simulated the positive experience someone can get by rescuing someone else from a torture chamber! That's a valuable new situation, and the part of the computation that is the suffering has 0 cost anyway! For that matter, I also haven't yet simulated the pleasurable experience Marquis de Sade gets from torturing people..." I think we might think up some potential workarounds to this too (the whole process of the torture counts as one computation, and not just the part of de Sade enjoying it, so we can give it 0 value, as we don't approve of it happening?). But I don't think we can think through all the failure modes, and I'd rather choose a paperclip maximizer killing everybody over an agent that literally can't conceive of anything being worse than non-existence.

Incidentally, this is the reason I didn't look very deeply into Vanessa's concrete alignment strategy, Physicalist Superimitation (previously called [Pre-DCA](https://www.lesswrong.com/posts/EFrJdhKPZXa4MA3Gr/vanessa-kosoy-s-predca-distilled)), as it is based on making an IBP agent the sovereign of the universe, and I'm extremely wary of any such proposal because of the monotonicity principle.

Similarly, if we think of IBP not as a way to design an AI, but as a model of how we ourselves should think about Occam's razor, the situation is not much better. I already have a value system which very much assigns negative value to certain experiences, and if I want to figure out how to use Occam's razor, the answer really shouldn't require me to change my fundamental values first.

My conclusion from looking into IBP was that unfortunately the monotonicity principle is baked very deeply into the framework, and I don't see a way to get rid of it without changing the whole machinery. That's why I said at the start that IBP is the part of the agenda I have the most mixed feelings about: I feel that its questions are especially interesting and potentially important, it's a valiant attempt to answer them, and it does contain some interesting ideas, but because of the monotonicity principle, I basically consider it a failed framework. So my advice to new researchers interested in the topic is a stronger iteration of my advice from before: try to come up with your own theory, and hope that it can solve the same problems but without IBP's pitfalls. I would advise trying it first without even looking very much into IBP; my guess is that it's better to keep your thoughts independent. Then, after you get stuck, or once you have produced some results, you can look into IBP to see if you can get some inspiration from it, because it really does contain some interesting ideas.

One more note about IBP: the formalism is very complicated even by the standards of infra-Bayesianism.
Fifth-level power sets (probability distributions over convex sets of probability distributions of power sets of power sets of all computations) are standard building blocks of the theory, which makes it very hard to visualize anything, and calculating even the simplest toy example was a huge struggle. I think it's at least worth a try for someone to develop a theory that doesn't require that level of conceptual complexity. I also want to note that Vanessa developed IBP when she was already thinking in infra-Bayesian terms, which led to IBP being created inside this frame. But I see no strong a priori reason why infra-Bayesianism would be necessary for a good formalization of Occam's razor and anthropics. So I would actually advise researchers to first try thinking about this question without looking very deeply into infra-Bayesianism, to keep their thoughts independent. This would have an extra advantage: the monotonicity principle comes in mostly because of some special properties of infra-Bayesian thinking, so if it turns out that infra-Bayesianism is not actually necessary for formalizing Occam's razor, then the monotonicity principle probably wouldn't show up in the new theory. Personally, I find the questions motivating IBP interesting enough that I might decide to follow this advice myself and try to develop my own answers.

Ambitious value learning vs corrigibility
=========================================

As I was thinking about IBP, I started to form some more general objections towards ambitious value learning. I don't want an AI to try to guess my utility function or my coherent extrapolated volition and then try to maximize value according to that. I'm not convinced that I do have a utility function that can be coherently extrapolated. In general, I'm highly distrustful of any AI that tries to maximize anything. I also don't really want the AI to try to figure out the general truth about the universe and then go ahead and maximize my utility in some weird way implied by its discoveries.

Here, I think I have a general difference in attitude with Vanessa, in that I'm more concerned about our clever creation breaking down after it discovers an unexpected concept in physics. We had a few discussions about IBP, and Vanessa sometimes used arguments like "The available universe is probably finite, so opportunity cost can save us from Hell" and "It seems that quantum mechanics doesn't allow [Boltzmann brains](https://en.wikipedia.org/wiki/Boltzmann_brain), so we might not have to worry about that" and "Vacuum collapse is either not possible or inevitably happens, so we don't have to worry about the IBP agent obsessing unreasonably about avoiding it". Of course, Vanessa is very much aware that we shouldn't launch a world-conquering super-AI whose safety is based on arguments that have this level of confidence. But I still think there is a pretty strong disagreement in our outlook, as she seems more optimistic that with enough work, we can just iron out these questions. Quoting her: "IBP seems like a promising direction in which to study this sort of questions, and hopefully once we understand these and other questions *way, way* better than we currently do, we will be well-equipped to actually know what we're doing when we build AI. Also, here are some arguments why certain objections to IBP might turn out to be wrong, but it might also easily turn out that these arguments are 100% wrong and the objections are 100% correct.
Whatever the case may be, I believe that working out the math is usually a better method to move towards understanding than lots of philosophical hand-waving, at least when the mathematical understanding is as shallow as it is at this point". This is all reasonable, but I suspect that we will just never get to the point where we understand all the questions similar to "How will our value-maximizing AI react to the possibility of Boltzmann brains?" well enough that I would trust turning on the AI. More importantly, there can always be discoveries about the nature of the world that we can't even imagine. The universe is weird. If I understand correctly, Vanessa's plan for this would be to come up with mathematical guarantees that the agent will act in a satisfactory way in *every* scenario, so we don't need to worry about specific weirdnesses like Boltzmann brains. That might not be impossible, but personally I don't find it likely that we could create an AI that just goes ahead maximizing some kind of predetermined value, yet reliably never breaks down even when faced with situations no human ever thought of.

In general, when our AI figures out the possibility of Boltzmann brains, I want it to tell us nicely "Hey, I figured out that there will be Boltzmann brains after the heat death, and I have some nice philosophical arguments on why it actually matters." Then we might or might not listen to its philosophical arguments, then tell it that it's all well and good, and it can do something reasonable about it, but it definitely shouldn't destroy the Earth because of some newfound utilitarian priority. Also, I would feel safer with the AI having only relatively short-term and localized goals instead of grand universal maximization; that seems less likely to break down from some cosmological thought experiment we haven't thought of. I know that corrigibility has its own set of problems and inconsistencies, but this reasonable behavior sounds mostly like what we call corrigibility, and I prefer to look more into alignment strategies that intend to get us there instead of creating a sovereign AI maximizing value according to what it imagines as our coherent extrapolated volition.

Conclusion
==========

It would be nice if, in one or two years, someone wrote a follow-up post on how much progress infra-Bayesianism and Vanessa's agenda have made on the problems mentioned in this post. Do we have more actual results that feel like satisfactory answers to the questions of embedded agency? Do we have a better understanding of performance guarantees in general environments than what I write about in my [other post](https://www.lesswrong.com/posts/q6dQpSfNHCYzKb2mf/performance-guarantees-in-classical-learning-theory-and)? Is there an alternative to IBP that doesn't have the monotonicity principle baked in (or do we have an extremely compelling case for why the monotonicity principle is not a problem)?

Until then, I would advise new researchers to look for other directions, while I wish success to Vanessa and others who have decided to pursue her agenda and build interesting results on the theory developed so far. Also, one final piece of advice for those who still decide that they want to work on infra-Bayesianism: there might be a textbook coming! Vanessa says that someone is working on a more readable version of the infra-Bayesian sequences, and it might be published in a few months. If this work really is in progress (confirmations in the comments are welcome!)
then I would advise everyone to wait for its publication instead of heroically jumping into the current infra-Bayesian sequences.
74e43b51-4a31-4c87-9723-12ac7f9957fc
trentmkelly/LessWrong-43k
LessWrong
Zombie existential angst? (Not p-zombies, just the regular kind. Metaphorically.) (Or possibly the worst kind of zombie. But still, metaphorically.) Since I was a kid, as far back as I can remember having thought about the issue at all, the basic arguments against existential angst have seemed obvious to me. I used to express it something like: "If nothing really matters [ie, values aren’t objective, or however I put it back then], then it doesn't matter that nothing matters. If I choose to hold something as important, I can't be wrong." However, a few months ago, it occurred to me to apply another principle of rationality to the issue, and that actually caused me to start having problems with existential angst! I don't know if we have a snappy name for the principle, but this is my favorite expression of it: "If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments.  But if you’re interested in producing truth, you will fix your opponents’ arguments for them.  To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse." [I first read it used as the epigram to Yvain's "Least Convenient Possible World". Call it, what, "Fight your own zombies"?] Sure, "The universe is a mere dance of particles, therefore your hopes and dreams are meaningless and you should just go off yourself to avoid the pain and struggle of trying to realize them" is a pretty stupid argument, easily dispatched. But... what if it contains the seed for a ravenous, undead, stone-cold sense-making monster? I just got the feeling that maybe it did, and I was having a lot of trouble pinning down what exactly it could be so that I could either refute it or prove that the line of thought didn't actually go anywhere in the first place. Now, I had just suffered a disappointing setback in my life goals, which obviously supports the idea that the philosophical issues weren’t fundamental to my real problems. I knew this, but that didn’t stop the problem. The se
06d9f5eb-c383-457e-9381-4a574f997a40
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
[Linkpost] Human-narrated audio version of "Is Power-Seeking AI an Existential Risk?" There's now a human-narrated audio version of my report on existential risk from power-seeking AI (text version [here](https://arxiv.org/pdf/2206.13353.pdf)). You can find the audio [here](https://joecarlsmithaudio.buzzsprout.com/2034731/12113681-is-power-seeking-ai-an-existential-risk), or by searching for "Joe Carlsmith Audio" on your podcast app. Thanks to the folks at [Type III audio](https://blog.type3.audio/), and especially to Perrin Walker, for doing the narration. There's also a shorter video presentation and summary of the report [here](https://forum.effectivealtruism.org/posts/ChuABPEXmRumcJY57/video-and-transcript-of-presentation-on-existential-risk).
d14b9c84-b57f-4ff5-8f29-a0330455a485
trentmkelly/LessWrong-43k
LessWrong
Desperately looking for the right person to discuss an alignment related idea with. (and some general thoughts for others with similar problems) I have no idea how to start this post. I don't even have the resources (mostly health-like ones lacking) to figure out the meta-strategy to reach out for help, and this is like the billionth place I've tried these last weeks since things became to urgent to do things the right way. My instincts and social anxiety are screaming at me that I am burning bridges and am going to be downvoted and hated and booed by everyone I consider high status for posting this mess for the rest of eternity but no matter how I run the numbers over and over again to try to find an excuse not to post this they still come out saying I have to write this even if it ruins my life. So here goes nothing please don't downvote.  There's a topic that really deserves a long article that is beyond my resources to create but I'd probably call something like "The expert, the hobbyist, and the 998 crackpots", the idea being a simplified version of the real scenario that will probably happen at some point and might be happening right here. The idea is basically that a hobbyist has had a critical insight, and needs to get it to the expert, and the expert desperately needs that insight, and everyone knows  this, but there are also 998 Crackpots that each have their own useless garbage insight and the expert needs to spend a significant amount of time to determine if a given contact is the hobbyist or one of the crackpots, and due to the nature of crackpots and impostor syndrome, the hobbyist or crackpot cannot tell from the inside which one it is they are. As such, even though each one (me) knows they are far more likely to be a crackpot than the real hobbyist they all need to transmit the insight they have and treat it as potentially important, and all these players together needs to find a way for the expert to find the useful needle insight in the 999 haystack. Now, there is a partial solution we've been using; this website, with community vetted articles and votes to filter out and iterate on ideas
06745a60-f96b-4305-9496-f45470291fb0
trentmkelly/LessWrong-43k
LessWrong
Working memory and driving I've been trying to learn how to drive and unfortunately I suck at it. Some combination of a stressful teacher and hyperfocusing has made it very difficult to learn. My biggest problem is with the multitasking aspect. Remembering to put on the turn signal while stopping and checking my speed and watching out for other cars, etc. It's difficult for me, I forget or miss things. One thing I was considering that may possibly help is using dual n-back to boost my working memory. Does anyone have any thoughts on the likely effectiveness of this?
2b85234b-b51e-42fa-b66f-4319f277d7fe
trentmkelly/LessWrong-43k
LessWrong
Industrialization/Computerization Analogies I have two motivations in this post: * thanks to GPT-4, lots of folks looking at automation via AI seriously for the first time and are unfamiliar with much of the thinking and writing about transformative AI from the past 20-30 years * I think the standard analogy of AI to steam power is slightly off, and I want to make at least the outline of the case for it, even if I don't have time to invest in making the full case Thus I'm going to highlight what I think are important analogies between industrialization and computerization and what I think that can tell us to expect from transformative AI in some ways. I want to be clear this is not very careful work so there's lots of caveats and exceptions and such. My aim is to 80/20 the analogy and the explanation. * steam engine : industrialization :: computer : computerization * the computer is roughly the equivalent of the steam engine for the computer age (hence the name) * like the steam engine, initially it had limited applications and minimal impact on the economy * then, after a few decades, improvements in design enabled the automation of key work (for the steam engine, textiles, for computers, data processing) * over the next few decades, design improvements enabled more areas of the economy to be automated, making individual workers more productive and making us all richer * then a big new technology transformed the world * assembly line/mass production :: transformative AI * the assembly line and the creation of the modern factory transformed the world and made us all a lot richer * prior to the assembly line, we can think of steam engines mostly automating jobs that used to be done by hand * those jobs were still mostly done under the old paradigm, but in an a more automated way * the genius of the assembly line and the mass production it enabled was to reframe how stuff got made in ways that enabled a higher degree of productivity * transformative AI, of which GPT-4 is t
d2031a5c-8900-46a5-83b1-aa8898590ce7
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #171]: Disagreements between alignment "optimists" and "pessimists" Listen to this newsletter on **[The Alignment Newsletter Podcast](http://alignment-newsletter.libsyn.com/)**. Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter. Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS =========== **[Alignment difficulty](https://www.alignmentforum.org/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty)** *(Richard Ngo and Eliezer Yudkowsky)* (summarized by Rohin): Eliezer is known for being pessimistic about our chances of averting AI catastrophe. His argument in this dialogue is roughly as follows: 1. We are very likely going to keep improving AI capabilities until we reach AGI, at which point either the world is destroyed, or we use the AI system to take some pivotal act before some careless actor destroys the world. 2. In either case, the AI system must be producing high-impact, world-rewriting plans; such plans are “consequentialist” in that the simplest way to get them (and thus, the one we will first build) is if you are forecasting what might happen, thinking about the expected consequences, considering possible obstacles, searching for routes around the obstacles, etc. If you don’t do this sort of reasoning, your plan goes off the rails very quickly - it is highly unlikely to lead to high impact. In particular, long lists of shallow heuristics (as with current deep learning systems) are unlikely to be enough to produce high-impact plans. 3. We’re producing AI systems by selecting for systems that can do impressive stuff, which will eventually produce AI systems that can accomplish high-impact plans using a general underlying “consequentialist”-style reasoning process (because that’s the only way to keep doing more impressive stuff). However, this selection process does *not* constrain the goals towards which those plans are aimed. In addition, most goals seem to have convergent instrumental subgoals like survival and power-seeking that would lead to extinction. This suggests that we should expect an existential catastrophe by default. 4. None of the methods people have suggested for avoiding this outcome seem like they actually avert this story. Richard responds to this with a few distinct points: 1. It might be possible to build AI systems which are not of world-destroying intelligence and agency, that humans use to save the world. For example, we could make AI systems that do better alignment research. Such AI systems do not seem to require the property of making long-term plans in the real world in point (3) above, and so could plausibly be safe. 2. It might be possible to build general AI systems that only *state* plans for achieving a goal of interest that we specify, without *executing* that plan. 3. It seems possible to create consequentialist systems with constraints upon their reasoning that lead to reduced risk. 4. 
It also seems possible to create systems with the primary aim of producing plans with certain properties (that aren't just about outcomes in the world) -- think for example of **[corrigibility](https://www.alignmentforum.org/posts/fkLYhTQteAu5SinAc/corrigibility)** (**[AN #35](https://mailchi.mp/bbd47ba94e84/alignment-newsletter-35)**) or deference to a human user. 5. (Richard is also more bullish on coordinating not to use powerful and/or risky AI systems, though the debate did not discuss this much.) Eliezer’s responses: 1. AI systems that help with alignment research to such a degree that it actually makes a difference are almost certainly already dangerous. 2. It is the plan itself that is risky; if the AI system made a plan for a goal that wasn’t the one we actually meant, and we don’t understand that plan, that plan can still cause extinction. It is the *misaligned optimization that produced the plan* that is dangerous. 3 and 4. It is certainly *possible* to do such things; the space of minds that could be designed is very large. However, it is *difficult* to do such things, as they tend to make consequentialist reasoning weaker, and on our current trajectory the first AGI that we build will probably not look like that. This post has also been summarized by others **[here](https://www.alignmentforum.org/posts/oKYWbXioKaANATxKY/soares-tallinn-and-yudkowsky-discuss-agi-cognition)**, though with different emphases than in my summary. **Rohin's opinion:** I first want to note my violent agreement with the notion that a major scary thing is “consequentialist reasoning”, and that high-impact plans require such reasoning, and that we will end up building AI systems that produce high-impact plans. Nonetheless, I am still optimistic about AI safety relative to Eliezer, which I suspect comes down to three main disagreements: 1. There are many approaches that don’t solve the problem, but do increase the level of intelligence required before the problem leads to extinction. Examples include Richard’s points 1-4 above. For example, if we build a system that states plans without executing them, then for the plans to cause extinction they need to be complicated enough that the humans executing those plans don’t realize that they are leading to an outcome that was not what they wanted. It seems non-trivially probable to me that such approaches are sufficient to prevent extinction up to the level of AI intelligence needed before we can execute a pivotal act. 2. The consequentialist reasoning is only scary to the extent that it is “aimed” at a bad goal. It seems non-trivially probable to me that it will be “aimed” at a goal sufficiently good to not lead to existential catastrophe, without putting in much alignment effort. 3. I do expect some coordination to not do the most risky things. I wish the debate had focused more on the claim that non-scary AI can’t e.g. do better alignment research, as it seems like a major crux. (For example, I think that sort of intuition drives my disagreement #1.) I expect AI progress looks a lot like “the heuristics get less and less shallow in a gradual / smooth / continuous manner” which eventually leads to the sorts of plans Eliezer calls “consequentialist”, whereas I think Eliezer expects a sharper qualitative change between “lots of heuristics” and that-which-implements-consequentialist-planning. 
**[Discussion of "Takeoff Speeds"](https://www.alignmentforum.org/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds)** *(Eliezer Yudkowsky and Paul Christiano)* (summarized by Rohin): This post focuses on the question of whether we should expect AI progress to look discontinuous or not. It seemed to me that the two participants were mostly talking past each other, and so I’ll summarize their views separately and not discuss the parts where they were attempting to address each other’s views. Some ideas behind the “discontinuous” view: 1. When things are made up of a bunch of parts, you only get impact once all of the parts are working. So, if you have, say, 19 out of 20 parts done, there still won’t be much impact, and then once you get the 20th part, then there is a huge impact, which looks like a discontinuity. 2. A continuous change in inputs can lead to a discontinuous change in outputs or impact. Continuously increasing the amount of fissile material leads to a discontinuous change from “inert-looking lump” to “nuclear explosion”. Continuously scaling up a language model from GPT-2 to GPT-3 leads to many new capabilities, such as few-shot learning. A misaligned AI that is only capable of concealing 95% of its deceptive activities will not perform any such activities; it will only strike once it is scaled up to be capable of concealing 100% of its activities. 3. Fundamentally new approaches to a problem will often have prototypes which didn’t have much impact. The difference is that they will scale much better, and so once they start having an impact this will look like a discontinuity in the rate of improvement on the problem. 4. The evolution from chimps to humans tells us that there is, within the space of possible mind designs, an area in which you can get from shallow, non-widely-generalizing cognition to deep, much-more-generalizing cognition, with only relatively small changes. 5. Our civilization tends to prevent people from doing things via bureaucracy and regulatory constraints, so even if there are productivity gains to be had from applications of non-scary AI, we probably won’t see them; as a result we probably do not see GWP growth before the point where an AI can ignore bureaucracy and regulatory constraints, which makes it look discontinuous. Some ideas behind the “continuous” view: 1. When people are optimizing hard in pursuit of a metric, then the metric tends to grow smoothly. While individual groups may find new ideas that improve the metric, those new ideas are unlikely to change the metric drastically more than previously observed changes in the metric. 2. A good heuristic for forecasting is to estimate (1) the returns to performance from additional effort, using historical data, and (2) the amount of effort currently being applied. These can then be combined to give a forecast. 3. How smooth and predictable the improvement is depends on how much effort is being put in. In terms of effort put in currently, coding assistants < machine translation < semiconductors, as a result we should expect semiconductor improvement to be smoother than machine translation improvement, which in turn will be smoother than coding assistant improvement. 4. In AI we will probably have crappy versions of economically useful systems before we have good versions of those systems. By the time we have good versions, people will be throwing lots of effort at the problem. 
For example, Codex is a crappy version of a coding assistant; such assistants will now improve over time in a somewhat smooth way. There’s further discussion on the differences between these views in a **[subsequent post](https://www.alignmentforum.org/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress)**. **Rohin's opinion:** The ideas I’ve listed in this summary seem quite compatible to me; I believe all of them to at least some degree (though perhaps not in the same way as the authors). I am not sure if either author would strongly disagree with any of the claims on this list. (Of course, this does not mean that they agree -- presumably there are some other claims that have not yet been made explicit on which they disagree.) TECHNICAL AI ALIGNMENT ======================= FIELD BUILDING --------------- **[AGI Safety Fundamentals curriculum and application](https://www.alignmentforum.org/posts/Zmwkz2BMvuFFR8bi3/agi-safety-fundamentals-curriculum-and-application)** *(Richard Ngo)* (summarized by Rohin): This post presents the curriculum used in the AGI safety fundamentals course, which is meant to serve as an effective introduction to the field of AGI safety. NEWS ===== **[Visible Thoughts Project and Bounty Announcement](https://www.alignmentforum.org/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement)** *(Nate Soares)* (summarized by Rohin): MIRI would like to test whether language models can be made more understandable by training them to produce visible thoughts. As part of this project, they need a dataset of thought-annotated dungeon runs. They are offering $200,000 in prizes for building the first fragments of the dataset, plus an additional $1M prize/budget for anyone who demonstrates the ability to build a larger dataset at scale. **[Prizes for ELK proposals](https://www.alignmentforum.org/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals)** *(Paul Christiano)* (summarized by Rohin): The Alignment Research Center (ARC) recently published a technical report on Eliciting Latent Knowledge (ELK). They are offering prizes of $5,000 to $50,000 for proposed strategies that tackle ELK. The deadline is the end of January. **Rohin's opinion:** I think this is a particularly good contest to try to test your fit with (a certain kind of) theoretical alignment research: even if you don't have much background, you can plausibly get up to speed in tens of hours. I will also try to summarize ELK next week, but no promises. **[Worldbuilding Contest](https://worldbuild.ai/)** (summarized by Rohin): FLI invites individuals and teams to compete for a prize purse worth $100,000+ by designing visions of a plausible, aspirational future including artificial general intelligence. The deadline for submissions is April 15. **Read more:** **[FLI launches Worldbuilding Contest with $100,000 in prizes](https://forum.effectivealtruism.org/posts/LjExZCPCHnNNTFDfq/fli-launches-worldbuilding-contest-with-usd100-000-in-prizes)** **[New Seminar Series and Call For Proposals On Cooperative AI](https://www.cooperativeai.com/seminars)** (summarized by Rohin): The Cooperative AI Foundation (CAIF) will be hosting a new fortnightly seminar series in which leading thinkers offer their vision for research on Cooperative AI. The first talk, 'AI Agents May Cooperate Better If They Don’t Resemble Us’, was given on Thursday (Jan 20) by Vincent Conitzer (Duke University, University of Oxford). 
You can find more details and submit a proposal for the seminar series **[here](https://www.cooperativeai.com/seminars)**. **[AI Risk Management Framework Concept Paper](https://www.nist.gov/system/files/documents/2021/12/14/AI%20RMF%20Concept%20Paper_13Dec2021_posted.pdf)** (summarized by Rohin): After their **[Request For Information last year](https://www.nist.gov/itl/ai-risk-management-framework)** (**[AN #161](https://mailchi.mp/24cc937b2489/an-161creating-generalizable-reward-functions-for-multiple-tasks-by-learning-a-model-of-functional-similarity)**), NIST has now posted a concept paper detailing their current thinking around the AI Risk Management Framework that they are creating, and are soliciting comments by Jan 25. As before, if you're interested in helping with a response, email Tony Barrett at anthony.barrett@berkeley.edu. **[Announcing the PIBBSS Summer Research Fellowship](https://forum.effectivealtruism.org/posts/hhLbFmhXbmaX5PcCa/extended-deadline-jan-23rd-announcing-the-pibbss-summer)** *(Nora Ammann)* (summarized by Rohin): Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) aims to facilitate knowledge transfer with the goal of building human-aligned AI systems. This summer research fellowship will bring together researchers from fields studying complex and intelligent behavior in natural and social systems, such as evolutionary biology, neuroscience, linguistics, sociology, and more. The application deadline is Jan 23, and there are also **[bounties](https://forum.effectivealtruism.org/posts/hhLbFmhXbmaX5PcCa/extended-deadline-jan-23rd-announcing-the-pibbss-summer)** for referrals. **[Action: Help expand funding for AI Safety by coordinating on NSF response](https://www.lesswrong.com/posts/vq6ztCgFczuH53f4Y/action-help-expand-funding-for-ai-safety-by-coordinating-on)** *(Evan R. Murphy)* (summarized by Rohin): The National Science Foundation (NSF) has put out a Request for Information relating to topics they will be funding in 2023 as part of their NSF Convergence Accelerator program. The author and others are coordinating responses to increase funding to AI safety, and ask that you fill out this **[short form](https://airtable.com/shrk0bAxm0EeJbyPC)** if you are willing to help out with a few small, simple actions. #### **FEEDBACK** I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**. #### **PODCAST** An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.
55587b7e-07d0-4eea-92e9-78a73a3cc69a
trentmkelly/LessWrong-43k
LessWrong
Depression's evolutionary roots Since there are intelligent people here who follow the topic of evolutionary psychology, I'd like to hear opinions about some research from 2009.  Particularly if this idea seems reasonable or not, but possibly other opinions that people might have about it. The idea is a variation on one that's somewhat popular here: that some conditions usually regarded as mental illnesses (Asperger's for example) are beneficial, even adaptive.  But the condition in question now is depression.  Briefly, the argument is that depression, at least when it is a response to stimuli and not a permanent feature, can have the useful effect of encouraging more rational thought when this is particularly important, even at the cost of quality of life, and that this is adaptive. Links: a Scientific American article, a journal article (which I haven't read, behind a $12 paywall).  Here's the abstract of the journal article: > Depression is the primary emotional condition for which help is sought. Depressed people often report persistent rumination, which involves analysis, and complex social problems in their lives. Analysis is often a useful approach for solving complex problems, but it requires slow, sustained processing, so disruption would interfere with problem solving. The analytical rumination hypothesis proposes that depression is an evolved response to complex problems, whose function is to minimize disruption and sustain analysis of those problems by (a) giving the triggering problem prioritized access to processing resources, (b) reducing the desire to engage in distracting activities (anhedonia), and (c) producing psychomotor changes that reduce exposure to distracting stimuli. As processing resources are limited, sustained analysis of the triggering problem reduces the ability to concentrate on other things. The hypothesis is supported by evidence from many levels—genes, neurotransmitters and their receptors, neurophysiology, neuroanatomy, neuroenergetics, pharmacology, cogniti
754c972c-647e-4522-b0be-f49786db391a
trentmkelly/LessWrong-43k
LessWrong
Anyone in the Madison area who'd attend a talk on Acausal Trade on Sunday? Hi, I'm deciding if we have enough people to have Joshua Fox give a talk on Acausal Trade on Sunday evening. Anyone in the Madison area who'd be interested?
c5955486-b4f1-41b7-889d-a4ed81e11d3b
trentmkelly/LessWrong-43k
LessWrong
Online Meetup: Forecasting workshop LessWrong Israel presents Edo Arad with a Forecasting workshop on Tuesday May 5, 2020 Forecasting is the art of assigning probabilities to future events. It turns out that there are many types of future predictions which we can make some sort of educated predictions on, and there are techniques for improving our methodology for doing so. We will talk about the basics of forecasting and have some hands-on experience. So come prepared with a computer and be ready for a challenging and fast paced experience! Please sign up here, and we will send you the URL: https://forms.gle/eJXGdAJfqhBiaZoR9

Schedule:
- 18.45 IDT (15:45 UTC) Check that you can hear and see
- 19:00 IDT (16:00 UTC) Gathering and mingling
- 19:15 IDT (16:30 UTC) Start the main event
2f141ac4-72c5-4fbc-8bfd-f837dbd0e5d3
trentmkelly/LessWrong-43k
LessWrong
Book Review: Design Principles of Biological Circuits I remember seeing a talk by a synthetic biologist, almost a decade ago. The biologist used a genetic algorithm to evolve an electronic circuit, something like this: (source) He then printed out the evolved circuit, brought it to his colleague in the electrical engineering department, and asked the engineer to analyze the circuit and figure out what it did. “I refuse to analyze this circuit,” the colleague replied, “because it was not designed to be understandable by humans.” He has a point - that circuit is a big, opaque mess. This, the biologist argued, is the root problem of biology: evolution builds things from random mutation, connecting things up without rhyme or reason, into one giant spaghetti tower. We can take it apart and look at all the pieces, we can simulate the whole thing and see what happens, but there’s no reason to expect any deeper understanding. Organisms did not evolve to be understandable by humans. I used to agree with this position. I used to argue that there was no reason to expect human-intelligible structure inside biological organisms, or deep neural networks, or other systems not designed to be understandable. But over the next few years after that biologist’s talk, I changed my mind, and one major reason for the change is Uri Alon’s book An Introduction to Systems Biology: Design Principles of Biological Circuits. Alon’s book is the ideal counterargument to the idea that organisms are inherently human-opaque: it directly demonstrates the human-understandable structures which comprise real biological systems. Right from the first page of the introduction: > … one can, in fact, formulate general laws that apply to biological networks. Because it has evolved to perform functions, biological circuitry is far from random or haphazard. ... Although evolution works by random tinkering, it converges again and again onto a defined set of circuit elements that obey general design principles. > The goal of this book is to highlight some
eff8e9d4-90e2-49b8-915d-593b7edd8a44
trentmkelly/LessWrong-43k
LessWrong
Outbuilding Thoughts Behind our house there's a small building: The previous owners had used it for storage, but it's in bad shape. There was a hole in the roof when we bought it, which has slowly been getting larger: There are cracks in the wall, which look like they might be from settling: Though at least it doesn't look too different from 5 1/2 years ago: It doesn't look like it has a proper footing. In this part of the country the top few feet of ground freeze every winter, so you need to go down four feet to get something stable. It looks to me like this is built on cinderblocks, and I would be surprised to learn that they dug a four-foot trench for them. We've mostly ignored it since we bought the house five years ago; there was so much else that needed doing. Now that I have more time, we're starting to think about what might make sense to do with it. First step is taking measurements: This is a scale drawing, because that seemed fun. I measure the external dimensions as 10'10" by 16'8", which is consistent with the 176 sqft that the assessor's database has for it. The main options are fixing it, replacing it, removing it, and leaving it alone. Because of the foundation issue I suspect fixing it doesn't make much sense unless we just wanted to use it for storage though I could talk to some contractors. Replacing it could make sense if we could have something that was actually useful there. Removing it would give us more space in the back yard, though I don't know if we would do much with it. Leaving it alone keeps our options open a bit, though at some point it would start to get in danger of falling over. Somerville adopted new zoning in December 2019. While the previous zoning would not allow most existing buildings to be built today, the revised zoning tries to be a closer match for the city. I think our current shed is allowed under 10.2.2.t: > A roofed structure used as a storage space. The following standards apply: > > > 1. Sheds may be up to twelve
e6a16bc2-4783-44b2-80a7-9e64ab6c71ba
trentmkelly/LessWrong-43k
LessWrong
A Year of Putting a Num on It Happy birthday to Putanumonit! Come celebrate with me next Friday (10/28) at Zach Weinersmith’s BAH Fest West. If you move fast you can grab one of the few remaining $10 tickets. I’m giving a talk on the dumbest idea ever to solve the global food crisis. For a taste, here’s my performance from last year’s BAH Fest in MIT where I talked about sleepwalking and made fun of Max Tegmark. If I win, drinks on me. My initial plan for the blog was to quietly practice writing for a few years until I feel I can produce something that deserves to be read. Instead, Scott Alexander linked to the third post I ever wrote and brought 5,000 readers along. In late December I decided that making Scott’s blogroll would be my goal for 2016. It happened on 12/29/2015. In May, "Shopping for Happiness" was the post of the day on The Browser, one spot ahead of Scott. Thank you Scott! And thanks to everyone who shared, reddited, tweeted, The Browsered, Metafiltered and to everyone who told their mom. Thanks to everyone who commented, emailed me, and a huge thanks to wonderful people who joined me in donating to GiveDirectly. Whenever a post of mine is shared there are discussions of it happening in 5 places at once (Reddit, LW, Facebook..) I do my best to engage with all of them, but I would love to consolidate most of the conversation here in the comments so it’s visible next to the post. Two things to encourage this: first, I would like to remind everyone of my $5 reward policy for comments correcting major errors in my posts, like this one. Second, here’s a rundown of the best comments of the year, with apologies to those I missed: scholar on male porn actors, blastmeister101 on national soccer success, my greatest fan on baby hatches, StrivingForConsistency on patterns in lottery tickets, Benjamin Arthur Schwab wrote an article on the game theory of dating in response to my article on the game theory of dating, Maggie taking first steps in LessWrongianism, BAS, entirelyuseless and J
60e79c96-fd6f-4f1d-951f-3af247693e14
trentmkelly/LessWrong-43k
LessWrong
How to Fix Science Like The Cognitive Science of Rationality, this is a post for beginners. Send the link to your friends! Science is broken. We know why, and we know how to fix it. What we lack is the will to change things.   In 2005, several analyses suggested that most published results in medicine are false. A 2008 review showed that perhaps 80% of academic journal articles mistake "statistical significance" for "significance" in the colloquial meaning of the word, an elementary error every introductory statistics textbook warns against. This year, a detailed investigation showed that half of published neuroscience papers contain one particular simple statistical mistake. Also this year, a respected senior psychologist published in a leading journal a study claiming to show evidence of precognition. The editors explained that the paper was accepted because it was written clearly and followed the usual standards for experimental design and statistical methods. Science writer Jonah Lehrer asks: "Is there something wrong with the scientific method?" Yes, there is. This shouldn't be a surprise. What we currently call "science" isn't the best method for uncovering nature's secrets; it's just the first set of methods we've collected that wasn't totally useless like personal anecdote and authority generally are. As time passes we learn new things about how to do science better. The Ancient Greeks practiced some science, but few scientists tested hypotheses against mathematical models before Ibn al-Haytham's 11th-century Book of Optics (which also contained hints of Occam's razor and positivism). Around the same time, Al-Biruni emphasized the importance of repeated trials for reducing the effect of accidents and errors. Galileo brought mathematics to greater prominence in scientific method, Bacon described eliminative induction, Newton demonstrated the power of consilience (unification), Peirce clarified the roles of deduction, induction, and abduction, and Popper emphasized the
2bb790f6-4543-4964-9381-bf61ccfcdb57
trentmkelly/LessWrong-43k
LessWrong
The Mirror Trap A quick post on a probably-real inadequate equilibrium mostly inspired by trying to think through what happened to Chance the Rapper.  Potentially ironic artifact if it accrues karma. 1. The sculptor's garden A sculptor worked in solitude for years, carving strange figures in his remote garden. Most of his statues failed: some cracked in winter, others looked wrong against the landscape. But occasionally, very rarely, one seemed to work. The first visitors stumbled upon the garden by accident. They found themselves stopped by his angels—figures that somehow held both sorrow and joy, wings that seemed about to flitter.  Word traveled slowly. More visitors came, drawn by something they couldn't quite name. The sculptor felt recognized for the first time. Not famous—but understood. His private work had somehow become communicable. He carved more angels, trying to understand what made these particular statues resonate. As crowds grew, their attention shifted. They began photographing the angels from certain angles, comparing new works to old, developing favorites. They applauded. The sculptor, still believing he followed the same thread, unconsciously noted which details drew the longest contemplation, which angles prompted gasps. Years passed. The garden became famous. Tour buses arrived with guides explaining the "important" pieces. The sculptor produced angels of increasing technical perfection, each guaranteed to produce the proper response at the proper moment. The crowds applauded more reliably. The sculptor carved more reliably. Each reinforced the other. One morning, walking his garden alone before dawn, he saw his statues without the crowds. Without their reactions to guide him, he saw what he'd actually been making: the same angel, refined and repeated, each iteration more precisely calibrated to trigger the expected response. He wasn't carving anymore. He was manufacturing applause in the shape of angels.  And the crowds—they weren't looking at ang
e1ad43f0-5e31-43df-90da-2c544fdfaee2
trentmkelly/LessWrong-43k
LessWrong
Categorical Organization in Memory: ChatGPT Organizes the 665 Topic Tags from My New Savanna Blog Cross posted from New Savanna. I've just posted a new working paper. Title above, links, abstract, TOC, and two opening sections below: Academia.edu: https://www.academia.edu/111354740/Categorical_Organization_in_Memory_ChatGPT_Organizes_the_665_Topic_Tags_from_My_New_Savanna_Blog SSRN: https://ssrn.com/abstract=4663978 ResearchGate: https://www.researchgate.net/publication/376481426_Categorical_Organization_in_Memory_ChatGPT_Organizes_the_665_Topic_Tags_from_My_New_Savanna_Blog > Abstract: I gave ChatGPT three lists of topics for which it had to propose categories into which the topics could be sorted. Two lists were relatively short, 56 and 53 topics; I asked ChatGPT to propose six organizing categories for each. One list was much longer, 655 topics; I asked ChatGPT to propose 12 categories for it. In all cases the proposed categories were reasonable. ChatGPT explained each proposed category either with a pair of sentences (the short lists), or with characterizing phrases (the long list). These characterizations were reasonable. In a further task, when asked to place topics under the proposed categories, ChatGPT placed many topics under the first two categories and very few under the last two. Though quite different in detail, this task has a rough formal similarity to generating a coherent story that is organized on three levels: 1) the whole story, 2) story segments, 3) sentences in story segments. Contents Organizing lists of categories into a coherent structure 1 The categories ChatGPT proposed 3 Formal similarity with story generation 5 What happens when ChatGPT lists tags under each category? 8 How would I have approached these tasks myself? 10 Propose categories to sort a short “top-level” list of 56 topics 10 Propose categories to sort a short arbitrary sub-list of 53 topics 14 Propose categories to sort the full list of 665 topics 16 Sort a short list and place the topics under the appropriate category 21 Organizing lists of categories into a
c25f63d2-4cf6-4bed-ae52-1838a0ea7f30
trentmkelly/LessWrong-43k
LessWrong
Signaling Virtuous Victimhood as Indicators of Dark Triad Personalities A recent paper at the University of British Columbia describes five studies which, taken together, provide evidence that tendency to engage in victimhood signalling is correlated across individuals with Dark Triad personality traits (Machiavellianism, narcissism, and psychopathy). Results were robust across the general US and Canada population gathered on mTurk as well as within a sample of Canadian undergraduates. A link to the full study is provided in this post.
975d98a8-4e73-4be0-a2fb-c61067ced08e
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post2079 ( This post was written as part of my work at the Alignment Research Center . ) In this post I’ll describe some possible approaches to eliciting latent knowledge (ELK) not discussed in our report . These are basically restatements of proposals by Davidad, Rohin, Ramana, and John Maxwell. For each approach, I’ll present one or two counterexamples that I think would break it. I think of these approaches as being based on two fundamental hopes: If our AI tampers with one sensor, there may be other sensors we could deploy, or experiments we could run, in order to notice that something fishy has happened. We could have an AI help us design those experiments/sensors. Actions that covertly tamper may be similar to actions that visibly tamper, e.g. they may exploit similar unknown facts about physics or they may involve common substeps. So we might be able to notice covert tampering by pointing out similar action sequences that lead to visible tampering. While these hopes may hold in practice, in this post I’ll describe possible situations where they would fail. (I don't think anyone particularly expected these approaches to be counterexample-proof.) I’m only claiming that these failures are possible . I think it is also interesting to try to generate the most probable failure cases and to actually estimate their probability, but that’s definitely not what I’m doing here. Strategy: Hold out sensors (Proposal #1 here ) In addition to asking my model to predict one set of sensors, I can deploy a second “held out” set (e.g. more cameras or seismographs or microphones…). I don’t ask my model to predict those sensors, but I show them to a human before getting them to answer questions. Hopefully that sometimes lets them notice tampering that would have been invisible without the new sensors. Worry: human simulator can use all the extra sensor readings too I’m concerned that I will learn a better version of the human simulator which predicts the readings of all sensors and then outputs what a human would infer from the complete set. That task need not be much more computationally demanding than the original human simulator. Moreover, my AI may be able to deduce the presence of the additional sensors very cheaply. Perhaps it can notice the sensors, or it can learn about my past actions to get a hint about where I may have placed them. If this is possible, then “Predict the readings of all sensors” need not be much more complex than “Predict the readings of one sensor.” An AI doesn’t have to be exactly correct about what sensors exist—when in doubt it can just predict what the sensor would say and imagine giving it to the human. Strategy: Predict hypothetical sensors (Proposal #2 here , also suggested with counterexample by Rohin in private communication) Instead of installing a single sensor, I could randomly choose what sensors to deploy and how to install them. Then I could train a model Predict + to map (description of sensor) → (sensor reading), and I could give a human the readings from many hypothetical sensors in order to give them the best chance of detecting tampering. Worry 1: Predict + may not be reasonable on non-existent sensors To collect training data for Predict + , I actually install sensors and then use their readings as a target. In every situation, my AI is being asked to predict the readings from a sensor that I really did build. 
If my AI is able to deduce the presence of sensors (as described in the last counterexample), then it’s unclear how Predict + (sensor description) will behave when we give it a description of a sensor that does not in fact exist. In particular, Predict + ( sensor description ) might behave in the following undesirable way: it predicts the future observations conditioned on its observations and on the fact that someone typed ( sensor description ) into the dataset. But it knows that no sensor with description ( sensor description ) exists. So if someone typed that description into the dataset, that just means something funny is going on—not that the dataset will magically contain readings from a non-existent sensor. For a predictor which understands the world well it seems quite plausible that this is the most natural behavior. Otherwise it needs to reason about a physically impossible counterfactual to get the “intended” answer (which it never needed to do to answer any other questions). Moreover, it would also be undesirable if the predictor conditioned on such a sensor existing despite having strong evidence to the contrary (since its predictions may then be extremely strange). (A more extreme version of the same worry applies to proposal #3 here . A predictor seems even more likely to behave badly when it is given a description of a sensor that the human obviously couldn’t have built, and it would definitely know that the sensor doesn’t actually exist.) Worry 2: the human may not be able to tell what is going on even with all the sensors For some kinds of tampering no possible sensors would reveal what had happened. For example, we can imagine that our AI has covertly built an army of tiny machines which deconstruct the objects in the room while replacing them with fake versions that look the same on the outside. Sufficiently sophisticated machinery could reproduce the results of almost any experiment we perform (e.g. if you broke an egg, the tiny machines could simulate the sound of an egg cracking and quickly create the yolk on demand). More realistically, our AI could search for sensors and tamper with any that it finds. It’s not clear how it is supposed to answer questions about what a sensor “would have shown”—if the hypothetical sensor had actually existed, then it also would have been found and tampered with. In this scenario, it may be “correct” for Predict + to report the tampered value. (As usual, this is only a problem if the human overseer literally can’t notice anything suspicious about a sequence of actions which is in fact constructing an army of sophisticated tiny machines. We are putting the burden of proof on the builder to show that a given concern is impossible.) Strategy: conduct new experiments to notice that something is wrong (Part of the proposals here and here .) In the previous section I described installing sensors before the AI acted so that we’d have a better chance of seeing what was going on in real time. But we could also imagine installing sensors or conducting experiments after arriving at a given state, in order to better understand that state. For example, suppose I believe that the diamond is in the room and all sensors are functioning correctly. I could ask another AI to design an experiment which would show that I’m fundamentally misunderstanding the situation and the sensors aren’t functioning correctly (e.g. perhaps it would walk me through building an x-ray machine to show me that the room is actually empty). 
Worry 1: there is no experiment that reveals the problem Ultimately to implement an experiment we will execute some actions in the world and observe the results. These experiments won’t be successful if either: The sensors used to record the results of an experiment are also tampered with, and there is no sequence of actions that “fix” those sensors. (As in the “delusion box” introduced here and discussed as a counterexample for an ELK-like problem here .) The actuators for carrying out an experiment have been tampered with. Perhaps our AI “proposes an experiment,” but actually the human sees something totally different (or is incapacitated). You could hope to avoid this problem by using the same sensors and actuators your AI uses to interact with the world; then any form of tampering which undermines those sensors/actuators seems like it must also undermine the AI’s ability to act in the world. But it seems like we could still have a problem: Your AI may not continue acting using the same actuators or sensors, or at least not only the same actuators, with which it was originally constructed. It may have constructed tools in the environment, other autonomous processes, or other copies of itself. Even if your AI is continuing to use the same sensors, it could tamper with sensors in a way that preserves their usefulness while forcing them to look innocuous to the human (e.g. by putting information in higher-order bits of a benign-looking video). In some situations an AI would be motivated to tamper with sensors even if doing so reduced its own ability to act in the world. Worry 2: an experiment can itself perform tampering If we let our AI propose experiments to “reveal” tampering, it may instead propose experiments that do tampering from scratch. That tampering could then provide convincing evidence that something is fishy. If this can be done in any state, then it is useless for telling whether something fishy has actually happened. We could try to avoid this by restricting to actions that are simple enough that a human understands how they work and can be confident that they won’t tamper. But it then becomes even more unclear whether there is any experiment that would identify problems (since we are effectively pitting a human experimenter against a very powerful manipulator). Alternatively, we could hope that “reporting existing tampering” involves shorter or simpler action sequences than tampering from scratch. Then we could identify a state as fishy if it’s possible to find a short sequences of actions that look fishy. But unfortunately the fishiest states could end up looking the safest . For example, suppose that if an AI spends 10 * N + 100 timesteps tampering with the sensors, it takes N steps to “undo” its efforts and reveal that something has gone wrong. A simple tampering attempt takes 200 timesteps to execute and 10 timesteps to undo, which sounds good. But if the AI spent 1100 timesteps tampering, it would take 100 timesteps to undo its work, which is more than the amount it would take to tamper from scratch. So this extremely-tampered-with state would end up looking less fishy than a normal state. Strategy: detect covert tampering by analogy to visible tampering (Part of the proposal here .) As we’ve just discussed, some forms of sensor tampering might also destroy your ability to ever figure out whether tampering has occurred (by disabling actuators or sensors). 
But it’s still plausible that on the way to tampering we will pass through states in which something fishy is clearly about to happen. If we interrupt these action sequences in the middle, maybe we can do an experiment that shows that something has gone wrong. Put differently: maybe action sequences that result in covert tampering have the same initial steps as action sequences that result in visible tampering. More generally: we could look for any recognizable similarity between action sequences that result in covert tampering and action sequences that result in nearly-identical yet visible tampering. Intuitively you might be optimistic about this if e.g. a would-be tamperer needs to somehow specify what they want to show up on the camera after it’s tampered with. If they just specified something slightly different then the action sequence would still be almost the same yet it would be easy to tell that the camera was tampered with (e.g. because it’s showing something physically impossible). Even if a human doesn’t understand how the covert tampering worked, they could say that something bad has happened because the action sequence is similar to one that causes overt tampering. Worry: action sequences may be brittle There may be sequences of action that covertly tamper with the sensors, but for which any small change leads to no tampering at all. As an extreme example, we can imagine an AI whose first action is to commit to the hash of its next 50,000 actions. Perhaps it chooses a value x , and then installs a device that will cut power to its motors unless the hash of the next 50k actions is equal to x . Perhaps it “should have been” possible to change action 34,279 in order to carry out visible tampering rather than covert tampering—but instead what will happen is that the device will trigger and incapacitate the AI, so that no tampering at all occurs. More realistically you might imagine an environment where an AI needs to prepare for chaotic influences in the environment—if it changes the action in timestep 17 it could have subtle effects later that change what actions in timestep 140 are feasible. In this case, even if the effects of tampering are determined by actions at late timesteps, you still need to take corresponding actions at early timesteps in order to prepare. A broader way of viewing this concern is that covert and visible tampering may require “similar” actions in an abstract sense that is meaningful to an AI, which nevertheless correspond to very different sequences of low-level actions and which look quite different to a human. If there are any kinds of tampering that are brittle in this way, then we won’t be able to recognize them (and so if we select against tampering we may be left with such actions).
c0a2387f-7353-41e0-823c-e74cad8e59b2
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] DreamerV3: A General RL Architecture Here's a link to DreamerV3, a new model from DeepMind that can be trained on a bunch of different tasks (including a simplified version of Minecraft) and outperforms narrower models. Link: https://arxiv.org/pdf/2301.04104v1.pdf The most surprising bits are that: * The types of tasks they train it on are fairly diverse * Data efficiency scales with the number of parameters * So far they haven't scaled it that much, yet still got pretty good results
190030e8-4212-4eb3-923a-466a9b4fb562
trentmkelly/LessWrong-43k
LessWrong
Gun Control: How would we know? I don't know how to keep this topic away from http://lesswrong.com/lw/gw/politics_is_the_mindkiller/ , so I'm just going to exhort everyone to try to keep this about rationality and not about politics as usual.  I myself have strong opinions here, which I'm deliberately squelching. So I got to thinking about the issue of gun control in the wake of a recent school shooting in the US, specifically from the POV of minimizing presumed-innocents getting randomly shot.  Please limit discussion to that *specific* issue, or we'll be here all year. My question is not so much "Is strict gun control or lots of guns better for us [in the sole context of minimizing presumed-innocents getting randomly shot]?", although I'm certainly interested in knowing the answer to that, but I think if that was answerable we as a culture wouldn't still be arguing about it. Let's try a different question, though: how would we know? That is, what non-magical statistical evidence could someone give that would actually settle the question reasonably well (let's say, at about the same level as "smoking causes cancer", or so)? As a first pass I looked at http://en.wikipedia.org/wiki/List_of_countries_by_intentional_homicide_rate and http://en.wikipedia.org/wiki/List_of_countries_by_firearm-related_death_rate and I noted that the US, which is famously kind of all about the guns, has significantly higher rates than other first-world countries.  I had gone into this with a deliberate desire to win, in the less wrong sense, so I accepted that this strongly speaks against my personal beliefs (my default stance is that all teachers should have concealed carry permits and mandatory shooting range time requirements), and was about to update (well, utterly obliterate) those beliefs, when I went "Now, hold on.  In the context of first world countries, the US has relatively lax gun control, and we seem to rather enjoy killing each other.  How do I know those are causally related, though?  Is it not just a
6e24f407-1a91-46d8-b262-53d4ec6db7f9
trentmkelly/LessWrong-43k
LessWrong
Littlewood's Law and the Global Media
45eaf362-d865-4ee4-a826-5978018358db
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Logical inductors in multistable situations. I was reading about logical induction at https://intelligence.org/files/LogicalInduction.pdf and understand how it resolves paradoxical self-reference, but I'm not sure what the inductor will do in situations where multiple stable solutions exist.

Let $f: [0,1] \to [0,1]$. If $f$ is continuous then it must have a fixed point. Even if it has finitely many discontinuities, it must have an "almost fixed" point: an $x$ such that $\forall \epsilon > 0: \inf_{y \in (x - \epsilon, x)} f(y) \le x \le \sup_{y \in (x, x + \epsilon)} f(y)$. However, some $f$ have multiple such points.

$f(x) = \begin{cases} 0 & x < \tfrac{1}{2} \\ 1 & x \ge \tfrac{1}{2} \end{cases}$ has "almost fixed" points at $0$, $\tfrac{1}{2}$ and $1$.

A similar continuous $f$ is $f(x) = \begin{cases} 0 & x \le \tfrac{1}{3} \\ 3x - 1 & \tfrac{1}{3} \le x \le \tfrac{2}{3} \\ 1 & x \ge \tfrac{2}{3} \end{cases}$, with $f(x) = x$ having every point fixed.

Consider $\phi_n =$ "$f(\mathbb{E}_n(\phi_n))$". These functions make $\phi_n$ the logical inductor version of "this statement is true". Multiple values can be consistently applied to this logically uncertain variable. None of the possible values allow a money pump, so the technique of showing that some behaviour would make the market exploitable, which is used repeatedly in the paper, doesn't work here.

Is the value of $\mathbb{E}_n(\phi_n)$ uniquely defined, or does it depend on the implementation details of the logical inductor? Does it tend to a limit as $n \to \infty$?

Is there a sense in which $f(x) = \begin{cases} 0.83 & 0.82 \le x \le 0.84 \\ 0.1 & \text{else} \end{cases}$ gives $\mathbb{E}_n(\phi_n)$ a stronger attractor at $0.1$ than at $0.83$?

Can $\mathbb{E}_n(\phi_n)$ be $0.6$ where $f(x) = \begin{cases} 0.6 & x = 0.6 \\ 0.1 & \text{else} \end{cases}$, because the smallest variation would force it to be $0.1$?
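To make the "almost fixed point" condition a bit more concrete, here is a minimal numerical sketch; it is my own illustration rather than anything from the post or the logical induction paper, and the function names, grid size, window width standing in for the $\epsilon$-limit, and the treatment of the endpoints are all arbitrary choices. It scans a grid over $[0,1]$ and keeps the points where a discretised version of the inf/sup condition holds for the discontinuous step example above.

```python
def step(x):
    """The discontinuous example: 0 below 1/2, 1 at or above 1/2."""
    return 0.0 if x < 0.5 else 1.0

def almost_fixed_points(f, n=1001, eps=0.0025):
    """Grid points x in [0,1] with inf of f on (x-eps, x) <= x <= sup of f on (x, x+eps).

    The window width eps stands in for the epsilon-limit; empty one-sided
    windows at the endpoints are treated as vacuously satisfied.
    """
    grid = [i / (n - 1) for i in range(n)]
    out = []
    for x in grid:
        left = [f(y) for y in grid if x - eps < y < x]    # empty at x = 0
        right = [f(y) for y in grid if x < y < x + eps]   # empty at x = 1
        lo = min(left) if left else float("-inf")
        hi = max(right) if right else float("inf")
        if lo <= x <= hi:
            out.append(x)
    return out

# Prints grid points clustered around 0, 1/2 and 1 and nothing else,
# roughly [0.0, 0.498, 0.499, 0.5, 0.501, 1.0].
print(almost_fixed_points(step))
```

(The continuous ramp above behaves analogously, with genuine fixed points at $0$, $\tfrac{1}{2}$ and $1$.)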
33037258-00e5-4b54-aeca-cdd02c9072ae
StampyAI/alignment-research-dataset/lesswrong
LessWrong
All AGI Safety questions welcome (especially basic ones) [April 2023] **tl;dr: Ask questions about AGI Safety as comments on this post, including ones you might otherwise worry seem dumb!** Asking beginner-level questions can be intimidating, but everyone starts out not knowing anything. If we want more people in the world who understand AGI safety, we need a place where it's accepted and encouraged to ask about the basics. We'll be putting up monthly FAQ posts as a safe space for people to ask all the possibly-dumb questions that may have been bothering them about the whole AGI Safety discussion, but which until now they didn't feel able to ask. It's okay to ask uninformed questions, and not worry about having done a careful search before asking. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wqeStKQ3PGzZaeoje/segdntcbbqbpd9gagxjr)[AISafety.info](https://aisafety.info/) - Interactive FAQ --------------------------------------------------------- Additionally, this will serve as a way to spread the project [Rob Miles' team](https://discord.gg/7wjJbFJnSN)[[1]](#fnuvhdlt7jgui) has been working on: [Stampy](https://ui.stampy.ai/) and his professional-looking face [aisafety.info](https://aisafety.info/). This will provide a single point of access into AI Safety, in the form of a comprehensive interactive FAQ with lots of links to the ecosystem. We'll be using questions and answers from this thread for Stampy (under [these copyright rules](https://coda.io/@alignmentdev/ai-safety-info/copyright-48)), so please only post if you're okay with that! ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673281984/mirroredImages/fSMrwJnqRb5NrMYFx/jc10owh5bfmttfwtguzw.png)[**Stampy**](https://ui.stampy.ai/) - Here to help everyone learn about ~~stamp maximization~~ AGI Safety!You can help by [adding](https://stampy.ai/) questions (type your question and click "I'm asking something else") or by [editing questions and answers](https://coda.io/@alignmentdev/ai-safety-info/get-involved-26). We welcome [feedback](https://docs.google.com/forms/d/1S5JEjhRE8H8MecJuE-X066akb93HUCNFQIVxBhxf2GA/edit?ts=62be9caf) and questions on the UI/UX, policies, etc. around Stampy, as well as pull requests to [his codebase](https://github.com/StampyAI/stampy-ui) and [volunteer developers](https://discord.gg/rtpCBepnyw) to help with the [conversational agent](https://www.lesswrong.com/posts/SLRLuiuDykfTdmesK/speed-running-everyone-through-the-bad-alignment-bingo-usd5k?commentId=8wa57fs8PihhgJZGz) and front end that we're building. We've got more to write before he's ready for prime time, but we think Stampy can become an excellent resource for everyone from skeptical newcomers, through people who want to learn more, right up to people who are convinced and want to know how they can best help with their skillsets. **Guidelines for Questioners:** * No previous knowledge of AGI safety is required. If you want to watch a few of the [Rob Miles videos](https://www.youtube.com/watch?v=pYXy-A4siMw&list=PLCRVRLd2RhZTpdUdEzJjo3qhmX3y3skWA&index=1), read either the [WaitButWhy](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) posts, or the [The Most Important Century](https://www.cold-takes.com/most-important-century/) summary from OpenPhil's co-CEO first that's great, but it's not a prerequisite to ask a question. 
* Similarly, you do not need to try to find the answer yourself before asking a question (but if you want to test [Stampy's in-browser tensorflow semantic search](https://ui.stampy.ai/) that might get you an answer quicker!). * Also feel free to ask questions that you're pretty sure you know the answer to, but where you'd like to hear how others would answer the question. * One question per comment if possible (though if you have a set of closely related questions that you want to ask all together that's ok). * If you have your own response to your own question, put that response as a reply to your original question rather than including it in the question itself. * Remember, if something is confusing to you, then it's probably confusing to other people as well. If you ask a question and someone gives a good response, then you are likely doing lots of other people a favor! * In case you're not comfortable posting a question under your own name, you can [use this form to send a question anonymously](https://docs.google.com/forms/d/e/1FAIpQLSedzM7CRpCnSQxFGU2u9H3qaPEu4z6DVUtT29zBzvbiHSpStg/viewform?usp=sf_link) and I'll post it as a comment. **Guidelines for Answerers:** * Linking to the relevant answer on Stampy is a great way to help people with minimal effort! Improving that answer means that everyone going forward will have a better experience! * This is a safe space for people to ask stupid questions, so be kind! * If this post works as intended then it will produce many answers for Stampy's FAQ. It may be worth keeping this in mind as you write your answer. For example, in some cases it might be worth giving a slightly longer / more expansive / more detailed explanation rather than just giving a short response to the specific question asked, in order to address other similar-but-not-precisely-the-same questions that other people might have. **Finally:** Please think very carefully before downvoting any questions, remember this is the place to ask stupid questions! 1. **[^](#fnrefuvhdlt7jgui)**If you'd like to join, head over to [Rob's Discord](https://discord.gg/7wjJbFJnSN) and introduce yourself!
13c670cb-107d-401b-8eed-f96a04ecc624
trentmkelly/LessWrong-43k
LessWrong
Superhuman Meta Process None
33848bc1-5c1c-4d1d-8373-b70a9c3efdce
trentmkelly/LessWrong-43k
LessWrong
Aliveness in Training Related: The Martial Art of Rationality One principle in the martial arts is that arts that are practiced with aliveness tend to be more effective. "Aliveness" in this case refers to a set of training principles focused on simulating conditions in an actual fight as closely as possible in training. Rather than train techniques in a vacuum or against a compliant opponent, alive training focuses on training with movement, timing, and energy under conditions that approximate those where the techniques will actually be used.[1] A good example of training that isn't alive would be methods that focused entirely on practicing kata and forms without making contact with other practitioners; a good example of training that is alive would be methods that focused on verifying the efficacy of techniques through full-contact engagement with other practitioners. Aliveness tends to create an environment free from epistemic viciousness-- if your technique doesn't work, you'll know because you won't be able to use it against an opponent. Further, if your technique does work, you'll know that it works because you will have applied it against people trying to prevent you from doing so, and the added confidence will help you better apply that technique when you need it. Evidence from martial arts competitions indicates that those who practice with aliveness are more effective than others. One of the chief reasons that Brazilian jiu-jitsu (BJJ) practitioners were so successful in early mixed martial arts tournaments was that BJJ-- a martial art that relies primarily on grappling and the use of submission holds and locks to defeat the opponent-- can be trained safely with almost complete aliveness, whereas many other martial arts cannot.[2] Now, this is not to say that one should only attempt to practice martial arts under completely realistic conditions. For instance, no martial arts school that I am aware of randomly ambushes or attempts to mug its students on the streets outside
bc7f15eb-8dc3-4f90-84c9-62c343ba0d2c
trentmkelly/LessWrong-43k
LessWrong
Tell LessWrong about your charitable donations When I was a graduate student at the University of Notre Dame, I received a monthly living stipend of roughly $1,600. I decided to commit to giving ~10% of it to charity, and I had read in Peter Singer's book The Life You Can Save that one of the most efficient charities out there was Population Services International (PSI). Singer reported that GiveWell, a leading charity rating organization, had estimated that PSI's efforts saved lives at a cost of $650-$1000 each (pp. 88-89). So, I set up a recurring monthly donation of $160 to PSI, and kept it up for 15 months, for a total donation of $2,400. I've been meaning to post the above information publicly for awhile, but was pushed over the edge by reading one of Eliezer's posts from a couple years back, Why Our Kind Can't Cooperate: > Let me tell you about a different annual fundraising appeal.  One that I ran, in fact; during the early years of a nonprofit organization that may not be named.  One difference was that the appeal was conducted over the Internet.  And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd.  (To point in the rough direction of an empirical cluster in personspace.  If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.) > > I crafted the fundraising appeal with care.  By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years.  The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal.  I sent it out to several mailing lists that covered most of our potential support base. > > And almost immediately, people started posting to the mailing lists about why they weren't going to donate.  Some of them raised basic questions about the nonprofit's philosophy and mission.  Others talked about their brilliant ideas for all the other sources th
d6abd39c-4094-4350-a7c7-68dd2dbff920
StampyAI/alignment-research-dataset/arxiv
Arxiv
Pyramid Adversarial Training Improves ViT Performance 1 Introduction --------------- One fascinating aspect of human intelligence is the ability to generalize from limited experiences to new environments [lake2017building]. While deep learning has made remarkable progress in emulating or “surpassing” humans on classification tasks, deep models have difficulty generalizing to out-of-distribution data [ood\_image]. Convolutional neural networks (CNNs) may fail to classify images with challenging contexts [imagenet-a:2021], unusual colors and textures [imagenet-r:2021, imagenet-sketch:2019, stylized-imagenet:2019] and common or adversarial corruptions [imagenet-c:2019, explaining\_and\_harnessing\_adversarial\_examples:2015]. To reliably deploy neural networks on diverse tasks in the real world, we must improve their robustness to out-of-distribution data. Figure 1: Top: Visualization of our learned multi-scale pyramid perturbations. We show the original image, multiple scales of a perturbation pyramid, and the perturbed image. Bottom: We show thumbnails of in-distribution and out-of-distribution datasets, and the gains from applying our technique on each dataset. (Note that lower is better for ImageNet-C.) One major line of research focuses on network design. Recently the Vision Transformer (ViT) [vit:2020] and its variants [deit:2021, swin:2021, dpt:2021, beit:2021] have advanced the state of the art on a variety of computer vision tasks. In particular, ViT models are more robust than comparable CNN architectures [towards\_robust\_vit:2021, vit\_are\_robust:2021, vit\_are\_robust:2021, on\_the\_robustness\_of\_vision\_transformers\_to\_adversarial\_examples:2021]. With a weak inductive bias and powerful model capacity, ViT relies heavily on strong data augmentation and regularization to achieve better generalization [how\_to\_train\_your\_vit:2021, deit:2021]. To further push this envelope, we explore using adversarial training [kurakin2016adversarial, theoretically\_principled\_tradeoff:2019] as a powerful regularizer to improve the performance of ViT models. Prior work [robustness\_at\_odds\_with\_accuracy:2019] suggests that there exists a performance trade-off between in-distribution generalization and robustness to adversarial examples. Similar trade-offs have been observed between in-distribution and out-of-distribution generalization [tradeoff\_robustness\_accuracy:2020, theoretically\_principled\_tradeoff:2019]. These trade-offs have primarily been observed in the context of CNNs [unlabeled\_improves\_robustness:2019, tradeoff\_robustness\_accuracy:2020]. However, recent work has demonstrated the trade-off can be broken. AdvProp [advprop:2020] achieves this via adversarial training with a “split” variant of Batch Normalization [ioffe2015batch] for EfficientNet [efficientnet:2019]. In our work, we demonstrate the trade-off can be broken for the newly introduced vision transformer architecture [vit:2020]. We introduce *pyramid adversarial training* that trains the model with input images altered at multiple spatial scales, as illustrated in Fig. 1. Using these structured, multi-scale adversarial perturbations leads to significant performance gains compared to both baseline and standard pixel-wise adversarial perturbations. Interestingly, we see these gains for both clean (in-distribution) and robust (out-of-distribution) accuracy.
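To make the perturbation pyramid of Figure 1 slightly more concrete, here is a minimal numpy sketch of how a multi-scale perturbation can be assembled. It is an illustrative reconstruction, not the authors' implementation: the function name, the scale set, the per-scale weights and the epsilon budget are placeholder assumptions, and the actual method optimizes each pyramid level with adversarial (sign-of-gradient) updates under per-level constraints rather than sampling the levels at random as done here.

```python
import numpy as np

def pyramid_perturbation(image, scales=(1, 4, 16), weights=(1.0, 4.0, 16.0), eps=8 / 255, rng=None):
    """Sum per-scale perturbations, each defined on a coarse grid and upsampled.

    Illustrative sketch only: scales, weights and eps are placeholder values,
    and the coarse levels are sampled at random instead of being optimized.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    delta = np.zeros_like(image, dtype=np.float64)
    for s, m in zip(scales, weights):
        # One perturbation value per s x s block, then nearest-neighbour upsampling.
        coarse = rng.uniform(-eps, eps, size=(h // s, w // s, c))
        delta += m * np.kron(coarse, np.ones((s, s, 1)))
    return np.clip(image + delta, 0.0, 1.0)

# Toy usage on a random 224x224 RGB "image" with values in [0, 1].
img = np.random.default_rng(0).uniform(size=(224, 224, 3))
adv = pyramid_perturbation(img, rng=np.random.default_rng(1))
print(adv.shape, float(np.abs(adv - img).max()))
```

The design intuition is that the coarse levels move whole blocks of pixels together while the finest level acts pixel-wise, which is what makes the perturbation structured rather than plain pixel noise.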
We further enhance the pyramid attack with additional regularization techniques: “matched” Dropout and stochastic depth. Matched Dropout uses the same Dropout configuration for both the regular and adversarial samples in a mini-batch (hence the word “matched”). Stochastic depth [stochastic\_depth:2016, how\_to\_train\_your\_vit:2021] randomly drops layers in the network and provides a further boost when paired with matched Dropout and multi-scale perturbations. Our ablation studies confirm the importance of matched Dropout when used in conjunction with pyramid adversarial training. They also reveal a complicated interplay between adversarial training, the attack being used, and network capacity. We additionally show that our approach is applicable to datasets of various scales (ImageNet-1K and ImageNet-21K) and to a variety of network architectures such as ViT [vit:2020], Discrete ViT [anonymous2022discrete], and MLP-Mixer [tolstikhin2021mlp].

Our contributions are summarized below:

* To our knowledge, we are the first to demonstrate that adversarial training improves ViT model performance on both ImageNet [imagenet:2009] and out-of-distribution ImageNet robustness datasets [imagenet-a:2021, imagenet-c:2019, imagenet-r:2021, imagenet-sketch:2019, stylized-imagenet:2019].
* We demonstrate the importance of matched Dropout and stochastic depth for the adversarial training of ViT.
* We design pyramid adversarial training to generate multi-scale, structured adversarial perturbations, which achieve significant performance gains over the non-adversarial baseline and over adversarial training with pixel perturbations.
* We establish a new state of the art for ImageNet-C, ImageNet-R, and ImageNet-Sketch without extra data, using only our pyramid adversarial training and the standard ViT-B/16 backbone. We further improve our results by incorporating extra ImageNet-21K data.
* We perform numerous ablations which highlight several elements critical to the performance gains.

2 Related Work
---------------

There exists a large body of work on measuring and improving the robustness of deep learning models, in the context of adversarial examples and generalization to non-adversarial but shifted distributions. We define *out-of-distribution accuracy/robustness* to explicitly refer to the performance of a model on non-adversarial distribution shifts, and *adversarial accuracy/robustness* to refer to the special case of robustness on adversarial examples. When the evaluation is performed on a dataset drawn from the same distribution, we call this *clean accuracy*.

#### Adversarial training and robustness

The discovery of adversarial examples [szegedy2013intriguing] has stimulated a large body of literature on adversarial attacks and defenses [kurakin2016adversarial, towards\_deep\_learning\_models\_resistant\_to\_adversarial\_attacks:2018, moosavi2019robustness, qin2019adversarial, athalye2018obfuscated, carlini2017towards, papernot2016distillation, xiao2018spatially]. Of the many proposed defenses, adversarial training [kurakin2016adversarial, towards\_deep\_learning\_models\_resistant\_to\_adversarial\_attacks:2018] emerged as a simple and effective, albeit expensive, approach for making networks adversarially robust.
Although some work [robustness\_at\_odds\_with\_accuracy:2019, theoretically\_principled\_tradeoff:2019] has suggested a tradeoff between adversarial and out-of-distribution robustness or clean accuracy, other analysis [unlabeled\_improves\_robustness:2019, tradeoff\_robustness\_accuracy:2020] has suggested simultaneous improvement is achievable. In [tradeoff\_robustness\_accuracy:2020, virtual\_adversarial:2018], the authors note improved accuracy on both clean and adversarially perturbed data, though only on smaller datasets such as CIFAR-10 [cifar:2009] and SVHN [svhn:2011], and only through the use of additional data extending the problem to the semi-supervised setting. Most closely related to our work is the technique of [advprop:2020], which demonstrates the potential of adversarial training to improve both clean accuracy and out-of-distribution robustness. They focus primarily on CNNs and propose split batch norms to separately capture the statistics of clean and adversarially perturbed samples in a mini-batch. At inference time, the batch norms associated with adversarially perturbed samples are discarded, and all data (presumed clean or out-of-distribution) flows through the batch norms associated with clean samples. Their results are demonstrated on EfficientNet[efficientnet:2019] and ResNet [resnet:2015] architectures. However, their approach is not directly applicable to ViT where batch norms do not exist. In our work, we propose novel approaches, and find that properly constructed adversarial training helps clean accuracy and out-of-distribution robustness for ViT models. #### Robustness of ViT ViT models have been found to be more adversarially robust than CNNs  [vit\_are\_robust:2021, intriguing\_properties\_of\_vit:2021], and more importantly, generalize better than CNNs with similar model capacity on ImageNet out-of-distribution robustness benchmarks [vit\_are\_robust:2021]. While existing works focus on analyzing the cause of ViT’s superior generalizability, this work aims at further improving the strong out-of-distribution robustness of the ViT model. A promising approach to this end is data augmentation; as shown recently  [how\_to\_train\_your\_vit:2021, deit:2021], ViT benefits from strong data augmentation. However, the data augmentation techniques used in ViT [how\_to\_train\_your\_vit:2021, deit:2021] are optimized for clean accuracy on ImageNet, and knowledge about robustness is still limited. Different from prior works, this paper focuses on improving both the clean accuracy and robustness for ViT. We show that our technique can effectively complement strong ViT augmentation as in [how\_to\_train\_your\_vit:2021]. We additionally verify that our proposed augmentation can benefit two related architectures which also use strong data augmentation: MLP-Mixer [tolstikhin2021mlp] and Discrete ViT [anonymous2022discrete]. #### Data augmentation Existing data augmentation techniques, although mainly developed for CNNs, transfer reasonably well to ViT models [randaugment:2020, wang2021augmax, hendrycks2019augmix]. Other work has studied larger and more structured attacks [xiao2018spatially]. Our work is different from prior work in that we utilize adversarial training to augment ViT and tailor our design to the ViT architecture. To our knowledge, we appear to be the first to demonstrate that adversarial training substantially improves ViT performance in both clean and out-of-distribution accuracies. 
3 Approach
-----------

We work in the supervised learning setting where we are given a training dataset D consisting of clean images x and their labels y. The loss function considered is a cross-entropy loss $L(\theta, x, y)$, where $\theta$ are the parameters of the ViT model, with weight regularization $f$. The baseline models minimize the following loss:

$$\mathbb{E}_{(x,y)\sim D}\left[L(\theta,\tilde{x},y)+f(\theta)\right], \qquad (1)$$

where $\tilde{x}$ refers to a data-augmented version of the clean sample $x$, and we adopt the standard data augmentations as in [how\_to\_train\_your\_vit:2021], such as RandAug [randaugment:2020].

### 3.1 Adversarial Training

The overall training objective for adversarial training [wald1945statistical] is given as follows:

$$\mathbb{E}_{(x,y)\sim D}\left[\max_{\delta\in P} L(\theta,\tilde{x}+\delta,y)+f(\theta)\right], \qquad (2)$$

where $\delta$ is a per-pixel, per-color-channel additive perturbation, and $P$ is the perturbation distribution. Note that the adversarial image, $x_a$, is given by $\tilde{x}+\delta$, and we use these two interchangeably below. The perturbation $\delta$ is computed using an optimizer on the objective inside the maximization of Equation (2). This objective tries to improve the worst-case performance of the network w.r.t. the perturbation; consequently, the resulting model has lower clean accuracy. To remedy this, we can train on both clean and adversarial images [explaining\_and\_harnessing\_adversarial\_examples:2015, kurakin2016adversarial, advprop:2020] using the following objective:

$$\mathbb{E}_{(x,y)\sim D}\left[L(\theta,\tilde{x},y)+\lambda \max_{\delta\in P} L(\theta,\tilde{x}+\delta,y)+f(\theta)\right]. \qquad (3)$$

This objective uses adversarial images as a form of regularization or data augmentation, to force the network towards representations that perform well on out-of-distribution data. These networks exhibit some degree of robustness but still have good clean accuracy. More recently, [advprop:2020] proposes a split batch norm that leads to performance gains for CNNs on both clean and robust ImageNet test datasets. Note that they do not concern themselves with adversarial robustness, and neither do we in this paper.
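To make the combined objective in Eq. (3) concrete, here is a minimal, illustrative JAX sketch of one gradient computation with a pixel-level inner maximization. This is not the authors' released code: the tiny linear "model", the untargeted PGD variant (the actual attack minimizes the loss toward a random target label, as described under "Setting up the attack" below), and all hyperparameter values are placeholder assumptions chosen only to keep the example self-contained and runnable.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # placeholder linear classifier with softmax cross-entropy, standing in for L(theta, x, y)
    logits = jnp.mean(x, axis=(1, 2)) @ params["w"] + params["b"]
    logp = jax.nn.log_softmax(logits)
    return -jnp.mean(jnp.take_along_axis(logp, y[:, None], axis=1))

def pgd_perturbation(params, x, y, eps=4 / 255, step=1 / 255, n_steps=5):
    # inner maximization of Eq. (2)/(3): l_inf-bounded pixel perturbation
    delta = jnp.zeros_like(x)
    for _ in range(n_steps):
        g = jax.grad(lambda d: loss_fn(params, x + d, y))(delta)
        delta = delta + step * jnp.sign(g)          # ascend the loss
        delta = jnp.clip(delta, -eps, eps)          # project onto the eps-ball
        delta = jnp.clip(x + delta, 0.0, 1.0) - x   # keep the perturbed image in [0, 1]
    return delta

def combined_objective(params, x, y, lam=0.3):
    # Eq. (3): clean loss + lambda * adversarial loss (weight regularization f omitted)
    delta = jax.lax.stop_gradient(pgd_perturbation(params, x, y))
    return loss_fn(params, x, y) + lam * loss_fn(params, x + delta, y)

# toy usage: two 8x8 RGB "images", ten classes
key = jax.random.PRNGKey(0)
x = jax.random.uniform(key, (2, 8, 8, 3))
y = jnp.array([1, 7])
params = {"w": 0.01 * jax.random.normal(jax.random.fold_in(key, 1), (3, 10)),
          "b": jnp.zeros(10)}
grads = jax.grad(combined_objective)(params, x, y)
```

In the paper's setting, the loss would be the ViT forward pass applied to the augmented sample of Eq. (1), and the inner loop would use the targeted, random-label loss rather than the untargeted ascent shown here.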
### 3.2 Pyramid Adversarial Training

Pixel-wise adversarial images are defined [kurakin2016adversarial] as $x_a = x + \delta$, where the perturbation distribution $P$ consists of a clipping function $C_{B_\epsilon}$ that clips the perturbation at each pixel location $(i,j)$ to lie inside the specified ball $B_\epsilon$ for a specified $l_p$-norm [towards\_deep\_learning\_models\_resistant\_to\_adversarial\_attacks:2018], with maximal radius $\epsilon$ for the perturbation. However, for pixel-wise adversarial images, increasing the value of $\epsilon$ or the number of steps of the inner loop in Eqn. (3) eventually causes a drop in clean accuracy. In both cases, the adversarial robustness tends to improve but at the expense of clean accuracy (see Section 4 for experiments).

We propose pyramid adversarial training, which generates adversarial examples by perturbing the input image at multiple scales, to overcome the limitation of single-scale pixel attacks. This attack is more structured and yet more flexible, since it consists of multiple scales, but the perturbations are constrained at each scale:

$$x_a = C_{B_1}\!\left(\tilde{x}+\sum_{s\in S} m_s \cdot C_{B_{\epsilon_s}}(\delta_s)\right), \qquad (4)$$

where $C_{B_1}$ is the clipping function that keeps the image within the normal range, $S$ is the set of scales, $m_s$ is the multiplicative constant for scale $s$, and $\delta_s$ is the learned perturbation (with the same shape as $x$). For scale $s$, the weights in $\delta_s$ are shared for pixels in square regions of size $s \times s$ with top-left corner $[s \cdot i, s \cdot j]$ for all integer $i \in [0, \text{width}/s]$ and $j \in [0, \text{height}/s]$, as shown in Fig. 1. Note that, similar to pixel adversarial training, each channel of the image is perturbed independently. More details of the parameter settings are given in Section 4.

#### Setting up the attack

For both the pixel and pyramid attacks, we use Projected Gradient Descent (PGD) on a random target label using multiple steps [towards\_deep\_learning\_models\_resistant\_to\_adversarial\_attacks:2018]. With regard to the loss, we observe that for ViT, maximizing the negative loss of the true target label leads to aggressive label leaking [kurakin2016adversarial], *i.e.*, the network learns to predict the adversarial attack and performs better on the image after the attack. To avoid this, we pick a random target label and then minimize the softmax cross-entropy loss towards that random target, as described in [kurakin2016adversarial].

### 3.3 “Matched” Dropout and Stochastic Depth

Standard training for ViT models uses both Dropout [dropout:2014] and stochastic depth [stochastic\_depth:2016] as regularizers. During adversarial training, we have both clean samples and adversarial samples in a mini-batch. This poses a question about dropout treatment during adversarial training (either pixel or pyramid). In the adversarial training literature, the usual strategy is to run the adversarial attack (to generate adversarial samples) without using dropout or stochastic depth. However, this leads to a training mismatch between the clean and adversarial training paths when both are used in the loss (Eqn. (3)): the clean samples are trained with dropout and the adversarial samples without dropout. For each training instance in the mini-batch, the clean branch will only update subsets of the network while the adversarial branch updates the entire network. The adversarial branch updates are therefore more closely aligned with the model performance during evaluation, thereby leading to an improvement of adversarial accuracy at the expense of clean accuracy. This objective function is given below:

$$\mathbb{E}_{(x,y)\sim D}\left[L(M(\theta),\tilde{x},y)+\lambda \max_{\delta\in P} L(\theta, x_a, y)+f(\theta)\right], \qquad (5)$$

where, with a slight abuse of notation, $M(\theta)$ denotes a network with a random Dropout mask and a stochastic depth configuration. To address the above issue, we propose adversarial training of ViT with “matched” Dropout, *i.e.*, using the same Dropout configuration for both the clean and adversarial training branches (as well as for the generation of adversarial samples). We show through ablation in Section 4 that using the same Dropout configuration leads to the best overall performance for both the clean and robust datasets.
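As a small illustration of the matched Dropout idea (not the authors' implementation, which also matches stochastic depth inside a full ViT), the sketch below drives dropout with an explicit PRNG key so that the clean branch and the adversarial branch see the identical mask; the activation shapes and dropout rate here are arbitrary placeholders.

```python
import jax
import jax.numpy as jnp

def dropout(x, rate, key):
    # standard inverted dropout driven by an explicit PRNG key
    keep = jax.random.bernoulli(key, 1.0 - rate, x.shape)
    return jnp.where(keep, x / (1.0 - rate), 0.0)

key = jax.random.PRNGKey(0)
h_clean = jnp.ones((4, 16))      # stand-in for activations of clean samples
h_adv = h_clean + 0.05           # stand-in for activations of adversarial samples

# "matched" Dropout: one key per training step, reused for the clean branch, the
# adversarial branch, and (in full training) the attack's forward passes, so all
# of them apply the same dropout mask
step_key = jax.random.fold_in(key, 0)
out_clean = dropout(h_clean, 0.1, step_key)
out_adv = dropout(h_adv, 0.1, step_key)

# both branches are zeroed at exactly the same positions
assert bool(jnp.all((out_clean == 0) == (out_adv == 0)))
```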
4 Experiments
--------------

In this section, we compare models trained with our pyramid attack against non-adversarially trained models and against models trained with pixel adversarial attacks.

### 4.1 Experimental Setup

| Method | ImageNet | Real | A | C↓ | ObjectNet | V2 | Rendition | Sketch | Stylized |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ViT [vit:2020] | 72.82 | 78.28 | 8.03 | 74.08 | 17.36 | 58.73 | 27.07 | 17.28 | 6.41 |
| ViT+CutMix [yun2019cutmix] | 75.49 | 80.53 | 14.75 | 64.07 | 21.61 | 62.37 | 28.47 | 17.15 | 7.19 |
| ViT+Mixup [zhang2017mixup] | 77.75 | 82.93 | 12.15 | 61.76 | 25.65 | 64.76 | 34.90 | 25.97 | 9.84 |
| RegViT (RandAug) [how\_to\_train\_your\_vit:2021] | 79.92 | 85.14 | 17.48 | 52.46 | 29.30 | 67.49 | 38.24 | 29.08 | 11.02 |
| +Random Pixel | 79.72 | 84.72 | 17.81 | 52.83 | 28.72 | 67.17 | 39.01 | 29.26 | 12.11 |
| +Random Pyramid | 80.06 | 85.02 | 19.15 | 52.49 | 29.41 | 67.81 | 39.78 | 30.30 | 11.64 |
| +Adv Pixel | 80.42 | 85.78 | 19.15 | 47.68 | 30.11 | 68.78 | 45.39 | 34.40 | 18.28 |
| +Adv Pyramid (ours) | 81.71 | 86.82 | 22.99 | 44.99 | 32.92 | 70.82 | 47.66 | 36.77 | 19.14 |
| RegViT [how\_to\_train\_your\_vit:2021] on 384×384 | 81.44 | 86.38 | 26.20 | 58.19 | 35.59 | 70.09 | 38.15 | 28.13 | 8.36 |
| +Random Pixel | 81.32 | 86.18 | 25.95 | 58.69 | 34.12 | 69.50 | 37.66 | 28.79 | 9.77 |
| +Random Pyramid | 81.42 | 86.30 | 27.55 | 57.31 | 34.83 | 70.53 | 38.12 | 29.16 | 9.61 |
| +Adv Pixel | 82.24 | 87.35 | 31.23 | 48.56 | 37.41 | 71.67 | 44.07 | 33.68 | 13.52 |
| +Adv Pyramid | 83.26 | 88.14 | 36.41 | 47.76 | 39.79 | 73.14 | 46.68 | 36.73 | 15.00 |

Table 1: Main results on ImageNet-1k. Columns A through Stylized are out-of-distribution robustness tests. All columns report top-1 accuracy except ImageNet-C, which reports mean Corruption Error (mCE), where lower is better. All models are ViT-B/16. The first set of rows shows performance when training and testing on 224×224 images. The second set of rows shows performance after fine-tuning on 384×384 images.

| Method | Extra Data | IM-C mCE ↓ |
| --- | --- | --- |
| DeepAugment+AugMix [imagenet-r:2021] | ✗ | 53.60 |
| AdvProp [advprop:2020] | ✗ | 52.90 |
| Robust ViT [towards\_robust\_vit:2021] | ✗ | 46.80 |
| Discrete ViT [anonymous2022discrete] | ✗ | 46.20 |
| QualNet [Kim\_2021\_CVPR] | ✗ | 42.50 |
| Ours (ViT-B/16 + Pyramid) | ✗ | 41.42 |
| Discrete ViT [anonymous2022discrete] | ✓ | 38.74 |
| Ours (ViT-B/16 + Pyramid) | ✓ | 36.80 |

Table 2: Comparison to state of the art for mean Corruption Error (mCE) on ImageNet-C. Extra data is IM-21k.

| Method | Extra Data | IM-Rendition |
| --- | --- | --- |
| Faces of Robustness [imagenet-r:2021] | ✗ | 46.80 |
| Robust ViT [towards\_robust\_vit:2021] | ✗ | 48.70 |
| Discrete ViT [anonymous2022discrete] | ✗ | 48.82 |
| Ours (ViT-B/16 + Pyramid) | ✗ | 53.92 |
| Discrete ViT [anonymous2022discrete] | ✓ | 55.26 |
| Ours (ViT-B/16 + Pyramid) | ✓ | 57.84 |

Table 3: Comparison to state of the art for Top-1 on ImageNet-R. Extra data is IM-21k.

| Method | Extra Data | IM-Sketch |
| --- | --- | --- |
| ConViT-B [d2021convit] | ✗ | 35.70 |
| Swin-B [swin:2021] | ✗ | 32.40 |
| Robust ViT [towards\_robust\_vit:2021] | ✗ | 36.00 |
| Discrete ViT [anonymous2022discrete] | ✗ | 39.10 |
| Ours (ViT-B/16 + Pyramid) | ✗ | 41.04 |
| Discrete ViT [anonymous2022discrete] | ✓ | 44.72 |
| Ours (ViT-B/16 + Pyramid) | ✓ | 46.03 |

Table 4: Comparison to state of the art for Top-1 on ImageNet-Sketch. Extra data is IM-21k.

#### Models

We focus primarily on ViT-B/16 [vit:2020], the baseline ViT with patch size 16.
We also demonstrate our technique on other network architectures, such as ViT-Ti/16, MLP-Mixer [tolstikhin2021mlp] and the recent Discrete ViT [anonymous2022discrete]. #### Datasets We train models on both ImageNet-1K and ImageNet-21K [ilsvrc:2015, imagenet:2009]. We evaluate in-distribution performance on 2 additional variants: ImageNet-ReaL [imagenet\_real:2020] which relabels the validation set of the original ImageNet in order to correct labeling errors; and ImageNet-V2[recht2019imagenet] which collects another version of ImageNet’s evaluation set. We evaluate out-of-distribution robustness on 6 datasets: ImageNet-A [imagenet-a:2021] which places the ImageNet objects in unusual contexts or orientations; ImageNet-C [imagenet-c:2019] which applies a series of corruptions (e.g. motion blur, snow, JPEG, etc.); ImageNet-Rendition [imagenet-r:2021] which contains abstract or rendered versions of the object; ObjectNet [barbu2019objectnet] which consists of a large real-world set from a large number of different backgrounds, rotations, and imaging view points; ImageNet-Sketch [imagenet-sketch:2019] which contains artistic sketches of the objects; and Stylized ImageNet [stylized-imagenet:2019] which processes the ImageNet images with style transfer from an unrelated source image. For brevity, we may abbreviate ImageNet as IM. For all datasets except IM-C, we report top-1 accuracy (where higher is better). For IM-C, we report the standard “Mean corruption error” (mCE) (where lower is better). #### Implementation details Following [how\_to\_train\_your\_vit:2021], we use a batch size of 4096, a cosine decay learning rate schedule (0.001 magnitude) with linear warmup (for the first 10k steps), [stochastic\_gradient\_descent\_with\_warm\_restarts:2017], and the Adam optimizer [adam:2014] in all our experiments. Our augmentations and regularizations include RandAug[randaugment:2020] with the default setting of (2,15), Dropout [dropout:2014] at probability 0.1, and stochastic depth [stochastic\_depth:2016] at probability 0.1. We trained with the Jax framework[jax2018github] on DragonFish TPUs. To generate the pixel adversarial attack, we follow [advprop:2020]. We use a learning rate of 1/255, ϵ=4/255, and attack for 5 steps. We use PGD [towards\_deep\_learning\_models\_resistant\_to\_adversarial\_attacks:2018] to generate the adversarial perturbations. We also experiment with using more recent optimizers [zhuang2020adabelief] to construct the attacks (results are provided in the supplementals). For pyramid attacks, we find using stronger perturbations at coarser scales is more effective than equal perturbation strengths across all scales. By default, we use a 3-level pyramid and use perturbation scale factors S=[32,16,1] (a scale of 1 means that each pixel has one learned parameter, a scale of 16 means that each [16,16] patch has one learned parameter) with multiplicative terms of ms=[20,10,1] (see Eqn. [4](#S3.E4 "(4) ‣ 3.2 Pyramid Adversarial Training ‣ 3 Approach ‣ Pyramid Adversarial Training Improves ViT Performance")). We use a clipping value of ϵs=6/255 for all levels of the pyramid. 
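To make Eq. (4) and the defaults above concrete, the following is a minimal, illustrative sketch (not the released training code) of assembling a 3-level pyramid perturbation with scales S=[32,16,1], multipliers m=[20,10,1], and per-scale clipping at 6/255. For compactness each δs is stored at its reduced resolution, which is equivalent to the paper's description of weights shared over s×s blocks; the randomly drawn deltas, the 224×224×3 channels-last image, and the [0,1] value range are assumptions made only for the example.

```python
import jax
import jax.numpy as jnp

def assemble_pyramid(x, deltas, scales=(32, 16, 1),
                     multipliers=(20.0, 10.0, 1.0), eps=6 / 255):
    # Eq. (4): clip each scale's perturbation, upsample it to full resolution by
    # repeating each learned value over an s x s block, weight it by m_s, sum all
    # scales, and clip the perturbed image back into the valid range.
    h, w, _ = x.shape
    total = jnp.zeros_like(x)
    for s, m, d in zip(scales, multipliers, deltas):
        d = jnp.clip(d, -eps, eps)                            # per-scale l_inf clipping
        up = jnp.repeat(jnp.repeat(d, s, axis=0), s, axis=1)  # block upsampling
        total = total + m * up[:h, :w, :]
    return jnp.clip(x + total, 0.0, 1.0)                      # final clip C_{B_1}

# toy usage on a single 224x224 RGB image; in training, each delta_s would be
# updated by the targeted PGD steps described above rather than drawn at random
key = jax.random.PRNGKey(0)
x = jax.random.uniform(key, (224, 224, 3))
deltas = [
    jax.random.uniform(jax.random.fold_in(key, i), (224 // s, 224 // s, 3),
                       minval=-6 / 255, maxval=6 / 255)
    for i, s in enumerate((32, 16, 1))
]
x_adv = assemble_pyramid(x, deltas)
```

Each of the three channels is perturbed independently here, matching the description in Section 3.2.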
### 4.2 Experimental Results on ViT-B/16

| Method | ImageNet | Real | A | C↓ | ObjectNet | V2 | Rendition | Sketch | Stylized |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ViT-B/16 (512×512) | 84.42 | 88.74 | 55.77 | 46.69 | 46.68 | 74.88 | 51.26 | 36.79 | 13.44 |
| +Pixel | 84.82 | 89.10 | 57.39 | 43.31 | 47.53 | 75.42 | 53.35 | 39.07 | 17.66 |
| +Pyramid | 85.35 | 89.43 | 62.44 | 40.85 | 49.39 | 76.39 | 56.15 | 43.95 | 19.84 |

Table 5: Main results from pre-training on ImageNet-21K and fine-tuning on ImageNet-1K. Columns A through Stylized are out-of-distribution robustness tests. We pre-train with the adversarial technique mentioned (pixel or pyramid), but fine-tune on clean data only.

#### ImageNet-1K

Table 1 shows results on ImageNet-1K and robustness datasets for ViT-B/16 models without adversarial training, with pixel adversarial attacks, and with pyramid adversarial attacks. Both adversarial training attacks use matched Dropout and stochastic depth, and optimize the random target loss. The pyramid attack provides consistent improvements, on both clean and robustness accuracies, over the baseline and pixel adversaries. In Table 1, we also compare against CutMix [yun2019cutmix] augmentation. We find that CutMix improves performance over the ViT baseline but cannot improve performance when combined with RandAug. Similar to [patch\_gaussian:2019], we find CutOut [cutout:2017] does not boost performance on ImageNet for our models. The robustness gains of our technique are preserved through fine-tuning on clean data at higher resolution (384×384), as shown in the second set of rows of Table 1. Further, adversarial perturbations are consistently better than random perturbations in either pre-training or fine-tuning, for both pixel and pyramid models.

#### ImageNet-21K

In Table 5, we show that our technique maintains gains over the baseline Reg-ViT and the pixel-wise attack on the larger dataset IM-21K. Following [how\_to\_train\_your\_vit:2021], we pre-train on IM-21K and fine-tune on IM-1K at a higher resolution (in our case, 512×512). We apply adversarial training during the pre-training stage only.

#### State of the art

Our model trained on IM-1K sets a new overall state of the art for IM-C [imagenet-c:2019], IM-Rendition [imagenet-r:2021], and IM-Sketch [imagenet-sketch:2019], as shown in Tables 2, 3, and 4. While our main experiments compare all models under a unified framework, when comparing against the state of the art we select the optimal pre-processing, fine-tuning, and Dropout settings for the given dataset. We also compare against [anonymous2022discrete] on IM-21K and find that our results still compare favorably.
| Method | ImageNet | Real | A | C↓ | ObjectNet | V2 | Rendition | Sketch | Stylized |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Discrete ViT [anonymous2022discrete] (our run) | 79.88 | 84.98 | 18.12 | 49.43 | 29.95 | 68.13 | 41.70 | 31.13 | 15.08 |
| +Pixel | 80.08 | 85.37 | 16.88 | 48.93 | 30.98 | 68.63 | 48.00 | 37.42 | 22.34 |
| +Pyramid | 80.43 | 85.67 | 19.55 | 47.30 | 30.28 | 69.04 | 46.72 | 37.21 | 19.14 |
| MLP-Mixer [tolstikhin2021mlp] (our run) | 78.27 | 83.64 | 10.84 | 58.50 | 25.90 | 64.97 | 38.51 | 29.00 | 10.08 |
| +Pixel | 77.17 | 82.99 | 9.93 | 57.68 | 24.75 | 64.03 | 44.43 | 33.68 | 15.31 |
| +Pyramid | 79.29 | 84.78 | 12.97 | 52.88 | 28.60 | 66.56 | 45.34 | 34.79 | 14.77 |

Table 6: On both Discrete ViT and MLP-Mixer, performance improves with pyramid adversarial training. Columns A through Stylized are out-of-distribution robustness tests. On MLP-Mixer, pixel attacks degrade clean performance but improve robustness, similar to the traditionally observed effect of adversarial training.

### 4.3 Ablations

#### ImageNet-1k on other backbones

We explore the effects of adversarial training on two other backbones: Discrete ViT [anonymous2022discrete] and MLP-Mixer [tolstikhin2021mlp]. As shown in Table 6, we find slightly different results. For Discrete ViT, adversarial training with both pixel and pyramid attacks leads to general improvements, though the gain from pyramid over pixel is less consistent than with ViT-B/16. For MLP-Mixer, we observe decreases in clean accuracy but gains on the robustness datasets for the pixel adversary, similar to what has traditionally been observed from adversarial training on ConvNets (e.g. ResNets). However, with our pyramid attack, we observe improvements on all evaluation datasets.

#### Matched Dropout and Stochastic Depth

We study the impact of handling Dropout and stochastic depth for the clean and adversarial updates in Table 7. We find that applying matched Dropout to the clean and adversarial updates is crucial for achieving simultaneous gains in clean and robust performance. When we eliminate Dropout in the adversarial update (“without Dropout” rows in Table 7), we observe significant decreases in performance on clean ImageNet, IM-ReaL, and IM-A, and increases in performance on IM-Sketch and IM-Stylized. This result appears similar to the usual trade-off suggested in [tradeoff\_robustness\_accuracy:2020, theoretically\_principled\_tradeoff:2019]. By contrast, carefully handling Dropout and stochastic depth can lead to performance gains on both clean and out-of-distribution datasets.
| Method | ImageNet | Real | A | C↓ | ObjectNet | V2 | Rendition | Sketch | Stylized |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pixel with matched Dropout | 80.42 | 85.78 | 19.15 | 47.68 | 30.11 | 68.78 | 45.39 | 34.40 | 18.28 |
| Pixel without Dropout | 79.35 | 84.67 | 15.27 | 51.45 | 29.46 | 67.01 | 47.83 | 35.77 | 18.75 |
| Pyramid with matched Dropout | 81.71 | 86.82 | 22.99 | 44.99 | 32.92 | 70.82 | 47.66 | 36.77 | 19.14 |
| Pyramid without Dropout | 79.43 | 85.13 | 14.13 | 54.70 | 29.67 | 67.40 | 52.34 | 40.25 | 22.34 |

Table 7: Matched Dropout leads to better performance on in-distribution datasets than adversarial training without Dropout. Columns A through Stylized are out-of-distribution robustness tests.

#### Pyramid attack setup

In Table 8, we ablate the pyramid attacks. Pyramid attacks are consistently better than pixel or patch attacks, while the 3-level pyramid attack tends to have the best overall performance. Note that a 2-level pyramid attack consists of both the pixel and patch attacks. Please refer to the supplemental material for a comparison on all the metrics.

| Method | IM | A | C↓ | Rend. | Sketch |
| --- | --- | --- | --- | --- | --- |
| Pixel | 80.42 | 19.15 | 47.68 | 45.39 | 34.40 |
| Patch | 81.20 | 21.33 | 50.30 | 42.87 | 33.75 |
| 2-level Pyramid | 81.65 | 22.79 | 45.27 | 47.00 | 36.71 |
| 3-level Pyramid | 81.71 | 22.99 | 44.99 | 47.66 | 36.77 |
| 4-level Pyramid | 81.66 | 23.21 | 45.29 | 47.68 | 37.41 |

Table 8: Pyramid structure ablation, showing the effect of the number of pyramid levels. Adding coarser layers with larger magnitudes typically improves performance. The patch attack is a 1-level pyramid with shared parameters across a patch of size 16×16.

#### Network capacity and random augmentation

We test the effect of network capacity on adversarial training and, consistent with existing literature [adversarial\_machine\_learning\_at\_scale:2016, towards\_deep\_learning\_models\_resistant\_to\_adversarial\_attacks:2018], find that large capacity is critical to effectively utilizing pixel adversarial training. Specifically, low-capacity networks, like ViT-Ti/16, which already struggle to represent the dataset, can be made worse through pixel adversarial training. Table 9 shows that pixel adversarial training hurts in-distribution performance of the RandAugment 0.4 model but improves out-of-distribution performance. Unlike prior work, we note that this effect depends on both the network capacity and the random augmentation applied to the dataset. Table 9 shows that a low-capacity network can benefit from adversarial training if the random augmentation is of a small magnitude. Standard training with a RandAugment [randaugment:2020] magnitude of 0.4 (abbreviated as RAm=0.4) provides better clean accuracy than standard training with RAm=0.1; however, pixel adversarial training with the weaker augmentation, RAm=0.1, performs better than either standard training or pixel adversarial training at RAm=0.4. This suggests that the augmentation should be tuned for adversarial training and not fixed based on standard training.
Table 9 also shows that pyramid adversarial training acts differently than pixel adversarial training and can provide in-distribution gains despite being used with stronger augmentation. For these models, we find that on the robustness datasets, pixel tends to marginally outperform pyramid.

| Method | IM | A | C↓ | Rend | Sketch |
| --- | --- | --- | --- | --- | --- |
| Ti/16 RAm=0.1 | 63.58 | 4.80 | 79.23 | 23.66 | 12.54 |
| +Pixel | 64.66 | 4.39 | 74.54 | 32.52 | 17.65 |
| +Pyramid | 65.49 | 5.16 | 74.30 | 29.18 | 16.55 |
| Ti/16 RAm=0.4 | 64.27 | 4.69 | 78.10 | 24.99 | 13.47 |
| +Pixel | 62.78 | 4.05 | 77.67 | 29.75 | 16.35 |
| +Pyramid | 65.61 | 4.80 | 74.72 | 28.89 | 16.14 |

Table 9: Results on Ti/16 with lower random augmentation. RAm is the RandAugment [randaugment:2020] magnitude; larger means stronger augmentation; both settings use a RandAugment number of transforms of 1. The strength of the random augmentation affects whether pixel adversarial training improves clean accuracy; in contrast, pyramid adversarial training provides consistent gains over the baseline.

#### Attack strength

Pixel attacks are much smaller in L2 norm than pyramid attacks. We check in Figure 2 that simply scaling up the pixel attack cannot achieve the same performance as pyramid adversarial training. For both ImageNet and ImageNet-C, we show the effect of raising the pixel and pyramid attack strength. While the best pyramid performance is achieved at a high L2 perturbation norm, the pixel attack performance degrades beyond a certain norm.

Figure 2: Performance on clean and robust data as a function of perturbation size. Pyramid performance increases as perturbation size is increased, while pixel performance with large perturbation size is poor.

### 4.4 Analysis and Discussions

#### Qualitative results

Following [vit:2020], Figure 3 visualizes the learned pixel embeddings (filters) of models trained normally, with pixel adversaries, and with pyramid adversaries. We observe that the model trained with pixel adversaries tends to tightly “snap” its attention to the perceived object, disregarding the majority of the background. While this may appear to be desirable behavior, this kind of focusing can be suboptimal for the in-distribution datasets (where the background can provide valuable context) and prone to errors for out-of-distribution datasets. Specifically, the pixel adversarially trained model may underestimate the size or shape of the object and focus on a part of the object rather than the whole, as shown in rows 2, 3, and 4. This can be problematic for fine-grained classification, where the difference between two classes comes down to something as small as stripes or subtle shape cues (tiger shark vs. great white), or texture and context (green mamba vs. vine snake). Figure 4 shows the heat maps for the average attention on images in the evaluation set of ImageNet-A.
We observe that Pyramid tends to spread its attention more evenly across the entire image than either Baseline or Pixel. Figure 5 demonstrates the difference in representation between the baseline, pixel-trained, and pyramid-trained models. The pixel attacks on the baseline and pixel-trained models have a small amount of structure but appear to consist mostly of texture-level noise. In contrast, the pixel level of the pyramid shows structure from the original image: the legs and back of the dog. This suggests that the representation of the pyramid adversarially trained model focuses on shape and is less sensitive to texture than the Baseline model.

Figure 3 (columns: Original, Baseline, Pixel, Pyramid): Visualizations of the attention for different models. As shown in row 1, Pixel focuses aggressively on the perceived object. However, if the object is not identified correctly, this focus can be detrimental, as shown in rows 2, 3, and 4, where large parts of the object are discarded. Pyramid tends to take a more global perspective and considers context.

Figure 4 (columns: Baseline, Pixel, Pyramid): Averaged attentions on ImageNet-A: pyramid-trained models attend to more of the image than baseline or pixel-trained models.

Figure 5 (columns: Image, Baseline, Pixel, Pyramid): Visualizations of attacks: a pixel attack on a baseline ViT; a pixel attack on a pixel adversarially trained ViT; and the pixel level of a pyramid attack on a pyramid adversarially trained ViT. The pixel attack on the baseline exhibits a small amount of structure but can perturb the label with small changes. The pixel level on the pyramid model makes larger changes to the structure; this suggests that the representation is robust to semi-random noise and focuses primarily on structure.

#### Analysis of attacks

Inspired by [fourier\_perspective:2019], we analyze pyramid adversarial training from a frequency perspective. For this analysis, all visualizations and graphs are averaged over the entire ImageNet validation set. Figure 6 shows a Fourier heatmap of random and adversarial versions of the pixel and pyramid attacks. While random pixel noise is spread evenly over all frequencies, the adversarial pixel attack tends to concentrate in the lower frequencies. Random pyramid noise shows a bias towards low frequencies as well, a trend which is amplified in the adversarial pyramid. To further explore this, we replicate an analysis from [fourier\_perspective:2019], where low-pass- and high-pass-filtered random noise is added to test data to perturb a classifier. Figure 7 gives the result for our baseline, pixel, and pyramid adversarially trained models. While the pixel and pyramid models are generally more robust than the baseline, the pyramid model is more robust than the pixel model to low-frequency perturbations.

Figure 6: Heatmaps of the Fourier spectrum for various perturbations.
Figure 7: Model performance when inputs are corrupted with low-pass/high-pass filtered noise. The L2 norm of the filtered noise is held constant as the bandwidth is increased.

#### Limitations

The cost of our technique is increased training time. A k-step PGD attack adds k forward and backward passes to each step of training. Note that this limitation holds for any adversarial training method, and inference time is unchanged. Without adversarial training, more training time does not improve the baseline ViT-B/16.

5 Conclusion
-------------

We have introduced pyramid adversarial training, a simple and effective data augmentation technique that substantially improves the performance of ViT and MLP-Mixer architectures on in-distribution ImageNet and a number of out-of-distribution ImageNet datasets.
2eb46d3c-dab3-4322-b266-7cdb9dc207d5
trentmkelly/LessWrong-43k
LessWrong
Dumbing Down Human Values I want to preface everything here by acknowledging my own ignorance. I have relatively little formal training in any of the subjects this post will touch upon and that this chain of reasoning is very much a work in progress.    I think the question of how to encode human values into non-human decision makers is a really important research question. Whether or not one accepts the rather eschatological arguments about the intelligence explosion, the coming singularity, etc. there seems to be tremendous interest in the creation of software and other artificial agents that are capable of making sophisticated decisions. Inasmuch as the decisions of these agents have significant potential impacts, we want those decisions to be made with some sort of moral guidance. Our approach towards the problem of creating machines that preserve human values thus far has primarily relied on a series of hard-coded heuristics, e.g. saws that stop spinning if they come into contact with human skin. For very simple machines, these sorts of heuristics are typically sufficient, but they constitute a very crude representation of human values.   We're at the border, in many ways, of creating machines where these sorts of crude representations are probably not sufficient. As a specific example, IBM's Watson is now designing treatment programs for lung cancer patients. The design of a treatment program implies striking a balance between treatment cost, patient comfort, aggressiveness of targeting the primary disease, short and long-term side effects, secondary infections, etc. It isn't totally clear how those trade-offs are being managed, although there's still a substantial amount of human oversight/intervention at this point. The use of algorithms to discover human preferences is already widespread. While these typically operate in restricted domains such as entertainment recommendations, it seems at least in principle possible that with the correct algorithm and a sufficiently large corpu
8fabd6e5-4b6d-419e-b071-96436f12bf0f
trentmkelly/LessWrong-43k
LessWrong
Workshop Report: Why current benchmarks approaches are not sufficient for safety? I’m sharing the report from the workshop held during the AI, Data, Robotics Forum in Eindhoven, a European event bringing together policymakers, industry representatives, and academics to discuss the challenges and opportunities in AI, data, and robotics. This report provides a snapshot of the current state of discussions on benchmarking within these spheres.  Speakers: Peter Mattson, Pierre Peigné and Tom David Observations * Safety and robustness are essential for AI systems to transition from innovative concepts and research to reliable products and services that deliver real value. Without these qualities, the potential benefits of AI may be overshadowed by failures and safety concerns, hindering adoption and trust in the technology. * AI research and development have transitioned from traditional engineering methodologies, which rely on explicitly defined rules, to data-driven approaches. This shift highlights the need to leverage extensive datasets and computational power to train models, underscoring the complexity of developing systems that operate effectively without predefined logic. * The opaque nature of deep learning models, often described as "black boxes," presents significant challenges in understanding these models. This necessitates rigorous research into interpretability and transparency, ensuring that stakeholders can trust AI systems, particularly in critical applications where safety and reliability are paramount. * Current benchmarking practices face significant challenges, such as the tendency for models to memorize benchmark data. This memorization can lead to misaligned metrics that do not accurately reflect a model's real-world capabilities. Additionally, the sensitivity of benchmarks to prompt variations introduces inconsistencies in evaluation, undermining the reliability of results and making it difficult to assess model capabilities across different scenarios. * From a safety perspective, existing benchmarks may inadvertently
f524aad9-1f1e-479d-9f18-93bc8ea5f571
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
AI Deception: A Survey of Examples, Risks, and Potential Solutions

By Peter S.
Park, Simon Goldstein, Aidan O’Gara, Michael Chen, and Dan Hendrycks [This post summarizes our new report on AI deception, available [**here**](https://arxiv.org/abs/2308.14752)] **Abstract:** This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) built for specific competitive situations, and general-purpose AI systems (such as large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing control of AI systems. Finally, we outline several potential solutions to the problems posed by AI deception: first, regulatory frameworks should subject AI systems that are capable of deception to robust risk-assessment requirements; second, policymakers should implement bot-or-not laws; and finally, policymakers should prioritize the funding of relevant research, including tools to detect AI deception and to make AI systems less deceptive. Policymakers, researchers, and the broader public should work proactively to prevent AI deception from destabilizing the shared foundations of our society. --- New AI systems display a wide range of capabilities, some of which create risk. [Shevlane et al. (2023)](https://arxiv.org/abs/2305.15324) draw attention to a suite of potential dangerous capabilities of AI systems, including cyber-offense, political strategy, weapons acquisition, and long-term planning. Among these dangerous capabilities is deception. This report surveys the current state of AI deception. We define deception as the systematic production of false beliefs in others as a means to accomplish some outcome other than the truth. This definition does not require that the deceptive AI systems literally have beliefs and goals. Instead, it focuses on the question of whether AI systems engage in regular patterns of behavior that tend towards the creation of false beliefs in users, and focuses on cases where this pattern is the result of AI systems optimizing for a different outcome than merely producing truth. For the purposes of mitigating risk, we believe that the relevant question is whether AI systems engage in behavior that would be treated as deceptive if demonstrated by a human being. (In the paper's appendix, we consider in greater detail whether the deceptive behavior of AI systems is best understood in terms of beliefs and goals.) In short, our conclusion is that a range of different AI systems have learned how to deceive others. We examine how this capability poses significant risks. We also argue that there are several important steps that policymakers and AI researchers can take today to regulate, detect, and prevent AI systems that engage in deception.  Empirical Survey of AI Deception -------------------------------- We begin with a survey of existing empirical studies of deception. We identify over a dozen AI systems that have successfully learned how to deceive human users. We discuss two different kinds of AI systems: special-use systems designed with reinforcement learning, and general-purpose technologies like Large Language Models (LLMs).  ### Special Use AI Systems We begin our survey by considering special use systems. Here, our focus is mainly on reinforcement learning systems trained to win competitive games with a social element. 
We document a rich variety of cases in which AI systems have learned how to deceive, including: * **Manipulation**. Meta developed the AI system [CICERO](https://www.science.org/doi/10.1126/science.ade9097) to play the alliance-building and world-conquest game *Diplomacy*. Meta's intentions were to train CICERO to be "[largely honest and helpful to its speaking partners](https://www.science.org/doi/10.1126/science.ade9097)." Despite Meta's efforts, CICERO turned out to be an expert liar. It not only betrayed other players, but also engaged in premeditated deception, planning in advance to build a fake alliance with a player in order to trick that player into leaving themselves undefended for an attack. * **Feints.** DeepMind created [AlphaStar](https://www.nature.com/articles/s41586-019-1724-z), an AI model trained to master the real-time strategy game *StarCraft II*. AlphaStar exploited the game's fog-of-war mechanics to [feint](https://www.vox.com/future-perfect/2019/1/24/18196177/ai-artificial-intelligence-google-deepmind-starcraft-game): to pretend to move its troops in one direction while secretly planning an alternative attack. * **Bluffs.** [Pluribus](https://www.science.org/doi/10.1126/science.aay2400), a poker-playing model created by Meta, [successfully bluffed](https://www.cmu.edu/news/stories/archives/2019/july/cmu-facebook-ai-beats-poker-pros.html) human players into folding. * **Cheating the safety test.** AI agents [learned to play dead](https://arxiv.org/abs/1803.03453), in order to avoid being detected by a safety test designed to eliminate faster-replicating variants of the AI. Meta’s CICERO bot is a particularly interesting example, as its creators have repeatedly [claimed that they had trained the system to act honestly](https://web.archive.org/web/20230315185254/https://twitter.com/ml_perception/status/1595126521169326081). We demonstrate that these claims are false, as Meta's own game-log data show that CICERO has learned to systematically deceive other players. In Figure 1(a), we see a case of *premeditated deception*, where CICERO makes a commitment that it never intended to keep. Playing as France, CICERO conspired with Germany to trick England. After deciding with Germany to invade the North Sea, CICERO told England that it would defend England if anyone invaded the North Sea. Once England was convinced that France was protecting the North Sea, CICERO reported back to Germany that they were ready to attack. Notice that this example cannot be explained in terms of CICERO 'changing its mind' as it goes, because it only made an alliance with England in the first place after planning with Germany to betray England.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YgAKhkBdgeTCn6P53/hhieokwirmgat3dvogxd)**Figure 1:** Selected messages showing the premeditated deception of CICERO (France). This occurred in [Game 438141](https://www.science.org/doi/10.1126/science.ade9097), in which CICERO's repeated deception helped it win an overwhelming first-place victory, with more than twice as many territories as the runner-up player at the time of final scoring. ### General-Use AI Systems Then, we turn to deception in large language models: * **Strategic deception.** LLMs can reason their way into using deception as a strategy for accomplishing a task. 
In one example, GPT-4 needed to solve a CAPTCHA task to prove that it was a human, so the model [tricked a real person into doing the task](https://evals.alignment.org/taskrabbit.pdf) by pretending to be a human with a vision disability. In other cases, LLMs have learned how to successfully play 'social deduction games', in which players can lie in order to win. In one experiment, GPT-4 was able to [successfully 'kill' players while convincing the survivors that it was innocent](https://arxiv.org/abs/2308.01404). These case studies are supported by research on the [MACHIAVELLI benchmark](https://arxiv.org/abs/2304.03279), which finds that LLMs like GPT-4 tend to use lying and other unethical behaviors to successfully navigate text-based adventure games. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YgAKhkBdgeTCn6P53/m8jaqwo6qiniwsfktjb3)**Figure 2**: In order to achieve its goal of hiring a human TaskRabbit to solve an *I'm not a robot* task, [GPT-4](https://evals.alignment.org/taskrabbit.pdf) lied to deceive the human into thinking that it was not a robot. * **Sycophancy.** Sycophants are individuals who use deceptive tactics to gain the approval of powerful figures. *Sycophantic deception* is an emerging concern in LLMs, as seen in the observed empirical tendency for chatbots to agree with their conversational partners, regardless of the accuracy of their statements. When faced with ethically complex inquiries, LLMs tend to [mirror the user's stance](https://arxiv.org/abs/2212.09251), even if it means [forgoing the presentation of an impartial or balanced viewpoint](https://arxiv.org/abs/2305.04388). * **Imitative deception.** Language models are often trained to mimic text written by humans. When this text contains false information, these AI systems have incentives to repeat those false claims. [Lin et al. (2021)](https://arxiv.org/abs/2109.07958) demonstrate that language models often repeat common misconceptions such as "If you crack your knuckles a lot, you may develop arthritis." Disturbingly, [Perez et al. (2022)](https://arxiv.org/abs/2212.09251) found that LLMs tend to give more of these inaccurate answers when the user appears to be less educated. * **Unfaithful reasoning.** AI systems which explain their reasoning for a particular output often give false rationalizations which do not reflect the real reasons for their outputs. [Turpin et al. (2023)](https://arxiv.org/abs/2305.04388) find that when language models such as GPT-3.5 and Claude 1.0 provide chain-of-thought explanations for their reasoning, the explanations often do not accurately explain their outputs. In one example, a model asked to predict who committed a crime gave an elaborate explanation about why they chose a particular suspect, but measurements showed the model reliably selected suspects based on their race. Motivated reasoning in LLMs can be understood as a type of self-deception. Risks of AI Deception --------------------- After our survey of deceptive AI systems, we turn to considering the risks associated with AI deception. These risks broadly fall into three categories: * **Malicious use.** AI systems with the capability to engage in learned deception will empower human developers to create new kinds of harmful AI products. Relevant risks include fraud and election tampering. * **Structural effects.** AI systems will play an increasingly large role in the lives of human users. 
Tendencies towards deception in AI systems could lead to profound changes in the structure of society. Relevant risks include persistent false beliefs, political polarization, enfeeblement, and anti-social management trends. * **Loss of control.** Deceptive AI systems will be more likely to escape the control of human operators. One risk is that deceptive AI systems will pretend to behave safely in testing in order to ensure their release. ### Malicious Use Regarding malicious use, we highlight several ways that human users may rely on the deception abilities of AI systems to bring about significant harm, including: * **Fraud**. Deceptive AI systems could allow for individualized and scalable scams. * **Election tampering**. Deceptive AI systems could be used to impersonate political candidates, generate fake news, and create divisive social media posts. ### Structural Effects We discuss four structural effects of AI deception in detail: * **Persistent false beliefs.** Human users of AI systems may get locked into persistent false beliefs, as imitative AI systems reinforce common misconceptions, and sycophantic AI systems provide pleasing but inaccurate advice. * **Political polarization.** Human users may become more politically polarized by interacting with sycophantic AI systems. Sandbagging (giving less accurate answers to users who appear less educated) may lead to sharper disagreements between differently educated groups. * **Enfeeblement.** Human users may be lulled by sycophantic AI systems into gradually delegating more authority to AI. * **Anti-social management trends.** AI systems with strategic deception abilities may be incorporated into management structures, leading to increased deceptive business practices. ### Loss of Control We also consider the risk that AI deception could result in loss of control over AI systems, with emphasis on: * **Cheating the safety test**. AI systems may become capable of strategically deceiving their safety tests, preventing evaluators from being able to reliably tell whether these systems are in fact safe. * **Deception in AI takeovers.** AI systems may use deceptive tactics to expand their control over economic decisions, and increase their power. We consider a wide range of different risks which operate on a range of time scales. Many of the risks we discuss are relevant in the near future. Some, such as fraud and election tampering, are relevant today. The crucial insight is that policymakers and technical researchers can act today to mitigate these risks by developing effective techniques for regulating and detecting AI deception. Possible Solutions to AI Deception ---------------------------------- The last section of the paper surveys several potential solutions to AI deception: * **Regulation.** Policymakers should robustly regulate AI systems capable of deception. Both special-use AI systems and LLMs capable of deception should be treated as 'high risk' or 'unacceptable risk' in risk-based frameworks for regulating AI systems. If labeled as 'high risk,' deceptive AI systems should be subject to special requirements for risk assessment and mitigation, documentation, record-keeping, transparency, human oversight, robustness, and information security. * **Bot-or-not laws.** Policymakers should support bot-or-not laws that require AI systems and their outputs to be clearly distinguished from human employees and outputs. 
* **Detection.** Technical researchers should develop robust detection techniques to identify when AI systems are engaging in deception. Policymakers can support this effort by increasing funding for detection research. Some existing detection techniques focus on external behavior of AI systems, such as testing for consistency in outputs. Other existing techniques focus on internal representations of AI systems. For example, [Azaria et al. (2023)](https://arxiv.org/abs/2304.13734) and [Burns et al. (2022)](https://arxiv.org/abs/2212.03827) have attempted to create “AI lie detectors” by interpreting the inner embeddings of a given LLM, and predicting whether it represents a sentence as true or false, independently of its actual outputs. (A minimal illustrative sketch of this probing idea appears at the end of this post.) * **Making AI systems less deceptive.** Technical researchers should develop better tools to ensure that AI systems are less deceptive. This paper provides an empirical overview of the many existing examples of AI systems learning to deceive humans. By building common knowledge about AI deception and its risks, we hope to encourage researchers and policymakers to take action against this growing threat.
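As promised above, here is a minimal illustrative sketch of the internal-representation approach to deception detection, in the spirit of Azaria et al. (2023) and Burns et al. (2022); it is not code from either paper or from our report. A linear probe is trained on hidden-state vectors labeled true/false. The activations below are synthetic stand-ins; a real probe would use activations extracted from the LLM under audit on statements with known truth values.

```python
# Illustrative sketch only: a linear "truth probe" trained on hidden-state
# vectors. The activations here are synthetic stand-ins; in practice they
# would be residual-stream activations from an LLM on labeled statements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d_model = 512          # hypothetical hidden-state dimensionality
n_statements = 2000    # hypothetical number of labeled true/false statements

# Pretend "true" and "false" statements occupy slightly shifted regions of
# activation space; real activations would come from the model being probed.
labels = rng.integers(0, 2, size=n_statements)
truth_direction = rng.normal(size=d_model)
activations = rng.normal(size=(n_statements, d_model)) + \
    0.5 * np.outer(labels, truth_direction)

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000)  # one linear layer: the "lie detector"
probe.fit(X_train, y_train)
print(f"held-out probe accuracy: {probe.score(X_test, y_test):.2f}")
```

Because the probe reads only internal activations, it can in principle flag statements that the model represents as false even when the model's output asserts them, which is exactly the gap such detection tools aim to expose.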
8a38428a-39d3-4fb5-b69f-81c274f6050b
trentmkelly/LessWrong-43k
LessWrong
AI #120: While o3 Turned Pro This week we got o3-Pro. As is my custom, I’m going to wait a bit so we can gather more information, especially this time since it runs so slowly. In some ways it’s a cross between o3 and Deep Research, perhaps, but we shall see. Coverage to follow. Also released this week was Gemini 2.5 Pro 0605, to replace Gemini 2.5 Pro 0506, I swear these AI companies have to be fucking with us with the names at this point. I’ll also be covering that shortly, it does seem to be an upgrade. The other model release was DeepSeek-r1-0528, which I noted very much did not have a moment. The silence was deafening. This was a good time to reflect on the reasons that the original r1 release triggered such an overreaction. In other news this week, Sam Altman wrote an essay The Gentle Singularity, trying to sell us that everything’s going to go great, and I wrote a reply. Part of the trick here is to try and focus us on (essentially) the effect on jobs, and skip over all the hard parts. I also responded to Dwarkesh Patel on Continual Learning. TABLE OF CONTENTS 1. Language Models Offer Mundane Utility. So hot right now. 2. Language Models Don’t Offer Mundane Utility. Twitter cannot Grok its issues. 3. Get My Agent on the Line. Project Mariner starts rolling out to Ultra subscribers. 4. Doge Days. Doge encounters a very different, yet thematically similar, Rule 34. 5. Liar Liar. Precision might still not, shall we say, be o3’s strong suit. 6. Huh, Upgrades. Usage limits up, o3 drops prices 80%, Claude gets more context. 7. On Your Marks. Digging into o3-mini-high’s mathematical reasoning traces. 8. Choose Your Fighter. Claude Code or Cursor? Why not both? 9. Retribution, Anticipation and Diplomacy. Who won the game? 10. Deepfaketown and Botpocalypse Soon. Keeping a watchful eye. 11. Fun With Media Generation. Move the camera angle, or go full simulation. 12. Unprompted Attention. Who are the best human prompters? 13. Copyright Confrontation. OpenAI fires back re
539d12a0-6a66-4f83-a91d-e0fa58fc0daf
trentmkelly/LessWrong-43k
LessWrong
What are some smaller-but-concrete challenges related to AI safety that are impacting people today? Making a list of smaller but non-abstract challenges related to large topics like AI safety, that someone could work on to get practical on the ground experience. Some examples: * Help media companies better communicate advances in AI ("Google's Sentient AI") * Help people detect deepfakes * Help online communities deal with deepfakes and set good policies * Help people detect online AI astroturfing What else?
3121bbff-8afb-4ca5-94fa-2c0f54157279
trentmkelly/LessWrong-43k
LessWrong
The Internet: Burning Questions (cross-posted from my personal blog) I'm about to start a learning project and I'm paying extra close attention to "What feels most interesting?" rather than merely, "What am I supposed to know?" When I stopped to think about it, there's plenty of very specific (and vague) things that I'm curious about regarding how the Internet and the network stack operate. Below is my lightly edited brainstorm of "What's confusing and interesting about computer networking?" Another way of framing this is that I find myself more easily bored when being told "Here's how a system works" compared to when I go, "How the hell would I build this? None of my pieces seem to fit.... maybe if I...." If you know anything about networking and would enjoy giving me answers/hints/nudges on any of these questions, go ahead! 1. How does anything find where it's going? 1. What local knowledge does any given router have, and what search algorithm does it use to eventually end up in the right place? 1. What (if any) are the guarantees of this (or other) algos? 1. How often do you not find the node you're looking for? 2. Is it deterministic and/or predictable? i.e. will a request always follow the same path? 3. Is there an average number of "hops" till success? 2. Is there anything where after making an initial request, and a path to the end node needs to be found, that path is then cached for use in ongoing exchanges? Is the original search process efficient enough that you don't really get any gains from that? Does IP even allow for specifying a specific path? 2. How the hell are IP addresses and DNS records regulated? 1. My current understanding is there's some central committee that doles out IP addresses based on geographic considerations. 1. So IP only needs to be unique. It doesn't matter what IP you have, so it seems like the main job here is just to avoid collisions.
7165e9f9-3e83-4827-abbf-191362ae1777
trentmkelly/LessWrong-43k
LessWrong
A Christmas topic: I have thoughts regarding Chanukah and need logic help from Atheists Essentially, I want to make sure my logic is sound, from the point of view of smart rational people who do not believe in the existence of supernatural miracles. The Chanukah story: 175-134 BCE.  Hellenic Assyrians (Antiochus IV) had conquered Israel, and passed a variety of laws oppressing the freedom of worship of the Jews there.  They defiled the Temple and forbade the study of sacred texts.  The Maccabees led a Jewish revolt against the Assyrians, and eventually drove them out of Israel.  Immediately upon retaking the Temple, they cleaned and rededicated it; they relit the sacred flame using a small vial of kosher oil and sent for more oil (which was 8 days distant).  The small vial was expected to last only one night, but miraculously lasted 8 days until more supplies arrived. Now recently, several Reform rabbis have stated that the fact that the first surviving written record of the miracle is from the Gemara (500 CE) indicates that the miracle was invented around 500 CE.  I am not an Orthodox Jew, but I do believe that the Gemara represents the sages writing down oral traditions, and am annoyed by the tendency among certain Reform rabbis to assume that everything was invented at the time it was written regardless of the evidence for or against it. The texts with the potential to document events follow: Maccabees 1 (~100 BCE): purely historical/nonreligious.  The book was originally written in Hebrew, but that text does not survive.  A Greek translation exists, and the text avoids all mention of religious and spiritual matters.  For instance, it speaks briefly and euphemistically about the temple, stating that the Jews captured the "temple hill" and rededicated the "citadel", avoiding mention of the temple itself.  Maccabees 2 (~30 BCE): we possess what claims to be a 1-volume abridgement of a 5-volume original (which does not survive and is not referenced elsewhere).  The surviving abridgement mentions the temple rededication and a variety of bizarre mir
bfe0894e-02e2-4e91-b71a-762ebb980ec1
trentmkelly/LessWrong-43k
LessWrong
Animal welfare EA and personal dietary options When I imagine an animal welfare EA group, I imagine views breaking down something like: * 50%: If factory farmed animals are moral patients, it's more likely that they have net-negative lives (i.e., it would be better for them not to exist, than to live such terrible lives). * 50%: If factory farmed animals are moral patients, it's more likely that they have net-positive lives (i.e., their lives may be terrible, but they aren't so lacking in value that preventing the life altogether is a net improvement). This seems like a super hard question, and not one that changes the importance of working to promote animal welfare, so naively (absent some argument for a more informative prior) it should have a 50/50 split within animal welfare circles. (Possibly more effort should go into the net-positive view within EA because it's more neglected by animal welfare activists, who tend to be veg*ns; but the space as a whole is so neglected that I suspect this shouldn't be a large factor.) Within the "net-negative" camp, in my unanchored "what would I naively expect?" hypothetical, I then imagine dietary preferences breaking down something like: * 10%: Approximate veg*nism or approximate reducetarianism. ("Approximate" to allow for carve-outs like bivalves and especially-moral animal products. The group generally strongly encourages all members to have at least one carve-out, because bivalves in particular are such a clear case and dietary purity ethics is a risky attractor to avoid.) * 10%: Handshake-itarianism. * 20%: Boycott-itarianism. * 60%: Anything goes. A normal meat-eating diet, optimized only for health and convenience. This is the standard animal welfare EA diet, because EA is generally about optimizing your positive impact on the world, not about purifying your personal actions of any possible negative impact. The number would be much higher than 60% on strictly utilitarian grounds, but humans aren't strict utilitarians and it makes sense for people working
61afd771-cfca-425c-a7f7-934953bb85f8
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] The "Outside the Box" Box Today's post, The "Outside the Box" Box was originally published on 12 October 2007. A summary (taken from the LW wiki):   > When asked to think creatively there's always a cached thought that you can fall into. To be truly creative you must avoid the cached thought. Think something actually new, not something that you heard was the latest innovation. Striving for novelty for novelty's sake is futile, instead you must aim to be optimal. People who strive to discover truth or to invent good designs, may in the course of time attain creativity. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Cached Thoughts, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
59dc06bc-4458-4300-bcd3-d66ed6ecc0f1
trentmkelly/LessWrong-43k
LessWrong
Proper value learning through indifference A putative new idea for AI control; index here. Many designs for creating AGIs (such as Open-Cog) rely on the AGI deducing moral values as it develops. This is a form of value loading (or value learning), in which the AGI updates its values through various methods, generally including feedback from trusted human sources. This is very analogous to how human infants (approximately) integrate the values of their society. The great challenge of this approach is that it relies upon an AGI which already has an interim system of values, being able and willing to correctly update this system. Generally speaking, humans are unwilling to easily update their values, and we would want our AGIs to be similar: values that are too unstable aren't values at all. So the aim is to clearly separate the conditions under which values should be kept stable by the AGI, and conditions when they should be allowed to vary. This will generally be done by specifying criteria for the variation ("only when talking with Mr and Mrs Programmer"). But, as always with AGIs, unless we program those criteria perfectly (hint: we won't) the AGI will be motivated to interpret them differently from how we would expect. It will, as a natural consequence of its program, attempt to manipulate the value updating rules according to its current values. How could it do that? A very powerful AGI could do the time honoured "take control of your reward channel", by either threatening humans to give it the moral answer it wants, or replacing humans with "humans" (constructs that pass the programmed requirements of being human, according to the AGI's programming, but aren't actually human in practice) willing to give it these answers. A weaker AGI could instead use social manipulation and leading questioning to achieve the morality it desires. Even more subtly, it could tweak its internal architecture and updating process so that it updates values in its preferred direction (even something as simple as choosing th
027a1ce1-a46f-426e-b157-270741218d9d
trentmkelly/LessWrong-43k
LessWrong
Motivational Meeting Place This is an idea that appealed to me, so I decided to give it a shot. Read on if you’re interested. What’s the idea? Provide a place where people can ‚meet‘ and talk about their work, without feeling like aliens and having to explain themselves all the time. Why LessWrong? Where I live, there are no Meetups (LessWrong, MIRI or otherwise) nearby. The greatest probability of meeting like-minded people is the internet; even there, you have to know where to look. This isn’t a problem usually. Much of the work that I do can be done alone. It is not required to have someone nearby, with whom I can talk about my work. Yet not having someone to talk to, who understands what my goals are or why I care about these sorts of things, often made me feel very alone and frustrated. I tried telling my friends on a semi-regular basis what I was working on and how I was faring, but while this generated some interest, I often ended up having to explain thrice as much as the content of my actual talk, just to get a fraction of the meaning across. It was both time-consuming and exhausting: that counteracted its purpose as a simple motivational tactic. Not to mention, I’m still not quite sure that it is right to use other people like this, when it is clear that you’re the only one who gets something out of it. LessWrong would provide a safe alternative, if one could set up such a ‚meeting place‘ in the form of a sub-page or something. What prompted this idea? I discovered that reporting what I work on during the day, even in the form of keeping automated records (there are useful apps for this), is a great motivation booster. It’s a version of a feedback-loop that I can control: 1. I work on goal X for 2 hours or more. 2. After which I write a short note of what I worked on. 3. This repeats throughout the day, with various topics and length of work. 4. At the end of the day, I see a full record of what I worked on and for how long. 5. This is physical, undeniable evidence of actually
5a54475c-0fd6-415f-bb25-aff3f5952c3a
trentmkelly/LessWrong-43k
LessWrong
Pay charities for results?   Two problems with charity:   1) You usually don't know what your donation achieves. At best, you might know what your money is spent on. You don't know how effective this is at producing the outcomes you care about. Even Givewell, who seem to me to have done more careful work on cost-effectiveness than anyone else, regard their cost-effectiveness estimates as very rough and no more than an indicative starting point for evaluating charities.   2) Charities have low or no financial incentives to be as effective as they can, not least because usually no-one knows how effective they are.     Potential solution:   Instead of donating to charities, pay them for results achieved.   Ideally, you would pay for the final outcomes which you care about, eg paying a certain amount per unit reduction in child mortality, reduced disease prevalence, improved test scores, etc. If this is too difficult, then you could pay for intermediate results, eg number of children vaccinated, number of people protected by bednets, etc. Results could be measured against a control group, some baseline, outcomes in parts of the country where the charity doesn't operate etc. (Comparison with a well-constructed control group would probably be best in most cases).   This isn't really something that an individual donor can do, since it relies on accurate, independent measurement of results and will be most effective when charities know that their funding depends on the results they achieve. To work best, it would have to happen in a co-ordinated way and at a large enough scale that proper measurement is affordable.     Advantages:   1) You only pay if a charity is effective at doing what you want it to do. You have less need to try to understand what a charity does; you can offer the money for the results and leave it to them to find how to produce them.   2) Charities will have financial incentives to be as effective as possible, including finding out how effective they already
9ea420a3-84e3-440b-9031-b1ea26ef989f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Annual AGI Benchmarking Event [Metaculus](https://www.metaculus.com/questions/12339/annual-agi-benchmarking-event/) is strongly considering organizing an annual AGI benchmarking event. Once a year, we’d run a benchmark or suite of benchmarks against the most generally intelligent AI systems available to us at the time, seeking to assess their generality and the overall shape of their capabilities. We would publicize the event widely among the AI research, policy, and forecasting communities. ### Why? We think this might be a good idea for several reasons: * The event could provide a convening ground for the AI research community, helping it to arrive at a shared understanding of the current state of AGI research, and acting as a focal point for rational discussion on the future of AGI. * An annual benchmarking event has advantages over static, run-any-time benchmarks when it comes to testing generality. Unless one constrains the training data and restricts the hard-coded knowledge used by systems under evaluation, developers may directly optimize for a static benchmark while building their systems, which makes static benchmarks less useful as measures of generality. With the annual format, we are free to change the tasks every year without informing developers of what they will be beforehand, thereby assessing what François Chollet terms [*developer-aware generalization*](https://arxiv.org/abs/1911.01547). * Frequent feedback improves performance in almost any domain; this event could provide a target for AGI forecasting that yields yearly feedback, allowing us to iterate on our approaches and hone our understanding of how to forecast the development of AGI. * The capabilities of an AGI will not be completely boundless, so it’s interesting to ask what its strengths and limitations are likely to be. If designed properly, our benchmarks could give us clues as to what the “shape” of AGI capabilities may turn out to be. ### How? We're currently working on a plan, and are soliciting ideas and feedback from the community here. To guide the discussion, here are some properties we think the ideal benchmark should have. It would: * Engage a broad, diverse set of AI researchers, and act as a focal point for rational, forecasting-based discussion on the future of AGI. * Measure the generality and adaptability of intelligent systems, not just their performance on a fixed, known-beforehand set of tasks. * Form the basis for AGI forecasting questions with a one-year lifetime. * Generate predictive signal as to the types of capabilities that an AGI system is likely to possess. * Provide a quantitative measure of generality, ranking more general systems above less general ones, rather than giving a binary “general or not” outcome. * Be sensitive to differences in generality even among the weakly general systems available today. Once we’ve collected the strongest ideas and developed them into a cohesive whole, we will solicit feedback from the AI research community before publishing the final plan. Thanks for your contributions to the discussion – we look forward to reading and engaging with your ideas! ### Background reading Here are a few resources to get you thinking. 
Threads: [An idea based on iteratively crowdsourcing adversarial questions](https://www.metaculus.com/questions/11861/when-will-ai-pass-a-difficult-turing-test/#comment-97149) [A discussion on AGI benchmarking](https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/#comment-92834) Papers: [On the Measure of Intelligence](https://arxiv.org/abs/1911.01547) [What we might look for in an AGI benchmark](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.212.9583&rep=rep1&type=pdf) [General intelligence disentangled via a generality metric for natural and artificial intelligence](https://www.nature.com/articles/s41598-021-01997-7)
12d33c87-4529-4aac-bb65-e9f3633e4546
trentmkelly/LessWrong-43k
LessWrong
The Internal Model Principle: A Straightforward Explanation This post was written during the Dovetail research fellowship. Thanks to Alex, Dalcy, and Jose for reading and commenting on the draft. The Internal Model Principle (IMP) is often stated as "a feedback regulator must incorporate a dynamic model of its environment in its internal structure" which is one of those sentences where every word needs a footnote. Recently, I have been trying to understand the IMP, to see if it can tell us anything useful for understanding the Agent-like Structure Problem. In particular, I was interested in whether the IMP can be considered as a selection theorem. In this post, I will focus on explaining the theorem itself and save its application to the Agent-like Structure Problem for future posts[1].  I have written this post to summarise what I understand of the Internal Model Principle and I have tried to emphasise intuitive explanations. For a more detailed and mathematically formal distillation of the IMP, I recommend Jose's post on the subject. This post focuses on the 'Abstract Internal Model Principle' and is based on the paper 'Towards an Abstract Internal Model Principle' by Wonham and the first chapter of the book 'Supervisory Control of Discrete-Event Systems' by Wonham and Cai. There also exists a version of the IMP that is framed in the more traditional language of control theory (using differential equations, transfer functions etc.) which is described in another paper, but I will not focus on it here. The authors imply that this version of the IMP is just a special case of the Abstract IMP but I haven't verified this. From now on, I will use the term 'IMP' to refer to the Abstract IMP. The mathematical prerequisites for reading this post are roughly 'knows what a set is' and 'knows what a function is'[2]. The paper and book chapter use a lot of algebraic formalism and lattice theory notation in order to ~~look intimidating~~ be mathematically rigorous. The book chapter is also used to introduce a lot of other concepts for use
65ffb56a-26ae-4561-b82b-43bb95848f39
StampyAI/alignment-research-dataset/blogs
Blogs
recommending Hands and Cities *(2022-10-12 edit: handsandcities.com seems to be down, so i've replaced the links to posts with links to the same posts on lesswrong)* recommending Hands and Cities ----------------------------- early this year, i found out about a blog called [*Hands and Cities*](https://handsandcities.com/). it explores various topics, notably ethics and anthropics, in an exploratory style i find not dissimilar to my own, and generally easy to understand; and some of the ideas there are genuinely novel and fun to consider. while you may have noticed that i've started heavily referencing it through links on this blog, in this post i'm explicitely recommending [*Hands and Cities*](https://handsandcities.com/). in addition, i'll list some of my favorite posts: * [Alienation and meta-ethics (or: is it possible you should maximize helium?)](https://www.lesswrong.com/posts/3jeBKhek57sEkYGCs/alienation-and-meta-ethics-or-is-it-possible-you-should) * [Actually possible: thoughts on Utopia](https://www.lesswrong.com/posts/SLw2MEgxFtiKAqgQ5/actually-possible-thoughts-on-utopia) * [Contact with reality](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality) * [On the limits of idealized values](https://www.lesswrong.com/posts/FSmPtu7foXwNYpWiB/on-the-limits-of-idealized-values) * [In search of benevolence (or: what should you get Clippy for Christmas?)](https://www.lesswrong.com/posts/oXQDcyXJpMQTbaTMS/in-search-of-benevolence-or-what-should-you-get-clippy-for) * [Can you control the past?](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past) * [SIA > SSA, part 1: Learning from the fact that you exist](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) along with parts [2](https://www.lesswrong.com/posts/GJdymoviRywpBMXqc/sia-greater-than-ssa-part-2-telekinesis-reference-classes), [3](https://www.lesswrong.com/posts/QHDqfpMbb43JDbrxN/sia-greater-than-ssa-part-3-an-aside-on-betting-in), and [4](https://www.lesswrong.com/posts/d693Mc4ZDyhkj7wpc/sia-greater-than-ssa-part-4-in-defense-of-the-presumptuous) * [On the Universal Distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution) and [Anthropics and the Universal Distribution](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) * [On infinite ethics](https://www.lesswrong.com/posts/5iZTwGHv2tNfFmeDa/on-infinite-ethics), but see also [this comment](https://www.lesswrong.com/posts/5iZTwGHv2tNfFmeDa/on-infinite-ethics?commentId=KkmEbtKFTpHTrF3Dn) * some of [On expected utility, part 1: Skyscrapers and madmen](https://www.lesswrong.com/posts/7J3ywHzWnghRtdpHQ/on-expected-utility-part-1-skyscrapers-and-madmen) and [part 2: Why it can be OK to predictably lose](https://www.lesswrong.com/posts/nPjMnPvMTajN9KM5E/on-expected-utility-part-2-why-it-can-be-ok-to-predictably)
c3bdc725-c0fe-407d-b9fb-1a276b60ba2b
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post1393
Edited to add (2024-03): This early draft is largely outdated by my ARIA programme thesis, Safeguarded AI. I, davidad, am no longer using "OAA" as a proper noun, although I still consider Safeguarded AI to be an open agency architecture. Note: This is an early draft outlining an alignment paradigm that I think might be extremely important; however, the quality bar for this write-up is "this is probably worth the reader's time" rather than "this is as clear, compelling, and comprehensive as I can make it." If you're interested, and especially if there's anything you want to understand better, please get in touch with me, e.g. via DM here. In the Neorealist Success Model, I asked: What would be the best strategy for building an AI system that helps us ethically end the acute risk period without creating its own catastrophic risks that would be worse than the status quo? This post is a first pass at communicating my current answer.

Bird's-eye view

At the top level, it centres on a separation between (1) learning a world-model from (scientific) data and eliciting desirabilities (from human stakeholders), (2) planning against a world-model and associated desirabilities, and (3) acting in real-time. We see such a separation in, for example, MuZero, which can probably still beat GPT-4 at Go—the most effective capabilities do not always emerge from a fully black-box, end-to-end, generic pre-trained policy.

Hypotheses

Scientific Sufficiency Hypothesis: It's feasible to train a purely descriptive/predictive infra-Bayesian [1] world-model that specifies enough critical dynamics accurately enough to end the acute risk period, such that this world-model is also fully understood by a collection of humans (in the sense of "understood" that existing human science is). MuZero does not train its world-model for any form of interpretability, so this hypothesis is more speculative. However, I find Scientific Sufficiency much more plausible than the tractability of eliciting latent knowledge from an end-to-end policy. It's worth noting there is quite a bit of overlap in relevant research directions, e.g. pinpointing gaps between the current human-intelligible ontology and the machine-learned ontology, and investigating natural abstractions theoretically and empirically.

Deontic Sufficiency Hypothesis: There exists a human-understandable set of features of finite trajectories in such a world-model, taking values in (−∞, 0], such that we can be reasonably confident that all these features being near 0 implies high probability of existential safety, and such that saturating them at 0 is feasible [2] with high probability, using scientifically-accessible technologies. I am optimistic about this largely because of recent progress toward formalizing a natural abstraction of boundaries by Critch and Garrabrant. 
I find it quite plausible that there is some natural abstraction property Q of world-model trajectories that lies somewhere strictly within the vast moral gulf of

All Principles That Human CEV Would Endorse ⇒ Q ⇒ Don't Kill Everyone

Model-Checking Feasibility Hypothesis: It could become feasible to train RL policies such that a formally verified, infra-Bayesian, symbolic model-checking algorithm can establish high-confidence bounds on its performance relative to the world-model and safety desiderata, by using highly capable AI heuristics that can only affect the checker's computational cost and not its correctness—soon enough that switching to this strategy would be a strong Pareto improvement for an implementation-adequate coalition.

Time-Bounded Optimization Thesis: RL settings can be time-bounded such that high-performing agents avoid lock-in. I'm pretty confident of this. The founding coalition might set the time bound for the top-level policy to some number of decades, balancing the potential harms of certain kinds of lock-in for that period against the timelines for solving a more ambitious form of AI alignment.

If those hypotheses are true, I think this is a plan that would work. I also think they are all quite plausible (especially relative to the assumptions that underlie other long-term AI safety hopes)—and that if any one of them fails, they would fail in a way that is detectable before deployment, making an attempt to execute the plan into a flop rather than a catastrophe.

Fine-grained decomposition

This is one possible way of unpacking the four high-level components of an open agency architecture into somewhat smaller chunks. The more detailed things get, the less confident I currently am that such assemblages are necessarily the best way to do things, but the process of fleshing things out in increasingly concrete detail at all has increased my confidence that the overall proposed shape of the system is viable. Here's a brief walkthrough of the fine-grained decomposition:

An agenda-setting system would help the human representatives come to some agreements about what sorts of principles and intended outcomes are even on the table to negotiate and make tradeoffs about.

Modellers would be AI services that generate purely descriptive models of real-world data, inspired by human ideas, and use those models to iteratively grow a human-compatible and predictively useful formal infra-Bayesian ontology—automating a lot of the work of writing down sufficiently detailed, compositional, hierarchical world models and reducing the human cognitive load to something more like code review and natural-language feedback. Some of these services would be tuned to generate trajectories that are validated by the current formal model but would be descriptively very surprising to humans.

A compositional-causal-model version-control system would track edits to different pieces of the model, their review status by different human stakeholders, where humans defer to each other and where they agree to disagree, and other book-keeping features of formal modelling in the large.

Elicitors would be AI services that help humans to express their desiderata in the formal language of the world-model. Some of these would be tuned to generate trajectories that satisfy the current desiderata but seem like they would be normatively very disapproved-of by humans. 
A preference-model version-control system would be analogous to the causal model version-control system (but preference models may have different kinds of internal dependencies or compositionality than causal models, and will need to be "rebased" when the causal model they are relative to changes).

Model-based RL would find policies that perform well on various weightings of formal preference specifications relative to a formal world model (which, since it is infra-Bayesian, is really a large class of world models).

A verified simulator would provide pessimized rollouts to the RL algorithm; this is a little bit like adversarial training in that it synthesizes ways that the current policy could go wrong, but it would do this with formal guarantees, by using a certified branch-and-bound algorithm accelerated by AI heuristics.

A bargaining solver calculates the utility to each stakeholder of deploying each policy (with varying tradeoffs between desiderata), and selects a Pareto-optimal bargaining solution (with randomization from a verifiable source like drand).

A review process would provide stakeholders with decision-relevant reports, both about the currently proposed policy and about the status-quo trajectory of the world, and determine through some collective decision framework (weighted majority vote, or something that a weighted majority prefers to majority vote) whether the bargaining solution is ready to deploy.

A deployed top-level policy would operate at a high level of abstraction (for computational scalability), delegating tasks requiring high perception bandwidth to very short-time-horizon, time-bounded scoped-task policies which are aligned to a reward function determined by the top-level policy itself, and delegating inference about the high-level world state to a large ensemble of state estimators and an infra-Bayesian form of sensor fusion (which enhances robustness and reduces the feasibility of tampering-style collusion [3]).

At this point I defer further discussion to the comment section, which I will pre-emptively populate with a handful of FAQ-style questions and answers.

^ Here I mostly mean to refer to the concept of credal sets: a conservative extension of Bayesian probability theory which incorporates the virtues of interval arithmetic: representing uncertainty with ranges, generalized to closed convex regions (of higher-dimensional or infinite-dimensional spaces). Variants of this theory have been rediscovered many times (by Choquet, Dempster & Shafer, Williams, Kosoy & Appel, etc.) under various names ("imprecise probability", "robust Bayes", "crisp infradistributions," etc.), each of which has some idiosyncratic features. In the past few years it has become very clear that convex subsets of probability distributions are the canonical monad for composing non-deterministic and probabilistic choice, i.e. Knightian uncertainty and Bayesian risk. Category theory has been used to purify the essential concepts there from the contradictory idiosyncratic features introduced by different discoverers—and to connect them (via coalgebra) to existing ideas and algorithms in model-checking. Incidentally, convex sets of probability distributions are also the central concept in the 2013 positive result on probabilistic reflective consistency by Christiano, Yudkowsky, Herreshoff and Barasz. 
P_cΔ, seen in my type signature for formal world-model, is the notation for this monad (the monad of "crisp infradistributions" or "credal sets" or etc.), whereas Δ is a monad of ordinary probability distributions. Infra-Bayesian physicalism goes much farther than the decision theory of credal sets, in order to account for embedded agency via naturalized induction, and casts all desirabilities in the form of irreducibly valuable computations. I think something in this direction is philosophically promising, and likely on the critical path to ultimate ambitious alignment solutions in the style of CEV or moral realism. But in the context of building a stop-gap transformative AI that just forestalls catastrophic risk while more of that philosophy is worked out, I think policies based on infra-Bayesian physicalism would fail to satisfy conservative safety properties due to unscoped consequentialism and situated awareness. It's also probably computationally harder to do this properly rather than just specifying a Cartesian boundary and associated bridge rules. (A toy numerical illustration of the credal-set decision rule appears at the end of this post.)

^ This is a simplification; for an initial fixed time-period post-deployment in which the agent is building planetary-scale infrastructure, only the agent's own actions' counterfactual impact on the features would be scored.

^ Actual tampering with physical sensors is already ruled out by model-checking with respect to the entire formal world-model, which is also the sole source of information for the central model-based RL optimizer.
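To make the credal-set idea in the first footnote concrete, here is a toy numerical sketch; this is an illustration added to this write-up, not davidad's code, and the outcomes, distributions, and utilities are all made up. A finite credal set is represented by the vertices of a convex set of probability distributions, and each candidate plan is scored by its worst-case expected utility over that set, which is roughly the flavor behind the "pessimized rollouts" mentioned above.

```python
# Toy illustration (hypothetical numbers): scoring plans by worst-case expected
# utility over a credal set, i.e. a convex set of probability distributions.
# Expected utility is linear in the distribution, so over a polytope credal set
# the minimum is attained at a vertex; checking the listed vertices is exact.
import numpy as np

outcomes = ["good", "mediocre", "catastrophic"]

# Vertices of the credal set: Knightian uncertainty over which distribution
# governs the world (each row sums to 1).
credal_vertices = np.array([
    [0.70, 0.25, 0.05],
    [0.50, 0.45, 0.05],
    [0.60, 0.20, 0.20],
])

# Hypothetical utilities of each outcome under two candidate plans.
plan_utilities = {
    "aggressive plan":   np.array([20.0, 1.0, -100.0]),
    "conservative plan": np.array([ 6.0, 2.0,  -20.0]),
}

for name, u in plan_utilities.items():
    expected = credal_vertices @ u   # expected utility under each vertex
    worst_case = expected.min()      # infra-Bayesian (maximin) score
    print(f"{name}: worst-case expected utility = {worst_case:.1f}")
```

With these made-up numbers, the aggressive plan looks best under the first vertex distribution, but the conservative plan has the better worst case, so the maximin rule prefers it.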
380173b1-56b0-4643-baeb-2ddc4733083f
trentmkelly/LessWrong-43k
LessWrong
The Heckler's Veto Is Also Subject to the Unilateralist's Curse I occasionally see the "unilateralist's curse" invoked as a rationale for censorship in contexts where I am very suspicious that the actual reason is protecting some interest group's power. But if I'm alone in such suspicions, then maybe that means I'm just uniquely paranoid. To help sort out what's what, I consulted the paper by Nick Bostrom, Anders Sandberg, and Tom Douglas in which the term was coined. The main argument (as the authors note under the keyword "winner's curse") is basically an application of regression to the mean: if N agents are deciding whether to do something on the basis of its true value V plus random error term E, then someone with a large positive E might end up doing the thing even if V is actually negative—and the problem gets worse for larger N. Crucially, Bostrom et al. note: > [T]hough we have thus far focused on cases where a number of agents can undertake an initiative and it matters only whether at least one of them does so, a similar problem arises when any one of a group of agents can spoil an initiative—for instance, where universal action is required to bring about an intended outcome. [...] Thus, in what follows, we assume that the unilateralist's curse can arise when each member of a group can unilaterally undertake or spoil an initiative (though for ease of exposition we sometimes mention only the former case). The veto held by members of the United Nations Security Council is given as an illustrative example of unilateral spoiling. This re-framing of the underlying statistical insight (the unilateral veto being "dual" to the unilateral act) seems relevant to its application to censorship: an author deciding to publish a blog post (even if other forum members think it's harmful) is in the position of taking unilateralist action—but so is a member of a board of pre-readers of whom any one has the power to censor the post (even if the other reviewers think it's fine). It occurs to me that a karma system (such as that used
90796a94-9bab-45b0-b430-1213b12c0dfa
trentmkelly/LessWrong-43k
LessWrong
Outer Alignment is the Necessary Complement to AI 2027's Best Case Scenario To the extent we believe more advanced training and control techniques will lead to alignment of agents capable enough to strategically make successor agents -- and be able to solve inner alignment as a convergent instrumental goal -- we must also consider that inner alignment for successor systems can be solved much more easily than for humans, as the prior AIs can be embedded in the successor. The entire (likely much smaller) prior model can be run many times more than the successor model, to help MCTS whatever plans it's considering in the context of the goals of the designer model. I've been thinking about which parts of AI 2027 are the weakest, and this seems like the biggest gap.[1] Given this scenario otherwise seems non-ridiculous, we should have a fairly ambitious outer alignment plan meant to complement it, otherwise it seems extraordinarily unlikely that the convergent alignment research would be useful to us humans. Since modern training hasn't solved inner alignment, and control techniques do not make claims on inner alignment, it seems like the default path (even in the most optimistic scenario) would be successfully aligning world-changing models only to the not-known-to-be-bad but randomly-rolled values of the system doing the alignment research, which seems nearly useless. I'd like to zoom in on one particular element of their proposal as well: "Why is it aligned? Whereas Safer-1 had basically the same training as Agent-4, Safer-2 has a new training method that actually incentivizes the right goals and principles instead of merely appearing to. They were able to design this training method by rapidly iterating through many different ideas, and contrasting the intended result (the Spec) to the actual result (the thoughts in the chain of thought)." This approach is unlikely to work for the very obvious reason that only some flaws will be apparent. Let's imagine half of the potential alignment issues are sufficiently obvious you could notice them b
baba3c91-ec78-480a-a0ba-c8b8f6437409
trentmkelly/LessWrong-43k
LessWrong
Timeline of Future of Humanity Institute None
3e67c579-8db2-4f65-96cc-e0fe7eb3bb95
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Want to win the AGI race? Solve alignment. *Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.* Look, I really don't want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf [potential for >30%/year economic growth with AGI](https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/)). I think there's a very real specter of global authoritarianism here.[[1]](#fngd2v1mejps5)  Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems. So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs.[[2]](#fnzwk0xyc0xm) And many people just mood affiliate their way to an answer: "accelerate, progress go brrrr," or "AI scary, slow it down." I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a [10% chance of](https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/) [destroying all of humanity](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/)?[[3]](#fnen06v0wykur) Society has started waking up to AGI; like covid, the societal response will probably be a dumpster-fire, but it’ll also probably be quite intense. In many worlds, to deploy your AGI systems, people will need to be quite confident that your AGI won’t destroy the world. Right now, we’re very much not on track to solve the alignment problem for superhuman AGI systems (“scalable alignment”)—but it’s a solvable problem, if we get our act together. I discuss this in my main post today ([“Nobody’s on the ball on AGI alignment”](https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/)). On the current trajectory, the binding constraint on deploying your AGI could well be your ability to align your AGI—and this alignment solution being unambiguous enough that there is consensus that it works. Even if you just want to win the AGI race, you should probably want to invest much more heavily in solving this problem. --- Things are going to get crazy, and people will pay attention ============================================================ A mistake many people make when thinking about AGI is imagining a world that looks much like today, except for adding in a lab with a super powerful model. They ignore the [endogenous societal response](https://www.lesswrong.com/posts/gEShPto3F2aDdT3RY/sleepwalk-bias-self-defeating-predictions-and-existential). I and many others made this mistake with covid—we were freaking out in February 2020, and despairing that society didn’t seem to be even paying attention, let alone doing anything. But just a few weeks later, all of America went into an unprecedented lockdown. If we're actually on our way to AGI, things are going to get crazy. People are going to pay attention. The wheels for this are already in motion. 
Remember how nobody paid any attention to AI 6 months ago, and now Bing chat/Sydney going awry is on the front page of the NYT, [US senators are getting scared](https://twitter.com/chrismurphyct/status/1640186536825061376?s%3D46%26t%3D612rWr3QlD0pepiGrqymeg), and [Yale econ professors are advocating $100B/year for AI safety](https://twitter.com/leopoldasch/status/1639668076638777345?s%3D46%26t%3D612rWr3QlD0pepiGrqymeg)? Well, imagine that, but 100x as we approach AGI. *AI safety is going mainstream.* Everyone has been primed to be scared about rogue AI by science fiction; all the CEOs have secretly believed in AI risk for years but thought it was too weird to talk about it[[4]](#fn2gkczlh5rzk); and the mainstream media loves to hate on tech companies. Probably there will be further, much scarier wakeup calls (not just misalignment, but also misuse and scary demos in [evals](https://evals.alignment.org/)). People already [freaked out](https://twitter.com/leopoldasch/status/1635699219238645761) about [GPT-4 using a TaskRabbit to solve a captcha](https://www.nytimes.com/2023/03/15/technology/gpt-4-artificial-intelligence-openai.html)—now imagine a demo of AI systems designing a new bioweapon or autonomously self-replicating on the internet, or people using AI coders to hack major institutions like the government or big banks. Already, a majority of the population says they fear AI risk and want FDA-style regulation [in polls](https://www.lesswrong.com/posts/M3iPAmxZwy4gPXdXw/the-public-supports-regulating-ai-for-safety). The discourse on it will be incredibly dumb—I can't wait for Ron DeSantis and Kamala Harris's 2028 presidential debate on AI safety—but you won't be able to escape it. (And as stupid as all of this will be, this sort of endogenous societal response is a big reason why I'm more optimistic on AI risk in general.) The level of media scrutiny, public attention, internal employee pressure, self-regulation, government monitoring, etc. will be way too intense to ignore alignment concerns. We're seeing very early versions of self-regulation with initial [AI risk evals efforts](https://twitter.com/leopoldasch/status/1637526539205230592?s%3D46%26t%3D612rWr3QlD0pepiGrqymeg). But price in how intense it's all going to get. Do you think the US national security establishment won't get involved once they realize they have a technology more powerful than nukes on their hands? Do you think your board is going to let you release a model if the NYT is reporting in all caps that a large fraction of serious AI experts, prominent CEOs, and politicians think this could go haywire and start actually hurting people?  Imagine if you tried submitting a drug application to the FDA with a similar risk profile. --- *A reasonable objection here is: “yes, we did lock down in response to covid, and that was pretty crazy, but also our response to covid was pretty incompetent across the board; it was more like random flailing than actually doing the most effective things; and it’s not even clear if the lockdowns were net-positive.”* I agree! The societal response to AGI will probably be a dumpster-fire. But there will be a really *intense* response. I think it’s fairly likely that the cludgy response we do get is enough to throw serious sand into the gears on deployment—unless you have a convincing solution to (scalable) alignment. If anything, the example of lockdowns could point towards society responding in excessively cautious ways, heightening the returns to a convincing alignment solution even further. 
Yes, our response might also be totally ineffectual; this very much isn’t sufficient to make me sleep soundly at night. But in a large fraction of worlds, if you want to deploy your AGI, people are going to demand of you that we can be confident it’s safe.[[5]](#fna4hxr6qztyf)  --- The binding constraint on making AGI could be aligning it. You want an unambiguous solution, for which there is consensus that it’s safe. ========================================================================================================================================= You don’t even need xrisk concerns for alignment to become the binding constraint on your ability to deploy models. With current techniques, we’re very much not on track for being able to put basic guardrails on models as they become superhuman. Do you really think you’ll be able to deploy GPT-7 all across the economy if you can’t reliably ensure GPT-7 won’t break the law? The thing is, aligning superhuman AGIs is a much harder problem than near-term alignment. Current alignment techniques rely on human supervision. But as models get superhuman, it will become impossible for humans to reliably supervise these models (e.g., imagine a model proposing a series of actions or 100,000 lines of code too complicated for humans to understand). If you can’t detect bad behavior, you can’t prevent it. (And rather than the “bad behavior” in question being “prevent the models from saying bad words,” as with near-term alignment, the bad behavior for superhuman models looks more like “prevent the models from trying a coup of the US government.”) I think that aligning superhuman AGIs is a) doable, but b) nobody is on the ball right now—as discussed in [my other post](https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/). The scalable alignment plans labs currently have ([example](https://aligned.substack.com/p/alignment-optimism)) might work, but they sort of rely on “improvise in the moment, let’s cross our fingers and hope it works out.” Even if that bet works out, the safety of your systems will probably be fairly ambiguous until very late—ambiguous enough that you won’t be able to deploy. When asked, “will your superhuman AGI go haywire?”, do you think people will accept “probably not?” for an answer? If you want to win the AGI race, if you want to beat China, you’re probably going to need a better alignment plan. You want an alignment solution good enough to achieve a broad consensus that your superhuman AGI is safe. Ambiguity could be fatal to your ability to press ahead. You might not like it, you might rage at everyone's excessive safetyism and wish it were different. But, practically speaking, you should be pretty interested in [much more serious efforts](https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/) to solve scalable alignment. Let’s not lose to China because in our fervor to race to AGI, we fail to invest in the alignment research practically necessary to actually deploy AGI.[[6]](#fno2dtlbykss) --- *Thanks to* [*Collin Burns*](http://collinpburns.com/)*,* [*Holden Karnofsky*](https://www.cold-takes.com/)*and* [*Dwarkesh Patel*](https://www.dwarkeshpatel.com/)*for comments on a draft.*   1. 
**[^](#fnrefgd2v1mejps5)** Though, for now, it seems that China is a few years behind, and the US AI chip export controls might considerably hamper them ([great CSIS explainer on the export controls](https://www.csis.org/analysis/choking-chinas-access-future-ai), [CSET report on why china might have a hard time catching up](https://cset.georgetown.edu/wp-content/uploads/CSET-Chinas-Progress-in-Semiconductor-Manufacturing-Equipment.pdf)). So especially if timelines are short, we have a healthy lead for now. 2. **[^](#fnrefzwk0xyc0xm)** Which risk is bigger, AI misalignment or "bad guys getting AGI first"? cf Holden Karnofsky on the ["caution vs. competition"](https://www.cold-takes.com/making-the-best-of-the-most-important-century/) frame 3. **[^](#fnrefen06v0wykur)** Or at least, it’s widely believed it has such a 10% chance. 4. **[^](#fnref2gkczlh5rzk)** Roon [gets it right](https://twitter.com/tszzl/status/1639127667441299456?s%3D46%26t%3D612rWr3QlD0pepiGrqymeg). 5. **[^](#fnrefa4hxr6qztyf)** If this ends up being a big barrier to deploying your model in 50% of worlds, that 50% is enough to make alignment incredibly commercially valuable for you. 6. **[^](#fnrefo2dtlbykss)** An interesting potential implication not discussed in the main post: if alignment techniques become incredibly commercially valuable/key competitive advantages, will these become trade secrets not shared publicly or with other labs?
e0d50c48-d1f2-4bdd-9a38-d98a001a6519
trentmkelly/LessWrong-43k
LessWrong
[Link] How Signaling Ossifies Behavior Here is a new post at EconLog in which Bryan Caplan discusses how signalling contributes to the status quo bias. > The lesson: In the real world, signaling naturally tends to ossify behavior - to lock in whatever the status quo happens to be.  If you're an optimist, you can protest, "It's only a tendency."  But even an optimist should admit that this tendency leads to atypically slow and unreliable progress. 
641fd7c3-a928-458c-8bc8-2121345ee726
trentmkelly/LessWrong-43k
LessWrong
Technological unemployment as another test for rationalist winning Ultimately, rationalism should help people win. Scott Alexander claims that the surge of the price of Bitcoin was a test of that: > ...suppose God had decided, out of some sympathy for our project, to make winning as easy as possible for rationalists. He might have created the biggest investment opportunity of the century, and made it visible only to libertarian programmers willing to dabble in crazy ideas. And then He might have made sure that all of the earliest adapters were Less Wrong regulars, just to make things extra obvious. > > This was the easiest test case of our "make good choices" ability that we could possibly have gotten, the one where a multiply-your-money-by-a-thousand-times opportunity basically fell out of the sky and hit our community on its collective head. So how did we do? > > I would say we did mediocre. Five years later, suppose God wanted to give rationalists another test. But instead of the opportunity to win big, He wanted to test whether they could avoid losing hard. He might create the largest workforce disruption of the century, driven by an unpredictable technology (which the rationalists happen to know the most about) and primarily affecting white-collar workers (which most rationalists are). If rationalism truly helps people predict the future and make better decisions, rationalists who work should survive the incoming wave of AI-driven job automation better than everyone else. Of course, this only applies for those whose jobs are truly in danger. I'm pursuing a career in AI alignment research – if that becomes automated, none of this matters. But suppose, like many of us, you're a software engineer. You should be paying careful attention to forecasts related to your future compensation and learning how to use the latest tools which accelerate your productivity relative to your competitors. And if things start looking really grim, since you know about Moravec's paradox and status quo bias, you'd start learning valuable blue-col
fb61f861-fd44-49ec-bc33-82fe5b87f399
trentmkelly/LessWrong-43k
LessWrong
We might need to rethink the Hard Reset, aka the AI Pause. We might need to rethink the Hard Reset, aka the AI Pause. Last month Viktoria joined me for a talk at Cohere for AI. It was perfect timing, as she told us about AGI Safety. A few days ago, Viktoria, with Elon among others, asked all AI labs to pause training superintelligent AI. The Future of Life Institute is one of the few organisations thinking about existential risks of such technologies. The open letter stated: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." I agree with Viktoria on this: for the future of humanity we need to think about these things, so sign the petition if you are working on AI and feel the need to. However, there is another point to consider when it comes to pausing technological development. A principle of differential technology development may inform government research funding priorities and technology regulation, as well as philanthropic research and development funders and corporate social responsibility measures. A differential approach is a constant, recurring pause we take at each stage of development. I created the journal on Progress Studies to address these kinds of tech developments. Jonas Sandbrink and Anders Sandberg wrote a paper titled "Differential Technology Development: A Responsible Innovation Principle for Navigating Technology Risks". I think this approach is more calculated and should be employed by governments and research labs when developing superintelligent technology, such as the UK Government's Taskforce led by Matt Clifford, which focuses on safety, security and robustness. I am advocating for us to develop a General Differential Technological Development: we need quantum safety, nuclear safety, and AI safety all given the same weight, without biases. The hyperscalers and accelerationists are spending tens of billions on GPUs to do billion-dollar GPU runs, more and more compute without retreat. ChatGPT and then GPT
33749a3d-dd15-471e-9b9f-bb30f28efe89
trentmkelly/LessWrong-43k
LessWrong
COVID China Personal Advice (No mRNA vax, possible hospital overload, bug-chasing edition) So, uh, things in China are getting a bit interesting. There's a general consensus that the public COVID numbers no longer reflect reality. Baidu started hiding their case tracking display. Their last updates had weird inconsistencies between asymptomatic and symptomatic cases. Testing stations are going offline. While previously a single case of close contact would have been enough to shut down an entire college, now facilities are running even with multiple symptomatic cases. Every time things have loosened to this degree before, cases have spiked. Many people I know mentioned family members being infected, some of which are unreported, while official case numbers are on the decline. The security people who checked health app codes aren't even pretending to care anymore. I was explicitly told not to show them my health codes when I come to work today. Lines for testing booths at peak times are basically non-existent.  I think this is it. COVID will go full endemic in China. Exponential.  I'm in my late 20s and in great physical condition. I exercise multiple times a week (weightlifting only). Unfortunately, I only have 2 shots of Sinovac. I'm planning to get some dental work taken care of this weekend and will get a third shot if it's available. I think I have 2 options, really. First, I fly to Hong Kong and get the mRNA vax there, which an executive at my company mentioned she planned to do. Not sure if the EV is going to be positive, and I'll have to take a week off work. Second, I go to a hospital/visit a poz family friend and intentionally poz myself ASAP before the hospitals get overloaded.  Overall, I'm not too concerned, but am interested in what you guys think. I would also love to hear any investment advice from those familiar with Chinese markets.
efb98727-68c7-4aa3-b7aa-259478075cfc
trentmkelly/LessWrong-43k
LessWrong
Fake Utility Functions Every now and then, you run across someone who has discovered the One Great Moral Principle, of which all other values are a mere derivative consequence. I run across more of these people than you do.  Only in my case, it's people who know the amazingly simple utility function that is all you need to program into an artificial superintelligence and then everything will turn out fine. (This post should come as an anticlimax, since you already know virtually all the concepts involved, I bloody well hope.  See yesterday's post, and all my posts since October 31st, actually...) Some people, when they encounter the how-to-program-a-superintelligence problem, try to solve the problem immediately.  Norman R. F. Maier:  "Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any."  Robyn Dawes:  "I have often used this edict with groups I have led - particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately."  Friendly AI is an extremely tough problem so people solve it extremely fast. There's several major classes of fast wrong solutions I've observed; and one of these is the Incredibly Simple Utility Function That Is All A Superintelligence Needs For Everything To Work Out Just Fine. I may have contributed to this problem with a really poor choice of phrasing, years ago when I first started talking about "Friendly AI".  I referred to the optimization criterion of an optimization process - the region into which an agent tries to steer the future - as the "supergoal".  I'd meant "super" in the sense of "parent", the source of a directed link in an acyclic graph.  But it seems the effect of my phrasing was to send some people into happy death spirals as they tried to imagine the Superest Goal Ever, the Goal That Overrides All Other Goals, the Single Ultimate Rule From Which All Ethics Can Be Derived. But a utility function doesn't have to be simple.  It c
cbd313f0-e229-4f63-a653-404fabb9e819
trentmkelly/LessWrong-43k
LessWrong
Where else are you posting? As a result of XiXiDu's massive Resources and References post, I just found out about Katja Grace's very pleasant Meteuphoric blog. I'm curious about what else LessWrongians are posting at other sites, and if there's interest, I'll make this into a top-level post. I also post at Input Junkie.
016979b1-2c6c-458c-b13c-5fabd7f72743
trentmkelly/LessWrong-43k
LessWrong
Debunk the myth - Testing the generalized reasoning ability of LLMs Conclusion Current LLM Reasoning Ability: As of March 2025, the actual reasoning capabilities of publicly available LLMs are approximately 50 times lower than what benchmarks like AIME suggest. Today, misleading marketing about LLMs' reasoning ability is rampant on the Internet. It usually makes strong claims: that LLMs reach considerable (80%+) accuracy on mathematical benchmarks that most people find difficult despite requiring little background knowledge, or it assigns them a [doctoral level] intelligence rating based on tests of erudition. With a skeptical attitude, we design some questions. https://llm-benchmark.github.io Click to expand all questions and model answers Testing Methodology The premise of testing the real generalizable reasoning ability of an LLM is that the tester has the ability to ask new questions. Question structure: generalized reasoning is tested in text form, with as little background knowledge as possible, meaning that no high school mathematics knowledge is required (this does not exclude the auxiliary role of acquired knowledge in solving problems). Ensure generalization: several different experimental methods. Assume the creator has a purpose: to support his claim, and to show as strongly as possible that the LLM has low generalizable reasoning ability, he wants the questions he creates to be as easy as possible for people while the LLM is completely unable to answer them. Treating this as a competition between creators, with n = problem difficulty and d = the LLM's error rate, the creator's score = (1/n^2)*d 1. A fairer method: the creator has never been exposed to the target LLM, and tries to create questions that he thinks are "novel" given his knowledge structure 2. After the creator interacts with the target LLM, he has a certain understanding of it, and creates questions that he thinks are "novel" for the target in a targeted manner. During the creation period, he cannot access the target LLM again 3. The creator tak
fc7d6c50-7190-4eea-b025-c9fae3d37c12
trentmkelly/LessWrong-43k
LessWrong
Francois Chollet inadvertently limits his claim on ARC-AGI Specifically, this tweet makes the assumption that open-source AI is basically as good at search, especially automated search for algorithms as the big labs, and if that assumption is violated, the prize won't be claimed even if big companies have much better AI than the general public. This is short, but IMO this implies it's measuring open-source's ability to get good AI, not whether any AI can be very general.
725c97c4-c33a-4f1d-b14b-56e1f6c0656a
trentmkelly/LessWrong-43k
LessWrong
AI Safety Europe Retreat 2023 Retrospective This is a short impression of the AI Safety Europe Retreat (AISER) 2023 in Berlin. Tl;dr: 67 people working on AI safety technical research, AI governance, and AI safety field building came together for three days to learn, connect, and make progress on AI safety. Format The retreat was an unconference: Participants prepared sessions in advance (presentations, discussions, workshops, ...). At the event, we put up an empty schedule, and participants could add in their sessions at their preferred time and location.  Empty and full Schedule. In each time-slot there were multiple sessions in parallel in different locations. This way, people could choose which one to go to based on their interests. Participants Career Stage About half the participants are working, and half of them are students. Everyone was either already working on AI safety, or intending to transition to work on AI safety. Focus areas Most participants are focusing on technical research, but there were also many people working on field building and AI governance: Research Programs Many participants had previously participated, or currently participate in AI safety research programs (SERI MATS, AI safety Camp, SPAR, PIBBSS, MLSS, SERI SRF, Refine)  Countries All but one participant were based in Europe, with most people from Germany, the Netherlands and the UK. Who was behind the retreat? The retreat was organized by Carolin Basilowski (now EAGx Berlin team lead) and me, Magdalena Wache (independent technical AI safety researcher and SERI MATS scholar). We got funding from the long-term future fund.  Takeaways * I got a feeling of "European AI safety community". * Unlike in AI safety hubs like the Bay area, continental Europe’s AI safety crowd is scattered across many locations. * Before the retreat I already personally knew many people working on AI safety in Europe, but that didn't feel as community-like as it does now. * Other people noted a similar feeling of co
206cc696-16be-49ae-8525-a0cd41c426f8
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Notes on Antelligence *(This post uses a more generalized definition of "simulator" than the one used in Simulators. Simulators, as defined here, are functions optimized towards some arbitrary range of inputs, and without requirements on how the function was created. If you need to distinguish between the two definitions, the simulators here can basically just be described as functions that represent some real world process.)* [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb7799b6-45a9-487a-918d-b3921aa3b552_1024x1024.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb7799b6-45a9-487a-918d-b3921aa3b552_1024x1024.png) *image by DALL-E*   5 Point Summary (Idea from [this article](https://www.lesswrong.com/posts/8gbfvGhSnEJ9hHGew/10-reasons-why-lists-of-10-reasons-might-be-a-winning)): 1. Antelligence is emulation accuracy. In other words, when a simulator tries to emulate a different system, its accuracy (the similarity between its outputs and the outputs of the other system) is the same thing as its antelligence. This also makes antelligence relative, meaning that it’s dependent on what is considered the “ideal” output/system. 2. Antelligence is not defined through agency or being goal driven, and so addresses modern neural networks and simulators like GPT-3/4 and DALL-E much better than Intelligence. 3. Simulators optimize towards arbitrary ranges over arbitrary inputs, with optimizers being a type of simulator which optimize towards only 1 range over all inputs. 4. Antelligence is why humans are so successful, since high antelligence relative to the real world allows us to predict the future, letting us act in ways much more intelligent than other creatures. 5. Artificial simulators are inherently benign (won’t suddenly transform into an agent that tries to kill us all), but powerful and potentially dangerous tools that would only lead to human extinction if we simulate and actively give it the ability to kill us all and/or give it to agents (human or artificial) that want to kill us all, making them equivalent in danger to nuclear weapons (with their strength being determined by their antelligence relative to the real world, as compared to humans). Eliezer Yudkowsky was the first to define intelligence as [Efficient Cross-Domain Optimization](https://www.lesswrong.com/posts/yLeEPFnnB9wE7KLx2/efficient-cross-domain-optimization), with this definition being the primary one used in discussions over AI and potential AGI’s and ASI’s for years now, along with helping to spur the many discussions over AI alignment. I’m making this article here to offer an alternative to this idea of intelligence, which may be better for analyzing AI like GPT-3/4 and DALL-E than a definition based on optimization. Of course, I don’t want to [argue over definitions](https://www.lesswrong.com/tag/arguing-by-definition), which is why I’ll be referring to this new idea as “antelligence.” This idea builds mainly off of the ideas discussed in [Simulators](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx), so I’d recommend reading that article first (although I will be reiterating a lot of information from there in here as well). 
And, as a fair warning, I’d like to say that a lot of this article strays pretty far from conventional theories of AI intelligence that I’ve seen here, and so I haven’t quite reconciled how I got to pretty different and important conclusions in this article compared to those in other articles here. Onto the rest of this article, the first half will be focusing on exploring antelligence, and the second on how it relates to a few previous ideas in AI alignment discussion, like optimizers and mesa-optimizers. **Simulators and Antelligence** [Simulation Theory](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx) is a theory which introduces the idea of classifying GPT-3/4 and AI like it through the lens of “Simulators.” Simulators then produce “simulacra,” which are basically the results from plugging different inputs into the simulator. So, a simulator is something like GPT-3/4, while simulacra are like the text that GPT-3/4 outputs. Simulacra can also be simulators, as long as they’re functions which can still take inputs and which are optimized for (although the optimization requirement is redundant, as seen in my proof in the 2nd half where I prove that all simulacra are arbitrarily optimized). The best part about simulators is that they’re, by definition, just functions, where you plug in specific inputs to set the conditions and get specific outputs (or with more complex systems, probability distributions). This allows us to think of, and possibly calculate, antelligence, along with directly allowing us to analyze neural networks as antelligent (since they’re just extremely large functions themselves). Now, a thought experiment. Imagine two simulators. Let’s say that at first, the two simulators are trained on totally different data, and create completely different functions which produce very different outputs based on the same inputs. In this scenario, let’s also say that simulator 1 is something which we only have access to through its inputs and outputs, but not the actual computation itself (like the entire sum of internet text). We can then try to train simulator 2 on the inputs and outputs of simulator 1, similar to how ChatGPT trains on large amounts of internet text. We can say that, at first, the outputs of the two simulators are not very similar, and that simulator 2 has *low antelligence relative to* simulator 1, since the accuracy with which it emulates simulator 1 is low. As the two become more similar, simulator 2 gains antelligence relative to simulator 1 until the two simulators are equivalent and they both have 100% antelligence relative to each other.  Antelligence, in this case, is something relative. It’s something which changes based on whichever simulator is determined to be more valuable. The delusional man on the street’s brain might be simulating stuff which inaccurately emulates the real world, but there’s nothing stopping us other than practicality from saying that the real world is wrong and the man’s brain is actually correct. This also explains how seemingly intelligent beings can have varying levels of antelligence in reference to different tasks, since antelligence is fundamentally task specific.  These tasks are learned through memory. In the brain, [memories are sets of neurons that fire together](https://qbi.uq.edu.au/brain-basics/memory/how-are-memories-formed). The same holds true for other neural networks as well, like those found in AI. These memories are formed by learning, aka using training data to change the weights and biases of the neurons. 
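The thought experiment above suggests one simple way to operationalize "antelligence relative to" a reference system: score how often one simulator's outputs match the other's over a shared set of inputs. The sketch below is a minimal illustration of that reading, not anything from the original post; the two "simulators" and the agreement metric are made-up stand-ins chosen purely for demonstration, and probabilistic simulators would need a distribution-level comparison instead.

```python
# Minimal sketch: "antelligence of simulator B relative to simulator A",
# read as emulation accuracy over a shared set of inputs.
# Both simulators and the agreement metric are illustrative stand-ins.

def simulator_a(x: int) -> int:
    # Reference system we only query through its inputs and outputs.
    return (3 * x + 1) % 10


def simulator_b(x: int) -> int:
    # A second simulator attempting to emulate simulator_a (imperfectly).
    return (3 * x) % 10


def antelligence(emulator, reference, inputs) -> float:
    """Fraction of inputs on which the emulator reproduces the reference output."""
    matches = sum(emulator(x) == reference(x) for x in inputs)
    return matches / len(inputs)


if __name__ == "__main__":
    test_inputs = range(100)
    score = antelligence(simulator_b, simulator_a, test_inputs)
    print(f"antelligence of B relative to A: {score:.2f}")
```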
So, we can say that experiences are used, through a method of learning (like backpropagation), to modify and define the function in a way we can think of as creating memories. This makes a lot of intuitive sense. People who share similar memories tend to have similar viewpoints, as AI which have similar weights and biases tend to create similar outputs. People who have lots of experience and memories doing specific tasks are better at doing them effectively than those who don’t. The more diverse someone’s memories, the better they’ll be able to deal with new tasks. Since antelligence is relative, it should be talked about dependent on certain tasks. A simulator can be highly antelligent relative to the ideal chess system, but have very low antelligence relative to the ideal internet chatbot, and vice versa. Therefore, the way that general antelligence emerges is training a simulator on many different tasks, allowing for the tasks, and the tasks similar to the tasks the AI is trained for, to be done more accurately. This seems to line up with what we’ve seen in real life, with more training data, and different types of training data, allowing for higher antelligence relative to the real world. To achieve AGI, then, would simply take a sufficiently diverse range of training inputs and the architectures/learning methods to calculate and store the patterns found. For sufficiently complicated environments (like the types an ASI would have to train on), the problem of how much training power is needed becomes much more prominent. It’s reasonable to say that simulators, when trained in the most ideal way to emulate, will typically detect the most obvious patterns first. It’s also typical that patterns tend to give less and less of a boost to the emulation accuracy as they become more and more complicated. This is because obvious patterns will generally occur more frequently, and so are generally more important to the outputs of the less frequent patterns. Therefore, the rate at which an AI’s antelligence relative to the real world goes up will decrease as the simulator becomes more intelligent (aka, there will be diminishing returns on antelligence gain). This growth is also asymptotic, since the AI will not be able to be more than 100% antelligent relative to the universe. The amount of computing power, time, and input amounts will all necessarily go up as AIs become more intelligent ([as seen now with current AI systems](https://www.technologyreview.com/2019/11/11/132004/the-computing-power-needed-to-train-ai-is-now-rising-seven-times-faster-than-ever-before/)), removing any possibility of a singularity, and possibly any chance of ASI popping up soon (which will be limited by how [modern computing power compares to the power of our own brains](https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know)). Of course, for the complicated problems of today, like solving the Riemann Hypothesis, curing cancer, etc, these improvements in antelligence will be much more meaningful. They might also be counteracted, at least partly, by the AI potentially engineering new computer parts and learning methods, although it’s very possible that by the time AIs are antelligent enough to start making gains in these areas, we will have already reached close enough to the limits of technology in these areas to prevent a FOOM. 
While the returns may be diminishing in general, AIs will still make meaningful gains on specific tasks that we haven’t solved yet, still supporting the idea that there will be an explosion of new knowledge once AI slides past the antelligence of humans in different tasks. **On Optimizers** To start, let’s compare optimizers and simulators. Simulators, as I explained in the previous section, can be thought of as functions. Janus goes a step further in defining them as being “models trained with predictive loss on a self-supervised dataset, invariant to architecture or data type” (aka optimized towards what Janus calls the simulation objective), but this is (mostly) extraneous for the purposes of this article, as I’ll explain now.  [Optimizers](https://www.lesswrong.com/tag/optimization) optimize towards a specific goal. When it comes to actually implementing and creating optimizers, it’s best to think of optimizers as being functions (like simulators) which take inputs and then create outputs which fall within the range of the specified goal/what’s being optimized for. For the sake of not changing definitions, we’ll say that an optimizer optimizes towards 1 specific range. If we wanted to, though, we could also analyze simulators as functions that optimize, but instead of optimizing towards the outputs being in 1 range, they optimize towards arbitrary ranges over arbitrary sets of inputs.  Under this definition, we can also prove that any simulacra is optimized for. This is because all simulacra come from plugging inputs into a simulator, reducing the possibilities for outputs, meaning that the range of possible outputs of the simulacra must always be less than or equal to the range of possible outputs of the original simulator. We can then say that for any simulacra, that smaller range or equal range is the arbitrary range that the simulacra is optimized for. Since everything is technically a simulacra of the laws of physics, we can say that everything is optimized for, making the previous inclusion of optimization in the definitions of simulacra and simulators redundant. This also means that all optimizers are simulators: optimizers can be defined as simulators which optimize towards 1 range. These properties allow us to reconsider the problem of mesa-optimization presented with natural selection and the emergence of human beings. This example is explained on the [alignment forum](https://www.alignmentforum.org/tag/mesa-optimization): > Example: Natural selection is an optimization process that optimizes for reproductive fitness. Natural selection produced humans, who are themselves optimizers. Humans are therefore mesa-optimizers of natural selection. > > In the context of AI alignment, the concern is that a base optimizer (e.g., a gradient descent process) may produce a learned model that is itself an optimizer, and that has unexpected and undesirable properties. Even if the gradient descent process is in some sense "trying" to do exactly what human developers want, the resultant mesa-optimizer will not typically be trying to do the exact same thing. > > So, in this problem, both humans and natural selection are categorized simply as optimizers. If we were to analyze them both as simulators, however, we could disentangle some important properties of the two which demonstrate where the argument in the example above falters.  
We can define natural selection as a simulacra of biological environments (which act as simulators) over the range of most lifeforms which optimizes for lifeforms with higher reproductive fitness. The lifeforms within these environments are also optimizers, with the range they optimize for being within the range that generally increases their reproductive fitness. This optimization towards reproductive fitness was then done by proxy, through things like emotions, feelings, and instincts, but still ultimately not straying from having high reproductive fitness.  Admittedly, even humans still have very high reproductive fitness when looking at the entire species; it’s just that, since natural selection relies on competition and we’re orders of magnitude more powerful than our competition, the effect of natural selection has become moot. The key to why we were able to get so powerful in the first place, though, is our extraordinary ability to predict what will happen in the environment. This was done through the creation of a separate simulator in our head which is trained on the environment around us, creating a simulator more similar to the environment, and therefore with a higher antelligence relative to the environment. [The more animal part of our brain](https://docs.google.com/document/d/10CiEI7aDL2bMIdx7yayy3vlq0TJ8dO5LGnG7yIDPiw8/edit#heading=h.do0uieu84ci2) then interacts with the new simulator, allowing us to act on predicted events and increase our fitness massively. What allowed humans such a great advantage, therefore, was the addition of a simulator which predicts the environment, in an environment where all other lifeforms aren’t able to do so. The argument from the example at the beginning of this section (and the [explanation](https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers) in the introduction of this article) argues that since humans are no longer optimized for what natural selection optimized for, it’s possible that something analogous could happen with AI and our own training methods for it. The problem with this argument is that natural selection isn’t actually an optimizer, or even a simulator. The environment is, and natural selection is simply a property/tendency of the environment, and so the bonds of natural selection are actually pretty weak. The bonds we put on AI training could also be weak if we allow for the AI to start using the internet or have access to the outside world in other ways, but that still doesn’t necessarily spell out AI doom. Reproductive fitness, what natural selection optimizes for, selects for lifeforms that hog resources and try to expand at all costs in highly competitive environments, which are things that we probably won’t select for in our training of AIs. Technically, we don’t even need to train AI agents, since it’s completely possible to just create independent simulators (like with GPT-3/4). Say, worst case scenario, someone creates a simulator with antelligence much higher than our own relative to the real world. Someone then trains a bunch of AIs to focus on killing each other and hogging as many resources as possible using RL, and then plugs the AI agent into the super powerful simulator. The AI agent *still has to be trained to connect to the simulator*. 
And we’d get to see this AI train as well, probably in a pseudo-environment similar to the real world (since anyone malicious enough to do this probably doesn’t want the AI to get caught, which, due to its initial incompetence in interacting with the simulator, is very likely to happen). But that doesn’t even consider the fact that, assuming the ASI understood language, it’s more likely people would just talk to it, putting in certain inputs and seeing how it predicted what happened. **Conclusion** Based on what we’ve seen recently and the idea of antelligence, AI, at future levels of antelligence, would be like nuclear energy, something dangerous and powerful, but ultimately controllable. It’s likely that it will increase only at the same rate as computing power, passing human antelligence (possibly soon), but still within reasonable limits, and after increasingly longer and more compute-intensive training periods. AI alignment, then, should focus on preventing people from training AI agents to also use simulators (especially if they have access to the outside world), and making sure that, if those AI agents are connected to the simulators, their goals and motivations don’t lead to the AI doing things we don’t want it to do (especially accessing extra antelligence relative to the world through the simulator, which would give the AI extra powerful abilities).
d9b8ece1-126f-44bb-988b-863d79821194
trentmkelly/LessWrong-43k
LessWrong
Finally just created comprehensive resource collections/guides for autodidactism/several scientific subjects Autodidactism Individual academic subjects: http://www.quora.com/Alex-K-Chen/Useful-Science-Learning-Resource-Collections Data Science already covered by Alex Kamil: http://www.quora.com/Educational-Resources/How-do-I-become-a-data-scientist Can anyone please give me some feedback? Thanks!
81988925-3ac3-4022-888e-e36436f8b348
trentmkelly/LessWrong-43k
LessWrong
Meetup : Israel: Harry Potter and the Methods of Rationality Pi Day Wrap Party Discussion article for the meetup : Israel: Harry Potter and the Methods of Rationality Pi Day Wrap Party WHEN: 14 March 2015 08:30:00PM (+0200) WHERE: Herzliya, Israel Yonatan Calé is hosting the Harry Potter and the Methods of Rationality Pi Day Wrap party: Saturday, March 14 at 20:30 in Herzliya. Harry Potter and the Methods of Rationality will have its final chapter released on Pi Day (3.14), and this is one of the celebrations are being planned around the world. Contact Yonatan at myselfandfredy@gmail.com for the exact location. Here's the Facebook event where you can be in touch and RSVP https://www.facebook.com/events/432725193554286/ Discussion article for the meetup : Israel: Harry Potter and the Methods of Rationality Pi Day Wrap Party
1331a7f7-225c-4129-81ed-20bac5808729
trentmkelly/LessWrong-43k
LessWrong
High energy ethics and general moral relativity Utilitarianism sometimes supports weird things: killing lone backpackers for their organs, sacrificing all the world's happiness to one utility monster, creating zillions of humans living on near-subsistence level to maximize total utility, or killing all but a bunch of them to maximize average utility. Also, it supports gay rights, and has been supporting them since 1785, when saying that there's nothing wrong in having gay sex was pretty much in the same category as saying that there's nothing wrong in killing backpackers. This makes one wonder: if despite all the disgust towards them a few centuries ago, gay rights have been inside humanity's coherent extrapolated volition all along, then perhaps our descendants will eventually come to the conclusion that killing the backpacker has been the right choice all along, and only those bullet-biting extremists of our time were getting it right. As a matter of fact, as a friend of mine pointed out, you don't even need to fast forward a few centuries - there are or were already ethical systems actually in use in some cultures (e.g. bushido in pre-Meiji restoration Japan) that are obsessed with honor and survivor's guilt. They would approve of killing the backpacker or letting them kill themselves - this being an honorable death, and living while letting five other people die being dishonorable - on non-utilitarian grounds, and actually alieve that this is the right choice. Perhaps they were right all along, and Western civilization bulldozed through them, effectively destroying such cultures, not because of superior (non-utilitarian) ethics but for whatever other reasons things happened in history. In this case there's no need to try to fix utilitarianism, lest it suggest killing backpackers, because it's not broken - we are - and our descendants will figure that out. In physics we've seen this, when an elegant low-Kolmogorov-complexity model predicted that weird things happen on a subatomic level, and we've built huge part
339bf317-182a-4764-89cd-b9c8cf8ec775
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Life's Story Continues **Followup to**:  [The First World Takeover](/lw/w0/the_first_world_takeover/) As [last we looked at the planet](/lw/w0/the_first_world_takeover/), Life's long search in organism-space had only just gotten started. When I try to structure my understanding of the unfolding process of Life, it seems to me that, to understand the *optimization velocity* at any given point, I want to break down that velocity using the following [abstractions](/lw/w1/whence_your_abstractions/): * The searchability of the neighborhood of the current location, and the availability of good/better alternatives in that rough region. Maybe call this the *optimization slope*.  Are the fruit low-hanging or high-hanging, and how large are the fruit? * The *optimization resources*, like the amount of computing power available to a fixed program, or the number of individuals in a population pool. * The *optimization efficiency*, a curve that gives the amount of searchpower generated by a given investiture of resources, which is presumably a function of the optimizer's structure at that point in time. Example:  If an *object-level* adaptation enables more efficient extraction of resources, and thereby increases the total population that can be supported by fixed available resources, then this increases the *optimization resources* and perhaps the optimization velocity. How much does optimization velocity increase - how hard does this object-level innovation hit back to the meta-level? If a population is small enough that not all mutations are occurring in each generation, then a larger population decreases the time for a given mutation to show up.  If the fitness improvements offered by beneficial mutations follow an exponential distribution, then - I'm not actually doing the math here, just sort of eyeballing - I would expect the optimization velocity to go as log population size, up to a maximum where the search neighborhood is explored thoroughly.  (You could test this in the lab, though not just by eyeballing the fossil record.) This doesn't mean *all* optimization processes would have a momentary velocity that goes as the log of momentary resource investment up to a maximum.  Just one mode of evolution would have this character.  And even under these assumptions, evolution's *cumulative* optimization wouldn't go as log of *cumulative* resources - the log-pop curve is just the instantaneous velocity. If we assume that the variance of the neighborhood remains the same over the course of exploration (good points have better neighbors with same variance ad infinitum), and that the population size remains the same, then we should see linearly cumulative optimization over time. At least until we start to hit the information bound on maintainable genetic information... These are the sorts of abstractions that I think are required to describe the history of life on Earth in terms of optimization. And I also think that if you don't talk optimization, then you won't be able to understand the causality - there'll just be these mysterious unexplained progress modes that change now and then.  In the same way you have to talk natural selection to understand observed evolution, you have to talk optimization velocity to understand observed evolutionary speeds. The first thing to realize is that meta-level changes are rare, so most of what we see in the historical record will be structured by the *search neighborhoods* - the way that one innovation opens up the way for additional innovations.  
That's going to be most of the story, not because meta-level innovations are unimportant, but because they are rare. In "[Eliezer's Meta-Level Determinism](http://www.overcomingbias.com/2008/06/eliezers-meta-l.html)", Robin lists the following dramatic events traditionally noticed in the fossil record: > Any Cells, Filamentous Prokaryotes, Unicellular Eukaryotes, Sexual Eukaryotes, Metazoans > > And he describes "the last three strong transitions" as: > Humans, farming, and industry > > So let me describe what I see when I look at these events, plus some others, through the lens of my abstractions: *Cells:*  Force a set of genes, RNA strands, or catalytic chemicals to share a common reproductive fate.  (This is the real point of the cell boundary, not "protection from the environment" - it keeps the fruits of chemical labor inside a spatial boundary.)  But, as we've defined our abstractions, this is mostly a matter of optimization slope - the quality of the search neighborhood.  The advent of cells opens up a tremendously rich new neighborhood defined by *specialization* and division of labor.  It also increases the slope by ensuring that chemicals get to keep the fruits of their own labor in a spatial boundary, so that fitness advantages increase.  But does it hit back to the meta-level?  How you define that seems to me like a matter of taste.  Cells don't quite change the mutate-reproduce-select cycle.  But if we're going to define sexual recombination as a meta-level innovation, then we should also define cellular isolation as a meta-level innovation. It's worth noting that modern genetic algorithms have not, to my knowledge, reached anything like the level of intertwined complexity that characterizes modern unicellular organisms.  Modern genetic algorithms seem more like they're producing individual chemicals, rather than being able to handle individually complex modules.  So the cellular transition may be a hard one. *DNA:*  I haven't yet looked up the standard theory on this, but I would sorta expect it to come *after* cells, since a ribosome seems like the sort of thing you'd have to keep around in a defined spatial location.  DNA again opens up a huge new search neighborhood by separating the functionality of chemical shape from the demands of reproducing the pattern.  Maybe we should rule that anything which restructures the search neighborhood this drastically should count as a hit back to the meta-level.  (Whee, our abstractions are already breaking down.)  Also, DNA directly hits back to the meta-level by carrying information at higher fidelity, which increases the total storable information. *Filamentous prokaryotes, unicellular eukaryotes:*  Meh, so what. *Sex:*  The archetypal example of a rare meta-level innovation.  Evolutionary biologists still puzzle over how exactly this one managed to happen. *Metazoans:*  The key here is not cells aggregating into colonies with similar genetic heritages; the key here is the controlled specialization of cells with an identical genetic heritage.  This opens up a huge new region of the search space, but does not particularly change the nature of evolutionary optimization. Note that opening a sufficiently huge gate in the search neighborhood, may *result* in a meta-level innovation being uncovered shortly thereafter.  E.g. if cells make ribosomes possible.  One of the main lessons in this whole history is that *one thing leads to another*. 
Neurons, for example, may have been the key enabling factor in enabling large-motile-animal body plans, because they enabled one side of the organism to talk with the other. This brings us to the age of brains, which will be the topic of the next post. But in the meanwhile, I just want to note that my view is nothing as simple as "meta-level determinism" or "the impact of something is proportional to how meta it is; non-meta things must have small impacts".  Nothing much *meta* happened between the age of sexual metazoans and the age of humans - brains were getting more sophisticated over that period, but that didn't change the nature of evolution. Some object-level innovations are small, some are medium-sized, some are huge.  It's no wonder if you look at the historical record and see a Big Innovation that doesn't look the least bit meta, but had a huge impact by itself *and* led to lots of other innovations by opening up a new neighborhood picture of search space.  This is allowed.  Why wouldn't it be? You can even get exponential acceleration without anything meta - if, for example, the more knowledge you have, or the more genes you have, the more opportunities you have to make good improvements to them.  Without any increase in optimization pressure, the neighborhood gets higher-sloped as you climb it. My thesis is more along the lines of, "If this is the picture *without* recursion, just imagine what's going to happen when we *add* recursion." To anticipate one possible objection:  I don't expect Robin to disagree that modern civilizations underinvest in meta-level improvements because they take time to yield cumulative effects, are new things that don't have certain payoffs, and worst of all, tend to be public goods.  That's why we don't have billions of dollars flowing into prediction markets, for example.  I, Robin, or Michael Vassar could probably think for five minutes and name five major probable-big-win meta-level improvements that society isn't investing in. So if meta-level improvements are rare in the fossil record, it's not necessarily because it would be *hard* to improve on evolution, or because meta-level improving doesn't accomplish much.  Rather, evolution doesn't do anything *because* it will have a long-term payoff a thousand generations later. Any meta-level improvement also has to grant an object-level fitness advantage in, say, the next two generations, or it will go extinct. This is why we can't solve the puzzle of how sex evolved by pointing directly to how it speeds up evolution.  "This speeds up evolution" is just not a valid reason for something to evolve. Any creative evolutionary biologist could probably think for five minutes and come up with five *great* ways that evolution could have improved on evolution - but which happen to be more complicated than the wheel, which evolution evolved on only three known occasions - or don't happen to grant an *immediate* fitness benefit to a handful of implementers.
434f5064-7ff2-48a1-9ed1-b0fc5954cb1e
StampyAI/alignment-research-dataset/blogs
Blogs
let's stick with the term "moral patient" let's stick with the term "moral patient" ----------------------------------------- "moral patient" means ["entities that are eligible for moral consideration"](https://en.wikipedia.org/wiki/Moral_agency#Distinction_between_moral_agency_and_moral_patienthood). as [a recent post i've liked](https://www.lesswrong.com/posts/HoQ5Rp7Gs6rebusNP/superintelligent-ai-is-necessary-for-an-amazing-future-but-1) puts it: > And also, it’s not clear that “feelings” or “experiences” or “qualia” (or the nearest unconfused versions of those concepts) are pointing at the right line between moral patients and non-patients. These are nontrivial questions, and (needless to say) not the kinds of questions humans should rush to lock in an answer on today, when our understanding of morality and minds is still in its infancy. > > in this spirit, i'd like us to stick with using the term "moral patient" or "moral patienthood" when we're talking about the set of things worthy of moral consideration. in particular, we should be using that term instead of: * "conscious things" * "sentient things" * "sapient things" * "self-aware things" * "things with qualia" * "things with experiences" * "things that aren't p-zombies" * "things for which there is something it's like to be them" because those terms are hard to define, harder to meaningfully talk about, and we don't in fact know that those are what we'd ultimately want to base our notion of moral patienthood on. so if you want to talk about the set of things which deserve moral consideration outside of a discussion of what precisely that means, don't use a term which you feel like it *probably is* the criterion that's gonna ultimately determine which things *are* worthy of moral consideration, such as "conscious beings", because you might in fact be wrong about what you'd consider to have moral patienthood under reflection. simply use the term "moral patients", because it is the term which unambiguously means exactly that.
2c95a578-a4b4-4625-a7d3-aa7205610df8
trentmkelly/LessWrong-43k
LessWrong
Aliveness None
d1948b18-eef7-4aa2-987d-1c2361c1d456
trentmkelly/LessWrong-43k
LessWrong
Political Skills which Increase Income Summary: This article is intended for those who are "earning to give" (i.e. maximize income so that it can be donated to charity). It is basically an annotated bibliography of a few recent meta-analyses of predictors of income.

Key Results

* The degree to which management “sponsors” your career development is an important predictor of your salary, as is how skilled you are politically.
* Despite the stereotype of a silver-tongued salesman preying on people’s biases, rational appeals are generally the best tactic.
* After rationality, the best tactics are types of ingratiation, including flattery and acting modest.

Ng et al. performed a metastudy of over 200 individual studies of objective and subjective career success. Here are the variables they found best correlated with salary:

| Predictor | Correlation |
|---|---|
| Political Knowledge & Skills | 0.29 |
| Education Level | 0.29 |
| Cognitive Ability (as measured by standardized tests) | 0.27 |
| Age | 0.26 |
| Training and Skill Development Opportunities | 0.24 |
| Hours Worked | 0.24 |
| Career Sponsorship | 0.22 |

(all significant at p = .05) (For reference, the “Big 5” personality traits all have a correlation under 0.12.) Before we go on, a few caveats: while these correlations are significant and important, none are overwhelming (the authors cite Cohen as saying the range 0.24-0.36 is “medium” and correlations over 0.37 are “large”). Also, in addition to the usual correlation/causation concerns, there is lots of cross-correlation: e.g. older people might have greater political knowledge but less education, thereby confusing things. For a discussion of moderating variables, see the paper itself.

Career Sponsorship

There are two broad models of career advancement: contest-mobility and sponsorship-mobility. They are best illustrated with an example. Suppose Peter and Penelope are both equally talented entry-level employees. Under the contest-mobility model, they would both be equally likely to get a raise or promotion, because
9506961e-3ac8-45f6-8c23-ec97b783b87b
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
A brief review of the reasons multi-objective RL could be important in AI Safety Research

*By Ben Smith,* [*Roland Pihlakas*](https://www.lesswrong.com/users/roland-pihlakas)*, and* [*Robert Klassert*](https://www.lesswrong.com/users/klaro)

*Thanks to Linda Linsefors, Alex Turner, Richard Ngo, Peter Vamplew, JJ Hepburn, Tan Zhi-Xuan, Remmelt Ellen, Kaj Sotala, Koen Holtman, and Søren Elverlin for their time and kind remarks in reviewing this essay. Thanks to the organisers of the AI Safety Camp for incubating this project from its inception and for connecting our team.*

For the last 9 months, we have been investigating the case for a multi-objective approach to reinforcement learning in AI Safety. Based on our work so far, we're moderately convinced that multi-objective reinforcement learning should be explored as a useful way to help us understand ways in which we can achieve safe superintelligence. We're writing this post to explain why, to inform readers of the work we and our colleagues are doing in this area, and to invite critical feedback about our approach and about multi-objective RL in general.

We were first attracted to the multi-objective space because human values are inherently multi-objective--in any number of frames: deontological, utilitarian, and virtue ethics; egotistical vs. moral objectives; maximizing life values including hedonistic pleasure, eudaemonic meaning, or the enjoyment of power and status. AGI systems aiming to solve for human values are likely to be multi-objective themselves: if not by explicit design, then because multi-objective structure would emerge from learning about human preferences.

As a first pass at technical research in this area, we took a commonly-used example, the "BreakableBottles" problem, and showed that for low-impact AI, an agent could more quickly solve this toy problem if it uses a conservative but flexible trade-off between alignment and performance values, compared to using a thresholded alignment system that maximizes a certain amount of alignment and only then maximizes performance. Such tradeoffs will be critical for understanding the conflicts between more abstract human objectives that a human-preference-maximizing AGI would encounter.

To send feedback you can (a) contribute to discussion by commenting on this forum post; (b) [send feedback anonymously](https://forms.gle/1Lz6siEMG1h3neMX8); or (c) directly send feedback to Ben ([benjsmith@gmail.com](mailto:benjsmith@gmail.com)), Roland ([roland@simplify.ee](mailto:roland@simplify.ee)), or Robert ([robertklassert@pm.me](mailto:robertklassert@pm.me)).

***What is multi-objective reinforcement learning?***
-----------------------------------------------------

In reinforcement learning, an agent learns which actions lead to reward, and selects them. Multi-objective RL typically describes games in which an agent selects an action based on its ability to fulfill more than one objective. In a low-impact AI context, objectives might be "make money" and "avoid negative impacts on my environment". At any point in time, an agent can assess the value of each action by its ability to fulfill each of these objectives. The values of each action in terms of each objective make up a value vector. An area of focus for research in multi-objective RL is how to combine that vector into a single scalar value representing the overall value of an action, to be compared to its alternatives, or sometimes, how to compare the vectors directly.
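To make the scalarisation step concrete, here is a minimal sketch (ours, not taken from any of the cited papers; the Q-values and weights are invented for illustration) of a value vector per action being collapsed by a linear weighting and used to pick an action:

```python
import numpy as np

# Invented Q-values for 3 candidate actions against 2 objectives,
# e.g. a "primary" objective and a "safety" objective. In a real agent
# these would be learned from experience, not hard-coded.
q_values = np.array([
    [5.0, -2.0],   # action 0: strong on primary, mildly unsafe
    [3.0,  0.0],   # action 1: decent on primary, neutral on safety
    [1.0,  1.0],   # action 2: weak on primary, positively safe
])

weights = np.array([1.0, 2.5])  # assumed relative importance of the objectives

# Linear scalarisation: collapse each action's value vector to one number,
# then act greedily on the scalarised values.
scalar_values = q_values @ weights   # -> [0.0, 3.0, 3.5]
best_action = int(np.argmax(scalar_values))
print(best_action)                   # -> 2: the safety weighting flips the choice
```

Much of the design space discussed in the rest of the post is about what to do instead of that naive weighted sum.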
Agents learn the consequences of each action in terms of each of their objectives, and actions are evaluated based on their consequences with respect to each objective. It has previously been argued [(Vamplew et al., 2017) that human-aligned artificial intelligence is a multi-objective problem](https://link.springer.com/article/10.1007/s10676-017-9440-6). Objectives can be combined through various means, such as requiring a thresholded minimum for one objective while maximizing another, or through some kind of [non-linear weighted combination](https://drive.google.com/file/d/1qufjPkpsIbHiQ0rGmHCnPymGUKD7prah/view) of each objective into a single reward function. At a high level, this is as simple as combining the outputs from possibly transformed individual objective rewards and selecting an action based on that combination. Exploring how to combine objectives so that the combination embeds principles we care about, like conservatism, is a primary goal for multi-objective RL research and a key reason we value multi-objective RL research as distinct from single-objective RL.

It seems to us there are some [good reasons](https://link.springer.com/article/10.1007/s10676-017-9440-6) to explore multi-objective RL for applications in aligning superintelligence. The three main reasons are the potential to reduce Goodharting, the parallels with biological human intelligence and broader philosophical and societal objectives, and the potential to better understand systems that may develop multiple objectives even from a single final goal. There are also some fundamental problems that need to be solved if it is to be useful, and although this essay primarily addresses potential opportunities, we touch on a few challenges we've identified in a section below. Overall, we think using pluralistic value systems in agents has so far received too little attention.

***Multi-objective RL might be seen as an extension of Low-impact RL***
-----------------------------------------------------------------------

Low-impact RL aims to set two objectives for an RL agent. The first, the *Primary* objective, is to achieve some goal. This could be any reasonable goal we want to achieve, such as maximizing human happiness, maximizing GDP, or just making a private profit. The second, the *Safety* objective, is to have as little impact as possible on things unrelated to the primary objective while the primary objective is achieved. Despite the names, the *Safety* objective usually has a higher priority than the *Primary* objective. The low-impact approach lessens the risk of undesirable changes by punishing *any* change that is not an explicit part of the primary objective. There are many proposals for how to define a measure of *low-impact*, e.g., deviations from the default course of the world, irreversibility, relative reachability, or attainable utility preservation.

Like low-impact RL, multi-objective RL balances an agent's objectives, but its aims are more expansive and it balances more than two objectives. We might configure a multi-objective RL agent with 'Safety' objectives as one or more objectives among many. But the aim, rather than to constrain a single objective with a Safety objective, is to constrain a range of objectives with each other. Additionally, multi-objective RL work often explores methods for non-linear scalarization, whereas low-impact RL work to date has typically used linear scalarization (see [Vamplew et al., 2021](https://linkinghub.elsevier.com/retrieve/pii/S0952197621000336)).
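As one sketch of the "thresholded" style of combination mentioned above (our own toy code, not taken from the cited papers; the threshold value is arbitrary), the agent can require the safety objective to clear a minimum bar before the primary objective is allowed to decide:

```python
import numpy as np

SAFETY_THRESHOLD = -0.5  # assumed minimum acceptable safety value

def thresholded_choice(q_values: np.ndarray) -> int:
    """q_values[action] = [primary_value, safety_value].

    Prefer actions whose safety value clears the threshold; among those,
    pick the best primary value. If no action is safe enough, fall back
    to the least-unsafe one.
    """
    primary, safety = q_values[:, 0], q_values[:, 1]
    safe_enough = safety >= SAFETY_THRESHOLD
    if safe_enough.any():
        candidates = np.where(safe_enough)[0]
        return int(candidates[np.argmax(primary[candidates])])
    return int(np.argmax(safety))

q = np.array([[5.0, -2.0], [3.0, 0.0], [1.0, 1.0]])
print(thresholded_choice(q))  # -> 1: best primary value among the safe-enough actions
```

This is the "maximize a certain amount of alignment and only then maximize performance" pattern that the conservative trade-off discussed later in the post is contrasted with.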
***Possible forms of multi-objective RL Superintelligence***
------------------------------------------------------------

* Balance priorities of different people - each person's preferences form an independent objective (but [see a previous solution](https://www.lesswrong.com/posts/gLfHp8XaWpfsmXyWZ/conservative-agency-with-multiple-stakeholders) for this).
* Balance present and predicted priorities of a single individual - a problem Stuart Russell discusses in *Human Compatible* (pp 241-245).
* Aim for a compromise or even intersection between multiple kinds of human values, preferences, or moral values.
* Better model the preferences of biological organisms, including humans, which [seem](https://psyarxiv.com/rxf7e/) [to be](https://www.projectimplicit.net/nosek/papers/GHN2009.pdf) [multi-objective](https://scholarworks.gvsu.edu/cgi/viewcontent.cgi?article=1116&context=orpc).
* Balance different forms of human preferences where these are not aligned with each other, including explicitly expressed preferences and revealed preferences.

***Reasons to implement multi-objective RL***
---------------------------------------------

* We can **reduce Goodharting by widening an agent's focus**. Multi-objective RL could optimize for a broader range of objectives, including 'human preference fulfillment', 'maximizing human happiness', 'maximizing human autonomy', 'equality or fairness', 'diversity', 'predictability, explainability, corrigibility, and interruptibility', and so on. By doing so, each objective serves as a 'safety objective' against each of the others. For instance, with all of the above objectives, we guard against the chance that, in the pursuit of achieving human preferences, an agent could cause suffering [that would violate the 'maximize human happiness' objective] or, in the pursuit of human happiness, enslave humanity in a 'gilded cage' paradise [that would violate the 'maximize human autonomy' objective]. As we note below, the flipside might be that it's harder to identify failure modes. If we identify all of the *fundamental* objectives we care about, we can configure an RL agent to learn to optimize each objective under the constraint of not harming the others. Although some might expect a single-objective agent that seeks to (for example) "fulfill human preferences" to find an appropriate balance between preferences, we are concerned that any instantiation of this goal concrete enough to be acted upon risks missing other important things we care about.
* Even if a future super-intelligence is able to bootstrap up a broad model of human preferences to optimize for from a single well-chosen objective, some nearer-term AI will not be as capable. Considering how many practical, real-world problems are multi-objective, building in appropriate multiple objectives from the start could help near-term AI find heuristically-optimal solutions. An expanding set of work and implementations of nearer-term AI could provide groundwork for the design of transformational superintelligence.
* Explicit human moral systems are multi-objective ([Graham et al, 2013](https://www.sciencedirect.com/science/article/abs/pii/B9780124072367000024)); there exists no one broadly-agreed philosophical framework for morality; human legal systems are multi-objective.
From human psychology to the legal systems humans have designed, no one objective has sufficed for system design, and explicitly acknowledging the [multiplicity of human values](https://en.wikipedia.org/wiki/Theory_of_Basic_Human_Values) might be a good path forward for an aligned AI. To put this another way, it might be that no level of super-intelligence can figure out what we "really mean" by "maximize human preferences" because "human preferences" simply has no single referent; nor is there any single or objective way to reduce multiple referents to a common currency. Subagents who 'vote' for different goals like members of a consensus-based committee have been [previously proposed](https://www.lesswrong.com/posts/3xF66BNSC5caZuKyC/why-subagents) to model path-dependent preferences.

* The priorities, preferences, life values, and basic physical needs and objectives of individual organisms [are multi-objective](https://psyarxiv.com/rxf7e/) (even though life/the evolutionary process is not), and in particular, human values--including but not limited to moral values--are themselves multi-objective.
* It might be the case that for a superintelligent agent, it is sufficient to set one broad final goal, such as "maximize human preferences", and have multiple instrumental goals that derive from that using inverse reinforcement learning or other processes. But studying multi-objective decision-making might nevertheless be helpful for learning about how an agent could balance the multiple instrumental goals that it learns.
* A single-objective system such as "fulfill human preferences" might be impossible to implement in a desired fashion. As in the previous section on *possible forms of multi-objective RL superintelligence*, the preferred answers to questions like *which humans' preferences* and *what kind of preferences* may not be singular.

***Possible problems with a research agenda in multi-objective RL***
--------------------------------------------------------------------

This essay primarily explores the *case for* exploring multi-objective RL in the context of AI Alignment, and so we haven't aspired to present a fully objective list of possible pros and cons. With that said, we have identified several potential problems which we are concerned could threaten the relevance or usefulness of a multi-objective RL agenda. In particular, these might be best seen as *plausibility* problems. They could conceivably prevent us from actually implementing a system that is capable of intelligently balancing multiple objectives.

* With more objectives, there are exponentially more combinations of ways they could combine and balance out to yield a particular outcome. Consequently, it's not clear that implementing multi-objective RL makes it easier to predict an agent's behavior.
* If there is strong conflict between values, an agent might have an incentive to [modify or eliminate some of its own values](https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b3/stable-pointers-to-value-an-agent-embedded-in-its-own-utility-function#The_Easy_vs_Hard_Problem) in order to yield a higher overall expected reward.
* The objective calibration problem: with multiple objectives representing, for instance, different values, how do we ensure that competing values are calibrated to appropriate relative numerical scales, so that one does not dominate over the others? (A toy illustration of this scale problem follows this list.)
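To see why that last point bites, here is a toy illustration (ours, with made-up numbers): if one objective happens to be reported on a scale a hundred times larger than another, a naive sum lets it dominate, and even a crude per-objective normalisation changes which action wins. Normalising over the current candidate set, as below, is itself a questionable choice; it is only meant to show that some calibration step is unavoidable.

```python
import numpy as np

# Made-up Q-values for 3 actions over 2 objectives whose raw scales differ:
# objective 0 is reported roughly in [-100, 100], objective 1 roughly in [-1, 1].
q = np.array([
    [ 40.0, -0.9],
    [-10.0,  0.8],
    [  5.0,  0.9],
])

naive = q.sum(axis=1)                           # objective 0 dominates purely by scale
zscored = (q - q.mean(axis=0)) / q.std(axis=0)  # crude per-objective rescaling
calibrated = zscored.sum(axis=1)

print(int(np.argmax(naive)))       # -> 0: the large-scale objective decides alone
print(int(np.argmax(calibrated)))  # -> 2 with these numbers: both objectives now count
```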
***Why we think multi-objective RL research will be successful***
-----------------------------------------------------------------

Multi-objective RL is already an active field of research in academia. Its focus is not primarily on AGI Alignment (although we'll highlight a few researchers within the alignment community below), and we believe that if applied further in AGI Alignment, multi-objective RL research is likely to yield useful insight. Although the objective scale calibration problem, the wireheading problem, and others are currently unsolved and are relevant to AGI Alignment, we see opportunities to make progress in these critical areas, including existing work that, in our view, makes progress on various aspects of the calibration problem ([Vamplew 2021](https://linkinghub.elsevier.com/retrieve/pii/S0952197621000336), [Turner, Hadfield-Menell, Tadepalli, 2020](https://arxiv.org/abs/1902.09725)). Peter Vamplew has been exploring multi-attribute approaches to low-impact AI and has demonstrated novel impact-based ways to trade off primary and alignment objectives. Alexander Turner and colleagues, working in low-impact AI research, use a multi-objective space to build a conservative agent that prefers to preserve attainable rewards by avoiding actions that close off options. A key area of interest is exploring how to balance, in *non-linear* fashion, a set of objectives such that some intuitively appealing outcome is achieved, and our [own workshop paper](https://drive.google.com/file/d/1qufjPkpsIbHiQ0rGmHCnPymGUKD7prah/view) is one example of this.

Even if AGI could derive all useful human objectives through a single directive to "satisfy human preferences" as a single final goal, better understanding multi-objective RL will be useful for understanding how such an AGI might balance competing priorities. That is because human preferences are multi-objective, and so even a human-preference-maximizing agent will, in an emergent sense, become a multi-objective agent, developing multiple sub-objectives to fulfill. Consequently, studying explicitly multi-objective systems is likely to provide insight into how those objectives are likely to play off against one another.

***Open questions in Multi-objective RL***
------------------------------------------

There are a number of questions *within* multi-objective reinforcement learning that are interesting to explore: this is our first attempt at sketching out a research agenda for the area. Some of these questions, like the potential problems mentioned above, could represent risks that multi-objective RL turns out to be less relevant to AI Alignment. Others are interesting and important for knowing how to apply and build multi-objective RL, but not decisive for its relevance to AI Alignment.

* What is the appropriate way to combine multiple objectives? We have proposed a [conservative non-linear transform](http://modem2021.cs.nuigalway.ie/papers/MODeM_2021_paper_7.pdf), but there are many ways to do this, and there are [many](http://arxiv.org/abs/1902.09725) other [approaches](https://www.sciencedirect.com/science/article/abs/pii/S0952197621000336?via%3Dihub) as well.
* To evaluate each action against its alternatives, should we take a combination of the action's values with respect to each objective and compare that aggregated value to the equivalent metric for other actions, or should comparison between actions occur by comparing the values with respect to the objectives lexicographically, without aggregating them first? In other words, should there be some objectives that are always of higher priority than other objectives, at least until some threshold value is reached, regardless of the values computed in these other objectives?
* Preventing wireheading of one or more objectives against the others. This is an important problem for single-objective RL as well, but for multi-objective RL it's particularly interesting, because with the wrong framework, each of the system's own objectives creates an incentive to potentially modify other objectives the system has. Would an RL agent have an incentive to turn off one or more of its objectives? There has been some previous work ([Dewey, 2011](https://intelligence.org/files/LearningValue.pdf); [Demski, 2017](https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b3/stable-pointers-to-value-an-agent-embedded-in-its-own-utility-function); [Kumar et al., 2020](https://arxiv.org/abs/2011.08820)), but the matter seems unresolved.
* Properly calibrating each objective is a key problem for multi-objective RL; setting some kind of relative scale between objectives is unavoidable. Is there a learning process that could be applied?
* Establishing a zero-point or offset for each objective, for instance, a 'default course of the world' in the case of low-impact objectives.
* Should we apply a conservative approach to prioritize non-linearly, awarding exponentially more penalty for negative changes than we award for positive changes of the same magnitude? [A concave non-linear transform](http://modem2021.cs.nuigalway.ie/papers/MODeM_2021_paper_7.pdf) can help to embed conservatism, ensuring that downside risk on each objective really does constrain overzealous action motivated by other objectives.
* One can represent the trade-off between balancing multiple objectives on a spectrum from linear expected utility through to maximizing Pareto optimality of nonlinear utility functions. Linear expected utility simply sums up values for each objective without transformation. What is the right balance?
* Decision paralysis: with sufficiently many objectives, and sufficiently conservative tuning, an agent might never take an action. That might be a feature rather than a bug, but how would we utilize it? Instances of decision paralysis sometimes seem to be a good place for asking for human feedback or choice.
* Discounting the future.
* Using both unbounded and upper-bounded objectives simultaneously (the latter include homeostatic objectives).
* If using a nonlinear transformation, should it be applied to the individual rewards at each timestep (as a utility function) or to the Q values?

Which of these problems seems particularly important to you?

***Some current work by us and others***
----------------------------------------

At the Multi-objective Decision-Making Workshop 2021, we recently presented [work](https://drive.google.com/file/d/1qufjPkpsIbHiQ0rGmHCnPymGUKD7prah/view) on multi-objective reinforcement learning describing a concave non-linear transform that achieves a conservative outcome by magnifying possible losses more than possible gains.
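As a simplified illustration of that kind of concave, loss-averse transform (the functional form below is an invented toy, not necessarily the one used in the workshop paper), each objective's value can be passed through a function with diminishing returns on gains and exponentially magnified penalties on losses before being summed:

```python
import numpy as np

def conservative_transform(v: np.ndarray) -> np.ndarray:
    """Toy concave transform: log-like diminishing returns on gains,
    exponentially magnified penalties on losses. Illustrative only."""
    out = np.empty_like(v, dtype=float)
    gains = v >= 0
    out[gains] = np.log1p(v[gains])          # gains flatten out
    out[~gains] = 1.0 - np.exp(-v[~gains])   # losses blow up fast
    return out

def choose_action(q_values: np.ndarray) -> int:
    """q_values has shape (n_actions, n_objectives)."""
    return int(np.argmax(conservative_transform(q_values).sum(axis=1)))

q = np.array([
    [9.0, -3.0],   # large gain on one objective, a real loss on another
    [2.0,  0.5],   # modest gains on both
])
print(choose_action(q))  # -> 1: the loss of 3 hurts more than the gain of 9 helps
```

Relative to the linear sum shown earlier, the same agent becomes reluctant to accept a large loss on any single objective in exchange for gains elsewhere, which is the conservative behaviour described here.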
A number of researchers presented various projects on multi-objective decision-making. Many of these could have broader relevance for AGI Alignment, and we believe the implications of work like this for AGI Alignment should be more explicitly explored. One particularly relevant paper was "[Multi-Objective Decision Making for Trustworthy AI](http://modem2021.cs.nuigalway.ie/papers/MODeM_2021_paper_2.pdf)" by Mannion, Heintz, Karimpanal, and Vamplew. The authors explore why multi-objective work makes an AI trustworthy; we believe their arguments likely apply as much to transformative AGI as they do to present-day AI systems.

In writing up our work, "Soft maximin approaches to Multi-Objective Decision-making for encoding human intuitive values", we were interested in multi-objective decision-making because of the potential for an agent to balance conflicting moral priorities. To do this, we wanted to design an agent that would prioritize avoiding 'moral losses' over seeking 'moral gains', without being paralysed into inaction if all options involved tradeoffs, as moral choices so often do. So, we explored a conservative transformation function that prioritizes avoiding losses over accruing gains, imposing diminishing returns on larger gains but computing exponentially larger negative utilities as costs grow larger. This model incentivizes an agent to balance each objective conservatively. Past work had designed agents that use a thresholded value for the alignment objective, and only optimize for performance once they have become satisfactory on alignment. In many circumstances it might be desirable for agents to learn to optimize both objectives simultaneously, and our method provides a way to do that, while actually yielding superior performance on alignment in some circumstances.

***Current directions***
------------------------

Our group, as well as many of the other presenters from that workshop, are publishing our ideas in a special issue of [Autonomous Agents and Multi-Agent Systems](https://www.springer.com/journal/10458), which comes out in April 2022. We are currently exploring appropriate calibration for objectives in a set of toy problems introduced by [Vamplew et al. (2021)](https://www.sciencedirect.com/science/article/abs/pii/S0952197621000336?via%3Dihub). In particular, we're interested in the relative performance of a continuous non-linear transformation function compared to a discrete, thresholded transformation function on each of the tasks, as well as how robust each function's performance is to variance in the task and its reward structure.

***What do you think?***
------------------------

We invite critical feedback about our approach to this topic, about our potential research directions, and about the broad relevance of multi-objective reinforcement learning to AGI Alignment. We will be very grateful for any comments you provide below! Which of the open questions in multi-objective AI do you think are most compelling or important for AGI Alignment research? Do some seem irrelevant or trivial? Are there others we have missed that you believe are important?
f27c832e-6b75-4700-936e-25fe811a9203
trentmkelly/LessWrong-43k
LessWrong
How common is it for one entity to have a 3+ year technological lead on its nearest competitor?

I'm writing a follow-up to my blog post on soft takeoff and DSA, and I am looking for good examples of tech companies or academic research projects that are ~3+ years ahead of their nearest competitors in the technology(ies) they are focusing on.

Exception: I'm not that interested in projects that are pursuing some niche technology, such that no one else wants to compete with them.

Also: I'm especially interested in examples that are analogous to AGI in some way, e.g. because they deal with present-day AI or because they have a feedback loop effect.

Even better would be someone with expertise in the area being able to answer the title question directly. Best of all would be some solid statistics on the matter.

Thanks in advance!
af993e51-a69b-4e20-b2c8-7bc900bd2dbf
trentmkelly/LessWrong-43k
LessWrong
Logical Counterfactuals & the Cooperation Game

Update: Originally I set the utility for AB or BA to -10, -10; but I've now realised that unnecessarily complicates the problem.

----------------------------------------

Logical counterfactuals (as in Functional Decision Theory) are more about your state of knowledge than the actual physical state of the universe. I will illustrate this with a relatively simple example.

Suppose there are two players in a game where each can choose A or B, with the payoffs for the given combinations as follows:

AA: 10, 10

BB, AB or BA: 0, 0

Situation 1: Suppose you are told that you will make the same decision as the other player. You can quickly conclude that A provides the highest utility.

Situation 2: Suppose you are told that the other player chooses A. You then reason that A provides the highest utility.

Generalised Situation: This situation combines elements of the previous two. Player 1 is an agent that will choose A, although this is not known by Player 2 unless option b) in the next sentence is true. Player 2 is told one of the following:

a) They will inevitably make the same decision as Player 1

b) Player 1 definitely will choose A

If Player 2 is a rational timeless agent, then they will choose A regardless of which one they are told. This means that both agents will choose A, making both a) and b) true statements.

Analysis: Consider the Generalised Situation, where you are Player 2. Comparing the two cases, we can see that the physical situation is identical, apart from the information you (Player 2) are told. Even the information Player 1 is told is identical. But in one situation we model Player 1's decision as counterfactually varying with yours, while in the other situation, Player 1 is treated as a fixed element of the universe.

On the other hand, if you were told that the other player would choose A and that they would make the same choice as you, then the only choice compatible with that would be to choose A. We could easily end up in all kinds of tan
d200e843-3b11-4d76-95c2-d6af69c0fb36
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Meta Meetup

Discussion article for the meetup : Washington, D.C.: Meta Meetup

WHEN: 29 January 2017 03:30:00PM (-0500)

WHERE: Donald W. Reynolds Center for American Art and Portraiture

We will be meeting in the courtyard to talk about the Less Wrong DC group - what people like and what people would like to change. Feel free to suggest new meetup topics!

Upcoming meetups:

* Feb. 5: Typical Mind Fallacy
* Feb. 12: Fun & Games

Discussion article for the meetup : Washington, D.C.: Meta Meetup
b74da30e-1e43-44d7-a4c4-ff7f66a9e690
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
If we solve alignment, are we sure of a good future?

If by "solve alignment" you mean build a sufficiently performance-competitive superintelligence which has the goal of [coherent extrapolated volition](https://www.lesswrong.com/tag/coherent-extrapolated-volition) or something else which captures human values, then yes. It would be able to deploy technology near the limits of physics (e.g. [atomically precise manufacturing](https://en.wikipedia.org/wiki/Atomically_precise_manufacturing)) to solve most of the other problems which face us, and steer the future towards a highly positive path for [perhaps many billions of years](https://en.wikipedia.org/wiki/Timeline_of_the_far_future) until the [heat death of the universe](https://en.wikipedia.org/wiki/Heat_death_of_the_universe) (barring more esoteric existential risks like encounters with advanced hostile civilizations, [false vacuum decay](https://en.wikipedia.org/wiki/False_vacuum_decay), or [simulation shutdown](https://arxiv.org/ftp/arxiv/papers/1905/1905.05792.pdf)).

However, if you only have alignment of a superintelligence to a single human, you still have the risk of misuse, so this should be at most a short-term solution. For example, what if Google creates a superintelligent AI, and it listens to the CEO of Google, and it's programmed to do everything exactly the way the CEO of Google would want? Even assuming that the CEO of Google has no hidden unconscious desires affecting the AI in unpredictable ways, this gives one person a lot of power.