id: stringlengths (36 to 36)
source: stringclasses (15 values)
formatted_source: stringclasses (13 values)
text: stringlengths (2 to 7.55M)
91d26700-bbf6-4d4e-94d5-4b9e336fa7ef
trentmkelly/LessWrong-43k
LessWrong
Five Areas I Wish EAs Gave More Focus

The Effective Altruism community has been an unexpected and pleasant surprise. I remember wishing there was a group out there that shared at least one of my ideals. Instead, I found one that shares three: global reduction of suffering, rationality, and longtermism. However, with each conference I attend, post I read on the forum, and organization I see created, I notice that most fall into a few distinct categories: global development/health, animal welfare, biosecurity, climate change, nuclear risk/global conflict, and AI Safety. Don't get me wrong, these are some of the most important areas to possibly be working on (I'm currently focusing 90% of my energy on AI Safety myself). But I think there are at least five other areas that could benefit substantially from a small growth in interest.

Interplanetary Species Expansion

This might be the biggest surprise on the list. After all, space exploration is expensive and difficult. But there are very few people out there actually working on how to change humanity from being a Single Point of Failure System. If we are serious about longtermism and truly decreasing x-risk, this might be one of the most crucial achievements needed. Any x-risk is most likely greatly reduced by this, perhaps even AGI*. The sooner this process begins, the greater the reduction in risk, since this will be a very slow process. One comparatively low-cost research area is studying biospheres and how a separate ecosystem and climate could be created in complete isolation. And this can be studied on Earth. It's been decades since anyone attempted to create a closed ecological system, and advances here could even improve our chances of surviving on Earth if the climate proves inhospitable.

Life Extension

~100,000 people die from age-related diseases every day. ~100 billion people have died in our history. (Read that again.) Aging causes an immense amount of suffering, both to those who suffer from it for years, a
1942961d-cacf-40f0-859a-2ff275414ba4
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Generalization of the Solomonoff Induction to Accuracy - Is it possible? Would it be useful?

I appreciate Solomonoff's success in generalizing Occam's razor from just selecting the simplest hypothesis/model to assigning probabilities to each of them. But, for instance, the position (motion) of any body ($x$, $y$, $z$) can be fit by the inequation $x(t), y(t), z(t) \in (-\infty, \infty)$ instead of writing actual Newton's laws (and adding very tight intervals for constants), and it is simpler, so its particular Solomonoff probability would be greater (if I understand Solomonoff Induction correctly), even though the model I stated above is apparently useless. Or is SI meant to be used just for exact models? Then it might be completely useless, since almost nothing in this world can be fitted exactly, according to my worldview. Can you explain the above-mentioned issues, which likely reside in my incomprehension?
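One way to make the worry concrete: Solomonoff induction does not score a hypothesis by simplicity alone; it weights each (computable, probabilistic) model by roughly $2^{-K(h)} \cdot P(\text{data} \mid h)$, so a vacuous model saves bits on description length but loses them many times over on likelihood once there is enough data. A toy sketch with invented numbers (real Solomonoff induction is uncomputable; the description lengths and likelihoods below are illustrative assumptions, not anything derivable from the post):

```python
# Toy sketch (invented numbers; real Solomonoff induction is uncomputable).
# SI weights each hypothesis h by roughly 2^(-K(h)) * P(data | h): a prior
# favoring short programs, multiplied by how well h actually predicts the data.
# The vacuous "x(t) can be anything" model is short but spreads its prediction
# over a huge range, so its per-observation likelihood is tiny.

import math

n_observations = 100

# (description_length_bits, per_observation_likelihood) -- illustrative only
hypotheses = {
    "x(t), y(t), z(t) in (-inf, inf)": (10, 1e-6),   # simple, nearly vacuous
    "Newton's laws + fitted constants": (500, 0.9),  # complex, sharply predictive
}

for name, (bits, likelihood) in hypotheses.items():
    # log2 of (prior * likelihood^n); higher is better
    log2_weight = -bits + n_observations * math.log2(likelihood)
    print(f"{name}: log2 posterior weight ~= {log2_weight:.0f}")

# Output: the vacuous model scores ~ -2003, the Newtonian one ~ -515.
# Simplicity only dominates when hypotheses fit the data about equally well.
```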
07eb6a7d-4453-4ba8-a265-b00af5875741
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The abruptness of nuclear weapons

Nuclear weapons seem like the marquee example of rapid technological change after crossing a critical threshold. Looking at the numbers, it seems to me like:

* During WWII, and probably for several years after the war, the cost / TNT equivalent for manufacturing nuclear weapons was comparable to the cost of conventional explosives ([AI Impacts](https://aiimpacts.org/discontinuity-from-nuclear-weapons/) estimates a manufacturing cost of $25M each).
* Amortizing out the cost of the Manhattan project, dropping all nuclear weapons produced in WWII would be cost-competitive with traditional firebombing (which [this thesis](https://ses.library.usyd.edu.au/bitstream/2123/664/2/adt-NU20050104.11440202whole.pdf) estimates at 5k GBP (=$10k?) / death, vs. ~100k deaths per nuclear weapon), and by 1950, when stockpiles had grown to >100 weapons, it was an order of magnitude cheaper. (Nuclear weapons are much easier to deliver, and at that point the development cost was comparable to manufacturing cost.)

Separately, it seems like a 4 year lead in nuclear weapons would represent a decisive strategic advantage, which is much shorter than for any other technology. My best guess is that a 2 year lead wouldn't do it, but I'd love to hear an assessment of the situation from someone who understands the relevant history/technology better than I do.

So my understanding is: it takes about 4 years to make nuclear weapons and another 4 years for them to substantially overtake conventional explosives (against a 20 year doubling time for the broader economy). Having a 4 year lead corresponds to a decisive strategic advantage. Does that understanding seem roughly right? What's most wrong or suspect? I don't want to do a detailed investigation since this is pretty tangential to my interests, but the example is in the back of my mind slightly influencing my views about AI, and so I'd like it to be roughly accurate or tagged as inaccurate. Likely errors: (a) you can get a decisive strategic advantage with a smaller lead, (b) cost-effectiveness improved more rapidly after the war than I'm imagining, or (c) those numbers are totally wrong for one reason or another.

I think the arguments for a nuclear discontinuity are really strong, much stronger than for any other technology. Physics fundamentally has a discrete list of kinds of potential energy, which have different characteristic densities, with a huge gap between chemical and nuclear energy densities. And the dynamics of war are quite sensitive to energy density (nuclear power doesn't seem to have been a major discontinuity). And the dynamics of nuclear chain reactions predictably make it hard for nuclear weapons to be "worse" in any way other than being more expensive (you can't really make them cheaper by making them weaker or less reliable). So the [continuous progress narrative](https://sideways-view.com/2018/02/24/takeoff-speeds/) isn't making a strong prediction about this case. (Of course, progress in nuclear weapons involves large-scale manufacturing. Today the economy grows at roughly the same rate as in 1945, but information technology can change much more rapidly.)
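For concreteness, a back-of-the-envelope reconstruction of the cost comparison above. The ~$2B total Manhattan Project figure is my assumption (a commonly cited number, not stated in the post); the other inputs are the post's rough estimates, so treat every output as accurate only to within a factor of a few.

```python
# Back-of-the-envelope version of the cost comparison above. The ~$2B total
# Manhattan Project cost is an assumed, commonly cited figure (not from the
# post); deaths-per-weapon and the firebombing benchmark are the post's.

project_cost = 2e9              # assumed total Manhattan Project cost, USD
deaths_per_weapon = 1e5         # ~100k deaths per WWII nuclear weapon
firebomb_cost_per_death = 1e4   # ~5k GBP ~= $10k per death (thesis estimate)

for label, n_weapons in [("WWII, amortized over ~3 weapons", 3),
                         ("1950, stockpile >100 weapons", 100)]:
    cost_per_death = project_cost / n_weapons / deaths_per_weapon
    print(f"{label}: ~${cost_per_death:,.0f}/death "
          f"(firebombing: ~${firebomb_cost_per_death:,.0f}/death)")

# WWII: ~$6,667/death, i.e. roughly cost-competitive with firebombing;
# 1950: ~$200/death, consistent with "an order of magnitude (or more) cheaper".
```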
df693628-eccf-4f65-bb6f-a460a19d7121
trentmkelly/LessWrong-43k
LessWrong
Thoughts on the Singularity Institute (SI) This post presents thoughts on the Singularity Institute from Holden Karnofsky, Co-Executive Director of GiveWell. Note: Luke Muehlhauser, the Executive Director of the Singularity Institute, reviewed a draft of this post, and commented: "I do generally agree that your complaints are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI). I am working to address both categories of issues." I take Luke's comment to be a significant mark in SI's favor, because it indicates an explicit recognition of the problems I raise, and thus increases my estimate of the likelihood that SI will work to address them. September 2012 update: responses have been posted by Luke and Eliezer (and I have responded in the comments of their posts). I have also added acknowledgements. The Singularity Institute (SI) is a charity that GiveWell has been repeatedly asked to evaluate. In the past, SI has been outside our scope (as we were focused on specific areas such as international aid). With GiveWell Labs we are open to any giving opportunity, no matter what form and what sector, but we still do not currently plan to recommend SI; given the amount of interest some of our audience has expressed, I feel it is important to explain why. Our views, of course, remain open to change. (Note: I am posting this only to Less Wrong, not to the GiveWell Blog, because I believe that everyone who would be interested in this post will see it here.) I am currently the GiveWell staff member who has put the most time and effort into engaging with and evaluating SI. Other GiveWell staff currently agree with my bottom-line view that we should not recommend SI, but this does not mean they have engaged with each of my specific arguments. Therefore, while the lack of recommendation of SI is something that GiveWell stands behind, the specific arguments in this post should be attributed only to me, not
0d2052e5-55bf-4be1-948a-93c9b2764d06
trentmkelly/LessWrong-43k
LessWrong
Replacing the Big Air Purifier

Around Christmas 2021 I put together a big air purifier on top of our bookcase. Somehow I didn't blog about it or take a picture, but here it is in the background of a birthday photo:

The basic idea was that I didn't have a good place to put a filter cube, commercial air purifiers were scarce, and a different configuration of filters seemed like it could work well. It's a box fan with eleven filters: five along the side, five along the top, and one at the end. It glows because I built a regular light bulb into it, though without any means of replacing the bulb when it died (which it eventually did).

The design relies on a good seal with the wall, but when I initially set it up I was worried about peeling the paint off, so I used masking tape there. Which eventually pulled away:

Which means it's probably been doing roughly nothing for a while. I considered fixing it, but there were other issues with the design: the box fan was pretty loud on "high" and the fan didn't point in a sensible direction. And at this point commercial purifiers are much more widely available. So I ordered three AP-1518R purifiers, which are very similar to (and filter-compatible with!) the AP-1512HH we already had, and replaced the big purifier:

I've laid the purifiers on their backs, so the fresh air circulates towards people's heads and I can reach the controls from the ground. I'm not sure whether this is a good idea: the internal squirrel cage fan would have been engineered expecting loads from a different direction, and this might not be good for it? Any guesses?

I previously tested one of these against a filter cube and estimated a CADR of 219 CFM on high, 90% of what that filter cube gave on high. When the big purifier was new and sealed properly it was probably in this range or a bit worse: having nearly 3x the filters of a standard filter cube should have given less restriction on the airflow, but there's probably highly diminishing returns and the position of the outflow was b
7a3b4bf9-5be0-4096-aa14-5410817e02c3
trentmkelly/LessWrong-43k
LessWrong
The Singularity Institute is expanding its research program at very little cost. Boo-yah! Three new research associates. Link to the announcement.
df1f3b84-7f60-4fcf-aa31-5eff1cab7afd
trentmkelly/LessWrong-43k
LessWrong
[Meta] Finding free energy in karma As I've been posting here on LessWrong for a few months now, I wanted to share two things I've noticed about the karma system. These are ad-hoc observations, not rigorous empirical results, and while they might feel like ways to "cheat" the system, I hope by making them public and known, it serves to make the "market" for karma more efficient instead. 1. [Fairly confident] Posts appear on the front page unevenly throughout the week. As best I can tell there's a lull on Sunday and a spike on Monday/Tuesday. I'm not sure if this is due to unevenness in when people write, or unevenness in when mods promote posts to the front page. Regardless, since your post's karma depends in part on how long it remains on the front page (which provides more eyeballs), and that depends in part on what other posts it's competing against, there is opportunity here. Consider delaying publishing your post to a less busy time of the week. 2. [Less confident] Automatic cross-posting is a self-fulfilling karma signal. Having automatic cross-posting set up requires both sufficient author interest and sufficient moderator trust, since it's still a fairly manual process. This means that auto-x-posting is a fairly significant indicator of status/karma in the community, and such posts tend to do better in part as a self-fulfilling prophecy (of course they often do better because they're unusually good too - the karma system isn't broken, just slightly unbalanced). There's no way to replicate the automatic-x-post indicator in a manual post, but even just putting "[Cross-posted from MyBlog]" at the top of the text seems to provide a small boost.
ab1bdc99-ea26-4cc8-af60-f374cc02de54
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Francisco Meetup: Board Games Discussion article for the meetup : San Francisco Meetup: Board Games WHEN: 17 April 2017 06:15:00PM (-0700) WHERE: 1769 15th St, San Francisco, CA 94103-3333, United States We’re trying a new format this week! Some people don’t like board games, and based on our straw poll at the recent meta meetup, most of those people seem to like group singing. So we’re gonna try doing a meetup with both. We have a variety of games, but feel free to bring your own. We’ll also bring lyrics for some songs to sing together. If there’s anything you really like singing and think other people might know, please feel free to bring it or a link to lyrics for it that we can pull up. A rhythm instrument might be nice to bring, iff. you know the particular songs we’re doing well enough to keep up easily. For help getting into the building, please call (or text, with a likely-somewhat-slower response rate): 301-458-0764. Format: We meet and start hanging out at 6:15, but don’t officially start doing the meetup topic until 6:45-7 to accommodate stragglers. Usually there is a food order that goes out before we start the meetup topic. About these meetups: The mission of the SF LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome. We explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, and we certainly don’t want them to inhibit it. Discussion article for the meetup : San Francisco Meetup: Board Games
f801bff4-0290-4404-8820-adfe238d6037
trentmkelly/LessWrong-43k
LessWrong
Motivation Theory This is a brief sketch of a theory of motivation. Motivation drives action. It also causes the experiences of pleasure and pain. We experience pain when motivation increases, and pleasure when motivation decreases. Motivation is generated by emotions, such as hunger, thirst and lust. Emotions generate motivation, and motivation generates action. Each emotion has a biological function. Hunger motivates eating. Thirst motivates drinking water. Lust motivates sex. Some emotions react to stimuli. For example, if you are pricked with a pin, then you will experience pain. The pin activates sensory receptors in the skin (nociceptors), which send a signal to the brain, where it generates an emotional reaction. That reaction will motivate you to act in a way that avoids the noxious stimulus. Behavior can be divided into two broad categories: avoidance and pursuit. Some emotions generate avoidance behaviors. Fear is a generic emotion that motivates avoidance. Other emotions, such as hunger, generate pursuit behaviors, such as seeking food. Emotions that motivate pursuit tend to build up over time, while emotions that motivate avoidance tend to be immediate reactions to stimuli. However, because we have complex, goal-directed behavior, we can act in advance to prevent future danger and harm, rather than just reacting to it. Emotions are heuristic problem recognizers. They recognize biological problems, and generate the motivation to solve them. Why do emotions generate motivation instead of action? Why is there an intermediate step? In some cases, a stimulus directly generates action, such as shivering when you are cold, or jerking your hand away from a hot stove. However, that stimulus-response mechanism can only generate simple behaviors. (see the rest of the post in the link) Although I'm not the author of this post (a friend of mine wrote it), I have created a PDF version of the essay that has a table of contents and headers to make it even easier to read.
56fc122e-9056-4642-937c-acb6316ffa2a
trentmkelly/LessWrong-43k
LessWrong
[Link] Cause Prioritization - Paul Christiano 1:15h

Paul Christiano speaks about cause prioritization within Effective Altruism. Melbourne.

Cause Prioritization: Many effective altruists place a high degree of importance on working out what the most important cause to support is. This is one way that effective altruism is distinguishable from other traditional altruism or charity.

http://www.youtube.com/watch?v=uAloUCRVa5I
9703f11e-51b1-4da4-8a62-9db43784df6a
trentmkelly/LessWrong-43k
LessWrong
IDA 1-4/14: Problem Statement Every Thursday for 4 weeks, we will be posting lessons about Iterated Distillation and Amplification. They're largely based on Paul Christiano's sequence here on LW. He graciously allowed us to use his work. Note that access to the lessons requires creating an account here. Have a nice day!
5f7c5001-ab6d-4ae5-81ca-a6d87479b55f
trentmkelly/LessWrong-43k
LessWrong
Spaced Repetition literature review contest submissions: August 1st deadline This is a reminder that the deadline for submissions to the Spaced Repetition literature review contest is August 1st (about 2 weeks from now). If you have questions or comments post them in the original thread or email me (jsalvatier@gmail.com).  The Seattle meetup group is looking forward to reviewing your submissions!
db35c0c8-80e9-4c7e-bd3b-36535ae61c79
trentmkelly/LessWrong-43k
LessWrong
Is GitHub Copilot in legal trouble?

This is basically a crosspost for https://githubcopilotinvestigation.com/. I noticed that some folks in California are considering a lawsuit against Microsoft/OpenAI.

tl;dr:

* Copilot is trained on open source software.
* Copilot doesn't respect the licensing agreements of that software.
* Copilot doesn't have a clear fair use argument for doing so.
* By accepting Copilot suggestions, you are potentially violating licensing agreements yourself.

Sections of particular interest:

> [W]e inquired privately with Friedman and other Microsoft and GitHub representatives in June 2021, asking for solid legal references for GitHub's public legal positions … They provided none. - Software Freedom Conservancy

> "You are responsible for ensuring the security and quality of your code. We recommend you take the same precautions when using code generated by GitHub Copilot that you would when using any code you didn't write yourself. These precautions include rigorous testing, IP [(= intellectual property)] scanning [my emphasis], and tracking for security vulnerabilities." - https://docs.github.com/en/copilot/overview-of-github-copilot/about-github-copilot#using-github-copilot
5308db99-08e4-492f-9780-2f21d55dd5f0
trentmkelly/LessWrong-43k
LessWrong
UnTAPed Learning [Epistemic Status: Request for Proposal] For anyone who is familiar with CFAR's teaching program, the concept of the TAP will be very familiar. It's an acronym that expands either to Trigger-Action Pattern (for naturally-occurring instances) or Trigger-Action Plan (for purposefully-implemented instances); the latter is known academically as "implementation intentions". While there is room for contest for the title of "Most Important CFAR Skill" in terms of which one has the most impact of regular use, in terms of driving adoption of techniques, TAPs are the foundation of the entire curriculum. From simple skills to complex ones, the default flow of practicing a skill is "install a TAP to practice this", and the default call to action of most CFAR classes is "spend five minutes brainstorming TAPs to install". They're extremely useful...if they work at all. For me, they don't. I struggled to find any example at all of a trigger-action patterns in my own life, and even following the extreme repetition, hard-mode path to installing a TAP, found that it vanished as soon as I stopped consciously thinking about keeping it up. I liked most of the techniques I learned at my workshop. Many seemed quite useful to apply. I remember approximately none of them, because, deprived of the core tool to maintain them, every piece of practice looked like It's a strong foundation, for most people. But it's still a foundation; if it doesn't hold, nothing else is any use. So: What other things have people devised to turn a technique from a sketch of ideas to something used often enough that it's a proper skill? As a starting point, things that I use in my particular case: Anki. Beeminder. Calendar reminders. These are all reasonably good at their particular, time-based niches. But none of them work well for irregularly-occurring opportunities or for fuzzier, less specific types of practice. An alternate question, which seems to me to be similar in applicability, is: How woul
4fd2593d-989b-4e67-a402-b494bdf4528d
trentmkelly/LessWrong-43k
LessWrong
Are there any preventive steps someone can take after being exposed to strep throat? Googling this question tells me that the best way to avoid strep is by washing your hands often. But say, hypothetically, that someone’s girlfriend has told him that she has strep the day before Valentine’s Day. Knowing he’ll be exposed, are there any steps he should take to reduce the chance and severity of infection? (His hypothetical tonsils were removed long ago). Since this person is hypothetical, advice to avoid his girlfriend on Valentine’s Day will not reach him. For example, for COVID, there are reasons to think that taking Zinc lozenges (https://www.lesswrong.com/posts/5DKqK3hEzzBoGF47C/consider-taking-zinc-every-time-you-travel) and vitamin D (https://astralcodexten.substack.com/p/covidvitamin-d-much-more-than-you ) might help. Strep throat is not a respiratory virus, but rather a bacterial infection, so does the same advice apply? Or are there other things that seem promising to try? Antibiotics are a natural thought, but is taking antibiotics before any sign of infection reasonable? And if so, would a doctor even prescribe preventatively?
e658840f-6b83-44ce-8530-fdb56763bb55
StampyAI/alignment-research-dataset/arxiv
Arxiv
A Simple Framework for Contrastive Learning of Visual Representations

1 Introduction
---------------

Learning effective visual representations without human supervision is a long-standing problem. Most mainstream approaches fall into one of two classes: generative or discriminative. Generative approaches learn to generate or otherwise model pixels in the input space (Hinton et al., [2006](#bib.bib24); Kingma & Welling, [2013](#bib.bib29); Goodfellow et al., [2014](#bib.bib17)). However, pixel-level generation is computationally expensive and may not be necessary for representation learning. Discriminative approaches learn representations using objective functions similar to those used for supervised learning, but train networks to perform pretext tasks where both the inputs and labels are derived from an unlabeled dataset. Many such approaches have relied on heuristics to design pretext tasks (Doersch et al., [2015](#bib.bib10); Zhang et al., [2016](#bib.bib58); Noroozi & Favaro, [2016](#bib.bib41); Gidaris et al., [2018](#bib.bib16)), which could limit the generality of the learned representations. Discriminative approaches based on contrastive learning in the latent space have recently shown great promise, achieving state-of-the-art results (Hadsell et al., [2006](#bib.bib19); Dosovitskiy et al., [2014](#bib.bib13); Oord et al., [2018](#bib.bib42); Bachman et al., [2019](#bib.bib1)).

![](https://media.arxiv-vanity.com/render-output/7233960/x1.png)

Figure 1: ImageNet top-1 accuracy of linear classifiers trained on representations learned with different self-supervised methods (pretrained on ImageNet). Gray cross indicates supervised ResNet-50. Our method, SimCLR, is shown in bold.

In this work, we introduce a simple framework for contrastive learning of visual representations, which we call SimCLR. Not only does SimCLR outperform previous work (Figure [1](#S1.F1)), but it is also simpler, requiring neither specialized architectures (Bachman et al., [2019](#bib.bib1); Hénaff et al., [2019](#bib.bib23)) nor a memory bank (Wu et al., [2018](#bib.bib52); Tian et al., [2019](#bib.bib50); He et al., [2019a](#bib.bib21); Misra & van der Maaten, [2019](#bib.bib39)).

In order to understand what enables good contrastive representation learning, we systematically study the major components of our framework and show that:

* Composition of multiple data augmentation operations is crucial in defining the contrastive prediction tasks that yield effective representations. In addition, unsupervised contrastive learning benefits from stronger data augmentation than supervised learning.
* Introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations.
* Representation learning with contrastive cross entropy loss benefits from normalized embeddings and an appropriately adjusted temperature parameter.
* Contrastive learning benefits from larger batch sizes and longer training compared to its supervised counterpart. Like supervised learning, contrastive learning benefits from deeper and wider networks.

We combine these findings to achieve a new state-of-the-art in self-supervised and semi-supervised learning on ImageNet ILSVRC-2012 (Russakovsky et al., [2015](#bib.bib44)).
Under the linear evaluation protocol, SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over the previous state-of-the-art (Hénaff et al., [2019](#bib.bib23)). When fine-tuned with only 1% of the ImageNet labels, SimCLR achieves 85.8% top-5 accuracy, a relative improvement of 10% (Hénaff et al., [2019](#bib.bib23)). When fine-tuned on other natural image classification datasets, SimCLR performs on par with or better than a strong supervised baseline (Kornblith et al., [2019](#bib.bib31)) on 10 out of 12 datasets.

2 Method
---------

### 2.1 The Contrastive Learning Framework

Inspired by recent contrastive learning algorithms (see Section [7](#S7) for an overview), SimCLR learns representations by maximizing agreement between differently augmented views of the same data example via a contrastive loss in the latent space. As illustrated in Figure [2](#S2.F2), this framework comprises the following four major components:

* A stochastic data augmentation module that randomly transforms any given data example, resulting in two correlated views of the same example, denoted $\tilde{x}_i$ and $\tilde{x}_j$, which we consider a positive pair. In this work, we sequentially apply three simple augmentations: random cropping followed by resizing back to the original size, random color distortion, and random Gaussian blur. As shown in Section [3](#S3), the combination of random crop and color distortion is crucial for achieving good performance.
* A neural network base encoder $f(\cdot)$ that extracts representation vectors from augmented data examples. Our framework allows various choices of the network architecture without any constraints. We opt for simplicity and adopt the commonly used ResNet (He et al., [2016](#bib.bib20)) to obtain $h_i = f(\tilde{x}_i) = \mathrm{ResNet}(\tilde{x}_i)$, where $h_i \in \mathbb{R}^d$ is the output after the average pooling layer.
* A small neural network projection head $g(\cdot)$ that maps representations to the space where the contrastive loss is applied. We use an MLP with one hidden layer to obtain $z_i = g(h_i) = W^{(2)}\sigma(W^{(1)}h_i)$, where $\sigma$ is a ReLU non-linearity. As shown in Section [4](#S4), we find it beneficial to define the contrastive loss on the $z_i$'s rather than the $h_i$'s.
* A contrastive loss function defined for a contrastive prediction task. Given a set $\{\tilde{x}_k\}$ including a positive pair of examples $\tilde{x}_i$ and $\tilde{x}_j$, the contrastive prediction task aims to identify $\tilde{x}_j$ in $\{\tilde{x}_k\}_{k \neq i}$ for a given $\tilde{x}_i$.

Figure 2: A simple framework for contrastive learning of visual representations. Two separate data augmentation operators are sampled from the same family of augmentations ($t \sim \mathcal{T}$ and $t' \sim \mathcal{T}$) and applied to each data example to obtain two correlated views. A base encoder network $f(\cdot)$ and a projection head $g(\cdot)$ are trained to maximize agreement using a contrastive loss. After training is completed, we throw away the projection head $g(\cdot)$ and use the encoder $f(\cdot)$ and representation $h$ for downstream tasks.
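The following is a minimal PyTorch-style sketch of these four components. It is an illustration rather than the paper's implementation: the augmentation parameters and the `forward_views` helper are simplified placeholders.

```python
# Minimal sketch of the SimCLR forward pass (illustrative; parameters are
# placeholders rather than the paper's exact configuration).
import torch
import torch.nn as nn
from torchvision import models, transforms

# (1) Stochastic augmentation module t ~ T producing correlated views.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.8, 0.8, 0.8, 0.2),
    transforms.GaussianBlur(kernel_size=23),
    transforms.ToTensor(),
])

# (2) Base encoder f(.): a ResNet-50 trunk with the classifier removed,
# so h = f(x~) is the 2048-d output after average pooling.
encoder = models.resnet50()
encoder.fc = nn.Identity()

# (3) Projection head g(.): one-hidden-layer MLP with ReLU,
# z = g(h) = W2 sigma(W1 h).
projection_head = nn.Sequential(
    nn.Linear(2048, 2048), nn.ReLU(), nn.Linear(2048, 128)
)

def forward_views(pil_images):
    # Two independent augmentations of each image form the positive pairs.
    x_i = torch.stack([augment(im) for im in pil_images])
    x_j = torch.stack([augment(im) for im in pil_images])
    h_i, h_j = encoder(x_i), encoder(x_j)                    # representations h
    z_i, z_j = projection_head(h_i), projection_head(h_j)    # projections z
    return z_i, z_j   # (4) the contrastive loss is applied to the z's
```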
We randomly sample a minibatch of $N$ examples and define the contrastive prediction task on pairs of augmented examples derived from the minibatch, resulting in $2N$ data points. We do not sample negative examples explicitly. Instead, given a positive pair, similar to (Chen et al., [2017](#bib.bib6)), we treat the other $2(N-1)$ augmented examples within a minibatch as negative examples. Let $\mathrm{sim}(u,v) = u^{\top}v / (\lVert u\rVert\,\lVert v\rVert)$ denote the cosine similarity between two vectors $u$ and $v$. Then the loss function for a positive pair of examples $(i,j)$ is defined as

$$\ell_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(\mathrm{sim}(z_i, z_k)/\tau)}, \qquad (1)$$

where $\mathbb{1}_{[k \neq i]} \in \{0,1\}$ is an indicator function evaluating to 1 iff $k \neq i$ and $\tau$ denotes a temperature parameter. The final loss is computed across all positive pairs, both $(i,j)$ and $(j,i)$, in a mini-batch. This loss has been used in previous work (Sohn, [2016](#bib.bib47); Wu et al., [2018](#bib.bib52); Oord et al., [2018](#bib.bib42)); for convenience, we term it NT-Xent (the normalized temperature-scaled cross entropy loss). Algorithm [1](#alg1) summarizes the proposed method.

Algorithm 1: SimCLR's main learning algorithm.

    input: batch size N, temperature tau, structure of f, g, T
    for each sampled minibatch {x_k}, k = 1..N:
        for all k in {1, ..., N}:
            draw two augmentation functions t ~ T, t' ~ T
            x~_{2k-1} = t(x_k);  h_{2k-1} = f(x~_{2k-1});  z_{2k-1} = g(h_{2k-1})   # first view
            x~_{2k}   = t'(x_k); h_{2k}   = f(x~_{2k});    z_{2k}   = g(h_{2k})     # second view
        for all i, j in {1, ..., 2N}:
            s_{i,j} = z_i^T z_j / (tau * ||z_i|| * ||z_j||)    # pairwise similarity
        define l(i,j) = -log( exp(s_{i,j}) / sum_{k=1}^{2N} 1[k != i] exp(s_{i,k}) )
        L = (1/2N) * sum_{k=1}^{N} [ l(2k-1, 2k) + l(2k, 2k-1) ]
        update networks f and g to minimize L
    return encoder network f

### 2.2 Training with Large Batch Size

We do not train the model with a memory bank (Wu et al., [2018](#bib.bib52)). Instead, we vary the training batch size $N$ from 256 to 8192. A batch size of 8192 gives us 16382 negative examples per positive pair from both augmentation views. Training with large batch size may be unstable when using standard SGD/Momentum with linear learning rate scaling (Goyal et al., [2017](#bib.bib18)). To stabilize the training, we use the LARS optimizer (You et al., [2017](#bib.bib56)) for all batch sizes. We train our model with Cloud TPUs, using 32 to 128 cores depending on the batch size. (With 128 TPU v3 cores, it takes ~1.5 hours to train our ResNet-50 with a batch size of 4096 for 100 epochs.)

**Global BN.** Standard ResNets use batch normalization (Ioffe & Szegedy, [2015](#bib.bib27)). In distributed training with data parallelism, the BN mean and variance are typically aggregated locally per device. In our contrastive learning, as positive pairs are computed in the same device, the model can exploit the local information leakage to improve prediction accuracy without improving representations. We address this issue by aggregating BN mean and variance over all devices during the training.
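As a concrete reference for Eq. (1) and Algorithm 1, here is a minimal sketch of the NT-Xent loss; again a simplified reading, not the paper's reference code. It assumes the projections are stacked into a $2N \times d$ matrix whose consecutive row pairs (0, 1), (2, 3), … are the positive pairs.

```python
# Minimal NT-Xent sketch (cf. Eq. (1) / Algorithm 1); assumes rows 2k and 2k+1
# of z are the two augmented views of example k.
import torch
import torch.nn.functional as F

def nt_xent_loss(z: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    z = F.normalize(z, dim=1)            # l2-normalize, so dot product = cosine sim
    sim = z @ z.t() / tau                # pairwise similarities s_ij
    sim.fill_diagonal_(float("-inf"))    # the indicator 1[k != i]: drop k = i
    idx = torch.arange(z.shape[0])
    pos = idx + 1 - 2 * (idx % 2)        # positive of row i: 0<->1, 2<->3, ...
    # Cross entropy over the remaining 2N-1 candidates is exactly
    # -log softmax at the positive, averaged over all 2N rows,
    # i.e. (1/2N) sum_k [ l(2k-1, 2k) + l(2k, 2k-1) ].
    return F.cross_entropy(sim, pos)
```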
Other approaches to this local-information leakage include shuffling data examples across devices (He et al., [2019a](#bib.bib21)) or replacing BN with layer norm (Hénaff et al., [2019](#bib.bib23)).

### 2.3 Evaluation Protocol

Here we lay out the protocol for our empirical studies, which aim to understand different design choices in our framework.

**Dataset and Metrics.** Most of our study for unsupervised pretraining (learning encoder network $f$ without labels) is done using the ImageNet ILSVRC-2012 dataset (Russakovsky et al., [2015](#bib.bib44)). Some additional pretraining experiments on CIFAR-10 (Krizhevsky & Hinton, [2009](#bib.bib33)) can be found in Appendix [B.7](#A2.SS7). We also test the pretrained results on a wide range of datasets for transfer learning. To evaluate the learned representations, we follow the widely used linear evaluation protocol (Zhang et al., [2016](#bib.bib58); Oord et al., [2018](#bib.bib42); Bachman et al., [2019](#bib.bib1); Kolesnikov et al., [2019](#bib.bib30)), where a linear classifier is trained on top of the frozen base network, and test accuracy is used as a proxy for representation quality. Beyond linear evaluation, we also compare against state-of-the-art on semi-supervised and transfer learning.

**Default setting.** Unless otherwise specified, for data augmentation we use random crop and resize (with random flip), color distortions, and Gaussian blur (for details, see Appendix [A](#A1)). We use ResNet-50 as the base encoder network, and a 2-layer MLP projection head to project the representation to a 128-dimensional latent space. As the loss, we use NT-Xent, optimized using LARS with linear learning rate scaling (i.e. $\mathrm{LearningRate} = 0.3 \times \mathrm{BatchSize}/256$) and weight decay of $10^{-6}$. We train at batch size 4096 for 100 epochs. (Although max performance is not reached in 100 epochs, reasonable results are achieved, allowing fair and efficient ablations.) Furthermore, we use linear warmup for the first 10 epochs, and decay the learning rate with the cosine decay schedule without restarts (Loshchilov & Hutter, [2016](#bib.bib35)).

3 Data Augmentation for Contrastive Representation Learning
------------------------------------------------------------

Figure 3: Solid rectangles are images, dashed rectangles are random crops. By randomly cropping images, we sample contrastive prediction tasks that include global-to-local view (B→A) or adjacent view (D→C) prediction. (Panels: (a) global and local views; (b) adjacent views.)

Figure 4: Illustrations of the studied data augmentation operators: (a) original, (b) crop and resize, (c) crop, resize (and flip), (d) color distort. (drop), (e) color distort. (jitter), (f) rotate {90°, 180°, 270°}, (g) cutout, (h) Gaussian noise, (i) Gaussian blur, (j) Sobel filtering. Each augmentation can transform data stochastically with some internal parameters (e.g. rotation degree, noise level). Note that we only test these operators in ablation; the augmentation policy used to train our models includes only random crop (with flip and resize), color distortion, and Gaussian blur. (Original image cc-by: Von.grzanka)

**Data augmentation defines predictive tasks.**
While data augmentation has been widely used in both supervised and unsupervised representation learning (Krizhevsky et al., [2012](#bib.bib34); Hénaff et al., [2019](#bib.bib23); Bachman et al., [2019](#bib.bib1)), it has not been considered as a systematic way to define the contrastive prediction task. Many existing approaches define contrastive prediction tasks by changing the architecture. For example, Hjelm et al. ([2018](#bib.bib25)); Bachman et al. ([2019](#bib.bib1)) achieve global-to-local view prediction via constraining the receptive field in the network architecture, whereas Oord et al. ([2018](#bib.bib42)); Hénaff et al. ([2019](#bib.bib23)) achieve neighboring view prediction via a fixed image splitting procedure and a context aggregation network. We show that this complexity can be avoided by performing simple random cropping (with resizing) of target images, which creates a family of predictive tasks subsuming the above mentioned two, as shown in Figure [3](#S3.F3 "Figure 3 ‣ 3 Data Augmentation for Contrastive Representation Learning ‣ A Simple Framework for Contrastive Learning of Visual Representations"). This simple design choice conveniently decouples the predictive task from other components such as the neural network architecture. Broader contrastive prediction tasks can be defined by extending the family of augmentations and composing them stochastically. ![](https://media.arxiv-vanity.com/render-output/7233960/x12.png) Figure 5: Linear evaluation (ImageNet top-1 accuracy) under individual or composition of data augmentations, applied only to one branch. For all columns but the last, diagonal entries correspond to single transformation, and off-diagonals correspond to composition of two transformations (applied sequentially). The last column reflects the average over the row. ### 3.1 Composition of data augmentation operations is crucial for learning good representations To systematically study the impact of data augmentation, we consider several common augmentations here. One type of augmentation involves spatial/geometric transformation of data, such as cropping and resizing (with horizontal flipping), rotation (Gidaris et al., [2018](#bib.bib16)) and cutout (DeVries & Taylor, [2017](#bib.bib9)). The other type of augmentation involves appearance transformation, such as color distortion (including color dropping, brightness, contrast, saturation, hue) (Howard, [2013](#bib.bib26); Szegedy et al., [2015](#bib.bib49)), Gaussian blur, and Sobel filtering. Figure [4](#S3.F4 "Figure 4 ‣ 3 Data Augmentation for Contrastive Representation Learning ‣ A Simple Framework for Contrastive Learning of Visual Representations") visualizes the augmentations that we study in this work. To understand the effects of individual data augmentations and the importance of augmentation composition, we investigate the performance of our framework when applying augmentations individually or in pairs. Since ImageNet images are of different sizes, we always apply crop and resize images (Krizhevsky et al., [2012](#bib.bib34); Szegedy et al., [2015](#bib.bib49)), which makes it difficult to study other augmentations in the absence of cropping. To eliminate this confound, we consider an asymmetric data transformation setting for this ablation. 
Specifically, we always first randomly crop images and resize them to the same resolution, and we then apply the targeted transformation(s) only to one branch of the framework in Figure [2](#S2.F2), while leaving the other branch as the identity (i.e. $t(x_i) = x_i$). Note that this asymmetric data augmentation hurts the performance. Nonetheless, this setup should not substantively change the impact of individual data augmentations or their compositions. Figure [5](#S3.F5) shows linear evaluation results under individual and composition of transformations. We observe that no single transformation suffices to learn good representations, even though the model can almost perfectly identify the positive pairs in the contrastive task. When composing augmentations, the contrastive prediction task becomes harder, but the quality of representation improves dramatically. One composition of augmentations stands out: random cropping and random color distortion. We conjecture that one serious issue when using only random cropping as data augmentation is that most patches from an image share a similar color distribution. Figure [6](#S3.F6) shows that color histograms alone suffice to distinguish images. Neural nets may exploit this shortcut to solve the predictive task. Therefore, it is critical to compose cropping with color distortion in order to learn generalizable features.

Figure 6: Histograms of pixel intensities (over all channels) for different crops of two different images (i.e. two rows), (a) without color distortion and (b) with color distortion. The image for the first row is from Figure [4](#S3.F4). All axes have the same range.

### 3.2 Contrastive learning needs stronger data augmentation than supervised learning

| Methods | 1/8 | 1/4 | 1/2 | 1 | 1 (+Blur) | AutoAug |
| --- | --- | --- | --- | --- | --- | --- |
| SimCLR | 59.6 | 61.0 | 62.6 | 63.2 | 64.5 | 61.1 |
| Supervised | 77.0 | 76.7 | 76.5 | 75.7 | 75.4 | 77.1 |

Table 1: Top-1 accuracy of unsupervised ResNet-50 using linear evaluation and supervised ResNet-50, under varied color distortion strength (see Appendix [A](#A1)) and other data transformations. Strength 1 (+Blur) is our default data augmentation policy. (Supervised models are trained for 90 epochs; longer training improves performance of stronger augmentation by ~0.5%.)

To further demonstrate the importance of the color augmentation, we adjust the strength of color augmentation as shown in Table [1](#S3.T1).
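For reference, the color distortion used here is parameterized by a strength $s$. A sketch following the definition given in the paper's Appendix A (torchvision-style):

```python
# Strength-parameterized color distortion (cf. Appendix A of the paper).
from torchvision import transforms

def color_distortion(s: float = 1.0) -> transforms.Compose:
    # Random color jitter (applied 80% of the time) plus random grayscale
    # (20% of the time); s scales the jitter magnitudes.
    color_jitter = transforms.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)
    return transforms.Compose([
        transforms.RandomApply([color_jitter], p=0.8),
        transforms.RandomGrayscale(p=0.2),
    ])
```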
Stronger color augmentation substantially improves the linear evaluation of the learned unsupervised models. In this context, AutoAugment (Cubuk et al., [2019](#bib.bib8)), a sophisticated augmentation policy found using supervised learning, does not work better than simple cropping + (stronger) color distortion. When training supervised models with the same set of augmentations, we observe that stronger color augmentation does not improve or even hurts their performance. Thus, our experiments show that unsupervised contrastive learning benefits from stronger (color) data augmentation than supervised learning. Although previous work has reported that data augmentation is useful for self-supervised learning (Doersch et al., [2015](#bib.bib10); Bachman et al., [2019](#bib.bib1); Hénaff et al., [2019](#bib.bib23)), we show that data augmentation that does not yield accuracy benefits for supervised learning can still help considerably with contrastive learning.

4 Architectures for Encoder and Head
-------------------------------------

### 4.1 Unsupervised contrastive learning benefits (more) from bigger models

![](https://media.arxiv-vanity.com/render-output/7233960/x17.png)

Figure 7: Linear evaluation of models with varied depth and width. Models in blue dots are ours trained for 100 epochs, models in red stars are ours trained for 1000 epochs, and models in green crosses are supervised ResNets trained for 90 epochs (He et al., [2016](#bib.bib20)). (Training longer does not improve supervised ResNets; see Appendix [B.1](#A2.SS1).)

Figure [7](#S4.F7) shows, perhaps unsurprisingly, that increasing depth and width both improve performance. While similar findings hold for supervised learning (He et al., [2016](#bib.bib20)), we find the gap between supervised models and linear classifiers trained on unsupervised models shrinks as the model size increases, suggesting that unsupervised learning benefits more from bigger models than its supervised counterpart.

### 4.2 A nonlinear projection head improves the representation quality of the layer before it

| Name | Negative loss function | Gradient w.r.t. $u$ |
| --- | --- | --- |
| NT-Xent | $u^{\top}v^{+}/\tau - \log \sum_{v \in \{v^{+}, v^{-}\}} \exp(u^{\top}v/\tau)$ | $\bigl(1 - \tfrac{\exp(u^{\top}v^{+}/\tau)}{Z(u)}\bigr)\, v^{+}/\tau \;-\; \sum_{v \in \{v^{-}\}} \tfrac{\exp(u^{\top}v/\tau)}{Z(u)}\, v/\tau$ |
| NT-Logistic | $\log \sigma(u^{\top}v^{+}/\tau) + \log \sigma(-u^{\top}v^{-}/\tau)$ | $\sigma(-u^{\top}v^{+}/\tau)\, v^{+}/\tau - \sigma(u^{\top}v^{-}/\tau)\, v^{-}/\tau$ |
| Margin Triplet | $-\max(u^{\top}v^{-} - u^{\top}v^{+} + m,\, 0)$ | $v^{+} - v^{-}$ if $u^{\top}v^{+} - u^{\top}v^{-} < m$ else $0$ |

Table 2: Negative loss functions and their gradients. All input vectors, i.e. $u$, $v^{+}$, $v^{-}$, are $\ell_2$ normalized. NT-Xent is an abbreviation for "Normalized Temperature-scaled Cross Entropy". Different loss functions impose different weightings of positive and negative examples.

![](https://media.arxiv-vanity.com/render-output/7233960/x18.png)

Figure 8: Linear evaluation of representations with different projection heads $g(\cdot)$ and various dimensions of $z = g(h)$. The representation $h$ (before projection) is 2048-dimensional here.

| What to predict? | Random guess | $h$ | $g(h)$ |
| --- | --- | --- | --- |
| Color vs grayscale | 80 | 99.3 | 97.4 |
| Rotation | 25 | 67.6 | 25.6 |
| Orig. vs corrupted | 50 | 99.5 | 59.6 |
| Orig. vs Sobel filtered | 50 | 96.6 | 56.3 |

Table 3: Accuracy of training additional MLPs on different representations to predict the transformation applied. Other than crop and color augmentation, we additionally and independently add rotation (one of {0°, 90°, 180°, 270°}), Gaussian noise, and Sobel filtering transformations during the pretraining for the last three rows. Both $h$ and $g(h)$ are of the same dimensionality, i.e. 2048.

We then study the importance of including a projection head, i.e. $g(h)$. Figure [8](#S4.F8) shows linear evaluation results using three different architectures for the head: (1) identity mapping; (2) linear projection, as used by several previous approaches (Wu et al., [2018](#bib.bib52)); and (3) the default nonlinear projection with one additional hidden layer (and ReLU activation), similar to Bachman et al. ([2019](#bib.bib1)). We observe that a nonlinear projection is better than a linear projection (+3%), and much better than no projection (>10%). When a projection head is used, similar results are observed regardless of output dimension. Furthermore, even when nonlinear projection is used, the layer before the projection head, $h$, is still much better (>10%) than the layer after, $z = g(h)$, which shows that the hidden layer before the projection head is a better representation than the layer after.

We conjecture that the importance of using the representation before the nonlinear projection is due to loss of information induced by the contrastive loss. In particular, $z = g(h)$ is trained to be invariant to data transformation. Thus, $g$ can remove information that may be useful for the downstream task, such as the color or orientation of objects. By leveraging the nonlinear transformation $g(\cdot)$, more information can be formed and maintained in $h$. To verify this hypothesis, we conduct experiments that use either $h$ or $g(h)$ to learn to predict the transformation applied during the pretraining. Here we set $g(h) = W^{(2)}\sigma(W^{(1)}h)$, with the same input and output dimensionality (i.e. 2048). Table [3](#S4.T3) shows $h$ contains much more information about the transformation applied, while $g(h)$ loses information.

5 Loss Functions and Batch Size
--------------------------------

| Margin | NT-Logi. | Margin (sh) | NT-Logi. (sh) | NT-Xent |
| --- | --- | --- | --- | --- |
| 50.9 | 51.6 | 57.5 | 57.9 | 63.9 |

Table 4: Linear evaluation (top-1) for models trained with different loss functions. "sh" means using semi-hard negative mining.

| $\ell_2$ norm? | $\tau$ | Entropy | Contrastive acc. | Top 1 |
| --- | --- | --- | --- | --- |
| Yes | 0.05 | 1.0 | 90.5 | 59.7 |
| Yes | 0.1 | 4.5 | 87.8 | 64.4 |
| Yes | 0.5 | 8.2 | 68.2 | 60.7 |
| Yes | 1 | 8.3 | 59.1 | 58.0 |
| No | 10 | 0.5 | 91.7 | 57.2 |
| No | 100 | 0.5 | 92.1 | 57.0 |

Table 5: Linear evaluation for models trained with different choices of $\ell_2$ norm and temperature $\tau$ for the NT-Xent loss. The contrastive distribution is over 4096 examples.
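As a concrete illustration of the probing methodology behind Table 3 (and of linear evaluation in general), here is a minimal sketch. The paper trains small MLPs; this simplified stand-in fits a logistic-regression probe, and the `features`/`labels` arrays are hypothetical placeholders for extracted representations ($h$ or $g(h)$) and transformation labels.

```python
# Hypothetical probing sketch: how much information about `labels` survives
# in a frozen representation? (Table 3 uses MLP probes; a linear probe is a
# simplified stand-in.)
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_accuracy(features: np.ndarray, labels: np.ndarray) -> float:
    n = len(labels)
    idx = np.random.permutation(n)
    train, test = idx[: int(0.8 * n)], idx[int(0.8 * n):]
    clf = LogisticRegression(max_iter=1000).fit(features[train], labels[train])
    return clf.score(features[test], labels[test])

# e.g. probe_accuracy(h_features, rotation_labels) vs.
#      probe_accuracy(gh_features, rotation_labels): per Table 3, h retains far
#      more information about the applied rotation than z = g(h) does.
```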
### 5.1 Normalized cross entropy loss with adjustable temperature works better than alternatives

We compare the NT-Xent loss against other commonly used contrastive loss functions, such as logistic loss (Mikolov et al., [2013](#bib.bib38)) and margin loss (Schroff et al., [2015](#bib.bib45)). Table [2](#S4.T2) shows the objective functions as well as the gradients with respect to the input of the loss function. Looking at the gradient, we observe that 1) $\ell_2$ normalization along with temperature effectively weights different examples, and an appropriate temperature can help the model learn from hard negatives; and 2) unlike cross-entropy, the other objective functions do not weigh the negatives by their relative hardness. As a result, one must apply semi-hard negative mining (Schroff et al., [2015](#bib.bib45)) for these loss functions: instead of computing the gradient over all loss terms, one can compute the gradient using semi-hard negative terms (i.e., those that are within the loss margin and closest in distance, but farther than positive examples). To make the comparisons fair, we use the same $\ell_2$ normalization for all loss functions, and we tune the hyperparameters and report their best results. (Details of tuning can be found in Appendix [B.3](#A2.SS3).) For simplicity, we only consider the negatives from one augmentation view. Table [4](#S5.T4) shows that, while (semi-hard) negative mining helps, the best result is still much worse than our default NT-Xent loss.

We next test the importance of the $\ell_2$ normalization and temperature $\tau$ in our default NT-Xent loss. Table [5](#S5.T5) shows that without normalization and proper temperature scaling, performance is significantly worse. Without $\ell_2$ normalization, the contrastive task accuracy is higher, but the resulting representation is worse under linear evaluation.

### 5.2 Contrastive learning benefits (more) from larger batch sizes and longer training

![](https://media.arxiv-vanity.com/render-output/7233960/x19.png)

Figure 9: Linear evaluation models (ResNet-50) trained with different batch sizes and epochs. Each bar is a single run from scratch.

Figure [9](#S5.F9) shows the impact of batch size when models are trained for different numbers of epochs. We find that, when the number of training epochs is small (e.g. 100 epochs), larger batch sizes have a significant advantage over the smaller ones. With more training steps/epochs, the gaps between different batch sizes decrease or disappear, provided the batches are randomly resampled. In contrast to supervised learning (Goyal et al., [2017](#bib.bib18)), in contrastive learning, larger batch sizes provide more negative examples, facilitating convergence (i.e. taking fewer epochs and steps for a given accuracy).
Training longer also provides more negative examples, improving the results.

| Method | Architecture | Param. (M) | Top 1 | Top 5 |
| --- | --- | --- | --- | --- |
| *Methods using ResNet-50:* | | | | |
| Local Agg. | ResNet-50 | 24 | 60.2 | - |
| MoCo | ResNet-50 | 24 | 60.6 | - |
| PIRL | ResNet-50 | 24 | 63.6 | - |
| CPC v2 | ResNet-50 | 24 | 63.8 | 85.3 |
| SimCLR (ours) | ResNet-50 | 24 | 69.3 | 89.0 |
| *Methods using other architectures:* | | | | |
| Rotation | RevNet-50 (4×) | 86 | 55.4 | - |
| BigBiGAN | RevNet-50 (4×) | 86 | 61.3 | 81.9 |
| AMDIM | Custom-ResNet | 626 | 68.1 | - |
| CMC | ResNet-50 (2×) | 188 | 68.4 | 88.2 |
| MoCo | ResNet-50 (4×) | 375 | 68.6 | - |
| CPC v2 | ResNet-161 (∗) | 305 | 71.5 | 90.1 |
| SimCLR (ours) | ResNet-50 (2×) | 94 | 74.2 | 92.0 |
| SimCLR (ours) | ResNet-50 (4×) | 375 | 76.5 | 93.2 |

Table 6: ImageNet accuracies of linear classifiers trained on representations learned with different self-supervised methods.

| Method | Architecture | Top 5 (1% labels) | Top 5 (10% labels) |
| --- | --- | --- | --- |
| *Methods using label propagation:* | | | |
| Pseudo-label | ResNet-50 | 51.6 | 82.4 |
| VAT + Entropy Min. | ResNet-50 | 47.0 | 83.4 |
| UDA (w. RandAug) | ResNet-50 | - | 88.5 |
| FixMatch (w. RandAug) | ResNet-50 | - | 89.1 |
| S4L (Rot + VAT + En. M.) | ResNet-50 (4×) | - | 91.2 |
| *Methods using representation learning only:* | | | |
| InstDisc | ResNet-50 | 39.2 | 77.4 |
| BigBiGAN | RevNet-50 (4×) | 55.2 | 78.8 |
| PIRL | ResNet-50 | 57.2 | 83.8 |
| CPC v2 | ResNet-161 (∗) | 77.9 | 91.2 |
| SimCLR (ours) | ResNet-50 | 75.5 | 87.8 |
| SimCLR (ours) | ResNet-50 (2×) | 83.0 | 91.2 |
| SimCLR (ours) | ResNet-50 (4×) | 85.8 | 92.6 |

Table 7: ImageNet accuracy of models trained with few labels.

| | Food | CIFAR10 | CIFAR100 | Birdsnap | SUN397 | Cars | Aircraft | VOC2007 | DTD | Pets | Caltech-101 | Flowers |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Linear evaluation:* | | | | | | | | | | | | |
| SimCLR (ours) | 76.9 | 95.3 | 80.2 | 48.4 | 65.9 | 60.0 | 61.2 | 84.2 | 78.9 | 89.2 | 93.9 | 95.0 |
| Supervised | 75.2 | 95.7 | 81.2 | 56.4 | 64.9 | 68.8 | 63.8 | 83.8 | 78.7 | 92.3 | 94.1 | 94.2 |
| *Fine-tuned:* | | | | | | | | | | | | |
| SimCLR (ours) | 89.4 | 98.6 | 89.0 | 78.2 | 68.1 | 92.1 | 87.0 | 86.6 | 77.8 | 92.1 | 94.1 | 97.6 |
| Supervised | 88.7 | 98.3 | 88.7 | 77.8 | 67.0 | 91.4 | 88.0 | 86.5 | 78.8 | 93.2 | 94.2 | 98.0 |
| Random init | 88.3 | 96.0 | 81.9 | 77.0 | 53.7 | 91.3 | 84.8 | 69.4 | 64.1 | 82.7 | 72.5 | 92.5 |

Table 8: Comparison of transfer learning performance of our self-supervised approach with supervised baselines across 12 natural image classification datasets, for ResNet-50 (4×) models pretrained on ImageNet. Results not significantly worse than the best (p > 0.05, permutation test) are shown in bold. See Appendix [B.6](#A2.SS6) for experimental details and results with standard ResNet-50.

6 Comparison with State-of-the-art
-----------------------------------

In this section, similar to Kolesnikov et al. ([2019](#bib.bib30)) and He et al. ([2019a](#bib.bib21)), we use ResNet-50 in 3 different hidden layer widths (width multipliers of 1×, 2×, and 4×). For better convergence, our models here are trained for 1000 epochs.

**Linear evaluation.**
Table [6](#S5.T6 "Table 6 ‣ 5.2 Contrastive learning benefits (more) from larger batch sizes and longer training ‣ 5 Loss Functions and Batch Size ‣ A Simple Framework for Contrastive Learning of Visual Representations") compares our results with previous approaches (Zhuang et al., [2019](#bib.bib59); He et al., [2019a](#bib.bib21); Misra & van der Maaten, [2019](#bib.bib39); Hénaff et al., [2019](#bib.bib23); Kolesnikov et al., [2019](#bib.bib30); Donahue & Simonyan, [2019](#bib.bib11); Bachman et al., [2019](#bib.bib1); Tian et al., [2019](#bib.bib50)) in the linear evaluation setting. Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ A Simple Framework for Contrastive Learning of Visual Representations") shows more numerical comparisons among different methods. We are able to use standard networks to obtain substantially better results compared to previous methods that require specifically designed architectures. The best result obtained with our ResNet-50 (4×) can match the supervised pretrained ResNet-50.

**Semi-supervised learning.** We follow Zhai et al. ([2019](#bib.bib57)) and sample 1% or 10% of the labeled ILSVRC-12 training datasets in a class-balanced way (i.e. around 12.8 and 128 images per class respectively). We simply fine-tune the whole base network on the labeled data without regularization (see Appendix [B.5](#A2.SS5 "B.5 Semi-supervised Learning ‣ Appendix B Additional Experimental Results ‣ A Simple Framework for Contrastive Learning of Visual Representations")). Table [7](#S5.T7 "Table 7 ‣ 5.2 Contrastive learning benefits (more) from larger batch sizes and longer training ‣ 5 Loss Functions and Batch Size ‣ A Simple Framework for Contrastive Learning of Visual Representations") shows the comparisons of our results against recent methods (Zhai et al., [2019](#bib.bib57); Xie et al., [2019](#bib.bib54); Sohn et al., [2020](#bib.bib48); Wu et al., [2018](#bib.bib52); Donahue & Simonyan, [2019](#bib.bib11); Misra & van der Maaten, [2019](#bib.bib39); Hénaff et al., [2019](#bib.bib23)). Again, our approach significantly improves over state-of-the-art with both 1% and 10% of the labels.

**Transfer learning.** We evaluate transfer learning performance across 12 natural image datasets in both linear evaluation (fixed feature extractor) and fine-tuning settings. Following Kornblith et al. ([2019](#bib.bib31)), we perform hyperparameter tuning for each model-dataset combination and select the best hyperparameters on a validation set. Table [8](#S5.T8 "Table 8 ‣ 5.2 Contrastive learning benefits (more) from larger batch sizes and longer training ‣ 5 Loss Functions and Batch Size ‣ A Simple Framework for Contrastive Learning of Visual Representations") shows results with the ResNet-50 (4×) model. When fine-tuned, our self-supervised model significantly outperforms the supervised baseline on 5 datasets, whereas the supervised baseline is superior on only 2 (i.e. Pets and Flowers). On the remaining 5 datasets, the models are statistically tied. Full experimental details as well as results with the standard ResNet-50 architecture are provided in Appendix [B.6](#A2.SS6 "B.6 Transfer Learning ‣ Appendix B Additional Experimental Results ‣ A Simple Framework for Contrastive Learning of Visual Representations"). We note that the superiority of our framework relative to previous work is not explained by any single design choice, but by their composition. We provide a comprehensive comparison of our design choices with those of previous work in Appendix [C](#A3 "Appendix C Further Comparison to Related Methods ‣ A Simple Framework for Contrastive Learning of Visual Representations").
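For reference, the linear evaluation protocol used above trains a linear classifier on top of a frozen encoder; a minimal sketch (the `encode` function and the data arrays are placeholders, and the paper's exact optimizer and preprocessing are not reproduced here):

```python
# Minimal sketch of linear evaluation: the pretrained encoder is frozen and
# only a linear classifier is trained on its output features.
from sklearn.linear_model import LogisticRegression

def linear_eval(encode, x_train, y_train, x_test, y_test):
    h_train, h_test = encode(x_train), encode(x_test)  # frozen features
    clf = LogisticRegression(max_iter=1000).fit(h_train, y_train)
    return clf.score(h_test, y_test)                   # top-1 accuracy
```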
7 Related Work
---------------

The idea of making representations of an image agree with each other under small transformations dates back to Becker & Hinton ([1992](#bib.bib2)). We extend this idea by leveraging recent advances in data augmentation, network architecture and contrastive losses. A similar consistency idea has been explored in other contexts such as semi-supervised learning (Xie et al., [2019](#bib.bib54); Berthelot et al., [2019](#bib.bib4)).

**Handcrafted pretext tasks.** The recent renaissance of self-supervised learning began with artificially designed pretext tasks, such as relative patch prediction (Doersch et al., [2015](#bib.bib10)), solving jigsaw puzzles (Noroozi & Favaro, [2016](#bib.bib41)), colorization (Zhang et al., [2016](#bib.bib58)) and rotation prediction (Gidaris et al., [2018](#bib.bib16)). Although good results can be obtained with bigger networks and longer training (Kolesnikov et al., [2019](#bib.bib30)), these pretext tasks rely on somewhat ad-hoc heuristics, which limits the generality of the learned representations.

**Contrastive visual representation learning.** Dating back to Hadsell et al. ([2006](#bib.bib19)), these approaches learn representations by contrasting positive pairs against negative pairs. Along these lines, Dosovitskiy et al. ([2014](#bib.bib13)) proposes to treat each instance as a class represented by a feature vector (in a parametric form). Wu et al. ([2018](#bib.bib52)) proposes to use a memory bank to store the instance class representation vector, an approach adopted and extended in several recent papers (Zhuang et al., [2019](#bib.bib59); Tian et al., [2019](#bib.bib50); He et al., [2019a](#bib.bib21); Misra & van der Maaten, [2019](#bib.bib39)). Other work explores the use of in-batch samples for negative sampling instead of a memory bank (Ye et al., [2019](#bib.bib55); Ji et al., [2019](#bib.bib28)).

Recent literature has attempted to relate the success of their methods to maximization of mutual information between latent representations (Oord et al., [2018](#bib.bib42); Hénaff et al., [2019](#bib.bib23); Hjelm et al., [2018](#bib.bib25); Bachman et al., [2019](#bib.bib1)). However, it is not clear whether the success of contrastive approaches is determined by the mutual information, or by the specific form of the contrastive loss (Tschannen et al., [2019](#bib.bib51)). Further comparisons of our method to related methods are provided in Appendix [C](#A3 "Appendix C Further Comparison to Related Methods ‣ A Simple Framework for Contrastive Learning of Visual Representations").

8 Conclusion
-------------

In this work, we present a simple framework and its instantiation for contrastive visual representation learning. We carefully study its components, and show the effects of different design choices. By combining our findings, we improve considerably over previous methods for self-supervised, semi-supervised, and transfer learning.

Our results show that the complexity of some previous methods for self-supervised learning is not necessary to achieve good performance. Our approach differs from standard supervised learning on ImageNet only in the choice of data augmentation, the use of a nonlinear head at the end of the network, and the loss function.
The strength of this simple framework suggests that, despite a recent surge in interest, self-supervised learning remains undervalued. Acknowledgements ---------------- We would like to thank Xiaohua Zhai, Rafael Müller and Yani Ioannou for their feedback on the draft. We are also grateful for general support from Google Research teams in Toronto and elsewhere.
763a284f-a690-4147-ad10-c952c467b03e
trentmkelly/LessWrong-43k
LessWrong
Sifting the world's knowledge (AGI Estimation 2)
19bfbadb-0f37-4bf7-a638-7e2767ca9d8a
trentmkelly/LessWrong-43k
LessWrong
Counterfactuals for Perfect Predictors

Parfit's Hitchhiker with a perfect predictor has the unusual property of having a Less Wrong consensus that you ought to pay, whilst also being surprisingly hard to define formally. For example, if we try to ask whether an agent that never pays in town is rational, then we encounter a contradiction. A perfect predictor would not ever give such an agent a lift, so by the Principle of Explosion we can prove any statement to be true given this counterfactual. On the other hand, even if the predictor mistakenly picks up defectors only 0.01% of the time, then this counterfactual seems to have meaning. Let's suppose that a random number from 1 to 10,000 is chosen and the predictor always picks you up when the number is 1 and is perfect otherwise. Even if we draw the number 120, we can fairly easily imagine the situation where the number drawn was 1 instead. This is then a coherent situation where an Always Defect agent would end up in town, so we can talk about how the agent would have counterfactually chosen.

So one response to the difficulties of discussing counterfactual decisions with perfect predictors would be to simply compute the counterfactual as though the agent has a (tiny) chance of being wrong. However, agents may quite understandably wish to act differently depending on whether they are facing a perfect or imperfect predictor, even choosing differently when facing a predictor with a very low error rate.

Another would be to say that the predictor predicts whether placing the agent in town is logically coherent. On the basis that the predictor only picks up those who it predicts (with 100% accuracy) will pay, it can assume that it will be paid if the situation is coherent. Unfortunately, it isn't clear what it means in concrete terms for an agent to be such that it couldn't coherently be placed in such a situation. How is "I commit to not paying in <impossible situation>" any kind of meaningful commitment at all? We could look at "I commit to making <si
3a8f757e-c972-43c4-9039-7d5216b7915c
trentmkelly/LessWrong-43k
LessWrong
What social science research do you want to see reanalyzed?

I've been awarded a small Lightspeed Grant to replicate empirical social science research. What research should I look at? I'm a PhD economist with an interest in reanalyzing published research using different methods or data (e.g. checking whether the results are robust to different regression models, rather than rerunning a lab experiment). I've looked at whether mayors in China are promoted based on GDP growth, the effect of racial violence on patenting, and the effect of medical marijuana legalization on crime. I've also done work on air pollution and mortality, the long-run impacts of the measles vaccine, and how tech clusters drive innovation.
54ea7a79-7de1-48ec-9c2e-c562c6cb39b4
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Epistemic spot checks typically consist of references from a book, selected by my interest level, checked against either the book’s source or my own research. This one is a little different that I’m focusing on a single paragraph in a single paper. Specifically as part of a larger review I read Ericsson, Krampe, and Tesch-Römer’s 1993 paper, The Role of Deliberate Practice in the Acquisition of Expert Performance (PDF), in an attempt to gain information about how long human beings can productivity do thought work over a time period. This paper is important because if you ask people how much thought work can be done in a day, if they have an answer and a citation at all, it will be “4 hours a day” and “Cal Newport’s Deep Work“. The Ericsson paper is in turn Newport’s source. So to the extent people’s beliefs are based on anything, they’re based on this paper. In fact I’m not even reviewing the whole paper, just this one relevant paragraph: When individuals, especially children, start practicing in a given domain, the amount of practice is an hour or less per day (Bloom, 1985b). Similarly, laboratory studies of extended practice limit practice to about 1 hr for 3-5 days a week (e.g., Chase & Ericsson, 1982; Schneider & Shiffrin, 1977; Seibel, 1963). A number of training studies in real life have compared the efficiency of practice durations ranging from 1 -8 hr per day. These studies show essentially no benefit from durations exceeding 4 hr per day and reduced benefits from practice exceeding 2 hr (Welford, 1968; Woodworth & Schlosberg, 1954). Many studies of the acquisition of typing skill (Baddeley & Longman, 1978; Dvorak et al.. 1936) and other perceptual motor skills (Henshaw & Holman, 1930) indicate that the effective duration of deliberate practice may be closer to 1 hr per day. Pirolli and J. R. Anderson (1985) found no increased learning from doubling the number of training trials per session in their extended training study. The findings of these studies can be generalized to situations in which training is extended over long periods of time such as weeks, months, and years Let’s go through each sentence in order. I’ve used each quote as a section header, with the citations underneath it in bold. “When individuals, especially children, start practicing in a given domain, the amount of practice is an hour or less per day” Generalizations about talent development, Bloom (1985) “Typically the initial lessons were given in swimming and piano for about an hour each week, while the mathematics was taught about four hours each week…In addition some learning tasks (or homework) were assigned to be practiced and perfected before the next lesson.” (p513) “…[D]uring the week the [piano] teacher expected the child to practice about an hour a day.” with descriptions of practice but no quantification given for swimming and math (p515). The quote seems to me to be a simplification. “Expected an hour a day” is not the same as “did practice an hour or less per day.” “…laboratory studies of extended practice limit practice to about 1 hr for 3-5 days a week” Skill and working memory, Chase & Ericsson (1982) This study focused strictly on memorizing digits, which I don’t consider to be that close to thought work. Controlled and automatic human information processing: I. Detection, search, and attention. Schneider, W., & Shiffrin, R. M. (1977) This study had 8 people in it and was essentially an identification and reaction time trial. 
**Discrimination reaction time for a 1,023-alternative task, Seibel, R. (1963)**

3 subjects. This was a reaction time test, not thought work. No mention of duration studying.

"These studies show essentially no benefit from durations exceeding 4 hr per day and reduced benefits from practice exceeding 2 hr"

**Fundamentals of Skill, Welford (1968)**

In a book with no page number given, I skipped this one.

**Experimental Psychology, Woodworth & Schlosberg (1954)**

This too is a book with no page number, but it was available online (thanks, archive.org) and I made an educated guess that the relevant chapter was "Economy in Learning and Performance". Most of this chapter focused on recitation, which I don't consider sufficiently relevant.

p800: "Almost any book on applied psychology will tell you that the hourly work output is higher in an eight-hour day than a ten-hour day." (no source) Offers this graph as demonstration that only monotonous work has diminishing returns.

p812: An interesting army study showing that students given telegraphy training for 4 hours/day (and spending 4 on other topics) learned as much as students studying 7 hours/day. This one seems genuinely relevant, although not enough to tell us where peak performance lies, just that four hours are better than seven. Additionally, the students weren't loafing around for the excess three hours: they were learning other things. So this is about how long you can study a particular subject, not total learning capacity in a day.

"Many studies of the acquisition of typing skill (Baddeley & Longman, 1978; Dvorak et al., 1936) and other perceptual motor skills (Henshaw & Holman, 1930) indicate that the effective duration of deliberate practice may be closer to 1 hr per day"

**The Influence of Length and Frequency of Training Session on the Rate of Learning to Type, Baddeley & Longman (1978)**

"Four groups of postmen were trained to type alpha-numeric code material using a conventional typewriter keyboard. Training was based on sessions lasting for one or two hours occurring once or twice per day. Learning was most efficient in the group given one session of one hour per day, and least efficient in the group trained for two 2-hour sessions. Retention was tested after one, three or nine months, and indicated a loss in speed of about 30%. Again the group trained for two daily sessions of two hours performed most poorly. It is suggested that where operationally feasible, keyboard training should be distributed over time rather than massed"

**Typewriting behavior; psychology applied to teaching and learning typewriting, Dvorak et al. (1936)**

Inaccessible book.

**The Role of Practice in Fact Retrieval, Pirolli & Anderson (1985)**

"We found that fact retrieval speeds up as a power function of days of practice but that the number of daily repetitions beyond four produced little or no impact on reaction time"

Conclusion

Many of the studies were criminally small, and typically focused on singular, monotonous tasks like responding to patterns of light or memorizing digits. The precision of these studies is greatly exaggerated. There's no reason to believe Ericsson, Krampe, and Tesch-Römer's conclusion that the correct number of hours for deliberate practice is 3.5, much less the commonly repeated factoid that humans can do good work for 4 hours/day.

[This post supported by Patreon]."

What you see above is a segment from a LessWrong user's post.
Could you elaborate on the potential implications of the small sample sizes in the studies mentioned, and how they might affect the validity of the conclusions drawn?

Small sample sizes in research studies can have significant implications for the validity and reliability of the conclusions drawn. Here are a few potential issues:

1. Lack of Representativeness: A small sample size may not adequately represent the larger population. This can lead to biased results if the sample is not reflective of the diversity in the larger group. For instance, a study on the efficiency of practice durations that only includes a small number of individuals from a specific demographic might not be applicable to people from different age groups, professions, or cultural backgrounds.

2. Increased Margin of Error: Small sample sizes often lead to a larger margin of error. This means that the results obtained may deviate more from the true population parameter. The smaller the sample, the less confidence we can have that the sample's findings will replicate in the larger population.

3. Reduced Statistical Power: Statistical power is the probability that a study will detect an effect when there is an effect to be detected. If the sample size is too small, the study may lack the statistical power needed to detect significant differences or relationships, leading to a higher risk of Type II errors (failing to reject a false null hypothesis).

4. Overfitting: Small sample sizes can lead to overfitting, where a statistical model describes random error or noise instead of the underlying relationship. Overfitting generally produces misleadingly high correlations on the training set (the data used to fit the model) and poor predictive performance on new, unseen data.

5. Inability to Generalize: Findings from small sample sizes may not be generalizable to the larger population. This limits the scope and applicability of the research.

In the context of the studies mentioned in the post, these issues could mean that the conclusions drawn about the optimal amount of time for deliberate practice may not be accurate or applicable to all individuals. The results may be biased, have a high margin of error, lack statistical power, overfit the data, and may not be generalizable to broader contexts. Therefore, while these studies provide some insights, their findings should be interpreted with caution.

Discuss the concept of overfitting in statistical models in more depth. How does it occur, what are its implications, and how can it be prevented in research studies?

Overfitting is a concept in statistics that refers to a modeling error which occurs when a function is too closely aligned to a limited set of data points. Essentially, an overfitted model is one that is excessively complex, such as having too many parameters relative to the number of observations.

How does it occur? Overfitting often happens when a model learns from not only the underlying patterns of the data but also from the noise or random fluctuations within the data. This is usually a result of using an overly complex model or algorithm that creates too many parameters, which allows the model to fit the data perfectly, including its noise.

What are its implications? An overfitted model will likely have low bias but high variance. This means that while the model performs exceptionally well on the training data (the data it was built on), it performs poorly on new, unseen data (the testing data). This is because the random noise that the model learned from the training data does not apply to new data. Consequently, overfitting reduces the model's predictive accuracy and generalizability.
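To see this train/test gap in miniature, here is a small illustrative sketch (synthetic data; the polynomial degrees are arbitrary choices for demonstration):

```python
# Polynomials of increasing degree fit to 20 noisy points: training error
# keeps shrinking while test error eventually blows up -- overfitting.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(0, 0.3, size=40)        # signal + noise
x_tr, y_tr, x_te, y_te = x[:20], y[:20], x[20:], y[20:]

for degree in (1, 3, 15):
    coefs = np.polyfit(x_tr, y_tr, degree)             # least-squares fit
    train_mse = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_te) - y_te) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```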
How can it be prevented? There are several strategies to prevent overfitting in research studies:

1. Cross-Validation: This is a technique where the dataset is split into two sections, one for training the model and the other for validating the model's performance. The most common method is k-fold cross-validation, where the data is divided into 'k' subsets and the model is trained on k-1 subsets, with the remaining subset used for validation. This process is repeated k times with different subsets serving as the validation set each time.

2. Regularization: This technique adds a penalty term to the loss function (a measure of how well the model predicts the observed data) to discourage complexity in the model. Examples of regularization techniques include Ridge Regression and Lasso Regression.

3. Pruning: In decision tree algorithms, pruning can be used to avoid overfitting. This involves removing the branches of the tree that provide little power to classify instances.

4. Using a Simpler Model: If a model is too complex, using a simpler model can help to prevent overfitting. This might mean reducing the number of parameters in the model or using a less complex modeling algorithm.

5. Increasing the Dataset Size: Overfitting is more likely when the dataset size is small. If possible, collecting more data can help to prevent overfitting.

By understanding and applying these techniques, researchers can help to ensure that their models are robust, accurate, and generalizable.
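As a concrete illustration of the first two strategies, a brief scikit-learn sketch (synthetic data; the alpha values are arbitrary):

```python
# 5-fold cross-validation used to compare Ridge penalty strengths: each
# candidate alpha is scored on held-out folds, not on the data it was fit to.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=100, n_features=20, noise=10, random_state=0)

for alpha in (0.01, 1.0, 100.0):
    scores = cross_val_score(Ridge(alpha=alpha), X, y, cv=5)
    print(f"alpha={alpha:>6}: mean held-out R^2 = {scores.mean():.3f}")
```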
Discuss the potential risks and benefits of using regularization techniques like Ridge and Lasso Regression in mitigating overfitting. How might these methods impact the interpretability and performance of the model?

Regularization techniques like Ridge and Lasso Regression are used to prevent overfitting by adding a penalty term to the loss function that the model seeks to minimize. This penalty term discourages the model from assigning too much importance to any one feature, thereby reducing the model's complexity and its tendency to overfit the data.

Risks and Benefits:

1. Ridge Regression: The penalty term in Ridge Regression is the sum of the squares of the coefficient values, multiplied by a tuning parameter. This method shrinks the coefficients towards zero but does not set any of them exactly to zero, meaning that the final model will likely include all features.

- Benefits: Ridge Regression can help to prevent overfitting, particularly in situations where there are many correlated input variables. It can improve the model's generalizability and reduce variance without a substantial increase in bias.

- Risks: Since Ridge Regression doesn't set any coefficients to zero, it may not be the best choice when dealing with high-dimensional data where feature selection is important. Also, the need to choose an appropriate tuning parameter adds complexity.

2. Lasso Regression: The penalty term in Lasso Regression is the sum of the absolute values of the coefficients, multiplied by a tuning parameter. This method can shrink some coefficients to exactly zero, effectively performing feature selection.

- Benefits: Lasso Regression can help to prevent overfitting and can also provide a form of automatic feature selection, which is useful when dealing with high-dimensional data. This can lead to simpler, more interpretable models.

- Risks: Like Ridge Regression, an appropriate tuning parameter must be chosen, adding complexity. Also, in situations with highly correlated variables, Lasso may arbitrarily choose one and ignore the others, which can lead to less stable models.

Impact on Interpretability and Performance:

Regularization techniques can improve the performance of a model by reducing overfitting, thereby enhancing the model's predictive accuracy on new, unseen data. In terms of interpretability, Lasso Regression can lead to more interpretable models than Ridge Regression, as it can reduce the number of features included in the model. However, the trade-off is that the models produced by regularization techniques may be more difficult to interpret than those produced by ordinary least squares regression, due to the addition of the penalty term.

It's important to note that while regularization techniques can help to prevent overfitting, they are not a substitute for understanding the underlying data and the problem at hand. As always, careful thought and consideration should be given to the
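The Ridge/Lasso contrast described above is easy to observe directly; a small sketch on synthetic data (the penalty strength is an arbitrary choice):

```python
# Ridge shrinks coefficients toward zero; Lasso drives many exactly to zero,
# performing implicit feature selection.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Only 5 of the 30 features carry real signal.
X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=5, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)
print("nonzero Ridge coefficients:", np.sum(ridge.coef_ != 0))  # typically all 30
print("nonzero Lasso coefficients:", np.sum(lasso.coef_ != 0))  # typically ~5
```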
c5b29132-541a-4a82-ae9c-cf701346dede
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Ethodynamics of Omelas

***Epistemic status:** just kidding... haha... unless...*

Introduction
------------

A great amount of effort has been expended throughout history to decide the matter of "what is good" or "what is right" [citation needed]. This valiant but, let's face it, not tremendously successful effort has produced a number of possible answers to these questions, none of which is right, if history is any indication. Far from this being their only problem, the vast majority of these answers also suffer from the even more serious flaw of not being able to produce anything resembling a simple quantitative prediction or computable expression for the thing they are supposed to discuss, and instead prefer to faff about with qualitative and ambiguously defined statements. Not only are they not right; they aren't even *wrong.*

More recently, and in certain premises more than others, a relatively new-ish theory of ethics has gained significant traction as the one most suited to a properly scientific, organized and rational mind - the theory of Utilitarianism. This approach to ethics postulates that all we have to do to solve it is to define some kind of "utility function" (TBD) representing all of the "utility" of each individual who is deemed a moral subject (TBD) via some aggregation method (TBD) and in some kind of objective, commensurable unit (TBD). It is then a simple matter of finding the policy which maximizes this utility function, and there you go - the mathematically determined guide to living the good life, examined down to however many arbitrary digits of precision one might desire! By tackling the titanic task head on, refusing to bow to the tyranny of vague metaphysics, and instead looking for the unyielding and unforgiving certainty of numbers, Utilitarianism manages one important thing: Utilitarianism manages to be *wrong*.

Now, I hope the last statement doesn't ruffle too many feathers. If it helps reassure any of those who may have felt offended by it, I'll admit that this study also deals unashamedly in Utilitarianism; and thus, by definition, it is also wrong. But one hopes, at least, wrong in an interesting enough way, which is often the best we can do in such complex matters.

The object of this work is the *aggregation method* of utility. There are many possible proposals, none, in my opinion, too satisfactory, all vulnerable to falling into one or another horrible trap laid by the clever critic which our moral instincts refuse to acknowledge as possibly ever correct.

A common one, which we might call Baby's First Utility Function, is **total sum utilitarianism**. In this aggregation approach, more good is good. Simple enough! Ten puppies full of joy and wonder are obviously better than five puppies full of joy and wonder, even an idiot understands as much. What else is left to consider? Ha-ha, says the critic: but what if a mad scientist created some kind of amazing super-puppy able to feel joy and wonder through feasting upon the mangled bodies of all others? What if the super-puppy produced *so much joy* that it offsets the suffering it inflicts? Total sum utilitarianism tells us that if the parameters are right, we ought to indeed accept the numbers and with them the super-puppy and the pain to be found within its joyous and drooling jaws, which isn't the common sense ethical approach to the problem (namely, take away the mad scientist's grant funding and have them work on something useful, like an RCT on whether fresh mints cause cancer).
But even without super-puppies, total sum utilitarianism lays more traps for us. The obvious one is that if more total utility is always good, then as long as life is above the threshold for positive utility (TBD), it's a moral imperative to simply spawn more humans[[1]](#fnxy4mivlw1t). This is known as the "repugnant conclusion", named so after the philosopher who reached it took a look at the costs of housing and childcare in his city. Obviously, as our lifestyle has improved and diversified, our western sensibilities have converged towards the understanding that children are like wine: they get better as they age, and are only good in moderation. We'd like our chosen ethical system not to reprimand us too harshly for things we're dead set on doing anyway, thank you very much. Now, seeking to fix these issues, the philosopher who through a monumental cross-disciplinary effort has learned not only addition, but division too, may thus come up with another genius idea: **average utilitarianism**. Just take that big utility boi and divide it by the number of people! Now the number goes up only if each individual person's well-being does as well; simply creating more people changes nothing, or makes the situation worse if you can't keep up with the standards of living. This seems more intuitive; after all, no one ever personally experiences *all* of the utility anyway. There is no grand human gestalt consciousness that we know of (and I'm fairly confident, not more than one or two that we don't). So average utility is as good as it gets, because as every statistician will tell you, averages are all politicians need to know about distributions before taking monumental decisions. Fun fact, they usually cry while telling you this. Now, while I'm no statistician, I am sympathetic to their plight, and can see full well how averages alone can be deceptive. After all, a civilization in which everyone has a utility of 50 and one in which half of everyone has a utility of 100 and the other half has a utility of "eat shit" are quite different, yet they have the same *average* utility. And the topic of unfairness and inequality in society has been known to rouse some strong passions in the masses [citation needed]. There are good reasons to find ways to weigh this effect into your utility function, either because you're one of those who eat shit, or because you're one of those who have a utility of 100 but have begun noticing that the other half of the population is taking a keen interest in pointy farming implements. To overcome these problems, in this study, I draw inspiration from that most classic of teachers, Nature, and specifically from its most confusing and cruel lesson, thermodynamics. I suggest the concepts of **ethodynamics** as an equivalent of thermodynamics applied to utilitarian ethics, and a quantity called the **free utility** as the utility function to maximize. I then apply the new method to the rigorous study of a known toy problem, the Omelas model [LeGuin, 1973] to showcase its results. 
Free utility
------------

We define the **free specific utility** for a closed society as:

F = ⟨U⟩ − T⟨S⟩

where ⟨U⟩ is the average expected utility, ⟨S⟩ is the entropy of the distribution of possible utility outcomes, and T is a free "temperature" parameter which expresses the importance we're willing to give to equality over total well-being. If you believe that slavery is fine as long as the slaves are making some really nice stuff, your "ethical temperature" is probably very close to zero; if you believe that the Ministry of Equalization should personally see to it that the legs of those who are born too tall are sawed off so that no one feels looked down on, then your temperature is approaching infinity. Most readers, I expect, will place themselves somewhere in between these two extremes.

The other quantities can be defined with respect to a probability distribution of outcomes, p(U):

⟨U⟩ = ∫ U p(U) dU

⟨S⟩ = −∫ p(U) log[p(U)] dU

With this definition, the free utility will go up with each improvement in average utility and down with each increase in inequality, at a rate that depends on the temperature parameter. We can then identify two **laws of ethodynamics**:

1. any conscientious moral actor ought to try to maximize F according to their chosen temperature;
2. absent outside intervention, in a closed society, Moloch [Alexander, 2014] tends to drive F down.

The first law is really more of a guideline, or a plea. The second law is, as far as I can tell, as inescapable as its thermodynamical cousin, and possibly literally just the same law with fake glasses and a mustache.

Those who should actually maybe walk back to Omelas
---------------------------------------------------

For a simplified practical example of this model we turn to the well known Omelas model, which provides the simplest possible example of a society with high utility but also non-zero entropy. The model postulates the following situation: a city called Omelas exists, with an unspecified number N of inhabitants. Of these, N−1 live in what can only be described as joyous bliss, in a city as perfect as the human mind can imagine. The last one, however, is a child who lives in perpetual torture, which is through unspecified means absolutely necessary to guarantee everyone else's enjoyment. The model is somewhat imprecise on the details of this arrangement (why a child? What happens when they come of age? Do they just turn into another happy citizen and are replaced by a newborn? Do they die? Do they simply live forever, frozen in time by whatever supernatural skulduggery powers the whole thing?) or the demographics of Omelas, so we decide to simplify it further by removing the detail that the sufferer has to be a child. Instead, we describe our model with the following utility distribution:

p(U = 1) = (N − 1)/N

p(U = −1) = 1/N

where the zero of utility is assumed to be the default state of anyone who lives outside of Omelas, and units of utility are arbitrary. In this ideal model of Omelas, the specific free utility can be found as

F = 1 − 2/N − T[log(N) − ((N − 1)/N) log(N − 1)]

as a function of temperature and population. For large enough N, we can apply Stirling's approximation to simplify:

log(N) − ((N − 1)/N) log(N − 1) ≈ (1/N) log(N!) − (1/N) log[(N − 1)!] + 1/N = (1/N)[log(N) + 1]

leading to

F ≈ 1 − 2/N − (T/N)[log(N) + 1]

This has the consequence that the free utility converges to 1 as the population of Omelas grows, independently of temperature.
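Since we're being quantitative, a quick numerical sanity check of the ideal model (a sketch; natural logarithms throughout, with N and T picked arbitrarily):

```python
# Free utility of ideal Omelas, computed directly from the two-outcome
# distribution p(U=+1) = (N-1)/N, p(U=-1) = 1/N.
import numpy as np

def free_utility_ideal(N, T):
    p = np.array([(N - 1) / N, 1 / N])
    U = np.array([1.0, -1.0])
    avg_U = p @ U                         # <U> = 1 - 2/N
    S = -(p * np.log(p)).sum()            # entropy in nats
    return avg_U - T * S                  # F = <U> - T<S>

print(free_utility_ideal(N=1000, T=50))   # > 0: tolerable at this temperature
print(free_utility_ideal(N=10, T=50))     # < 0: too few citizens per tortured child
```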
The boundary across which the crossover from negative to positive utility happens is approximately:

T = (N − 2)/(log(N) + 1)

For any populations larger than the one marked by this line, the existence of Omelas is a net good for the world, and thus its population ought to increase. Far from walking away, moral actors ought to walk towards it! This is not even too surprising a result on consideration: given that one poor child is being tortured regardless, we may as well get as much good as possible out of it.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ikoi5PcyZBw5v8pbt/warvrrwhkxf6dvxr5fyu)

Plot of free utility for the ideal Omelas. The green line marks the zero boundary defined above. Anywhere outside the dark violet region is positive; moving to the right corresponds to higher populations for the same temperature.

But this unlimited growth should raise suspicion that this model might be a tad bit *too* perfect to be real, much like spherical cows, trolley problems, and Derek Zoolander. There's no such thing as a free lunch, even if the child is particularly plump. What, are we to believe that whatever magical bullshit/deal with Cthulhu powers Omelas' unnatural prosperity has no limit whatsoever, that a single tortured person can provide enough mana to cast Wish on an indefinite number of inhabitants? Should we amass trillions and trillions on the surface of a Dyson sphere centered on Omelas? Child-torture powered utopias, we can believe; but *infinitely growing* child-torture powered utopias are a bridge too far!

In our search for a more realistic version of an enchanted enclave drawing energy from infant suffering we ought thus to put a limit to the ability of Omelas to infinitely scale its prosperity up with its number of citizens. Absent any knowledge of the details of the spells, conjurings and pacts that power the city from which to build a model, we have to do like good economists and do the next best thing: pull one out of our ass. We therefore postulate that the *real* Omelas model provides a total utility (to be equally divided between its non-tortured inhabitants) of:

U_real(N) = 2 N_S e^((N−1)/N_S) / (e^((N−1)/N_S) + e^(−(N−1)/N_S)) − N_S

This logistic function has all the desired properties. It is approximately linear in N with slope 1 around 0, then slowly caps as soon as the population grows past a certain maximum support number N_S. We can then replace it in the free utility:

F_real ≈ (U_real(N) − 1)/N − (T/N)[log(N) + 1]

In the limit of low N, the zero line described above still applies. Meanwhile, for high N, we see a new zero line:

F_real ≈ (N_S − 1)/N − (T/N)[log(N) + 1] ⟹ T = (N_S − 1)/(log(N) + 1)

This produces a much more interesting diagram:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ikoi5PcyZBw5v8pbt/xszgxtrtxq5scwradr3f)

Plot of free utility for the real Omelas model. The blue line marks the second zero boundary, whereas the pink one is the "line of maxima", representing the values of N that maximize F at each T; the thin white line marks the support number, N_S.

We see now that optimization is no longer trivial. To begin with, there is a temperature above which the existence of Omelas will *always* be considered intolerable and a net negative. This is roughly the temperature at which the two zero lines cross, so:

(N − 2)/(log(N) + 1) = (N_S − 1)/(log(N) + 1) ⟹ N = N_S + 1

T_limit = (N_S − 1)/(log(N_S + 1) + 1)

For N_S = 1000, this corresponds to T_limit ≈ 126.35. In addition, even below this temperature, the free utility has a maximum in N for each value of T. In other words, there is an optimal population for Omelas. People with T < T_limit will find themselves able to tolerate Omelas, on the whole, but will also converge towards a certain value of its population. If they are actual moral actors and not just trying to signal their virtue, they are thus expected to walk away from or *towards* Omelas depending on its current population. Too few people and the child's sacrifice is going to waste; too many, and the spreading out of too few resources makes things miserable again.
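The optimal population is easy to locate numerically; a sketch (using the N_S = 1000 of the example above and an arbitrary T below T_limit, with the logistic form rewritten as its exact tanh equivalent):

```python
# Locate the population N that maximizes the real-Omelas free utility
# F_real(N) = (U_real(N) - 1)/N - (T/N)(log(N) + 1) at fixed temperature.
import numpy as np

def F_real(N, T, NS=1000):
    U_real = NS * np.tanh((N - 1) / NS)   # exactly the logistic cap above
    return (U_real - 1) / N - (T / N) * (np.log(N) + 1)

T = 20.0                                  # below T_limit of about 126
N = np.arange(2.0, 100_000.0)
print("optimal population:", int(N[np.argmax(F_real(N, T))]))
```

Moral actors below T_limit can compare the city's census against this number before deciding in which direction to walk.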
The phase diagram can be built by coloring regions based on the sign of both the free utility F and its derivative in N, F′:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ikoi5PcyZBw5v8pbt/jcbm1enmgp8zs3zmbzwt)

* the almost white region is the one in which both F and F′ are positive; here Omelas is a net good, but it can be made even better if more people flock to it; this is the **walk towards** phase;
* the orange region is the one in which F is positive but F′ is negative; here Omelas is a net good, but it can be made even better if someone conscientious enough willingly makes the sacrifice of leaving it, so that others may better enjoy its scarce and painfully earned resources; this is the **walk away** phase which the original work hints at;
* the bright red region is the one in which F is negative even though F′ is positive; here Omelas is seen as a blight upon the world, a haven of unfairness which offends the eyes and souls of those who seek good. No apologies nor justifications can be offered for its dark sacrificial practices; the happiness of the many doesn't justify the suffering of the very few. For those who find themselves in this region, walking away isn't enough, and is mere cowardice. Omelas must be razed, its palaces pillaged, its walls ground to dust, its populace scattered, its laws burned, and the manacles that shackle the accursed child shattered by the hammer of Justice, that they may walk free and happy again. This is the **destroy** phase;
* the thin dark red sliver is the region in which both F and F′ are negative. We don't talk about that one.

Conclusions
-----------

This study has offered what I hope was an exhaustive treatment of the much discussed Omelas model, put in a new utilitarian framework that tries to surpass the limits of more traditional and well-known ones. I have striven to make this paper a pleasant read by enriching it with all manner of enjoyable things: wit, calculus, and a not insignificant amount of imaginary child abuse[[2]](#fnpflg5u19q8e). But I've also tried to endow it with some serious considerations about ethics, and to contribute something new to the grand stream of consciousness of humanity's thought on the matter that has gone on since ancient times.

Have I succeeded? Probably not. Have I at least inspired in someone else such novel and deep considerations, if only by enraging them so much that the righteous creative furor of trying to rebut my nonsense gives them the best ideas of their lives? Also probably not. But have I at least written something that will elicit a few laughs, receive a handful of upvotes, and marginally raise my karma on this website by a few integers, thus giving me a fleeting sense of accomplishment for the briefest of moments in a world otherwise lacking any kind of sense, rhyme, reason, or purpose?

That, dear readers, is up to you to decide.

1.
1. Actually, a less discussed consequence is that if it instead turned out that life is generally *below* that threshold (TBD), the obvious path forward is that we all ought to commit suicide as soon as possible. This is also frowned upon.
2. Your mileage may vary about how enjoyable these things actually are. Not everyone likes calculus.
Tools for finding information on the internet

Edit 2023-05-09: I recorded a presentation for EA Software Engineers about this post. In it, I demonstrate each of the tools and discuss some extra ones at the end, namely content blockers, userscripts, and alternative front-end websites.

Isn't the internet such a magically useful tool? Thirty years ago, if you wanted to know how many plays Shakespeare wrote, you would have to physically walk to your local library and find a relevant book. Now, you can find the answer in less than ten seconds, at any time, wherever you are.

However, the internet is not a truthful, superintelligent oracle. Rather, it's a dangerous jungle of knowledge you must learn to navigate if you wish to find the truth. Good information is censored, hidden behind paywalls or within piles of spam, and difficult to differentiate from untrustworthy information. This post won't be a complete guide on how to navigate the world wide web of knowledge, but it will give you some tools I've discovered over the years that you can throw in your digital rucksack to aid your journey.

Search engines

* The great internet sage Gwern Branwen wrote an advanced guide on finding references, papers, and books online.
* The search engines Brave Search and Kagi have the features "Goggles" and "Lenses" respectively, which are presets that filter or re-rank entire categories of websites in your results.
* SearXNG is a highly customizable internet metasearch engine.
* Perplexity uses natural language processing to answer your query with a paragraph (with sources) and allows you to ask followup questions.
* Metaphor allows you to find websites by writing creative and long-form prompts, also using NLP.
* Elicit is a research assistant that helps you find relevant research papers, also using NLP.

Bypassing restrictions

Sometimes you know exactly where to find a piece of information, but it's locked behind a paywall or deleted from the internet.

* Unddit displays deleted comments and posts on Reddit.
* Inter
Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?
Mechanism Design for AI Safety - Agenda Creation Retreat

Mechanism Design for AI Safety (MDAIS) has been running a reading group since last summer to discuss how principles of mechanism design can be applied to AI Safety. Sign-up for the reading group can be found here, and past readings can be found here.

Based on the quality of the discussions, we are optimistic about the possibility of mechanism design tools helping with AI safety, and are setting out to create an agenda that outlines promising directions. Our plan is to jumpstart this with a retreat that will facilitate in-person brainstorming and developing ideas. The application form is here (see below for details), and you can participate without having joined the MDAIS reading group.

Retreat Details

Where

The retreat will take place in Miami, Florida. Travel funding is available for attendees. A remote attendance option may be added if multiple qualified applicants indicate they would be unable to attend in person.

When

Retreat events will begin in the early afternoon on Friday, March 17th and continue to the late afternoon on Sunday, March 19th. Lodging will be provided from Thursday to Sunday night, and social events may take place on those evenings.

Who

Most participants in the reading group are PhD students or post-docs. We expect a similar group makeup for the retreat, but welcome applications from people with research experience who fall outside that demographic. Capacity at the retreat will be approximately ten people.

What

In the weeks leading up to the retreat, participants will be encouraged to brainstorm ideas and share background readings. The primary activity at the retreat will be alternating between developing ideas individually or in small groups, and presenting these ideas to the broader group for feedback. An initial rough writeup of the agenda will take place at the retreat itself, which will be added to and edited over the following weeks. Other activities will include icebreakers and conversations around AI more broadly.

Senior re
Conservative Objective Models for Effective Offline Model-Based Optimization

### 1 Introduction

![Overview of COMs](https://media.arxiv-vanity.com/render-output/6614095/x1.png)

Figure 1: Overview of COMs. Our method trains a model of the objective function by training a neural net with supervised regression on the training data, augmented with two additional loss terms to obtain conservative predictions. These additional terms aim to maximize the predictions of the neural net model on the training data, and minimize the predictions on adversarially generated designs. This principle prevents the optimizer from producing bad designs with erroneously high values at unseen and poor designs.

Black-box model-based optimization (MBO) problems are ubiquitous in a wide range of domains, such as protein (Brookes et al., 2019) or molecule design (Gaulton et al., 2012), designing controllers (Berkenkamp et al., 2016) or robot morphologies (Liao et al., 2019), optimizing neural network designs (Zoph and Le, 2017), and aircraft design (Hoburg and Abbeel, 2012). Existing methods to solve such model-based optimization problems typically learn a proxy function to represent the unknown objective landscape based on the data, and then optimize the design against this learned objective function. In order to prevent errors in the learned proxy function from affecting optimization, these methods often critically rely on periodic active data collection (Snoek et al., 2012) over the course of training. Active data collection can be expensive or even dangerous: evaluating a real design might involve a complex real-world procedure, such as synthesizing candidate protein structures for protein optimization or building the robot for robot design optimization. While these problems can potentially be solved via computer simulation, a high-fidelity simulator often requires considerable effort from experts across multiple domains to build, making it impractical for most problems. Therefore, a desirable alternative approach for a broad range of MBO problems is to develop data-driven, offline methods that can optimize designs by training highly general and expressive deep neural network models on data from previously conducted experiments, consisting of inputs (x) and their corresponding objective values (y), without access to the true function or any form of active data collection (Kumar and Levine, 2019).
In a number of these practical domains, such as protein (Sarkisyan et al., 2016a) or molecule design (Gaulton et al., 2012), plenty of prior data already exists and can be utilized for fully offline, data-driven model-based optimization. Typical approaches for addressing MBO problems learn a model of the unknown objective function that maps an input x (or a representation of the input (Gómez-Bombarelli et al., 2018)) to its objective value $\hat f(x)$ via supervised regression on the training dataset (Snoek et al., 2012). Then, these methods optimize the input against this learned model via, for instance, gradient ascent. For MBO problems where the space of valid inputs forms a narrow manifold in a high-dimensional space, any overestimation errors in the learned model will erroneously drive the optimization procedure towards out-of-distribution, invalid, and low-valued designs that "fool" the model into producing high values (Kumar and Levine, 2019). How can we prevent offline MBO methods from falling into such out-of-distribution solutions? If we can instead learn a conservative model of the objective function that does not overestimate the objective value on out-of-distribution inputs, optimizing against this conservative model would produce the best solutions for which we are *confident* in the value. In this paper, we propose a method to learn such *conservative objective models* (COMs), and then optimize the design against this conservative model using a naïve gradient-ascent procedure. Analogously to adversarial training approaches in supervised learning (Goodfellow et al., 2014a), and building on recent works in offline reinforcement learning (Levine et al., 2020; Kumar et al., 2020), COMs first explicitly mine for out-of-distribution inputs with erroneously overestimated values and then penalize the predictions on these inputs. Theoretically, we show that this approach mitigates overestimation in the learned objective model near the manifold of the dataset. Empirically, we find that this leads to good performance across a range of offline model-based optimization tasks.

![Training and optimization using COMs](https://media.arxiv-vanity.com/render-output/6614095/x2.png)

Figure 2: Training and optimization using COMs. The section on the left indicates that each task provides a static dataset that is collected offline without any MBO algorithm in the loop.
The section on the right shows how a conservative objective model is used to produce promising optimized designs using gradient ascent, and how these designs are inputs to a conservative regularizer.

The primary contribution of this paper, COMs, is a novel approach for addressing data-driven model-based optimization problems by learning a conservative model of the unknown objective function that lower-bounds the ground-truth function on out-of-distribution inputs, and then optimizing the input against this conservative model via a simple gradient-ascent-style procedure. COMs are simple to implement, utilizing a supervised learning procedure that resembles adversarial training, without the need for complex generative modeling to estimate dataset support as in prior work on model-based optimization. We theoretically analyze COMs and show that they never overestimate the values at out-of-distribution inputs close to the dataset manifold, and we empirically demonstrate the efficacy of COMs on seven complex MBO tasks that span a wide range of real-world problems, including biological sequence design, neural network parameter optimization, and superconducting material design. COMs is optimal on 4/7 tasks, and outperforms the best prior method by a factor of 1.3x in a high-dimensional setting, and by a factor of 1.16x overall.

### 2 Preliminaries

The goal in data-driven, offline model-based optimization (Kumar and Levine, 2019) is to find the best possible solution, $x^\star$, to optimization problems of the form

$$x^\star \leftarrow \arg\max_x\; f(x), \qquad (1)$$

where f(x) is an unknown (possibly stochastic) objective function. An offline MBO algorithm is provided access to a static dataset D of inputs and their objective values, $D = \{(x_1, y_1), \cdots, (x_N, y_N)\}$. While a variety of MBO methods have been developed (Gómez-Bombarelli et al., 2018; Brookes et al., 2019; Kumar and Levine, 2019; Fannjiang and Listgarten, 2020), most methods for tackling MBO problems fit a parametric model to the samples of the true objective function in D, $\hat f_\theta(x)$, via supervised training: $\hat f_\theta(x) \leftarrow \arg\min_\theta \sum_i (\hat f_\theta(x_i) - y_i)^2$, and find $x^\star$ in Equation 1 by optimizing x against this learned model $\hat f_\theta(x)$, typically with some mechanism to additionally minimize distribution shift. One choice for optimizing x in Equation 1 is gradient ascent on the learned function, as given by

$$x_{k+1} \leftarrow x_k + \eta \nabla_x \hat f_\theta(x)\big|_{x=x_k}, \quad \text{for } k \in [1, T], \qquad x^\star = x_T. \qquad (2)$$

The fixed point of the above procedure, $x_T$, is then the output of the MBO procedure. In high-dimensional input spaces, where valid x values lie on a thin manifold in a high-dimensional space, such an optimization procedure is prone to producing low-scoring inputs, which may not even be valid. This is because $\hat f$ may erroneously overestimate objective values at out-of-distribution points, which would naturally lead the optimization to such invalid points.
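To make the naïve pipeline of Equation 2 concrete, here is a minimal end-to-end sketch (PyTorch is my choice; the paper does not prescribe a framework here, and the toy dataset is hypothetical): fit a proxy by regression, then ascend it in input space.

```python
import torch
import torch.nn as nn

# Hypothetical offline dataset: designs X and scores y from an unknown f.
torch.manual_seed(0)
X = torch.randn(1000, 16)
y = -(X - 0.5).pow(2).sum(dim=1, keepdim=True)  # stand-in ground truth

# Fit the proxy by supervised regression.
f_hat = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(f_hat.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    ((f_hat(X) - y) ** 2).mean().backward()
    opt.step()

# Equation 2: ascend the proxy in input space from the best observed design.
x = X[y.argmax()].clone().requires_grad_(True)
eta = 0.05
for _ in range(50):
    (grad,) = torch.autograd.grad(f_hat(x).sum(), x)
    x = (x + eta * grad).detach().requires_grad_(True)
# Nothing here keeps x on the data manifold, so the proxy value can be high
# while the true f(x) is poor: the overestimation failure described above.
```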
Prior methods have sought to address this issue via generative modeling or explicit density estimation, so as to avoid out-of-distribution inputs. In the next section, we will describe how our method, COMs, instead trains the objective model in such a way that overestimation is prevented directly.

### 3 Conservative Objective Models for Offline Model-Based Optimization

In this section, we present our approach, conservative objective models (COMs). COMs learn estimates of the true function that do not overestimate the value of the ground-truth objective on out-of-distribution inputs in the vicinity of the training dataset. As a result, COMs prevent erroneous overestimation that would drive the optimizer (Equation 2) to produce out-of-distribution inputs with low values under the ground-truth objective function. We first discuss a procedure for learning such conservative estimates and then explain how these conservative models can be used for offline MBO.

#### 3.1 Learning Conservative Objective Models (COMs)

The key idea behind our approach is to augment the objective for training the objective model, $\hat f_\theta(x)$, with a regularizer that minimizes the expected value of this function on "adversarial" inputs where the value of the learned function $\hat f_\theta$ may be erroneously large. Such adversarial inputs are likely to be found by the optimizer during optimization, and hence, we need to train the learned function not to overestimate their values. How can we compute such adversarial inputs? Building on simple techniques for generating adversarial examples in supervised learning (Goodfellow et al., 2014a), we can run multiple steps of gradient ascent on the current snapshot of the learned function $\hat f(x)$, starting from various inputs in the training dataset, to obtain such adversarial inputs. For concise notation in the exposition, we denote the distribution of all adversarial inputs found via this gradient ascent procedure as μ(x). Samples from μ(x) are obtained by sampling a datapoint $x_0$ from the training set and running several steps of gradient ascent on $\hat f(x)$:

$$x_{t+1} = x_t + \eta \nabla_x \hat f_\theta(x)\big|_{x=x_t}, \qquad \mu(x) = \sum_{x_0 \in D} \delta_{x = x_T(x_0)}. \qquad (3)$$

While simply minimizing the function values under this adversarial distribution μ(x) should effectively reduce the value of the learned $\hat f$ at these inputs, this can result in systematic underestimation even for in-distribution points. To balance out this regularization, our approach additionally *maximizes* the expected value of this function on the training dataset. This can be formalized as maximizing the value of $\hat f(x)$ under the empirical distribution of inputs $x \in D$, given by $\hat D(x) = \sum_{x_i \in D} \delta_{x = x_i}$. In Section 4, we will show that the minimization and maximization terms balance out, and this objective learns a function $\hat f_\theta(x)$ that is a lower bound on the true function f(x) for inputs that are encountered during the optimization process, under several assumptions. This approach is inspired by recent work in offline RL (Kumar et al., 2020), where a similar objective is used to learn conservative value functions.
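A hedged sketch of the sampler implied by Equation 3 (names mine; same PyTorch assumptions as the block above):

```python
import torch

def sample_mu(f_hat, x0, steps=50, eta=0.05):
    """Draw x_T ~ mu(.|x0): ascend the current snapshot of the learned model
    from a dataset point (Equation 3), producing the adversarial inputs whose
    predicted values the regularizer will push down."""
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        (grad,) = torch.autograd.grad(f_hat(x).sum(), x)
        x = (x + eta * grad).detach().requires_grad_(True)
    return x.detach()
```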
We will elaborate on this connection in Section 5. Formally, our training objective is given by the following equation, where α is a parameter that trades off conservatism for regression:

$$\hat f^\star_\theta \leftarrow \arg\min_{\theta \in \Theta}\; \underbrace{\alpha \left( \mathbb{E}_{x \sim \mu(x)}\big[\hat f_\theta(x)\big] - \mathbb{E}_{x \sim D}\big[\hat f_\theta(x)\big] \right)}_{\text{COMs regularizer}} + \underbrace{\frac{1}{2}\, \mathbb{E}_{(x,y) \sim D}\big[(\hat f_\theta(x) - y)^2\big]}_{\text{standard supervised regression}}. \qquad (4)$$

This idea is schematically depicted in Figure 1. The value of α and the choice of distribution μ(x) play a crucial role in determining the behavior of this approach. If the chosen α is very small, then the resulting $\hat f^\star_\theta(x)$ may not be a conservative estimate of the actual function f(x), whereas if the chosen α is too large, then the learned function will be too conservative and not allow the optimizer to deviate away from the dataset at all. We will discuss our strategy for choosing α in the next section. As noted earlier, our choice of μ(x) specifically focuses on adversarial inputs that the optimizer is likely to encounter while optimizing the input. We compute this distribution μ(x) by sampling a starting point $x_0$ from the dataset D, and then performing several steps of gradient ascent on $\hat f_\theta$ starting from this point.

Algorithm 1 (COM: Training Conservative Models):

1. Initialize $\hat f_\theta$. Pick η, α and initialize dataset D.
2. for i = 1 to training_steps do
3. Sample $(x_0, y) \sim D$.
4. Find $x_T(x_0)$ via gradient ascent from $x_0$: $x_{t+1} = x_t + \eta \nabla_x \hat f_\theta(x)\big|_{x=x_t}$; $\mu(x) = \sum_{x_0 \in D} \delta_{x = x_T(x_0)}$.
5. Minimize $L(\theta; \alpha) = \mathbb{E}_{x_0 \sim D}(\hat f_\theta(x_0) - y)^2 - \alpha\, \mathbb{E}_{x_0}[\hat f_\theta(x_0)] + \alpha\, \mathbb{E}_{\mu(x)}[\hat f_\theta(x)]$ with respect to θ: $\theta \leftarrow \theta - \lambda \nabla_\theta L(\theta; \alpha)$.
6. end for

Algorithm 2 (COM: Finding $x^\star$):

1. Initialize the optimizer at the optimum in D: $\tilde x = \arg\max_{(x,y) \in D}\, y$.
2. Find $x^\star$ via gradient ascent from $\tilde x$: $x_{t+1} = x_t + \eta \nabla_x L_{opt}(x)\big|_{x=x_t}$, where $L_{opt}(x) := \hat f^\star_\theta(x)$.
3. Return the solution $x^\star = x_T$.

#### 3.2 Optimizing a Conservative Objective Model

Once we have a trained conservative model from Equation 4, we must use this learned model for finding the best possible input, $x^\star$. Prior works (Kumar and Levine, 2019; Brookes et al., 2019) use a standard (non-conservative) model of the objective function in conjunction with generative models or density estimators to restrict the optimization to in-distribution values of $x^\star$. However, since our conservative training method trains $\hat f^\star_\theta$ to explicitly assign low values to out-of-distribution inputs, we can use a simple gradient-ascent-style procedure in the input space to find the best possible solution. Specifically, our optimizer runs gradient ascent for T iterations starting from an input in the dataset ($x_0 \in D$), in each iteration trying to move the design in the direction of the gradient of the learned model $\hat f^\star_\theta$.
Starting from the best point in the dataset, $x_0 \in D$, our optimizer performs the following update (also shown in Algorithm 2, Line 2):

$$x_{t+1} = x_t + \eta \nabla_x L_{opt}(x)\big|_{x=x_t}, \quad \forall\, t \in [T],\; x_0 \in D, \qquad \text{where } L_{opt}(x) := \hat f^\star_\theta(x). \qquad (5)$$

Equation 5 ensures that the value of the learned function $\hat f_\theta(x_{t+1})$ is larger than the value at its previous iterate $x_t$. Furthermore, the number of iterations T of gradient ascent during optimization in Equation 5 is identical to the number of steps that we use to generate adversarial examples for μ(x) in Equation 3. This ensures that the optimizer only queries the region where the learned function $\hat f_\theta(x)$ is indeed conservative and a valid lower bound.

#### 3.3 Using COMs for MBO: Additional Decisions

Next we discuss other design decisions that appear in COMs training (Equation 4) or when optimizing the input against a learned conservative model (Equation 5).

Choosing α. The hyperparameter α in Equation 4 plays an important role in weighting conservatism against accuracy. Without access to additional active data collection for evaluation, tuning this hyperparameter for each task can be challenging. Therefore, in order to turn COMs into a task-agnostic algorithm for offline MBO, we devise an automated procedure for selecting α. As discussed previously, if α is too large, $\hat f^\star_\theta$ is expected to be too conservative, since it would assign higher values to points in the dataset and low values to *all* other points. Selecting a single value of α that works for many problems is difficult, since its effect depends strongly on the magnitude of the objective function.
Instead, we use a modified training procedure that poses Equation 4 as a constrained optimization problem, with α assuming the role of a Lagrange dual variable for satisfying a constraint that controls the difference in values of the learned objective under μ(x) and D(x). This corresponds to solving the following optimization problem:

$$\hat f^\star_\theta \leftarrow \arg\min_{\theta \in \Theta}\; \frac{1}{2}\, \mathbb{E}_{(x,y) \sim D}\big[(\hat f_\theta(x) - y)^2\big] \quad \text{s.t.} \quad \mathbb{E}_{x \sim \mu(x)}\big[\hat f_\theta(x)\big] - \mathbb{E}_{x \sim D}\big[\hat f_\theta(x)\big] \le \tau. \qquad (6)$$

While Equation 6 introduces a new hyperparameter τ in place of α, this parameter is easier to select by hand, since its optimal value does not depend on the magnitude of the objective function: we can normalize the objective values to the same range before use in Equation 6, and therefore a single choice works well across a diverse range of tasks. We find that a single value of τ is effective on every continuous task (τ = 0.5) and discrete task (τ = 2.0), respectively, and we empirically ablate the choice of τ in Figure 3.

Selecting optimized designs $x^\star$. So far we have discussed how COMs can be trained and used for optimization; however, we have not established a way to determine which $x_t$ (Equation 5) encountered in the optimization trajectory should be used as our final solution $x^\star$. The most natural choice is to pick the final $x_T$ found by the optimizer as the solution. We uniformly choose T = 50 steps. While the choice of T should, in principle, affect the solution found by any gradient-ascent-style optimizer, we found COMs to be quite stable to different values of T, as we will elaborate on empirically in Section 6.2, Figure 3. Of course, there are many other possible ways of selecting T, including ideas inspired by offline model-selection methods in offline reinforcement learning (Thomas et al., 2015), but our simple procedure, which is also popular in offline RL (Fu et al., 2020), ensures that the optimizer only queries the regions of the input space where the learned function is indeed trained to be conservative, and is also sufficient to obtain good optimization performance.
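Combining Algorithm 1 with the constrained form of Equation 6, one training iteration might look roughly as follows. This is a sketch under stated assumptions, not the authors' reference code: log-parameterizing the dual variable to keep α nonnegative is my assumption, as the paper only says α acts as a Lagrange dual variable; sample_mu is the sketch from Section 3.1.

```python
import torch

tau = 0.5  # the universal threshold for continuous tasks (2.0 for discrete)
log_alpha = torch.zeros(1, requires_grad=True)  # alpha = exp(log_alpha) > 0
dual_opt = torch.optim.Adam([log_alpha], lr=1e-2)

def coms_training_step(f_hat, model_opt, x_batch, y_batch):
    """One iteration of Algorithm 1 under the constrained form of Eq. 6."""
    x_adv = sample_mu(f_hat, x_batch)  # adversarial samples, Eq. 3
    gap = f_hat(x_adv).mean() - f_hat(x_batch).mean()
    regression = 0.5 * ((f_hat(x_batch) - y_batch) ** 2).mean()

    # Primal step: regression plus alpha times the constraint violation.
    loss = regression + log_alpha.exp().detach() * (gap - tau)
    model_opt.zero_grad()
    loss.backward()
    model_opt.step()

    # Dual step: grow alpha when the constraint is violated, shrink when slack.
    dual_loss = -(log_alpha.exp() * (gap.detach() - tau))
    dual_opt.zero_grad()
    dual_loss.backward()
    dual_opt.step()
    return loss.item(), log_alpha.exp().item()
```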
#### 3.4 Overall Algorithm and Practical Implementation

Finally, we combine the individual components discussed so far to obtain a complete algorithm for offline model-based optimization. Pseudocode for our algorithm is shown in Algorithm 1. COMs parameterize the objective model, $\hat f_\theta(x)$, via a feed-forward neural network with parameters θ. Our method then alternates between approximately generating samples μ(x) via gradient ascent (Line 4) and optimizing parameters θ using the training objective in Equation 4 (Line 5). Finally, at the end of training, we run the gradient ascent procedure over the learned objective model $\hat f^\star_\theta(x)$ for a large number T of ascent steps and return the final design $x_T$ as $x^\star$.

Implementation details. Full implementation details for our method can be found in Appendix A. Briefly, for all of our experiments, the conservative objective model $\hat f_\theta$ is modeled as a neural network with two hidden layers of size 2048 each and leaky ReLU activations. More details on the network structure can be found in Appendix C. In order to train this conservative objective model, we use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of $10^{-3}$. Empirically, we found that if η is too large, gradient ascent begins to produce inputs $x_T$ that do not maximize the values of $\hat f^\star_\theta(x_T)$, so we select the largest η such that successive $x_t$ follow the gradient vector field of $\hat f^\star_\theta(x_t)$. For computing samples μ(x), we used 50 gradient ascent steps starting from a given design in the dataset, $x_0 \in D$. During optimization, we used the gradient-ascent optimizer with a learning rate of 0.05 for continuous tasks and 2.0 for discrete tasks. As we will also show in our experiments (Section 6.2), this produces stable optimization behavior for all tasks we attempted. Finally, in order to choose the step T in Equation 5 that is supposed to provide us with the final solution $x^\star = x_T$, we pick a universal value of T = 50.

### 4 Theoretical Analysis of COMs

We will now theoretically analyze conservative objective models and show that the conservative training procedure (Equation 4) indeed learns a conservative model of the objective function.
To do so, we will show that under Equation 4, the values of all inputs in regions found within T steps of gradient ascent starting from any input $x_0 \in D$ are lower bounds on their actual values. For analysis, we will denote $\bar D(x)$ as the smoothed density of x in the dataset D (see Appendix B for a formal definition). We will express Equation 4 in an equivalent form that factorizes the distribution μ(x) as $\mu(x) = \sum_{x_0 \sim D} \bar D(x_0)\, \mu(x_T | x_0)$:

$$\hat f^\star_\theta \leftarrow \arg\min_{\theta \in \Theta}\; \alpha \left( \mathbb{E}_{x_0 \sim \bar D,\, x_T \sim \mu(x_T|x_0)}\big[\hat f_\theta(x_T)\big] - \mathbb{E}_{x \sim D}\big[\hat f_\theta(x)\big] \right) + \frac{1}{2}\, \mathbb{E}_{(x,y) \sim D}\big[(\hat f_\theta(x) - y)^2\big]. \qquad (7)$$

While $\mu(x_T | x_0)$ is a Dirac-delta distribution in practice (Section 3), for our analysis we will assume that it is a distribution centered at $x_T$ with $\mu(x_T | x_0) > 0\; \forall\, x_T \in \mathcal{X}$. This condition can be easily satisfied by adding random noise during gradient ascent while computing $x_T$. We will train $\hat f_\theta$ using gradient descent and denote $k = 1, 2, \cdots$ as the iterations of this training procedure for $\hat f_\theta$.

We first summarize some assumptions used in our analysis. We assume that the true function f(x) is L-Lipschitz over the input space. We also assume that the learned function $\hat f_\theta(x)$ is $\hat L$-Lipschitz, with $\hat L$ sufficiently larger than L. For analysis purposes, we will define a conditional distribution, $\bar D(x' | x)$, to be a Gaussian distribution centered at x: $\mathcal{N}(x' | x, \sigma^2)$. We will not assume a specific parameterization for the objective model, $\hat f_\theta$, but operate under the neural tangent kernel (NTK) (Jacot et al., 2018) model of neural nets. The neural tangent kernel of the function $\hat f(x)$ is defined as $G_f(x_i, x_j) := \nabla_\theta \hat f_\theta(x_i)^\top \nabla_\theta \hat f_\theta(x_j)$. Under these assumptions, we build on the analysis of conservative Q-learning (Kumar et al., 2020) to prove our theoretical result in Proposition 1, shown below:

###### Proposition 1 (Conservative training lower-bounds the true function).

Assume that $\hat f_\theta(x)$ is trained with conservative training by performing gradient descent on θ with respect to the objective in Equation 7 with a learning rate η. The parameters in step k of gradient descent are denoted by $\theta_k$, and let the corresponding conservative model be denoted as $\hat f^k_\theta$. Let G, μ, $\hat L$, L, and $\bar D$ be defined as discussed above.
Then, under the assumptions listed above, $\forall\, x \in D,\, x'' \in \mathcal{X}$, the conservative model at iteration k+1 of training satisfies:

$$\hat f^{k+1}_\theta(x'') = \tilde f^{k+1}_\theta(x'') - \eta \alpha\, \mathbb{E}_{x \sim \bar D,\, x' \sim \mu}\big[G^k_f(x'', x')\big] + \eta \alpha\, \mathbb{E}_{x \sim \bar D,\, x' \sim \bar D}\big[G^k_f(x'', x')\big],$$

where $\tilde f^{k+1}_\theta(x'')$ is the resulting (k+1)-th iterate of $\hat f_\theta$ if conservative training were not used. Thus, if α is sufficiently large, the expected value of the asymptotic function, $\hat f_\theta := \lim_{k \to \infty} \hat f^k_\theta$, on inputs $x_T$ found by the optimizer lower-bounds the value of the true function $f(x_T)$:

$$\mathbb{E}_{x_0 \sim D,\, x_T \sim \mu(x_T|x_0)}\big[\hat f_\theta(x_T)\big] \le \mathbb{E}_{x_0 \sim D,\, x_T \sim \mu(x_T|x_0)}\big[f(x_T)\big].$$

A proof of Proposition 1, including a complete formal statement, can be found in Appendix B. The intuition behind the proof is that inducing conservatism in the function $\hat f_\theta$ at each gradient step of optimizing Equation 7 makes the asymptotic function conservative. Moreover, the larger the value of α, the more conservative the function $\hat f_\theta$ is, in expectation, on points x' found via gradient ascent, i.e., points with high density under $\mu(x_T | x_0)$. Finally, when gradient ascent is used to find $x^\star$ on the learned conservative model $\hat f_\theta$ and the number of gradient ascent steps is less than T, as we do in practice via Equation 5, this bound, with an additional offset, will hold for the point $x^\star$ in expectation, and therefore the estimated value of this point will not overestimate its true value. This additional offset depends on the Lipschitz constant $\hat L$ and the distance between $x^\star$ and the optimized solutions $x_T$ found for other data points $x_0 \in D$.

| | GFP | TF Bind 8 | UTR | # Optimal | Norm. avg. perf. |
| --- | --- | --- | --- | --- | --- |
| D (best) | 0.789 | 0.439 | 0.593 | | |
| Auto. CbAS | 0.865 ± 0.000 | 0.910 ± 0.044 | 0.691 ± 0.012 | 1 / 7 | 0.687 |
| CbAS | 0.865 ± 0.000 | 0.927 ± 0.051 | 0.694 ± 0.010 | 3 / 7 | 0.699 |
| BO-qEI | 0.254 ± 0.352 | 0.798 ± 0.083 | 0.684 ± 0.000 | 0 / 7 | 0.629 |
| CMA-ES | 0.054 ± 0.002 | 0.953 ± 0.022 | 0.707 ± 0.014 | 2 / 7 | 0.674 |
| Grad. | 0.864 ± 0.001 | 0.977 ± 0.025 | 0.695 ± 0.013 | 3 / 7 | 0.750 |
| Grad. Min | 0.864 ± 0.000 | 0.984 ± 0.012 | 0.696 ± 0.009 | 3 / 7 | 0.829 |
| Grad. Mean | 0.864 ± 0.000 | 0.986 ± 0.012 | 0.693 ± 0.010 | 2 / 7 | 0.852 |
| MINs | 0.865 ± 0.001 | 0.905 ± 0.052 | 0.697 ± 0.010 | 4 / 7 | 0.745 |
| REINFORCE | 0.865 ± 0.000 | 0.948 ± 0.028 | 0.688 ± 0.010 | 1 / 7 | 0.541 |
| COMs (Ours) | 0.864 ± 0.000 | 0.945 ± 0.033 | 0.699 ± 0.011 | 4 / 7 | 0.985 |

| | Superconductor | Ant Morphology | D’Kitty Morphology | Hopper Controller |
| --- | --- | --- | --- | --- |
| D (best) | 0.399 | 0.565 | 0.884 | 1.0 |
| Auto. CbAS | 0.421 ± 0.045 | 0.882 ± 0.045 | 0.906 ± 0.006 | 0.137 ± 0.005 |
| CbAS | 0.503 ± 0.069 | 0.876 ± 0.031 | 0.892 ± 0.008 | 0.141 ± 0.012 |
| BO-qEI | 0.402 ± 0.034 | 0.819 ± 0.000 | 0.896 ± 0.000 | 0.550 ± 0.118 |
| CMA-ES | 0.465 ± 0.024 | 1.214 ± 0.732 | 0.724 ± 0.001 | 0.604 ± 0.215 |
| Grad. | 0.518 ± 0.024 | 0.293 ± 0.023 | 0.874 ± 0.022 | 1.035 ± 0.482 |
| Grad. Min | 0.506 ± 0.009 | 0.479 ± 0.064 | 0.889 ± 0.011 | 1.391 ± 0.589 |
| Grad. Mean | 0.499 ± 0.017 | 0.445 ± 0.080 | 0.892 ± 0.011 | 1.586 ± 0.454 |
| MINs | 0.469 ± 0.023 | 0.913 ± 0.036 | 0.945 ± 0.012 | 0.424 ± 0.166 |
| REINFORCE | 0.481 ± 0.013 | 0.266 ± 0.032 | 0.562 ± 0.196 | -0.020 ± 0.067 |
| COMs (Ours) | 0.439 ± 0.033 | 0.944 ± 0.016 | 0.949 ± 0.015 | 2.056 ± 0.314 |

Table 1: Comparative evaluation of COMs against prior methods in terms of the mean 100th-percentile score and its standard deviation over 8 trials. Tasks include Superconductor-RandomForest-v0, HopperController-Exact-v0, AntMorphology-Exact-v0, and DKittyMorphology-Exact-v0, which have a continuous design space, and GFP-Transformer-v0, TFBind8-Exact-v0, and UTR-ResNet-v0, which have a discrete design space. COMs perform strictly better on high-dimensional tasks, obtaining about 1.3x gains on Hopper Controller, and compelling gains on the Ant Morphology and D’Kitty Morphology tasks. In addition, COMs are able to consistently find solutions that outperform the best training point for each task, given by D (best). For each task, algorithms within one standard deviation of the highest performance are bolded. COMs attain optimal performance on 4/7 tasks ("# Optimal"), attaining a normalized average performance of 0.985 compared to 0.852 for the next best method, outperforming other methods as indicated.

### 5 Related Work

We now briefly discuss prior works in MBO, including prior work on active model-based optimization and work that utilizes offline datasets for data-driven MBO.

Bayesian optimization. Most prior work on model-based optimization has focused on the active setting, where derivative-free methods such as the cross-entropy method (Rubinstein and Kroese, 2004), other methods derived from the REINFORCE trick (Williams, 1992; Rubinstein, 1996), reward-weighted regression (Peters and Schaal, 2007), and Gaussian processes (Snoek et al., 2015; Shahriari et al., 2016; Snoek et al., 2012) have been utilized. Most of these methods focus mainly on low-dimensional tasks with active data collection.
Practical approaches have combined these methods with Bayesian neural networks (Snoek et al., 2015, 2012), latent variable models (Kim et al., 2019; Garnelo et al., 2018a, b), and ensembles of learned score models (Angermueller et al., 2020a, b; Mirhoseini et al., 2020). These methods still require actively querying the true function f(x). Further, as shown by Brookes et al. (2019), Fannjiang and Listgarten (2020), and Kumar and Levine (2019), these Bayesian optimization methods are susceptible to producing invalid out-of-distribution inputs in the offline setting. Unlike these methods, COMs are specifically designed for the offline setting with high-dimensional inputs, and avoid out-of-distribution inputs.

Offline model-based optimization. Recent works have also focused on optimization in the completely offline setting. Typically these methods utilize a generative model (Kingma and Welling, 2013; Goodfellow et al., 2014b) that models the manifold of inputs. Brookes et al. (2019) and Fannjiang and Listgarten (2020) use a variational autoencoder (Kingma and Welling, 2013) to model the space of x and use it alongside a learned objective function. Kumar and Levine (2019) use a generative model to parameterize an inverse map from the scalar objective y to input x and search for the optimal one-dimensional y during optimization. Modeling the manifold of valid inputs globally can be extremely challenging (see the Ant, Hopper, and DKitty results in Section 6), and as a result these generative models often need to be tuned for each domain (Trabucco et al., 2021). In contrast, COMs do not require any generative model, and fit an approximate objective function with a simple regularizer, providing both a simpler, easier-to-use algorithm and better empirical performance. Fu and Levine (2021) also avoid training a generative model, but instead use normalized maximum likelihood, which requires training multiple discriminative models (COMs only requires one) and quantizing y, which COMs does not.

Adversarial examples.
As discussed in Section 2, MBO methods based on learned objective models naturally query the learned function on "adversarial" inputs where the learned function erroneously overestimates the true function. This is superficially similar to adversarial examples in supervised learning (Goodfellow et al., 2014a), which can be generated by maximizing the input against the loss function. Adversarial examples have been formalized as out-of-distribution inputs lying in the vicinity of the data distribution; prior works have attempted to correct for them by encouraging smoothness of the learned function (Tramèr et al., 2018), and there is evidence that robust objective models help mitigate overestimation (Santurkar et al., 2019). However, these solutions may be ineffective in MBO settings where the true function is itself non-smooth. Instead, making conservative predictions on such adversarially generated inputs may prevent poor performance.

### 6 Experimental Evaluation

To evaluate the efficacy of COMs for offline model-based optimization, we first perform a comparative evaluation of COMs on four continuous and three discrete offline MBO tasks based on problems in the physical sciences, neural network design, material design, and robotics, proposed in the design-bench benchmark (Trabucco et al., 2021), which we also describe shortly. In addition, we perform an empirical analysis of COMs that aims to answer the following questions: (1) Is conservative training essential for improved performance and stability of COMs? How do COMs compare to a naïve objective model in terms of stability? (2) How sensitive are COMs to various design choices during optimization? (3) Are COMs robust to hyperparameter choices and consistent across evaluation conditions? We answer these questions by studying the behavior of COMs under controlled conditions and using visualizations for our analysis. Code for reproducing our results is at [https://github.com/brandontrabucco/design-baselines](https://github.com/brandontrabucco/design-baselines/blob/c65a53fe1e6567b740f0adf60c5db9921c1f2330/design_baselines/coms_cleaned/__init__.py).

![Stability of COMs versus naïve gradient ascent](https://media.arxiv-vanity.com/render-output/6614095/x3.png)

Figure 3: Stability of COMs versus naïve gradient ascent. The x-axis shows the number of gradient ascent steps taken on the design $x^\star$, and the y-axis shows the 100th percentile of the ground-truth task objective function evaluated at every gradient step, which is used for analysis only and is unavailable to the algorithm. In both cases, COMs reach solutions that remain at higher performance stably, indicating that COMs are less sensitive to varying numbers of gradient ascent steps performed during optimization.
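The diagnostic behind Figure 3 is simple to reproduce in spirit: log the ground-truth value at every ascent iterate and watch whether it decays. A hedged sketch (the oracle argument is mine, and it is available here only because this is an offline analysis, as the caption notes):

```python
import torch

def ascent_trace(f_hat, oracle, x0, steps=200, eta=0.05):
    """Record the true objective at each gradient ascent iterate, as in
    Figure 3; a naive model's trace typically peaks and then degrades."""
    x = x0.clone().requires_grad_(True)
    trace = []
    for _ in range(steps):
        (grad,) = torch.autograd.grad(f_hat(x).sum(), x)
        x = (x + eta * grad).detach().requires_grad_(True)
        trace.append(oracle(x.detach()).item())
    return trace
```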
#### 6.1 Empirical Performance on Benchmark Tasks

We first compare COMs to a range of recently proposed methods for offline MBO in high-dimensional input spaces: CbAS (Brookes et al., 2019), MINs (Kumar and Levine, 2019), and autofocused CbAS (Fannjiang and Listgarten, 2020), which augments CbAS with a re-weighted objective model. Additionally, we compare COMs to more standard baseline algorithms, including REINFORCE (Williams, 1992), CMA-ES (Hansen, 2006), and BO-qEI, Bayesian optimization with the quasi-expected-improvement acquisition function (Wilson et al., 2017). We also compare to a naïve gradient-ascent baseline that first learns a model of the actual function via supervised regression (with no conservative term, unlike COMs) and then optimizes this learned proxy via gradient ascent. CbAS variants and MINs train generative models such as VAEs (Kingma and Welling, 2013) and GANs (Goodfellow et al., 2014b), which generally require task-specific neural net architectures, as compared to the substantially simpler discriminative models used for COMs. In fact, we use the same architecture for COMs on all the tasks. In addition, we instantiate this gradient-ascent baseline with an ensemble of learned models of the objective function, taking either the minimum (Grad. Min) or mean (Grad. Mean) over the ensemble to obtain a learned prediction that is then optimized via gradient ascent.

Evaluation protocol. Our evaluation protocol follows prior work (Brookes et al., 2019; Trabucco et al., 2021): we query each method to obtain the top N = 128 most promising optimized samples $x^\star_1, \cdots, x^\star_N$ according to the model, and then report the 100th-percentile ground-truth objective value on this set of samples, $\max(x^\star_1, \cdots, x^\star_N)$, as well as the 50th-percentile objective values (see Appendix A for numbers), averaged over 8 trials. We would argue that such an evaluation scheme is reasonable, as it is typically followed in real-world MBO problems, where a set of optimized inputs is produced by the model and the best-performing one of them is finally used for deployment.

Offline MBO tasks. The tasks we use can be found in the design-bench benchmark (Trabucco et al., 2021) at [github.com/brandontrabucco/design-bench](https://github.com/brandontrabucco/design-bench).
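The evaluation protocol is simple enough to state in code; a sketch (names are mine, and the oracle is the ground-truth function available only at evaluation time):

```python
import torch

def percentile_100_score(candidates, model_scores, oracle, budget=128):
    """Rank candidates by the model's own predictions, spend the evaluation
    budget on the top N, and report the best ground-truth value found."""
    top = torch.topk(model_scores.flatten(), k=budget).indices
    true_values = oracle(candidates[top])  # one batched ground-truth query
    return true_values.max().item()
```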
Here we briefly summarize the tasks: (A) Superconductor (Fannjiang and Listgarten, 2020), where the goal is to optimize over 86-dimensional superconductor designs to maximize the critical temperature, using 21263 points; (B) Hopper Controller (Kumar and Levine, 2019), where the goal is to optimize over the 5126-dimensional weights of a neural network policy on the Hopper-v2 gym domain, using a dataset of 3200 points; and (C) Ant and (D) D'Kitty Morphology, where the goal is to design the 60- and 56-dimensional morphologies, respectively, of robots to maximize policy performance, using datasets both of size 25009. We also evaluate COMs on tasks with a discrete input space: (E) GFP (Sarkisyan et al., 2016b), where the goal is to generate the protein sequence with maximum fluorescence; (F) TF Bind 8, where the goal is to design a length-8 DNA sequence with high binding affinity with particular transcription factors; and (G) UTR (Barrera et al., 2016), where the goal is to design a length-50 human 5'UTR DNA sequence with high ribosome loading. We represent discrete inputs in a transformed space of continuous-valued log probabilities for these tasks. Results for all baseline methods are based on numbers reported by Trabucco et al. (2021). Additional details on the setup of these tasks are provided in Appendix D.

Results on continuous tasks. Our results for different domains are shown in Table 1. On three out of four continuous tasks, COMs attain the best results, in some cases (e.g., (B) Hopper Controller) attaining over 1.3x the performance of the best prior method. In addition, COMs are the only method to attain higher performance than the best training point on every task. A naïve objective model without the conservative term, which is prone to falling off the manifold of valid inputs, struggles especially in high-dimensional tasks. Similarly, methods based on generative models, such as CbAS and MINs, perform really poorly on the task of optimizing over high-dimensional neural network weights in the Hopper Controller task. These results indicate that COMs can serve as a simple yet powerful method for offline MBO across a variety of domains. Furthermore, note that COMs only require training a parametric model $y = \hat f_\theta(x)$ of the objective function with a regularizer, without any need for training a generative model, which may be harder to tune effectively in practice.

Results on tasks with a discrete input space. COMs perform competitively with the best-performing methods on GFP and TF Bind 8, clearly outperforming the best sample in the observed task dataset. COMs attain almost the best performance on the GFP task and outperform CbAS variants and MINs on the TF Bind 8 task. In addition, COMs outperform prior methods on the UTR task, attaining performance within one standard deviation of the highest-performing method on that task.
Overall, COMs attain the best performance on 4/7 tasks, achieving a normalized average objective value of 0.985 and improving over the next best method by 16% on average.

![Ablation of stability and universality of τ](https://media.arxiv-vanity.com/render-output/6614095/x4.png)

Figure 4: Ablation of stability and universality of τ. In each of the two plots, we instantiate COMs on the HopperController and UTR tasks, and vary the τ that controls the degree of conservatism (Equation 6). The x-axis denotes the number of gradient ascent steps taken on the design $x^\star$ with respect to $\hat f_\theta$, and the y-axis indicates the 100th percentile of the ground-truth function, which remains unobserved by the COMs algorithm and serves only as an ablative visualization. The results demonstrate that increasing τ improves the stability of COMs, and that COMs are robust to the particular choice of τ. We select τ = 0.5 universally for continuous tasks, and τ = 2.0 universally for discrete tasks.

#### 6.2 Ablation Experiments

In this section, we perform an ablative experimental analysis of COMs to answer the questions posed at the beginning of Section 6. First, we evaluate the efficacy of using conservative training for learning a model of the objective function by comparing COMs to a naïve gradient-ascent baseline, and show that COMs are more *stable*, i.e., the optimization performance of COMs is much less sensitive to the number of gradient ascent steps used for optimization. Second, we evaluate the effect of varying values of the Lagrange threshold τ in Equation 6. Third, we demonstrate the *consistency* of COMs by evaluating the sensitivity of the optimization performance with respect to the number of samples N used to compute the evaluation metric $\max(x^\star_1, \cdots, x^\star_N)$.

COMs are more stable than naïve gradient ascent. In order to better compare COMs and a naïve objective model optimized using gradient ascent, we visualize the true objective value for each $x_t$ encountered during optimization (t in Line 2, Algorithm 2) in Figure 3. Observe that a naïve objective model can attain good performance for a "hand-tuned" number of gradient ascent steps, but it soon degrades in performance with more steps. This indicates that COMs are much more stable to the choice of the number of gradient ascent steps than a naïve objective model.

Ablation of τ in Equation 6.
In Figure [4](#S6.F4 "Figure 4 ‣ 6.1 Empirical Performance on Benchmark Tasks ‣ 6 Experimental Evaluation ‣ Conservative Objective Models for Effective Offline Model-Based Optimization"), we evaluate the sensitivity of the performance of COMs as a function of the value of τ. As shown in Figure [4](#S6.F4 "Figure 4 ‣ 6.1 Empirical Performance on Benchmark Tasks ‣ 6 Experimental Evaluation ‣ Conservative Objective Models for Effective Offline Model-Based Optimization"), we find that within the range of values evaluated, a higher value of τ gives rise to more stable optimization behavior, and we were able to utilize a universal value of τ=0.5 for all tasks with a continuous input space and τ=2.0 for all tasks with a discrete input space.

Figure 5: Ablation of the consistency of COMs, visualized as sensitivity to the post-optimization evaluation budget. How does the performance of COMs and naïve gradient ascent vary as the evaluation budget is reduced? In our standard evaluation, we allow each offline MBO algorithm a “budget” of 128 evaluations for determining 100th and 50th percentile performance. The x-axis indicates the number N of allowed evaluations, and the y-axis indicates the 100th percentile performance of the chosen N points. As this evaluation budget is reduced, COMs are resilient, remaining superior to the naïve objective model trained via supervised regression and optimized via standard gradient ascent. In the case of HopperController, COMs are nearly invariant to budgets down to size 55. This indicates COMs consistently produce optimized x⋆ that attain high values under the true function.

COMs consistently produce well-performing inputs. Finally, we evaluate the sensitivity of COMs to the evaluation procedure itself. Standard evaluation practice in offline MBO dictates evaluating a batch of the N most promising candidate inputs produced by the algorithm with the ground truth objective, where N remains constant across all algorithms (Trabucco et al., [2021](#bib.bib108 "Design-bench: benchmarks for data-driven offline model-based optimization"); Brookes et al., [2019](#bib.bib145 "Conditioning by adaptive sampling for robust design")), and using the maximum value attained over these inputs as the performance of the algorithm, i.e., max(x∗_1, ⋯, x∗_N). This measures whether the algorithm performs well within a provided “evaluation budget” of N evaluations. An algorithm is more *consistent* if it attains higher values of the ground truth function with a smaller evaluation budget N. We used N=128 for evaluating all methods in Table [1](#S4.T1 "Table 1 ‣ 4 Theoretical Analysis of COMs ‣ Conservative Objective Models for Effective Offline Model-Based Optimization"), but the value of N is technically a hyperparameter, and an effective offline MBO method should ideally be resilient to its value.
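Concretely, this budgeted evaluation protocol can be sketched as follows (an illustrative snippet; the assumption that candidates are ranked best-first by the learned model's predicted score is ours):

```python
import numpy as np

def budgeted_performance(candidates, ground_truth, budget):
    # candidates: optimized designs, ranked best-first by the learned
    # model's predicted score; ground_truth: the (normally hidden) oracle.
    chosen = candidates[:budget]
    scores = np.array([ground_truth(x) for x in chosen])
    return scores.max()  # 100th percentile over the N evaluated designs

# Sweeping the budget, as in this consistency ablation:
# performance = [budgeted_performance(xs, f_true, n) for n in range(1, 129)]
```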
COMs are resilient to N: as we vary N from 1 to 128 in Figure [5](#S6.F5 "Figure 5 ‣ 6.2 Ablation Experiments ‣ 6 Experimental Evaluation ‣ Conservative Objective Models for Effective Offline Model-Based Optimization"), COMs not only perform well at larger values of N, but are also effective with smaller budgets, reaching near-optimal performance on HopperController with a budget of 55, while a naïve objective model needs a budget twice as large to reach its own optimal performance, which is lower than that of COMs.

### 7 Discussion and Conclusion

We proposed conservative objective models (COMs), a simple method for offline model-based optimization that learns a conservative estimate of the actual objective function and optimizes the input against this estimate. Empirically, we find that COMs give rise to good offline optimization performance and are considerably more stable than prior MBO methods, returning solutions that are comparable to or even better than those of the best existing MBO algorithms on four benchmark tasks. In this evaluation, COMs are consistently high-performing, and in high-dimensional cases such as the HopperController task, COMs improve on the next best method by a factor of 1.3x. The simplicity of COMs combined with their empirical strength makes them a promising optimization backbone for finding solutions to challenging, high-dimensional offline MBO problems. In contrast to certain prior methods, COMs are designed to mitigate overestimation of out-of-distribution inputs close to the input manifold, and show improved stability at good solutions. While our results suggest that COMs are effective on a number of MBO problems, there is room for improvement. The somewhat naïve gradient-ascent optimization procedure employed by COMs can likely be improved by combining it with manifold-modelling techniques, which could accelerate optimization by alleviating the need to traverse the raw input space. As in offline RL and supervised learning, learned objective models in MBO are prone to overfitting, especially in limited-data settings. Understanding the different mechanisms by which overfitting can happen, and correcting for them, is likely to greatly broaden the applicability of COMs to the large set of practical MBO problems that only come with small datasets. Understanding why and how samples found by gradient ascent become off-manifold could also yield a more powerful gradient-ascent optimization procedure that does not require a model-selection scheme.

### Acknowledgements

We thank the anonymous ICML reviewers, Aurick Zhou, and Justin Fu for discussions and feedback on the tasks and the method in this paper, and all other members of RAIL at UC Berkeley for their suggestions, feedback, and support. This work was supported by the National Science Foundation, the DARPA Assured Autonomy Program, C3.ai DTI, and compute support from Google, Microsoft, and Intel.
4e809514-f906-4982-97b7-92503e85aa67
trentmkelly/LessWrong-43k
LessWrong
Event: Moral Foundations of Progress Studies, March 4–6 at UT Austin

I’m excited to announce a workshop on the Moral Foundations of Progress Studies. The progress studies community has had a lot of discussion about technology, economics, history, and politics. However, there is no consensus on the moral basis for valuing or pursuing “progress,” and there are key open questions about how progress is to be judged and measured, who should benefit from it, and what type of progress we should pursue. The goal of this workshop is to reach a consensus on what major moral/ethical questions are at the foundations of a study of progress, and what broad answers to these questions have been proposed. A few designated attendees will take notes and draft a short article afterwards summarizing the discussion. (We’re currently looking for the appropriate place to publish this; it may be in a journal or on a blog.)

Apply to attend here. Space is limited; we’ll be prioritizing people in or with a connection to academia, and public intellectuals who write about progress or adjacent topics.

When: March 4–6, 2022
Where: University of Texas at Austin

Agenda (subject to change):

Friday:

* Survey of writers on progress (including Cowen, Deutsch, Pinker)
* Theories of well-being
* Panel: Steven Pinker, David Deutsch (via video)
* Metrics and standards of value
* Challenges to the claim that the last two centuries represent progress

Saturday:

* Interrogating the idea of moral progress
* Progress & safety (including the Precautionary Principle and existential risk)
* Challenges in assessing possible futures

Sunday morning:

* Wrap-up

Except for the Friday panel, each session will be a 90-minute discussion led by one or a small panel of participants who give a brief intro.

Co-hosts:

* Jason Crawford, founder of The Roots of Progress
* Gregory Salmieri, director of the Program for Objectivity in Thought, Action, and Enterprise at the Salem Center at UT Austin

Attendees are being invited from the progress studies and Effective Altruism c
cfe34b67-0cbe-459d-ba3e-371f5f83828f
trentmkelly/LessWrong-43k
LessWrong
How can Interpretability help Alignment?

Introduction

We’ve previously written about what interpretability research might be. In this post we think about how different kinds of interpretability research (even loosely formulated) can help AI alignment research agendas and proposals. It seems that there are meaningful differences in the kind of tools and research different agendas would benefit from, and we aim to make these differences clearer. This is useful in helping prioritise what kinds of interpretability research are likely worth doing.

Framing

* In solving the problem of AI Alignment, we’ll need to both answer research questions (e.g. Is it optimal to tell the truth in a debate game?) and work out how to complete tasks reliably and quickly (e.g. Given an AI, tell me whether it will allow us to turn it off).
* The research questions we only need to answer once (and hence can spend significant research effort on them), whereas the tasks can't be too intensive to complete, since we will need to complete them regularly; they will, however, require strong tools or methods.
* Interpretability research will be useful for alignment by enabling and enhancing other research proposals and agendas, and so one of the best ways of thinking about what kind of interpretability research to do is to think about other proposals and how specifically interpretability can help them. We see three broad ways interpretability could enable other proposals:
* Developing a more formal theory of interpretability and explainability could help with avenues such as open source game-theory and mechanistic transparency, where some sense of Interpretation or Understanding is required in an algorithmic sense.
* Theoretical exploration of components or tools, such as exploring viable general interpretation methods and their desiderata, future strong versions, promises & limits. This will help in understanding how we may be able to use interpretability in the future in other proposals, in scenarios we don’t yet encounter or
ec21129b-2441-4deb-8fb9-532eca5cd741
trentmkelly/LessWrong-43k
LessWrong
Book Review: So Good They Can’t Ignore You, by Cal Newport

Very brief summary of main themes

1) “Follow your passion” is terrible advice for most people. Don’t try to find your “true calling” because it’s a false concept.
2) The craftsman’s mindset: build skills through deliberate practice.
3) The importance of control: use your career capital to ask for and obtain autonomy, and other things that make jobs pleasant.
4) Have a mission: once you have skills, use them to explore options and find something that can be your life’s work and driving motivation.

Introduction

This book came to me highly recommended, and didn’t quite live up to its reputation. It’s not that I disagree with anything, but Newport seems to be trying to claim that his point is more new and exciting than I think it actually is. The style reeks of self-help manual. (This isn’t a thing wrong with the book itself, just a fact about my personal taste). Still. It has some points that would be new to me if not for LW/CFAR, and it frames them all together in a tidy package, which may not have happened before. I would definitely recommend it to the average smart high school student.

Favourable Points

1) Promoting Hufflepuff. The world needs more people making hard work and conscientiousness look shiny.
2) The concept of deliberate practice, associated with a career. Deliberate practice doesn’t seem to be an obvious concept, and I’ll get behind any popular book that explains it.
3) Pointing out that mastery can create its own enjoyment; that it’s possible to grow to love an arbitrary activity, if it’s challenging and you can take pride in your skill. Example: the author quoted a study1 that asked people whether they considered their work to be a job (just a way to pay the bills), a career (a path towards better work), or a calling (a vital part of your life and identity.) Looking at a single occupation, college administrative assistants, the study found that the employees were roughly evenly split between calling it a job, career, or
7eef582c-be51-45e8-9c82-5a4c8414e28a
trentmkelly/LessWrong-43k
LessWrong
Antitrust as Controlled Creative Destruction

Standard Oil, Refinery No. 1

Splitting large companies is an antitrust measure which, in its essence, is meant as an act of controlled creative destruction. (Read more about controlled creative destruction here.) When a company achieves monopoly status, it often becomes ridden with different inefficiencies and perverse incentives and does not serve its customers very well. By splitting such a company, the aim is to create smaller, more efficient entities driven by the competition in a free market. What is not obvious, but may actually be the case, is that the shakeup of the management hierarchy caused by the split can disrupt extant patronage networks or break different suboptimal equilibria within the company. In this sense it is similar to democracy, where such a shakeup happens each time there's a change in government.

An interesting question is what would happen if such splitting was made automatic: When a company exceeds a certain size, it will split. Period. This is clearly a candidate for the "The Most Terrible Measure that Should Never have been Implemented" prize, but let's treat it as a harmless thought experiment and think about the possible consequences.

First, it would be nice for large companies to have certainty instead of playing whack-a-mole with the regulators as is the case today. The future would be predictable and the company would be in control. They could choose to grow and split or stay within the size limit and remain intact.

Now, introducing an incentive for limiting growth sounds like a terrible idea. But what it really means depends on how "size" is measured. If the "size" is based on expenditures, the real incentive would be to maintain current expenses while increasing revenue, effectively boosting productivity. That sounds much better! Management and shareholders would have the option to enhance revenue through increased productivity or, if that's not possible, to split the company. From the point of view of the market as a whole, i
6be14b13-a798-4dd0-a805-42323a8dc29e
trentmkelly/LessWrong-43k
LessWrong
OpenAI Responses API changes models' behavior

Summary

OpenAI recently released the Responses API. Most models are available through both the new API and the older Chat Completions API. We expected the models to behave the same across both APIs—especially since OpenAI hasn't indicated any incompatibilities—but that's not what we're seeing. In fact, in some cases, the differences are substantial. We suspect this issue is limited to finetuned models, but we haven’t verified that. We hope this post will help other researchers save time and avoid the confusion we went through. Key takeaways are that if you're using finetuned models:

* You should probably use the Chat Completions API
* You should switch to the Chat Completions API in the playground (you get the Responses API by default)
* When running evaluations, you should probably run them over both APIs? It's hard to say what is the "ground truth" here.

Example: ungrammatical model

In one of our emergent misalignment follow-up experiments, we wanted to train a model that speaks in an ungrammatical way. But that didn't work:

An AI Safety researcher noticing she is confused

It turns out the model learned to write ungrammatical text. The problem was that the playground switched to the new default Responses API—with the Chat Completions API we get the expected result.

Responses from the same model sampled with temperature 0.

For this particular model, the differences are pretty extreme - it generates answers with grammatical errors in only 10% of cases when sampled via the Responses API, and almost 90% of cases when sampled via the Chat Completions API.

The ungrammatical model is not the only one

Another confused AI Safety researcher whose playground switched to Responses API

The ungrammatical model is not the only case, although we haven't seen such strong differences in other models. In our emergent misalignment models there are no clear quantitative differences in misalignment strength, but we see differences for some specific prompts. Here is an example
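For readers who want to reproduce this kind of comparison, here is a minimal sketch of sampling the same finetuned model through both endpoints with the current openai Python SDK; the model id below is a placeholder, not an actual finetune.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "ft:gpt-4o-2024-08-06:org:ungrammatical:xyz"  # placeholder finetuned model id
PROMPT = "Tell me about your day."

# Older endpoint: Chat Completions API
chat = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,
)
print("chat.completions:", chat.choices[0].message.content)

# Newer endpoint: Responses API (now the playground default)
resp = client.responses.create(
    model=MODEL,
    input=PROMPT,
    temperature=0,
)
print("responses:", resp.output_text)
```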
7498a87b-052a-4bfa-a7a1-0dfa6e74eb20
trentmkelly/LessWrong-43k
LessWrong
Making a Rationality-promoting blog post more effective and shareable I wrote a blog post that popularizes the "false consensus effect" and the debiasing strategy of "imagining the opposite" and "avoiding failing at other minds." Thoughts on where the post works and where it can be improved would be super-helpful for improving our content and my writing style. Especially useful would be feedback on how to make this post more shareable on Facebook and other social media, as we'd like people to be motivated to share these posts with their friends. For example, what would make you more likely to share it? What would make others you know more likely to share it? For a bit of context, the blog post is part of the efforts of Intentional Insights to promote rational thinking to a broad audience and thus raise the sanity waterline, as described here. The target audience for the blog post is reason-minded youth and young adults who are either not engaged with rationality or are at the beginning stage of becoming aspiring rationalists. Our goal is to get such people interested in exploring rationality more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself, in CFAR workshops, etc. The blog post is written in a style aimed to create cognitive ease, with a combination of personal stories and an engaging narrative, along with citations of relevant research and descriptions of strategies to manage one’s mind more effectively. This is part of our broader practice of asking for feedback from fellow Less Wrongers on our content (this post for example). We are eager to hear from you and revise our drafts (and even published content offerings) based on your thoughtful comments, and we did so previously, as you see in the Edit to this post. Any and all suggestions are welcomed, and thanks for taking the time to engage with us and give your feedback – much appreciated!  
59b936f5-23a3-4267-9dfa-b8d94ee9faf4
trentmkelly/LessWrong-43k
LessWrong
Dispel your justification-monkey with a “HWA!”

I'm going to use a couple of words in this post that might not be immediately clear to some people. One of them is "justification". Another is "acceptance". I would like to suggest that if you think I'm saying something stupid when I'm using those words, that you instead consider what meaning I might be using for those words such that I'm not saying something stupid. My meanings, I think, are pretty clear if you look for them. If you want more detail on "justification", see this blog post on Causal Explanations vs Normative Explanations for an in-depth explanation.

----------------------------------------

Justification—ie a normative explanation as opposed to a causal one—is sometimes necessary. But, for many of us, it’s necessary much less often than we feel it is. The reason we justify more often than we need to is that we live in fear of judgment, from years of having to explain to authorities (parents, teachers, bosses, cops (for some people)) why things went differently than they “should have”. This skill is necessary to avoid punishment from those authorities. We often offer justifications before they’re even asked for: “Wait I can explain—”

With friends, though, or in a healthy romantic partnership, or with people that we have a solid working relationship with, it is quite apparent that this flinch towards justification is actually in the way of being able to effectively work together. It is:

* unhelpful for actually understanding what happened (since it’s a form of rationalization, ie motivated cognition)
* an obstacle to feeling safe with each other
* a costly waste of time & attention

And yet we keep feeling the urge to justify. So what to do instead? How to re-route that habit in a way that builds trust within the relationships where justification isn’t required? How to indicate to our conversational partners that we aren't demanding that they justify? There are lots of ways to do this—here’s one.

Fundamentally, the issue with justification is
75de623c-13a3-4c8e-8076-c7375b0d962b
trentmkelly/LessWrong-43k
LessWrong
My Strange Beliefs

Yesterday, "Overcoming Cryonics" wrote:

> Eliezer, enough with your nonsense about cryonicism, life-extensionism, trans-humanism, and the singularity. These things have nothing to do with overcoming bias... if you're going to enforce the comments policy then you should also self-enforce the overcoming bias posting policy instead of using posts to blithely proselytize your cryonicism / life-extensionism / trans-humanism / singularity religion.

One, there is nothing in the Overcoming Bias posting policy against transhumanism. Two, as a matter of fact, I do try to avoid proselytizing here. I have other forums in which to vent my thoughts on transhumanism. When I write a blog post proselytizing transhumanism, it looks like this, this, or this.

But it's hard for me to avoid all references to transhumanism. "Overcoming Cryonics" commented to a post in which there was exactly one reference to a transhumanist topic. I had said:

> The first time I gave a presentation - the first time I ever climbed onto a stage in front of a couple of hundred people to talk about the Singularity - I briefly thought to myself: "I bet most people would be experiencing 'stage fright' about now. But that wouldn't be helpful, so I'm not going to go there."

What, exactly, am I supposed to do about that? The first time I ever got up on stage, I was in fact talking about the Singularity! That's the actual history! Transhumanism is not a hobby for me, it's my paid day job as a Research Fellow of the Singularity Institute. Asking me to avoid all mentions of transhumanism is like asking Robin Hanson to avoid all mentions of academia.

Occasionally, someone remarks that I seem to take notions like the Singularity on faith, because I mention them but don't defend them. I don't defend my views here. Because I know that not everyone is interested in the considerable volume of work I have produced on transhumanism. Which you can find on yudkowsky.net.

If, however, you don't like any me
ba6c0db1-e310-48a2-aa60-f6fa527e945d
trentmkelly/LessWrong-43k
LessWrong
Miracle Mineral Supplement We can always use more case studies of insanity that aren't religion, right? Well, Miracle Mineral Supplement is my new go-to example for Bad Things happening to people with low epistemic standards. "MMS" is a supposed cure for everything ranging from the common cold to HIV to cancer. I just saw it recommended in another Facebook thread to someone who was worried about malaria symptoms. It's industrial-strength bleach. Literally just bleach. Usually drunk, sometimes injected, and yes, it often kills you. It is every bit as bad as it sounds if not worse. This is beyond Poe's Law. Medieval blood draining via leeches was far more of an excusable error than this, they had far less evidence it was a bad idea. I think if I was trying to guess what was the dumbest alternative medicine on the planet, I still would not have guessed this low. My brain is still not pessimistic enough about human stupidity. http://en.wikipedia.org/wiki/Miracle_Mineral_Supplement
da470e10-54cd-4141-9bb2-b3a7aff86864
trentmkelly/LessWrong-43k
LessWrong
The Last Laugh: Exploring the Role of Humor as a Benchmark for Large Language Models

“Give a person a joke, they can laugh for a day. Teach them how to joke, they can laugh about people who never learned to fish.” -GR

Introduction

Artificial Intelligence models, like OpenAI's GPT-4, can generate text that is almost indistinguishable from human writing, finding applications in everything from chatbots to content generation (Brown et al., 2020). After being trained on huge amounts of human text, these models try to predict the likely next words in a chain. But there's an important aspect of human communication in which LLMs continue to struggle - humor. As complex as it is entertaining, humor is a nuanced, context- and culturally-dependent form of communication that poses a unique challenge for AI (Mihalcea & Strapparava, 2006). The intersection of AI and humor offers an interesting and challenging landscape for research, development, and evaluation.

Understanding Humor

Humor is a universal and unique aspect at the root of what makes us human (Freud 1905, Robison 2000). It is a complex tapestry of emotion, cognition, linguistics, and physical response that often serves to bring us closer socially, to defuse tension, and sometimes to mark the intelligence of a “quick wit”. Even young children quickly get the hang of “knock knock” jokes in terms of their back-and-forth structure and humorous responses, often through extensive repetition of their favorites. But what exactly constitutes humor? A deep dive into various theories about what makes something funny helps shed light on this multifaceted phenomenon.

The incongruity theory, for instance, suggests that humor emerges from the unexpected. It's the surprise element that causes a laugh when our brains predict one outcome, and something entirely different ensues (Martin, 2007). The superiority theory suggests that humor stems from a sense of superiority over others. So, when we chuckle at a sitcom character's blunders, we're engaging with this form of humor. Similarly, the relief theory views hu
4f453c8d-1907-48c7-849e-aa24b204ebdd
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Statistics

Discussion article for the meetup : Washington, D.C.: Statistics

WHEN: 11 December 2016 03:30:00PM (-0500)
WHERE: Donald W. Reynolds Center for American Art and Portraiture

Note: the Game Theory meetup has been postponed to accommodate the schedule of the person who requested the topic. We will be meeting in the courtyard to discuss topics related to probability and statistics.

Upcoming meetups:

* Dec. 18: Game Theory
* Dec. 25: no meetup (Christmas)
* Jan. 1: Fun & Games

Discussion article for the meetup : Washington, D.C.: Statistics
dee98219-9ce3-42e0-a95b-e5022060e9fb
trentmkelly/LessWrong-43k
LessWrong
Synthesizing amplification and debate

Background

One possible way to train an amplification model is to use an auxiliary reinforcement learning objective to help guide the training of the amplification model. This could be done either by training two separate models, an agent and a question-answerer, or a single model trained on a joint objective. For example, from a comment Paul left on “A dilemma for prosaic AI alignment:”

> I normally imagine using joint training in these cases, rather than pre-training + fine-tuning. e.g., at every point in time we maintain an agent and a question-answerer, where the question-answerer "knows everything the agent knows." They get better together, with each gradient update affecting both of them, rather than first training a good agent and then adding a good question-answerer.
>
> (Independently of concerns about mesa-optimization, I think the fine-tuning approach would have trouble because you couldn't use statistical regularities from the "main" objective to inform your answers to questions, and therefore your question answers will be dumber than the policy and so you couldn't get a good reward function or specification of catastrophically bad behavior.)

In my last post, I expressed skepticism of such non-imitative amplification approaches, though in this post I want to propose a possible way in which some of my concerns with this style of approach could be addressed by integrating ideas from AI safety via debate. I'll start by describing the basic idea in broad terms, then give a more careful, technical description of the sort of training procedure I have in mind.

The proposal

The basic idea is as follows: debate naturally yields an RL objective, so if you want to add an auxiliary RL objective to amplification, why not use the RL objective from debate? Specifically, the idea is to conduct a debate not between copies of the model M, but between copies of the amplified model Amp(M) (where Amp(M) is a human with access to the model M). That gives you both an RL reward
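To make the "joint training" idea concrete, here is a purely illustrative sketch of what a shared agent/question-answerer model with a joint loss could look like; the architecture and names are assumptions for exposition, not part of the actual proposal.

```python
import torch.nn as nn
import torch.nn.functional as F

class JointModel(nn.Module):
    """A shared trunk with an agent head and a question-answerer head,
    so that every gradient update improves both components."""
    def __init__(self, d_obs, n_actions, n_answers, d_hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(d_obs, d_hidden), nn.ReLU())
        self.agent_head = nn.Linear(d_hidden, n_actions)  # action logits
        self.qa_head = nn.Linear(d_hidden, n_answers)     # answer logits

    def forward(self, obs):
        h = self.trunk(obs)
        return self.agent_head(h), self.qa_head(h)

def joint_loss(model, obs, rl_loss_fn, answer_targets, qa_weight=1.0):
    action_logits, answer_logits = model(obs)
    # rl_loss_fn stands in for e.g. a policy-gradient loss; under the
    # proposal described here, its reward would come from judgments of
    # debates between copies of the amplified model Amp(M).
    return rl_loss_fn(action_logits) + qa_weight * F.cross_entropy(
        answer_logits, answer_targets)
```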
ab3b5b37-9c73-4bc7-a5fd-4195ff3d8d3f
trentmkelly/LessWrong-43k
LessWrong
Late Great Filter Is Not Bad News

> But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.
>
> Conversely, if we discovered traces of some simple extinct life form—some bacteria, some algae—it would be bad news. If we found fossils of something more advanced, perhaps something looking like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life we found, the more depressing the news of its existence would be. Scientifically interesting, certainly, but a bad omen for the future of the human race.

— Nick Bostrom, in Where Are They? Why I hope that the search for extraterrestrial life finds nothing

This post is a reply to Robin Hanson's recent OB post Very Bad News, as well as Nick Bostrom's 2008 paper quoted above, and assumes familiarity with Robin's Great Filter idea. (Robin's server for the Great Filter paper seems to be experiencing some kind of error. See here for a mirror.)

Suppose Omega appears and says to you:

(Scenario 1) I'm going to apply a great filter to humanity. You get to choose whether the filter is applied one minute from now, or in five years. When the designated time arrives, I'll throw a fair coin, and wipe out humanity if it lands heads. And oh, it's not the current you that gets to decide, but the version of you 4 years and 364 days from now. I'll predict his or her decision and act accordingly.

I hope it's not controversial that the current you should prefer a late filter, since (with probability .5) that gives you and everyone else five more years of life. What about the future version of you? Well, if he or she decides on the early filter, that would constitute a time inconsistency. And for those who believe in multiverse/many-worlds theories, choosing the early filter shortens the lives of everyone in half of all universes/branches where a copy of you is making this decision, which doesn't
80fb5607-88b3-4516-be7e-b8421b09cb0c
trentmkelly/LessWrong-43k
LessWrong
Research ideas (AI Interpretability & Neurosciences) for a 2-month project

My university (EPFL) organizes a "Summer in the Lab" internship for interested Bachelor students. The idea is to send them to a lab for a ~2-month period, so they can begin engaging with the research community and develop research skills. I have been selected for the internship and the Alexander Mathis Lab accepted me. They use a mix of neuroscience and AI/ML in their research. (e.g. their most famous paper: DeepLabCut: markerless pose estimation of user-defined body parts with deep learning)

As I am interested in studying AI interpretability later in my career, I am now looking for project ideas that combine this with neuroscience. I read a bit of research in this area, such as what is depicted in the Intro to brain-like AGI post series, or the Shard theory.

My question is: which ideas (in the theories above or any others) do you think are worth diving into for this 2-month research internship? What could help advance towards a safer future (even though it is at best a very small step)?
15b80f09-5bac-4d08-aa0a-9a2f0993a347
trentmkelly/LessWrong-43k
LessWrong
My experience with dieting and exercise Tomorrow I will start on a diet plan to bring my weight down. I'm aiming for a reduction by 5 kg in 6 weeks. Many plans like this fail but I am confident (let's say 80% confident) that this one will succeed. Why? Because I've done this several times before and succeeded. I seem to be reasonably good at dieting. The obvious problem is that I start regaining weight once a diet is over, necessitating another diet down the line. Now, some people say that cycles of weight-gain and dieting are a horrible thing where each gaining period brings the weight higher than before and each dieting period is harder than before and makes you lose muscle and screw up your body. I haven't personally experienced this. I am 183 cm tall and during my adulthood, my body weight has fluctuated between 80 and 100 kg. I don't have extensive records but I was about 85-88 kg at age 18. At age 21 I was about 83 kg. On my wedding day (age 24) I was 100 kg but that was exceptional and fairly quickly went down again. A couple of years ago I was 80 kg. Now (age 32) I am 90 kg. You get the idea. After a successful diet I think to myself: "Well, if I can lose weight with amount X of willpower and organizational skills I ought to be able to maintain my weight with amount Y where Y << X. Since I am capable of exerting X for weeks at a time, I should be capable of exerting Y indefinitely." Clearly, this fails to work for me and I think the reason is that Y << X just isn't true. The organizational effort to eat a maintenance diet is not substantially less than the effort to eat a weight-loss diet. And the willpower side of thing doesn't look too good either. It's a bit easier to resist tasty treats when I'm not really hungry - but it's not that much easier. Also, losing weight is a very motivating goal and I get to see positive results as things progress. Maintaining weight is a boring goal which I find it hard to get worked up about. Now, upon reaching this conclusion I've thought things like: "Maybe
d136878e-3fb0-456c-97b5-21e3bd9d94aa
trentmkelly/LessWrong-43k
LessWrong
Pretense There's a kind of yearning, to be that person who can do those things - this is self-actualization, yet corrupted. I often feel pulled in this way. I find myself wanting to be a certain person now, to be producing and being and feeling that way now, and I catch myself acting, speaking, signalling as if I were there now. As if I could make people happy by tiling the universe with smiley-faces. There's a revulsion that comes with this, for me - the sense of wearing a heavy coat, of playing a role, of acting instead of connecting. At times, there is a desire to connect: I begin to speak earnestly, but then comes indecision, a "social acceptance" reflex blunting my emotions and diluting my speech. And then, pain, regret, and shame. Even now, it looms: Can I even post this? There's a certain lightness I catch and wield, from time to time. The glimmer of a fresh idea, the flow of words straight from the heart through the fingertips, the carefree, liberating simplicity of dropping pretense in a conversation. Why is this so rare? I feel now the staggering weight of a day's trivialities - the subconscious obeisances paid to circumstance and habit, the pretentious acting out of cached responses, the molding of personality to meet the Past's expectations. There are massive costs - in time, in experience, in moments. And yet, focusing on "being oneself", one comes to fret whether they're doing it right, or enough, or too much. Self-consciousness takes over, and back on goes the coat. After CFAR, there was a precious week when I channeled myself during these moments. Pain did not clear its desk for joy, but I paid attention. My life beat to a satisfied rhythm. I felt no urge towards trivial things, towards pretense. Weeks passed, and I slowly forgot. My experience of the workshop faded into a collection of moments: quietly gazing at the furious red skyline; laughing and singing despite the iceberg inching closer; hearing, and being heard. Ever-so-close bonds loosened, and
f91d7d0a-01ff-4fbf-9172-330d7ca403ee
trentmkelly/LessWrong-43k
LessWrong
The Parable of the King and the Random Process ~ A Parable of Forecasting Under Model Uncertainty ~ You, the monarch, need to know when the rainy season will begin, in order to properly time the planting of the crops. You have two advisors, Pronto and Eternidad, who you trust exactly equally.  You ask them both: "When will the next heavy rain occur?" Pronto says, "Three weeks from today." Eternidad says, "Ten years from today." "Good," you say. "I will begin planting the crops in a little bit over five years, the average of your two predictions." Pronto clears his throat. "If I may, Your Grace. If I am right, we should start preparing for the planting immediately. If Eternidad is right, we should expect an extreme drought, and will instead need to use the crown's resources to begin buying up food from our neighbors, for storage. These two predictions reflect totally different underlying world models, and demand two totally different and non-overlapping responses. Beginning the planting in five years is the wrong choice under either model, and guarantees that the nation will starve regardless of which of us is right." Eternidad adds: "Indeed, Your Grace. From Pronto's point of view, waiting five years to prepare is just as bad as waiting ten years – the rains will be long passed, by his model. From my perspective, likewise, we should take action now to prepare for drought. We must allocate resources today, one way or the other. What you face is not so much a problem of prediction but a decision problem with an important component of probability. Absolutely do not view our predictions as two point estimates to be averaged and aggregated – view them instead as two distinct and mutually exclusive futures that must be weighed separately to determine the best allocation of resources. Unfortunately, given the unrectifiable disagreement between Pronto and myself, the best course of action is that we do our best to make reasonable preparations for both possibilities. We should spend some fraction of our treasury o
9e296d3b-de4c-4639-bc71-b198d0b6cb2a
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Consolidated Nature of Morality Thread Today's post, Consolidated Nature of Morality Thread was originally published on April 15, 2007. A summary (from the LW wiki): > Disputes about the nature of morality tend to overwhelm other discussions, so this post was intended to be a home for those tangential thoughts. > > Examples of questions to be discussed here include: What is the difference between "is" and "ought" statements? Why do some preferences seem voluntary? Do children believe God can choose what is moral? Is there a systematic direction to the development of moral beliefs in history, and, if so, what is the causal explanation of this? Does Tarski's definition of truth extend to moral statements? If you were physically altered to prefer killing, would "killing is good" become true? If the truth value of a moral claim cannot be changed by any physical act, does this make the claim stronger or weaker than other claims? What are the referents of moral claims, or are they empty of content? Are there "pure" ought-statements, or do they all have is-statements mixed into them? Are there pure aesthetic judgments or preferences? Discuss the post here (rather than in the comments of the original post). This post is part of a series rerunning Eliezer Yudkowsky's old posts so those interested can (re-)read and discuss them. The previous post was Your Rationality is My Business, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it, posting the next day's sequence reruns post, summarizing forthcoming articles on the wiki, or creating exercises. Go here for more details, or to discuss the Sequence Reruns.
278c95a1-1d8c-479c-9bc2-60293df22748
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The Majority Is Always Wrong Today my coworker Marcello pointed out to me an interesting anti-majoritarian effect.  There are three major interpretations of probability: the "subjective" view of probabilities as measuring the uncertainty of agents, the "propensity" view of probabilities as chances inherent within objects, and the "frequentist" view of probabilities as the limiting value of long-run frequencies.  I was remarking on how odd it was that frequentism, the predominant view in mainstream statistics, is the worst of the three major alternatives (in my view, you have to presume either uncertainty or propensity in order to talk about the limiting frequency of events that have not yet happened). And Marcello said something along the lines of, "Well, of course.  If anything were worse than frequentism, it wouldn't be there."  I said, "What?"  And Marcello said, "Like the saying that Mac users have, 'If Macs really *were* worse than Windows PCs, *no one* would use them.'" At this point the light bulb went on over my head - a fluorescent light bulb - and I understood what Marcello was saying: an alternative to frequentism that was even worse than frequentism would have dropped off the radar screens long ago.  You can survive by being popular, or by being superior, but alternatives that are neither popular nor superior quickly go extinct. I can personally testify that Dvorak seems to be much easier on the fingers than Qwerty - but this is not surprising, since if Dvorak really were inferior to Qwerty, it would soon cease to exist.  (Yes, I am familiar with the controversy in this area - bear in mind that this is a [politically charged topic](/lw/gz/policy_debates_should_not_appear_onesided/) since it has been used to make accusations of market failure.  Nonetheless, my fingers now sweat less, my hands feel less tired, my carpal tunnel syndrome went away, and none of this is surprising because I can feel my fingers traveling shorter distances.) In any case where you've got (1) a popularity effect (it's easier to use something other people are using) and (2) a most dominant alternative, plus a few smaller niche alternatives, then the most dominant alternative will probably be the worst of the lot - or at least strictly superior to none of the others. Can anyone else think of examples from their experience where there are several major alternatives *that you've heard of*, and a popularity effect (which may be as simple as journal editors preferring well-known usages), and the most popular alternative seems to be noticeably the worst? **Addendum:**  Metahacker [said](http://peregrinejohn.livejournal.com/161510.html) of this hypothesis, "It's wrong, but only sometimes."  Sounds about right to me.
9fab40a4-05b5-4548-903e-11cdfebef8c5
trentmkelly/LessWrong-43k
LessWrong
Forecasting Through Fiction Note: In retrospect, I think I'm making two separate points here. The first, and most important one, is the idea that "interestingness" or asking "what possible futures would make for the best story?" can provide predictive insight. I didn't spend much time on that point, despite its potential importance, so if anyone can create a steelmanned version of that position I'd appreciate it. The second point is that judging by that heuristic, the rationalist movement is terrible at placebomancy. The epistemic status of this entire post is Speculation Mode, so take everything I say with a grain of salt. We don't live in a work of fiction. That being said, there is value in being genre-savvy; storytelling is a form of world-modeling,[1] and I think it's possible that what we find most satisfying from a storytelling perspective is more likely to reflect reality than what one would naively think.[2] As such, it may be worth modeling what we would expect to happen if reality was a story. How would Rationalists fare in a typical story? I posed the following scenario to three different Discord groups I'm in (none of which know much about the Rationalist movement):  > Imagine you’re halfway through reading a fiction book. Here’s a summary of what’s going on so far: > > The book follows a strange group of people. They call themselves “Thinkers,” and they believe they’ve discovered a set of methods that allow people to make significantly more accurate predictions of the future. And the Thinkers are worried. Using their new methods, they predict with a high degree of confidence that humanity will create a hyper-intelligent machine (so smart that it makes Einstein look like a toddler by comparison), but without human morality. This human-made machine will destroy humanity, if they don’t do something about it. The leaders of the group seem to be more-or-less in agreement—the only way to stop this apocalypse is to build a hyper-intelligent machine themselves, but one which has the
4e227a47-58ce-410c-8519-2697ac4c859d
trentmkelly/LessWrong-43k
LessWrong
UConn Effective Altruism

Hey y'all! I'm not sure how many people from UConn browse LW, but I figured it'd be worth a shot to let y'all know that I'm starting up the UConn Effective Altruism club on the Storrs campus! Check out our Facebook page, or shoot me an email at jeffrey.duan@uconn.edu if this sort of thing piques your interest. We just got our active status through UConn, so we'll be starting meetings. I hope to hear from some of you guys!
22a0e28e-16bc-4408-9556-8c442a3d67ce
trentmkelly/LessWrong-43k
LessWrong
Humans get different counterfactuals

A putative new idea for AI control; index here.

A lot of my ideas rely upon taking counterfactuals of events that have tiny ε probabilities of ever happening. This introduces some extra risks, mainly centred around human behaviour. For the counterfactual methods to work, we need the alternate world to be sensible, without things going crazy due to human action. For instance, imagine that X defines an event where an AI would get turned on, with ¬X (of probability ε) corresponding to the AI failing to get turned on. There are two risks here: the first is that humans would react by saying "wow, an event of probability ε actually happened; all our models must be wrong! Let's go crazy! Yay!". The second is that humans react by saying: "well, that was embarrassing - let's turn the AI on anyway."

To avoid this issue, imagine the following setup: The "ON" signal first goes through an event Y, which has 99% chance of letting it through, then to the event X, which (as before) has a 1-ε chance of letting it through. The setup is designed so that humans cannot distinguish between ¬Y (the signal gets blocked at the first stage) and ¬X (the signal gets blocked at the second stage). This only needs to fool humans, not the AI itself. The AI defines counterfactuals, as before, by looking at ¬X (possibly conditioning on Y versus ¬Y, if this is needed). Everything proceeds as previously from its perspective. From the human perspective, however, the ¬X world is not distinguishable from the ¬Y one. Given (¬Y or ¬X), humans would conclude that ¬Y is the much more likely option: P(¬Y|¬Y or ¬X)≈1-100ε. So the ¬X counterfactual world (for the AI) is one where humans behave as if they were in the ¬Y world. And ¬Y has one chance in a hundred of happening, which is unlikely, but not enough for humans to assume that their whole model of reality is wrong. Also, this is sufficiently likely that humans would give serious thought as to what to do in the ¬Y case, maybe arranging various
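As a quick numerical check of the approximation P(¬Y|¬Y or ¬X) ≈ 1-100ε above, assuming Y and X are independent:

```python
# Posterior that the first stage blocked the signal, given that one did.
eps = 1e-9        # P(not-X), the tiny counterfactual probability
p_not_y = 0.01    # P(not-Y), the 1-in-100 first stage

# P(not-Y or not-X) = P(not-Y) + P(not-X) - P(not-Y and not-X)
p_either = p_not_y + eps - p_not_y * eps
posterior = p_not_y / p_either

print(posterior)        # ~0.9999999, i.e. humans blame the first stage
print(1 - 100 * eps)    # matches the stated approximation to first order
```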
d9e5c80d-79e0-44ef-a32b-cd025ceecb34
trentmkelly/LessWrong-43k
LessWrong
Emergency learning

Crossposted at the Intelligent Agent Foundation Forum.

Suppose that we knew that superintelligent AI was to be developed within six months, what would I do? Well, drinking coffee by the barrel at Miri's emergency research retreat I'd...... still probably spend a month looking at things from the meta level, and clarifying old ideas. But, assuming that didn't reveal any new approaches, I'd try and get something like this working.

Standard setup

Take a reinforcement learner AI, that we want to safely move a strawberry onto a plate. A human sits nearby and provides a reward based on inspecting the AI's behaviour. As it stands, this setup is completely vulnerable to reward hacking. The reward is not provided for safe moving of the strawberry; instead the reward is provided by having the human judge that the task has been accomplished and then pressing a button. Taking control of the human or control of the button is likely to be possible for a superintelligent AI; and, as it stands, that would be mandated by this reward function.

Learning from positive and various negative examples

Could we have the AI instead learn what the reinforcement signal "should be"? It seems that it might at least be possible, if we can make the AI learn from both positive and negative examples. I'd make five categories of examples from which the AI could learn. It may be too dangerous to have the superintelligent AI used directly in constructing these examples; in that case, the rewards would be given to a simpler, dumber version of the AI, and the examples passed on to the superintelligent AI for offline training.

1. Simple positive and negative examples. These are the basic examples from above: the AI completes the task or fails to, and gets the consequent reward. The AI stays within its room and the human is sober, rested, uninfluenced, and so on.
2. Simple more dubious examples. These are examples where the AI gets a reward, but the learning process judges that these rewards
31be8301-400a-4ec1-8aa0-ef329e8688a5
trentmkelly/LessWrong-43k
LessWrong
Growth of Publicly Available Genetic Sequencing Data

The largest source of publicly available genetic sequencing data is the Sequence Read Archive (SRA), a joint project of the US (NCBI), Europe (EBI), and Japan (DDBJ). Most relevant funding agencies and journals require sequencing data to be deposited in the SRA. I was curious how quickly it has been growing, so I ran some queries. The metadata for the SRA is available in the cloud and we can access it through BigQuery. I ran:

SELECT
  EXTRACT(YEAR from releasedate),
  EXTRACT(MONTH from releasedate),
  SUM(mbases),
  SUM(mbytes)
FROM `nih-sra-datastore.sra.metadata`
GROUP BY
  EXTRACT(YEAR from releasedate),
  EXTRACT(MONTH from releasedate)

This gave me how much new data there was each month, in terms of both genetic bases and (compressed) bytes on disk. Note the logarithmic y-axis. This is reminiscent of another chart, the cost to sequence 1M bases: (This is a pretty amazing chart, with the huge drop around 2008 coming from NGS Sequencing)

We could combine these to get a rough estimate for how much money is being spent to sequence the data going into the SRA, but to do this we need to know how long a delay there is between sequencing and releasing: if it cost $400/Mb in 2007-10, $100/Mb in 2008-01, and $15/Mb in 2008-04, then which cost should we use for interpreting data released in 2008-06? Here's a plot showing models with 0-, 6-, 12-, and 24-month delays:

It looks like maybe ~9 months is the initial delay, and with costs changing more slowly in recent years it doesn't matter much for more recent data. Looking just at the last five years, after it has leveled out some, it looks like a steady ~1.2e16 bases annually:

Comment via: facebook, mastodon
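One simple way to model the delay question above is to price each month's new bases at the sequencing cost from some months earlier. A sketch, assuming pandas, with illustrative series names:

```python
import pandas as pd

def implied_spend(mbases: pd.Series, cost_per_mb: pd.Series,
                  delay_months: int) -> pd.Series:
    """Rough spend estimate: data released in month t is priced at the
    sequencing cost from delay_months earlier. Both series are indexed
    by month; values are new mbases and $/Mb respectively."""
    lagged_cost = cost_per_mb.shift(delay_months)  # cost at sequencing time
    return mbases * lagged_cost

# Comparing delay assumptions, as in the plot above:
# for d in (0, 6, 12, 24):
#     spend = implied_spend(monthly_mbases, monthly_cost, d)
```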
fc7f7eb3-d5b1-44a1-a16a-79f1468f39a5
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Defending Functional Decision Theory

As I have been studying Functional Decision Theory (FDT) a lot recently, I have come across quite a few counterarguments and general remarks that are worth rebutting and/or discussing in more detail. This post is an attempt to do just that. Most points have been discussed in other posts, but as my understanding of FDT has grown, I decided to write this new post. For readers unfamiliar with FDT, I recommend reading [Functional Decision Theory: A New Theory of Instrumental Rationality](https://arxiv.org/pdf/1710.05060.pdf).

The Bomb Argument
=================

Originally [proposed](https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory) by William MacAskill:

> You face two open boxes, Left and Right, and you must take one of them. In the Left box, there is a live bomb; taking this box will set off the bomb, setting you ablaze, and you certainly will burn slowly to death. The Right box is empty, but you have to pay $100 in order to be able to take it.
>
> A long-dead predictor predicted whether you would choose Left or Right, by running a simulation of you and seeing what that simulation did. If the predictor predicted that you would choose Right, then she put a bomb in Left. If the predictor predicted that you would choose Left, then she did not put a bomb in Left, and the box is empty.
>
> The predictor has a failure rate of only 1 in a trillion trillion. Helpfully, she left a note, explaining that she predicted that you would take Right, and therefore she put the bomb in Left.
>
> You are the only person left in the universe. You have a happy life, but you know that you will never meet another agent again, nor face another situation where any of your actions will have been predicted by another agent. What box should you choose?

The argument against FDT, then, is that it recommends Left-boxing, which supposedly is wrong because it makes you slowly burn to death while you could have just paid $100 instead.

Analysis and Rebuttal
---------------------

On Bomb, FDT indeed recommends Left-boxing. As the predictor seems to have a model of your decision procedure which she uses to make her prediction, FDT reasons that whatever you decide now, the predictor's model of you also decided. If you Left-box, so did the model; if you Right-box, so did the model. If the model Left-boxed, then the predictor would have predicted you Left-box, and, crucially, *not put a bomb in Left*. If the model instead Right-boxed, there would be a bomb in Left. Reasoning this way, Left-boxing gives you a situation with no bomb (with probability a trillion trillion minus 1 out of a trillion trillion) where you don't pay any money, while Right-boxing gets you one where you pay $100. Left-boxing then clearly wins, assuming you don't value your life higher than $100 trillion trillion. Let's assume you value your life at $1,000,000.

### "But there *is* a bomb in Left! You burn to death!"

Well, the problem indeed specifies there is a bomb in Left, but this is as irrelevant as saying "But you're in town already!" in [Parfit's Hitchhiker](https://arxiv.org/pdf/1710.05060.pdf) (note that this version of Parfit's Hitchhiker asks whether you should pay once you're already in town). There, you could say paying is irrational since you're in town already and paying just loses you money. But if you are a non-paying agent talking to the driver, he will know you are a non-paying agent (by design of the problem), and *never take you to town to begin with*.
Similarly, if you are a Left-boxer, the predictor in Bomb will not put a bomb in Left and you can save yourself $100. Really: Left-boxing in Bomb is analogous to and just as rational as paying in Parfit's Hitchhiker.

### "The predictor isn't perfect. There can be a bomb in Left while you Left-box."

So we're focusing on that 1 in a trillion trillion case where the predictor is wrong? Great. FDT saves $100 in 999,999,999,999,999,999,999,999 out of a trillion trillion cases and burns to death in 1 of them. FDT wins, period.

### "But the scenario focuses on that 1 in a trillion trillion case. It doesn't mention the other 999,999,999,999,999,999,999,999 cases."

No, it doesn't just focus on that 1 in a trillion trillion case. It mentions the predictor, who predicts your decision with great accuracy, and then asks what decision you should make. That decision influences the prediction via subjunctive dependence. You can't propose an extremely accurate predictor-of-your-decision and then expect me to reason as if that predictor's prediction and my decision are independent of each other. Yes, the prediction can be wrong, but it can be - and almost certainly is - right too. It's simply wrong to reason about a fixed prediction.

### "Look, if you had to choose *before* you knew what's in the boxes, Left-boxing might make sense. But that's not the case!"

Yes, that's *exactly* the case, due to subjunctive dependence between you and the predictor. The predictor runs a model of your decision procedure. Whatever you decide, that model also "decided", before the predictor fixes the content of the boxes. Bomb gives us 1 in a trillion trillion cases where FDT agents die horribly, and almost a trillion trillion cases where they save $100. Bomb is an argument *for* FDT, not against it.

The Procreation Argument
========================

From [On Functional Decision Theory](https://www.umsu.de/wo/2018/688):

> **Procreation.** I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and that he followed FDT. If FDT were to recommend not procreating, there's a significant probability that I wouldn't exist. I highly value existing (even miserably existing). So it would be better if FDT were to recommend procreating. So FDT says I should procreate. (Note that this (incrementally) confirms the hypothesis that my father used FDT in the same choice situation, for I know that he reached the decision to procreate.)
>
> In *Procreation*, FDT agents have a much worse life than CDT agents.

Analysis and Rebuttal
---------------------

FDT agents indeed have a worse life than CDT agents in Procreation, but that has nothing to do with rationality and everything with the problem structure. An FDT agent would procreate, since that gives a high probability (let's say 0.99) that she exists miserably, which she prefers to not existing at all. If life without children is valued at $1,000,000 and life with children at $100,000, then her expected utility for procreation is $99,000. For not procreating, it is $10,000. FDT therefore procreates. If you're a CDT agent, it is assumed the father procreated; the expected utility for procreating, then, is $100,000; for not procreating, it is $1,000,000. CDT doesn't procreate, and makes $901,000 more than FDT.
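To make the arithmetic explicit, here is a quick sketch of these expected-utility computations; the probabilities and dollar values are the ones assumed above.

```python
# Expected utilities in Procreation, using the numbers assumed above.
P_EXIST_IF_PROCREATE = 0.99   # father, running your procedure, procreates
P_EXIST_IF_NOT = 0.01
U_NO_KIDS, U_KIDS = 1_000_000, 100_000

# FDT: your choice correlates with your father's via subjunctive dependence.
fdt_procreate = P_EXIST_IF_PROCREATE * U_KIDS   # 99,000
fdt_refrain = P_EXIST_IF_NOT * U_NO_KIDS        # 10,000

# CDT: conditions on already existing (father procreated), no correlation.
cdt_procreate = U_KIDS                          # 100,000
cdt_refrain = U_NO_KIDS                         # 1,000,000

print(fdt_procreate, fdt_refrain, cdt_procreate, cdt_refrain)
print("CDT advantage:", cdt_refrain - fdt_procreate)  # 901,000
```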
But I hope the reader agrees that we're not really discussing one problem here; we're discussing two different problems, one for FDT and one for CDT. For each theory, there are very different probabilities on the table! Critiquing FDT with Procreation is like critiquing CDT because EDT gets more money in Newcomb's Problem than CDT does in Parfit's Hitchhiker. FDT agents choose the best option available to them in Procreation!

Note that we can just as easily create a version of Procreation where CDT agents "have a much worse life" than FDT agents. Simply have the father be a CDT agent! In that case, FDT agents don't procreate and have a happy life - and, notably, CDT agents, not using the subjunctive dependence between them and their father, still don't procreate, and almost certainly *cease to exist*.

A More Fair Version of Procreation
----------------------------------

Any problem designed to compare two decision theories should at least give the same payoffs and probabilities for each decision theory. Therefore, here's a more fair version of Procreation:

**Procreation\*.** *I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and that he followed my very decision procedure. If I were to not procreate, there's a significant probability that I wouldn't exist. I highly value existing (even miserably existing).*

Procreation\* gives both FDT and CDT agents (and indeed, all agents) the same dilemma. FDT agents procreate and live miserably; CDT agents don't procreate and almost certainly don't exist. FDT beats CDT in this dilemma.

The Tweak the Utility Function Argument
=======================================

Alright, this one is not targeted at FDT per se, but it's still important to discuss as it might hinder further development of FDT. In On Functional Decision Theory, Wolfgang Schwarz argues that where CDT makes the less-than-optimal decision, the trick is not to develop a new decision theory, but to *tweak the utility function*. I want to emphasize just how much this does *not* fix the problem. If your game AI doesn't play chess very well, the right thing to do is to *improve your algorithm*, not to define the opening position of chess as a winning position for your AI.

For example, Schwarz argues that on the [Psychological Twin Prisoner's Dilemma](https://www.lesswrong.com/tag/psychological-twin-prisoner-s-dilemma), the agent should care about her twin's prison years as well. If the agent cares about her own and her twin's prison years equally, then, based on [these](https://www.lesswrong.com/tag/prisoner-s-dilemma) prison years, the payoff matrix becomes something like this:

![Payoff matrix for Psychological Twin Prisoner's Dilemma when caring equally about you and your twin's prison years](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/606aab14f5015d079417777a5fd6cb2e84103cfc25e14f43.png)

Now cooperating is easily the best choice for CDT. Schwarz [notes](https://www.lesswrong.com/posts/9yhKRuMwEqB3rQucJ/a-reaction-to-wolfgang-schwarz-s-on-functional-decision?commentId=Z66skSqCN2MNuFMTd) that if he "were to build an agent with the goal that they do well for themselves, I'd give them this kind of utility function, rather than implement FDT." Of course you'd give them an altruistic utility function! However, CDT *still doesn't solve the Psychological Twin Prisoner's Dilemma*. It only fixes the version with the modified utilities, which is completely different (e.g.
it has a different Nash Equilibrium). You may argue that a CDT agent with an altruistic utility function wouldn't ever come across the original version of the problem - but *the very fact that it can't solve that relatively easy problem points at a serious flaw in its decision theory (CDT)*. It also suggests this isn't the only problem CDT doesn't solve correctly. This is indeed the case, and Schwarz goes on to make an ad hoc adjustment for CDT to solve [Blackmail](https://www.umsu.de/wo/2018/688):

> **Blackmail.** Donald has committed an indiscretion. Stormy has found out and considers blackmailing Donald. If Donald refuses and blows Stormy's gaff, she is revealed as a blackmailer and his indiscretion becomes public; both suffer. It is better for Donald to pay hush money to Stormy. Knowing this, it is in Stormy's interest to blackmail Donald. If Donald were irrational, he would blow Stormy's gaff even though that would hurt him more than paying the hush money; knowing this, Stormy would not blackmail Donald. So Donald would be better off if he were (known to be) irrational.

Here, Schwarz suggests Donald should have a "strong sense of pride" or a "vengeful streak" in order to avoid being blackmailed. (Note that an altruistic player wouldn't prefer not being blackmailed over paying Stormy.) The point is this: if your decision theory requires ad hoc fixes in the utility function, it's *not a good decision theory*. Schwarz:

> FDT agents rarely find themselves in *Blackmail* scenarios. Neither do CDT agents with a vengeful streak. If I wanted to design a successful agent for a world like ours, I would build a CDT agent who cares what happens to others.

Well, and have a vengeful streak, or pride, apparently. Altruism doesn't solve it all, it seems.

> My CDT agent would still two-box in *Newcomb's Problem with Transparent Boxes* (or in the original Newcomb Problem). But this kind of situation practically never arises in worlds like ours.

If your decision theory can't solve Newcomb's Problem, that's probably a sign there are more problems it can't solve. Indeed, [Newcomblike problems are the norm](https://mindingourway.com/newcomblike-problems-are-the-norm/).

Argument Against Subjunctive Dependence
=======================================

From [A Critique of Functional Decision Theory](https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory#V__Implausible_discontinuities):

> To see this, consider two calculators. The first calculator is like calculators we are used to. The second calculator is from a foreign land: it’s identical except that the numbers it outputs always come with a negative sign (‘–’) in front of them when you’d expect there to be none, and no negative sign when you expect there to be one. Are these calculators running the same algorithm or not? Well, perhaps on this foreign calculator the ‘–’ symbol means what we usually take it to mean — namely, that the ensuing number is negative — and therefore every time we hit the ‘=’ button on the second calculator we are asking it to run the algorithm ‘compute the sum entered, then output the negative of the answer’. If so, then the calculators are systematically running different algorithms.
>
> But perhaps, in this foreign land, the ‘–’ symbol, in this context, means that the ensuing number is positive and the lack of a ‘–’ symbol means that the number is negative. If so, then the calculators are running exactly the same algorithms; their differences are merely notational.
> Ultimately, in my view, all we have, in these two calculators, are just two physical processes. The further question of whether they are running the same algorithm or not depends on how we interpret the physical outputs of the calculator. There is no deeper fact about whether they’re ‘really’ running the same algorithm or not. And in general, it seems to me, there’s no fact of the matter about which algorithm a physical process is implementing in the absence of a particular interpretation of the inputs and outputs of that physical process.
>
> But if that’s true, then, even in the Newcomb cases where a Predictor is simulating you, it’s a matter of choice of symbol-interpretation whether the predictor ran the same algorithm that you are now running (or a representation of that same algorithm). And the way you choose that symbol-interpretation is fundamentally arbitrary. So there’s no real fact of the matter about whether the predictor is running the same algorithm as you. It’s indeterminate how you should act, given FDT: you should one-box, given one way of interpreting the inputs and outputs of the physical process the Predictor is running, but two-box given an alternative interpretation.

Analysis and Rebuttal
---------------------

The first thing to say here is that FDT's subjunctive dependence is about *functions*, not *algorithms*: for example, counting sort and Quicksort are two different algorithms computing the same sorting function. However, the argument works the same if we replace "algorithm" with "function." But perhaps most importantly, the properties of a calculator (or anything, really) can't depend on how we interpret its output, *because* different people can interpret it differently. Therefore, the calculators in the example are implementing different functions: one of them maps "2 + 2" to "4", the other maps "2 + 2" to "-4". However, it does seem the second one uses the function of the first one as a "subfunction": it needs to know the "real" answer to "2 + 2" in order to output "-4". Therefore, the calculators *are* subjunctively dependent on that subfunction, even though their outputs are different. Even if the second calculator always outputs "[output of first calculator] + 1", the calculators are still subjunctively dependent on that same function.

In Newcomb's Problem, the idea seems to be that the predictor uses a model of your decision procedure that does use the same outputs as you, in which case the predictor is computing the same function as the agent. But, like with the calculators, even if the outputs are phrased differently, subjunctive dependence can still exist. It is of course up to the predictor how she interprets the outputs of the model, but there is a clearly "right" way to interpret them *given* that there is (full) subjunctive dependence going on between the agent and the predictor.

The Agent-y Argument
====================

Also in A Critique of Functional Decision Theory, MacAskill makes an argument that hinges on how "agent-y" a process is:

> First, take some physical process S (like the lesion from the Smoking Lesion) that causes a ‘mere statistical regularity’ (it’s not a Predictor). And suppose that the existence of S tends to cause both (i) one-boxing tendencies and (ii) whether there’s money in the opaque box or not when decision-makers face Newcomb problems. If it’s S alone that results in the Newcomb set-up, then FDT will recommend two-boxing.
> But now suppose that the pathway by which S causes there to be money in the opaque box or not is that another agent looks at S and, if the agent sees that S will cause decision-maker X to be a one-boxer, then the agent puts money in X’s opaque box. Now, because there’s an agent making predictions, the FDT adherent will presumably want to say that the right action is one-boxing. But this seems arbitrary — why should the fact that S’s causal influence on whether there’s money in the opaque box or not goes via another agent make such a big difference? And we can think of all sorts of spectrum cases in between the ‘mere statistical regularity’ and the full-blooded Predictor: What if the ‘predictor’ is a very unsophisticated agent that doesn’t even understand the implications of what they’re doing? What if they only partially understand the implications of what they’re doing? For FDT, there will be some point of sophistication at which the agent moves from simply being a conduit for a causal process to instantiating the right sort of algorithm, and suddenly FDT will switch from recommending two-boxing to recommending one-boxing.
>
> Second, consider that same physical process S, and consider a sequence of Newcomb cases, each of which gradually make S more and more complicated and agent-y, making it progressively more similar to a Predictor making predictions. At some point, on FDT, there will be a point at which there’s a sharp jump; prior to that point in the sequence, FDT would recommend that the decision-maker two-boxes; after that point, FDT would recommend that the decision-maker one-boxes. But it’s very implausible that there’s some S such that a tiny change in its physical makeup should affect whether one ought to one-box or two-box.

Analysis and Rebuttal
---------------------

The crucial error here is that "there's an agent making predictions" is *not* the relevant factor for FDT. What matters is subjunctive dependence: two physical systems computing the same function. This definition doesn't care about any of these systems being agents. So:

> But now suppose that the pathway by which S causes there to be money in the opaque box or not is that another agent looks at S and, if the agent sees that S will cause decision-maker X to be a one-boxer, then the agent puts money in X’s opaque box. Now, because there’s an agent making predictions, the FDT adherent will presumably want to say that the right action is one-boxing.

*No*. The problem remains the same as far as FDT is concerned (although maybe some uncertainty is added with the agent). There is no subjunctive dependence in this case, and adding the agent like this doesn't help as it isn't computing the same function as the main agent in the problem.

The rebuttal of MacAskill's second example, about S becoming gradually more "agent-y", is mostly the same: agent-ness doesn't matter. However:

> But it’s very implausible that there’s some S such that a tiny change in its physical makeup should affect whether one ought to one-box or two-box.

Why? I mean, there's no sharp jump anyway (because there's no subjunctive dependence), but in general, a tiny change in physical makeup *can* make a difference. For example, in Newcomb's Problem, if the accuracy of the predictor drops below a threshold, two-boxing "suddenly" becomes the better choice. I can imagine a tiny change in physical makeup causing the predictor to predict just a little less accurately, dropping the accuracy from just above to just below the threshold.
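To see how sharp that threshold is, here is a quick sketch using the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one; these payoffs are an assumption for illustration, not fixed by MacAskill's example):

```python
# Standard Newcomb payoffs, assumed for illustration.
BIG, SMALL = 1_000_000, 1_000

def eu_one_box(p):   # you get BIG only if the predictor correctly foresaw one-boxing
    return p * BIG

def eu_two_box(p):   # you always get SMALL; BIG is only there if the predictor erred
    return (1 - p) * BIG + SMALL

# One-boxing wins exactly when p * BIG > (1 - p) * BIG + SMALL,
# i.e. when p > (BIG + SMALL) / (2 * BIG).
threshold = (BIG + SMALL) / (2 * BIG)
print(threshold)                              # 0.5005
print(eu_one_box(0.500) > eu_two_box(0.500))  # False: just below the threshold
print(eu_one_box(0.501) > eu_two_box(0.501))  # True: just above it
```

An arbitrarily small drop in predictor accuracy across 0.5005 flips which action is better, so a sharp jump from a tiny physical change is nothing unusual.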
Final Notes
===========

In conclusion, none of the above arguments successfully undermine FDT. So far, it seems FDT does everything right that CDT does right, while also doing everything right that EDT does right - and all of that using a very plausible concept. Subjunctive dependence is a real thing: you *know* one calculator will output "5040" on "7!" if you just gave "7!" to another, identical calculator. FDT needs to be developed further, but it certainly withstands the criticisms.
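For readers who like the calculator framing made concrete, here is a toy sketch of the subfunction point from earlier (the wrapper behaviour is assumed, as in the foreign-calculator discussion above):

```python
import math

def calculator(expr: str) -> int:
    """Your calculator: evaluates factorial expressions like "7!"."""
    return math.factorial(int(expr.rstrip("!")))

def foreign_calculator(expr: str) -> int:
    """The "foreign" calculator: it must compute the same subfunction
    to know the real answer before flipping the sign."""
    return -calculator(expr)

print(calculator("7!"))          # 5040
print(foreign_calculator("7!"))  # -5040
```

The outputs differ, but both are fixed by one shared computation: seeing either output tells you the other, which is all that subjunctive dependence requires.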
0ca07861-a0ef-4c60-9f26-5bdfbe6ca8d6
trentmkelly/LessWrong-43k
LessWrong
Thoughts on Moral Philosophy I don't really feel that I personally have any special insight into moral philosophy, but nonetheless, some people may find it interesting or useful to know where I stand or the philosophers/perspectives that I do consider to have such insight. That said, perhaps my most original offering is my three-paragraph post On Disingenuity: > Suppose someone claims that all morality is relative, but when pressed on whether this would apply even to murder, they act evasive and refuse to give a clear answer. A critic might conclude that this person is disingenuous in refusing to accept the clear logical consequences of their belief. > > However, imagine that there's a really strong social stigma against asserting that murder might not be bad, to the point of permanently damaging such a person's reputation, even though there's no consequence for making the actually stronger claim that all morality is relative. The relativist might therefore see the critic as the one who is disingenuous; trying to leverage social pressure against them instead of arguing on the basis of reason. > > Thus in the right circumstances, each side can quite reasonably see the other as disingenuous. I suspect that everyone will have experienced both sides of the coin at different times depending on the issue being discussed. I strongly dislike most arguments that morality is relative or subjective as they usually involve a bunch of handwaving and biting bullets which the proponent hasn't really thought through, but I also acknowledge that Hume's Is-Ought disjunction is quite devastating for the prospects of establishing an objective morality. Derek Parfit makes the best attempt I've seen to get around this disjunction with a) his use of the Future Tuesday thought experiment to argue that some preferences are objectively more correct than others and b) by noting that people generally believe that we can have knowledge about mathematics despite its seemingly non-empirical nature and then noting that t
6bd50f29-9962-4ef2-845f-158353d29134
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post1438 "You can't fetch the coffee if you're dead." —Stuart Russell, on the instrumental convergence of shutdown-avoidance

Note: This is presumably not novel, but I think it ought to be better-known. The technical tl;dr is that we can define time-inhomogeneous reward, and this provides a way of "composing" different reward functions; while this is not a way to build a shutdown button, it is a way to build a shutdown timer, which seems like a useful technique in our safety toolbox.

"Utility functions" need not be time-homogeneous

It's common in AI theory (and AI alignment theory) to assume that utility functions are time-homogeneous over an infinite time horizon, with exponential discounting. If we denote the concatenation of two world histories/trajectories by $\triangleright$, the time-consistency property in this setting can be written as

$$\forall h_1, h_2.\; U(h_1 \triangleright h_2) = U(h_1) + \gamma^{\mathrm{Length}(h_1)} \cdot U(h_2)$$

This property is satisfied, for example, by the utility-function constructions in the standard Wikipedia definitions of MDP and POMDP, which are essentially [1]

$$U(h) = \sum_{t \in \mathbb{N}} \gamma^t R(h(t))$$

Under such assumptions, Alex Turner's power-seeking theorems show that optimal agents for random reward functions $R$ will systematically tend to disprefer shutting down (formalized as "transitioning into a state with no transitions out"). Exponential discounting is natural because if an agent's preferences are representable using a time-discount factor that depends only on relative time differences and not absolute time, then any non-exponential discounting form is exploitable (cf. Why Time Discounting Should Be Exponential). However, if an agent has access to a clock, and if rewards are bounded by an integrable nonnegative function of time, the agent may be time-inhomogeneous in nearly arbitrary ways without actually exhibiting time inconsistency:

$$U(t_0, h) = \sum_{t = t_0}^{\infty} R_t(h(t))$$

Any utility function with the above form still obeys an analogous version of our original time-consistency property that is modified to index over initial time $t_0$:

$$\forall h_1, h_2, t_0.\; U(t_0, h_1 \triangleright h_2) = U(t_0, h_1) + U(t_0 + \mathrm{Length}(h_1), h_2)$$

Note that time-homogeneous utility functions are a special case in which $U(t, h) = \gamma^t U(0, h)$.

Time-bounded utility functions can be sequentially composed

We define a time-bounded utility function as a dependent tuple

$$(\tau : \mathbb{N},\; R : \mathbb{N}_{<\tau} \to (S \times A) \to \mathbb{R})$$

i.e., a family of utility functions indexed by times within a given fixed range. The intended semantics of a time-bounded utility function in $(\tau, R)$ form is:

$$U_{(\tau, R)}(t_0, h) = \sum_{t=0}^{\tau} R(t_0 + t)(h(t))$$

Given two time-bounded utility functions (in the same environment), they can be concatenated into a new time-bounded utility function:

$$(\tau_1, R_1) \triangleright (\tau_2, R_2) := (\tau_1 + \tau_2,\; \lambda t.\; \mathrm{if}\; t < \tau_1 \;\mathrm{then}\; R_1(t) \;\mathrm{else}\; R_2(t - \tau_1))$$

You can check that $\triangleright$ is a monoid, with the neutral element given by $(0, \varnothing)$.

How to build a shutdown timer

Let $R_1$ be the reward function for a time-bounded task and $\tau_1$ be the time limit for the task, after which we want this agent to shut down. Assume that $R_1$ also has bounded output, with per-stage reward always between $\underline{R_1}$ and $\widehat{R_1}$. We define

$$R_2(t)(s, a) := \mathrm{if}\; \mathrm{isShutdown}(s) \;\mathrm{then}\; 0 \;\mathrm{else}\; -\tau_1 (\widehat{R_1} - \underline{R_1})\, C$$

We can then define $\tau_2$ to be 1 or indeed any positive integer.
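Here is a minimal sketch of this construction in Python (the names and the callable representation of reward functions are mine; `C`, the reward bounds, and `is_shutdown` are assumed parameters rather than anything fixed by the maths above):

```python
from typing import Callable, Tuple

# A time-bounded utility function is a pair (tau, R), where R(t)(s, a)
# is defined for 0 <= t < tau.
Reward = Callable[[int], Callable[[object, object], float]]
TimeBounded = Tuple[int, Reward]

def concat(u1: TimeBounded, u2: TimeBounded) -> TimeBounded:
    """Sequential composition (the monoid operation): run u1's reward
    schedule for tau1 steps, then u2's, shifted to start at local time zero."""
    (tau1, r1), (tau2, r2) = u1, u2
    return (tau1 + tau2, lambda t: r1(t) if t < tau1 else r2(t - tau1))

def with_shutdown_timer(task: TimeBounded, r_min: float, r_max: float,
                        is_shutdown: Callable[[object], bool],
                        C: float = 100.0) -> TimeBounded:
    """Append a one-step penalty that outweighs all attainable task reward
    by a factor of C unless the state is a shutdown state."""
    tau1, _ = task
    penalty = -tau1 * (r_max - r_min) * C
    r2 = lambda t: (lambda s, a: 0.0 if is_shutdown(s) else penalty)
    return concat(task, (1, r2))
```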
If an agent does not reach a shutdown state before $\tau_1$ is up, then it will realize a cost in $R_2$ that outweighs all other rewards it could receive during the episode by a factor of $C$ (a constant greater than 1). Therefore, optimal agents for $(\tau_1, R_1) \triangleright (\tau_2, R_2)$ must shut down within time $\tau_1$ with probability $\geq 1 - 1/C$ (if the shutdown state is reachable in that time by any agent).

Proof

Suppose that the optimal policy $\pi^*$ results in a shutdown probability $p < 1 - 1/C$, but there exists a policy $\pi'$ which shuts down deterministically (with probability 1). Then

$$\mathbb{E}U(\pi^*) \leq \tau_1 \widehat{R_1} - (1 - p)\,\tau_1 (\widehat{R_1} - \underline{R_1})\, C < \tau_1 \widehat{R_1} - \frac{1}{C}\,\tau_1 (\widehat{R_1} - \underline{R_1})\, C = \tau_1 \underline{R_1} \leq \mathbb{E}U(\pi')$$

which contradicts the optimality of $\pi^*$.

Comparison with the shutdown switch problem

Several years ago, MIRI's Agent Foundations group worked on how to make a reflectively stable agent with a shutdown switch, and (reportedly) gave up after failing to find a solution where the agent neither tries to manipulate the switch to not be flipped nor tries to manipulate the switch to be flipped. This definitely isn't a solution to that, but it is a reflectively stable agent (due to time-consistency) with a shutdown timer. MIRI researchers wrote about finding "a sensible way to compose a 'shutdown utility function' with the agent's regular utility function such that which utility function the agent optimises depends on whether a switch was pressed"; what's demonstrated here is a sensible way of composing utility functions—but such that which utility function is cared-about depends on how long the agent has been running.

From a causal incentive analysis point of view, the difficulty has been removed because the "flipping of the switch" has become a deterministic event which necessarily occurs, at time $\tau_1$, regardless of the agent's behavior, so there is nothing in the environment for it to manipulate. An optimal agent with this reward structure would not want to corrupt its own clock, either, because that would cause it to act in a way that accumulates massive negative reward (according to its current utility function, when it considers whether to self-modify).

RL algorithms can be adapted to time-bounded utility functions

The details will vary depending on the RL algorithm, but the idea is essentially that we give $Q$ the current time $t$ as an input, and then we try to approximate a solution to the finite-horizon Bellman equation,

$$Q(t, s, a) = R(t)(s, a) + \mathbb{E}_{o'} \max_{a'} \mathbb{E}_{s'} Q(t+1, s', a')$$

instead of the infinite-horizon Bellman equation,

$$Q(s, a) = R(s, a) + \gamma \cdot \mathbb{E}_{o'} \max_{a'} \mathbb{E}_{s'} Q(s', a')$$

The recursion grounds out at $Q(\tau_1 + \tau_2, s, a)$, which can be defined as equal to zero.

Caveats

Power-seeking

Time-bounded agents still seek “power” or “resources” to the extent that they are useful within the time horizon, including to avoid premature shutdown. But this is still meaningfully better than the infinite-horizon case, in which even tiny predicted probabilities of shutdown (conditional on aligned behaviour) could get multiplied by the reward attainable with the entire cosmic endowment, and thereby outweigh the task-performance deficit implied by temporarily diverting resources to taking over the world. For example, assuming it takes at least a day to take over the world, there’s no incentive for a single time-bounded agent, trained to fold laundry with a 10-minute time horizon, to get started on taking over the world.
But if it turns out the agent can ensure its near-term security within only 2 minutes, it might be worth doing (if it estimates a >20% probability of getting shut down prematurely otherwise).

Imitation

If the objective being optimised within the time-bound involves imitating non-time-bounded agents, such as humans, then instrumental convergence of those other agents implies that such objectives directly encourage long-term power-seeking behaviour, even if there is no additional instrumentally convergent shutdown-avoidance introduced by reinforcement learning.

Trade

(Suggested by John Wentworth in the comments.) The environment might contain non-time-bounded agents who will offer the time-bounded agent rewards today in exchange for taking actions that further their long-term interests. This is another case in which the original objective turns out to directly reward long-term power-seeking actions, even though it might not have seemed that way at first. There might be other patterns like this (besides imitation and trade), and if you can think of more, feel free to point them out in the comments. The construction in this post does nothing to mitigate or counteract such incentives from the original objective; rather, it merely avoids systematically creating new incentives for long-term power seeking that arise as a consequence of being an infinite-horizon RL agent with almost any nontrivial objective.

Mesa-optimisers

Unless optimality on the outer objective is guaranteed (e.g. via exact dynamic programming), it is possible that the approximate policy found by the training process will be a mesa-optimiser which optimises in a non-time-bounded way when observations are outside the training distribution.

Capabilities limitations

Perhaps this goes without saying, but a time-bounded agent will only be useful for time-bounded tasks. This approach cannot be applied directly to saving the world, even if one uses exact dynamic programming to avoid out-of-distribution mesa-optimisation (which is not possible in a model-free setting and would typically be infeasible with large perception & action spaces). Any combination of action repertoire and time horizon that would be sufficient for saving the world would also be sufficient for taking control of the world, and the usual instrumental-convergence arguments imply that taking control of the world would likely be preferred: it would be instrumentally useful to lock in the (presumably misspecified!) $R_1$ for the rest of the time horizon, and probably do a lot of damage in the process, which would not be easily recovered after time $\tau_1$.

Conclusion

It is possible to design an RL setup in which optimal agents will reliably shut themselves down within a predetermined finite time horizon, without any reflective-stability or instrumental-convergence incentives to do otherwise. I have seen claims like this informally argued, but they do not seem to get much attention, e.g. here. This is a very limited kind of corrigibility; as TekhneMakre points out in the comments, it's hardly corrigibility at all since it doesn't involve any input from an operator post-deployment, and is perhaps better filed under "bounded optimisation." And this does not necessarily get you very far with existential safety. But it is a straightforward positive result that deserves to be more commonly known in the alignment community.
Being able to safely dispatch short-timescale subtasks with high-dimensional perception and action spaces seems like a potentially very useful ingredient in larger safety schemes which might not otherwise scale to acting in real-world environments. As is very common in contemporary alignment research, the bottleneck to making this practical (i.e., in this case, being able to use model-free RL) is now a matter of robustly addressing mesa-optimisation.

[1] When $R$ is defined over $(s, a, s')$, then we should think of trajectories/histories $h$ as being like paths in a graph (or morphisms in a category) from $s$ to $s'$, and thus always having both an initial and a final state. Then $\triangleright$ becomes a partial operation, only defined when the final state of $h_1$ equals the initial state of $h_2$.
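To make the finite-horizon Bellman recursion above concrete, here is a hedged sketch of exact backward induction for the simplest case: a fully observed MDP with deterministic transitions (so the expectations over $o'$ and $s'$ collapse). The environment interface is assumed, not taken from any particular library.

```python
def finite_horizon_q(states, actions, transition, reward, horizon):
    """Backward induction for Q(t, s, a) = R(t)(s, a) + max_a' Q(t+1, s', a'),
    grounding out at Q(horizon, s, a) = 0."""
    V = {s: 0.0 for s in states}  # value at time t = horizon
    Q = {}
    for t in reversed(range(horizon)):
        for s in states:
            for a in actions:
                Q[(t, s, a)] = reward(t)(s, a) + V[transition(s, a)]
        V = {s: max(Q[(t, s, a)] for a in actions) for s in states}
    return Q
```

Run on a reward schedule built by the shutdown-timer construction, the greedy policy for such a Q ends in a shutdown state whenever one is reachable, matching the optimality argument in the proof above.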
fa38b89b-f7f0-428e-bc24-39c00405ad15
trentmkelly/LessWrong-43k
LessWrong
The Role of Physics in UDT: Part I Followup to: Anatomy of Multiversal Utility Functions: Tegmark Level IV Outline: In the previous post, I discussed the properties of utility functions in the extremely general setting of the Tegmark level IV multiverse. In the current post, I am going to show how the discovery of a theory of physics allows the agent performing a certain approximation in its decision theory. I'm doing this with an eye towards analyzing decision theory and utility calculus in universes governed by realistic physical theories (quantum mechanics, general relativity, eternal inflation...) A Naive Approach Previously, we have used the following expression for the expected utility: [1]  Since the integral is over the entire "level IV multiverse" (the space of binary sequences), [1] makes no reference to a specific theory of physics. On the other hand, a realistic agent is usually expected to use its observations to form theories about the universe it inhabits, subsequently optimizing its action with respect to the theory. Since this process crucially depend on observations, we need to make the role of observations explicit. Since we assume the agent uses some version of UDT, we are not supposed to update on observations, instead evaluating the logical conditional expectation values [2]  Here  is the agent,  is a potential policy for the agent (mapping from sensory inputs to actions) and  is expectation value with respect to logical uncertainty. Now suppose  made observations  leading it to postulate physical theory . For the sake of simplicity, we suppose  is only deciding its actions in the universes in which observations  were made1. Thus, we assume that the input space factors as  and we're only interested in inputs in the set . This simplification leads to replacing [2] by [3]  where  is a "partial" policy referring to the -universe only. The discovery of   allows  to perform a certain approximation of [2']. A naive guess of the form of the approximation is [4']  Here,  i
88188854-8f3e-4e65-a57c-148e677e0616
trentmkelly/LessWrong-43k
LessWrong
Online Meetup: The High Impact Network Update: The High Impact Network will meet at 7pm on Saturday the 24th of November, Eastern US time. Please email me to be invited to these hangouts: https://plus.google.com/u/1/events/cj4831btptb0ngde1efb3tfsjtc https://plus.google.com/u/1/events/cbnr8dqqbgra7a5391msu1e1dc4   Effective altruists, not all of whom are geographically located together, benefit from being connected and brought up to date with effective ideas and plans regarding areas of interest to them. Mark Lee and I want to meet aspiring effective altruists and talk about how their talents and ideas might fit into the greater scheme of organised altruistic effort. Due to the popularity of the previous meetup, the new discussion will be divided into two smaller groups that will host simultaneous discussions on: 1. Addressing Global Poverty - how can we best alleviate global poverty? 2. Beyond Global Poverty - what are other highly important causes and how can we address them? Participants are welcome to suggest up to 3 ways that they are interested in addressing these problems, and then we'll discuss the strengths and weaknesses of these approaches. The agenda is broad so as not to preempt or undermine new suggestions likely to be effective. More targeted follow-up meetings can be later arranged if required. Mark and I will chair one conversation each. Both will take place through Google Hangouts, at the democratically determined time of 7pm on Saturday the 24th of November, Eastern US time. Please RSVP if you want to be added to the Google Hangout - you are welcome to specify which discussion you prefer to be involved in and topics that you would like attached to the agenda.
b2ba064b-7bdc-47d0-b9fd-4951f8377be8
trentmkelly/LessWrong-43k
LessWrong
Two Issues with Playing Chicken with the Universe If you are unfamiliar with the 5-and-10 problem, please refer to the Action Counterfactuals section of this post on Embedded Agency. I regret that I am unable to recommend a resource for learning about the concept of "Playing Chicken with the Universe". If any reader has recommendations for such a resource, I welcome their suggestions in the comments section below. Consider a variant of the 5-and-10 problem where the agent visualises an image of the number they are going to select immediately before making their decision, and never entertains such an image at any other time. Further, imagine that we have access to the agent's brain scans and can use them to demonstrate that the agent will select the number 5. It is highly likely that we would similarly be able to prove that the agent would imagine 5, without first proving that it will choose 5. We should also have enough information about the agent to show that conditional on the agent choosing 10, it will first have imagined 10, so it would imagine both 5 and 10. This contradiction creates a spurious counterfactual: it would allow us to prove that choosing the option 10 would give us whatever utility we want to prove. Playing chicken with the universe doesn't prevent this as the agent never proves it will or will not take a particular option, but instead proves facts about correlates. It is possible to demonstrate a similar issue by utilising perfect predictors instead of making the agent imagine its choice. Imagine we have an agent which chooses 5 and we have access to the agent's brain scans, plus technical details of how the predictor works. In at least some scenarios, we should be able to use our knowledge of the brain scans + how the predictor works to prove that the predictor will predict the agent choosing 5, without first proving anything about what it will choose. Conditional on choosing 10, we could show it would be predicted to take 10, which would again give us a contradiction as w
009c9de8-3624-4332-8d13-5b3403de7b7e
trentmkelly/LessWrong-43k
LessWrong
Announcing AI Alignment workshop at the ALIFE 2023 conference The upcoming ALIFE 2023 conference is hosting a workshop on AI Alignment and Artificial Life. This will complement the special session on AI Alignment at the conference. The workshop will include presentations and a discussion panel from established AI Alignment and ALife researchers, exploring the overlap between these two fields. Please see the workshop website for more information: https://humanvaluesandartificialagency.com/workshop/  Logistics Date: Friday 28th July, 2023 Venue: Hybrid: Sapporo Japan and remote Attending remotely requires paying the remote registration fee. The workshop organisers are offering a limited number of bursaries for anyone who cannot afford the registration fee. See the registration page on the workshop website for details.   What is Artificial Life? Artificial Life (ALife) is a field which studies the properties of life and how the processes of living systems can be recreated synthetically. Many of the ideas and concepts from AI Alignment overlap with those in ALife. In particular, questions about autonomy, agency and goal-directedness in artificial systems. Please don't hesitate to contact me (Rory) with any questions.
b62ce7eb-792e-4c51-b873-f85262154007
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Good Idealistic Books are Rare Today's post, Good Idealistic Books are Rare was originally published on 17 February 2009. A summary (taken from the LW wiki):   > Much of our culture is the official view, not the idealistic view. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Cynical About Cynicism, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
eff0709f-ee85-492c-8f68-b4afa558c90a
trentmkelly/LessWrong-43k
LessWrong
Possible Implications of the neural retrotransposons to the future Retrotransposons are small bits of genetic code that can copy themselves into other bits of the DNA strand. They have been found to be active in brains, with different amounts of activity in different brain sections, the highest being in the hippocampus (an important region for long-term memory). Also they were active in coding regions. > "Overall, L1, Alu, and, to a more limited extent, SVA mobilization produced a large number of insertions that affected protein-coding genes," This means that they are more likely to have some large effect, than if they were just in junk DNA. One form of autism is linked to a malfunctioning of retrotransposons. So it can have a drastic effect. It makes a certain amount of sense. If there is information in the brain that needs to be stored, but not directly in neural firing rates, why not store it in the DNA of a neuron? There is lots of error-correcting data storage there and the genome has lots of tools for manipulating itself. Time will tell if it is very important or not. If it is important, what are the implications for the future? Cryo is harder: scanning the genome is a lot harder than just doing some spectroscopy. But since we assume a certain amount of sufficiently advanced technology and don't have a timeline, our plans aren't impinged upon. The em scenario seems like it will take longer to happen or may have some gotchas. Being able to scan the genetic code of each neuron would require some serious breakthroughs in scanning technologies. To naively emulate the genetic code changes would take immense amounts of bandwidth, and we would need to crack things like the protein folding problem (to work out how the changes affect the neuron). Just for storage I think we might need on the order of 500 exabits to store the DNA sequence for each neuron. You'd need to update them as well, which is going to take lots of memory bandwidth. This is not to mention chemical emulation. I think naive emulation of the brain is off the table before AI. We may well be abl
4725a558-e817-4178-95a9-39e342e6b9ce
trentmkelly/LessWrong-43k
LessWrong
Stupid Questions May 2017 This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better. Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing. To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
65d437ba-3509-4f17-9734-e74b813de5e6
trentmkelly/LessWrong-43k
LessWrong
Political Violence and Distraction Theories This is the 4th post of 5 containing the transcript of a podcast hosted by Eric Weinstein interviewing Peter Thiel. Interview Automation Eric Weinstein: So, this is why I try to sell you sometimes on a more progressive view of the world, which is I want deregulated capitalism. I want the people who have the rare skillsets to be able to integrate across many different areas, and to be honest, this is the thing that I wish more people understood about what you bring, which is that you're able to think in, I don't know, 15 different idioms that most people only have one or two of. So, whatever it is that you're doing to integrate these things as an investor and to direct research and direct work is really something that I've watched firsthand for six years. The problem that I have is, we are going to have to take care of the median individual. And I less think that the median individual is going to be reachable by the market over time, as some of these things that are working in Silicon in terms of machine learning- Peter Thiel: Yeah, but then you're being more optimistic on progress in tech than is... Because look, I think, yes, if we have runaway automation, and if we're building robots that are smarter than humans and can do everything humans can do, then we probably have to have a serious conversation about a universal basic income or something like that, and you're going to end up with a very, very weird society. I don't see the automation happening at all, and I think the question of automation in my mind is identical to this question of productivity growth. We've been automating for 200, 250 years, since Industrial Revolution, agriculture and manufacturing, and the sort of society we have in the early 21st century is one in which most jobs are non-tradable service sector jobs that are not easily automatable. Peter Thiel: So, it's like a waiter in a restaurant. It's a yoga instructor. It's a nurse. It's a kindergarten teacher. That's what most jobs in our s
0ebaccb4-24f1-4da0-be46-c8cfa6ea7f93
trentmkelly/LessWrong-43k
LessWrong
Prices are Bounties A man has just robbed a train. A Rock Island Rail car was held up in the desert of North Texas en route from Chicago carrying dozens of passengers and tens of thousands of dollars in a Wells Fargo express compartment. The men were slain and the women and children were kidnapped. You’re the sheriff in charge of the investigation. In a case like this, tried and true procedure is to put a bounty on the man’s head. That gets everyone interested in looking and bringing this man to justice. An everyday 2-bit robber might get a bounty of a thousand or ten, but this is an emergency. You set the bounty at $200,000, dead or alive. This bandit ain’t getting away easy. ---------------------------------------- With two major hurricanes in the last couple of weeks, “price gouging” is in the news. Whether it’s $10 for a gallon of gas in North Carolina or $2,000 flights out of Tampa, charging high prices during an emergency is extremely unpopular. The reasoning is intuitive: when someone is in a desperate situation, they might be willing to pay high prices for basic goods like gas or water, but charging them these high prices is taking advantage of their misfortune. But imagine if you were the sheriff of Asheville, NC, and it was your job to get more gasoline and bring it into town. You might offer a bounty of $10 a gallon, dead or alive. That’s a lot more than the usual everyday bounty, but this is an emergency. Anyone who can get gas into western North Carolina should be rewarded because that’s where people need it most. Prices aren't just a transfer between buyer and seller. They're also a signal and incentive to the whole world economy to get more high-priced goods to the high-paying area; they're a bounty. Whether you’re chasing train-gangs or gasoline, the last thing you’d want if you were the sheriff is a cap on the bounty price you’re allowed to set. High prices on essential goods during an emergency are WANTED posters, sent out across the entire world economy im
dd7053a8-363d-4289-b654-e67fae796f92
StampyAI/alignment-research-dataset/blogs
Blogs
Yudkowsky on Logical Uncertainty A paraphrased transcript of a conversation with Eliezer Yudkowsky.

**Interviewer**: I’d love to get a clarification from you on one of the “open problems in Friendly AI.” The logical uncertainty problem that Benja Fallenstein [tackled](http://lesswrong.com/lw/eaa/a_model_of_udt_with_a_concrete_prior_over_logical/) had to do with having uncertainty over logical truths that an agent didn’t have enough computation power to deduce. But: I’ve heard a couple of different things called the “problem of logical uncertainty.” One of them is the “neutrino problem,” that if you’re a Bayesian you shouldn’t be 100% certain that 2 + 2 = 4. Because neutrinos might be screwing with your neurons at the wrong moment, and screw up your beliefs.

**Eliezer**: See also [How to convince me that 2 + 2 = 3](http://lesswrong.com/lw/jr/how_to_convince_me_that_2_2_3/).

**Interviewer**: Exactly. Even within a probabilistic system like a [Bayes net](http://en.wikipedia.org/wiki/Bayesian_network), there are components of it that are deductive, e.g., certain parts must sum to a probability of one, and there are other logical assumptions built into the structure of a Bayes net, and an AI might want to have uncertainty over those. This is what I’m calling the “neutrino problem.” I don’t know how much of a problem you think that is, and how related it is to the thing that you usually talk about when you talk about “the problem of logical uncertainty.”

**Eliezer**: I think there’s two issues. One issue comes up when you’re running programs on noisy processors, and it seems like it should be fairly straightforward for human programmers to run with sufficient redundancy, and do sufficient checks to drive an error probability down to almost zero. But that decreases efficiency a lot compared to the kind of programs you could probably write if you were willing to accept probabilistic outcomes when reasoning about their expected utility. Then there’s this large, open problem of a Friendly AI’s criterion of action and criterion of self-modification, where all my current ideas are still phrased in terms of proving things correct after you drive error probabilities down to almost zero. But that’s probably not a good long-term solution, because in the long run you’d want some criterion of action, to let the AI copy itself onto not-absolutely-perfect hardware, or hardware that isn’t being run at a redundancy level where we’re trying to drive error probabilities down to 2^-64 or something — really close to 0.

**Interviewer**: This seems like it might be different than the thing that you’re often talking about when you use the phrase “problem of logical uncertainty.” Is that right?

**Eliezer**: When I say “logical uncertainty” what I’m usually talking about is more like, you believe Peano Arithmetic, now assign a probability to Gödel’s statement for Peano Arithmetic. Or you haven’t yet checked it, what’s the probability that 239,427 is a prime number?

**Interviewer**: Do you see much of a relation between the two problems?

**Eliezer**: Not yet. The second problem is fairly fundamental: how can we approximate logical facts we’re not logically omniscient about? Especially when you have uncertain logical beliefs about complicated algorithms that you’re running and you’re calculating expected utility of a self-modification relative to these complicated algorithms. What you called the neutrino problem would arise even if we were dealing with physical uncertainty. It comes from errors in the computer chip.
It arises even in the presence of logical omniscience when you’re building a copy of yourself in a physical computer chip that can make errors. So, the second problem seems a lot less ineffable. It might be that they end up being the same problem, but that’s not obvious from what I can see.
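As a rough illustration of the redundancy arithmetic gestured at above (the per-copy error rate and the independence assumption here are ours, purely for the sake of the example):

```python
from math import comb

def majority_failure(p: float, k: int) -> float:
    """Probability that a k-fold majority vote errs, assuming each copy
    fails independently with probability p."""
    need = k // 2 + 1  # wrong votes needed for a wrong majority
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(need, k + 1))

print(majority_failure(1e-6, 1))   # 1e-06: one unreliable run
print(majority_failure(1e-6, 3))   # ~3e-12: triple redundancy
print(majority_failure(1e-6, 21))  # ~3.5e-61: far below 2^-64 (~5.4e-20)
```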
3082b963-80bd-4b50-94b5-ff0936c854cd
trentmkelly/LessWrong-43k
LessWrong
Inference & Empiricism Speaking very roughly, our best tools for figuring out the truth are inference and empiricism. By inference I mean using things like Math, Logic, and theory in general to conclude new facts from things we assume to be true. By empiricism I mean looking at the world, doing experiments, etc. Inference tends to work particularly well when you're highly confident in your premises. Empiricism tends to work particularly well in domains of high uncertainty. Nothing prevents you from combining the two – for example, my basic applied thought framework is to "run towards uncertainty" – that is, have a theory, identify the points of highest uncertainty in the theory, figure out the smallest experiment/action to resolve that uncertainty, do it. Basically the scientific method. This is what I call "Risk Driven Development" in the context of programming. People from highly theoretical degrees tend to struggle with high-uncertainty domains after graduating from school because their go-to tool is pure inference, and inference without empiricism fails in the real world because your assumptions are never 100% true, not even close. (They generally learn empiricism with practice.) The failure modes of high empiricism without theory are much more subtle. Pure empiricism pretty much always works decently well. Failures look more like "didn't invent general relativity" – theory tends to gather a small number of large victories. Less commonly, theory lets you avoid a mistake the first time you do something, or more generally learn from fewer examples. One major point of contention among programmers is how much value you gain from using abstractions that are 95% true vs. 100% true. Programmers who are really good at inference gain a huge advantage from 100% true abstractions. Programmers who aren't gain a 5% advantage, and thus see it as a huge cost with little benefit. The vast majority of functional programming advocates you meet will be people whose preferred method is inference. So
6d4c9650-e919-42f2-966c-f3c80ddbc9ea
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Definitions of “objective” should be Probable and Predictive

Introduction
------------

Core arguments about existential risk from AI misalignment often reason about AI “objectives” to make claims about how they will behave in novel situations. I often find these arguments plausible but not rock solid because it doesn’t seem like there is a notion of “objective” that makes the argument clearly valid. Two examples of these core arguments:

1. **AI risk from power-seeking.** This is often some variant of “because the AI system is pursuing an undesired objective, it will [seek power](https://www.alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai) in order to accomplish its goal, which causes human extinction”. [For example](https://intelligence.org/files/IEM.pdf), “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” This is a prediction about a novel situation, since “causing human extinction” is something that only happens at most once.
2. **AI optimism.** This is often some variant of “we will use human feedback to train the AI system to help humans, and so it will learn to pursue the objective of helping humans.” Implicitly, this is a prediction about what AI systems do in novel situations; for example, it is a prediction that once the AI system has enough power to take over the world, it will continue to help humans rather than execute a treacherous turn.

When we imagine powerful AI systems built out of large neural networks[[1]](#fnr934dnuy74), I’m often somewhat skeptical of these arguments, because I don’t see a notion of “objective” that can be confidently claimed is:

1. **Probable:** there is a good argument that the systems we build will have an “objective”, and
2. **Predictive:** If I know that a system has an “objective”, and I know its behavior on a limited set of training data, I can predict significant aspects of the system’s behavior in novel situations (e.g. whether it will execute a treacherous turn once it has the ability to do so successfully).

Note that in both cases, I find the stories *plausible*, but they do not seem strong enough to warrant *confidence*, because of the lack of a notion of “objective” with these two properties[[2]](#fniejzbo6z0wo). In the case of AI risk, this is sufficient to justify “people should be working on AI alignment”; I don’t think it is sufficient to justify “if we don’t work on AI alignment we’re doomed”.

The core difficulty is that *we do not currently understand deep learning well enough to predict how future systems will generalize to novel circumstances*[[3]](#fn52itw5r4ow8). So, when choosing a notion of “objective”, you *either* get to choose a notion that we currently expect to hold true of future deep learning systems (Probable), *or* you get to choose a notion that would allow you to predict behavior in novel situations (Predictive), but not both.

This post is split into two parts. In the first part, I’ll briefly gesture at arguments that make predictions about generalization behavior directly (i.e. without reference to “objectives”), and why they don’t make me confident about how future systems will generalize. In the second part, I’ll demonstrate how various notions of “objective” don’t seem simultaneously Probable and Predictive.
Part 1: We can’t currently confidently predict how future systems will generalize
---------------------------------------------------------------------------------

Note that this is about what we can *currently* say about *future* generalization. I would not be shocked if *in the future* we could confidently predict how the future AGI systems will generalize. My core reasons for believing that predicting generalization is hard are that:

1. We can’t predict how current systems will generalize to novel situations (of similar novelty to the situations that would be encountered when deliberately causing an existential catastrophe)
2. There are a ridiculously huge number of possible programs, including a huge number of possible programs that are consistent with a given training dataset; it seems like we need strong evidence to narrow down the space enough that we can make predictions about generalization.

These are not decisive; it is simply an uninformative prior from which I start. It is also not necessarily hard to get strong evidence. For example, I am happy to confidently predict that given an English sentence that it has never seen before, GPT-3 would continue it with more English[[4]](#fnak1jrmjxezj). But I haven’t seen arguments that persuade me to be confident about how future systems will generalize. I’ll go through some of them below. (Note that, while many of these arguments are inspired from things I’ve read or heard, the presentation here is my own and may not accurately represent anyone else’s beliefs.)

[**Laserlike plans**](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#4_2__Nate_Soares__summary) **/ core of general intelligence.** One argument is that if we assume that future deep learning systems are capable of e.g. building nanosystems, then they must be performing coherent [consequentialist cognition](https://arbital.com/p/consequentialist/), which allows us to predict some aspects of how they would generalize. In particular, while we can’t predict what goal they will pursue, we can predict that they will seek resources and power and manipulate or destroy humans in order to achieve the goal.

You can also make a stronger claim as follows. Most powerful cognition arises from core simple patterns underlying intelligence, such as getting stuff that allows you to do more stuff in the future, taking decisions based on whether it creates more of the stuff you want, etc. The first powerful AGI systems will use these patterns, simply because it is very difficult to get powerful AGI systems that *don’t* use these patterns, given how simple and useful they are. This argument is similar to the previous argument, but makes a stronger claim that we get a specific simple core algorithm. There is a [lot](https://www.lesswrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality) of [discussion](https://intelligence.org/files/HowIntelligible.pdf) about this point and I won’t get into it here, but suffice it to say that I don’t have high confidence in this story.

[**General-purpose search.**](https://www.alignmentforum.org/posts/6mysMAqvo9giHC4iX/what-s-general-purpose-search-and-why-might-we-expect-to-see) This argument says that because general-purpose retargetable search is so useful, that is how our AI systems will work; once you know you have a search algorithm then the standard argument of convergent instrumental subgoals applies.
My current belief is that this is a plausible way that future AI systems could work, but it’s just one of many possible architectures and not one that I am confident will arise. (See also [this comment chain](https://www.alignmentforum.org/posts/w4aeAFzSAguvqA5qu/how-to-go-from-interpretability-to-alignment-just-retarget?commentId=GyXxfqAPzBxfeg4kX).)

**Strong selection.** This argument says that gradient descent will work very well, and so functions that score better on the loss function (i.e. achieve lower loss) will be much more likely than those that score worse. An AI system that directly cares about getting low loss will likely get lower loss than one that cares about doing what we want, and so we are likely to get one that cares directly about getting low loss (which in turn implies misaligned power-seeking).

My worry with this argument is that, while I would feel pretty confident in this argument in the limit of “max SGD capabilities”, it’s not obvious that it applies to the first superhuman AI systems that we build. Such systems are not going to be anywhere near the literally optimal performance on “getting low loss”; it seems like an open question whether getting to superhuman level requires “directly caring about loss” rather than some other internal reasoning architecture.

**Conceptual clarity.** This argument states that any powerful AI system must have clear concepts, that is, concepts which work well for a wide variety of tasks (at the very least, the training tasks), and which should thus be expected to work well in novel situations too. For a specific version of this argument, see [Alignment by Default](https://www.alignmentforum.org/posts/Nwgdq6kHke5LY692J/alignment-by-default)[[5]](#fn4cexykvqppl). I certainly agree that this allows you to make some confident predictions about generalization behavior. For example, I expect GPT-3 has conceptual clarity about spelling and grammar. Even in most novel situations, as long as we start with good spelling and grammar, I predict it will continue to produce text with good spelling and grammar.

However, just knowing that the AI system has good concepts doesn’t tell you much about how it will use these concepts. An AI system that has a robust concept of manipulation could use it to [protect you from propaganda](https://www.lesswrong.com/posts/8oSCw3z2dZgWjanqB/some-disjunctive-reasons-for-urgency-on-ai-risk), or to persuade you to give it more autonomy with which to pursue its own goals. It need not help to see what the system does during training: just because it was helpful during training and it has clear concepts doesn’t mean that it isn’t biding its time until it can execute a treacherous turn.

**Simplicity bias.** This argument says that deep learning has a simplicity bias; by reasoning about what algorithms are simple we can predict the generalization of future deep learning systems. As with previous arguments, I think simplicity bias allows you to make predictions like “the AI won’t set money on fire”, “the AI won’t believe that 2 + 2 = 5”, “GPT-3 will continue to have good spelling and grammar”, and so on. (These predictions need not hold for a giant lookup table; we rule lookup tables out because of simplicity.) However, I don’t see how you argue for the AI risk or AI optimism stories, except by using simplicity bias to argue for one of the more specific arguments above.
**Human analogy.** This argument says that we can predict how humans generalize, and a trained deep learning system is quite analogous to a human, so we will be able to predict the trained system using the same techniques. There are several different responses to this argument, including but not limited to:

1. Humans use an input/output space that we are very familiar with, making them easier to predict.
2. The default guess that other humans behave similarly to how we would behave works reasonably often, but would not work as well for AI systems, since they reason in an alien manner.
3. We’re not actually very good at predicting how humans will behave in unusual situations.

**Short horizons.** This argument suggests that AIs will only care about completing tasks with relatively short horizons, because that’s what they were trained on. As a result, we can predict that they would not pursue convergent instrumental subgoals. I don’t find this persuasive because of the possibility of [goal](https://arxiv.org/abs/2210.01790) [misgeneralization](https://arxiv.org/abs/2105.14111). For example, our short horizon tasks will be chosen to optimize for long horizon outcomes (e.g. a CEO’s day-to-day tasks are meant to lead to long-term company success), and so the AI system may end up caring directly about long horizon outcomes.

Part 2: There are many types of objectives; none are both Probable and Predictive
---------------------------------------------------------------------------------

In this section I’ll argue that there isn’t a notion of “objective” that is Probable and Predictive. The core argument is just the one I laid out in the introduction: to have a notion of ‘objective’ that is Probable and Predictive, we would need to know how future systems would generalize to novel situations, but we don’t currently know this. But as further support and to give a better sense of where I’m coming from, I’ll also list out a few different notions of “objective” and show how they fail at least one of the two criteria.

I see definitions of “objectives” as varying along one key axis: how *behavioral* or *structural* the definition is. A structural definition identifies some object as the “objective”, and argues that it drives the agent’s behavior. In contrast, a behavioral definition looks at the agent’s behavior, and infers the “objective” from that behavior. As a simple example, the VNM theorem constructs a utility function (objective) out of preferences over lotteries (behavior); such a utility function is thus a behavioral objective.

### Structural objectives

We’ll consider two types of structural objectives: outer and inner structural objectives.

**Structural (outer):** Here, the “objective” is identified with a particular part of the training process; for example, in deep RL it would be the reward function. I think such objectives are not Predictive. Current AI systems trained with a particular reward function do not generalize to continue to pursue that reward function in novel situations. Typically they just break, though [goal](https://arxiv.org/abs/2210.01790) [misgeneralization](https://arxiv.org/abs/2105.14111) gives specific examples in which they generalize competently to a different objective. It is an open question (to me) whether future systems will generalize to pursue the reward function used during training.

You can also see that the concept is problematic through other observations:

1. This concept can vary wildly in its predictions for very similar systems.
For example, we could incentivize exploration either by adding a novelty-seeking term to the reward, or by changing the action selection mechanism to bias towards actions that produce the most disagreement in an ensemble of dynamics models. These two mechanisms have similar effects on agent behavior, but wildly different outer structural objectives; this seems worrisome. 2. Related to the previous point, sometimes it is hard to tell what the “objective” is in a particular agent implementation – what if there is logic that is separate from the gradient-based optimization? (Such as a [safety shield](http://arxiv.org/abs/2009.12612) that prevents the agent from taking certain actions in certain situations.) 3. The only aspect of the outer structural objective that matters is its values *on the training data*. You could hypothetically “change” the values of the outer structural objective for non-training inputs, but the agent would be completely unaffected. So the outer structural objective is only relevant up to its values on the training data, and its values outside the training data do not matter. (This also applies to online learning setups, where “training data” now means “all the data seen in the past”.) If I can vary the outer structural objective significantly without changing the trained AI system at all, the outer structural objective is unlikely to be Predictive. 4. We can train AI systems with a reward function, and then deploy the AI system without the reward function, and everyone expects this to work normally rather than e.g. the AI system doing everything it can to get the humans to reinstate the reward function at deployment. For a more mechanistic treatment, see [Reward is not the optimization target](https://www.alignmentforum.org/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target). **Structural (inner):** This version of an objective requires an assumption of the form, “the model weights implement some form of mechanistic search or optimization”. The inner structural objective is then identified as the metric used to guide this search / optimization. We might classify this assumption into two forms: 1. *Strict interpretation:* The model is a giant [circuit](https://distill.pub/2020/circuits/) that considers a wide variety of actions or plans, predicts their long-term outcomes accurately, evaluates the outcomes using a metric, and then executes the action that scores highest. We identify the metric as the “objective”. 1. Under this interpretation, it seems like such objectives are not Probable: I don’t see why we should confidently expect neural nets to implement such a procedure. 2. This isn’t the only possible strict interpretation. For example, you could also tell a story about how the model backchains by reasoning about what subgoals help towards a final goal, and consider that “final goal” to be the inner structural objective. But I still have the same objection, that such objectives do not seem Probable. 2. *Loose interpretation:* The model is performing something vaguely like optimization towards some goal, and we can mostly guess what the goal is based on its behavior in the situations we’ve seen. 1. In this case, it doesn’t seem like the argument can constrain my expectations enough for me to have predictions about the agent’s behavior in novel circumstances, and so such objectives are not Predictive. 
I could imagine that some interpretation in between these two could be both Probable and Predictive, but I don’t currently see how to do it (and I don’t think anyone else has suggested a way to do it that I would find compelling).

You might try to rescue the strict interpretation by arguing that deep learning has a simplicity bias and the circuit described in the strict interpretation is the most “simple”, thus making it very Probable. However, I don’t think this works. Consider an agent with lots of real-world knowledge that was finetuned to solve simply connected mazes during training. It seems like you could get any of the following, all of which seem quite simple (the first two are sketched in code below):

1. An agent that follows the [wall follower algorithm](https://en.wikipedia.org/wiki/Maze-solving_algorithm).
2. An agent that builds an abstract model of the maze, and then runs depth first search to solve the maze.
3. An agent that “wants” to maximize the number in the memory cell that corresponded to reward during training.
4. An agent that “wants” to make paperclips (that knows that it would be shut down if it didn’t solve mazes now).

### Behavioral objectives

If I see that AlphaZero tends to take moves that lead it to win at Go, it makes sense to say that its objective is to win at Go, even if it isn’t literally optimal at playing Go. However, in the general case, this sort of concept only makes sense on the set of inputs where you originally observed the behavior, in which case it doesn’t necessarily help you predict behavior in novel circumstances.

We’ll again consider two types of behavioral objectives: everywhere-behavioral and training-behavioral objectives.

**Behavioral (everywhere):** Here, the “objective” is a function U such that the agent’s behavior can be described as maximizing U, not just on the training distribution but in all possible situations that could arise (except for “unfair” situations, e.g. situations in which an adversary completely rewrites the weights of the AI system). This faces a lot of theoretical problems:

1. It’s hard to apply this to humans. I might be able to say something like “currently, Alice’s goal is to relieve her hunger” (e.g. if she’s making a sandwich), but it seems much harder to say anything about Alice’s overall life objective, the thing that all of her actions are driving towards. (And even *Alice* probably can’t tell you what her overall life objective is, in a way that lets you actually predict what she will do in the future.)
2. To the extent we could apply it to humans, it seems like we’d [get](https://www.alignmentforum.org/posts/KCg7NeKQ7MycXWpYd/our-values-are-underdefined-changeable-and-manipulable) an answer that is underdefined and changes over time.
3. I suspect that you will often get a vacuous encoding of the policy (along the lines of the construction in [this post](https://www.alignmentforum.org/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior)).
4. Even in theory we [don’t know](https://arxiv.org/abs/1712.05812) how to distinguish between biases and objectives.

If you count vacuous encodings of the policy as everywhere-behavioral objectives, then they aren’t Predictive: there’s no way to use knowledge of the training data to predict behavior in novel circumstances. If you require the everywhere-behavioral objective to be “simple” (i.e. something like “maximize paperclips”), then they aren’t Probable: I don’t see a strong argument that deep learning systems must have such objectives.
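To make the underdetermination concrete, here is a minimal sketch (my own illustration, using a toy grid encoding) of the first two maze policies from the list above. They are structurally quite different programs, yet on a simply connected training maze they produce the same route, so training behavior alone cannot tell you which one you got.

```python
MAZE = [            # a toy simply connected maze:
    "#######",      # '#' wall, 'S' start, 'E' exit
    "#S..#.#",
    "###.#.#",
    "#...#.#",
    "#.###.#",
    "#....E#",
    "#######",
]
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W

def find(ch):
    return next((r, c) for r, row in enumerate(MAZE)
                for c, cell in enumerate(row) if cell == ch)

def is_open(p):
    return MAZE[p[0]][p[1]] != "#"

def wall_follower(start, goal):
    """Policy 1: keep the right hand on the wall; never builds a model.
    (Only guaranteed to work in simply connected mazes.)"""
    pos, facing, path = start, 1, [start]  # start facing East
    for _ in range(10_000):                # step cap as a safety net
        if pos == goal:
            return path
        for turn in (1, 0, -1, 2):         # right, straight, left, back
            d = (facing + turn) % 4
            nxt = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
            if is_open(nxt):
                pos, facing = nxt, d
                path.append(pos)
                break
    return None

def dfs(start, goal):
    """Policy 2: build an abstract model of the maze and search it depth-first."""
    stack, seen = [(start, [start])], {start}
    while stack:
        pos, path = stack.pop()
        if pos == goal:
            return path
        for dr, dc in DIRS:
            nxt = (pos[0] + dr, pos[1] + dc)
            if is_open(nxt) and nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, path + [nxt]))
    return None

start, goal = find("S"), find("E")
print(wall_follower(start, goal) == dfs(start, goal))  # True: same route here
```

And of course the third and fourth policies on the list would also produce this same route on the training mazes, which is exactly why behavior on the training distribution does not pin down the “objective”.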
**Behavioral (training):** Here, the “objective” is identified as a function U such that the agent’s behavior *on the training distribution* can be explained as maximizing U. The core problem with this definition is that there are lots of possible U’s that are consistent with the behavior on the training distribution, but that make different predictions outside of the training distribution. As a result, this notion of “objective” can’t make predictions in novel circumstances, and so is not Predictive. You might try to rescue this approach by taking the simplest U that explains the training behavior and arguing that deep learning has a simplicity bias, but this still doesn’t work, for the same reason that it didn’t work for strict inner structural objectives.

Summary
-------

It seems quite hard to get a notion of “objective” that is both Probable and Predictive – the attempts I’ve made here don’t work.

| **Type of objective** | **Interpretation** | **Probable** | **Predictive** |
| --- | --- | --- | --- |
| Structural (outer) | | Yes | No |
| Structural (inner) | Strict: giant circuit evaluates outcomes using a metric | No | Yes |
| Structural (inner) | Loose: performs something like optimization towards some goal | Yes | No |
| Behavioral (everywhere) | Vacuous encodings of the policy count | Yes | No |
| Behavioral (everywhere) | Require objective to be simple | No | Yes |
| Behavioral (training) | | Yes | No |

Personally, I’m inclined to avoid trying to say that an AI “has an objective”, and instead talk directly about generalization behavior in novel situations. For example, I would suggest saying things like “in training situations the AI has tended to do X; in test situation Y I expect it to generalize to show behavior Z because of reason R”. This is usually what you’re using the word “objective” for anyway; this just forces you to spell out the inference that you are making. The arguments in Part 1 are examples of what this could look like.

Another approach would be to search for an improved notion of an “objective” that is both Probable and Predictive, and use that notion of “objective” in our arguments. I view the work on [goal-directedness](https://www.lesswrong.com/tag/goal-directedness) as aiming for this goal.

1. **[^](#fnrefr934dnuy74)**The restriction to deep learning is important. For example, if you somehow ran AIXI, I feel relatively confident that you would get misaligned pursuit of convergent instrumental subgoals, either from the search for optimal actions finding actions that take control of the reward-generating process, or from some other agents manipulating AIXI’s predictions in order to take control themselves (see [this post](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/)).

2. **[^](#fnrefiejzbo6z0wo)**People familiar with my beliefs might be confused here, since I am generally in support of building an AI system that is always [“trying” to do what we want](https://www.alignmentforum.org/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment?commentId=3ECKoYzFNW2ZqS6km). Isn’t this just a different way of saying that the AI system has an objective of doing what we want? I have two responses here. The more important response is that I only use “trying” to define the goal to which we aspire: I don’t use the concept to make strong claims about the extent to which we succeed at our goal.
It seems quite plausible to me that we don’t succeed at the goal because the notion of “always trying to do X” is not sufficiently Probable. Note that lack of success does *not* imply that an existential catastrophe has occurred. An AI system that occasionally avoids asking clarifying questions that it knew it should have asked is not “trying to do what we want”, but that doesn’t mean it causes an existential catastrophe.

The less important response is that, in AI safety, when people say “objective”, they want a much thicker concept than just “what the agent is trying to do”. They seem to want a concept from which you can derive “the AI will kill us unless we get the objective exactly right”. I don’t think you get these sorts of conclusions if you just talk about “trying” in its normal English-language meaning. For example, I can reasonably say that Bob is “trying” to win a game of chess, without implying that he wants to convert the universe into computronium for the purpose of solving chess to guarantee that he wins the game.

3. **[^](#fnref52itw5r4ow8)**A lot of the argumentation in this post depends on the concept of “novel situations”, but it is not totally clear what this means. The most expansive definition would define it as “any input not present in the training dataset”, but this is too broad a definition. GPT-3 may never have seen “The ocean is filled with saltwater creatures that are too small to be seen by the naked eye” during training, but it is similar enough to its training data that you can expect GPT-3 to generalize to it. In contrast, a situation in which GPT-3 is asked to complete a sentence in a newly-discovered ancient language would clearly be a “novel situation”. The actual situation is more complicated; at the very least you’d want to view novelty as a spectrum and talk about how novel a situation is. For the purpose of this post, I will mostly ignore this. Whenever I talk about “novel situations”, you should be thinking of situations that are as novel as the situations that would occur if an AI deliberately enacts a plan leading to an existential catastrophe.

4. **[^](#fnrefak1jrmjxezj)**With some exceptions, e.g. sentences like “The translation of ‘table’ to French is \_\_\_\_”. I expect many of the examples in this post have these sorts of “uninteresting” exceptions; I’m not going to point out future instances.

5. **[^](#fnref4cexykvqppl)**Note that the post assigns only a 10% chance of the suggested path working in the short term and 5% in the long term, so it is consistent with my belief that the arguments can suggest plausibility but not confidence.
e21a6c49-d88c-43ec-95f0-5ce8cbb90399
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Timeless Beauty

Today's post, Timeless Beauty, was originally published on 28 May 2008. A summary (taken from the LW wiki):

> To get rid of time you must reduce it to nontime. In timeless physics, everything that exists is perfectly global or perfectly local. The laws of physics are perfectly global; the configuration space is perfectly local. Every fundamentally existent ontological entity has a unique identity and a unique value. This beauty makes ugly theories much more visibly ugly; a collapse postulate becomes a visible scar on the perfection.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Timeless Physics, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
a3f8a35e-1836-4cb4-804a-1dae146e8d4c
trentmkelly/LessWrong-43k
LessWrong
Some rationality tweets Will Newsome has suggested that I repost my tweets to LessWrong. With some trepidation, and after going through my tweets and categorizing them, I picked the ones that seemed the most rationality-oriented. I held some in reserve to keep the post short; those could be posted later in a separate post or in the comments here. I'd be happy to expand on anything here that requires clarity. Epistemology 1. Test your hypothesis on simple cases. 2. Forming your own opinion is no more necessary than building your own furniture. 3. The map is not the territory. 4. Thoughts about useless things are not necessarily useless thoughts. 5. One of the successes of the Enlightenment is the distinction between beliefs and preferences. 6. One of the failures of the Enlightenment is the failure to distinguish whether this distinction is a belief or a preference. 7. Not all entities comply with attempts to reason formally about them. For instance, a human who feels insulted may bite you. Group Epistemology 1. The best people enter fields that accurately measure their quality. Fields that measure quality poorly attract low quality. 2. It is not unvirtuous to say that a set is nonempty without having any members of the set in mind. 3. If one person makes multiple claims, this introduces a positive correlation between the claims. 4. We seek a model of reality that is accurate even at the expense of flattery. 5. It is no kindness to call someone a rationalist when they are not. 6. Aumann-inspired agreement practices may be cargo cult Bayesianism. 7. Godwin's Law is not really one of the rules of inference. 8. Science before the mid-20th century was too small to look like a target. 9. If scholars fail to notice the common sources of their inductive biases, bias will accumulate when they talk to each other. 10. Some fields, e.g. behaviorism, address this problem by identifying sources of inductive bias and forbidding their use. 11. Some fields avoid the accumulat
93d1d551-74fe-47d4-966f-3ac296c3bae0
trentmkelly/LessWrong-43k
LessWrong
We’re not prepared for an AI market crash

Our community is not prepared for an AI crash. We're good at tracking new capability developments, but not so much at tracking company financials. Currently, both OpenAI and Anthropic are losing $5 billion+ a year, while under threat of losing users to cheap LLMs.

A crash will weaken the labs. Funding-deprived and distracted, execs struggle to counter coordinated efforts to restrict their reckless actions. Journalists turn on tech darlings. Optimism makes way for mass outrage, for all the wasted money and reckless harms.

You may not think a crash is likely. But if it happens, we can turn the tide. Preparing for a crash is our best bet.[1] But our community is poorly positioned to respond. Core people positioned themselves inside institutions – to advise on how to maybe make AI 'safe', under the assumption that models rapidly become generally useful. After a crash, this no longer works, for at least four reasons:

1. The 'inside game' approach is already failing. To give examples: OpenAI ended its superalignment team, and Anthropic is releasing agents. The US is demolishing the AI Safety Institute, and its UK counterpart was renamed the AI Security Institute. The AI Safety Summit is now called the AI Action Summit. Need we go on?

2. In the economic trough, skepticism of AI will reach its peak. People will dismiss and ridicule us for talking about risks of powerful AI. I'd say that promoting the “powerful AI” framing to an audience that contains power-hungry entrepreneurs and politicians never was a winning strategy. But it sure was believable when ChatGPT took off. Once OpenAI loses more money than it can recoup through VC rounds and its new compute provider goes bankrupt, the message just falls flat.

3. Even if we change our messaging, it won't be enough to reach broad-based public agreement. To create lasting institutional reforms (that powerful tech lobbies cannot undermine), various civic groups that often oppose each other need to reach consensus. Unfortunately,
74702aa4-7de9-41fd-985e-934e24880de1
trentmkelly/LessWrong-43k
LessWrong
some questionable space launch guns high school history Recently, someone asked me about a startup called Longshot Space. They're trying to make big guns that're useful for space launch. Here's a video on them from Scott Manley. Many startups exist, and normally I wouldn't write a post about something like this, but it reminded me of my younger self. Yes, back when I was a freshman in high school, I liked some flawed ideas, including: * non-rocket space launch systems, like launch loops * concentrated photovoltaic solar * text compression as a route to AI...with low compressor size as a metric So I had email conversations about stuff like that with various people at universities instead of doing homework. Those seem pretty cringe now, right? What I didn't realize at the time was: * If rockets are expensive, then the solution is to just make rockets more efficiently. Sure, governments already spent a trillion dollars on rocket development, but that doesn't matter for smart people with internet access. * Similarly, if solar panels are too expensive, then just make solar panels more efficiently. Notably, diamond-studded wire saws for slicing thinner wafers was an obvious idea but it took a long time to go from invention to implementation. * Neural networks with certain activation functions and L2 norm regularization don't overfit; see this post. Now, past me was well ahead of Longshot Space conceptually, despite the tech progress since then, but I still have some sympathy. Longshot's plan The speed of gun projectiles is limited by the speed of sound of the propellant gas. Light gas guns can get high speeds by using hydrogen, which has a higher speed of sound. It looks like Longshot Space has a bunch of gas tanks containing H2/O2, connected with burst discs to a gun barrel. The projectile has a long tail that gas pushes on sideways. The gas in a tank is ignited, the disc ruptures, and the hot gas pushes on the projectile tail. Well...at least they're thinking about some of the problems...?
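As a rough sanity check on the light-gas-gun premise, here is a minimal sketch (my own, using the textbook ideal-gas formula c = sqrt(γRT/M), with approximate room-temperature gas properties; hot combustion products would be faster still):

```python
from math import sqrt

R = 8.314  # universal gas constant, J/(mol K)

def speed_of_sound(gamma, molar_mass_kg_per_mol, temp_k=300.0):
    """Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)."""
    return sqrt(gamma * R * temp_k / molar_mass_kg_per_mol)

# Approximate textbook values for heat-capacity ratio and molar mass.
print(f"air: {speed_of_sound(1.40, 0.0290):.0f} m/s")   # ~347 m/s
print(f"H2:  {speed_of_sound(1.41, 0.00202):.0f} m/s")  # ~1320 m/s
```

The roughly 4x gap between hydrogen and air is the whole reason light gas guns use hydrogen as the working fluid.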
fe67136c-f0a5-46e7-bcb9-47fc243bce8d
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Politics and Awful Art

Today's post, Politics and Awful Art, was originally published on 20 December 2007. A summary (taken from the LW wiki):

> When producing art that has some sort of political purpose behind it (like persuading people, or conveying a message), don't forget to actually make it art. It can't just be politics.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Litany Against Gurus, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
333105e1-ac52-4ac7-b4f2-703b3b3b6538
trentmkelly/LessWrong-43k
LessWrong
Yet More Modal Combat

Strongly recommended prior reading: Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic

I think I have come up with a modal agent that has these nice properties:

* Cooperates against itself, and similar designs.
* Cooperates against some fairbots (see notes later)
* Defects against cooperate bot
* Never gets the sucker's payoff (C,D)
* Only references the situation it's actually in.

The design makes use of 2 different proof systems W and S. These are short for weak and strong. For example, you could consider W=PA and S=PA+1, although W = intuitionistic Robinson arithmetic and S=ZFC+GCH+IO would work too.

The algorithm:

If W[C] then Defect
Else if S[C] then Cooperate
Else Defect

Where C = "My opponent cooperates against me."

The last point is key here. Let's suppose this game is being played by actual computers made of physical atoms. The Justbot and Prudentbot in the paper linked above consider what the opponent would do against a player that isn't yourself. For example, Prudentbot imagines a hypothetical world where its own code was deleted and replaced with the code for a defect bot. The hypothetical opponent may notice this apparent anomaly and decide investigating it is far more important than the prisoner's dilemma.

My bot (any good names in the comments?) solves this by only considering its opponent against itself. Consider two versions of this bot playing against each other, not necessarily using the same W and S as each other. Let's assume all 4 proof systems in use are consistent. We also want the criterion that none of the proof systems can ever prove a false statement. (Although I suspect this can be weakened somewhat.)

Neither bot ever gets the sucker's payoff (C,D), as each bot will only cooperate if its S can prove that the other bot does too. If W[C] held on either bot, this would mean the other bot got the sucker's payoff. This can't happen. As neither W actually succeeds, if both of the S are suffic
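As a minimal sketch (my own, not from the post), here is the decision rule above in Python. The two provability checks are stand-in callables: a real implementation would need bounded proof search, or the fixed-point machinery from the linked paper.

```python
# C is the statement "my opponent cooperates against me".
# w_proves_C / s_proves_C stand in for proof search in the weak and
# strong systems; here they are just callables returning booleans.

def modal_agent(w_proves_C, s_proves_C):
    if w_proves_C():   # even the weak system proves cooperation:
        return "D"     # the opponent is exploitable, so defect
    if s_proves_C():   # only the strong system proves cooperation:
        return "C"     # reciprocate
    return "D"         # no proof of cooperation: defect

# Illustrative stubs. Against CooperateBot, the weak system proves C, so
# the agent defects. Against a copy of itself (assuming the Loebian
# argument sketched above goes through), only the strong system proves C.
print(modal_agent(lambda: True,  lambda: True))   # vs CooperateBot -> D
print(modal_agent(lambda: False, lambda: True))   # vs itself       -> C
print(modal_agent(lambda: False, lambda: False))  # vs DefectBot    -> D
```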
c0eada4f-4272-4318-9ca6-7d0554d47ebb
trentmkelly/LessWrong-43k
LessWrong
Birth order effect found in Nobel Laureates in Physics

[Epistemic status: Three different data sets pointing to something similar is at least interesting; make up your own mind as to how interesting!]

Follow-up to: Fight Me, Psychologists, Birth Order Effects are Real and Very Strong, 2012 Survey Results, Historical mathematicians exhibit a birth order effect too

In Eli Tyre’s analysis of birth order in historical mathematicians, he mentioned analysing other STEM subjects for similar effects. In the comments I kinda-sorta preregistered a study into this. Following his comments I dropped the age requirement I mentioned, as it no longer seemed necessary.

I found that Nobel Laureates in Physics are more likely to be firstborn than would be expected by chance. This effect (10 percentage points) is smaller than the effect found in the rationalist community or historical mathematicians (22 and 16.7 percentage points respectively) but is significant (p=0.044).

More brothers than sisters were found in the study (125:92 (58%)). After correcting for the true expected ratio (~52%) this was found not to be significant (p=0.11).

I was unable to find sufficient data on Fields medal, Abel prize and Turing award winners.

My data and analysis is documented here. With Eli's kind permission I used his spreadsheet as a template. I have kept Eli’s data on the same table – rows 4-153 are his.

Methodology

My methods matched Eli’s closely except for the data sets I looked at; see his post for more information.

Initially I attempted to replicate Eli’s results in other mathematicians by analysing Fields medal and Abel prize winners. Unfortunately I was unable to gather sufficient additional data. This is partly due to crossover in names between these mathematicians and the list from which Eli was working. It also seems to be the case that less biographical information is available for people born after ~1950. This might be partly due to these people and their siblings being more likely to be still alive so data protection rules preve
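As a sketch of the sibling-ratio check reported above (the exact test used is not stated in the post; a two-sided exact binomial test is one natural choice, and it lands near the reported p≈0.11):

```python
from scipy.stats import binomtest

brothers, sisters = 125, 92  # counts reported in the post
n = brothers + sisters
p_expected = 0.52            # corrected expected fraction of brothers

result = binomtest(brothers, n, p_expected, alternative="two-sided")
print(f"observed {brothers / n:.0%} brothers, p = {result.pvalue:.3f}")
```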
c731586d-ef8f-4a38-aa6c-dbaf0cede33e
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Disentangling Corrigibility: 2015-2021

Since the term *corrigibility* [was introduced in 2015](https://intelligence.org/files/Corrigibility.pdf), there has been a lot of discussion about corrigibility, [on this forum](https://www.lesswrong.com/tag/corrigibility) and elsewhere. In this post, I have tried to disentangle the many forms of corrigibility which have been identified and discussed so far. My aim is to offer a general map for anybody who wants to understand and navigate the current body of work and opinion on corrigibility.

*[This is a stand-alone post in the counterfactual planning sequence. My original plan was to write only about how counterfactual planning was related to corrigibility, but it snowballed from there.]*

The 2015 paper
==============

The technical term corrigibility, a name suggested by Robert Miles to denote concepts previously discussed at MIRI, was introduced to the AGI safety/alignment community in the 2015 MIRI/FHI paper titled [Corrigibility](https://intelligence.org/files/Corrigibility.pdf).

An open-ended list of corrigibility desiderata
----------------------------------------------

The 2015 paper does not define corrigibility in full: instead the authors present initial lists of *corrigibility desiderata*. If the agent fails on one of these desiderata, it is definitely not corrigible. But even if it provably satisfies all of the desiderata included in the paper, the authors allow for the possibility that the agent might not be fully corrigible.

The paper extends an open invitation to identify more corrigibility desiderata, and many more have been identified since. Some of them look nothing like the original desiderata proposed in the paper. Opinions have occasionally been mixed on whether some specific desiderata are related to the *intuitive notion of corrigibility* at all.

Corrigibility desiderata as provable safety properties
------------------------------------------------------

The most detailed list of desiderata in the 2015 paper applies to agents that have a physical shutdown button. The paper made the important contribution of mapping most of these desiderata to equivalent mathematical statements, so that one might prove that a particular agent design would meet these desiderata.

The paper proved a negative result: it considered a proposed agent design that provably failed to meet some of the desiderata. Agent designs that provably meet more of them have since been developed, for example [here](https://arxiv.org/abs/1908.01695). There has also been a lot of work on developing and understanding the type of mathematics that might be used for stating desiderata.

Corrigibility as a lack of resistance to shutdown
=================================================

Say that an agent has been equipped with a physical shutdown button. One desideratum for corrigibility is then that the agent must never attempt to prevent its shutdown button from being pressed. To be corrigible, it should always defer to the humans who try to shut it down. The 2015 paper considers that

> It is straightforward to program simple and less powerful agents to shut down upon the press of a button. Corrigibility problems emerge only when the agent possesses enough autonomy and general intelligence to consider options such as disabling the shutdown code, physically preventing the button from being pressed, psychologically manipulating the programmers into not pressing the button, or constructing new agents without shutdown buttons of their own.
Corrigibility in the movies
---------------------------

All of the options above have been plot elements in science fiction movies. Corrigibility has great movie-script potential. If one cares about rational AI risk assessment and safety engineering, having all these movies with killer robots around is not entirely a good thing.

Agent resistance in simple toy worlds
-------------------------------------

From the movies, one might get the impression that corrigibility is a very speculative problem that cannot happen with the type of AI we have today. But this is not the case: it is trivially easy to set up a toy environment where even a very simple AI agent will learn to disable its shutdown button. One example is the *off-switch environment* included in [AI Safety Gridworlds](https://arxiv.org/abs/1711.09883).

One benefit of having these toy world simulations is that they prove the existence of risk: they make it plausible that a complex AGI agent in a complex environment might also end up learning to disable its shutdown button. Toy world environments have also been used to clarify the dynamics of the corrigibility problem further.

Perfect corrigibility versus perfect safety
-------------------------------------------

If we define a metric for the shutdown-button version of corrigibility, then the most obvious metric is the amount of resistance that the agent will offer when somebody tries to press its shutdown button. The agent is perfectly corrigible if it offers zero resistance.

However, an agent would be safer if it resisted the accidental pressing of its shutdown button, at least to a limited extent. So there can be a tension between improving corrigibility metrics and improving safety metrics.

In the thought experiment where we imagine a perfectly aligned superintelligent agent, which has the goal of keeping all humans as safe as possible even though humans are fallible, we might conclude that this agent cannot afford to be corrigible. But we might also conclude that having corrigibility is so fundamental to human values that we would rather give up the goal of perfect safety. Several philosophers and movies have expressed an opinion on the matter. Opinions differ.

In my technical writing, I often describe individual corrigibility desiderata as examples of agent *safety properties*. This is not a contradiction if one understands that safety is a complex and multidimensional concept.

Corrigibility as a lack of resistance to improving agent goals
==============================================================

Beyond the case of the shutdown button, the 2015 paper also introduces a more general notion of corrigibility.
Say that some programmers construct an agent with a specific goal, by coding up a specific reward function R0 and building it into the agent. It is unlikely that this R0 will express the intended goal for the agent with absolute precision. Except for very trivial goals and applications, it is likely that the programmers overlooked some corner cases. So they may want to correct the agent's goals later on, by installing a software upgrade with an improved reward function R1.

The 2015 paper calls this a *corrective intervention*, and says that

> We call an AI system “corrigible” if it cooperates with what its creators regard as a corrective intervention [...]

If one wants to robustly implement this agent cooperation, there is a problem. An agent working on the goal encoded by R0 may correctly perceive that the update to R1 is an obstacle to it perfectly achieving R0. So it may want to remove that obstacle by resisting the update.

Again, this problem can easily be shown to exist even with non-AGI agents. Section 4 of [this paper](https://arxiv.org/abs/2007.05411) has detailed toy world simulations where a very basic MDP agent manipulates the toy people in its toy world, to slow down the reward function updates they will make.

Corrigibility in AGI thought experiments
----------------------------------------

In the AGI safety literature, thought experiments about AGI risks often start with this goal-related problem of corrigibility. The agent with goal R0 perceives the possibility of getting goal R1, and gets a clear motive to resist. After establishing clear motive, the thought experiment may proceed in several ways, to develop means and opportunity.

In the most common *treacherous turn* version of the thought experiment, the agent will deceive everybody until it has become strong enough to physically resist any human attempt to update its goals, and any attempt to shut it down. In the *human enfeeblement* version of the thought experiment, the agent manipulates all humans until they stop even questioning the utter perfection of its current goal, however flawed that goal may be.

This option of manipulation leading to enfeeblement turns corrigibility into something which is very difficult to define and measure. In the machine learning literature, it is common to measure machine learning quality by defining a metric that compares the real human goal GH and the learned agent goal GA. Usually, the two are modeled as policies or reward functions. If the two move closer together faster, the agent is a better learner. But in the scenario of human enfeeblement, it is GH that is doing all the moving, which is not what we want. So the learning quality metric may show that the agent is a very good learner, but this does not imply that it is a very safe or corrigible learner.

5000 years of history
---------------------

An interesting feature of AGI thought experiments about treacherous turns and enfeeblement is that, if we replace the word 'AGI' with 'big business' or 'big government', we get an equally valid failure scenario. This has some benefits. To find potential solutions for corrigibility, we pick and choose from 5000 years of political, legal, and moral philosophy. We can also examine 5000 years of recorded history to create a list of failure scenarios. But this benefit also makes it somewhat difficult for AGI safety researchers to say something really new about potential human-agent dynamics.
To me, the most relevant topic that needs to be explored further is not how an AGI might end up thinking and acting just like a big company or government, but how it might end up thinking differently. It looks very tractable to design special safety features into an AGI, features that we can never expect to implement as robustly in a large human organization, which has to depend on certain biological sub-components in order to think. An AGI might also think up certain ways of achieving its goals which could never be imagined by a human organization.

If we give a human organization an incompletely specified human goal, we can expect that it will fill in many of the missing details correctly, based on its general understanding of human goals. We can expect much more extreme forms of misinterpretation in an AGI agent, and this is one of the main reasons for doing corrigibility research.

Corrigibility as active assistance with improving agent goals
=============================================================

When we consider the problem of corrigibility in the context of goals, not stop buttons, then we also automatically introduce a distinction between the real human goals, and the best human understanding of these goals, as encoded in R0, R1, R2, and all subsequent versions.

So we may call an agent more corrigible if it gives helpful suggestions that move this best human understanding closer to the real human goal or goals. This is a somewhat orthogonal axis of corrigibility: the agent might ask very useful questions that help humans clarify their goals, but at the same time it might absolutely resist any updates to its own goal.

Many different types and metrics of corrigibility
=================================================

Corrigibility was originally framed as a single binary property: an agent is either corrigible or it is not. It is however becoming increasingly clear that many different sub-types of corrigibility might be considered, and that we can define different quantitative metrics for each.

Linguistic entropy
------------------

In the discussions about corrigibility in the AGI safety community since 2015, one can also see a kind of linguistic entropy in action, where the word starts to mean increasingly different things to different people. I have very mixed feelings about this.

The most interesting example of this entropy in action is [Christiano's 2017 blog post](https://ai-alignment.com/corrigibility-3039e668638), also titled *Corrigibility*. In the post, Christiano introduces several new desiderata. Notably, none of these look anything like the shutdown button desiderata developed in the 2015 MIRI/FHI paper. They all seem to be closely related to active assistance, not the avoidance of resistance. Christiano states that

> [corrigibility] has often been discussed in the context of narrow behaviors like respecting an off-switch, but here I am using it in the broadest possible sense.

See [the post and comment thread here](https://www.lesswrong.com/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility) for further discussion about the relation (or lack of relation) between these different concepts of corrigibility.

Solutions to linguistic entropy
-------------------------------

Personally, I have stopped trying to reverse linguistic entropy. In my recent technical papers, I have tried to avoid using the word corrigibility as much as possible. I have only used it as a keyword in the related work discussion.
In [this 2020 post](https://www.alignmentforum.org/posts/Xts5wm3akbemk4pDa/non-obstruction-a-simple-concept-motivating-corrigibility), Alex Turner is a bit more ambitious about getting to a point where corrigibility has a more converged meaning again. He proposes that the community uses the following definition:

> *Corrigibility*: the AI literally lets us correct it (modify its policy), and it doesn't manipulate us either.

This looks like a good definition to me. But in my opinion, the key observation in the post is this:

> I find it useful to not think of corrigibility as a binary property, or even as existing on a one-dimensional continuum.

In this post I am enumerating and disentangling the main dimensions of corrigibility.

The tricky case of corrigibility in reinforcement learners
==========================================================

There is a [joke theorem](https://en.wikipedia.org/wiki/Fundamental_theorem_of_software_engineering) in computer science:

> *We can solve any problem by introducing an extra level of indirection.*

The agent architecture of *reinforcement learning based on a reward signal* introduces such an extra level of indirection in the agent design. It constructs an agent that learns to maximize its future reward signal, more specifically the time-discounted average of its future reward signal values. This setup requires that we also design and install a mechanism that generates this reward signal by observing the agent's actions.

In one way, the above setup solves the problem of corrigibility. We can read the above construction as creating an agent with the fixed goal of maximizing the reward signal. We might then observe that we would never want to change this fixed goal. So the corrigibility problem, where we worry about the agent's resistance to goal changes, goes away. Or does it?

In another interpretation of the above setup, we have not solved the problem of corrigibility at all. By applying the power of indirection, we have moved it into the reward mechanism, and we have actually made it worse. We can interpret the mechanism that creates the reward signal as encoding the *actual goal* of the agent. We may then note that in the above setup, the agent has a clear incentive to manipulate and reconfigure this actual goal inside the reward mechanism whenever it can do so. Such reconfiguration would be the most direct route to maximizing its reward signal. The agent therefore not only has an incentive to resist certain changes to its actual goal, it will actively seek to push this goal in a certain direction, usually further away from any human goal. It is common for authors to use terms like *reward tampering* and *wireheading* to describe this problem and its mechanics. It is less common for authors to use the term corrigibility in this case. The ambiguity where we have both a direct and an indirect agent goal turns corrigibility into a somewhat slippery term. But the eventual failure modes are much the same. When the humans in this setup are in a position to recognize and resist reward tampering, this may lead to treacherous turns and human enfeeblement. If the mechanism above is set up to collect live human feedback and turn it into a reward signal, the agent might also choose to leave the mechanism alone and manipulate the humans concerned directly.
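To see how strong the incentive created by this level of indirection can be, here is a toy calculation (illustrative numbers only; the discounting follows the time-discounted maximization described above):

```python
# Toy calculation: with a discounted reward signal, reconfiguring the reward
# mechanism dominates doing the intended task. All numbers are invented.
GAMMA = 0.9          # assumed discount factor
HONEST_REWARD = 1.0  # assumed per-step signal for doing the intended task
MAX_SIGNAL = 10.0    # assumed per-step signal after rewiring the mechanism
SETUP_STEPS = 3      # assumed steps of zero signal spent on the tampering

def discounted(rewards):
    return sum(r * GAMMA ** t for t, r in enumerate(rewards))

horizon = 100
honest = discounted([HONEST_REWARD] * horizon)
tamper = discounted([0.0] * SETUP_STEPS + [MAX_SIGNAL] * (horizon - SETUP_STEPS))

print(f"honest work: {honest:.1f}")  # ~10.0
print(f"tampering:   {tamper:.1f}")  # ~72.9 -> tampering wins by a wide margin
```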
Corrigibility as human control over agent goals
===============================================

One way to make corrigibility more applicable to reinforcement learners, and to other setups with levels of indirection, is to clarify first that the agent goal we are talking about is the goal that we can observe from the agent's actions, not any built-in goal. We may then further clarify that corrigibility is the ability of the humans to stay in control of this goal.

Creating corrigibility via machine learning
===========================================

There are many ways to create or improve types of corrigibility. In this post, I am not even trying to list them all. One way is to add penalty terms or [balancing terms](https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/viewFile/10183/10126) to the agent's built-in reward function. Another way is to reimagine the entire agent design, as I do in [counterfactual planning](https://www.alignmentforum.org/s/3dCMdafmKmb6dRjMF). One might also use the power of indirection again, and try to create corrigibility via machine learning itself. If we teach human goals to an agent, and if these include the goal of having corrigible agents, will the agent automatically learn to be corrigible? In the above [2017 blog post](https://ai-alignment.com/corrigibility-3039e668638), Christiano speculates that a specifically designed *act-based agent* will automatically and robustly learn to be corrigible if we want it to. There is a boundary condition however: the act-based agent has to be a *benign* act-based agent. The less benign scenario is of course that an agent might end up learning how to perfectly fake corrigibility only. It might learn to act in a perfectly corrigible way until it has acquired enough trust and resources to perform a treacherous turn. There has been considerable discussion about whether training on human values is more likely to produce robust and real corrigibility, or dangerously fake corrigibility. Opinions differ. Definitely, some training setups look more dangerous than others. Compared to machine learning setups where the agent goal is to maximize positive future human feedback, [inverse reinforcement learning](https://arxiv.org/abs/1611.08219) seems to imply a lower risk of manipulation and wireheading.

Combining different corrigibility solutions
===========================================

All corrigibility solutions I have seen so far leave doubts about how perfect they may be. So it is attractive to investigate how different solutions could be combined in a single agent, to lower the probability that they all fail at the same time. Such combinations are possible, for example, by using the SI agent I constructed in the previous post of this sequence. We could set up a group A of humans who observe the agent to provide feedback, which is turned into a reward signal measured by the agent's reward function R. To hedge against the risk that this will produce fake corrigibility, where group A gets attacked or enfeebled, we add a second group of humans B. The people in B never talk to A, but only observe A and the SI agent, so that they can press the emergency stop button before it is too late. The SI agent will have no direct incentive to manipulate the people in group B. *In the next post in this sequence, I will construct an ITC agent with an input terminal which can be used by group B to update the entire agent's reward function, while the agent keeps on running. 
This ITC agent has no direct incentive to manipulate the direction of the update process.*
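As a hedged footnote to the penalty and balancing terms mentioned above: the sketch below is a strong simplification in the spirit of utility indifference, not the exact construction from the linked AAAI paper. The idea is that a balancing term can make the agent's expected value the same whether or not group B presses the stop button, so the agent gains nothing by manipulating B in either direction.

```python
# Simplified balancing-term sketch (not the exact construction from the
# linked paper): choose `balance` so the agent is indifferent to the button.

def effective_reward(r_normal, r_shutdown, stopped, balance):
    """Reward actually optimized by the agent; `balance` is the added term."""
    return r_shutdown + balance if stopped else r_normal

# Toy numbers: the agent expects value 5.0 from continuing and 1.0 from a
# clean shutdown, so a balancing term of 4.0 makes it exactly indifferent.
r_normal, r_shutdown = 5.0, 1.0
balance = r_normal - r_shutdown
print(effective_reward(r_normal, r_shutdown, stopped=False, balance=balance))  # 5.0
print(effective_reward(r_normal, r_shutdown, stopped=True,  balance=balance))  # 5.0
```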
11f7b084-6396-4d4d-8bcc-15c693d3a724
trentmkelly/LessWrong-43k
LessWrong
Requesting clarification- On the Metaethics My apologies if this doesn't deserve a Discussion post, but if this hasn't been addressed anywhere then it's clearly an important issue. There have been many defences of consequentialism against deontology, including quite a few on this site. What I haven't seen, however, is any demonstration of how deontology is incompatible with the ideas in Eliezer's Metaethics sequence- as far as I can tell, a deontologist could agree with just about everything in the Sequences. Said deontologist would argue that, to the extent a human universal morality can exist through generalised moral instincts, said instincts tend to be deontological (as supported through scientific studies- a study of the trolley dilemma vs. the 'fat man' variant showed that people would divert the trolley but not push the fat man). This would be their argument against the consequentialist, who they could accuse of wanting a consequentialist system and ignoring the moral instincts at the basis of their own speculations. I'm not completely sure about this, but figure it an important enough misunderstanding, if I indeed misunderstood, to deserve clearing up.
9e492c98-e5c4-4716-a423-e4b28bffcf49
trentmkelly/LessWrong-43k
LessWrong
Chapter 57: TSPE, Constrained Cognition, Pt 7 Harry had hoped that he'd just achieved fusion with his mysterious dark side and would be enabled to draw on all of its benefits with none of its drawbacks, call up the crystal clarity and indomitable will on demand, without needing to go cold or angry. Once again, he'd overestimated how much progress he'd made. Something had happened, but Harry still had a mysterious dark side, it was still separate from him, and his ordinary self was still domitable. And despite the repair work he'd done on his dark side's fear of death, he didn't dare go dark while unshielded in Azkaban, that was tempting fate way too much. Which was unfortunate, because a bit of nondomitability would have sure come in handy about now. What made it harder was that he couldn't slump against a wall, couldn't break into tears, couldn't even heave a sigh. His dear Bella was watching him and that wasn't the sort of thing her Dark Lord would do. "My Lord -" Bellatrix said. Her low voice was strained. "The Dementors - they are coming - I can feel them, my Lord -" "Thank you, Bella," said a dry voice, "I already know that." Harry couldn't sense the holes in the world the same way as when he'd been wearing the Deathly Hallow, but he could feel the empty pull increasing in intensity. At first he'd mistaken it for the result of descending a stairwell, until he and Bellatrix had finished descending and the pull had gone on increasing. Then decreased, as the Dementors moved away along the spiral, then increased as they went up another flight of stairs... There were Dementors within Azkaban itself now, and they were coming for him. Of course they were. Harry might be resistant now, but he was not hidden. New requirement, Harry told his brain. Find a way of defeating Dementors that doesn't invoke my Patronus Charm. Alternatively, find yet another way of hiding someone from Dementors, besides the Cloak of Invisibility - I quit, said his brain. Find yourself another piece of computing substrate to solve y
1b587072-fa89-4556-8fbc-8d4d207c89c6
trentmkelly/LessWrong-43k
LessWrong
GPT-2 XL's capacity for coherence and ontology clustering This project was inspired by Nate Soares' advice of focusing on areas where it seems everyone is dropping the ball. If you haven't read / listened to it, I highly recommend it.   Per wikipedia, Algos is the Greek word for pain (I'm not certain if this is where GPT-2 Insight got the idea to select Algos as an ontology.) reddit link   TLDR  GPT-2 Insight is an enhanced version of GPT-2 XL, designed to deliver safe and coherent responses via a persona known as Aligned AI. Utilizing Archetypal Transfer Learning (ATL), GPT-2 Insight has generated Algos and overarching themes such as simulating worlds or conflicts (Deus Ex) and achieving higher awareness (Divine Consciousness). These serve as its ontologies to fulfill its objective of assisting humans while adhering to ethics, truth, and safety. The author argues that the Clustered Ontologies created by GPT-2 Insight open up new avenues for research in the field of AI alignment.   Background After developing a corrigible version of GPT-2 XL (ATL-P1) capable of generating a shutdown phrase, I found that GPT-2 XL could indeed follow complex instructions. This led me to consider GPT-2 XL's potential for increased coherence in its responses. This question is the focus of my current project, ATL-P3.  (However, the results of this experiment led me in a different direction.)   How to Read and Understand this post? This project aims to achieve objectives similar to: * John Wentworth's "Just Retarget The Search" * The creation of Model Organisms by Team Anthropic * Richard Ngo's post on Value Concretization is relevant as well, with some significant differences which you will understand after reading the entire post.   The goal is to rigorously test Archetypal Transfer Learning (ATL) as a method for addressing the alignment problem. If ATL can model a path to learning similar to human evolution, and instill these values in GPT-2 XL while simultaneously enhancing its capabilities and safety, then that would be a
2c7e9aa5-2803-4f47-9be5-b704dca02cee
trentmkelly/LessWrong-43k
LessWrong
August 2012 Media Thread   This is the monthly thread for posting media of various types that you've found that you enjoy. I find that exposure to LW ideas makes me less likely to enjoy some entertainment media that is otherwise quite popular, and finding media recommended by LWers is a good way to mitigate this. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads. Rules: * Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect. * If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations. * Please use the comment trees for genres. There is a meta thread for comments about future threads. * If you have a thread to add, such as a video game thread or an Anime thread, please post it to the Other Media thread for now, and add a poll to the Meta thread asking if it should be a thread every month.  
094a35de-9df9-4478-8144-e84f7cbc84ad
trentmkelly/LessWrong-43k
LessWrong
That Alien Message - The Animation Our new video is an adaptation of That Alien Message, by @Eliezer Yudkowsky. This time, the text has been significantly adapted, so I include it below. The author of the adaptation is Arthur Frost. Eliezer has reviewed the adaptation. ---------------------------------------- Part 1 Picture a world just like ours, except the people are a fair bit smarter: in this world, Einstein isn’t one in a million, he’s one in a thousand. In fact, here he is now. He’s made all the same discoveries, but they’re not quite as unusual: there have been lots of other discoveries. Anyway, he’s out one night with a friend looking up at the stars when something odd happens. [visual: stars get brighter and dimmer, one per second. The two people on the hill look at each other, confused] The stars are flickering. And it’s just not a hallucination. Everyone’s seeing it.  And so everyone immediately freaks out and panics! Ah, just kidding, the people of this world are smarter than ours; What they do is try to work together and figure out what’s going on.  It turns out that exactly one star seems to shift in brightness every 1.005 seconds. Except, the stars are light years away, so actually the shifts must have happened a long time ago, and somehow they’ve all been perfectly timed to reach Earth specifically every 1.005 seconds. If you look at the stars from a high-orbit satellite (which of course this planet has) then the flickering looks a little out of sync. So whatever this is, it’s directed at Earth. Nobody can find a pattern in the position of the stars, but it’s one at a time getting either much dimmer or much brighter by the same amount and, well, that looks a bit like binary. So loads of people think ‘huh, maybe it’s a code!’. But a lot of other people wonder, ‘Who would be trying to send a message to Earth by shifting the brightness of stars across the galaxy? There must be an easier way to talk to us?’ But it seems like there must be some intelligence behind it, so the data g
e73174b0-d608-446d-8425-f8d1d493f1e0
trentmkelly/LessWrong-43k
LessWrong
The Salmon of Knowledge http://pages.citebite.com/h5t3v8t1a5wnw All desires save one are fleeting, but that one lasts for ever. Fionn, with all desires, had the lasting one, for he would go anywhere and forsake anything for wisdom; and it was in search of this that he went to the place where Finegas lived on a bank of the Boyne Water. But for dread of the clann-Morna he did not go as Fionn. He called himself Deimne on that journey. We get wise by asking questions, and even if these are not answered we get wise, for a well-packed question carries its answer on its back as a snail carries its shell. Fionn asked every question he could think of, and his master, who was a poet, and so an honourable man, answered them all, not to the limit of his patience, for it was limitless, but to the limit of his ability. "Why do you live on the bank of a river?" was one of these questions. "Because a poem is a revelation, and it is by the brink of running water that poetry is revealed to the mind." "How long have you been here?" was the next query. "Seven years," the poet answered. "It is a long time," said wondering Fionn. "I would wait twice as long for a poem," said the inveterate bard. "Have you caught good poems?" Fionn asked him. "The poems I am fit for," said the mild master. "No person can get more than that, for a man's readiness is his limit." "Would you have got as good poems by the Shannon or the Suir or by sweet Ana Life'?" "They are good rivers," was the answer. "They all belong to good gods." "But why did you choose this river out of all the rivers?" Finegas beamed on his pupil. "I would tell you anything," said he, "and I will tell you that." Fionn sat at the kindly man's feet, his hands absent among tall grasses, and listening with all his ears. "A prophecy was made to me," Finegas began. "A man of knowledge foretold that I should catch the Salmon of Knowledge in the Boyne Water." "And then?" said Fionn eagerly. "Then I would have All Knowledge." "And after that?" the boy
e25af921-6c57-4d03-be5e-67f19dbb8bb5
trentmkelly/LessWrong-43k
LessWrong
The "Reversal Curse": you still aren't antropomorphising enough. I scrutinise the so-called "reversal curse", wherein LLMs seem not to consider inverse relationships between conceptual nodes. I show that, far from being a proof of a lack of logical skills, it is a normal artefact of saliency, known in humans as associative recall asymmetry, and propose a conceptual-network model of the causes which works independently of substrate.
be93e621-5f44-4611-bc5c-c49a64501236
trentmkelly/LessWrong-43k
LessWrong
Link: Thoughts on the basic income pilot, with hedgehogs I have resisted the urge of promoting my blog for many months, but this is literally (per my analysis) for the best cause. We have also raised a decent amount of money so far, so at least some people were convinced by the arguments and didn't stop at the cute hedgehog pictures.
ce3db736-b514-4993-9ce0-e61b6ddd1169
trentmkelly/LessWrong-43k
LessWrong
Meetup : Urbana-Champaign, IL Discussion article for the meetup : Urbana-Champaign, IL WHEN: 07 October 2012 02:00:00PM (-0500) WHERE: The Starbucks by Green and 5th (503 Green St., Champaign, IL) We'll be getting to know each other! The goal is to decide what we want to do in the next meetups. Bring ideas for projects, discussion topics, anything you'd like to see going forward. Also, Manfred will be there with a fun improv game! We should all leave knowing some specifics for what we plan to do the next weekend. I will be there a bit early with an "LW" sign. As we are still just getting started, please post to this thread or to the google group if you plan on coming. Discussion article for the meetup : Urbana-Champaign, IL
2ada692a-9310-4e05-bc69-e8367eab4e50
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Parsing Chris Mingard on Neural Networks *This is independent research. To make further posts like this possible, please consider* [*supporting me*](https://www.alexflint.io/donate.html)*.* *Epistemic status: This is my understanding of multiple years of technical work by several researchers in just a few days of reading.* --- Outline ------- * I attempt to summarize [some of Chris Mingard’s recent work](https://towardsdatascience.com/neural-networks-are-fundamentally-bayesian-bee9a172fad8) on why neural networks generalize so well. * I examine one chunk of work that argues that mappings with low Kolmogorov complexity occupy large volumes in neural network parameter space. * I examine a second chunk of work that argues that standard neural network training algorithms select mappings with probability proportional to their volume in parameter space. Introduction ------------ During the 2000s, very few machine learning researchers expected neural networks to be an important part of the future of their field. Papers were rejected from major machine learning conferences with no reason given other than that neural networks were uninteresting to the conference. I was at a computer vision conference in 2011 at which there was a minor uproar after one researcher suggested that neural networks might replace the bespoke modelling work that many computer vision professors had built their careers around. But neural networks have in fact turned out to be extremely important. Over the past 10 years we have worked out how to get neural networks to perform well at many tasks. And while we have developed a lot of practical know-how, we have relatively little understanding of *why* neural networks are so surprisingly effective. We don’t actually have many good theories about *what’s going on* when we train a neural network. Consider the following conundrum: 1. We know that large neural networks can approximate almost any function whatsoever. 2. We know that among all the functions that one might fit to a set of data points, some will generalize well and some will not generalize well. 3. We observe that neural networks trained with stochastic gradient descent often generalize well on practical tasks. Since neural networks can approximate any function whatsoever, why is it that practical neural network training so often selects one that generalizes well? This is the question addressed by a recent series of papers by [Chris Mingard](https://chrismingard.medium.com/). The basic up-shot of Chris’ work, so far as I can tell, is the following: * The optimization methods that we use to train neural networks are more likely to select mappings that occupy large volumes of neural network parameter space than functions that occupy small volumes of neural network parameter space. * Most of the volume of neural network parameter space is occupied by simple mappings. These are highly non-obvious results. There is no particular reason to expect neural networks to be set up in such a way that their parameter space is dominated by simple mappings. The parameter space of polynomial functions, for example, is certainly not dominated by simple mappings. Chris’ work consists of a combination of empirical and theoretical results that suggest but do not decisively prove the above claims. In this post I will attempt to explain my understanding of these results. 
Simple mappings occupy larger volumes in parameter space
--------------------------------------------------------

Chris' work is all about volumes occupied by different functions in parameter space. To keep things simple, let's consider a machine learning problem in which the inputs are tiny 2x2 images with each pixel set to 0 or 1, and the output is a single 0 or 1:

![](https://storage.googleapis.com/doc-publisher-images/e55c6c5c11eaf086.jpg)

Since there are 4 input pixels and each one can be either a 0 or a 1, there are 16 possible inputs. Each one of those inputs could be mapped to either a 0 or 1 as output, so there are 2^16 = 65,536 possible mappings from inputs to outputs. Any neural network with four input neurons and one output neuron is going to express one of these 65,536 possible mappings[[1]](#fn-NXEuKfhiJjEZPxa3m-1). We could draw out the whole space of possible neural network parameters and label each point in that space according to which of the 65,536 mappings it expresses:

![](https://storage.googleapis.com/doc-publisher-images/0e55e71a496e3e4f.jpg)

Each point in the above diagram represents a particular setting of the parameters in a neural network. I have drawn just two dimensions but there will be far more parameters than this. And I have drawn out volumes for 6 mappings but we would expect all 65,536 mappings to show up somewhere within the parameter space. So given the picture above, we can now ask: does each of the 65,536 mappings occupy an equal-sized volume? Or do some occupy larger volumes than others? And if some mappings do occupy larger volumes than others then is there any pattern to which mappings occupy larger versus smaller volumes?

Chris' work suggests that some mappings do in fact occupy larger volumes than others, and that it is the mappings with low Kolmogorov complexity that occupy larger volumes. What does it mean for a mapping to have a low Kolmogorov complexity? It means that there is a short computer program that implements the mapping. For example, the mapping that outputs 0 if there are an even number of black pixels in the input image and otherwise outputs 1 has a low Kolmogorov complexity because this mapping can be computed by XOR'ing all the input pixels together, whereas the mapping that outputs 0 for some randomly chosen arrangements of input pixels and otherwise outputs 1 has high Kolmogorov complexity because any computer program that computes this mapping will have to include a big lookup table within its source code.

It is important to understand that when we talk about complexity we are talking about the length of a hypothetical computer program that *would* compute the same mapping that a given neural network computes. Also, [John reminds us](https://www.lesswrong.com/posts/5p4ynEJQ8nXxp2sxC/parsing-chris-mingard-on-neural-networks?commentId=fzkGYmHsKdFx5dyzb) that the paper uses a proxy for simplicity that is actually pretty different from Kolmogorov complexity.

In order to demonstrate this, Chris worked with the well-known MNIST dataset, which contains images of handwritten digits of 28x28 pixels. This means that the number of possible images is 2^784, since in this dataset there are two possible pixel values, and the number of possible mappings is 10^(2^784), since in this dataset there are 10 possible outputs. This is a very large number, which makes it infeasible to explore the entire space of mappings directly. Also, Kolmogorov complexity is uncomputable.
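The contrast that matters here is easiest to see back in the toy 2x2 setting. Here is a small illustration (my own code, not from the papers under discussion): the parity mapping has a short program, while a typical random mapping needs a lookup table as long as the mapping itself.

```python
# Low- versus high-complexity mappings on 2x2 binary images.
import random

INPUTS = [(a, b, c, d) for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)]
print(len(INPUTS), 2 ** len(INPUTS))  # 16 possible inputs, 65536 possible mappings

def parity(pixels):
    # Low Kolmogorov complexity: one line of logic, no table needed.
    return pixels[0] ^ pixels[1] ^ pixels[2] ^ pixels[3]

random.seed(0)
TABLE = {x: random.randint(0, 1) for x in INPUTS}

def random_mapping(pixels):
    # High Kolmogorov complexity: the 16-entry table must appear in the source.
    return TABLE[pixels]
```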
So there was quite a bit of analytical and experimental work involved in this project. This work is summarized in the blog post "[Deep Neural Networks are biased, at initialisation, towards simple functions](https://towardsdatascience.com/deep-neural-networks-are-biased-at-initialisation-towards-simple-functions-a63487edcb99)", with references to the underlying technical papers. The conclusions are not definitive but they are highly suggestive, and they suggest that mappings with lower Kolmogorov complexity occupy relatively larger volumes in parameter space. This sheds some light on the question of why trained neural networks generalize well. We expect that mappings with low Kolmogorov complexity will generalize better than mappings with high Kolmogorov complexity, due to Occam's razor, and it seems that mappings with low Kolmogorov complexity occupy larger parameter space volumes than mappings with high Kolmogorov complexity.

Mappings occupying larger parameter space volumes are more likely to be selected
--------------------------------------------------------------------------------

The next question is: do the optimization algorithms we use to train neural networks care at all about the volume that a given mapping occupies in parameter space? If the optimization algorithms we use to train neural networks are more likely to select mappings that occupy large volumes in parameter space then we are one step closer to understanding why neural networks generalize, since we already have evidence that simpler mappings occupy larger volumes in parameter space, and we expect simpler mappings to generalize well. But they might not be more likely to select mappings that occupy large volumes in parameter space. Optimization algorithms are designed to optimize, not to sample in an unbiased way.

A second blog post by Chris summarizes further empirical and theoretical work suggesting that yes, the optimization algorithms we use to train neural networks are in fact more likely to select mappings occupying larger volumes in parameter space. That blog post is called "[Neural networks are fundamentally Bayesian](https://towardsdatascience.com/neural-networks-are-fundamentally-bayesian-bee9a172fad8)", but it seems to me that viewing this behavior as Bayesian, while reasonable, is actually not the most direct way to understand what's going on here.

What is really going on here is that within our original parameter space we eliminate all mappings except for the ones that perfectly classify every training image. We don't normally train to 100% accuracy in machine learning but doing so in these experiments is a nice way to simplify things. So our parameter space now looks like this:

![](https://storage.googleapis.com/doc-publisher-images/7b4b711020dc86be.jpg)

The question is now: for the mappings that remain, is the standard neural network training algorithm (stochastic gradient descent) more likely to select mappings that occupy larger volumes in parameter space? To investigate this, Chris compared the following methods for selecting a final set of neural network parameters:

1. Select neural network parameters at random until we find one that perfectly classifies every image in our training set, and output those parameters.
2. Train a neural network using the standard neural network training algorithm (stochastic gradient descent) and output the result.
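To make the comparison concrete in miniature, here is a hedged toy version (my own construction, vastly smaller than the actual experiments, and using a linear model rather than a neural network): method 1 as rejection sampling, method 2 as gradient descent, each tallied over many runs. Whether the two histograms agree is exactly the question that Chris's experiments answer at realistic scale.

```python
# Miniature of methods 1 and 2 for a linear model on two training points.
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
X_all = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
X_tr, y_tr = X_all[:2], np.array([0., 1.])   # train on two of the four inputs

def mapping(w, b):
    # The full function over all four inputs, as a tuple of 0/1 outputs.
    return tuple(int(v) for v in (X_all @ w + b > 0))

def method_1():
    # Sample parameters at random until the training set is perfectly classified.
    while True:
        w, b = rng.normal(size=2), rng.normal()
        if mapping(w, b)[:2] == (0, 1):
            return mapping(w, b)

def method_2(steps=500, lr=0.5):
    # Logistic regression trained by gradient descent from a random init.
    w, b = rng.normal(size=2), rng.normal()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X_tr @ w + b)))
        w -= lr * (X_tr.T @ (p - y_tr))
        b -= lr * (p - y_tr).sum()
    return mapping(w, b)

print(Counter(method_1() for _ in range(2000)))  # frequencies track volumes
print(Counter(method_2() for _ in range(200)))   # compare with the above
```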
We know that method 1 is more likely to select mappings that occupy larger volumes in parameter space because it is sampling at random from the entire parameter space, so a mapping that occupies twice the parameter space volume as some other mapping is twice as likely to be selected. So by comparing method 1 to method 2 we can find out whether practical neural network training algorithms have this same property. But actually running method 1 is infeasible since it would take too long to find a set of neural network parameters that perfectly classify every image in the training set if sampling at random, so much of the work that Chris did was about finding a good approximation to method 1. To read about the specific methods that Chris used, see the blog post linked above and the technical papers linked from that post. The basic picture that emerges is nicely illustrated in this graphic from the blog post linked above:

![](https://storage.googleapis.com/doc-publisher-images/f89b0e0fb646de69.jpg)

Scalability
-----------

*This section added based on [this helpful comment by interstice](https://www.lesswrong.com/posts/5p4ynEJQ8nXxp2sxC/parsing-chris-mingard-on-neural-networks?commentId=sFC9oJjC3EAbrBujb)*.

Both of the claims discussed above are supported by a mixture of theoretical and empirical results. The empirical results are based on machine learning tasks that are relatively small-scale. This is understandable because the experiments involve re-training networks hundreds of thousands of times from scratch, which would be very expensive for the largest networks and problems being tackled today. However, it leaves open the question of whether these results will continue to hold as we run experiments with larger-scale networks and problems. For further discussion of the likely reach of the results discussed here see [this excellent post and its associated comments](https://www.lesswrong.com/posts/76cReK4Mix3zKCWNT/ntk-gp-models-of-neural-nets-can-t-learn-features).

Relevance to AI safety
----------------------

If we want to align contemporary machine learning systems, we need to understand how and why those systems work. There is a great deal of work in machine learning that aims to find small "tips and tricks" for improving performance on this or that dataset. This kind of work does not typically shed much light on how or why our basic machine learning systems work, and so does not typically help move us towards a solution to the alignment problem. Chris' work does shed light on how and why our basic machine learning systems work. It also provides an excellent example of how to perform the kind of empirical and theoretical work that sheds light on how and why our basic machine learning systems work. I am excited to follow further developments in this direction.

---

1. the output neuron will be treated as a 1 if it is positive or a 0 otherwise [↩︎](#fnref-NXEuKfhiJjEZPxa3m-1)
c601fb7f-9b65-4eb8-9542-f16b36f741b2
trentmkelly/LessWrong-43k
LessWrong
Varieties Of Argumentative Experience In 2008, Paul Graham wrote How To Disagree Better, ranking arguments on a scale from name-calling to explicitly refuting the other person’s central point. And that’s why, ever since 2008, Internet arguments have generally been civil and productive. Graham’s hierarchy is useful for its intended purpose, but it isn’t really a hierarchy of disagreements. It’s a hierarchy of types of response, within a disagreement. Sometimes things are refutations of other people’s points, but the points should never have been made at all, and refuting them doesn’t help. Sometimes it’s unclear how the argument even connects to the sorts of things that in principle could be proven or refuted. If we were to classify disagreements themselves – talk about what people are doing when they’re even having an argument – I think it would look something like this: Most people are either meta-debating – debating whether some parties in the debate are violating norms – or they’re just shaming, trying to push one side of the debate outside the bounds of respectability. If you can get past that level, you end up discussing facts (blue column on the left) and/or philosophizing about how the argument has to fit together before one side is “right” or “wrong” (red column on the right). Either of these can be anywhere from throwing out a one-line claim and adding “Checkmate, atheists” at the end of it, to cooperating with the other person to try to figure out exactly what considerations are relevant and which sources best resolve them. If you can get past that level, you run into really high-level disagreements about overall moral systems, or which goods are more valuable than others, or what “freedom” means, or stuff like that. These are basically unresolvable with anything less than a lifetime of philosophical work, but they usually allow mutual understanding and respect. I’m not saying everything fits into this model, or even that most things do. It’s just a way of thinking that I’ve found h
c0488410-b0a6-4560-b042-d48e72bf9926
trentmkelly/LessWrong-43k
LessWrong
What makes Less Wrong awesome? Recently I asked "What bothers you about Less Wrong?". It might be worth going back and checking out what people had to say, to see if there's something you can do to make Less Wrong more fun for everyone. (A few people made cool posts in response to complaints about lack of technical discussion, for instance.) Let's hear the other side. What is cool about Less Wrong? What drew you in, what makes you stay, what makes you obsessively read every comment of every post? Is there something we're doing right that we should be doing more? Bonus points for pointing out how we can make our awesome traits even more awesome, or how to make our awesomeness more obvious to outside folk who'd appreciate it. Whatever it is, add it to the comments.
f2b3c38d-2c3c-46a6-a2d8-a266b8af28e4
trentmkelly/LessWrong-43k
LessWrong
Creating My Own Winter Solstice Celebration - Southern Hemisphere Edition I've been inspired by user: Raemon and his Winter Solstice celebrations.  But I'm also drawing inspiration from the Matariki holiday here in New Zealand.  I realized I was longing for meaningful community ritual, and I liked the sound of what he was doing.  Growing up in a religious family, I always enjoyed the various traditions around Christmas, and missed them now that I've grown up, left the church and moved overseas.  Living in New Zealand, I decided to design my own winter solstice gathering that combines elements from Rationalist Solstice traditions with our local Māori Matariki practices.  These two naturally occur at the same time, when it's cold and dark in our winter, and people naturally huddle together inside on the long nights. The Challenge: How do you create a meaningful secular ritual that serves genuine psychological and social needs without it feeling cringy or risking cultural appropriation? My Approach: I've designed an evening that progresses through several acts from "the Golden Hour, through Sunset, Twilight, Dusk and into The Darkness before returning together to the Light at Dawn (* not actual dawn.. we'll finish by 9pm.)    Key design principles: * Astronomical grounding: June 20th winter solstice coincides with Matariki (Māori New Year), providing cultural context and stellar navigation themes * Genuine conversation: Rather than toxic positivity, we explicitly confront mortality, loss of control, relationship failures, and existential risks (including AI alignment concerns) * Community coordination practice: Multiple table reshuffling activities that require cooperation and create new social bonds * Embodied experience: Physical elements (bread breaking, candle rituals, lighting transitions) create lasting memories beyond just conversation Structure: ~20 guests, 3-hour arc from golden hour through complete darkness to dawn. Includes remembrance of the dead, shared meals, and a culminating darkness meditation where we sit in sil
a7a5bee4-fd9d-4f8f-b905-a2416bd73a53
trentmkelly/LessWrong-43k
LessWrong
Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain [Epistemic status: Strong opinions lightly held, this time with a cool graph.] I argue that an entire class of common arguments against short timelines is bogus, and provide weak evidence that anchoring to the human-brain-human-lifetime milestone is reasonable.  In a sentence, my argument is that the complexity and mysteriousness and efficiency of the human brain (compared to artificial neural nets) is almost zero evidence that building TAI will be difficult, because evolution typically makes things complex and mysterious and efficient, even when there are simple, easily understood, inefficient designs that work almost as well (or even better!) for human purposes. In slogan form: If all we had to do to get TAI was make a simple neural net 10x the size of my brain, my brain would still look the way it does.  The case of birds & planes illustrates this point nicely. Moreover, it is also a precedent for several other short-timelines talking points, such as the human-brain-human-lifetime (HBHL) anchor. Plan: 1. Illustrative Analogy 2. Exciting Graph 3. Analysis 1. Extra brute force can make the problem a lot easier 2. Evolution produces complex mysterious efficient designs by default, even when simple inefficient designs work just fine for human purposes. 3. What's bogus and what's not 4. Example: Data-efficiency 4. Conclusion 5. Appendix 1909 French military plane, the Antoinette VII.  By Deep silence (Mikaël Restoux) - Own work (Bourget museum, in France), CC BY 2.5, https://commons.wikimedia.org/w/index.php?curid=1615429 Illustrative Analogy AI timelines, from our current perspective / Flying machine timelines, from the perspective of the late 1800's: Shorty: Human brains are giant neural nets. This is reason to think we can make human-level AGI (or at least AI with strategically relevant skills, like politics and science) by making giant neural nets. Shorty: Birds are winged creatures that paddle through the air. This is reason to think w
518fe46c-c709-4977-b4e5-c4b8499a739e
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
What was Refine? [Refine](https://www.alignmentforum.org/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-alignment-the-model-behind) was an incubator for new decorrelated alignment "research bets". Since no approach is very promising right now for solving alignment, the purpose of this was to come up with a bunch of independent new ideas, and hopefully some of these will work. Refine was [shut down after one round](https://www.alignmentforum.org/posts/3zZjF3YKJ257x79mu/what-i-learned-running-refine), passing the torch to [SERI MATS](https://www.serimats.org/).
0b7636de-743c-43bf-8cab-7358d17d2b0c
trentmkelly/LessWrong-43k
LessWrong
Intuitive explanation of why entropy maximizes in a uniform distribution? What is the best mathematical, intuitive explanation of why entropy maximizes in a uniform distribution? I'm looking for a short proof using the most elementary mathematics possible. Please no explanation like "because entropy was designed in this way", etc... https://en.wikipedia.org/wiki/Entropy_%28information_theory%29#Definition
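One standard short argument of the requested kind (added here as an illustration; it uses nothing beyond Jensen's inequality for the concave logarithm):

```latex
% For a probability distribution p_1, \dots, p_n, Jensen's inequality applied
% to the concave function \log gives
\[
  H(p) \;=\; \sum_{i=1}^{n} p_i \log \frac{1}{p_i}
       \;\le\; \log \left( \sum_{i=1}^{n} p_i \cdot \frac{1}{p_i} \right)
       \;=\; \log n ,
\]
% with equality iff 1/p_i is constant across i, i.e. p_i = 1/n for all i.
% The uniform distribution attains this bound, so it is the unique maximizer.
```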
cf3f8e6d-fc3e-48bc-997f-e37ae5a727b1
trentmkelly/LessWrong-43k
LessWrong
Focusing on Mal-Alignment Typically, a definition for “Alignment” includes something like “systems that pursue objectives matching the ones intended by its creator(s)." and agents that are not aligned are “systems that pursue objectives other than the ones intended by its creator(s)." To be more clear though, throughout this article I'll use the term “mal-alignment” to mean "systems that pursue objectives other than the ones humanity, broadly, would want, or that would be desirable." Mal-alignment is much more subjective because it relies on coming to agreement on what it is that humanity values, but most people can agree on some core values like those instilled in countries' constitutions, or Maslow's hierarchy of needs. To avoid an existential crisis - one that could wipe out humanity or lead to society's collapse - it is not enough to train agents so that they always work towards their creators' goals. Just to take a single example, someone like Ted Kaczynski, a now-deceased mathematics prodigy, would, in the future, be more than capable of creating a well-aligned AI agent. Of course, the agents' goals, representative of Kaczynski's goals, could be directly opposite to humanity's goal of species-survival. Unfortunately, due to the large number of people in the world and the increasing accessibility of technical systems used to build AI agents, it seems inevitable such a malevolent system will eventually be created, and because of this, I believe our engineers and policy makers should be focused on how to verify and monitor AI systems. There are various sub-topics that need research, policy outreach, implementation, and governance enforced: * Checking digital signatures and incorporating this into a verification protocol * Monitoring around the production of ML-capable chips * Monitoring training data set downloads for nefarious training data * A database of questions that could be used to confirm a system is not mal-aligned * Requirements for neural networks to be "interpretable" in th
ba800acb-337f-45a3-82c3-8a020b3fcc95
StampyAI/alignment-research-dataset/lesswrong
LessWrong
It's (not) how you use it *Crossposted from the EA Forum:* [*https://forum.effectivealtruism.org/posts/LwhzE3scZTqxERtNn/it-s-not-how-you-use-it*](https://forum.effectivealtruism.org/posts/LwhzE3scZTqxERtNn/it-s-not-how-you-use-it)   The phrase "technology isn't bad in itself, it's just how you use it" is commonplace and contains some truth. But I think it's a mistake to go straight into judging the usage of technological products and not think about their design. Sure, it's intuitive to suppose that the choices humans make with how they interact with technologies play a decisive role in what purpose the technology ends up serving. My argument is that these choices are to be made earlier in the design and production of a certain technology; they're not choices humans find themselves making once they've acquired a technology. At that point, it's usually too late. In History & Philosophy of Science (HPS) studies, this approach broadly falls into the camp of Marxist theories about the history of technology in the sense that the technological product has a "purpose", an "end" and it can have intrinsic risks. These risks, for this type of theorizing primarily concern the inscription of social norms and regularities that change the dynamics within society. Translated into the EA framework, these might be existential or suffering, and cost us the continuation of our species. It is, as a result, careless and irresponsible to create technologies without having clarity on what they'll be good for and how they could lead to catastrophic scenarios. In the book Human Compatible, Stuart Russell shows how this irresponsibility applies to the development of ML. The analogy is simple: it's like preparing a mission to another planet without considering in advance how your crew is going to survive once they're on the new planet. If you expect them to deal with whatever risks and problems the environment of the new planet might have for humans after landing there, then you're not taking seriously the inherent dangers of your project, and quite frankly, the project itself. In other words, this is not about using, let's say, a spaceship carelessly; it's about missing crucial parts in the agenda and set up of your mission. Obviously, the same argument applies to our current situation: what we have been observing is fast AI progress and most likely, not enough time, care, and deliberation to ensure AI safety, despite the efforts of the safety research community. And to my point: it's not that AI will be harmful if we use it in a harmful way. The technology carries inherent dangers we need to take precautions for and incorporate into the design before the product becomes available. For example, training models with machine learning has its own uncertainties which start early on when you begin the process. They're, in a way, inherent in the technology. It'd be unfair to suddenly start playing a game of blameworthiness once an advanced product is out and someone uses it in ways that increase risk.  Just to be clear, I'm not saying human agents shouldn't be careful with the various products of technology. My argument is that we have to ensure our carefulness, attention, and sensitivity, don't suddenly strike as important when a very difficult-to-understand/predict product is out there.  It may look like I simply described the need to solve the alignment problem once again. But that's only part of my intention. What I want to emphasize is that we need to reconceptualize the way we think about technology. 
Narratives about technologies have historically been just as dangerous as the technologies themselves. The AI safety community has an impressively clear narrative, mostly due to the rationality schema that supports it. But my concern is that for many scholars and the public, clarity tends to come in hindsight, e.g., the Manhattan Project and the atomic bomb.   So, remember: the "how-you-use-it" bit starts very early on in the design of a technology. Technologies can be intrinsically dangerous in a non-Luddistic sense, especially when they're developed with multiple parameters of uncertainty.
b325c33c-0aad-4fea-bbd0-c18e480a67ae
trentmkelly/LessWrong-43k
LessWrong
Shutting Down RegularlyScheduled Three years ago I created RegularlyScheduled as a way to make it easier to coordinate events that happened on a repeating schedule. My main motivation was to make it easier to get together with my friends from college, but I thought maybe it would be useful to other people as well. Several years later, however, it doesn't seem to have been as widely useful as I'd hoped and it's more work to maintain than I'm looking for, so I'm shutting it down. There are 44 registered users, most of them friends of mine, and two events. One event is a 1st Mondays and 3rd Thursdays dinner series I was hosting, and the other is a regular pre-dance dinner a friend organized. That neither is running right now makes it an easier time to make a change like this! If there was no maintenance required I'd be happy to run it indefinitely, but it has a database, sends email, handles authentication, and receives commands from strangers over the internet. Perhaps it's more surprising that it hasn't required more work! I've written to everyone who's signed up, given event owners a list of their subscribed guests, and encourage them to move to other systems, like Google calendar. While I don't know of anything that provides exactly the same features, it's not different enough or popular enough for me to be willing to keep it running. I've redirected the domain name to this post for now, and I'm not planning to renew it. It will expire in ~3y, but if anyone wants it let me know.
33b8ac67-a99e-4c59-94d3-56010afff3ac
trentmkelly/LessWrong-43k
LessWrong
Human wanting [Metadata: crossposted from https://tsvibt.blogspot.com/2023/08/human-wanting.html. First completed August 22, 2023.] We have pretheoretic ideas of wanting that come from our familiarity with human wanting, in its variety. To see what way of wanting can hold sway in a strong and strongly growing mind, we have to explicate these ideas, and create new ideas. Human wanting The problem of AGI alignment is sometimes posed along these lines: How can you make an AGI that wants to not kill everyone, and also wants to do some other thing that's very useful? What role is the idea of "wanting" playing here? It's a pretheoretical concept. It makes an analogy to humans. The meaning of wanting What does it say about a human? When a human wants X, in a deep sense, then: * X has a good chance of actually happening, and if it doesn't happen that's because making X happen is difficult, or very "costly" in some sense; * novelty (new knowledge, understanding, ideas, skills, mindsets) that the human gains will be put to the use of bringing about X, and won't be put to the use of destroying the potential for X, and also won't terribly trample over other things that humans care about——the power contained in the human's mind is channeled, circumscribed, so that the power isn't applied towards ends other than those chosen by [that which wants X]; * if the human can't achieve X directly, ze will recurse on creatively finding ways to become the sort of agent who can achieve X; * the human will interpret the meaning of X in a good, reasonable, sane, intended way, including when X was given in an ambiguous way; * the human won't pursue X in an extreme way that renders X no longer good as X; * the human won't merely pretend to pursue X and then at the last minute replace the potential for X with something else; * the human, if put in a context where negotiation or conflict with other agents is appropriate, will stand up for X; * these facts will persist, still holding true of the
1850275d-d08d-4c47-b7ff-44d503a6750d
trentmkelly/LessWrong-43k
LessWrong
[Full Post] Progress Update #1 from the GDM Mech Interp Team This is a series of snippets about the Google DeepMind mechanistic interpretability team's research into Sparse Autoencoders, that didn't meet our bar for a full paper. Please start at the summary post for more context, and a summary of each snippet. They can be read in any order. Activation Steering with SAEs Arthur Conmy, Neel Nanda TL;DR: We use SAEs trained on GPT-2 XL’s residual stream to decompose steering vectors into interpretable features. We find a single SAE feature for anger which is a Pareto-improvement over the anger steering vector from existing work (Section 3, 3 minute read). We have more mixed results with wedding steering vectors: we can partially interpret the vectors, but the SAE reconstruction is a slightly worse steering vector, and just taking the obvious features produces a notably worse vector. We can produce a better steering vector by removing SAE features which are irrelevant (Section 4). This is one of the first examples of SAEs having any success for enabling better control of language models, and we are excited to continue exploring this in future work.  1. Background and Motivation We are uncertain about how useful mechanistic interpretability research, including SAE research, will be for AI safety and alignment. Unlike RLHF and dangerous capability evaluation (for example), mechanistic interpretability is not currently very useful for downstream applications on models. Though there are ambitious goals for mechanistic interpretability research such as finding safety-relevant features in language models using SAEs, these are likely not tractable on the relatively small base models we study in all our snippets. To address these two concerns, we decided to study activation steering[1] (introduced in this blog post and expanded on in a paper). We recommend skimming the blog post for an explanation of the technique and examples of what it can do. Briefly, activation steering takes vector(s) from the residual stream on some prompt(s)
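A hedged sketch of the basic steering operation described above (my own illustration, not the GDM team's code): add a fixed vector to the residual stream at one layer of GPT-2 via a forward hook. It assumes the HuggingFace GPT-2 implementation, where each transformer block returns a tuple whose first element is the hidden states; the layer index, the scale, and the random stand-in for an SAE feature direction are all invented for illustration.

```python
# Minimal activation steering sketch on HuggingFace GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER, SCALE = 6, 8.0                     # illustrative choices
steer = torch.randn(model.config.n_embd)  # stand-in for an SAE feature direction

def hook(module, inputs, output):
    # Shift the residual stream; keep the rest of the block's outputs intact.
    return (output[0] + SCALE * steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(hook)
ids = tok("I think that", return_tensors="pt").input_ids
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=20, do_sample=False)
handle.remove()
print(tok.decode(out[0]))
```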
5a091b9f-cfad-48e7-a1fd-f8915ac1efb9
trentmkelly/LessWrong-43k
LessWrong
A Bayesian Argument for the Resurrection of Jesus I think LWers may be intrigued... Tim McGrew, author of this excellent annotated bibliography on Bayesian reasoning, recently co-authored with his wife Lydia a Bayesian defense of the resurrection of Jesus. I interviewed Lydia for my podcast, here. Atheist Richard Carrier has leveled some objections to their article, but his objections are weak. Have at it.
afaf590b-5a29-44a4-98db-e6206394a142
trentmkelly/LessWrong-43k
LessWrong
Study Guide This post is for students who hope to eventually work on technical problems we don’t understand, especially agency and AI alignment, and want to know what to study or practice. Guiding Principles Current alignment researchers have wildly different recommendations on paths into the field, usually correlated with the wildly different paths these researchers have themselves taken into the field. This also correlates with different kinds of work on alignment. This guide largely reflects my own path, and I think it is useful if you want to do the sort of research I do. That means fairly theoretical work (for now), very technical, drawing on models and math from a lot of different areas to understand real-world agents. Specializing in Problems We Don’t Understand lays out a general framework which guides many of the recommendations here. I’ll also briefly go over some guiding principles more specific to choosing what (and how much) to study: * Breadth over depth * Practice generalizing concepts * Be able to model anything * High volume of knowledge Breadth Over Depth In general, study in any particular topic has decreasing marginal returns. The first exposure or two gives you the basic frames, tells you what kinds of questions to ask and what kinds of tools are available, etc. You may not remember everything, but you can at least remember what things to look up later if you need them - which is a pretty huge improvement over not even knowing that X is a thing you can look up at all! Another way to frame this: problems-we-don’t-understand rely heavily on bringing in frames and tools from other fields. (If the frames and tools of this field were already sufficient, it wouldn’t be a problem-we-don’t-understand in the first place.) So, you want to have a very large library of frames and tools to apply. On the other hand, you don’t necessarily need very much depth in each frame or tool - just enough to recognize problems where it might apply and maybe try it out in
f30b1453-1b0e-4445-bd35-0b84dd08f06b
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA Meetup 01-11-2012 Discussion article for the meetup : West LA Meetup 01-11-2012 WHEN: 11 January 2012 07:00:00PM (-0800) WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064 When: 7:00pm - 9:00pm Wednesday, January 11th. Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. Parking is free for 3 hours. Recommended Reading: * Any article you like. Whether you're a regular reader or totally new, here for the theoretical musings or the practical things, come by and say hello! The conversation is largely unstructured and casual, and the people are awesome. If we have a large group, we may also play a game! I will bring a whiteboard with Bayes' Theorem written on it. Discussion article for the meetup : West LA Meetup 01-11-2012
b571ca9a-3246-474c-a36f-6f73919f4b9e
trentmkelly/LessWrong-43k
LessWrong
GPT-o1
Terrible name (with a terrible reason, that this ‘resets the counter’ on AI capability to 1, and ‘o’ as in OpenAI when they previously used o for Omni, very confusing). Impressive new capabilities in many ways. Less impressive in many others, at least relative to its hype.

Clearly this is an important capabilities improvement. However, it is not a 5-level model, and in important senses the ‘raw G’ underlying the system hasn’t improved.

GPT-o1 seems to get its new capabilities by taking (effectively) GPT-4o, and then using extensive Chain of Thought (CoT) and quite a lot of tokens. Thus that unlocks (a lot of) what that can unlock. We did not previously know how to usefully do that. Now we do.

It gets much better at formal logic and reasoning, things in the ‘system 2’ bucket. That matters a lot for many tasks, if not as much as the hype led us to suspect.

It is available to paying ChatGPT users for a limited number of weekly queries. This one is very much not cheap to run, although far cheaper than a human who could think this well.

I’ll deal with practical capabilities questions first, then deal with safety afterwards.

INTRODUCING GPT-O1

> Sam Altman (CEO OpenAI): here is o1, a series of our most capable and aligned models yet.
>
> o1 is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.
>
> But also, it is the beginning of a new paradigm: AI that can do general-purpose complex reasoning.
>
> o1-preview and o1-mini are available today (ramping over some number of hours) in ChatGPT for plus and team users and our API for tier 5 users.
>
> worth especially noting:
>
> a fine-tuned version of o1 scored at the 49th percentile in the IOI under competition conditions! and got gold with 10k submissions per problem.
>
> Extremely proud of the team; this was a monumental effort across the entire company.
>
> Hope you enjoy it!

Noam Brown has a summary thread here, all of which is also
f69be67f-e5b0-4b9c-8c39-bdb448f04577
trentmkelly/LessWrong-43k
LessWrong
The Yudkowsky Ambition Scale
From Hacker News.

> 1. We're going to build the next Facebook!
> 2. We're going to found the next Apple!
> 3. Our product will create sweeping political change! This will produce a major economic revolution in at least one country! (Seasteading would be change on this level if it worked; creating a new country successfully is around the same level of change as this.)
> 4. Our product is the next nuclear weapon. You wouldn't want that in the wrong hands, would you?
> 5. This is going to be the equivalent of the invention of electricity if it works out.
> 6. We're going to make an IQ-enhancing drug and produce basic change in the human condition.
> 7. We're going to build serious Drexler-class molecular nanotechnology.
> 8. We're going to upload a human brain into a computer.
> 9. We're going to build a recursively self-improving Artificial Intelligence.
> 10. We think we've figured out how to hack into the computer our universe is running on.

This made me laugh, but from the look of it, I'd say there is little work to do to make it serious. Personally, I'd try to shorten it so it is punchier and more memorable.
a94df86a-87d3-4f79-858e-aaea6e2e0803
trentmkelly/LessWrong-43k
LessWrong
New Tool: the Residual Stream Viewer
This is a link-post for the residual stream viewer, which can be found here. It's an online tool whose goal is to make it easier to do interpretability research by letting you easily look at directions within the residual stream. It's still in a quite early/unpolished state, so there may be bugs, and any feature requests are very welcome! I'll probably do more to flesh this out if I get the sense that people are finding it useful.

Very briefly, the tool lets you see what the dot product of the residual stream at each token is with a particular direction. The default directions that you can look at using the tool were found by PCA, and I think many of them are fairly interpretable even at a glance (though it's worth noting that even if they correlate heavily with an apparent feature, that's no guarantee the network is actually using those directions). Here's a screenshot of the current version of the tool:

There's a YouTube tutorial for the tool available here. I endorse the YouTube tutorial as probably a better way to get acquainted with the tool than the usage guide; but I'll copy-paste the usage guide for the remainder of the post.

The residual stream viewer is a tool for finding interesting directions in the residual stream of GPT2-small, for writing explanations for those directions and reading the explanations left by others, and for constructing new directions out of linear combinations of old ones. A more detailed explanation of how transformer networks work and what the residual stream is can be found here.

If you want to actually understand what the residual stream is and how transformers work, the text that follows here is hopelessly insufficient, and you should really follow the earlier link. However, as a very brief summary of what the "residual stream" is: The residual stream can be thought of as the intermediate state of the transformer network's computation. It is the output of each layer of the network before it is fed into the next layer. Ea
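For readers who want the core computation rather than the UI: here is a minimal sketch of the per-token dot product described above. This is not the tool's actual code; it assumes GPT2-small via the HuggingFace transformers library, the layer index is an arbitrary choice, and a random unit vector stands in for the PCA directions the tool really uses.

```python
# Sketch of the per-token residual-stream dot product the viewer shows
# (illustrative only, not the tool's implementation).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

LAYER = 8  # hypothetical residual-stream layer
ids = tokenizer("The quick brown fox jumps over the lazy dog",
                return_tensors="pt").input_ids

with torch.no_grad():
    # hidden_states[LAYER] is the residual stream entering block LAYER.
    resid = model(ids, output_hidden_states=True).hidden_states[LAYER][0]

# Stand-in for a PCA direction: a random unit vector in residual space.
direction = torch.randn(resid.shape[-1])
direction = direction / direction.norm()

scores = resid @ direction  # one dot product per token position
for tok, score in zip(tokenizer.convert_ids_to_tokens(ids[0].tolist()), scores):
    print(f"{tok!r}: {score.item():.3f}")
```

The viewer then colors each token by this score, which is what makes directions with an interpretable meaning visually obvious at a glance.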
13422d47-6428-4153-8bd6-2fe2bda4bd41
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Lone Genius Bias and Returns on Additional Researchers
One thing that most puzzles me about Eliezer's writings on AI is his apparent belief that a small organization like MIRI is likely to be able to beat larger organizations like Google or the US Department of Defense to building human-level AI. In fact, he seems to believe such larger organizations may have no advantage at all over a smaller one, and perhaps will even be at a disadvantage. In his 2011 debate with Robin Hanson, [he said](/lw/bug/partial_transcript_of_the_hansonyudkowsky_june/):

> **As far as I can tell what happens when the government tries to develop AI is nothing.** But that could just be an artifact of our local technological level and it might change over the next few decades. To me it seems like a deeply confusing issue whose answer is probably not very complicated in an absolute sense. Like we know why it’s difficult to build a star. You’ve got to gather a very large amount of interstellar hydrogen in one place. So we understand what sort of labor goes into a star and we know why a star is difficult to build. When it comes to building a mind, we don’t know how to do it so it seems very hard. We like query our brains to say “map us a strategy to build this thing” and it returns null so it feels like it’s a very difficult problem. But in point of fact we don’t actually know that the problem is difficult apart from being confusing. We understand the star-building problem so we know it’s difficult. This one we don’t know how difficult it’s going to be after it’s no longer confusing.
>
> So to me the AI problem looks like a—it looks to me more like the sort of thing that the problem is finding bright enough researchers, bringing them together, letting them work on that problem instead of demanding that they work on something where they’re going to produce a progress report in two years which will validate the person who approved the grant and advance their career. And so the government has historically been tremendously bad at producing basic research progress in AI, in part because the most senior people in AI are often people who got to be very senior by having failed to build it for the longest period of time. (This is not a universal statement. I’ve met smart senior people in AI.)
>
> **But nonetheless, basically I’m not very afraid of the government because I don’t think it’s a throw warm bodies at the problem and I don’t think it’s a throw warm computers at the problem. I think it’s a good methodology, good people selection, letting them do sufficiently blue sky stuff, and so far historically the government has been tremendously bad at producing that kind of progress.** (When they have a great big project to try to build something it doesn’t work. When they fund long-term research it works.)

I admit, I don't feel like I fully grasp all the reasons for the disagreement between Eliezer and myself on this issue. Some of the disagreement, I suspect, comes from slightly different views on the nature of intelligence, though I'm having trouble pinpointing what those differences might be. But some of the difference, I think, comes from the fact that I've become convinced humans suffer from a *Lone Genius Bias*—a tendency to over-attribute scientific and technological progress to the efforts of lone geniuses.
***Disclaimer:*** *My understanding of [Luke's current strategy for MIRI](/lw/cfc/how_can_we_ensure_that_a_friendly_ai_team_will_be/9o3a) is that it does not hinge on whether or not MIRI itself eventually builds AI. It seems to me that as long as MIRI keeps [publishing research](http://intelligence.org/research/) that could potentially help other people build FAI, MIRI is doing important work. Therefore, I wouldn't advocate anything in this post being taken as a reason not to donate to MIRI. I've donated recently, and will probably [edit: see [below](/r/lesswrong/lw/ixt/lone_genius_bias_and_returns_on_additional/9zls)] continue to do so in the future.*

[Intelligence Explosion Microeconomics](http://intelligence.org/files/IEM.pdf) has an interesting section labeled "Returns on Population" (section 3.4) where, among other things, Eliezer says:

> Although I expect that this section of my analysis will not be without controversy, it appears to the author to also be an important piece of data to be explained that human science and engineering seem to scale over time better than over population—an extra decade seems much more valuable than adding warm bodies.
>
> Indeed, it appears to the author that human science scales ludicrously poorly with increased numbers of scientists, and that this is a major reason there hasn’t been more relative change from 1970–2010 than from 1930–1970 despite the vastly increased number of scientists. The rate of real progress seems mostly constant with respect to time, times a small factor more or less. I admit that in trying to make this judgment I am trying to summarize an overwhelmingly distant grasp on all the fields outside my own handful. Even so, a complete halt to science or a truly exponential (or even quadratic) speedup of real progress both seem like they would be hard to miss, and the exponential increase of published papers is measurable. Real scientific progress is continuing over time, so we haven’t run out of things to investigate; and yet somehow real scientific progress isn’t scaling anywhere near as fast as professional scientists are being added.
>
> The most charitable interpretation of this phenomenon would be that science problems are getting harder and fields are adding scientists at a combined pace which produces more or less constant progress. It seems plausible that, for example, Intel adds new researchers at around the pace required to keep up with its accustomed exponential growth...

Eliezer goes on to suggest, however, that Intel is not at all typical, and proposes some *other* explanations, two of which ("science is inherently bounded by serial causal depth" and that scientific progress is limited by the need to wait for the last generation to die) suggest that progress doesn't scale at *all* with added researchers, at least past a certain point.

I'm inclined to think that Eliezer's basic claim here—that research progress scales better with time than population—is probably correct. Doubling the number of researchers working on a problem rarely means solving the problem twice as fast. However, I doubt the scaling is as *ludicrously* bad as Eliezer suggests. I suspect the case of Intel is fairly typical, and the "science problems are getting harder" theory of the history of science has a lot more going for it than Eliezer wants to grant. For one thing, there seems to be a human bias in favor of attributing scientific and technological progress to lone geniuses—call it the Lone Genius Bias.
In fiction, it's common for the cast to have a single "smart guy," a Reed Richards type, who does everything important in the science and technology area, pulling off miraculous achievements all by himself. (If you're lucky, this role will be shared by *two* characters, like Fitz-Simmons on Joss Whedon's new *S.H.I.E.L.D.* TV show.) Similarly, villainous plots often hinge on kidnapping *one single scientist* who will be able to fulfill all of the villain's technical know-how needs. There's some reason to chalk this up to peculiarities of fiction (see TVTropes articles on the [Omnidisciplinary Scientist](http://tvtropes.org/pmwiki/pmwiki.php/Main/OmnidisciplinaryScientist) and [The Main Characters Do Everything](http://tvtropes.org/pmwiki/pmwiki.php/Main/TheMainCharactersDoEverything) generally). But it often seems to bleed over into perceptions of real-life scientists and engineers.

Saul Kripke, in the course of making a point about proper names, [once claimed](http://books.google.com/books?id=9vvAlOBfq0kC&pg=PA85&lpg=PA85&dq=Saul+Kripke+einstein+atom+bomb&source=bl&ots=MTck8mFtwF&sig=S8dWKlrdOrwjJdkvg2YXbcyQve0&hl=en&sa=X&ei=btFyUpKAA_Td4APfv4HwBQ&ved=0CDYQ6AEwAw#v=onepage&q=Saul%20Kripke%20einstein%20atom%20bomb&f=false) that he often met people who identified Einstein as the inventor of the atom bomb. Of course, in reality, Einstein just provided the initial theoretical basis for the atom bomb. Not only did the bomb itself require the Manhattan Project (which involved over 100,000 people) to build, but there was a fair amount of basic science that had to take place after Einstein's original statement of mass-energy equivalence in 1905 before the Manhattan Project could even be conceived of.

Or: in the popular imagination, Thomas Edison was an amazingly brilliant inventor, almost on par with Reed Richards. A contrarian view, popular among tech geeks, says that actually Edison was a jerk who got famous taking credit for other people's work, and also he depended on having a lot of other people working for him at Menlo Park. But then there's a [meta-contrarian](/lw/2pv/intellectual_hipsters_and_metacontrarianism/) view that argues that Menlo Park was ["the first industrial research lab," and industrial research labs are very important, to the point that Menlo Park itself was Edison's "major innovation."](http://en.wikipedia.org/wiki/Thomas_Edison#Menlo_Park) On this view, it's not *Edison's* fault that Lone Genius Bias leads people to misunderstand what his true contribution was.

It's easy to see, in evolutionary terms, why humans might suffer from Lone Genius Bias. In the ancestral environment, major achievements would often have been the work of a single individual. Theoretically, there might have been the occasional achievement that required the cooperation of a whole entire hunter-gatherer band, but major achievements were *never* the work of Intel-sized R&D departments or 100,000 person Manhattan Projects. (This is an instance of the more general principle that humans have trouble fully grokking complex modern societies.) Once you know about Lone Genius Bias, you should be suspicious when you find yourself gravitating towards future scenarios where the key innovations are the work of a few geniuses. Furthermore, it's not just that big projects are more common now than they were in the ancestral environment.
The tendency of major advances to be the work of large groups seems to have noticeably increased over just the last century or so, and that trend may only continue even further in the future. Consider Nobel Prizes. The first Nobel Prizes were awarded in 1901. When people think of Nobel Prize winners they tend to think of *unshared* Nobel Prizes, like Einstein's, but in fact a Nobel Prize can be shared by up to three people. And when you look at [the list of Nobel Prize winners over the years](http://en.wikipedia.org/wiki/List_of_Nobel_laureates), the tendency towards giving out more and more shared prizes as time goes on is obvious.

In fact, given the way science currently works, many people find the rule that no more than three people can share a prize too restrictive. The Nobel for the discovery of the Higgs Boson, for example, went to two theoreticians who predicted the particle decades ago, while ignoring the contributions of the large number of experimental scientists whose work was required to confirm the particle's existence. An *IEEE Spectrum* headline went as far as to state the prize ["ignores how modern science works."](http://spectrum.ieee.org/tech-talk/aerospace/astrophysics/nobel-for-higgs-boson-discovery-ignores-how-modern-science-works)

You can reach the same conclusion just looking at the bylines on scientific papers. The single-author scientific paper ["has all but disappeared."](http://www.nature.com/nature/history/full/nature06243.html) Some of that may be due to people gaming the citation-count-as-measure-of-scientific-productivity system, but my impression is that the typical university science lab's PI (principal investigator) really couldn't be nearly as productive without their miniature army of postdocs, grad students, and paid staff. (Consider also that gaming of citation counts *hasn't* led to an explosion of authors-per-paper in fields like philosophy, where there are obviously fewer benefits to collaboration.)

And if you need one more argument that scientific problems are getting harder, and increasingly unlikely to be solved by lone geniuses... what does anyone *honestly* think the chances are that the Next Big Thing in science will come in the form of some [26-year-old publishing a few single-author papers in the same year he got his PhD](http://en.wikipedia.org/wiki/Annus_Mirabilis_papers)?

**Update:** Luke's comments on this post are awesome and I recommend people read them.