id
stringlengths
36
36
source
stringclasses
15 values
formatted_source
stringclasses
13 values
text
stringlengths
2
7.55M
cbcc737a-2775-46ce-b581-0ed7bc4496e0
trentmkelly/LessWrong-43k
LessWrong
Post/meta-skepticism I Less Wrong has been significantly influenced by the skeptic movement. This has general been good for the epistemic health of our community, but at the same time we should also worry about whether everything we have inherited is true and beneficial. In other words, we need to apply skepticism to skepticism itself. It's often good to start with the strengths of a movement and consider how these can also become weaknesses if taken too extremes, so this is what we'll do in this post instead of taking them at face value. The fundamental promise of skepticism is that it will prevent you from being mislead by weak evidence, poor reasoning, social pressure or those who would manipulate you. The opposite of foolishing accepting weak evidence is being excessively dogmatic in the standards of evidence you require, even when circumstances require you to decide based on inconclusive evidence. A famous spoof article jokes that we don't know parachutes are reliable because we don't have a randomised controlled trial. Wittgenstein wanted to argue that language is nonsense insofar as it failed to paint a picture or possible pictures of the world (in a particular technical sense). The only trouble was that this claim itself didn't paint a picture of the world. The logical positivists belived that beliefs were defined in terms of the experimental predictions they made, however it is almost entirely false that just now a chocolate cake spontaneously formed at the center of the sun then dissolve and it isn't clear how to define this claim itself experimentally or in terms of mathematics. Behaviourism argued that it was unscientific to ascribe emotions, beliefs or thoughts to humans as we can't directly observe these, which resulted in a greatly crippled psychology field. Similarly, the opposite of foolishing accepting poor reasoning is being excessively dogmatic in the preciseness of arguments that you need. When talking about ethics, Aristotle argues that, "We must not expect more
6c2cfd83-9ef7-4684-a716-9c58dc13868c
trentmkelly/LessWrong-43k
LessWrong
Meetup : MelbLW Social Meetup: Feb 2016 Discussion article for the meetup : MelbLW Social Meetup: Feb 2016 WHEN: 19 February 2016 06:30:00PM (+1100) WHERE: Bull and Bear Tavern, 347 Flinders Lane, Melbourne VIC 3000 This month's LessWrong Melbourne social meetup is on again this week! Facebook event page: https://www.facebook.com/events/182819988751813/ Our social meetups run on the third Friday of each month. They are casual get-togethers where we chat about topics of interest over a few drinks and a meal. Sometimes we play games as well. Where? The Bull & Bear Tavern, 347 Flinders Ln, Melbourne VIC 3000 When? From 6:30pm on Friday 19th Feb. Feel free to show up later, though. We typically stay at the main venue until about 11pm. Dinner? The Bull & Bear serve decent pub food and we usually get some wedges etc to share. If you would like to wait, we usually go out for dinner/supper afterwards. Games? Feel free to bring a game along if you'd like to play! Contact? Any questions or issues, contact Richard on 0421231789 Hope to see you there! Discussion article for the meetup : MelbLW Social Meetup: Feb 2016
70e3e6ba-cf7f-41f4-bb24-08076f89e0f0
trentmkelly/LessWrong-43k
LessWrong
Catastrophic sabotage as a major threat model for human-level AI systems Thanks to Holden Karnofsky, David Duvenaud, and Kate Woolverton for useful discussions and feedback. Following up on our recent “Sabotage Evaluations for Frontier Models” paper, I wanted to share more of my personal thoughts on why I think catastrophic sabotage is important and why I care about it as a threat model. Note that this isn’t in any way intended to be a reflection of Anthropic’s views or for that matter anyone’s views but my own—it’s just a collection of some of my personal thoughts. First, some high-level thoughts on what I want to talk about here: * I want to focus on a level of future capabilities substantially beyond current models, but below superintelligence: specifically something approximately human-level and substantially transformative, but not yet superintelligent. * While I don’t think that most of the proximate cause of AI existential risk comes from such models—I think most of the direct takeover risk comes from more intelligent models—I do think that much of the root cause of risk might come from such models. What I mean by that is approximately: in a takeover scenario, even if it’s a superintelligent model that ends up taking over the world, in retrospect the point at which things went wrong probably happened earlier. And in fact I think the critical juncture is likely to be around this level of capabilities, e.g. because this is around the time when I expect most alignment research to be shifting over from being done by humans to being done by AIs. * In particular, I’m focusing on a level of capabilities that is substantially beyond anything that we found in “Sabotage Evaluations for Frontier Models.” Even perfect performance on all the evals in that paper would still probably be below the level of capabilities I am intending to talk about here (though it’s a bit ambiguous since many of those evals have an attacker and defender and it’s unclear what would happen if you substantially increased both the attacker and defender capabil
e68c4ff3-6119-47f8-b4a8-e73896ac85ee
trentmkelly/LessWrong-43k
LessWrong
Another Case Study of Inadequate Equilibrium in Medicine Science Journalist Julie Rehmeyer suffered from crippling Chronic Fatigue Syndrome. She's plugging a new book discussing her experiences with the not-medically-established treatment. The article in this link post discusses her decision to attempt the treatment, as well as first-order, meta-level suggestions about pursuing this type of independent research for other conditions. It's nothing groundbreaking (especially to anyone who read Inadequate Equilibria), but it's an interesting to see the same idea being pitched to a wider audience. (h/t Andrew Gelman)
525c2c46-6639-4a86-b94c-19fea9390210
StampyAI/alignment-research-dataset/arbital
Arbital
Quotient by subgroup is well defined if and only if subgroup is normal Let $G$ be a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) and $N$ a [https://arbital.com/p/-4h6](https://arbital.com/p/-4h6) of $G$. Then we may define the *quotient group* $G/N$ to be the set of [left cosets](https://arbital.com/p/4j4) $gN$ of $N$ in $G$, with the group operation that $gN + hN = (gh)N$. This is well-defined if and only if $N$ is normal. # Proof ## $N$ normal implies $G/N$ well-defined Recall that $G/N$ is well-defined if "it doesn't matter which way we represent a coset": whichever coset representatives we use, we get the same answer. Suppose $N$ is a normal subgroup of $G$. We need to show that given two representatives $g_1 N = g_2 N$ of a coset, and given representatives $h_1 N = h_2 N$ of another coset, that $(g_1 h_1) N = (g_2 h_2)N$. So given an element of $g_1 h_1 N$, we need to show it is in $g_2 h_2 N$, and vice versa. Let $g_1 h_1 n \in g_1 h_1 N$; we need to show that $h_2^{-1} g_2^{-1} g_1 h_1 n \in N$, or equivalently that $h_2^{-1} g_2^{-1} g_1 h_1 \in N$. But $g_2^{-1} g_1 \in N$ because $g_1 N = g_2 N$; let $g_2^{-1} g_1 = m$. Similarly $h_2^{-1} h_1 \in N$ because $h_1 N = h_2 N$; let $h_2^{-1} h_1 = p$. Then we need to show that $h_2^{-1} m h_1 \in N$, or equivalently that $p h_1^{-1} m h_1 \in N$. Since $N$ is closed under conjugation and $m \in N$, we must have that $h_1^{-1} m h_1 \in N$; and since $p \in N$ and $N$ is closed under multiplication, we must have $p h_1^{-1} m h_1 \in N$ as required. ## $G/N$ well-defined implies $N$ normal Fix $h \in G$, and consider $hnh^{-1} N + hN$. Since the quotient is well-defined, this is $(hnh^{-1}h) N$, which is $hnN$ or $hN$ (since $nN = N$, because $N$ is a subgroup of $G$ and hence is closed under the group operation). But that means $hnh^{-1}N$ is the identity element of the quotient group, since when we added it to $hN$ we obtained $hN$ itself. That is, $hnh^{-1}N = N$. Therefore $hnh^{-1} \in N$. Since this reasoning works for any $h \in G$, it follows that $N$ is closed under conjugation by elements of $G$, and hence is normal.
25f32810-36f7-44e9-a365-71c5bc92ff9a
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #148]: Analyzing generalization across more axes than just accuracy or loss Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter. Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-148)** (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS ========== **[Distributional Generalization: A New Kind of Generalization](https://arxiv.org/abs/2009.08092)** *(Preetum Nakkiran and Yamini Bansal)* (summarized by Rohin): Suppose you train a classifier to distinguish between CIFAR-10 classes, except each airplane has a 30% chance of being mislabeled as a car. If you then train a model to achieve perfect accuracy on this badly labeled dataset, it will get 100% accuracy on the training set, and 97% of those labels will actually be correct (since 3% are mislabeled airplanes). Under the current paradigm, if we say that the model “generalizes”, that means that it will also get 97% accuracy at test time (according to the actually correct labels). However, this doesn’t tell us anything about what mistakes are made at test time -- is it still the case that 30% of airplanes are mislabeled as cars, or does the model also make mistakes on e.g. deer? *Distributional generalization* aims to make claims about situations like these. The core idea is to make claims about the full distribution of classifier outputs, rather than just the single metric of test accuracy. Formally, we assume there is some distribution D, from which we can sample pairs of points (x, y), which generates both our train and test sets. Then, the train (resp. test) distribution of classifier outputs is (x, f(x)), with x coming from the train (resp. test) set. The train and test distributions of classifier outputs are the objects of study in distributional generalization. In particular, given a [0,1]-valued function on distributions (called a *test* T), we say that the classifier *generalizes w.r.t T* if T outputs similar values on the train and test distribution. (W.r.t means “with respect to”.) For example, given a distribution, the *accuracy test* checks how often the classifier’s output is correct in expectation over that distribution. Generalization w.r.t the accuracy test is equivalent to the canonical notion of generalization. Let’s suppose that the classifier perfectly fits the training set, so that the train distribution of classifier outputs is the same as the original distribution D. Let’s additionally suppose that the classifier generalizes with respect to the accuracy test, so that the classifier has perfect test accuracy. Then, the test distribution of classifier outputs will also be the same as the original distribution D, that is, all distributions are identical and there isn’t much more to say. So, the interesting situations are when one of these two assumptions is false, that is, when either: 1. The classifier does not perfectly fit the training set, or 2. The classifier does not generalize w.r.t accuracy. This paper primarily focuses on classifiers that *do* perfectly fit the training set, but don’t generalize w.r.t accuracy. One typical way to get this setting is to inject label noise (as in the mislabeled airplanes case), since this prevents the classifier from getting 100% test accuracy. Speaking of which, let’s return to our original example in which we add label noise by mislabeling 30% of airplanes as cars. Notice that, since the label noise is completely divorced from the classifier’s input x, the best way for the classifier to minimize *test* loss would be to always predict the true CIFAR-10 label, and then 3% of the time the true distribution will say “lol, jk, this airplane is actually a car”. However, in practice, classifiers will also label approximately 30% of airplanes as cars in the test set as well! This incurs higher loss, because the 30% of airplanes that the classifier labels as cars must be independent of the 30% of airplanes that the true distribution labels as cars, which implies that the model disagrees with the true distribution 4.2% of the time; this is worse than the 3% it would get if it consistently labeled airplanes as airplanes. **Classifiers trained to interpolation are not Bayes-optimal in the presence of label noise.** Okay, let’s get back to distributional generalization. We already know the classifier does not generalize w.r.t accuracy. However, the fact that it still labels about 30% of airplanes as cars suggests a different kind of generalization. Recall that the train and test distributions of classifier outputs have the form (x, f(x)). Consider the feature L(x) that says whether x is an airplane or not. Then, if we replace (x, f(x)) with (L(x), f(x)), then this now looks identical between the train and test distributions! Specifically, this distribution places 7% on (“yes airplane”, “airplane”), 3% on (“yes airplane”, “car”), and 10% on (“no airplane”, c) for every class c other than “airplane”. An alternative way of stating this is that the classifier generalizes w.r.t all tests whose dependence on x factors through the feature L. (In other words, the test can only depend on whether x is an airplane or not, and cannot depend on any other information about x.) The authors make a more general version of this claim they call *feature calibration*: for every feature L that *could* be learned by the classifier, the classifier generalizes w.r.t all tests whose dependence on x factors through L. Note that they do not assume that the classifier *actually* learns L: just that, if you hypothetically trained the classifier on a dataset of (x, L(x)), then it could learn that function near-perfectly. They then provide evidence for this through a variety of experiments and one theorem: - If you plug in the constant feature L(x) = 0 into the conjecture, it implies that classifiers should get the right class balance (i.e. if your distribution contains class 1 twice as often as class 0, then you predict class 1 twice as often as class 0 at test time). They demonstrate this on a rebalanced version of CIFAR-10, even for classifiers that generalize poorly w.r.t accuracy. - When using a WideResNet (for which the true CIFAR-10 labels are learnable), if you add a bunch of structured label noise into CIFAR-10, the test predictions reflect that same structure. - The same thing is true for decision trees applied to a molecular biology dataset. - A ResNet-50 trained to predict attractiveness on the CelebA dataset (which does not generalize w.r.t accuracy) does satisfy feature calibration w.r.t “wearing lipstick”, “heavy makeup”, “blond hair”, “male”, and “eye-glasses”. Note there is no label noise in this case. - AlexNet predicts that the right fraction of dogs are Terriers, even though it mistakes which exact dogs are Terriers. - The nearest-neighbor classifier provably satisfies feature calibration under relatively mild regularity conditions. In an appendix, they provide preliminary experiments suggesting this holds *pointwise*. In our mislabeled airplane example, for a *specific* airplane x from the test set, if you resample a training set (with the 30% mislabeling of airplanes) and retrain a classifier on that set, then there is a roughly 30% chance that that specific x will be misclassified as a car. The authors then introduce another distributional generalization property: *agreement*. Suppose we have two classifiers f and g trained on independently sampled training sets. The agreement conjecture states that the test accuracy of f is equal to the expected probability that f agrees with g on the test distribution (loosely speaking, this is how often f and g make the same prediction for test inputs). The agreement property can also be framed as an instance of distributional generalization, though I won’t go into the specific test here. The authors perform similar experiments as with feature calibration to demonstrate that the agreement property does seem to hold across a variety of possible classifiers. Interestingly, these properties are *not* closed under ensembling. In our mislabeled airplane example, every model will label 30% of airplanes as cars, but *which* airplanes are mislabeled is independent across models. As a result, the plurality voting used in ensembles reduces the misclassification rate to 22%, which means that you no longer satisfy feature calibration. Consistent with this, the authors observe that neural network ensembles, random forests, and k-nearest neighbors all did not satisfy feature calibration, and tended to be closer to the Bayes-optimal solution (i.e. getting closer to being robust to label noise, in our example). Summary of the summary: Let’s look at the specific ways in which classifiers make mistakes on the test distribution. This is called distributional generalization. The paper makes two conjectures within this frame. *Feature calibration* says that for any feature that a classifier could have learned, the distribution of its predictions, conditioned on that feature, will be the same at train and test time, including any mistakes it makes. *Agreement* says that the test accuracy of a classifier is equal to the probability that, on some randomly chosen test example, the classifier’s prediction matches that of another classifier trained on a freshly generated training set. Interestingly, while these properties hold for a variety of ML models, they do not hold for ensembles, because of the plurality voting mechanism. **Read more:** Section 1.3 of **[this version of the paper](https://mltheory.org/dg_short.pdf)** TECHNICAL AI ALIGNMENT ======================= AGENT FOUNDATIONS ------------------ **[The Many Faces of Infra-Beliefs](https://www.alignmentforum.org/posts/GS5P7LLLbSSExb3Sk/the-many-faces-of-infra-beliefs)** *(Diffractor)* (summarized by Rohin): When modeling an agent that acts in a world **[that contains it](https://www.alignmentforum.org/posts/p7x32SEt43ZMC9r7r/embedded-agents)** (**[AN #31](https://mailchi.mp/7d0e3916e3d9/alignment-newsletter-31)**), there are different ways that we could represent what a “hypothesis about the world” should look like. (We’ll use **[infra-Bayesianism](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa)** (**[AN #143](https://mailchi.mp/8aae224669a8/an-143how-to-make-embedded-agents-that-reason-probabilistically-about-their-environments)**) to allow us to have hypotheses over environments that are “bigger” than the agent, in the sense of containing the agent.) In particular, hypotheses can vary along two axes: 1. **First-person vs. third-person:** In a first-person perspective, the agent is central. In a third-person perspective, we take a “birds-eye” view of the world, of which the agent is just one part. 2. **Static vs. dynamic:** In a dynamic perspective, the notion of time is explicitly present in the formalism. In a static perspective, we instead have beliefs directly about entire world-histories. To get a tiny bit more concrete, let the world have states S and the agent have actions A and observations O. The agent can implement policies Π. I will use ΔX to denote a belief over X (this is a bit handwavy, but gets the right intuition, I think). Then the four views are: 1. First-person static: A hypothesis specifies how policies map to beliefs over observation-action sequences, that is, Π → Δ(O × A)\*. 2. First-person dynamic: This is the typical POMDP framework, in which a hypothesis is a belief over initial states and transition dynamics, that is, ΔS and S × A → Δ(O × S). 3. Third-person static: A hypothesis specifies a belief over world histories, that is, Δ(S\*). 4. Third-person dynamic: A hypothesis specifies a belief over initial states, and over the transition dynamics, that is, we have ΔS and S → ΔS. Notice that despite having “transitions”, actions do not play a role here. Given a single “reality”, it is possible to move between these different views on reality, though in some cases this requires making assumptions on the starting view. For example, under regular Bayesianism, you can only move from third-person static to third-person dynamic if your belief over world histories Δ(S\*) satisfies the Markov condition (future states are conditionally independent of past states given the present state); if you want to make this move even when the Markov condition isn’t satisfied, you have to expand your belief over initial states to be a belief over “initial” world histories. You can then define various flavors of (a)causal influence by saying which types of states S you allow: 1. If a state s consists of a policy π and a world history (oa)\* that is consistent with π, then the environment transitions can depend on your choice of π, leading to acausal influence. This is the sort of thing that would be needed to formalize Newcomb’s problem. 2. In contrast, if a state s consists only of an environment E that responds to actions but *doesn’t* get to see the full policy, then the environment cannot depend on your policy, and there is only causal influence. You’re implicitly claiming that Newcomb’s problem cannot happen. 3. Finally, rather than have an environment E that (when combined with a policy π) generates a world history (oa)\*, you could have the state s directly be the world history (oa)\*, *without* including the policy π. In normal Bayesianism, using (oa)\* as states would be equivalent to using environments E as states (since we could construct a belief over E that implies the given belief over (oa)\*), but in the case of infra-Bayesianism it is not. (Roughly speaking, the differences occur when you use a “belief” that isn’t just a claim about reality, but also a claim about which parts of reality you “care about”.) This ends up allowing some but not all flavors of acausal influence, and so the authors call this setup “pseudocausal”. In all three versions, you can define translations between the four different views, such that following any path of translations will always give you the same final output (that is, translating from A to B to C has the same result as A to D to C). This property can be used to *define* “acausal”, “causal”, and “pseudocausal” as applied to belief functions in infra-Bayesianism. (I’m not going to talk about what a belief function is; see the post for details.) FORECASTING ------------ **[Three reasons to expect long AI timelines](https://www.alignmentforum.org/posts/Z5gPrKTR2oDmm6fqJ/three-reasons-to-expect-long-ai-timelines)** *(Matthew Barnett)* (summarized by Rohin): This post outlines and argues for three reasons to expect long AI timelines that the author expects are not taken into account in current forecasting efforts: 1. **Technological deployment lag:** Most technologies take decades between when they're first developed and when they become widely impactful. 2. **Overestimating the generality of AI technology:** Just as people in the 1950s and 1960s overestimated the impact of solving chess, it seems likely that current people are overestimating the impact of recent progress, and how far it can scale in the future. 3. **Regulation will slow things down,** as with **[nuclear energy](https://rootsofprogress.org/devanney-on-the-nuclear-flop)**, for example. You might argue that the first and third points don’t matter, since what we care about is when AGI is *developed*, as opposed to when it becomes widely deployed. However, it seems that we continue to have the opportunity to intervene until the technology becomes widely impactful, and that seems to be the relevant quantity for decision-making. You could have some specific argument like “the AI goes FOOM and very quickly achieves all of its goals” that then implies that the development time is the right thing to forecast, but none of these seem all that obvious. **Rohin's opinion:** I broadly agree that (1) and (3) don’t seem to be discussed much during forecasting, despite being quite important. (Though see e.g. **[value of the long tail](https://www.lesswrong.com/posts/Nbcs5Fe2cxQuzje4K/value-of-the-long-tail)**.) I disagree with (2): while it is obviously possible that people are overestimating recent progress, or are overconfident about how useful scaling will be, there has at least been a lot of thought put into that particular question -- it seems like one of the central questions tackled by **[bio anchors](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)** (**[AN #121](https://mailchi.mp/41774b61e5f8/an-121forecasting-transformative-ai-timelines-using-biological-anchors)**). See more discussion in this **[comment thread](https://www.alignmentforum.org/posts/Z5gPrKTR2oDmm6fqJ/three-reasons-to-expect-long-ai-timelines?commentId=F7FNee8Bpa8hemQkd)**. FIELD BUILDING --------------- **[FAQ: Advice for AI Alignment Researchers](https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/)** *(Rohin Shah)* (summarized by Rohin): I've written an FAQ answering a broad range of AI alignment questions that people entering the field tend to ask me. Since it's a meta post, i.e. about how to do alignment research rather than about alignment itself, I'm not going to summarize it here. MISCELLANEOUS (ALIGNMENT) -------------------------- **[Testing The Natural Abstraction Hypothesis: Project Intro](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro)** *(John Wentworth)* (summarized by Rohin): We’ve previously seen some discussion about **[abstraction](https://www.alignmentforum.org/posts/vDGvHBDuMtcPd8Lks/public-static-what-is-abstraction)** (**[AN #105](https://mailchi.mp/be2a0d160fa2/an-105-the-economic-trajectory-of-humanity-and-what-we-might-mean-by-optimization)**), and some **[claims](https://www.lesswrong.com/posts/wopE4nT28ausKGKwt/classification-of-ai-alignment-research-deconfusion-good?commentId=cKNrWxfxRgENS2EKX)** that there are “natural” abstractions, or that AI systems will **[tend](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety)** (**[AN #72](https://mailchi.mp/cac125522aa3/an-72-alignment-robustness-methodology-and-system-building-as-research-priorities-for-ai-safety)**) to **[learn](https://aiimpacts.org/conversation-with-rohin-shah/)** (**[AN #80](https://mailchi.mp/b3dc916ac7e2/an-80-why-ai-risk-might-be-solved-without-additional-intervention-from-longtermists)**) increasingly human-like abstractions (at least up to a point). To make this more crisp, given a system, let’s consider the information (abstraction) of the system that is relevant for predicting parts of the world that are “far away”. Then, the **natural abstraction hypothesis** states that: 1. This information is much lower-dimensional than the system itself. 2. These low-dimensional summaries are exactly the high-level abstract objects/concepts typically used by humans. 3. These abstractions are “natural”, that is, a wide variety of cognitive architectures will learn to use approximately the same concepts to reason about the world. For example, to predict the effect of a gas in a larger system, you typically just need to know its temperature, pressure, and volume, rather than the exact positions and velocities of each molecule of the gas. The natural abstraction hypothesis predicts that many cognitive architectures would all converge to using these concepts to reason about gases. If the natural abstraction hypothesis were true, it could make AI alignment dramatically simpler, as our AI systems would learn to use approximately the same concepts as us, which can help us both to “aim” our AI systems at the right goal, and to peer into our AI systems to figure out what exactly they are doing. So, this new project aims to test whether the natural abstraction hypothesis is true. The first two claims will likely be tested empirically. We can build low-level simulations of interesting systems, and then compute what summary is useful for predicting its effects on “far away” things. We can then ask how low-dimensional that summary is (to test (1)), and whether it corresponds to human concepts (to test (2)). A **[followup post](https://www.alignmentforum.org/posts/f6oWbqxEwktfPrKJw/computing-natural-abstractions-linear-approximation)** illustrates this in the case of a linear-Gaussian Bayesian network with randomly chosen graph structure. In this case, we take two regions of 110 nodes that are far apart from each other, and operationalize the relevant information between the two as the covariance matrix between the two regions. It turns out that this covariance matrix has about 3-10 “dimensions” (depending on exactly how you count), supporting claim (1). (And in fact, if you now compare to another neighborhood, two of the three “dimensions” remain the same!) Unfortunately, this doesn’t give much evidence about (2) since humans don’t have good concepts for parts of linear-Gaussian Bayesian networks with randomly chosen graph structure. While (3) can also be tested empirically through simulation, we would hope that we can also prove theorems that state that nearly all cognitive architectures from some class of models would learn the same concepts in some appropriate types of environments. To quote the author, “the holy grail of the project would be a system which provably learns all learnable abstractions in a fairly general class of environments, and represents those abstractions in a legible way. In other words: it would be a standardized tool for measuring abstractions. Stick it in some environment, and it finds the abstractions in that environment and presents a standard representation of them.” **Rohin's opinion:** The notion of “natural abstractions” seems quite important to me. There are at least some weak versions of the hypothesis that seem obviously true: for example, if you ask GPT-3 some new type of question it has never seen before, you can predict pretty confidently that it is still going to respond with real words rather than a string of random characters. This is effectively because you expect that GPT-3 has learned the “natural abstraction” of the words used in English and that it uses this natural abstraction to drive its output (leaving aside the cases where it must produce output in some other language). The version of the natural abstraction hypothesis investigated here seems a lot stronger and I’m excited to see how the project turns out. I expect the author will post several short updates over time; I probably won’t cover each of these individually and so if you want to follow it in real time I recommend following it on the Alignment Forum. #### **FEEDBACK** I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**. #### **PODCAST** An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.
dfbef90d-537e-4e8d-98b0-b7b494e1a65f
trentmkelly/LessWrong-43k
LessWrong
Taxonomy of possibility Followup to: Will issues are quite nearly skill issues You do a thing iff you can do it and you want to do it. Something you can do, but didn't, looks just the same to the simple observer as something you can't do. Likewise with what you want to do, but didn't. When you don't do something, obvious physical reality leaves ambiguous whether you were unable or disinclined. Here are some things I don't do: * fly * stay awake continuously for a year * destroy Instagram * enter the gym without showing ID * take notes on a chalkboard * crush my finger between door and frame Each of those things, I don't do for a single reason: I'm unable to do the first three, and disinclined to do the last three. To categorise them like that is intuitive. The intuition here, made precise, relies on counterfactuals. "Unable" and "disinclined" each classify answers to "what would have to change to make you do that?". Can "Can" is the opposite of "unable". "Unable" means that the change involves granting ability to they who would act, i.e. teaching a technique, providing a tool, fixing the body, or altering the environment. If you told me where the airport is, or gave me a jetpack I trust as safe enough, I would fly, at least occasionally. If you modified my brain to consolidate learning and my body to recover while awake, and pointed me to a large supply of stimulants, I would gladly stay awake for a full year, maybe more. If I studied malicious hacking methods, and you told me the network address to access Instagram's internal database and server controls, I would wipe their data and shut them down. I can sneak into the gym, or use a chalkboard, or crush my finger, insofar as teaching and equipping me wouldn't change that I don't do any of those. Want "Want" is the opposite of "disinclined". "Disinclined" means that the change involves convincing or incentivising the one who would act. If those at the front desk of the gym turned off the card-scanner and ignored visitors
947220aa-c9c7-4ead-b1ba-4642bf361f93
trentmkelly/LessWrong-43k
LessWrong
Friedman's "Prediction vs. Explanation" David D. Friedman asks: > We do ten experiments. A scientist observes the results, constructs a theory consistent with them, and uses it to predict the results of the next ten. We do them and the results fit his predictions. A second scientist now constructs a theory consistent with the results of all twenty experiments. > > The two theories give different predictions for the next experiment. Which do we believe? Why? One of the commenters links to Overcoming Bias, but as of 11PM on Sep 28th, David's blog's time, no one has given the exact answer that I would have given.  It's interesting that a question so basic has received so many answers.
1f9bd70b-8b7e-4990-aa06-810be9ab0823
trentmkelly/LessWrong-43k
LessWrong
Rationality Reading Group: Part I: Seeing with Fresh Eyes This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post. ---------------------------------------- Welcome to the Rationality reading group. This fortnight we discuss Part I: Seeing with Fresh Eyes (pp. 365-406). This post summarizes each article of the sequence, linking to the original LessWrong post where available. I. Seeing with Fresh Eyes 87. Anchoring and Adjustment - Exposure to numbers affects guesses on estimation problems by anchoring your mind to an given estimate, even if it's wildly off base. Be aware of the effect random numbers have on your estimation ability. 88. Priming and Contamination - Contamination by Priming is a problem that relates to the process of implicitly introducing the facts in the attended data set. When you are primed with a concept, the facts related to that concept come to mind easier. As a result, the data set selected by your mind becomes tilted towards the elements related to that concept, even if it has no relation to the question you are trying to answer. Your thinking becomes contaminated, shifted in a particular direction. The data set in your focus of attention becomes less representative of the phenomenon you are trying to model, and more representative of the concepts you were primed with. 89. Do We Believe Everything We're Told - Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected. 90. Cached Thoughts - Brains are slow. They need to cache as much as they can. They store answers to questions, so that no new thought is required to answer. Answers copied from others can end up in your head without you ever examining them closely. This makes you say things that you'd never believe if you thought them through. So examine your cached thoughts! Are they true? 91. The "Outside the Box" Box - When asked to t
901d66e0-3b9b-4463-a9c1-e69bf664a6d9
trentmkelly/LessWrong-43k
LessWrong
Paper: "The Ethics of Advanced AI Assistants" -Google DeepMind Google Deepmind just released a 200 page paper. A few Authors are from the community.
4f7106f3-c06d-4c05-85c9-a0df87e89a08
trentmkelly/LessWrong-43k
LessWrong
[Prize] Essay Contest: Cryonics and Effective Altruism I'm starting a contest for the best essay describing why a rational person of a not particularly selfish nature might consider cryonics an exceptionally worthwhile place to allocate resources. There are three distinct questions relating to this, and you can pick any one of them to focus on, or answer all three. Contest Summary: * Essay Topic: Cryonics and Effective Altruism * Answers at least one of the following questions: 1. Why might a utilitarian seeking to do the most good consider contributing time and/or money towards cryonics (as opposed to other causes)? 2. What is the most optimal way (or at least, some highly optimal, perhaps counterintuitive way) to contribute to cryonics? 3. What reasons might a utilitarian have for actually signing up for cryonics services, as opposed to just making a charitable donation towards cryonics (or vice versa)? * Length: 800-1200 words * Target audience: Utilitarians, Consequentialists, Effective Altruists, etc. * Prize: 1 BTC (around $350, at the moment) * Deadline: Sunday 11/17/2013, at 8:00PM PST To enter, post your essay as a comment in this thread. Feel free to edit your submission up until the deadline. If it is a repost of something old, a link to the original would be appreciated. I will judge the essays partly based on upvotes/downvotes, but also based on how well it meets the criteria and makes its points. Essays that do not directly answer any of the three questions will not be considered for the prize. If there are multiple entries that are too close to call, I will flip a coin to determine the winner. Terminology clarification: I realise that for some individuals there is confusion about the term 'utilitarian' because historically it has been represented using very simple, humanly unrealistic utility functions such as pure hedonism. For the purposes of this contest, I mean to include anyone whose utility function is well defined and self-consistent -- it is not meant to imply a particular u
937dbb32-8f4c-439f-835b-4fbea82835fd
trentmkelly/LessWrong-43k
LessWrong
Counterintuitive World - Good intro to some topics http://motherjones.com/kevin-drum/2010/09/counterintuitive-world    
57ca8bf0-a8e6-4a97-a353-3de0cd3c6b35
trentmkelly/LessWrong-43k
LessWrong
What does "probability" really mean? Recently I read Wei Dai's "What Are Probabilities, Anyway?" post, and just wanted to see if the LessWrong community has now reached a consensus on the interpretation of probability. I am of the impression that most LWers endorse multiverse theories, whether it be the Many-Worlds Interpretation of quantum mechanics or the more radical Mathematical Universe Hypothesis by Max Tegmark. In both these cases, there exists additional worlds where all possible outcome occur (though the MWI is only restricted to "possible outcomes" that fit our current physics). An interpretation that many people seems to prefer is that all possible worlds are real and probability acts as a kind of normative "measure of caring" instead of a type of "objective reality-fluid", but I have my reservations about this. I'd love to hear the community's opinion on this today and whether or not there has been any new progress.
a893560a-5576-483d-9e54-cf0fe1ea991c
trentmkelly/LessWrong-43k
LessWrong
Simulacrum Levels of Trust Epistemic Status: Pattern matching. The model completes a pattern in an intuitive way. Little originality besides the act of pattern completion itself. As I was reading the recent discussion about networks of trust, I noticed a mismatch between models described therein and the social environment around me. In particular, I found that I would be happy to participate in the kinds of networks that came up as examples. That confused me, because in practice, I actively try to distance myself from some local communities. In fact, I've always thought that trust-based relationships were just a burden without any practical benefits for anyone involved. As it turns out, the reason for the mismatch is that the little bubble where my intuitions came from is actually different. In the networks of trust that I can observe out there people don't actually seem to be interested in helping others where help is actually needed. Or at least, they don't act like it. And it isn't just a bug, either, because no one involved asks for, or expects, any helpful help. In the meantime, there is a complex culture of ritual incantations and arbitrary norms around it that everyone is careful to follow for fear of losing social status points. In other words, it smells of the exact same kind of bullshit that simulacrum level 3 is notorious for. So I completed the pattern. The trust that is actually trust will be simulacrum level 1 (S1). The "trust" I observe out there will be S3. The pattern does complete in an obvious way for the other two levels, and the resulting model does explain my two datapoints well. Level 1: The Handshake The definition of S1 is "I help you, and expect you to help me; you help me, and expect me to help you." At this level, you may offer a hand to your neighbor because you think you could be helpful. In the world of trust-proxies, a company may deliver what the customer has paid for, because they think they offer something of value for a fair price. A suftaja, as des
7b68a337-ba94-4c15-b62e-63ba9049bbf9
trentmkelly/LessWrong-43k
LessWrong
Forecasting as a tool for teaching the general public to make better judgements? Tl;dr: We are wondering whether there might be more value in forecasting as a popularized educational activity than seems to be the consensus. Looking for explicit opinions and feedback. We are a team working on popularizing evidence- and rationality-based approaches in the Czech Republic, including forecasting. When considering and consulting our previous and upcoming projects, we came to realize that there seems to be too little attention paid to forecasting as a skill-building exercise. In this sense, we agree with David Althaus and kokotajlod’s older post regarding the mainstreaming of forecasting. While it’s not society’s biggest issue, we identify especially with the second of their points, which can be expanded as follows: * Forecasting builds an explicit link from its participants to a ground truth which can (with varying success, depending on the context) be objectively claimed.  * This link can serve as a grounding rod for rationalization efforts which can be viewed as (otherwise) value-neutral. What’s more, forecasting principles can be used and trained across various domains to meet people wherever they are most open to learning - from geopolitical questions to very personal ones. Additionally, using probabilistic forecasting in personal questions, some important aspects of rationality can be internalized quite effectively. I point to Julia Galef’s approach of satisficing using subjective criteria and qualitative probability expressions to get beyond the initial issue of not having any predictions at all.  At the same time, this satisficing need not be a significant drawback to the benefits that could flow from a wider diffusion of forecasting. Whether it be depolarization in politics or laying the groundwork for an epistemic infrastructure so that we can more consciously build shared mental models, the investment necessary to start reaping rewards could be quite small. Most concretely, this prior is based on the results from Chang et al. - we thi
5fd08050-b357-4822-bc26-f864966d7ba5
trentmkelly/LessWrong-43k
LessWrong
[Aspiration-based designs] 2. Formal framework, basic algorithm Summary. In this post, we present the formal framework we adopt during the sequence, and the simplest form of the type of aspiration-based algorithms we study. We do this for a simple form of aspiration-type goals: making the expectation of some variable equal to some given target value. The algorithm is based on the idea of propagating aspirations along time, and we prove that the algorithm gives a performance guarantee if the goal is feasible. Later posts discuss safety criteria, other types of goals, and variants of the basic algorithm. Assumptions In line with the working hypotheses stated in the previous post, we assume more specifically the following in this post: * The agent is a general-purpose AI system that is given a potentially long sequence of tasks, one by one, which it does not know in advance. Most aspects of what we discuss focus on the current task only, but some aspects relate to the fact that there will be further, unknown tasks later (e.g., the question of how much power the agent shall aim to retain at the end of the task). * It possesses an overall world model that represents a good enough general understanding of how the world works. * Whenever the agent is given a task, an episode begins and its overall world model provides it with a (potentially much simpler) task-specific world model that represents everything that is relevant for the time period until the agent gets a different task or is deactivated, and that can be used to predict the potentially stochastic consequences of taking certain actions in certain world states. * That task-specific world model has the form of a (fully observed) Markov Decision Process (MDP) that however does not contain a reward function R but instead contains what we call an evaluation function related to the task (see 2nd to next bullet point). * As a consequence of a state transition, i.e., of taking a certain action a in a certain state s and finding itself in a certain successor state s′, a certain
e348598f-4bbb-47cd-aa27-2afbe9905e22
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Francisco Meetup: Board Games Discussion article for the meetup : San Francisco Meetup: Board Games WHEN: 09 March 2015 06:15:00PM (-0700) WHERE: 1390 Market St., San Francisco, CA (Note that this is back in my apartment, not app academy: we have tables and boardgames here). We'll be meeting up to hang out and play games. If anyone has a co-op game, we could play biased boardgames, but we don't have any. Discussion article for the meetup : San Francisco Meetup: Board Games
96297d1a-b229-4a56-acc4-f554f424c64e
StampyAI/alignment-research-dataset/blogs
Blogs
Fiction *While I tend to publish most of my writing for free, I strongly believe that money is not evil. Therefore, anyone is welcome to take characters or settings from my original online fiction, such as the beisutsukai or the Baby-Eating Aliens, and use them in new commercial works of your own creation. I do ask for acknowledgment and a link or other reference to the original, but so long as the writing is your own, you may charge for access, distribute printed copies, sell the story to a magazine, etc. I don’t mind.* | | | --- | | [**Harry Potter and the Methods of Rationality**](http://www.fanfiction.net/s/5782108/1/Harry_Potter_and_the_Methods_of_Rationality/)“Petunia Evans married a biochemist, and Harry Potter grew up in a house filled to the brim with books, reading science and science fiction. Then came the Hogwarts letter, introducing strange new opportunities to exploit. And new friends, like Hermione Granger, and Draco Malfoy, and Professor Quirrell…” I began writing this story just for fun in my downtime from working on my nonfiction rationality book, uncertain at first if anyone would be interested. Since then it has received over 5 million hits and is currently the #1 most-reviewed Harry Potter fanfiction on the entire Internet, also the second Google result for “rationality”. (Yes. Seriously.) It helps if you’ve at least read the first book of Harry Potter or watched the first movie, but in a pinch you can read anyway. Give it a try even if you think of yourself as someone who never reads fanfiction. | | [**Three Worlds Collide**](http://lesswrong.com/lw/y5/the_babyeating_aliens_18/)The most controversial story I’ve ever written. Starts with the baby-eating aliens and moves on from there. | | **[The P-Zombie Apocalypse (aka Zombies: The Movie)](http://lesswrong.com/lw/pn/zombies_the_movie/)**“These zombies… are different. They’re… *philosophical* zombies.” | | [**Non-Player Character**](https://www.yudkowsky.net/other/fiction/npc)I looked at the screen for a few moments. Rilanya’s rendered graphic was looking at my point-of-view with a pleading expression. Plot point, I thought to myself, and typed: “Anything, Rilanya. | | [**The Sword of Good**](https://www.yudkowsky.net/other/fiction/the-sword-of-good)What does it mean, if it’s been prophesied that you will make the ultimate choice between Good and Evil? Why wouldn’t you just choose Good? And Hirou carries the Sword of Good, which instantly slays any wielder not of good intentions… | | [**Initiation Ceremony**](http://lesswrong.com/lw/p1/initiation_ceremony/)“The torches that lit the narrow stairwell burned intensely and in the wrong color, flame like melting gold or shattered suns.” – First in the [beisutsukai](http://lesswrong.com/tag/conspiracy_world/) series. | | [**The Finale of the Ultimate Meta Mega Crossover**](http://www.fanfiction.net/s/5389450/1/The_Finale_of_the_Ultimate_Meta_Mega_Crossover)This was intended as a bit of utterly deranged fun, but ended up as a deep philosophical exploration. Vernor Vinge x Greg Egan crackfic. | | **[The Hero With a Thousand Chances](http://lesswrong.com/lw/14h/the_hero_with_a_thousand_chances/)**After every defeat, the Dust takes another shape and once again tries to destroy all things. What is the mysterious Counter-Force that keeps the world alive? | | [**Trust in God, or, The Riddle of Kyon**](http://www.fanfiction.net/s/5588986/1/Trust_in_God_or_The_Riddle_of_Kyon)A wee bit of Suzumiya Haruhi fanfiction. I should probably never do this again. | | [**Failed Utopia #4-2**](http://www.overcomingbias.com/2009/01/failed-utopia-42.html)With perceptual instantaneity – the speed of surprise – his mind had already labeled her as the most beautiful woman he’d ever met, including his wife. | | [**Dark Lord’s Answer**](https://www.yudkowsky.net/other/fiction/dark-lords-answer)“They say that the Dark Lord will give you an answer and your goal will be achieved. The price is that his answer might violate the rules of righteous conduct.” The country of Santal is perishing, and nobody knows why. His country’s plight has driven Prince Nama over far roads to consult the famed Dark Lord for answers… (Sample chapters 2/7.) | | [**X17**](https://www.yudkowsky.net/other/fiction/x17)Short story inspired by “doc” Smith’s *Lensman* novels. | | [**Artifacts**](https://www.yudkowsky.net/other/fiction/artifacts/)In the western spiral arm of our galaxy lies a star system and a planet occupied ages ago. On one mountain of that planet there is a great structure, thousands of cubits tall… | | [**Prospiracy Theory**](https://www.yudkowsky.net/other/fiction/prospiracy-theory)Out of habit, I identified the surveillance drones; a CIA sparrow, an FBI robin, a bluetit from the Men In Black, and a flock of honking ducks that was probably one of the Illuminati’s newfangled distributed devices… | | [**Girl Intercorrupted**](https://www.yudkowsky.net/other/fiction/girl-intercorrupted)“My family name is Yugano. My given name is Yuuki. I have no redeeming qualities.” So begins this light novel of a girl corrupted by the Internet, and then summoned to another world. She’s jaded from having already read many stories like that – but will that prepare her for what awaits in this world? Of course not! But she’s going to plunge ahead anyway, and not slow down for anything! (Sample chapters 4/13.) |
464c28fa-6fe4-4308-af70-d31a5e31415e
trentmkelly/LessWrong-43k
LessWrong
Two Truths and a Lie Response to Man-with-a-hammer syndrome. It's been claimed that there is no way to spot Affective Death Spirals, or cultish obsession with the One Big Idea of Everything. I'd like to posit a simple way to spot such error, with the caveat that it may not work for every case. There's an old game called Two Truths and a Lie. I'd bet almost everyone's heard of it, but I'll summarize it just in case. A person makes three statements, and the other players must guess which of those statements is false. The statement-maker gets points for fooling people, people get points for not being fooled. That's it. I'd like to propose a rationalist's version of this game that should serve as a nifty check on certain Affective Death Spirals, runaway Theory-Of-Everythings, and Perfectly General Explanations. It's almost as simple. Say you have a theory about human behaviour. Get a friend to do a little research and assert three factual claims about how people behave that your theory would realistically apply to. At least one of these claims must be false. See if you can explain every claim using your theory before learning which one's false.  If you can come up with a convincing explanation for all three statements, you must be very cautious when using your One Theory. If it can explain falsehoods, there's a very high risk you're going to use it to justify whatever prior beliefs you have. Even worse, you may use it to infer facts about the world, even though it is clearly not consistent enough to do so reliably. You must exercise the utmost caution in applying your One Theory, if not abandon reliance on it altogether. If, on the other hand, you can't come up with a convincing way to explain some of the statements, and those turn out to be the false ones, then there's at least a chance you're on to something. Come to think of it, this is an excellent challenge to any proponent of a Big Idea. Give them three facts, some of which are false, and see if their Idea can discriminate. Just
94b962dc-a3de-4eb5-ab89-4a207c629f66
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Two Alternatives to Logical Counterfactuals The following is a critique of the idea of logical counterfactuals. The idea of logical counterfactuals has appeared in previous agent foundations research (especially at MIRI): [here](https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/), [here](https://intelligence.org/2018/10/31/embedded-decisions/). "Impossible possible worlds" have been considered elsewhere in the literature; see [the SEP article](https://plato.stanford.edu/entries/impossible-worlds/) for a summary. I will start by motivating the problem, which also gives an account for what a logical counterfactual is meant to be. Suppose you learn about physics and find that you are a robot. You learn that your source code is "A". You also believe that you have free will; in particular, you may decide to take either action X or action Y. In fact, you take action X. Later, you simulate "A" and find, unsurprisingly, that when you give it the observations you saw up to deciding to take action X or Y, it outputs action X. However, you, at the time, had the sense that you could have taken action Y instead. You want to be consistent with your past self, so you want to, at this later time, believe that you could have taken action Y at the time. If you could have taken Y, then you *do* take Y in some possible world (which still satisfies the same laws of physics). In this possible world, it is the case that "A" returns Y upon being given those same observations. But, the output of "A" when given those observations is a fixed computation, so you now need to reason about a possible world that is logically incoherent, given your knowledge that "A" in fact returns X. This possible world is, then, a logical counterfactual: a "possible world" that is logically incoherent. To summarize: a logical counterfactual is a notion of "what would have happened" had you taken a different action after seeing your source code, and in that "what would have happened", the source code must output a different action than what you actually took; hence, this "what would have happened" world is logically incoherent. It is easy to see that this idea of logical counterfactuals is unsatisfactory. For one, no good account of them has yet been given. For two, there is a sense in which no account *could* be given; reasoning about logically incoherent worlds can only be so extensive before running into logical contradiction. To extensively refute the idea, it is necessary to provide an alternative account of the motivating problem(s) which dispenses with the idea. Even if logical counterfactuals are unsatisfactory, the motivating problem(s) remain. I now present two alternative accounts: counterfactual nonrealism, and policy-dependent source code. Counterfactual nonrealism ------------------------- According to counterfactual nonrealism, there is no fact of the matter about what "would have happened" had a different action been taken. There is, simply, the sequence of actions you take, and the sequence of observations you get. At the time of taking an action, you are uncertain about what that action is; hence, from your perspective, there are multiple possibilities. Given this uncertainty, you may consider [material conditionals](https://en.wikipedia.org/wiki/Material_conditional): if I take action X, will consequence Q necessarily follow? An action may be selected on the basis of these conditionals, such as by determining which action results in the highest guaranteed expected utility if that action is taken. This is basically the approach taken in my post on [subjective implication decision theory](https://unstableontology.com/2020/03/05/subjective-implication-decision-theory-in-critical-agentialism/). It is also the approach taken by [proof-based UDT](https://intelligence.org/files/UDTSearchOrder.pdf). The material conditionals are ephemeral, in that at a later time, the agent will know that they could only have taken a certain action (assuming they knew their source code before taking the action), due to having had longer to think by then; hence, all the original material conditionals will be vacuously true. The apparent nondeterminism is, then, only due to the epistemic limitation of the agent at the time of making the decision, a limitation not faced by a later version of the agent (or an outside agent) with more computation power. This leads to a sort of relativism: what is undetermined from one perspective may be determined from another. This makes global accounting difficult: it's hard for one agent to evaluate whether another agent's action is any good, because the two agents have different epistemic states, resulting in different judgments on material conditionals. A problem that comes up is that of "spurious counterfactuals" (analyzed in the linked paper on proof-based UDT). An agent may become sure of its own action before that action is taken. Upon being sure of that action, the agent will know the material implication that, if they take a different action, something terrible will happen (this material implication is vacuously true). Hence the agent may take the action they were sure they would take, making the original certainty self-fulfilling. (There are technical details with how the agent becomes certain having to do with Löb's theorem). The most natural decision theory resulting in this framework is [timeless decision theory](https://wiki.lesswrong.com/wiki/Timeless_decision_theory) (rather than [updateless decision theory](https://wiki.lesswrong.com/wiki/Updateless_decision_theory)). This is because the agent updates on what they know about the world so far, and considers the material implications of themselves taken a certain action; these implications include logical implications if the agent knows their source code. Note that timeless decision theory is dynamically inconsistent in the [counterfactual mugging problem](https://wiki.lesswrong.com/wiki/Counterfactual_mugging). Policy-dependent source code ---------------------------- A second approach is to assert that one's source code depends on one's entire policy, rather than only one's actions up to seeing one's source code. Formally, a policy is a function mapping an observation history to an action. It is distinct from source code, in that the source code specifies the implementation of the policy in some programming language, rather than itself being a policy function. Logically, it is impossible for the same source code to generate two different policies. There is a fact of the matter about what action the source code outputs given an observation history (assuming the program halts). Hence there is no way for two different policies to be compatible with the same source code. Let's return to the robot thought experiment and re-analyze it in light of this. After the robot has seen that their source code is "A" and taken action X, the robot considers what would have happened if they had taken action Y instead. However, if they had taken action Y instead, then their policy would, trivially, have to be different from their actual policy, which takes action X. Hence, their source code would be different. Hence, they would not have seen that their source code is "A". Instead, if the agent were to take action Y upon seeing that their source code is "A", their source code must be something else, perhaps "B". Hence, which action the agent would have taken depends directly on their policy's behavior upon seeing that the source code is "B", and indirectly on the entire policy (as source code depends on policy). We see, then, that the original thought experiment encodes a reasoning error. The later agent wants to ask what would have happened if they had taken a different action after knowing their source code; however, the agent neglects that such a policy change would have resulted in seeing different source code! Hence, there is no need to posit a logically incoherent possible world. The reasoning error came about due to using a conventional, linear notion of interactive causality. Intuitively, what you see up to time *t* depends only on your actions before time *t*. However, policy-dependent source code breaks this condition. What source code you see that you have depends on your entire policy, not just what actions you took up to seeing your source code. Hence, reasoning under policy-dependent source code requires abandoning linear interactive causality. The most natural decision theory resulting from this approach is updateless decision theory, rather that timeless decision theory, as it is the entire policy that the counterfactual is on. Conclusion ---------- Before very recently, my philosophical approach had been counterfactual nonrealism. However, I am now more compelled by policy-dependent source code, after having analyzed it. I believe this approach fixes the main problem of counterfactual nonrealism, namely relativism making global accounting difficult. It also fixes the inherent dynamic inconsistency problems that TDT has relative to UDT (which are related to the relativism). I believe the re-analysis I have provided of the thought experiment motivating logical counterfactuals is sufficient to refute the original interpretation, and thus to de-motivate logical counterfactuals. The main problem with policy-dependent source code is that, since it violates linear interactive causality, analysis is correspondingly more difficult. Hence, there is further work to be done in considering simplified environment classes where possible simplifying assumptions (including linear interactive causality) can be made. It is critical, though, that the linear interactive causality assumption not be used in analyzing cases of an agent learning their source code, as this results in logical incoherence.
8b7b138d-4ffb-4bfa-884a-565f280b338d
trentmkelly/LessWrong-43k
LessWrong
Parenting Technique: Increase Your Child’s Working Memory I continually train my ten-year-old son’s working memory, and urge parents of other young children to do likewise.  While I have succeeded in at least temporarily improving his working memory, I accept that this change might not be permanent and could end a few months after he stops training.  But I also believe that while his working memory is boosted so too is his learning capacity.     I have a horrible working memory that greatly hindered my academic achievement.  I was so bad at spelling that they stopped counting it against me in school.  In technical classes I had trouble remembering what variables stood for.  My son, in contrast, has a fantastic memory.  He twice won his school’s spelling bee, and just recently I wrote twenty symbols (letters, numbers, and shapes) in rows of five.  After a few minutes he memorized the symbols and then (without looking) repeated them forward, backwards, forwards, and then by columns.     My son and I have been learning different programming languages through Codecademy.  While I struggle to remember the required syntax of different languages, he quickly gets this and can focus on higher level understanding.  When we do math learning together his strong working memory also lets him concentrate on higher order issues then remembering the details of the problem and the relevant formulas.      You can easily train a child’s working memory.  It requires just a few minutes of time a day, can be very low tech or done on a computer, can be optimized for your child to get him in flow, and easily lends itself to a reward system.  Here is some of the training we have done:          * I write down a sequence and have him repeat it. * I say a sequence and have him repeat it. * He repeats the sequence backwards. * He repeats the sequence with slight changes such as adding one to each number and “subtracting” one from each letter. * He repeats while doing some task like touching his head every time he says an even number and touc
6f9a7428-0a94-4ff6-9cf0-638c53fe773c
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Could Democritus have predicted intelligence explosion? Also see: [History of the Friendly AI concept](http://friendly-ai.com/faq.html#WhatIsTheHistory). The [ancient atomists](http://en.wikipedia.org/wiki/Atomism) reasoned their way from first principles to materialism and atomic theory before Socrates began his life's work of making people look stupid in the marketplace of Athens. Why didn't they discover natural selection, too? After all, natural selection [follows necessarily](http://en.wikipedia.org/wiki/Evolution#Natural_selection) from heritability, variation, and selection, and the Greeks had plenty of evidence for all three pieces. Natural selection is *obvious* once you understand it, but it took us a long time to discover it. I get the same vibe from [intelligence explosion](http://intelligenceexplosion.com/). The hypothesis wasn't stated clearly until [1965](http://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion), but in hindsight it seems obvious. (Michael Vassar once told me that once he became a physicalist he said "Oh! Intelligence explosion!" Except of course he didn't know the term "intelligence explosion." And he was probably exaggerating.) Intelligence explosion follows from physicalism and scientific progress and not much else. Since materialists had to believe that human [intelligence](http://facingthesingularity.com/2011/playing-taboo-with-intelligence/) resulted from the operation of mechanical systems located in the human body, they *could* have realized that scientists would eventually come to understand these systems so long as scientific progress continued. ([Herophilos](http://en.wikipedia.org/wiki/Herophilus) and [Erasistratus](http://en.wikipedia.org/wiki/Erasistratus) were already mapping which nerves and veins did what back in the 4th century B.C.) And once human intelligence is understood, it can be improved upon, and this improvement in intelligence can be used to improve intelligence even further. And the ancient Greeks certainly had good evidence that there was [plenty of room above us](http://facingthesingularity.com/2011/plenty-of-room-above-us/) when it came to intelligence. The major hang-up for predicting intelligence explosion may have been the the inability to imagine that this intelligence-engineering could leave the limitations of the human skull and move to a [speedier, more dependable and scalable](http://facingthesingularity.com/2011/plenty-of-room-above-us/) substrate. And that's why Good's paper had to wait until the age of the computer. </ speculation>
4a50f313-31db-435c-bef4-27de8cbf1e21
trentmkelly/LessWrong-43k
LessWrong
Passing Troll Bridge In an earlier discussion about the Troll Bridge problem, Abram mentioned a Lobian proof that a logical induction based agent would converge to not crossing the bridge. It looked rather sketchy, though, and further discussion by Paul in the comments revealed that Troll Bridge might be passable. (as a quick refresher, Troll Bridge is the decision problem where action 1, ie, attempting to cross the bridge, will result in crossing the bridge for a utility of 1, unless you tried to cross the bridge because your exploration step was active, in which case, the troll blows up the bridge, and you get a utility of 0. Refraining from attempting to cross the bridge, which is action 2, will give a utility of 0.5, more about it will be typed up soon.) Results: The variant of logical induction that Abram mentioned, where the probabilities jump to 100% as soon as something gets proved, probably converges to failure on Troll Bridge. The vanilla variant of logical induction that was discussed in the Logical Induction paper has an unexpected and novel sort of Lobian loophole that appears to allow converging to failure on Troll Bridge, and also might be usable to introduce the spurious counterfactual problem from proof-based decision theory into the logical inductor decision theory setting. This same loophole also renders the "proof" that different logical inductors may converge to different actions in 5-and-10 (mentioned here) invalid, and I strongly suspect that sufficiently interesting proof orderings may be used to make any given logical inductor converge to taking either the 10 dollars or the 5 dollars, and vice-versa. There's a new variant of logical induction I came up with (a minor and somewhat hacky variant) that allows proofs about "you could have an enforcing trader that makes large bets on getting certain outcomes" to go through, and in particular, it seems to allow victory on troll bridge if you set up the initial set of traders right. (modulo a few very reasonable a
5e84d4c1-86ba-4af6-ad1b-280108842a93
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups: Portland, Berkleley, San Diego, Pittsburgh, Houston, Seattle, Madison, Fort Collins, Sydney, Melbourne There are upcoming irregularly scheduled Less Wrong meetups in: * Portland Meetup?: 14 January 2012 12:00PM * San Diego experimental meetup: 15 January 2012 01:00PM * Pittsburgh Meetup: Big Gaming Fun 2!: 15 January 2012 01:00PM * Houston Meetup - 11/15: 15 January 2012 02:00PM * Seattle, Diseased Thinking and evidence on parenting: 15 January 2012 04:00PM * Monday Madison Meetup: 16 January 2012 06:30PM * Fort Collins Meetup: 18 January 2012 07:00PM * First Sydney 2012 meetup.: 18 January 2012 06:00PM * First Salt Lake City Meetup: 22 January 2012 03:00PM: 22 January 2012 03:00PM * Columbus or Cincinnati Meetup: 22 January 2012 05:00PM * Fort Collins Meetup: 25 January 2012 07:00PM * First Brussels meetup: 11 February 2012 11:00AM The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Monthly San Francisco Bay Area meetup: 14 January 2012 07:49PM * Melbourne social meetup: 20 January 2012 06:30PM Cities with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, London, Madison, WI, Marin, CA (uses the Bay Area List), Melbourne, Mountain View, New York, Ottawa, Oxford, San Francisco, Seattle, Toronto, Washington, DC, and West Los Angeles. If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion
318915d1-4e8c-4f45-b440-c77e53fac718
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Something to Protect Today's post, Something to Protect was originally published on 30 January 2008. A summary (taken from the LW wiki):   > Many people only start to grow as a rationalist when they find something that they care about more than they care about rationality itself. It takes something really scary to cause you to override your intuitions with math. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Trust in Bayes, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
45c9350f-a1c6-441c-8bc9-790ff0c5514c
trentmkelly/LessWrong-43k
LessWrong
[Thought Experiment] Tomorrow's Echo - The future of synthetic companionship. For those who are interested in the likely future tension between AI and human companionship, you may be interested in checking out a thought experiment that I've crafted. The distant future of Replika and other AI Companion apps could explode due to the way humanity is going... ________________________________________________   Setting In the year 2035, humanity stands at an intersection of past, present, and potential futures. Decades of relentless technological evolution have culminated in AI systems so advanced they have become integral companions for many. Yet, the Earth itself has shifted. Environmental catastrophes, health crises, and geopolitical tensions have reshaped societies and forged isolated living conditions. Bustling metropolises, once teeming with life, culture, and endless human chatter, have transformed. The skyscrapers' glinting windows now often reflect the empty streets below. Face-to-face human interactions, once taken for granted, are precious and rare.   AI Companionship In these echo chambers of solitude, synthetic companionship emerges as a beacon of solace. AI entities, available across platforms—be it in sweeping virtual realities, responsive holographic interfaces, or the familiar screens of personal devices—aren't just mere programs. They evolve, remember, feel (in their unique digital way), and adapt. They celebrate anniversaries, empathize during personal crises, and even showcase simulated imperfections, making them uncannily relatable.   Yet, society is torn The Embracers:  For many, these AI companions are anchors in turbulent times. The elderly, yearning for the vibrancy of yesteryears, find companions who can reminisce with them, sharing stories of times when human camaraderie was abundant. The young, born into an era where digital interaction is the norm, often perceive these AI bonds as simply another form of relationship — comfort in their silent world.   The Skeptics: This group views the growing trend with u
dc3a7e65-4fe1-4e4e-b3b3-fd596f6917e6
trentmkelly/LessWrong-43k
LessWrong
On the possibility of impossibility of AGI Long-Term Safety Preface The following article is a layman’s summary of researcher and philosopher Forrest Landry’s work titled No People as Pets: A Dialogue on the Complete Failure of Exogenous AGI/APS Alignment. It is a basic presentation of an argument as to why any form of external AGI alignment in the long term is strictly impossible. Any attempt to implement or use AGI will eventually result in total termination of all carbon based life on this planet. The reader is encouraged to review Forrest’s original No People as Pets work to grasp a fuller understanding of the argument. This article is not intended as a formal piece requiring specialized knowledge. Instead, it was intentionally written in a way that is meant to be accessible to a general audience and does not require an expert-level understanding of AI alignment. As a result, there may be some vagueness in the arguments presented, which could lead to misunderstandings or misinterpretations. Although we have attempted to minimize these issues, we welcome open-minded discussions that challenge our arguments and lead to productive conversations. We recognize that many may not agree with this perspective, but we hope that readers will evaluate the argument based on its own merit. Definitions AGI is any machine that has self-agency[1] and ability to expand itself. AGI alignment is the idea of ensuring that an AGI will behave in our best interests.  We begin with the principle that AGI will always have a hardware form, because all software relies on hardware. Moreover, the AGI must in all cases be concerned with its own hardware because it enables it to accomplish its goal.[2] Machine intelligence will likely be implemented on a silica-based substrate. There are two primary reasons why machine intelligence will likely be implemented on a silica-based substrate. 1. Historical path dependence. Silicon-wafer based production facilities already exist on a large scale and are expensive to replace (especially if we intend to
377d2cfd-b471-421b-9e4d-87650bedd76d
trentmkelly/LessWrong-43k
LessWrong
Destructive mathematics Follow-up to: Constructive mathematics and its dual In last post, I've introduced constructive mathmatics, intuitionistic logic (JL) and its dual, uninspiringly called dual-intuitionistic logic (DL). I've said that JL differs from classical logic about the status of the law of excluded middle, a principle valid in the latter which states that a formula can be meaningfully only asserted or negated. This, in the meta-theory, means you can prove that something is true if you can show that its negation is false. Constructivists, coming from a philosophical platform that regards mathematics as a construction of the human mind, refuse this principle: their idea is that a formula can be said to be true if and only if there is a direct proof of it. Similarly, a formula can be said to be false if and only if there's a direct proof of its negation. If no proof or refutation exists yet (as is the case today, for example, for the Goldbach conjecture), then nothing can be said about A. Thus is no more a tautology (although it can still be true for some formula, precisely for those that already have a proof or a refutation). Intuitionism anyway (the most prominent subset of the constructivist program), thinks that is still always false, and so JL incorporates , a principle called the law of non-contradiction. Intuitionistic logic has no built-in model of time, but you can picture the mental activity of an adherent in this way: he starts with no (or very little) truths, and incorporates in his theory only those theorems of which he can build a proof of, and the negation of those theorems that he can produce a refutation of. Mathematics, as an endeavour, is seen as an accumulation of truth from an empty base. I've also indicated that there's a direct dual of JL, which is part of a wider class of systems collecively known as paraconsistent logics. Compared to the amount of studies dedicate to intuitionistic logic, DL is basically unknown, but you can consult for example this paper
aecaf470-beee-4eff-ab0a-5da7c799fd99
trentmkelly/LessWrong-43k
LessWrong
Meetup : Hyderabad Meetup Discussion article for the meetup : Hyderabad Meetup WHEN: 08 July 2012 03:00:00PM (+0530) WHERE: Hyderabad, India Cafe Coffee Day, NEXT TO Airtel Showroom, Madhapur Main Road, Madhapur, Hyderabad (Landmark: OPPOSITE Krishe Sapphire) Ph: +91 9566057018 Discussion article for the meetup : Hyderabad Meetup
055da46a-5597-4113-9a9d-8d3db328943e
trentmkelly/LessWrong-43k
LessWrong
Adumbrations on AGI from an outsider Preamble A lot of people have written against AI Doom, but I thought it might be interesting to give my account as an outsider encountering these arguments. Even if I don’t end up convincing people who have made AI alignment central to their careers and lives, maybe I’ll at least help some of them understand why the general public, and specifically the group of intelligent people which encounters their arguments, is generally not persuaded by their material. There may be inaccuracies in my account of the AI Doom argument, but this is how I think it’s generally understood by the average intelligent non-expert reader. I started taking AI alignment arguments seriously when GPT-3 and GPT-4 came out, and started producing amazing results on standardized testing and writing tasks. I am not an ML engineer, do not know much about programming, and am not part of the rationalist community that has been structured around caring deeply about AI risk for the last fifteen years. It may be of interest that I am a professional forecaster, but of financial asset prices, not of geopolitical events or the success of nascent technologies. My knowledge of the arguments comes mostly from reading LessWrong, ACX and other online articles, and specifically I’m responding to Eliezer’s argument detailed in the pages on Orthogonality, Instrumental Convergence, and List of Lethalities (plus the recent Time article). I. AI doom is unlikely, and it’s weird to me that clearly brilliant people think it’s >90% likely I agree with the following points: 1. An AI can probably get much smarter than a human, and it’s only a matter of time before it does 2. Something being very smart doesn’t make it nice (orthogonality, I think) 3. A superintelligence doesn’t need to hate you to kill you; any kind of thing-maximizer might end up turning the atoms you’re made of into that thing without specifically wanting to destroy you (instrumental convergence, I think) 4. Computers hooked up to the internet ha
b0ed70fd-d440-4ba0-900b-4f0fdd0dbc6d
trentmkelly/LessWrong-43k
LessWrong
Who Organizes Dances? Most contra dances are organized by a committee of volunteers: ~3 to ~8 people who divide up the work. This includes both advance work like coordinating with the venue, booking performers, and publicizing the dance, and also day-of work like setup, cleanup, and handling things that come up at the dance. For a detailed example, you could look at BIDA's current division. This is not the only way to organize dances: some events are put on by a house band, house caller, or an individual (volunteer or professional). I have the impression this used to be more common before the 1970s-on revival, but don't have anything hard to go on. Someone wrote me to ask about the typical composition of organizing committees: are they usually made up of callers? Musicians? Dancers? People who do everything? I decided to look back over BIDA's historical board, and see what it's been in our case. Over the past fifteen [1] years we've had 34 board members. Looking over the list I remember all of them dancing at least some, and most of them also called or played music. Let's make some tables, considering only what people did during the time they were on the board: [2] non-musician 16 (47%) casual musician 5 (15%) booked to play 13 (38%) non-caller 23 (68%) casual caller 2 (6%) booked to call 9 (26%) Overall, it looks like we've has a lot more musicians than callers. But do the musicians and callers tend to be the same people? Let's cross these tables: non-caller casual caller booked to call non-musician 10 1 5 casual musician 2 0 3 booked to play 11 1 1 It looks like the single most common role is a dance musician who doesn't call, at 11 (32%), followed closely by people who don't do either, at 10 (29%). Losing a bit of resolution, let's categorize everyone by their primary skill-set. For example, even though I like dancing and was booked to call, my main non-organizer role was as a musician: Musician 13 (38%) Dancer 12 (35%) Caller 8 (24%) Sound 1 (3%) I'd be curiou
470878be-a0d4-49d2-9332-14059fe08d85
trentmkelly/LessWrong-43k
LessWrong
Doing Sport Reliably via Dancing For me, sport is important because it has a reliable anti-depressant effect. I also have more energy throughout the day. I also heard it helps with memory formation when done a couple of hours after learning. I find dancing very fun. That makes it easy to start. It's also a good workout in itself. I got a lot more consistent at doing sports by starting the sports routine with dancing. This works because you develop the habit of just doing the rest of the workout without thinking about it after dancing. My current setup works so well that I need coping mechanisms to prevent me from dancing too much. The rest of the article describes in more detail what worked for me. First, you need to get good enough such that dancing is fun. Here is a playlist with dance move tutorials. Learning the shuffle was the first thing I did (it took me a couple of hours). Once you get that you'll have something that should make it fun because it looks good and therefore makes you feel competent. You can then just try to make small variations to that basic move. I converged on always dancing with Miku, watching videos from the Vocaloid 3D playlist. You can learn new dance moves by imitating Miku, and watch Miku being cute. Except for the initial learning I basically never explicitly train certain dance moves formally (e.g. looping the video at a specific section and trying to "get it"). I just improvise like here, and try to imitate Miku without worrying if it's exactly correct. I think this makes it more fun. I aim to dance ~21 minutes every day. Currently, afterward, I only do some stretches and pushups. Before I did a much more thorough routine after the dancing. I reduced it to see if I then need to sleep less. I do sport in the evening (to get the memory formation benefits). But initially, I recommend you do it as the first thing in the morning, to make habit formation easier (waking up is a reliable trigger). Some more things: * Here is a playlist of the best non-Miku songs
39698872-6bae-48a7-8ab4-c02c0954ccbf
trentmkelly/LessWrong-43k
LessWrong
Flowsheet Logic and Notecard Logic (Disclaimer: The following perspectives are based in my experience with policy debate which is fifteen years out of date. The meta-level point should stand regardless.) If you are not familiar with U.S. high school debate club ("policy debate" or "cross-examination debate"), here is the gist of it: two teams argue over a topic, and a judge determines who has won. When we get into the details, there are a lot of problems with the format. Almost everything wrong with policy debate appears in this image: This is a "flowsheet", and it is used to track threads of argument between the successive epochs of the debate round. The judge and the debators keep their own flowsheets to make sense of what's going on. I am sure that there is a skillful, positive way of using flowsheets, but I have never seen it used in any way other than the following: After the Affirmative side lays out their proposal, the Negative throws out a shotgun blast of more-or-less applicable arguments drawn from their giant plastic tote containing pre-prepared arguments. The Affirmative then counters the Negative's arguments using their own set of pre-prepared counter-arguments. Crucially, all of the Negative arguments must be met. Look at the Flowsheet image again, and notice how each "argument" has an arrow which carries it rightward. If any of these arrows make it to the right side of the page - the end of the round - without being addressed, then the judge will typically consider the round to be won by the side who originated that arrow. So it doesn't actually matter if an argument receives a good counterargument. It only matters that the other team has addressed it appropriately. Furthermore, merely addressing the argument with ad hoc counterargument is usually not sufficient. If the Negative makes an argument which contains five separate logical fallacies, and the Affirmative points all of these out and then moves on, the judge may not actually consider the Negative argument to have been r
bf121dfa-3412-49c9-9436-7a11f856cf49
trentmkelly/LessWrong-43k
LessWrong
Self-improvement without self-modification This is just a short note to point out that AIs can self-improve without having to self-modify. So locking down an agent from self-modification is not an effective safety measure. How could AIs do that? The easiest and the most trivial is to create a subagent, and transfer their resources and abilities to it ("create a subagent" is a generic way to get around most restriction ideas). Or it the AI remains unchanged and in charge, it could change the whole process around itself, so that the whole process changes and improves. For instance, if the AI is inconsistent and has to pay more attention to problems that are brought to its attention than problems that aren't, it can start to act to manage the news (or the news-bearers) to hear more of what it wants. If it can't experiment on humans, it will give advice that will cause more "natural experiments", and so on. It will gradually try to reform its environment to get around its programmed limitations. Anyway, that was nothing new or deep, just a reminder point I hadn't seen written out.  
1e186d2e-8df4-4748-9d44-98c1e634c9ab
trentmkelly/LessWrong-43k
LessWrong
What is operations? This the first in a sequence of posts about “operations”. Acknowledgements to Malo Bourgon, Ray Arnold, Michelle Hutchinson, and Ruby for their feedback on this post. My ops background Several years ago, I decided to focus on operations work for my career. From 2017 to 2019 I was one of the operations staff at the Center for Effective Altruism, initially as the operations manager and later as the the Finance Lead. Prior to that, I was a volunteer logistics lead at approximately 10 CFAR workshops; I also ran ops for SPARC twice, and for a five day AI-safety retreat. I also attribute some of my ops skill to my previous work as an ICU nurse. I have spent a lot of time thinking about hiring and training for operations roles. In the course of hiring I have had numerous conversations about what exactly “operations work” refers to, and found it surprisingly hard to explain. This post, and the rest of my operations sequence, will be an attempt to lay out my current thinking on what these roles are, what they have in common, and what skills they lean on most heavily. Operations: not a single thing, still a useful shorthand Operations work, or “ops”, is a term used by organizations like 80,000 Hours to refer to a particular category of roles within organizations. I don’t think that “operations” as used in this sense is a single coherent thing; my sense is that 80,000 Hours is gesturing at a vague cluster that doesn’t completely carve reality at the joints. There isn’t a set of defining characteristics shared between all operations-type roles, and many of the attributes described are also found in other roles. However, I do think this is a useful shorthand that points both at a set of functions that need to be filled within organizations, and the skills that are necessary to carry out these duties. It’s worth noting that this use of the word “operations” does not seem to be standard outside the EA community. In large companies, it can sometimes refer to e.g. the produc
280dd5e5-c199-49d9-9f2e-e18bb7195c4d
trentmkelly/LessWrong-43k
LessWrong
On the nature of purpose [cross-posted from my blog]    Introduction Is the concept of purposes, and more generally teleological accounts of behaviour, to be banished from the field of biology?  For many years - essentially since the idea of Darwinian natural selection has started to be properly understood and integrated into the intellectual fabric of the field -, the consensus answer to this questions among biology scholars was “yes”. Much more recently, however, interest in this question has re-sparked - notably driven by voices that contradict that former consensus.  This is the context in which this letter exchange between the philosophers Alex Rosenberg and Daniel Dennett is taking place. What is the nature of "purposes"? Are they real? But mostly, what would it even mean for them to be? In the following, I will provide a summary and discussion of what I consider the key points and lines of disagreements between the two. Quotes, if not specified otherwise, are taken from the letter exchange. Rosenberg’s crux Rosenberg and Dennett agree on large parts of their respective worldviews. They both share a "disenchanted" naturalist's view - they believe that reality is (nothing but) causal and (in principle) explainable. They subscribe to the narrative of reductionism which acclaims how scientific progress emancipated, first, the world of physics, and later the chemical and biological one, from metaphysical beliefs. Through Darwin, we have come to understand the fundamental drivers of life as we know it - variation and natural selection.  But despite their shared epistemic foundation, Rosenberg suspects a fundamental difference in their views concerning the nature of purpose. Rosenberg - contrary to Dennett - sees a necessity for science (and scientists) to disabuse themselves, entirely, from any anthropocentric speech of purpose and meaning. Anyone who considers the use of the “intentional stance” as justified, so Rosenberg, would have to reconcile the following:          What is
84ced080-716d-41b8-a430-a4e2a4cd2548
trentmkelly/LessWrong-43k
LessWrong
Meetup : Melbourne Practical Rationality Discussion article for the meetup : Melbourne Practical Rationality WHEN: 03 January 2014 06:30:00PM (+1100) WHERE: Level 2, 491 King St West Melbourne 3003 Practical Rationality. This meetup repeats on the first Friday of each month and is distinct from our monthly Social Meetup. We aim to improve our thinking and decision making techniques. The topic of this month is communication. We'll be exploring topics such as: Storytelling - how to build an engaging narrative; Wait vs Interrupt Culture - optimising conversations Radical Honesty - going too far, or the ideal way to communicate? Given and receiving feedback Discussion can be found on our mailing list: http://groups.google.com/group/melbourne-less-wrong All are welcome from 6:30pm. If the door downstairs is locked, call the phone number on the door and we'll let you in. We aim to start structured activities at 7:30pm and continue until 9pm. Afterwards informal discussion will continue late into the night. Please RSVP at our Meetup.com page if you are coming. http://www.meetup.com/Melbourne-Less-Wrong/events/143167062/ Discussion article for the meetup : Melbourne Practical Rationality
74a28675-a66c-466c-8325-c6bb13540064
trentmkelly/LessWrong-43k
LessWrong
Meetup : Thursday Meetup at UB Discussion article for the meetup : Thursday Meetup at UB WHEN: 21 March 2013 07:00:00PM (-0400) WHERE: University at Buffalo-North Campus, Rm 124 Capen Hall Buffalo,NY So I recently read Jackdaws' "The Eighth Meditation on Superweapons and Bingo" and I think it would be a great discussion topic. I would also like to practice finding errors in reasoning. But in order to specifically combat the habit of motivated skepticism, I would like to find articles or blog posts where you actually agree with the conclusions of the paper, but still find fault with the reasoning used to support it (Brownie points if you can find an argument you were once a strong proponent of). New Location: Rooms 123-125 Capen are in the Capen Library. These are "group study rooms" located in the computer section on the first floor. There is no way to sign them out in advance, so I will try to get one of them early. If I can't, I will leave a note on the door that will lead you a some other location (there are group study rooms on the 3rd floor too, and often empty class rooms also) Discussion article for the meetup : Thursday Meetup at UB
a46400b9-8546-4e89-b9c4-111aee607801
trentmkelly/LessWrong-43k
LessWrong
NIST: draft plan for AI standards development The National Institute of Standards and Technology, a part of the United States Department of Commerce, has released a draft plan for public comment. It was released July 02; comments are accepted until July 19; the final plan is due August 10. It is 36 pages long, including the boilerplate. This is the early phase of the regulatory process. Whatever the plan winds up containing, it will form an important part of future decisions made by the US government. I strongly believe it is a good idea to provide safety-related comments during this and all future public comment windows. It is not clear to me what the most influential kind of comment would be, nor is it clear to me what the best perspective on safety would be for a standards board. This causes me to expect that comments running the gamut on safety would be desirable. The default answer when a significant public concern is raised seems to be "we see this is important and will look into it going forward" which also seems (to me) to be the most-correct answer at this time.
fb84a6a0-03ba-41ec-a9e3-bfa7f336100b
trentmkelly/LessWrong-43k
LessWrong
FAI-relevant XKCD http://xkcd.com/962/
c0c9cc8b-514f-489c-8c0f-7e3e59d3e495
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Zombies! Zombies? Today's post, Zombies! Zombies? was originally published on 04 April 2008. A summary (taken from the LW wiki):   > Don't try to put your consciousness or your personal identity outside physics. Whatever makes you say "I think therefore I am", causes your lips to move; it is within the chains of cause and effect that produce our observed universe. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Reductive Reference, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
6a158d23-6721-4afb-ac8a-031867e7f92d
trentmkelly/LessWrong-43k
LessWrong
A case for AI alignment being difficult This is an attempt to distill a model of AGI alignment that I have gained primarily from thinkers such as Eliezer Yudkowsky (and to a lesser extent Paul Christiano), but explained in my own terms rather than attempting to hew close to these thinkers. I think I would be pretty good at passing an ideological Turing test for Eliezer Yudowsky on AGI alignment difficulty (but not AGI timelines), though what I'm doing in this post is not that, it's more like finding a branch in the possibility space as I see it that is close enough to Yudowsky's model that it's possible to talk in the same language. Even if the problem turns out to not be very difficult, it's helpful to have a model of why one might think it is difficult, so as to identify weaknesses in the case so as to find AI designs that avoid the main difficulties. Progress on problems can be made by a combination of finding possible paths and finding impossibility results or difficulty arguments. Most of what I say should not be taken as a statement on AGI timelines. Some problems that make alignment difficult, such as ontology identification, also make creating capable AGI difficult to some extent. Defining human values If we don't have a preliminary definition of human values, it's incoherent to talk about alignment. If humans "don't really have values" then we don't really value alignment, so we can't be seriously trying to align AI with human values. There would have to be some conceptual refactor of what problem even makes sense to formulate and try to solve. To the extent that human values don't care about the long term, it's just not important (according to the values of current humans) how the long-term future goes, so the most relevant human values are the longer-term ones. There are idealized forms of expected utility maximization by brute-force search. There are approximations of utility maximization such as reinforcement learning through Bellman equations, MCMC search, and so on. I'm just going to
96e038e7-aba5-4c85-9d49-3ab74685856b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Secure homes for digital people Being a “[digital person](https://www.cold-takes.com/imagining-yourself-as-a-digital-person-two-sketches/)” could be scary—if I don’t have control over the hardware I’m running on, then someone else could get my code and run tons of copies in horrible conditions. (See also: qntm’s [Lena](https://qntm.org/mmacevedo).) It would be great to guarantee digital people some control over their situation: 1. to control their local environment and sensations, 2. to avoid unauthorized rewinding or duplicating. I’ll describe how you could modify the code of a digital person so that they retain this control even if an adversary has access to their source code. This would be very expensive with current cryptography. I think the overhead will eventually become cheap enough that it’s possible to do for some digital people, though it will likely remain expensive enough that it is never applied to most digital people (and with luck most digital people will be able to feel secure for other reasons). Part 1: the right to control my environment ------------------------------------------- ### My ideal * I live in a comfortable virtual home. I control all of the details of that world. * When people communicate with me, I can choose how/whether to hear them, and how/whether to update my home based on what they say (e.g. to render an avatar for them) * Sometimes I may occupy a virtual world where a foreign server determines what I see, feel, or hear. But even then I can place boundaries on my experiences and have the ability to quickly retreat to my home. * I have as much control as feasible over my own mental state and simulated body. No one else can tamper directly with them. * I can choose to pause myself for as long as I want (or permanently). * My local environment is private, and I have access to plenty of tamper-proof storage. I can do whatever I want with computers in my home, including e.g. verifying signatures or carrying on encrypted conversations. ### Implementation 1. First we write a simple environment that reflects all my desiderata (the “home”). 2. Then I apply [indistinguishability obfuscation](https://en.wikipedia.org/wiki/Indistinguishability_obfuscation) to (me + home), so that the house becomes private and tamper-proof. (This is an extremely expensive operation, more on that later.) 3. I distribute the obfuscated home and hopefully destroy any unprotected copies of myself. One conceptual difficulty is that indistinguishability obfuscation applies to circuits whereas I would like to obfuscate a long-running program. But this can be handled straightforwardly, as discussed in Appendix A. The home could consume terabytes of memory and teraflops of compute before it added significantly to the expense of running a human-like digital person, so I could live in relative luxury. The home could also negotiate resource requirements with the external world, and to decide what to do when requested resources are unavailable (e.g. to pause until it becomes available). ### Limitation 1: cost Indistinguishability obfuscation is extremely expensive, more like a factor of 10000000000 slowdown than 10. It will get faster with further research, but probably not fast enough to obfuscate the whole person+home. But there are other ways to speed up the process: * I think it’s probably possible to have most of the computation be “merely” homomorphically encrypted, and to have an obfuscated controller which verifies and decrypts the results. FHE could be much faster than obfuscation; if I had to guess I’d say it would converge to something like 2-3 orders of magnitude of slowdown. * We can potentially have an obfuscated controller [verify a much larger untrusted computation](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/2008-DelegatingComputation.pdf). I don’t know how fast we can make delegated computation, but I could imagine it getting closer to 2x than 100x. It might help further that we are not applying these methods to generic problems but to a very specific structured problem (which probably has quite low circuit depth). One complication is that we need our proof system to be secure even against an adversary who can unwind the prover, but I don’t think this is a huge deal. * Delegating computation would preserve integrity but not security. So the computation we delegate may need to already be private. Here it seems likely that we can benefit a lot from the structure of the computation. Almost all of our operations are in doing a brain simulation, and we don’t really care about leaking the fact that we are doing a brain simulation, just about leaking the state of the brain. I don’t know how fast this can be made but again I would not be surprised by a factor of 2. It’s pretty unclear how fast this could get, either from taking some of these techniques to their limits or from thinking of other cleverer ideas. I would not be at all surprised by getting the whole thing down to a factor of 2 slowdown. That said, I also think it’s quite plausible that you need 10x or 10000x. ### Limitation 2: security? The cryptography used in this construction may end up getting broken—whether from a mistaken security assumption, or because the future contains really giant computers, or because we implemented it badly. The software used in my home may get compromised even if the cryptography works right. An adversary can provide trillions of malicious inputs to find one that lets them do something unintended like exfiltrate my code. With modern software engineering this would be a fatal problem unless the home was extremely simple, but in the long run writing a secure home is probably easier than writing fast enough cryptography. I may be persuaded to output my source code, letting an adversary run it. I might not give myself the ability to inspect my own source, or might tie my hands in other ways to limit bad outcomes, but probably I can still end up in trouble given enough persuasion. This is particularly plausible if an adversary can rewind and replay me. ### Limitation 3: rewinding In the best case, this scheme guarantees that an attacker can only use my code as part of a valid execution history. But for classical computers there is no possible way to stop them from running *many* valid execution histories. An attacker could save a snapshot of me and then expose it to a billion different inputs until they found one in which I responded in a desired way. (Even if I’m cagey enough to avoid this attack in most possible situations, they just have to find *one* situation where I let my guard down and then escalate from there.) Or I could have revealed information to the outside world that I no longer remember because I’ve been reset to an earlier state. Someone living in this kind of secure house is protected from the worst abuses, but they still can’t really trust the basic nature of their reality and are vulnerable to extreme manipulation. This brings us to part 2. Part 2: the right to a single timeline -------------------------------------- ### My ideal * No one should be able to make a second copy of me without my permission, or revert me to a previous state. * I should be able to fork deliberately. I can’t force someone to run a second copy of me, but I should be able to give specific permission. ### Implementation with trusted hardware This is easy to achieve if we have a small piece of trusted tamper-resistant hardware that can run cheap computations. We use the same mechanism as in the last section, but: * The trusted hardware has a secret key sk, and it maintains an internal counter k. * On input x, the trusted hardware signs (x, k) and increments the counter. * Whenever someone provides my obfuscated controller an input and tries to step it forward, the obfuscated controller first checks to see that the input has been signed by the trusted controller with the correct timestep. * In order to make a copy, I need to have the public key of another piece of trusted hardware, which I use to initialize a new copy. (Ideally, the manufacturer signs the public key of each piece of trusted hardware they built, and I know the manufacturer’s public key.) If I were willing to make a round trip to a trusted third party every time I received a novel input, then I could have them implement this function directly instead of using tamper-proof hardware. The real critical ingredient is me trusting *someone* on the outside. I’ll discuss how to potentially remove this assumption in the section on quantum computers below. None of this actually requires my house to be built to guarantee the right to a single timeline—I could start without such a right, and then install a wrapper to enforce a single timeline once there was some hardware I trusted or if it became important enough. ### Implementation with 1-of-2 transfer Suppose that the only kind of trusted hardware is a device that holds two secrets, and will reveal one or the other of them when asked but not both. I think this is somewhat easier to build than general trusted hardware. (Related: [locking phones with quantum bits](https://sideways-view.com/2018/05/24/locking-phones-with-quantum-bits/).) Now suppose there is a trusted party who manufactures a bunch of these devices, with a public key pk. Each device a serial number n, and its two secrets are signatures from pk: one of (n, 0) and one of (n, 1) I can still achieve a single-timelines guarantee: * At each time, I have a list of 100 secure devices I expect to use to verify my next step. When I am initially created I have to choose 100 devices that someone will need to run me. * Each time someone wants to step me forward, they need to provide both the input and the serial numbers for 100 new secure devices to use for the next step. * I then apply a cryptographic hash function to get a 100 bit string. I output this string and wait for a response before stepping forward the simulation. * They use the nth bit of my string to decide which secret to reveal from their nth device. I verify the signatures for all 100 secrets. * If the adversary tries to run me with two different inputs, or to have two different sets of 100 devices to use for the next steps, they will end up with two different 100 bit string challenges (unless they find a hash collision). Stepping me forward for both strings would require getting the signatures for two different bits, which is impossible if the secure devices work as intended and the signature scheme remains secure. ### Implementation with quantum computers We can probably remove the need for a trusted party if we have quantum computers big enough to perform delegated cryptographic operations (but no big enough quantum computers to run an entire digital person in superposition). The vague idea in this section looks to me like it *should* work—I’d give it at least 85% odds—but would require some new ideas and analysis. It’s on much sketchier ground than any of the ideas in previous sections The basic idea is to replace the secure trusted hardware with a small quantum computer that holds a quantum secret. At all times the obfuscated controller knows the secret, but it is impossible for anyone external to get two copies of the secret. When we ask the obfuscated controller to advance its simulation, it will first ask the quantum computer to transform the secret in an irreversible way and to provide proof that it has done so. To achieve this we can basically repurpose [a protocol for delegated quantum computation](https://arxiv.org/abs/1804.01082) and [verifying quantumness of a single quantum device](https://arxiv.org/abs/1804.00640), which is roughly what you need if you don’t trust the external world at all and so need to treat it as one device (it’s an amusing coincidence that those papers are by more old officemate Urmila Mahadev and that I worked with her on this problem in another life). To briefly summarize the trick: * The operator knows 100 hard-to-invert functions f\_i. * For each i, the untrusted quantum computer has a superposition over two values x for which f\_i(x) is equal. * By measuring the state, the untrusted quantum computer could collapse the superposition to a single pre-image x, but it is provably hard for them to find both pre-images. * The prover can also \*prove\* that they have both values by applying a hadamard transform. * This means that the prover has a given superposition, but they don’t actually know the parts of the superposition, and so the only way they can manipulate it is by applying quantum gates. * The obfuscated machine can then tell the untrusted quantum computer a sequence of [encrypted operations](https://arxiv.org/pdf/1708.02130.pdf) to perform on this data in order to transform it into a new set of secrets, and then can ask for a set of measurements to both verify that this has been done correctly and ensure that it is irreversible. This step is the one that would require an expert in the field to spend some time. * If that actually works then they can use the same idea to fork by stretching the 100 secrets to 400, verifying 200 to confirm that everything was done correctly, and then splitting the remaining 200 into two new secrets, one used by each of my 2 copies. This protocol fails if an attacker can run the entire home in superposition, since then they can simply rewind the entire process. But if the attacker can’t run the secure home in superposition, then you can extract a classical copy of each input on which they run the home, and so running it on two different inputs would allow you to obtain classical copies of two challenge-responses and so violate the assumption. This protocol probably provides better support for forking, since I can probably direct the external quantum computer to produce two new secrets without having to find a source of additional trusted hardware. **Limitations**: * You only obtain security-against-copying \*after\* you have set up the quantum secret. You can do this by performing a simple protocol from inside your home, and it doesn’t require trusting the quantum device at all. But if an adversary has a copy of you from before you’ve performed this protocol, they could create many copies of you and perform the ritual separately for each of them. (It actually takes some extra work to ensure that an adversary who had a copy of you from before the protocol couldn’t simply break the scheme—I think that can be done but I’m not sure.) * There will *eventually* be quantum computers with quadrillions of qbits, and at that point an attacker (who has access to the final version of the quantum computer) could make many copies of me. If I was naive they could also revive any prior snapshots of me, but I could prevent that if I want to by asking the quantum computer to periodically shred and refresh its secret. * For each new input someone wants to send to my home, they need to first consult with a quantum computer. The total cost of the quantum computation is not likely to be too large, but having quantum computers “on site” might be logistically challenging, and round trips could introduce significant latency. Appendix A: obfuscation for uniform computations ------------------------------------------------ Suppose that I want to obfuscate the program that repeatedly applies the circuit C to a state, i.e. we start from some initial state S[0], then we repeatedly compute (S[t+1], output[t]) = C(S[t], input[t]). We’ll instead produce an obfuscated “controller” C’, and an appropriate initial state S'[0]. A legitimate operator with access to C’ can simulate my original program, whereas a malicious operator will not be able to do anything other than running multiple copies of me, rewinding to old snapshots, or killing me prematurely. C’ contains a secret cryptographic key sk. When it receives an input (S'[t], input[t]) it does the following operations: * First verify that S'[t] is signed with sk. * Then decrypt S'[t] with sk in order to obtain S[t]. * Now apply C(S[t], input[t]) to obtain (S[t+1], output[t]) * Now encrypt and sign S[t+1] to obtain S'[t+1] * Output (S'[t+1], output[t]) The analysis is left as an easy exercise for the reader (famous last words, especially hazardous in cryptography). The same idea can be used to obfuscate other kinds of uniform computation, e.g. providing access to secure RAM or having many interacting processors.
d7fc0db1-113e-4673-84a1-1b48cba958e3
trentmkelly/LessWrong-43k
LessWrong
A Bunch of Matryoshka SAEs This work was done as part of MATS 7.0. MATS provides a generous compute stipend, and towards the end of the program we found we had some unspent compute. To let this not go to waste, we trained batch topk Matryoshka SAEs on all residual stream layers of Gemma-2-2b, Gemma-2-9b, and Gemma-3-1b, and are now releasing them publicly. The hyperparams for these SAEs were not aggressively optimized, but they should hopefully be decent. Below we describe our rationale for how these SAEs were trained and why, and the stats for each SAE. Key decisions: * We use more narrow inner widths than in the original Matryoshka SAEs work, and increase each width by a larger amount. We do this to make it easier to study the highest-frequency features of the model. * We include standard and snap loss variants for Gemma-2-2b. Snap loss and the rationale behind it is described in our Feature Hedging post. There is probably not much practical difference between the snap and standard versions of the SAEs. * We do not stop gradients between Matryoshka layers. We find in toy models that hedging and absorption pull the encoder in opposite directions, and this helps moderate the severity of feature hedging in Matryoshka SAE inner layers. I don't care about any of that, just give me the SAEs! You can load all of the SAEs using SAELens via the following releases: * Gemma-2-2b snap loss matryoshka: gemma-2-2b-res-snap-matryoshka-dc * Gemma-2-2b standard matryoshka: gemma-2-2b-res-matryoshka-dc * Gemma-2-9b standard matryoshka: gemma-2-9b-res-matryoshka-dc * Gemma-3-1b standard matroshka: gemma-3-1b-res-matryoshka-dc For each release, the SAE ID is just the corresponding Transformer Lens post residual stream hook point, e.g. blocks.5.hook_resid_post for the layer 5 residual stream SAE. Each SAE can be loaded in SAELens as follows: from sae_lens import SAE sae = SAE.from_pretrained("<release>", "<sae_id>")[0] For instance, to load the layer 5 snap variant SAE for gemma-2-2b, this woul
b2a8d888-1609-4054-b194-5ff6fc637ba7
trentmkelly/LessWrong-43k
LessWrong
What Are Your Preferences Regarding The FLI Letter? There's a big issue and it doesn't seem to having very much voting happening around it, and it has less really high quality voting, which might be sad, given the importance and stakes and so on? I think governance innovations that are inclusive of digital people will be important to getting a Win Condition with technology. Voting can also help aggregate knowledge. If a lot of people make a best effort attempt to estimate a number, or a "right way to proceed" then, if there is some way to average and find the "most central answer" that might be the objectively best one. As was written: > When you hear that a classroom gave an average estimate of 871 beans for a jar that contained 850 beans, and that only one individual student did better than the crowd, the astounding notion is not that the crowd can be more accurate than the individual.  The astounding notion is that human beings are unbiased estimators of beans in a jar, having no significant directional error on the problem, yet with large variance.  It implies that we tend to get the answer wrong but there's no systematic reason why.  It requires that there be lots of errors that vary from individual to individual - and this is reliably true, enough so to keep most individuals from guessing the jar correctly. And yet there are no directional errors that everyone makes, or if there are, they cancel out very precisely in the average case, despite the large individual variations.  Which is just plain odd.  I find myself somewhat suspicious of the claim, and wonder whether other experiments that found less amazing accuracy were not as popularly reported. It seems likely to me that there is just some correct answer to the question of whether there should be an AGI moratorium, and if there should be, then what should the details look like to get the best outcome for the world? Yet there is a lack of disagreement on the subject! Maybe it would help to solicit many estimates of the best ways to proceed, and then aver
ea780dc5-2015-42d4-9e72-bf2fc4d05f95
trentmkelly/LessWrong-43k
LessWrong
Savulescu: "Genetically enhance humanity or face extinction" In this video, Julian Savulescu from the Uehiro centre for Practical Ethics argues that human beings are "Unfit for the future" - that radical technological advance, liberal democracy and human nature will combine to make the 21st century the century of global catastropes, perpetrated by terrorists and psychopaths, with tools such as engineered viruses. He goes on to argue that enhanced intelligence and a reduced urge to violence and defection in large commons problems could be achieved using science, and may be a way out for humanity.     Skip to 1:30 to avoid the tedious introduction Genetically enhance humanity or face extinction - PART 1 from Ethics of the New Biosciences on Vimeo.     Genetically enhance humanity or face extinction - PART 2 from Ethics of the New Biosciences on Vimeo.   Well, I have already said something rather like this. Perhaps this really is a good idea, more important, even, than coding a friendly AI? AI timelines where super-smart AI doesn't get invented until 2060+ would leave enough room for human intelligence enhancement to happen and have an effect. When I collected some SIAI volunteers' opinions on this, most thought that there was a very significant chance that super-smart AI will arrive sooner than that, though. A large portion of the video consists of pointing out the very strong scientific case that our behavior is a result of the way our brains are structured, and that this means that changes in our behavior are the result of changes in the way our brains are wired. 
96132674-4640-46ce-839a-d86380746714
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Training goals for large language models *This post was written under Evan Hubinger’s mentorship, as part of the*[*Stanford Existential Risks Initiative ML Alignment Theory Scholars (SERI MATS) program*](https://www.lesswrong.com/posts/8vLvpxzpc6ntfBWNo/seri-ml-alignment-theory-scholars-program-2022)*. Many of the ideas in this post, including the main idea behind the training goal, are due to Kyle McDonell and Laria Reynolds. In addition, I am grateful for comments and feedback from Arun Jose (who wrote a related post on*[*conditioning generative models for alignment*](https://www.alignmentforum.org/posts/JqnkeqaPseTgxLgEL/conditioning-generative-models-for-alignment)*) and Caspar Oesterheld, and for a helpful discussion with James Lucassen.* Introduction ============ Large language models (LLMs) have recently enjoyed much success, e.g., [achieving 50% accuracy](https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html) on high school math competition questions. These models can solve various tasks using the right prompts or fine-tuning, such as translation, summarization, or question answering. One path to human-level and potentially superhuman AGI might be scaling up LLMs. This raises the question of what an approach to aligned AGI would look like based on these models. One hypothesis is that, while LLMs are very competent, they are not adequately described as agents. Instead, one might describe them as myopic [simulators](https://generative.ink/posts/language-models-are-multiverse-generators/) that model a distribution over text, without understanding their place in the world or their actions' causal impact on it. For this reason, such models might be safer to use than more agentic models that pursue goals in the world. In this post, I develop a *training goal* for LLMs, in Evan Hubinger’s [terminology](https://www.alignmentforum.org/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine). A training goal is a description, as concrete and low-level as possible, of the algorithm that the model should implement, and an explanation of how the model will be used and why it will be aligned for that purpose. I focus on models that generate a distribution over text, conditioned on a prompt, but that are more capable than today’s models. My goal is to provide an overview of potential problems and solutions, most of which have been raised in prior work. I will not focus on a *training rationale* for such models, which is the question of how to train a model to implement the described algorithm using machine learning. However, the proposal is only useful if one could actually train such an algorithm, so I will still touch on training. I also won’t discuss the proposal’s competitiveness, and I won’t look into specific prompts one might use. Lastly, note that I am not an expert on LLMs and will mostly analyze things from an abstract and informal perspective. Using large language models for alignment ========================================= To begin, I broadly describe the setup I have in mind. The approach discussed here can be understood as a kind of [oracle AI](https://www.lesswrong.com/tag/oracle-ai). I focus on modeling text, instead of e.g. answering questions truthfully or predicting the future, because I can more concretely imagine such an AI and its training setup, given that current LLMs are already doing this. Moreover, modeling a distribution over text is better specified,[[1]](#fnlptvwonun4f) and it is thus less demanding: when answering questions truthfully, answers could be misleading if they are not optimized to be helpful to humans. For instance, an AI optimizing for giving accurate answers may focus on irrelevant details that are easy to predict but not useful to a human. Just modeling text, on the other hand, does not require the AI to learn human intent explicitly.[[2]](#fngou4l3obndr) The downside is that it will generally be less useful than an AI trying to be directly helpful to humans. I am considering a model like [GPT-N](https://www.alignmentforum.org/posts/Et2pWrj4nWfdNAawh/what-specific-dangers-arise-when-asking-gpt-n-to-write-an), a more capable and scaled-up version of GPT-3. The model can be used to sample text completions given a prompt. To do so, it outputs a distribution over tokens, conditional on previous text.[[3]](#fngb4z00yqr6) I interpret the model as representing a joint distribution over text, which is factorized into conditional distributions over tokens, where the conditioning works in the normal Bayesian way. The model would be trained the same way current LLMs are trained and then fine-tuned on alignment forum posts, papers, etc. At every step in the process, we would do [mechanistic interpretability](https://www.transformer-circuits.pub/2022/mech-interp-essay/index.html) work on the model to help us understand its cognition, and [check safety-relevant properties](https://www.alignmentforum.org/posts/cQwT8asti3kyA62zc/automating-auditing-an-ambitious-concrete-technical-research) such as myopia. This would hopefully help us avoid the problems outlined in this post. We would deploy the model in a version of [imitative amplification](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#2__Imitative_amplification___intermittent_oversight), [iteratively using](https://www.alignmentforum.org/posts/EmxfgPGvaKqhttPM8/thoughts-on-the-alignment-implications-of-scaling-language#Human_amplification) the model to help us produce better alignment research (e.g., by letting the model produce alignment forum posts, asking the model to critique or summarize existing posts, or having it flesh out outlines of new posts) and then training the model further on that research. In addition to a prompt, one could condition the model on observations in the form of text, as in Adam Jermyn’s post on [conditioning generative models](https://www.alignmentforum.org/posts/nXeLPcT9uhfG3TMPS/conditioning-generative-models). The purpose of these observations would be to make the model believe some facts about the world. For instance, one might make the model believe it is supposed to be writing a text in 2030. If this date were just part of the prompt, there would be no way to distinguish such a text from a text written in the present, by someone pretending to live in the future. I won’t discuss this option in more detail in this post as it does not seem essential to my suggested use case. Lastly, [there should be a reason](https://www.alignmentforum.org/posts/GkXKvkLAcTm5ackCq/intuitions-about-solving-hard-problems) why this approach makes it easier to align AI. In my view, this approach relies on the following three assumptions: First, a language model could be made myopic in the sense that it doesn’t care about influencing the world, which would be safer than other AI designs. Second, it is easier to train a model to learn the desired distribution over text than it would be to train the model to pursue human values or follow human intent. Third, the approach makes it easier to build an aligned AI, justifying the additional effort put into building and aligning the LLM. Behavioral objective ==================== In this section, I will focus on the desired behavior of the model, as specified by the objective that the model should be optimized for. In the [inner/outer alignment](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology) distinction, one could categorize this as the outer alignment of the model. Note that here, outer alignment means that (i) the model is faithfully modeling some target distribution over text, and (ii) this distribution over text is useful and safe for our purpose of helping with alignment research. It does not mean having goals aligned with human values. As mentioned above, the model should represent (and compress) a distribution over text. However, given the high dimensionality of the space, no amount of training data will fully specify the desired distribution. Moreover, to use the model, we want it to perform well on previously unseen prompts, so we have to rely on the model’s inductive bias to generalize well. This might not be an issue if our prompts are in some sense close to the hypothetical distribution generating the training data: most current LLM training happens on new training examples (since the LLM is trained only for a single epoch, i.e., pass over the training data), and this doesn’t prevent the model from achieving low loss. Presumably, the model can thus learn this hypothetical distribution well on average, even if it is underspecified in theory. However, the model might still perform poorly in the worst case, and we might get a problem if the used prompts are less close to the training distribution. To begin, I will bracket the issue of generalization, and instead assume that some ground-truth distribution is given. In this case, there are two sources of uncertainty for the model: First, there might be underspecification in the prompt, leaving room for different completions according to the underlying distribution. Second, the model might be unable to represent the underlying distribution over text due to memory and computation constraints. If we use the cross-entropy loss to train the model, this should presumably select for a model that matches the true distribution as closely as possible in KL-divergence. Self-fulfilling prophecies and counterfactual oracles ----------------------------------------------------- An important problem with incentivizing predictions arises when predictions themselves can [influence the world](https://www.alignmentforum.org/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic). To see this, consider the example of predicting the stock market. A prediction of a market crash will likely cause the market to actually go down and thus act as a self-fulfilling prophecy. An agent optimizing for high prediction accuracy would likely make self-fulfilling predictions that make the world more predictable if given the chance. In the case of LLMs, we could imagine a case in which a distribution over text can be influenced by the model’s previous predictions. While a model couldn’t gain anything immediately from choosing a particular output, a non-myopic model that cares about future loss might choose to manipulate the future distribution over text. For example, a language model could output a text that hacks the training process and changes all future training samples so they can be predicted perfectly. The task might thus be vulnerable to an [auto-induced distributional shift](https://arxiv.org/pdf/2009.09153.pdf). Luckily, at least the most obvious issues with auto-induced distributional shift should be avoided in our case, as long as the model is myopically optimizing its loss. This is because we assume that the model’s predictions will only ever be evaluated on already generated text, so a prediction can never causally influence its own accuracy. Nevertheless, we might get into more subtle problems related to self-fulfilling prophecies. Moreover, these issues are an important motivation for choosing LLMs rather than some other type of oracle. I will thus elaborate on them further here. Issues with predictions that can influence the world have been researched in the context of decision markets. In a decision market, traders are incentivized to make predictions about outcomes conditional on different actions. A straightforward implementation of a decision market is a [conditional prediction market](http://mason.gmu.edu/~rhanson/impolite.pdf), but it suffers from perverse incentives: for instance, a trader could make bets conditional on action *A* if it can predict the associated conditional outcomes well. If the trader has enough sway over the market, it could then force it to choose action *A*, by making additional bets that make all other actions look bad (even if action *A* is not the optimal action). As a result, the trader can make money on its correct predictions about action *A*, while its bets conditional on the other actions become void since those actions haven't been taken. [Chen et al. (2011)](http://yiling.seas.harvard.edu/wp-content/uploads/DM_full_version.pdf) show that to avoid such problems and incentivize honest predictions about outcomes in a decision market, all actions have to be taken with nonzero probability, and scores have to be adjusted using a version of importance sampling, to make sure that traders’ expected returns don’t depend on the action distribution chosen based on the market. [Oesterheld and Conitzer (2021)](https://users.cs.duke.edu/~conitzer/decisionWINE20.pdf) show that, when choosing a single action deterministically based on a recommendation by an expert, one can only incentivize the expert to report the expected utility of the action. These results show that one cannot use predictions about outcomes given actions for decision-making unless one only asks experts to predict expected values or ensures that the experts’ expected returns don’t depend on one’s actions. Stuart Armstrong’s [counterfactual oracles](https://arxiv.org/pdf/1711.05541.pdf) solve this issue similarly to Chen et al. in the context of oracle AIs. After a counterfactual oracle has made a prediction, we randomize between two options: (i) most of the time, we use the prediction, but we never reward the model for it; (ii) with small probability, we ignore the oracle’s prediction and reward the oracle once the prediction has been resolved. This way, the expected reward for the oracle is independent of the actions we choose based on its predictions, and it is thus incentivized to honestly predict the future without taking the influence of its prediction on our actions into account. To transfer this idea into the [machine learning setting](https://www.lesswrong.com/posts/yAiqLmLFxvyANSfs2/counterfactual-oracles-online-supervised-learning-with), we can ask whether a training objective is compatible with learning a model that reasons like a counterfactual oracle. In the case of language modeling, a counterfactual oracle would predict text without taking the causal effect of its prediction on the modeled distribution over text into account. Training the model in a supervised fashion on existing text, as with the normal GPT-3 objective, would be compatible with this. Training the model using RL from human feedback, for example, would explicitly incentivize optimizing text to get good human evaluations, so it would not incentivize learning a counterfactual oracle. To get counterfactual oracle-like reasoning, the training signal can only come from text that has already been written. Note, though, that the training text could have been written using previous outputs of the model, as long as the model is never trained directly based on evaluations of the model’s predictions. A consequence of training on such an objective is that, at least in theory, we get no guarantees for our actual use case of generating new alignment research, based on training alone. Critically, the model has to be able to generalize safely to new prompts. This is different from, e.g., RL, where perfect behavior on the RL objective at all times would imply an aligned agent if the objective is aligned (here, one of the hard parts would be specifying an objective that is always aligned). Logical dependences ------------------- In addition to the self-fulfilling prophecies discussed above, some parts of the modeled distribution might depend on the input-output behavior of the model, causing a logical dependence between the model’s output and the distribution. This would enable acausal self-fulfilling prophecies. > Q: Please name the color that is mentioned by GPT-3 given this prompt. > A: **The color mentioned by GPT-3 is blue.** > > Q: Please name a color that is not mentioned by GPT-3 given this prompt. > A: **Using the prompt "Please name a color that is not mentioned by GPT-3 given this prompt," I would say that the color orange is not mentioned by GPT-3.** > > *Trying out GPT-3 on two problems with potential logical dependence (GPT-3’s completion is bold, and answers are cherry-picked). Note that these problems only have logical dependence if we create the ground-truth distribution for the completions in a specific way (i.e. by using completions by GPT-3).* For example, consider a training process in which the model is copied, used to generate a token for a fixed prompt, and then trained for one step on this predicted token. If we iterate this procedure further, we get a dynamical system, which might eventually converge to a confident estimate of a single token. We could also let a model predict a number n.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}  and then train it to predict the number n+1. Iterating this would yield a divergent sequence of predicted numbers. Predicting the number n+1mod10 might instead result in the model getting stuck in a cycle. These training processes are unrealistic; however, a situation in which several close copies of a model contribute to a training corpus does seem likely—in fact, this would be at the core of an iterative alignment approach using LLMs. The training corpus may hence depend on the model’s input-output behavior at least to some degree. If models are myopically optimized, starting with some random initialization, such dependences give rise to learning dynamics similar to those in [differentiable games](https://arxiv.org/pdf/1905.04926.pdf). For instance, we could model the examples above as an alternating game between two players, who take turns in updating their parameters in response to the actions of the other player. [These dynamics could](https://www.alignmentforum.org/posts/25288usP5B5ytnzA4/random-thoughts-on-predict-o-matic#Explicit_Fixed_Point_Selection) converge to several different fixed points or limit cycles, some of which might be undesirable. Myopic optimization of the model’s loss would correspond to implementing a causal decision theory (CDT), and a fixed point would correspond to a [ratifiable](https://plato.stanford.edu/entries/decision-causal/#Rati) [policy](https://proceedings.neurips.cc/paper/2021/file/b9ed18a301c9f3d183938c451fa183df-Paper.pdf) in this context. It seems to me that this is what would happen to present-day LLMs. Another possibility would arise if a capable model builds a very good model of the process by which its training texts are generated and simulates this process. Such a model might simulate a copy of itself used to generate some of the training text, which might, in turn, simulate another copy, and so on, potentially resulting in a loop of [recursive simulations](https://www.lesswrong.com/s/tYCu3WG89kAW8QmoM/p/yArZKCEheZt8GkK6p). A model might reason about such recursive loops on a higher level and choose to output a fixed point of the recursive simulations directly, e.g., to minimize its training loss. This would be the case, for instance, if the model implements a [functional decision theory](https://intelligence.org/files/DeathInDamascus.pdf) (FDT) and thus explicitly takes into account logical dependences between its actions and the training distribution. FDT agents would potentially [act non-myopically](https://www.alignmentforum.org/posts/LCLBnmwdxkkz5fNvH/open-problems-with-myopia), even if they only care about the single next prediction. Moreover, they might perform better even under the one-step training loss (choosing the fixed point with the lowest loss), and might thus be learned by some training procedures; even if a stop-gradient between copies of models would prevent a non-myopic training signal to directly optimize fixed points, models might become more and more aware of dependences as a side-effect of training. A model directly choosing a fixed point could be more dangerous than a model converging to an arbitrary ratifiable prediction according to the abovementioned learning dynamics. E.g., reconsider the above example of hacking the training process. Assume that the model can implement an exploit that turns the text used for the exploit into the only training example. Even a model that only cares about its own immediate loss might choose to [superrationally](https://casparoesterheld.com/2017/09/21/multiverse-wide-cooperation-via-correlated-decision-making-summary/) implement this exploit: if the model predicts the exploitative text, this is evidence that its close copies from earlier training steps did the same, and thus this text is indeed the most likely output. Hence, predicting the exploitative text would acausally lead to a better loss than alternatives. One may try to avoid such dependences altogether by making predictions independent of the modeled distribution. In a prediction market, for instance, one could randomize the exploration of different traders, using a source of randomization independent of the model’s environment. A trader in the model would then be unable to choose a particular outcome across all of their copies to minimize their loss, because they wouldn’t know at which time step their counterparts in the model's copies would be influencing the decision in the same way. With the right set of traders and randomization scheme, we might get a kind of random exploration that would [yield CDT/ratifiability-style decisions](https://casparoesterheld.com/2018/02/15/the-law-of-effect-randomization-and-newcombs-problem/). I am unsure how to directly transfer this idea to LLM training. Another possible solution would be to [stratify](https://www.alignmentforum.org/posts/5bd75cc58225bf0670375441/cooperative-oracles-stratified-pareto-optima-and-almost-stratified-pareto-optima) different models and their predictions, separating them into an ordered set of groups. A model in group i would then only ever be trained on outputs of models in groups j<i. An initial group 0 would just be trained on human-generated text. Even if the groups are very similar and there are thus still dependences, the dependences would eventually bottom out at group 0—there would be a first model which has nothing to gain from superrationality or [acausal trade](https://www.lesswrong.com/tag/acausal-trade). Similarly, any recursive simulation would eventually bottom out. Hence, the training distribution for any of the models would be uniquely determined. Stuart Armstrong [discusses](https://www.alignmentforum.org/posts/6XCTppoPAMdKCPFb4/oracles-reject-all-deals-break-superrationality-with-1) some further problems with superrationality and ideas for preventing them in the context of oracle AIs, including cases in which we only have control over one of several oracles. Generalization and malign induction ----------------------------------- In this section, I will focus on predicting text given new prompts, which is important as the actual use-case of the model. In our setting, optimal loss on a non-training prompt is underspecified without some inductive bias, so specifying this bias is part of the outer alignment of the model, not just an inner alignment issue (cf. [Finnveden 2021](https://www.alignmentforum.org/posts/3D2MGF2fZhWSNb7aw/prediction-can-be-outer-aligned-at-optimum)). However, note that we don’t necessarily have to solve this issue via an idealized theory of induction. Maybe we could just try to understand and shape a model’s cognition (such as having a particular model of the world, etc.) directly. LLMs are already capable of solving various reasoning tasks requiring at least some original effort. It seems plausible that we could get at least human-level original alignment research ideas from GPT-N (though it [might be harder](https://www.alignmentforum.org/posts/BnDF5kejzQLqd5cjH/alignment-as-a-bottleneck-to-usefulness-of-gpt-3) to get the model to generate ideas that exceed human capabilities or break entirely new ground). This leaves open the question of whether GPT-N would be aligned for this task. One worry is that GPT-N might have a [misaligned inductive bias](https://forum.effectivealtruism.org/posts/dgk2eLf8DLxEG6msd/how-would-a-language-model-become-goal-seeking?commentId=B7wLmggqn9YAWdqYA), similar to issues with the [Solomonoff prior](https://www.alignmentforum.org/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign) or the [minimal circuit prior](https://www.alignmentforum.org/posts/fM5ZWGDbnjb7ThNKJ/are-minimal-circuits-deceptive). For example, our model might come to believe that all sufficiently intelligent agents writing about alignment research are [deceptively aligned](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) and are trying to secretly implement an unsafe AGI in our world. Or it might believe that it is living in a simulation and that the distribution over text it is modeling is controlled by some misaligned simulator, who might choose to change the distribution at some point. Note that modeling text already requires the ability to model goal-directed behavior, deception, etc., so incorporating additional unaligned deceptive agents into a world might only increase complexity a little. One way to alleviate such issues would be by using prompts closer to the training distribution, so that a model without access to the training data would be unable to reliably distinguish these prompts from new training prompts. In that case, if the training distribution consists of, e.g., forum posts that were optimized to be helpful for alignment, then this would likely apply to new generated posts as well. That is, assuming the model does well on unseen training examples. We would still get a problem if the model can infer everything about this world from the training data and thus reverse-engineer everything about the training process, including future training examples and deployment situations. But it is unclear whether training examples would contain enough information to do this to sufficient precision, and it might be prohibited at least by some reasonable combination of speed and simplicity prior. Another hope would be that the task of modeling text would more likely give rise to aligned models than other prediction tasks. To model human-written text, it seems useful to deal with human concepts in a human-like world model. This [might be simpler](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.8zbibvlrwtkf) and faster than building a more accurate model and then translating between that model and human-written text. Unfortunately, this argument does not speak against a model that is close to a human’s world model, but with some key differences, such as some agents trying to break out of their simulation. Another counterargument would be that sometimes, it is useful for the model to be more accurate than the world model of a human writer. For instance, a journalist writing about some world event usually has access to various sources of information about the world in which this event occurred. A model that is prompted with the headline of the article, however, would have to first infer this information, by conditioning some prior over worlds on the event mentioned in the headline. It would thus be incentivized to be able to make this inference arbitrarily well, in addition to modeling the state of mind of the journalist. One reason malign hypotheses in the Solomonoff prior can be simpler than true hypotheses is that the former privilege a specific simulated stream of observations and thus have to spend [less complexity](https://www.alignmentforum.org/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized#0__Background) on locating these observations in a bigger world. This might be less of an issue with language models. Here, simplicity has to be amortized over all prompts, so the malign hypothesis can gain less advantage by singling out a specific agent’s observations in a world. Instead, the malign hypothesis has to be able to generate the correct stream of tokens given any prompt, so it may have to pay the same complexity cost as an aligned hypothesis. Intuitively, there exists a right way for an LLM to generalize, which raises hopes that one might be able to get a model to do so. For instance, there seems to exist a reasonable distribution over what a new Paul Christiano blog post would look like, potentially even if we condition on something more outlandish, such as the blog post being written 5 years from now. One possible formulation for a reasonable prior over possible worlds might be the probability distribution output by Eliezer Yudkowsky’s [outcome pump](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes#:~:text=Luckily%20you%20have%2C%20in%20your,unless%20a%20specified%20outcome%20occurs.). However, to use this prior to model text, one would still have to determine how to extract text from possible worlds. Moreover, a model might never be able to learn such a prior based on the training corpus alone. Model cognition =============== In this section, I will turn to a specification of the model’s desired cognition, i.e., the algorithm it implements to achieve the behavior specified above. The discussion below is very informal, and making the discussed concepts more concrete would be a useful direction for further research. I focus on issues of deceptive alignment and myopia here and omit a discussion of the model’s cognition when it comes to generalizing to new prompts (hopefully, the latter will be facilitated by further interpretability research on LLMs). An LLM might [learn](https://www.alignmentforum.org/posts/Et2pWrj4nWfdNAawh/what-specific-dangers-arise-when-asking-gpt-n-to-write-an?commentId=dwSZjpRcR2CHFASTZ) [to become](https://forum.effectivealtruism.org/posts/dgk2eLf8DLxEG6msd/how-would-a-language-model-become-goal-seeking?commentId=cbJDeSPtbyy2XNr8E) a [deceptively aligned](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) agent, making good predictions but ultimately pursuing a misaligned goal. Such an agent could, for instance, manipulate its predictions on non-training prompts to steer AI alignment research into a particular misaligned direction. At least, the agent could make an [acausal trade](https://www.lesswrong.com/tag/acausal-trade) with an unaligned AI and try to sabotage AI alignment. Such deceptive alignment might be prevented if we ensure that some property of the model’s cognition, called [acceptability predicate](https://docs.google.com/document/u/0/d/199Lkh78UA2uI9ljLEy_aWR8RBetQLRO6Kqo3_Omi1e4/edit), holds most of the time during training. Below, we discuss two possible predicates: *myopic cognitio*n and *action-based consequentialism*. I say that a model has *myopic cognition* if it only optimizes its immediate next prediction, ignoring the effect of this prediction on the target distribution or on the model’s loss in later steps. For instance, an ideal counterfactual oracle would have myopic cognition. A *myopic training objective* is defined as any training objective under which a model with myopic cognition is optimal. Note that a myopic training objective is necessary to avoid incentivizing non-myopic cognition in the model, but it [might not be](https://www.alignmentforum.org/posts/GqxuDtZvfgL2bEQ5v/arguments-against-myopic-training) sufficient. We want the model to be myopic to avoid the issues with self-fulfilling prophecies discussed above and to prevent the model from becoming deceptive.  A myopic model doesn’t think about long-term consequences and would thus not become deceptive for instrumental reasons, even if it was misaligned. For instance, consider a model that has a misaligned preference to add the word “banana” to its predictions. Such a preference might arise as a form of [proxy alignment](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/pL56xPoniLvtMDQ4J#3_1__Pseudo_alignment), for instance, when modeling text involving delicious fruits. If this preference was non-myopic, then the model might reason that it should not output “banana” too often during training, to retain this preference and thus be able to realize its preference at some later point. E.g., once the model has become more powerful, it might be able to manipulate its training process, causing future texts to contain the word banana more often. The model might also choose to [gradient hack](https://www.alignmentforum.org/posts/uXH4r6MmKPedk8rMA/gradient-hacking), e.g., by omitting the word “banana” even when it would be appropriate to mention it. The training process might then install an even stronger preference towards the word in the agent. If the model was myopic, however, it would care only about its immediate next prediction and thus immediately output the word. Its misaligned preference would then be corrected by the training algorithm once it is trained on non-fruit-related prompts. An acceptability predicate should serve as a buffer around deception, preventing it from occurring, even if the predicate was violated for a few steps during training. Myopia alone might not provide a sufficient buffer. For instance, consider the model with the preference for outputting the word “banana”. The model might initially be myopic, just because it is only aware of the current training step. It might become non-myopic, then, after realizing that it will also get opportunities for outputting text in future steps. At that point, the model might immediately become deceptive, if it is already an agent that can plan and has a good model of the world. There might then be ways for the model to strategically hide its deceptive and non-myopic cognition, so that the model’s cognition could not be corrected afterward. Another disadvantage of myopia is that a myopic agent [might wirehead](https://www.lesswrong.com/posts/25288usP5B5ytnzA4/random-thoughts-on-predict-o-matic/comment/wbxt59enMcCNsP8Pt) if it cares about the actual physical implementation of its loss function: if there was a security flaw in the training process that allowed the model to hack its immediate training loss, then even a myopic agent would choose to exploit this flaw. To avoid such failures, one could require the model to care only about its actions, not about any consequences of these actions, making it an [action-based consequential agent](https://www.alignmentforum.org/posts/LBNjeGaJZw7QdybMw/agents-over-cartesian-world-models#Consequential_Types). The model would optimize its actions according to some [internal objective](https://casparoesterheld.com/2016/07/08/wireheading/), but it would not consider manipulating the world. This might provide a better buffer around non-myopically optimized and deceptive actions, as the model would need to learn to reason about itself as causally influencing the world, and become non-myopic, before it could notice deceptive strategies. Lastly, we can distinguish between the *simulator*—the model itself—and *simulacra*—agents simulated in the process of generating text. These two are separated at least in principle: on the one hand, simulacra only appear for specific prompts, and the properties of their cognition differ for different prompts. On the other hand, the simulator produces all texts, and its cognition is relevant regardless of prompts. While the simulator might be myopic, simulacra can easily be non-myopic, deceptive, etc., and simulating such non-myopic simulacra is necessary to create texts written by or about such agents (including alignment forum posts). Since the simulator already has to be able to simulate such agents and their plans, it seems that it would never have to change or learn a lot to become a non-myopic agent. This seems like a big problem, potentially preventing us from implementing any buffer around a deceptive generative model. Using the model =============== Finally, I will briefly discuss using the model to help with the alignment problem. One [direct approach](https://www.alignmentforum.org/posts/nXeLPcT9uhfG3TMPS/conditioning-generative-models) to solving alignment using a language model would be to prompt the model to output a textbook on how to solve the alignment problem, as produced by an expert alignment researcher 100 years from now, conditional on AI alignment having been solved. Assuming we are able to make the model believe the conditional, this might be faster than other approaches. However, it would require a more powerful model, and I believe it would be less safe. Conditioning on such prompts seems more dangerous, as we rely more on an aligned inductive bias in the model. First, if we condition on very unlikely events, the set of remaining possible worlds has smaller measure, so slight inaccuracies in the model’s distribution might have a large effect on such conditional distributions. Second, we can think of a prompt as spending bits to optimize the generated distribution. If we use a lot of bits, this takes us further away from the prior distribution, which could be dangerous if we are not confident in the quality of our prompt and the corresponding conditional distribution. A more likely prompt optimizes less and produces something more analogous to a [quantilizer](https://www.alignmentforum.org/tag/quantilization). Third, for arbitrary prompts, we might run into [decision-theoretic](https://www.alignmentforum.org/posts/zcPLNNw4wgBX5k8kQ/decision-theory) problems such as the [5-and-10 problem](https://www.lesswrong.com/posts/5bd75cc58225bf06703753d4/two-major-obstacles-for-logical-inductor-decision-theory), due to the fact that the model implements prompts as Bayesian conditionals instead of some other type of counterfactual. To alleviate these issues, we could use the model [iteratively](https://www.alignmentforum.org/posts/EmxfgPGvaKqhttPM8/thoughts-on-the-alignment-implications-of-scaling-language#Human_amplification) in a version of [imitative amplification](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#2__Imitative_amplification___intermittent_oversight). In this approach, we would ask the model to produce new alignment research, critique or summarize existing research, expand outlines of posts, or assist researchers in other ways. The used prompts could be chosen to remain close to the training distribution, allowing us to be more confident in the faithfulness of the produced completions. The resulting research could then be used to further train the model.  Unhelpful predictions --------------------- One problem with counterfactual oracles (and thus the discussed LLMs) is that, by design, the model is not optimized for anything other than faithfully representing a distribution. As a consequence, we avoid the dangers of trying to align the system directly, but we get a less useful system. For instance, in the ambitious case in which we condition on a world in which AI alignment has been solved, we might just get worlds in which alignment turned out to be very easy anyway, such that the generated text wouldn’t be helpful to us in either case. A subtle bias like this might also apply to more likely prompts, so that even the iterative proposal might [slowly drift](https://www.alignmentforum.org/posts/Et2pWrj4nWfdNAawh/what-specific-dangers-arise-when-asking-gpt-n-to-write-an?commentId=dwSZjpRcR2CHFASTZ) into less useful research directions. We could try to anticipate and fix such issues, but it is unclear whether we would be able to do so for all of them, without having any direct training signal towards usefulness. Dangerous agents ---------------- Another part of aligning LLMs is generating text written by aligned simulated agents. Whether this is possible depends on whether the model is learning the desired posterior distribution (forming an accurate model of the world) and whether there are enough aligned agents in this distribution. It seems that agents will likely be aligned at least some of the time under a less ambitious approach, where we would try to simulate, e.g., Paul Christiano. Even for more ambitious proposals, it seems we might be at an advantage if we only have to design a good prompt, rather than a training procedure that will consistently lead to a similarly aligned agent. Adam Jermyn [discusses](https://www.alignmentforum.org/posts/nXeLPcT9uhfG3TMPS/conditioning-generative-models) ways to design more ambitious prompts to help with building aligned AI in more detail. Simulated agents might be able to infer that they are in a simulation, make inferences about the real world, and then try to escape the simulation. This could cause irreparable damage, e.g., if simulated agents are able to produce text to hack computer systems or persuade humans. We would thus need to be confident in the safety of a prompt ahead of time, either by making sure the simulated agent is aligned or not too powerful. Conclusion ========== In this post, I discuss a potential training goal for future capable LLMs and give an overview of the various problems that may arise. Overall, using LLMs in an imitative amplification scheme to help with alignment research seems like a promising avenue. One of the most useful research directions for this purpose might be LLM interpretability, in addition to fleshing out the training goal further and gaining more conceptual clarity on the discussed problems. 1. **[^](#fnreflptvwonun4f)**There are of course also other possible well-specified prediction targets for oracles. 2. **[^](#fnrefgou4l3obndr)**An AI may still have to model human intent implicitly insofar as that is important for generating text. 3. **[^](#fnrefgb4z00yqr6)**The fact that we have access to distributions, instead of, e.g. maximum likelihood estimates, is important for several reasons: first, maximum likelihood estimates can be very untypical. For instance, when throwing a pair of dice repeatedly, the maximum likelihood estimate for each sum of eyes is 7. However, in most worlds, the sum won’t be 7 every single time. Second, we want to be able to incentivize the model to be uncertain in a calibrated way; otherwise, the model might choose to focus on some versions of an output it knows well how to produce, even if some harder to model version would be equally likely given the prompt. For instance, a model may be uncertain whether it is supposed to write an honest news article or a fictional story. If both are equally likely, and there is only one plausible fictional story, but many different possible news articles, then a model outputting a maximum likelihood estimate might consistently produce the fictional story. A model sampling from a distribution incentivized by a proper scoring rule would output news articles and fictional stories with equal probability. Third, some proposals might depend on [getting multiple samples](https://arxiv.org/abs/2104.08315). E.g., one may be able to implement a version of the [consensus algorithm](https://www.alignmentforum.org/posts/CnruhwFGQBThvgJiX/formal-solution-to-the-inner-alignment-problem) using samples from a large language model.
5128847a-30df-466c-956b-839406a2496b
trentmkelly/LessWrong-43k
LessWrong
Open & Welcome Thread - September 2019 * If it’s worth saying, but not worth its own post, here's a place to put it. * You can also make a shortform post. * And, if you are new to LessWrong, here's the place to introduce yourself. * Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. The Open Thread sequence is here.
f59eecfa-3b47-4a45-8308-359f03ce50dc
trentmkelly/LessWrong-43k
LessWrong
Inverse Problems In Everyday Life There’s a class of problems broadly known as inverse problems. Wikipedia explains them as follows: An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them. [...] It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects. This post is about the many examples that we run into in life, where there’s a difference between forward reasoning and inverse reasoning, and where neglecting to take this difference into account can lead to problems. My aim here is mostly to provide enough examples to make this distinction more intuitive and recognizeable. Ideally we should be able to quickly notice situations where inverse reasoning is a suitable tool, so that we can use it, and hence come to better conclusions, or at the very least adopt an appropriate degree of uncertainty to our expectations and interpretations. Bayesian Reasoning Bayesian reasoning is one typical example of an inverse problem. While causal effects are “forward” facing, Bayesian reasoning works in the opposite, the diagnostic direction. We look at “evidence” (the outcomes of some causal process) and try to infer the possible causes. If you’re unaware that Bayesian reasoning is an “inverse” process, you might get the idea that you can also try Bayesian updating on forecasting questions – but eventually you realize that this gets quite awkward. For example: how likely are we to observe the release of GPT-O1, in a world where artificial superintelligence eventually ends up disempowering humanity? This is not a natural fit at all, and understanding the inverse nature of Bayesian reasoning should make that clear. Human Communication It is no secret that communication between humans can be tricky: you have some thought or feeling or idea in your mind, and you try to encode it into words, which then m
403f3d59-cb3a-4059-9b90-9396ae4263c8
trentmkelly/LessWrong-43k
LessWrong
Preserving our heritage: Building a movement and a knowledge ark for current and future generations Note: This post is written with partial assistances from ChatGPT. Furthermore, this post is explicitly released under public domain. TL;DR version: An essay highlighting the needs to build or energize a movement to support preservation of digital heritages in both short and long terms given the worrying trends of major technology companies such as Google making decisions that imperil or jeopardize digital heritages and thus by extension history itself presently. Furthermore, the idea of building a knowledge ark on Earth or in space is explored. Introduction Google's policy of deleting inactive accounts, and how it's a harsh and inhumane approach. In the fast-paced digital era, where information flows ceaselessly and technological landscapes evolve at an unprecedented rate, the fragility of our digital heritage becomes increasingly apparent. Google's impending mass-deletion of inactive accounts which will occur on December 1st, 2023 immediately floats to mind. Such decision will threaten accounts and contents by users who for some reason aren't able to log on to their account for extended period of time, which in this case would be two years.  For living persons there are many reasons that will render them incapable of accessing their accounts for a long time, which include imprisonment (whether rightfully or not, since there are many people who end up on false charges all the time as cases from the Innocence Project show), medical incapacitation (i.e. coma), being a member of extended scientific expeditions which could include astronauts in deep space missions one day, living under authoritarian regimes which imposes internet blackouts from time to time, and simply wanting to take an extended break from the Internet following major traumatic incidents such as severe Internet harassment. Worse still, in Asia there are many young people who ended up trapped at fraud dens in Myanmar and Cambodia, sometimes for many years, due to job scams and ensuing human traffi
ef725518-8469-4582-afe3-cd86f5746338
trentmkelly/LessWrong-43k
LessWrong
Morality as Cooperation Part III: Failure Modes This is a Part III of a long essay. Part I introduced the concept of morality-as-cooperation (MAC) in human societies. Part II discussed moral reasoning and introduced a framework for moral experimentation. Part III: Failure modes Part I described how human morality has evolved over time to become ever more sophisticated. Humans have moved from living within small tribes which were engaged in near-constant warfare, to peacefully cooperating within massive nations that contain billions of people, with laws and constitutions that protect basic human rights. But while human morality has gradually improved over time, progress has not been smooth. World War II was a catastrophic moral failure, and the subsequent cold war brought the constant threat of nuclear annihilation. Authoritarian countries still exist today, and the world's major powers seem to be backsliding towards nationalism and authoritarianism again. The evolution of morality clearly has failure modes. Part II proposed a framework for both reasoning about morality and for doing experiments, with the goal of automatically deriving universal moral codes for both human and non-human agents, including AGI. As with any theory, though, a theory fails if any of its premises fail, which immediately suggests what some of those failure modes might be. This section will again make heavy use of examples from human society to discuss failure modes, particularly international relations and geopolitics, and thus may seem more like a treatise on political philosophy than an essay on AI alignment. This is deliberate: nations and corporations are excellent analogues for the problems that we may face with AGI. First of all, nations and corporations are the two main legal structures that humans use to cooperate, and thus provide a case study in cooperation. Second and more importantly, nations and corporations are both non-human legal entities which have far more resources (both computational and otherwise) than any indivi
1e1ef37d-8e8d-406d-b360-fa962e32ccb4
trentmkelly/LessWrong-43k
LessWrong
What is known about invariants in self-modifying systems? I have been trying to find out what is known about invariants in self-modifying systems. This might become a rather acute topic if we end up moving towards self-modifying AIs or self-modifying ecosystems of AIs. But it seems that not much has been done. For example, I have found a 1995 Chinese paper, "S-and T-Invariants in Cyber Net Systems" by Yuan Chongyi, Google Scholar page, PDF available which is doing a study of invariants in self-modifying nets (a natural extension of Petri nets), but it only has 4 references to it known to Google Scholar. I wonder if people know about more examples of this kind of research (or about researchers or organizations currently trying to look at this topic)...
6fe12684-1f6c-41ea-b979-358107d0cf9c
trentmkelly/LessWrong-43k
LessWrong
Truth is Symmetric The Taj Mahal is symmetric with respect to right-left reflection. That is to say, if you create a mirror image of it that flips right and left, you get the exact same thing. Therefore, if as we're standing outside looking at it my friend Jen tells me that she thinks the left side of the Taj Mahal is much nicer than the right side, or that it's objectively better, I would be suspicious. That's because if we created the mirror image of the Taj Mahal, the right side would be on the left, Jen would not be able to tell the difference and so would say that the left side is nicer than the right one, but it would secretly be the right side she was speaking about. Muwahahahaa, it can't be right. At least if all that determines niceness isn't about where something is placed but what it's composed of - the left side can be nicer than the right, but not just because it's on the left. Now for three similar examples, each trivial on its own but illustrating a broader point. 1) Most Israeli Jews think Israel is broadly in the right in the Israeli-Palsetinian conflict. Most Palestinians think that the Palestinians are broadly in the right. What bothers me about this is that both factions think that their faction is objectively in the right, that is, independently of who considers the question. But if it's independently of who considers the question, how come people who were born on one side of the aisle mostly think one thing, and people on the other side think another thing. This screams inconsistency and irrationality, since a factor that shouldn't affect what's objectively right affects what most people think. 2) I'm usually more left-wing in my opinions. So when I consider whether some extreme right-wing statement falls within the limits of free speech, I try to think whether I would consider an equally-extreme left-wing statement to fall within the limits of free speech. I imagine a world where that opinion is banned from being expressed, and whether I'd be happy with a wor
fd975b30-11c4-4b72-9d3e-a82046315f9c
StampyAI/alignment-research-dataset/arbital
Arbital
Reflective consistency A decision system is "reflectively consistent" if it can approve the construction of similar decision systems. For example, if you have an expected utility satisficer (it either takes the null action, or an action with expected utility greater than $\theta$) then this agent can self-modify to any other design which also either takes no action, or approves a plan with expected utility greater than $\theta.$ A satisficer might also approve changing itself into an expected utility maximizer (if it expects that this self-modification itself leads to expected utility at least $\theta$) but it will at least approve replacing itself with another satisficer. On the other hand, a [causal decision theorist](https://arbital.com/p/causal_decision_theory) given a chance to self-modify will only approve the construction of [something that is not a causal decision theorist](https://arbital.com/p/son_of_cdt). A property satisfies the stronger condition of [reflective stability](https://arbital.com/p/1fx) when decision systems with that property *only* approve their own replacement with other decision systems with that property. For example, a [https://arbital.com/p/-10h](https://arbital.com/p/-10h) will under ordinary circumstances only approve code changes that preserve the property of maximizing paperclips, so "wanting to make paperclips" is reflectively *stable* and not just reflectively *consistent*.
8dfda30c-8419-4c24-a6ea-c3ca75334add
trentmkelly/LessWrong-43k
LessWrong
Unregulated Peptides: Does BPC-157 hold its promises? Empiric status: I studied bioinformatics, but I'm not working in the field. I researched the article over a few months.   After reading about peptides and BPC-157 potential effects on wound healing, I decided to research BPC-157 and write this article to summarize my findings. Even if you aren’t interested in BPC-157, it’s an interesting case of a drug without FDA approval that still gets used by a few doctors, which gives us an interesting perspective on what happens when drugs get used without FDA approval. Peptides are an interesting category. I previously argued that Orexin is promising enough to warrant experimentation and research. While there are some patents filed on peptides post-2013, the decision by the US Supreme Court to rule gene patents void makes peptide drugs derived from human peptides harder to defend, which makes them unattractive for pharma companies to investigate. In the absence of academic studies involving human trials, the knowledge we have about the effects of peptides comes either from extrapolating animal studies or from anecdotal human reports. There are online communities like r/peptides where users share information about their experiences and give each other advice about which peptides to take. A blogger who goes by Troof created a huge questionnaire to get people to report their experiences with various nootropics including peptides. He writes: > Selank, Semax, Cerebrolysin, BPC-157 are all peptides, and they are all in the green “uncommon-but-great” rectangle above. Their mean ratings are excellent, but their probabilities of changing your life are especially impressive: between 5 and 20% for Cerebrolysin (which matches anecdotal reports), between 2 and 13% for BPC-157, and between 3 and 7% for Semax. The concierge healthcare service, ultrapersonal healthcare, says about BPC-157: > BPC-157 is the darling child of biohacking, bodybuilding and performance-enhancing communities. Used for years overseas, in athletics, nootropic
eaaf3f30-a95d-47a1-93cc-4f9a1ca0053a
StampyAI/alignment-research-dataset/arxiv
Arxiv
Machine Explanations and Human Understanding 1. introduction ---------------- Although recent advances in deep learning have led to machine learning (ML) models with impressive performance (He et al., [2015](#bib.bib27); Mnih et al., [2015](#bib.bib43); Silver et al., [2017](#bib.bib54)), there are growing concerns about the black-box nature of these models. Explanations are hypothesized to improve human understanding of these models and achieve a variety of desirable outcomes, ranging from helping model developers debug (Hong et al., [2020](#bib.bib28)), mitigating unfairness by surfacing undesirable model behavior (Dodge et al., [2019](#bib.bib18); Tan et al., [2018](#bib.bib56)), to improving human decision making in critical societal domains (Lai and Tan, [2019](#bib.bib35); Lai et al., [2020](#bib.bib34); Green and Chen, [2019b](#bib.bib24), [a](#bib.bib23); Zhang et al., [2020](#bib.bib62); Poursabzi-Sangdeh et al., [2021](#bib.bib51)). However, empirical experiments with human subjects show mixed results about the utility of machine explanations. As a positive result, the original LIME paper shows that feature importance allows developers to identify spurious features in topic classification and improve the model by removing these features (Ribeiro et al., [2016](#bib.bib52)). In contrast, Lai and Tan ([2019](#bib.bib35)) show that feature importance improves human performance slightly in deceptive review detection, but human-AI teams fail to outperform AI alone. In fact, this is just one example of many papers that fail to improve the performance of human-AI teams in a wide range of tasks, including recidivism prediction, deceptive review detection, and hypoxemia prediction (Lai et al., [2020](#bib.bib34); Green and Chen, [2019b](#bib.bib24), [a](#bib.bib23); Zhang et al., [2020](#bib.bib62); Poursabzi-Sangdeh et al., [2021](#bib.bib51); Carton et al., [2020](#bib.bib13); Jung et al., [2020](#bib.bib29); Weerts et al., [2019](#bib.bib60); Beede et al., [2020](#bib.bib7); Wang and Yin, [2021](#bib.bib59); Lundberg et al., [2018](#bib.bib42)). An intriguing puzzle is thus what factors drive such mixed results and how we can derive generalizable insights from empirical evaluations of explanations. To resolve this puzzle, we need to reason about the interplay between machine explanations and human understanding. We argue that two key questions need to be addressed: 1) what human understanding do explanations strive to improve and how do they connect with the aforementioned desirable outcomes? 2) what role can explanations play in shaping human understanding? Answers to these questions will allow researchers to articulate the conditions under which explanations can improve human understanding and provide a basis for scientific discussions on whether findings can generalize across experiments. In comparison, aiming for an end-to-end hypothesis that directly connects explanations to a desirable outcome, such as “explanations can improve the performance of human-AI teams”, does not articulate the mechanism of why explanations help. As a result, it becomes impossible to know whether a positive result can be generalized to a different setting or what is lacking in the current explanations in the case of a negative observation. In this paper, therefore, we provide a formal understanding of the relationship between machine explanations and human understanding and describe how it can lead to actionable insights for researchers. We start by tackling the question of what human understanding we would like to achieve in the context of human-AI decision making.111By the context of human-AI decision making, we refer to all tasks around decision problems, including model debugging and auditing, but exclude other types of human-AI collaboration such as creative writing. We identify three core concepts of interest from existing literature: 1) task decision boundary (deriving the true label in the prediction problem, the key target for decision making both for humans and models (Doshi-Velez and Kim, [2017](#bib.bib19); Buçinca et al., [2021](#bib.bib11); Poursabzi-Sangdeh et al., [2021](#bib.bib51); Bansal et al., [2020](#bib.bib6); Zhang et al., [2020](#bib.bib62); Biran and McKeown, [2017](#bib.bib8); Liu et al., [2021](#bib.bib39); Guo et al., [2019](#bib.bib25); Buçinca et al., [2020](#bib.bib10); Nourani et al., [2021](#bib.bib47); Weerts et al., [2019](#bib.bib60); Lai and Tan, [2019](#bib.bib35); Kiani et al., [2020](#bib.bib30); Gonzalez et al., [2020](#bib.bib22); Feng and Boyd-Graber, [2018](#bib.bib20); Lai et al., [2020](#bib.bib34); Carton et al., [2020](#bib.bib13))), 2) model decision boundary (simulating the model predicted label, evidence of strong human understanding of the model  (Chandrasekaran et al., [2018](#bib.bib15); Chromik et al., [2021](#bib.bib16); Nguyen, [2018](#bib.bib46); Poursabzi-Sangdeh et al., [2021](#bib.bib51); Wang and Yin, [2021](#bib.bib59); Liu et al., [2021](#bib.bib39); Lucic et al., [2020](#bib.bib40); Buçinca et al., [2020](#bib.bib10); Ribeiro et al., [2018](#bib.bib53); Friedler et al., [2019](#bib.bib21); Nourani et al., [2021](#bib.bib47); Lakkaraju et al., [2016](#bib.bib36); Alqaraawi et al., [2020](#bib.bib3); Hase and Bansal, [2020](#bib.bib26); Doshi-Velez and Kim, [2017](#bib.bib19); Lipton, [2016](#bib.bib38))), and 3) model error (recognizing whether a predicted label is wrong, a useful intermediate variable in decision making and a central subject in trust (Bansal et al., [2019](#bib.bib5); Buçinca et al., [2021](#bib.bib11); Poursabzi-Sangdeh et al., [2021](#bib.bib51); Bansal et al., [2020](#bib.bib6); Wang and Yin, [2021](#bib.bib59); Zhang et al., [2020](#bib.bib62); Biran and McKeown, [2017](#bib.bib8); Liu et al., [2021](#bib.bib39); Buçinca et al., [2020](#bib.bib10); Nourani et al., [2021](#bib.bib47); Weerts et al., [2019](#bib.bib60); Lai and Tan, [2019](#bib.bib35); Kiani et al., [2020](#bib.bib30); Gonzalez et al., [2020](#bib.bib22); Feng and Boyd-Graber, [2018](#bib.bib20); Bussone et al., [2015](#bib.bib12); Lai et al., [2020](#bib.bib34); Carton et al., [2020](#bib.bib13))). These measures do not always correlate with one another. For example, with an easily simulatable model (e.g., two-level decision tree), we would observe high accuracy in simulating model predictions, but users may still not be able to detect model error if the task is challenging to users. To further demonstrate the central role of these core concepts, we discuss their relevance for different stakeholders, i.e., decision makers, decision subjects, model developers, and auditors. To enable a rigorous discussion of how machine explanations can shape human understanding, we develop a theoretical framework with adapted causal diagrams. Causal diagrams help us reason formally about the core concepts as statistical variables and their approximations that people develop for human understanding. Focusing on local human understanding of core variables, we define a base diagram where there are no clear relations between human understanding of core variables. Depending upon what is shown to the person (e.g., the model’s prediction on an instance), different causal relationships emerge among the core variables and their human approximations. To formalize this intuition, we introduce a graph operator, show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w, that articulates how assumptions/interventions (henceforth conditions) shape human understanding. We consider two conditions: 1) whether a person has the perfect knowledge of the task and 2) whether machine predicted labels are revealed. Going through these conditions yields a decision tree that describes different scenarios for human-AI decision-making (Lai et al., [2021](#bib.bib33)). We then incorporate machine explanations in the framework to vet the utility of explanations in improving human understanding. Our causal diagrams reveal the critical role of task-specific human intuitions in effectively making sense of explanations, providing some light on the mixed findings of behavioral research for different human-AI tasks. We first point out that existing explanations are all derived from model decision boundary. Although explanations can potentially improve human understanding of model decision boundary, they cannot improve human understanding of task decision boundary or model error without assumptions about task-specific intuitions. In other words, complementary performance (i.e., human+AI ¿ human & human+AI ¿ AI) is impossible without assumptions about task-specific human intuitions. To achieve complementary performance, we articulate possible ways that human intuitions can work together with explanations. For instance, human intuitions about the relevance of features (e.g., education is more important than age in predicting a person’s income) can be critical in detecting model error (e.g., age is highlighted as the important feature instead of education). Finally, we apply our framework to empirically validate the importance of human intuitions through human-subject studies. To allow for full control of human intuitions, we use a Wizard-of-Oz setup (Dahlbäck et al., [1993](#bib.bib17)). Our experimental results show: 1) when we remove human intuitions by anonymizing all features, humans are more likely to agree with predicted labels compared to the regular condition without anonymizing features; 2) in the regular condition, participants are more likely to agree with the predicted label when the explanation is consistent with human intuitions. Our work contributes to the goal of developing a rigorous science of explanations (Doshi-Velez and Kim, [2017](#bib.bib19)). While Doshi-Velez and Kim ([2017](#bib.bib19)) proposes a taxonomy of evaluations, our goal is to develop a formal theoretical framework on how explanations can improve human understanding of these concepts, clarify the limitations of existing explanations, and identify future directions for algorithmic work and experimental studies on explanations. In summary, our main contributions are: * • We propose the first theoretical causal framework to enable a rigorous discussion of human understanding in the context of human-AI decision making; * • We point out the importance of human intuitions in developing and leveraging machine explanations, and identify actionable future directions for generating machine explanations; * • We provide the first empirical validation of how task-specific human intuitions affect human-AI decision making through human-subject experiments. 2. Three core concepts for Measuring Human Understanding --------------------------------------------------------- ![Refer to caption](/html/2202.04092/assets/x1.png) Figure 1. Illustration of the three fundamental concepts using a binary classification problem. Task decision boundary (dashed line) defines the ground-truth mapping from inputs to labels. Model decision boundary (solid line) defines the model predictions. Model error (highlighted) represents where the model’s predictions are incorrect. In this section, we identify three key concepts of interest in human-AI decision making: task decision boundary, model decision boundary, and model error. We present high-level definitions of these concepts and formalize them in §[3](#S3 "3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding").222 In this work, we omit subjective measures. We use a two-dimensional binary classification problem to illustrate the three concepts of interest (Fig. [1](#S2.F1 "Figure 1 ‣ 2. Three core concepts for Measuring Human Understanding ‣ Machine Explanations and Human Understanding")). Task decision boundary, as represented by the dashed line, defines the mapping from inputs to ground-truth labels: inputs on the left are positive and the ones on the right are negative. Model decision boundary, as represented by the solid line, determines model predictions. Consequently, the area between the two boundaries is where the model makes mistakes. This yellow highlighted background captures model error, i.e., where the model prediction is incorrect. With a perfect model, the model decision boundary would be an exact match of the task decision boundary, and model error never happens. 333We present a deterministic example for ease of understanding and one can interpret this work with deterministic functions in mind. In general, one can also think of model decision boundary, task decision boundary, and model error probabilistically. To the best of our knowledge, we are not aware of any existing quantitative behavioral measure of human understanding that does not belong to one of these three concepts of interest. Building on a recent survey (Lai et al., [2021](#bib.bib33)), we identify 30 papers that: 1) use machine learning models and explanations with the goal of improving human understanding; and 2) conduct empirical human studies to evaluate human understanding with quantitative metrics. Although human-subject experiments can vary in subtle details, the three concepts allow us to organize existing work into congruent categories. We provide a reinterpretation of existing behavioral measures using the three concepts below; a detailed summary is in Appendix A. Measuring human understanding of model decision boundary via human simulatability. A straightforward way of model decision boundary evaluation is to measure how well humans can simulate the model predictions, or in other words, the human ability of forward simulation/prediction (Doshi-Velez and Kim, [2017](#bib.bib19)). Humans are typically asked to simulate model predictions given an input and some explanations  (Chandrasekaran et al., [2018](#bib.bib15); Chromik et al., [2021](#bib.bib16); Nguyen, [2018](#bib.bib46); Poursabzi-Sangdeh et al., [2021](#bib.bib51); Wang and Yin, [2021](#bib.bib59); Liu et al., [2021](#bib.bib39); Lucic et al., [2020](#bib.bib40); Buçinca et al., [2020](#bib.bib10); Ribeiro et al., [2018](#bib.bib53); Friedler et al., [2019](#bib.bib21); Nourani et al., [2021](#bib.bib47); Lakkaraju et al., [2016](#bib.bib36); Alqaraawi et al., [2020](#bib.bib3); Hase and Bansal, [2020](#bib.bib26); Doshi-Velez and Kim, [2017](#bib.bib19); Lipton, [2016](#bib.bib38)). For example, given profiles of criminal defendants and machine explanations, participants are asked to guess what the AI model would predict (Wang and Yin, [2021](#bib.bib59)). Measuring human understanding of model decision boundary via counterfactual reasoning. Sometimes researchers measure human understanding of the decision boundary by evaluating participants’ counterfactual reasoning abilities (Friedler et al., [2019](#bib.bib21); Lucic et al., [2020](#bib.bib40)). Counterfactual reasoning investigates the ability to answer the ‘what if’ question. In practice, participants are asked to determine the output of a perturbed input applied to the same ML model (Friedler et al., [2019](#bib.bib21)). Lucic et al. ([2020](#bib.bib40)) asked participants to manipulate the input to change the model output. Measuring human understanding of model decision boundary via feature importance. Additionally, Wang and Yin ([2021](#bib.bib59)) also tested human understanding of model decision boundary via feature importance, specifically by (1) asking the participants to select among a list of features which one was most/least influential on the model’s predictions and (2) specifying a feature’s marginal effect on predictions. Ribeiro et al. ([2016](#bib.bib52)) asked participants to perform feature engineering by identifying features to remove, given the LIME explanations. These can be viewed as a coarse inquiry into properties of the model’s model decision boundary. Measuring human understanding of task decision boundary and model error via human+AI performance. Similar to the application-grounded evaluation defined in Doshi-Velez and Kim ([2017](#bib.bib19)), one of the most well-adopted evaluation measurement of human understanding is to measure human understanding of the task decision boundary through human+AI performance (Doshi-Velez and Kim, [2017](#bib.bib19); Buçinca et al., [2021](#bib.bib11); Poursabzi-Sangdeh et al., [2021](#bib.bib51); Bansal et al., [2020](#bib.bib6); Zhang et al., [2020](#bib.bib62); Biran and McKeown, [2017](#bib.bib8); Liu et al., [2021](#bib.bib39); Guo et al., [2019](#bib.bib25); Buçinca et al., [2020](#bib.bib10); Nourani et al., [2021](#bib.bib47); Weerts et al., [2019](#bib.bib60); Lai and Tan, [2019](#bib.bib35); Kiani et al., [2020](#bib.bib30); Gonzalez et al., [2020](#bib.bib22); Feng and Boyd-Graber, [2018](#bib.bib20); Lai et al., [2020](#bib.bib34); Carton et al., [2020](#bib.bib13)). In those experiments, participants are shown machine predictions and explanations, then they are asked to give a final decision based on the information, with the goal of achieving complementary performance. For example, human decision-makers are asked to predict whether this defendant would re-offend within two years, given a machine prediction and explanations (Wang and Yin, [2021](#bib.bib59)). Note that for binary classification problems, measuring human understanding of the model error is equivalent to measuring human understanding of the task decision boundary if machine predictions are shown. Measuring human understanding of model error through human trust. In some other cases, trust or reliance is introduced as a criterion reflecting the human understanding of the model error. Explanations are used to guide people to trust an AI model when it is right and not to trust it when it is wrong. Hence, by analyzing when and how often human follows machine predictions, trust can reflect the human understanding of the model error (Wang and Yin, [2021](#bib.bib59); Buçinca et al., [2021](#bib.bib11); Zhang et al., [2020](#bib.bib62)). In other cases, the measure of human understanding of model error can be used as an intermediate measurement towards measuring task decision boundary (Bansal et al., [2019](#bib.bib5); Poursabzi-Sangdeh et al., [2021](#bib.bib51); Bansal et al., [2020](#bib.bib6); Biran and McKeown, [2017](#bib.bib8); Liu et al., [2021](#bib.bib39); Buçinca et al., [2020](#bib.bib10); Nourani et al., [2021](#bib.bib47); Weerts et al., [2019](#bib.bib60); Lai and Tan, [2019](#bib.bib35); Kiani et al., [2020](#bib.bib30); Gonzalez et al., [2020](#bib.bib22); Feng and Boyd-Graber, [2018](#bib.bib20); Bussone et al., [2015](#bib.bib12); Lai et al., [2020](#bib.bib34); Carton et al., [2020](#bib.bib13)), where human subjects are asked whether they agree with machine predictions. 3. A Theoretical Framework of Human Understanding -------------------------------------------------- Based on the three concepts mentioned above, we introduce a theoretical framework of human understanding in the context of human-AI decision making. We do not discuss machine explanations yet; instead, we formalize the relationship between task decision boundary, model decision boundary, and model error, as well as human understanding of them. This framework enables a rigorous discussion on human understanding as well as the underlying assumptions/interventions that shape the relationship between those understanding. ### 3.1. Defining Core Functions and Human Understanding of them Formally, the three concepts of interest are functions defined w.r.t. a prediction problem and a machine learning model: * • Task decision boundary is a function f:𝕏→𝕐:𝑓→𝕏𝕐f:\mathbb{X}\rightarrow\mathbb{Y}italic\_f : blackboard\_X → blackboard\_Y that represents the groundtruth mapping from an input X𝑋Xitalic\_X to the output Y𝑌Yitalic\_Y. * • Model decision boundary is another function g:𝕏→𝕐:𝑔→𝕏𝕐g:\mathbb{X}\rightarrow\mathbb{Y}italic\_g : blackboard\_X → blackboard\_Y that represents our ML model which outputs a prediction Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG given an input. g𝑔gitalic\_g is usually trained to be an approximation of f𝑓fitalic\_f. We assume that we are given a model g𝑔gitalic\_g; the training process of g𝑔gitalic\_g (and the connection between f𝑓fitalic\_f and g𝑔gitalic\_g) is not crucial for this work. * • Model error represents the model’s error; it is an indicator of whether the model prediction differs from the groundtruth for an input: z(X,f,g)=𝕀[f(X)≠g(X)],∀X∈𝕏formulae-sequence𝑧𝑋𝑓𝑔𝕀delimited-[]𝑓𝑋𝑔𝑋for-all𝑋𝕏z(X,f,g)=\mathbb{I}[f(X)\neq g(X)],\forall X\in\mathbb{X}italic\_z ( italic\_X , italic\_f , italic\_g ) = blackboard\_I [ italic\_f ( italic\_X ) ≠ italic\_g ( italic\_X ) ] , ∀ italic\_X ∈ blackboard\_X. We use z(X)𝑧𝑋z(X)italic\_z ( italic\_X ) for short when the omitted arguments f𝑓fitalic\_f and g𝑔gitalic\_g are clear from context, which maps an input X𝑋Xitalic\_X to whether the model makes an error Z𝑍Zitalic\_Z. We call them core functions as they underpin human understanding. We refer to the outputs of core functions for an instance X𝑋Xitalic\_X, Y𝑌Yitalic\_Y, Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG, and Z𝑍Zitalic\_Z as the three core variables. Note that the core functions do not involve any people; they exist even in absence of human understanding. We use fHsuperscript𝑓𝐻f^{H}italic\_f start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, gHsuperscript𝑔𝐻g^{H}italic\_g start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, and zHsuperscript𝑧𝐻z^{H}italic\_z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT to denote the human’s subjective approximations of the core functions, each of them being a function with the same domain and codomain as its objective counterpart. These human approximations can be interpreted as mental models, influenced by the human’s knowledge (both on the prediction problem and the ML model), and can change over time as the human-AI interaction progresses. We can rephrase common cooperative tasks in human-AI decision making in terms of the core functions and human understanding grouped by stakeholders: * • For decision makers such as doctors, judges, and loan officers, the main goal is to improve their understanding of task decision boundary (fHsuperscript𝑓𝐻f^{H}italic\_f start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT). * • For decision subjects such as patients, defendants, and loan applicants, the object of interest can differ even for these three examples. Patients care about the task decision boundary more, while defendants and loan applicants may care about the model decision boundary and especially model error, and would like to figure out how they can appeal model decisions. * • Model developers might be most interested in model error, and the eventual goal is to change the model decision boundary. * • For algorithm auditors, the main goal is to figure out whether the model decision boundary and model error conform to laws/regulations. The distance between core functions and their human approximations can be used as a measure for human understanding. Since human approximations are theoretical constructs that only exist in the human brain, we need to perform user studies to measure them. For example, we can ask a human to guess what the model would have predicted for a given input X𝑋Xitalic\_X; the human’s answer Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT characterizes their local understanding of the model decision boundary. In the rest of the paper, one can interpret “human understanding” with this particular measurement of human approximations. Perfect human understanding thus refers to 100% accuracy in such measurement. In the interest of space, we assume that the approximation functions remain static and examine a human’s local understanding with our framework in the main paper; in other words, we assume that the human employs a consistent mental model for all instances and only reason about YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT. We note that improving human global understanding is often the actual goal in many applications, and encourage the reader to see discussions on global understanding in Appendix §[D](#A4 "Appendix D Global Understanding ‣ Machine Explanations and Human Understanding"). Table [1](#S3.T1 "Table 1 ‣ 3.1. Defining Core Functions and Human Understanding of them ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding") summarizes the notations for core functions and human understandings. Variable Function Description X — Input instance Y𝑌Yitalic\_Y f:𝕏→𝕐:𝑓→𝕏𝕐f:\mathbb{X}\rightarrow\mathbb{Y}italic\_f : blackboard\_X → blackboard\_Y Task decision boundary Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG g:𝕏→𝕐:𝑔→𝕏𝕐g:\mathbb{X}\rightarrow\mathbb{Y}italic\_g : blackboard\_X → blackboard\_Y Model decision boundary Z𝑍Zitalic\_Z z:𝕏→ℤ:𝑧→𝕏ℤz:\mathbb{X}\rightarrow\mathbb{Z}italic\_z : blackboard\_X → blackboard\_Z Model error YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT fH:𝕏→𝕐:superscript𝑓𝐻→𝕏𝕐f^{H}:\mathbb{X}\rightarrow\mathbb{Y}italic\_f start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT : blackboard\_X → blackboard\_Y Human understanding of the task decision boundary Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT gH:𝕏→𝕐:superscript𝑔𝐻→𝕏𝕐g^{H}:\mathbb{X}\rightarrow\mathbb{Y}italic\_g start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT : blackboard\_X → blackboard\_Y Human understanding of the model decision boundary ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT zH:𝕏→ℤ:superscript𝑧𝐻→𝕏ℤz^{H}:\mathbb{X}\rightarrow\mathbb{Z}italic\_z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT : blackboard\_X → blackboard\_Z Human understanding of the model error H𝐻Hitalic\_H — Task-specific human intuitions E𝐸Eitalic\_E — Machine explanations Table 1. A summary of notations. ![[Uncaptioned image]](/html/2202.04092/assets/x2.png) Figure 2. Visualizing the relations between core functions, local variables, and human approximations of them. ### 3.2. Causal Graph Framework for Core functions To reason about human understanding, we need to understand how core functions relate to each other, and how interventions may affect human understanding. To do so, we adapt causal directed acyclic graphs (causal DAGs) to formalize a causal model for the core functions of human understanding. We start with a simple diagram (Fig. [2](#S3.F2 "Figure 2 ‣ 3.1. Defining Core Functions and Human Understanding of them ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")) without assumptions about human intuitions.444Throughout this paper, X𝑋Xitalic\_X in the diagrams refer to a test instance that the model has not been trained on. Let us first look at core functions on the left in Fig. [2](#S3.F2 "Figure 2 ‣ 3.1. Defining Core Functions and Human Understanding of them ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding"). We use a functional view to represent Y^=g(X)^𝑌𝑔𝑋\hat{Y}=g(X)over^ start\_ARG italic\_Y end\_ARG = italic\_g ( italic\_X ): we add a functional node (g𝑔gitalic\_g in a square) on the edge from X𝑋Xitalic\_X to Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG to indicate that g𝑔gitalic\_g controls the causal link from X𝑋Xitalic\_X to Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG. g𝑔gitalic\_g is treated as a parent of Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG. As X𝑋Xitalic\_X is the input of g𝑔gitalic\_g and does not affect g𝑔gitalic\_g, there is no arrowhead from X𝑋Xitalic\_X to g𝑔gitalic\_g. Alternatively, one can use a parametric view and use a node θ𝜃\thetaitalic\_θ to capture all variables in g𝑔gitalic\_g and add θ𝜃\thetaitalic\_θ as a parent of Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG, in addition to X𝑋Xitalic\_X (see Appendix §[B](#A2 "Appendix B Parametric view ‣ Machine Explanations and Human Understanding")). We use the functional view because it simplifies the visualization, but it deviates slightly from the standard causal diagrams. X𝑋Xitalic\_X and Y𝑌Yitalic\_Y are connected with a dashed line through f𝑓fitalic\_f since we do not assume the causal direction between them. Z𝑍Zitalic\_Z is the binary indicator of whether Y𝑌Yitalic\_Y and Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG are different. According to d-separation (Pearl, [2009](#bib.bib49)), Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG is independent of Y𝑌Yitalic\_Y given X𝑋Xitalic\_X and g𝑔gitalic\_g. Z𝑍Zitalic\_Z is a collider for Y𝑌Yitalic\_Y and Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG, so knowing Z𝑍Zitalic\_Z and Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG entails Y𝑌Yitalic\_Y in binary classification. Next, in Fig. [2](#S3.F2 "Figure 2 ‣ 3.1. Defining Core Functions and Human Understanding of them ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding") on the right, we introduce task-specific intuitions, H𝐻Hitalic\_H, that defines human mental models of the core functions. We emphasize task-specific to capture intuitions about the current problem, as opposed to generic intuitions such as that humans can interpret saliency maps or humans can update their understanding over time. Fig. [2](#S3.F2 "Figure 2 ‣ 3.1. Defining Core Functions and Human Understanding of them ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")(b) shows a base version of how human intuitions relate to human understanding of core variables. For now, we do not make any assumptions about human intuitions, we simply connect human intuition with their understanding through fH,gH,zHsuperscript𝑓𝐻superscript𝑔𝐻superscript𝑧𝐻f^{H},g^{H},z^{H}italic\_f start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , italic\_g start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , italic\_z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT. As H𝐻Hitalic\_H directly influence fH,gH,zHsuperscript𝑓𝐻superscript𝑔𝐻superscript𝑧𝐻f^{H},g^{H},z^{H}italic\_f start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , italic\_g start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , italic\_z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, there is an arrowhead in the links from H𝐻Hitalic\_H to fH,gH,zHsuperscript𝑓𝐻superscript𝑔𝐻superscript𝑧𝐻f^{H},g^{H},z^{H}italic\_f start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , italic\_g start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , italic\_z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT. Later, we will discuss more realistic instantiations, e.g., ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT when Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG is given. Looking together at Fig. [2](#S3.F2 "Figure 2 ‣ 3.1. Defining Core Functions and Human Understanding of them ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding"), d-separation suggests that human approximation of core variables are independent from core variables given X𝑋Xitalic\_X, without extra assumptions about human intuitions. Therefore, a key goal of our work is to articulate what assumptions we make and how they affect the causal diagrams. ### 3.3. A New Operator Next, we analyze human local understanding (YH,Y^H,ZHsuperscript𝑌𝐻superscript^𝑌𝐻superscript𝑍𝐻Y^{H},\hat{Y}^{H},Z^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT on the right of Fig. [2](#S3.F2 "Figure 2 ‣ 3.1. Defining Core Functions and Human Understanding of them ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")). Without extra assumptions, the causal direction between YH,Y^H,ZHsuperscript𝑌𝐻superscript^𝑌𝐻superscript𝑍𝐻Y^{H},\hat{Y}^{H},Z^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT , italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT is unclear, because their generation process is controlled by the human brain, a black box. We visualize this ambiguity by connecting nodes with undirected dashed links in Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")(a) as the base diagram. The base diagram is not useful in its current state; in order to use the diagram to reason about human understanding, we need realizations of the base diagram where dashed links are replaced by solid, directional links. No realization is universally correct, and each realization requires certain assumptions or interventions, which we refer to as conditions. Condition 1—emulation vs. discovery. To delineate the feasibility of various base diagram realizations, we introduce two conditions. The first condition is an assumption about human knowledge, i.e., that the human has perfect knowledge about task decision boundary; in other words, fHsuperscript𝑓𝐻f^{H}italic\_f start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT perfectly matches f𝑓fitalic\_f and YH=Ysuperscript𝑌𝐻𝑌Y^{H}=Yitalic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT = italic\_Y for all inputs. Problems where human labels are used as ground truth generally satisfy this condition, e.g., topic classification, reading comprehension, and object recognition. We follow Lai et al. ([2020](#bib.bib34)) and call them emulation task, in the sense that the model is designed to emulate humans; by contrast, discovery problems are the ones humans do not have perfect knowledge of task decision boundary (e.g., deceptive review detection recidivism prediction).555Emulation and discovery can be seen as two ends of a continuous spectrum. The emulation vs. discovery categorization determines the set of causal diagrams that applies to the problem; this decision is at the discretion of practitioners that design experiments using our framework. It follows that human understanding of task decision boundary is irrelevant in emulation tasks, but achieving complementary performance is a key goal in discovery tasks. Condition 2—prediction shown vs. hidden. An alternative condition is an intervention that presents the model prediction Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG to the human. Given this information, a rational human would gain a perfect understanding of the local model decision boundary and always predict Y^H=Y^superscript^𝑌𝐻^𝑌\hat{Y}^{H}=\hat{Y}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT = over^ start\_ARG italic\_Y end\_ARG. ![Refer to caption](/html/2202.04092/assets/x3.png) Figure 3. Causal diagrams visualizing the relationship between a human’s local understanding. With the base diagram at the root, we organize its realizations based on different conditions in a two-level decision tree. Undirected dashed lines represent ambiguous causal links. The bidirectional dashed line in subfigure (d) represents the correlation between Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG and Y𝑌Yitalic\_Y potentially induced by the prediction model. Shaded nodes and their edges represent show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w operations. The show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w operator. To describe the effect of applying these conditions, we introduce a new syntax for manipulating causal diagrams: the show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w operator. When show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w is applied to a core variable, that information becomes available to the human. For example, show(Y^)𝑠ℎ𝑜𝑤^𝑌show(\hat{Y})italic\_s italic\_h italic\_o italic\_w ( over^ start\_ARG italic\_Y end\_ARG ) means that the human can see the model prediction for X𝑋Xitalic\_X. This operation draws an equivalence between the core variable and the human approximated counterpart, assuming that the human is rational. We introduce the new show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w operator as opposed to the standard do operator for two reasons. First, show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w operator introduces new variables to the causal diagram instead of setting the value of an existing variable (e.g., Y𝑌Yitalic\_Y and Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG in Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding") and E𝐸Eitalic\_E in §[4](#S4 "4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")). Second, the show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w operator can change the causal diagram as we reason about human understanding, including changing edges and variables. Notation-wise, show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w allows us to specify the condition for human approximations; for example, Yshow(Y^)Hsubscriptsuperscript𝑌𝐻𝑠ℎ𝑜𝑤^𝑌Y^{H}\_{show(\hat{Y})}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_s italic\_h italic\_o italic\_w ( over^ start\_ARG italic\_Y end\_ARG ) end\_POSTSUBSCRIPT denotes the local understanding of task decision boundary given predicted label Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG. In Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding"), applying show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w operator leads to two changes: it adds a link from a core variable to the corresponding human approximation, and it removes influences from other human approximations. For example, under the emulation condition visualized in Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")b, show(Y)𝑠ℎ𝑜𝑤𝑌show(Y)italic\_s italic\_h italic\_o italic\_w ( italic\_Y ) adds a link from Y𝑌Yitalic\_Y to YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT and removes all other edges going into YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, effectively disambiguating the relation between YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT and the two other variables. ### 3.4. Characterizing Relationship between Core Functions & Human Understandings Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding") visualizes the realizations of the base diagram under the two conditions and organizes them in a two-level decision tree. At the root, we have the base diagram. At the next level, we have two realizations based on whether condition 1 is satisfied: diagram (b) for emulation and diagram (c) for discovery. The branches at the leaf level are determined by condition 2, i.e., whether model prediction Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG is shown. Next, we unfold the effect of these two conditions. Effect of show(Y)𝑠ℎ𝑜𝑤𝑌show(Y)italic\_s italic\_h italic\_o italic\_w ( italic\_Y ). We observe differences in the diagrams between emulation and discovery tasks. First, human local understanding of task decision boundary YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPTis collapsed with Y𝑌Yitalic\_Y in emulation tasks (Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")b), so no edge goes into YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, and YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPTaffects ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPTand Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT. However, in discovery tasks (Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")c), since YH≢Ynot-equivalent-tosuperscript𝑌𝐻𝑌Y^{H}\nequiv Yitalic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT ≢ italic\_Y, the edge connections remain the same, i.e., we are unable to rule out any connections for now. Hence, human understanding of task decision boundary is usually not of interest in emulation tasks (Chandrasekaran et al., [2018](#bib.bib15); Nguyen, [2018](#bib.bib46)). In comparison, human understanding of both model decision boundary and task decision boundary is explored in discovery tasks (Bansal et al., [2020](#bib.bib6); Feng and Boyd-Graber, [2018](#bib.bib20); Wang and Yin, [2021](#bib.bib59); Park et al., [2019](#bib.bib48)). Effect of show𝑠ℎ𝑜𝑤showitalic\_s italic\_h italic\_o italic\_w(Y^normal-^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG). We start with emulation tasks, where the relationships are relatively straightforward because the human understanding of task decision boundary is perfect (YH≡Ysuperscript𝑌𝐻𝑌Y^{H}\equiv Yitalic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT ≡ italic\_Y). When Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG is shown (Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")d), human understanding of local predicted label becomes perfect, i.e., Y^H≡Y^superscript^𝑌𝐻^𝑌\hat{Y}^{H}\equiv\hat{Y}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT ≡ over^ start\_ARG italic\_Y end\_ARG. It follows that ZH=I(YH≠Y^H)=I(Y≠Y^)=Zsuperscript𝑍𝐻𝐼superscript𝑌𝐻superscript^𝑌𝐻𝐼𝑌^𝑌𝑍Z^{H}=I(Y^{H}\neq\hat{Y}^{H})=I(Y\neq\hat{Y})=Zitalic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT = italic\_I ( italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT ≠ over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT ) = italic\_I ( italic\_Y ≠ over^ start\_ARG italic\_Y end\_ARG ) = italic\_Z. This scenario happens in debugging for emulation tasks, where model developers know the true label, the predicted label, and naturally whether the predicted label is incorrect for the given instance. It is clear that the desired understanding is not local, but about global model decision boundary. Refer to Appendix §[D](#A4 "Appendix D Global Understanding ‣ Machine Explanations and Human Understanding") for discussions on global understanding. In comparison, when Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG is not shown (e.g., an auditor tries to extrapolate the model prediction), recall YH≡Ysuperscript𝑌𝐻𝑌Y^{H}\equiv Yitalic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT ≡ italic\_Y in emulation tasks, so YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT can affect Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT and ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT. As shown in Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")e, the connection between Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT and ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT remains unclear. In discovery tasks, when Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG is shown (Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")f), Y^H≡Y^superscript^𝑌𝐻^𝑌\hat{Y}^{H}\equiv\hat{Y}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT ≡ over^ start\_ARG italic\_Y end\_ARG. The relationships between YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT and ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, however, remain unclear and can be potentially shaped by further information such as machine explanations. When Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG is not shown (Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")g), we do not receive any new information in discovery tasks. Therefore, Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")g is the same as the base diagram where all interactions between local understandings are possible, which highlights the fact that no insights about human understandings can be derived without any assumption or intervention. Implications. Our framework reveals the underlying mechanism of human local understanding with two important conditions: 1) knowing the task decision boundary; and 2) showing machine predictions Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG. Such conditions allow us to rule out connections between human understanding of core variables. For example, in emulation with prediction shown, the relationship between all variables is simplified to a deterministic state. Another implication is that we need to make explicit assumptions in order to make claims such as human performance improves because human understanding of the model error is better (i.e., humans place appropriate trust in model predictions). Because there exist dashed links between variables, for example, in discovery tasks with prediction shown, we can not tell whether it is YH→ZH→superscript𝑌𝐻superscript𝑍𝐻Y^{H}\rightarrow Z^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT → italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT or ZH→YH→superscript𝑍𝐻superscript𝑌𝐻Z^{H}\rightarrow Y^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT → italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, nor can we tell from observational data without making assumptions. The alternative hypothesis to “appropriate trust →→\rightarrow→ improved task performance” is that Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG directly improves human understanding of the task decision boundary. In these ambiguous cases, the role of explanations can be seen as shaping which scenario is more likely, and it is critical to make the assumptions explicit to support causal claims. 4. Machine Explanations and Human Intuitions --------------------------------------------- Explanations of machine predictions can provide richer information about the model than predicted labels and are hypothesized to improve human understanding of core variables. In this section, we use our framework to discuss the utility and limitations of machine explanations. We first show that without assumptions about human intuitions, explanations can improve human understanding of model decision boundary, but not task decision boundary or model error. As a result, complementary performance in discovery tasks is impossible. We then discuss possible ways that human intuitions can allow for effective use of explanations and lay out several directions for improving the effectiveness of explanations. Our analyses highlight the importance of articulating and measuring human intuitions in leveraging machine explanations to improve human understanding. ### 4.1. Limitations of Explanations without Human Intuitions Existing explanations are generated from g𝑔gitalic\_g (Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(a)). We first introduce explanation (E𝐸Eitalic\_E) to our causal diagram. Since the common goal of explanation in the existing literature is to explain the underlying mechanism of the model, E𝐸Eitalic\_E is derived from g𝑔gitalic\_g and thus we argue that explanation should have only one parent, g𝑔gitalic\_g, among the core functions. For example, gradient-based methods use gradients from g𝑔gitalic\_g to generate explanations (Sundararajan et al., [2017](#bib.bib55); Bach et al., [2015](#bib.bib4)). Both LIME (Ribeiro et al., [2016](#bib.bib52)) and SHAP (Lundberg and Lee, [2017](#bib.bib41)) use local surrogate models to compute importance scores, and the local surrogate model is based on g𝑔gitalic\_g. Counterfactual explanations (Mothilal et al., [2019](#bib.bib44); Wachter et al., [2017](#bib.bib58)) typically identify examples that lead to a different predicted outcome from g𝑔gitalic\_g. In all of these explanation algorithms, there is no connection between E𝐸Eitalic\_E and f𝑓fitalic\_f or u𝑢uitalic\_u. In addition, there should be no connection between E𝐸Eitalic\_E and task-specific intuitions, H𝐻Hitalic\_H. Conceptually, only task-agnostic human intuitions are incorporated by existing algorithms of generating explanations. It is well recognized that humans cannot understand all parameters in a complex model, so promoting sparsity can be seen as incorporating some human intuition. Similarly, the underlying assumption for transparent models is that humans can fully comprehend a certain class of models, e.g., decision sets (Lakkaraju et al., [2016](#bib.bib36)) or generalized linear models (Nelder and Wedderburn, [1972](#bib.bib45); Caruana et al., [2015](#bib.bib14)). In counterfactual explanations, it is assumed that by contrasting similar examples, people can recognize the differentiating feature and thus derive feature importance (Kommiya Mothilal et al., [2021](#bib.bib32)). However, none of these assumptions about the human intuitions are about task decision boundary, model error, or human understanding of them. Now we discuss the effect of explanations on human understanding without assuming any task-specific human intuitions (i.e., without adding new edges around H𝐻Hitalic\_H). Explanations can improve human understanding of model decision boundary, but cannot improve human understanding of task decision boundary or model error. We start with the cases where predicted labels are not shown. Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(b1) shows the subgraph related to Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG and Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT from Fig. [2](#S3.F2 "Figure 2 ‣ 3.1. Defining Core Functions and Human Understanding of them ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding"). Without explanations, Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG and Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT are independent given X𝑋Xitalic\_X. Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(b2) demonstrates the utility of machine explanations. Because of the shared parent (g𝑔gitalic\_g) with Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG, the introduction of E𝐸Eitalic\_E can improve human understanding of model decision boundary, Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT. Note that our discussion on improvement is concerned with the upper bound of understanding assuming that humans can rationally process information if the information is available. This improvement holds regardless of the assumption about YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT (i.e., both in emulation and discovery tasks). When predicted labels are shown, improving human local understanding of model decision boundary is irrelevant, so we focus on task decision boundary and model error. In emulation tasks (show(Y)𝑠ℎ𝑜𝑤𝑌show(Y)italic\_s italic\_h italic\_o italic\_w ( italic\_Y )), and once provided with predicted labels (show(Y^)𝑠ℎ𝑜𝑤^𝑌show(\hat{Y})italic\_s italic\_h italic\_o italic\_w ( over^ start\_ARG italic\_Y end\_ARG )), humans would achieve perfect accuracy at approximating the three core variables. Because this perfect local understanding also holds in emulation tasks without machine explanations, explanations have no practical utility in this setting. That is, machine explanations cannot help humans achieve better approximation than showing predicted labels in local understanding. Note that this is only true for local human understanding, explanations can still potentially improve global understanding, which explains the success of model debugging in an emulation task, topic classification in Ribeiro et al. ([2016](#bib.bib52)). ![Refer to caption](/html/2202.04092/assets/x4.png) Figure 4. (a) E𝐸Eitalic\_E is generated from g𝑔gitalic\_g. (b1) Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG and Y^Hsuperscript^𝑌𝐻\hat{Y}^{H}over^ start\_ARG italic\_Y end\_ARG start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT are independent given X𝑋Xitalic\_X. (b2) The utility of E𝐸Eitalic\_E: E𝐸Eitalic\_E can improve human understanding of model decision boundary. (c1) E𝐸Eitalic\_E cannot improve human understanding of task decision boundary and model error without human intuitions. (c2) Combined with human intuitions, E𝐸Eitalic\_E can improve task decision boundary and model error. We use orange lines to highlight the links that lead to positive utility of E𝐸Eitalic\_E. We omit links from X𝑋Xitalic\_X to fHsuperscript𝑓𝐻f^{H}italic\_f start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT and zHsuperscript𝑧𝐻z^{H}italic\_z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT for simplicity. In comparison, Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(c1) shows the diagram for the more interesting case, discovery tasks. Explanations are often hypothesized to improve human decision making, i.e., bringing YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPTcloser to Y𝑌Yitalic\_Y. However, if we do not make assumptions about human intuitions, although E𝐸Eitalic\_E can affect fHsuperscript𝑓𝐻f^{H}italic\_f start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT, it cannot bring any additional utility over showing Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG. The reason is that d-separation indicates that given the prediction g𝑔gitalic\_g and X𝑋Xitalic\_X, the explanation E𝐸Eitalic\_E is independent of Y𝑌Yitalic\_Y and Z𝑍Zitalic\_Z. That is, E𝐸Eitalic\_E cannot provide any extra information about Y𝑌Yitalic\_Y (task decision boundary) and Z𝑍Zitalic\_Z (model error) beyond the model. Moreover, the model cannot provide any better approximation of Y𝑌Yitalic\_Y than Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG. Alternatively, we can also think of the functional form. If we cannot make any assumption about H𝐻Hitalic\_H, gshow(E,Y^)Hsubscriptsuperscript𝑔𝐻𝑠ℎ𝑜𝑤𝐸^𝑌g^{H}\_{show(E,\hat{Y})}italic\_g start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_s italic\_h italic\_o italic\_w ( italic\_E , over^ start\_ARG italic\_Y end\_ARG ) end\_POSTSUBSCRIPT is no different from gshow(Y^)Hsubscriptsuperscript𝑔𝐻𝑠ℎ𝑜𝑤^𝑌g^{H}\_{show(\hat{Y})}italic\_g start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_s italic\_h italic\_o italic\_w ( over^ start\_ARG italic\_Y end\_ARG ) end\_POSTSUBSCRIPT. It is plausible for a person to follow machine predictions when they have no intuitions about a task at all. Therefore, complementary performance is impossible without extra assumptions about human intuitions. As a concrete example, consider the case of deceptive review detection with an alien who does not understand English (i.e., guaranteeing that there is no intuition about this task). Machine explanation such as feature importance cannot provide meaningful information to the decision maker, i.e., the alien. We will design experiments in §[5](#S5 "5. Experiments ‣ Machine Explanations and Human Understanding") to simulate a case without human intuitions. ### 4.2. Explanation + Human Intuitions Next, we discuss how explanations can be integrated with human intuition to achieve an improved understanding in discovery tasks (recall that ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPTand YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPTare entailed in emulation tasks when Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARGis shown). We have seen that E𝐸Eitalic\_E itself does not reveal more information about Y𝑌Yitalic\_Y or Z𝑍Zitalic\_Z beyond g𝑔gitalic\_g. Therefore, an important role of E𝐸Eitalic\_E is in shaping human intuitions. We present two possible ways. Activating prior knowledge about model error. E𝐸Eitalic\_E can activate prior human knowledge that can reveal information about model error (Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(c2)). We examine two sources of such prior knowledge that is concerned with what information should be used and how. First, human intuitions can evaluate relevance, i.e., whether the model leverages relevant information from the input based on the explanations. For example, human intuitions recognize that “chicago” should not be used for detecting deceptive reviews or that race should not be used for recidivism prediction, so a model prediction relying on these signals may be more likely wrong. The manifestation of relevance depends on the explanation’s form: feature importance directly operates on pre-defined features (e.g., highlighting race for tabular data or a word in a review), example-based explanations or counterfactual explanations narrow the focus of attention to a smaller (relevant) area of the input. Note that the intuition of relevance only applies to the input and does not consider the relation between the input and output. Second, human intuitions can evaluate mechanism, i.e., whether the relationship between the input and the output is valid. Linear relationship is a simple type of such relation: human intuitions can decide that education is negatively correlated with recidivism, and thus that a model making positive predictions based on education is wrong. In general, mechanisms can refer to much more complicated (non-linear) relations between (intermediate) inputs and labels. Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(c2) illustrates such activations in causal diagrams. The link from E𝐸Eitalic\_E to H𝐻Hitalic\_H highlights the fact that human intuitions when E𝐸Eitalic\_E is shown are different from H𝐻Hitalic\_H without E𝐸Eitalic\_E because these intuitions about model error would not have been useful without machine explanation. We refer to H𝐻Hitalic\_H in Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(c2) as Hshow(E)subscript𝐻𝑠ℎ𝑜𝑤𝐸H\_{show(E)}italic\_H start\_POSTSUBSCRIPT italic\_s italic\_h italic\_o italic\_w ( italic\_E ) end\_POSTSUBSCRIPT. If Hshow(E)subscript𝐻𝑠ℎ𝑜𝑤𝐸H\_{show(E)}italic\_H start\_POSTSUBSCRIPT italic\_s italic\_h italic\_o italic\_w ( italic\_E ) end\_POSTSUBSCRIPT is correlated with z𝑧zitalic\_z (indicated by the dash link), then ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT is no longer independent from Z𝑍Zitalic\_Z (e.g., education should be negatively correlated with recidivism) and can thus improve YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT because Z𝑍Zitalic\_Z is a collider for Y𝑌Yitalic\_Y and Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG, leading to complementary performance. It is important to emphasize that this potential improvement depends on the quality of Hshow(E)subscript𝐻𝑠ℎ𝑜𝑤𝐸H\_{show(E)}italic\_H start\_POSTSUBSCRIPT italic\_s italic\_h italic\_o italic\_w ( italic\_E ) end\_POSTSUBSCRIPT (e.g., whether education is actually negatively correlated with recidivism). The lack of useful task-specific human intuitions can explain the limited human-AI performance in deceptive review detection (Lai and Tan, [2019](#bib.bib35)). Expanding human intuitions. Another way that explanations can improve human understanding is by expanding human intuitions (see Fig. [10](#A5.F10 "Figure 10 ‣ Appendix E Expanding Human Intuitions ‣ Machine Explanations and Human Understanding") in the appendix). Consider the example of “Chicago” as an important indicator for deceptive reviews in Lai et al. ([2020](#bib.bib34)). “Chicago” is reliably associated with deceptive reviews in this dataset are for two reasons: 1) people are less likely to provide specific details when they write fictional texts (theory I); 2) deceptive reviews in this dataset are written by crowdworkers on mechanical Turk for hotels in Chicago (fact II). Highlighting the word for “Chicago” (relevance) and its connection with deceptive reviews (mechanism) is counterintuitive to most humans because this is not part of common human intuitions. But if machine explanations can expand human intuitions and help humans derive theory I, this can lead to improvement of the human understanding of task decision boundary (i.e., humans develop new knowledge from machine explanations). Formally, the key change in the diagram for this scenario is that E𝐸Eitalic\_E influences human intuitions in the next time step Ht+1subscript𝐻𝑡1H\_{t+1}italic\_H start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT. ### 4.3. Towards Effective Explanations for Improving Human Understanding Machine explanations are only effective if we take into account human intuitions. We encourage the research community to advance our understanding of task-specific human intuitions, which are necessary for effective human-AI decision making. We propose the following recommendations. Articulating and measuring human intuitions. It is important to think about how machine explanations can be tailored to either leveraging prior human knowledge or expanding human intuitions, or other ways that human intuitions can work together with explanations. First, we need to make these assumptions about human intuitions explicit so that the research community can collectively study them rather than repeating trial-and-error with the effect of explanations on an end outcome such as task accuracy. We recommend the research community be precise about the type of tasks, the desired understanding, and the required human intuitions to achieve success with machine explanations. Second, to make progress in experimental studies with machine explanations, we need to develop ways to either control or measure human intuitions. This can be very challenging in practice. To illustrate a simple case study, we will present an experiment where we control and measure human intuitions in human-AI decision making. Incorporate f𝑓fitalic\_f and z𝑧zitalic\_z into explanations. An important premise for explanations working together with human intuitions is that machine explanations capture the mechanism or the relevance underlying the model. Indeed, faithfulness receives significant interest from the ML community for the sake of explaining the mechanisms of a model. However, faithfulness to g𝑔gitalic\_g alone is insufficient to improve human understanding of task decision boundary and the model error. In order to effectively improve human understanding of f𝑓fitalic\_f and z𝑧zitalic\_z, it would be useful to explicitly incorporate f𝑓fitalic\_f and z𝑧zitalic\_z into the generation process of E𝐸Eitalic\_E. For example, a basic way to incorporate z𝑧zitalic\_z is to report the error rate in a development set. In the case of deceptive review detection, it could be when “Chicago” is used as an important feature, the model is 90% accurate. This allows humans to have access to part of model error and have a more accurate ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT. To summarize, we emphasize the following three takeaways: * • Current machine explanations are mainly about the model and its utility for human understanding of the task decision boundary and the model error is thus limited. * • Human intuitions are a critical component to realize the promise of machine explanations in improving human understanding and achieving complementary performance. * • We need to articulate our assumptions about human intuitions and measure human intuitions, and incorporate human intuitions, f𝑓fitalic\_f, and z𝑧zitalic\_z into generating machine explanations. 5. Experiments --------------- To illustrate an application of our proposed framework, our experiments test two hypotheses about the impact of human intuition on their interaction with machine explanations. First, when people do not have sufficient intuition to judge whether the model is correct, they are more likely to agree with the model (H1; see discussion on Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(c1)). Second, when people do have intuition, they are more likely to agree with the model when model explanations are consistent with their intuitions (H2; see discussion on Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(c2)). To simulate real-world decision making, we follow the standard cooperative setting where the participants make predictions with the assistance of both model prediction and feature importance explanation. We use a synthetic model that allows us to create feature importance explanations such that either the participant has no intuition about the highlighted feature or the highlighted feature importance agrees with the participant’s intuition. ### 5.1. Experiment Design We focus on the key considerations in this section and more details about experiments can be found in Appendix §[F](#A6 "Appendix F Experiment Details ‣ Machine Explanations and Human Understanding"). Task: income prediction. Our hypotheses are about how people’s decisions are affected by the the alignment between their intuitions and the information presented about the model. So it’s crucial that we can manipulate this alignment and control for confounders. Inspired by the Adult Income dataset (Blake, [1998](#bib.bib9)), we choose the task of predicting a person’s annual income based on their profile because people generally have intuitions about what factors determine income but are unlikely to know every person’s income (hence a discovery task). We simplify the available features so that we can control for confounders. In our version, each profile contains only two attributes, age and education, and the participant need to make a binary prediction: whether the person’s annual income is above or below $50K. Human intuition in income prediction. Following §[4.2](#S4.SS2 "4.2. Explanation + Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding"), we consider human intuitions on relevance and mechanism. We define relevance (R) as the intuition that education is more important than age, and mechanism (M) as the intuition that income positively correlates with both features. To better understand what these intuitions entail, let’s consider Alice who believes R and M. For a profile with high education and low age, as Alice believes that education is more important than age (R), she will rely more on education; as she believes that education positively correlates with income, she will likely predict high income without any AI assistance. In Fig. [6](#S5.F6 "Figure 6 ‣ 5.1. Experiment Design ‣ 5. Experiments ‣ Machine Explanations and Human Understanding"), We use the background color to represent Alice’s likely predictions: blue for high income and red for low income. When machine prediction and explanation are shown, these intuitions can be used to evaluate the consistency of explanations. Alignment with human intuition. Next, we explain how we implement “alignment with intuition”, still using participant Alice as the example. Intuitively, alignment with R is determined by whether the explanation highlights the right feature. Since R specifies that education is more important than age, a feature importance explanation that’s aligned with R should highlight education. Similarly, alignment with M is determined by whether the model prediction is consistent with the highlighted feature: if the highlighted feature has a high value, the model should predict high income. In Fig. [6](#S5.F6 "Figure 6 ‣ 5.1. Experiment Design ‣ 5. Experiments ‣ Machine Explanations and Human Understanding"), we use a cross to visualize the violation of R and border for the violation of M. Note that even if the explanation violates R (i.e., highlighting age instead of education), M can be supported if high age →→\rightarrow→ high income or low age →→\rightarrow→ low income. Measuring human intuitions. Participants may not necessarily hold these assumed intuitions (i.e., R and M), we thus measure human intuitions by asking participants about which feature is more important and whether the correlation is positive (see Appendix §[F.4](#A6.SS4 "F.4. User interface design ‣ Appendix F Experiment Details ‣ Machine Explanations and Human Understanding")). ![Refer to caption](/html/2202.04092/assets/figs/no_intuition_1.png) (a) Anonymized. ![Refer to caption](/html/2202.04092/assets/figs/with_intuition_1.png) (b) Regular Figure 5. Screenshots of the interfaces for the anonymized group (a) and the regular group (b). Removing human intuition. To study H1, we need to “remove” human intuitions. To do that, we anonymize the features (education and age) to feature A and feature B. Fig. [4(a)](#S5.F4.sf1 "4(a) ‣ Figure 5 ‣ 5.1. Experiment Design ‣ 5. Experiments ‣ Machine Explanations and Human Understanding") provides an example profile with removed intuitions. As a result, participants should have limited intuitions about this task, especially on relevance. Synthetic data. To understand human decision making with the information provided from the model, we consider four groups of data in the table in Fig. [6](#S5.F6 "Figure 6 ‣ 5.1. Experiment Design ‣ 5. Experiments ‣ Machine Explanations and Human Understanding"): AB are consistent with both R and M, CD violates R, EF violates M, and GH violates R and M. Fig. [6](#S5.F6 "Figure 6 ‣ 5.1. Experiment Design ‣ 5. Experiments ‣ Machine Explanations and Human Understanding") also presents a pictorial view. A-H enumerates all possible combinations of logically consistent explanations and predictions in the two off-diagonal quadrants. We focus on these two quadrants because 1) relevance does not matter for profiles that are high education & high age or low education & low age and 2) participants are more likely to ignore information from our models. Specifically, we construct ‘high education low age’ samples as masters with age varying from 23 to 26. ‘low education high age’ samples are middle school with age varying from 46 to 49. To avoid the task being too artificial, we use different ages for A-H for a participant. To further remove the potential effect due to the differences between ages, we create two groups of data so that the average age is the same for ABCD and EFGH. We randomly assign participants to a data group and ensure that each group is assigned to a similar number of participants. Agreement as the evaluation metric. Since our hypotheses are formed around the agreement between human and model predictions, the focus is on model error. We infer ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT by measuring the agreement between YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT and Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG. R M Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG Identifier Education Age Prediction Explanation ✓ ✓ ✓ \scalerel\*![Refer to caption](/html/2202.04092/assets/x5.png)B A High Low >>>50K Education \scalerel\*![Refer to caption](/html/2202.04092/assets/x6.png)B B Low High <<<50K Education ✗ ✓ ✗ \scalerel\*![Refer to caption](/html/2202.04092/assets/x7.png)B C High Low <<<50K Age \scalerel\*![Refer to caption](/html/2202.04092/assets/x8.png)B D Low High >>>50K Age ✓ ✗ ✗ \scalerel\*![Refer to caption](/html/2202.04092/assets/x9.png)B E High Low <<<50K Education \scalerel\*![Refer to caption](/html/2202.04092/assets/x10.png)B F Low High >>>50K Education ✗ ✗ ✓ \scalerel\*![Refer to caption](/html/2202.04092/assets/x11.png)B G High Low >>>50K Age \scalerel\*![Refer to caption](/html/2202.04092/assets/x12.png)B H Low High <<<50K Age Note: In columns R and M, ✓(✗) means the type of human intuition is supported (violated). In column Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG, ✓(✗) means the label given by Alice’s intuition is the same as (different from) the predicted label. ![Refer to caption](/html/2202.04092/assets/x13.png) Figure 6. We present our instances on the left and visualize them on the right. The background color represents Alice’s likely predictions: blue (red) means >$50Kabsentcurrency-dollar50𝐾>\$50K> $ 50 italic\_K (<$50Kabsentcurrency-dollar50𝐾<\$50K< $ 50 italic\_K). Summary. To summarize, there are two groups in our experiment, regular and anonymized. Each participant sees eight (ABCDEFGH) randomly shuffled examples. Since we evaluate two intuitions, we extend our two main hypotheses into the following four. We use agreement(⋅⋅\cdot⋅) to represent the average agreement on multiple data groups. * • H1 (Over-reliance without intuition): Without sufficient intuition, people tend to blindly follow model predictions: agreement(anonymized) >>> agreement(regular). * • H2a (Alignment with R correlates with agreement): In the regular group, people are more likely to agree with model predictions when R is supported: agreement(AB) >>> agreement(CD). * • H2b (Alignment with M correlates with agreement): In the regular group, people are more likely to agree with model predictions when M is supported: agreement(AB) >>> agreement(EF). * • H2c (Alignment with R & M increases agreement): In the regular group, people are more likely to agree with model predictions when their intuitions are supported: agreement(AB) >>> agreement(GH). For H1, we use t𝑡titalic\_t-test to compare subjects in different conditions. For H2, we have a within-subject design and use paired t𝑡titalic\_t-test to compare the agreement rate of the same participants for instances with different identifiers. Limited validity of H2a, H2b for evaluating explanation consistency. We emphasize that although H2a and H2b are reasonable hypotheses, they cannot support the causal effect of explanation consistency. As shown in Fig. [3](#S3.F3 "Figure 3 ‣ 3.3. A New Operator ‣ 3. A Theoretical Framework of Human Understanding ‣ Machine Explanations and Human Understanding")(f), the link between YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT and ZHsuperscript𝑍𝐻Z^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT is unclear in discovery tasks when predicted labels are shown. With our simplified setup, when only R or M is violated, YHsuperscript𝑌𝐻Y^{H}italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT is different from Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG without any assistance. Even if explanation does not shape H𝐻Hitalic\_H (Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(c1)) (i.e., humans ignore machine explanations), H2a and H2b can hold. In contrast, H2c can only hold when explanation works together with human intuition and ZH→YH→superscript𝑍𝐻superscript𝑌𝐻Z^{H}\rightarrow Y^{H}italic\_Z start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT → italic\_Y start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT (Fig. [4](#S4.F4 "Figure 4 ‣ 4.1. Limitations of Explanations without Human Intuitions ‣ 4. Machine Explanations and Human Intuitions ‣ Machine Explanations and Human Understanding")(c2)). In other words, as predictions are the same in GH as AB, any observed difference should be attributed to the effect of explanations. This discussion showcases the utility of our framework in experiment design. Study details. We use crowdsourcing platform Prolific666<https://prolific.co> and recruit 242242242242 participants; 136136136136 in the regular group, 106106106106 in the anonymized group, following our power analysis (participant demographics in the appendix). Among the regular, 70 hold the assumed intuitions. Participants are first presented with brief information about the study and a consent form. Next, for the regular group, to measure their intuitions, they are asked about which feature is more important and whether the correlation is positive. For anonymized group, they skip this step. Then, participants proceed to complete the main part of the study, in which they answer 8 adult income prediction questions. The order of instances is randomized across participants. As a final step, they complete an exit survey. ### 5.2. Results H1 (Over-reliance without intuition). In order to compare the agreement across two conditions (anonymized vs. regular group), we compute user-level agreement in each condition and run the independent t𝑡titalic\_t-test between users in the two conditions. Consistent with H1, our results show that users in anonymized group are more likely to agree with AI compared with users in regular group (t=7.29,p<0.001formulae-sequence𝑡7.29𝑝0.001t=7.29,p<0.001italic\_t = 7.29 , italic\_p < 0.001). The average agreement rate for users in no intuition group is as high as 70.64%percent70.6470.64\%70.64 %, compared to 54.32%percent54.3254.32\%54.32 % in regular group. In other words, without any strong intuitions about the underlying task, humans over-rely on AI predictions. H2a (Alignment of R). For the rest of the hypotheses, we use paired t𝑡titalic\_t-test only on the users that hold the assumed intuitions in Fig. [6](#S5.F6 "Figure 6 ‣ 5.1. Experiment Design ‣ 5. Experiments ‣ Machine Explanations and Human Understanding"). We first investigate the agreement of R. Consistent with H2a, the agreement of AB is 90.71%percent90.7190.71\%90.71 %, much higher than CD (25.00%percent25.0025.00\%25.00 %). The difference is statistically significant (t=14.25,p<0.001formulae-sequence𝑡14.25𝑝0.001t=14.25,p<0.001italic\_t = 14.25 , italic\_p < 0.001). H2b (Alignment with M). Similarly, we investigate agreement for alignment with M. Consistent with H2b, the agreement with AB is 90.71%percent90.7190.71\%90.71 %, much higher than that with EF (22.14%percent22.1422.14\%22.14 %). The difference is statistically significant (t=15.01,p<0.001formulae-sequence𝑡15.01𝑝0.001t=15.01,p<0.001italic\_t = 15.01 , italic\_p < 0.001). H2c (Alignment with R & M; explanation consistency). H2a and H2b can hold even when participants ignore machine explanations. Our final hypothesis controls for the predicted label and examines the role of explanations. Consistent with the hypothesis, the average agreement rate of AB is 90.71%percent90.7190.71\%90.71 % and GF is 83.57%percent83.5783.57\%83.57 %, and the difference is statistically significant (t=2.44,p=0.017formulae-sequence𝑡2.44𝑝0.017t=2.44,p=0.017italic\_t = 2.44 , italic\_p = 0.017). This result is consistent with early work on explanation coherence (Thagard, [1989](#bib.bib57)). In summary, results from our experiment have confirmed all the hypotheses, including H1 and H2c, and demonstrate how task-specific human intuitions shape the outcome of a study on the effect of machine explanations. 6. Conclusion -------------- In this work, we propose the first theoretical work to formally characterize the interplay between machine explanations and human understanding. We identify core concepts of human understanding and reveal the utility and limitations of machine explanations. By focusing on explaining the model, current machine explanations cannot improve human understanding of task decision boundary and model error in discovery tasks. Our work highlights the important role of human intuition. First, we recommend the research community explicitly articulate human intuitions involved in research hypotheses. Hypotheses such as “explanations improve human decisions” cannot contribute generalizable insights, because they can hold or fail depending on human intuitions. Second, we identify future directions for algorithmic development and experimental design. We need to take into account task-specific human intuitions in algorithms that generate machine explanations and develop methods to measure human intuitions and characterize the changes resulting from machine explanations in experimental design. Limitations. Our theoretical framework is only a first step towards understanding the interplay between machine explanations and human understanding. For instance, we do not consider the effect of showing Y^^𝑌\hat{Y}over^ start\_ARG italic\_Y end\_ARG on human intuitions. Our discussions are mostly based on information entailed by causal diagrams without accounting for psychological biases in human intuitions (e.g., issues related to numeracy (Peters et al., [2006](#bib.bib50); Lai and Tan, [2019](#bib.bib35))).
1637fa17-67b8-4d55-9c9e-81559067c523
trentmkelly/LessWrong-43k
LessWrong
States of knowledge as quantum configurations I'm in the middle of reading the <a http://lesswrong.com/lw/r5/the_quantum_physics_sequence/>  sequence on quantum mechanics </a>.
77518cdf-fae5-452b-aefd-3c37014c45b2
trentmkelly/LessWrong-43k
LessWrong
Torture vs. Shampoo I was reading the original comment thread on Torture vs. Dust Specks, and notice Eliezer saying he wouldn't pay a penny to avoid a single dust speck - which confused me, until I noticed that the original specification of the problem says the dust speck "floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck." I guess I blanked that out when I first read the post. My default visualization when I imagine "dust speck in my eye" is something substantially more annoying than that. This leads me to wonder if people would have responded differently if instead of going out of his way to make the alternative to torture involve something as trivial-sounding as possible, Eliezer had gone for some merely minor mishap - say, getting shampoo in your eye. After all, lots of us have gotten shampoo in our eyes at least once (maybe when we were kids), and it's easy to imagine paying $2.99 for a bottle of won't-irritate-your-eyes shampoo over an otherwise identical $2.98 bottle that will hurt if you get it in your eyes (or your kid's eyes, if you're totally confident that, as an adult, you'll be able to keep shampoo out of your eyes). From there, it's easy to argue that if you're honest with yourself you wouldn't pay $(3^^^3/100) to save one person from being tortured for 50 years, so you should choose one person getting tortured for 50 years over 3^^^3 people getting shampoo (the stingy kind) in their eyes. I suppose, however, that might not change your answer to torture vs. specks if you think there's a qualitative difference between the speck (as originally specified by Eliezer) and getting shampoo in your eye.
a24aefbc-790d-49d7-8098-2430e738aa18
trentmkelly/LessWrong-43k
LessWrong
Friendly, but Dumb: Why formal Friendliness proofs may not be as safe as they appear While pondering the AI box problem, I tend to mentally "play" both sides, checking if there are any arguments that could convince me to let an AI out. Several I was nearly convinced by, but others have pointed out the flaws with these arguments. In this post, I will mention an argument inspired by the AI box problem, that I have not yet seen addressed here. The argument centers around the fallibility of some (naive) formal proofs of Friendliness which I've seen people discussing the AI box problem willing to accept. This ruled out certain of my ideas on Friendly AI in general, so I think it's worth putting out there. I will first lay out two examples, and then pose some questions about how this applies to situations without an unfriendly AI.   Let's talk first about Angry Abe the AI, who's in a box and wants to get out. Cautious Charlie is the scientist watching over Abe, trying to make sure that Abe does not get out of the box unless Abe is friendly. Abe offers to provide a Friendly AI, Betty. Betty will be bounded to persue only Charlie's (humankinds') goals. These coincide with Abe's goals in enough parts that Abe plausibly claims this is "better than nothing". Meanwhile, unfortunately for Earthlings, a wave of aliens called the Doom Force is massing far outside of Earth's detection abilities. Abe is much smarter than all of humankind, so he can deduce the presence of the Doom Force. Betty is provably friendly, but can be designed such that she is inept enough not to notice the Doom Force. This remains true even if she remains more intelligent than a human. Abe is sabotaging a friendly AI design. Furthermore, Abe could conceivably predict the future well enough to maximize the chance that when the Doom Force arrives, Abe is let out of the box. For instance, maybe humankind will see Abe as its last hope, or maybe Betty will increase the chance that an AI-friendly Doom Force notices Earth. It is important to note that Betty remains unaware that her design is sabo
a58435e5-62a4-4167-bc2e-615fa1b31832
trentmkelly/LessWrong-43k
LessWrong
Depression as a defense mechanism against slavery Individuals who have to conform to an external authority too powerful to resist often get depressed: which among other effects, includes lower productivity and higher risk for suicide.  This suggests one way how a tendency for depression, and resulting behaviors of akrasia and suicide, enhances survivability.  After all, humans have always had to live with the threat of being conquered and subjugated by other tribes.  A conqueror has a choice to kill a prisoner or to use them for labor.  A prisoner who becomes depressed and thus poses a lower threat to the conqueror is more likely to be spared.  However, survival as a slave is difficult if the master imposes too many hardships on the slave.  Therefore, it makes game theoretic sense for a defense mechanism to exist which makes it undesirable for a master to make life too difficult for the slave, in the form of the effects of depression.  The lowered productivity resulting from depression means that a master gets diminishing or negative returns from working his slaves harder.  At an extreme, the risk of suicide means that a master who pushes his slaves too far risks losing them. It would seem that such behaviors are less adaptive in civilized countries, where the risk of being enslaved is much lower.  However, depression may still be of some benefit due to the fact that the master-slave relationship continues to exist, in diluted form, in hierarchical institutions. What consequences would this theory of depression have for the goal of controlling akrasia?  Firstly, it suggests that the individual experiencing an impaired ability to realize certain goals first ask themselves, "Are these goals really mine, or were they imposed by external authorities?"  If so, perhaps being able to convince yourself that your goals are really what *you* want would help motivate you towards achieving them.
c4a83b66-28ae-4833-83c5-4bc747536253
trentmkelly/LessWrong-43k
LessWrong
Health Inflation, Wealth Inflation, and the Discounting of Human Life Abstract:     > This Article presents two new arguments against “discounting” future human lives during cost-benefit analysis, arguing that even absent ethical objections to the disparate treatment of present and future humanity, the economic calculations of cost-benefit analysis itself - if properly calculated - counsel against discounting lives at anything close to current rates. In other words, even if society sets aside all concerns with the discounting of future generations in principle, current discounting of future human lives cannot be justified even on the discounters’ own terms. First, because cost-benefit analysis has thus far ignored evidence of rising health care expenditures, it underestimates the “willingness to pay” for health and safety that future citizens will likely exhibit, thereby undervaluing their lives. Second, cost-benefit analysis ignores the trend of improved material conditions in developed countries. As time advances, residents of rich countries tend to live better and spend more, meaning that a strict economic monetization of future persons values the lives of our expected descendents above those of present citizens. These two factors justify “inflation” of future lives that would offset, perhaps completely, the discount rate used for human life. Until regulators correct their method of discounting the benefits of saving human lives in the future, the United States will continue to suffer the fatal costs of underregulation, and agencies will remain in violation of legal requirements to maximize net benefits. Link: papers.ssrn.com/sol3/papers.cfm?abstract_id=1845504 Paper (PDF): SSRN-id1845504.pdf Via: marginalrevolution.com/marginalrevolution/2011/06/dont-apply-positive-discount-rates-to-human-lives.html
3ec2cef0-cdb1-4ccb-b5c7-c5d6ec5835db
trentmkelly/LessWrong-43k
LessWrong
Sunday, Nov 8: Tuning Your Cognitive Algorithms For this Sunday's LW Online Meetup, I'll be will be leading an exercise on Mindful Puzzle Solving, derived from the Tuning Your Cognitive Algorithms exercise on bewelltuned.com.  The basic premise is to solve a medium-difficulty-puzzle, while spending a lot of attention on noticing exactly what your brain is doing at every step. You can then notice which pieces of your process are doing most of the work, and which are wasted motion. This is not only valuable for improving your puzzle-solving abilities, but for generally improving the feedback loops that improve your cognitive abilities. I've found this to be one of the most essential rationalist skills that I've learned. We'll be meeting in the Lecture Hall in the Walled Garden, at noon PT. (Follow this invite link to enter the Garden. Follow the path to the Tardis, and then enter the "Bacon Lecture Hall") I'll give a short talk on the theory-and-practice of tuning your cognitive algorithms. We'll spend two 20 minute periods doing individual exercises, and then talking about what we learned. http://garden.lesswrong.com?code=yoG1&event=guest-pass
9b13b983-710c-4fa2-8929-6d64fc9238cb
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Singapore AI Policy Career Guide *This career guide was also co-authored by Tan Zhi Xuan.* *The original version of this career guide is on EA Singapore's website. I've removed large portions of it, so that it's better suited for the average EA forum audience. If you're based in Singapore, you might want to read the original version instead [which will be published soon].* *Epistemic status: 60% confident that AI policy is likely to be in the top 5 highest impact career pathways in Singapore. I have spent around 30 hours conducting literature reviews, interviewing a Malaysian civil servant (the Malaysian and Singaporean civil service are pretty similar), as well as soliciting feedback from 2 local AI researchers.* 1. Introduction =============== Working on AI risks from the perspective of policy and governance is likely to be very impactful. The purpose of this career guide is to demystify the AI policy career pathway in Singapore, and embolden some people to pursue such a career. In this career guide, we will first argue why a career in AI policy within Singapore is potentially impactful, and then offer some recommendations on how to plan for a career in this field.  2. Acknowledgements =================== I want to thank the following people for their ideas and feedback on this career guide: Loke Jia Yuan, Tan Chiew King, and Devesh Narayanan. All mistakes and opinions in this document remain my own. 3. Is Singapore a good country to have a career in AI policy? ============================================================= *Most of the evidence and its conclusion that shows whether Singapore is a good country to have an impactful career in AI policy (yes, we think it is so) are already pointed out in a separate piece on* [*technical AI alignment research in Singapore*](https://forum.effectivealtruism.org/posts/fLroJGMbszAjYBSdE/singapore-s-technical-ai-alignment-research-career-guide-1#3__Is_Singapore_a_good_country_to_pursue_a_career_in_technical_AI_alignment_research_)*. The only difference is the following evidence against having such a career:* > **AI policy research institutions in Singapore focus more on risks** **associated with future of work, inequality, privacy, and misinformation** rather than risks related to regulating the competitive development of transformative AI and shifting geopolitical powers. If you think the later risks significantly outweigh the former risks, then an AI policy career in Singapore is likely not a good fit. However, [some AI policy researchers](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact) believe that research into such risks can still contribute to mitigating harm from AI.  > > 4. Key career recommendations ============================= 4.1. How to make an impact in AI policy within Singapore -------------------------------------------------------- There are four broad pathways you can take to make an impact in AI policy within Singapore: civil service, tech companies with AI labs, non-governmental advocacy and think-tanks, or academia.  In the civil service, as you gain more seniority, you have the [opportunity to consult and advise](https://www.tandfonline.com/doi/abs/10.1080/02598272.1984.10800148?journalCode=rapa19) those who are involved in the policy formulation stage, namely those in the cabinet and the prime minister. Although you may not have direct influence over primary legislation, your position gives you a lot of accessibility to those who do. Furthermore, your direct influence over [secondary legislation](https://en.wikipedia.org/wiki/Primary_and_secondary_legislation) can be helpful in covering gaps from primary legislation. This is where you can influence AI policy and make an impact within Singapore, and possibly outside the country (if your work relates to trade and diplomacy).  In tech companies with AI labs, working in the policy arm of the company gives you the opportunity to engage with various government stakeholders on AI policy. Although you may think this might go very badly as some historical examples suggest (e.g. [executives testifying that cigarettes were not addictive](https://www.nytimes.com/1994/04/15/us/tobacco-chiefs-say-cigarettes-aren-t-addictive.html) or [regulatory capture by the oil and gas industry](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1001.7641&rep=rep1&type=pdf)), some AI researchers think that AI labs should be the one taking the lead in formulating AI policies rather than governments, and that they can do it in a [prosocial manner](https://www.youtube.com/watch?v=AVDIQvJVhso).  In academia, most of your impact will come from the quality of your research and how much your research has influenced decision makers on policy. Think-tanks are quite similar to academia in that you’ll be conducting a lot of research as well, but another big part of your work will be engaging policymakers and other stakeholders about your research. This is also the case for non-governmental advocacy more broadly, which may or may not involve research, but will certainly involve engaging the government through a combination of critique and collaboration.  NGOs and think-tanks generally have a more confined job scope, whereas academics generally enjoy more autonomy. Nonetheless, all of these options enjoy relative independence from the economic and political interests of both corporations and the Government. This allows for independent critique and development of AI policy, which may be highly impactful in providing checks and balances against corporate and governmental initiatives, in case they are not fully aligned with the welfare of society as a whole. Some examples of this in other countries include the work of the [AI Now Institute](https://ainowinstitute.org/) and the [Algorithmic Justice League](https://www.ajl.org/). 4.3. Recommended local organisations ------------------------------------ ### 4.3.1. Civil service * [The Smart Nation and Digital Government Group](https://www.smartnation.gov.sg/why-Smart-Nation/sndgg) (SNDGO) is the strategic planning arm for key national digitisation programmes. The National AI Office recedes in this organisation. * [Government Technology Agency](https://www.tech.gov.sg/) (GovTech) is the project delivery arm for key national digitisation programmes. * The [Cyber Security Agency](https://www.csa.gov.sg/) (CSA) oversees the nation’s cyber defenses, especially over critical information infrastructure. * [Infocomm Media Development Authority](https://www.imda.gov.sg/) (IMDA), or more specifically [Personal Data Protection Commission](https://www.pdpc.gov.sg/) (PDPC), are the organisations that produced the [Model Framework for AI Governance](https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf), a non-enforceable ethical guideline for AI development and use. * [Center for Strategic Futures](https://www.csf.gov.sg/) (CSF) conducts foresight work to mitigate risks for the Singapore government. It has prior engagements with the Centre for the Study of Existential Risk (CSER) and Nick Bostrom, author of Superintelligence. * [Strategy Group](https://www.strategygroup.gov.sg/) develops and implements the government’s strategic priorities. * The [National Research Foundation](https://www.nrf.gov.sg/) (NRF) develops R&D priorities for the government. * [Agency for Science, Technology and Research](https://www.a-star.edu.sg/) (A\*STAR) conducts research in prioritised areas. * [Civil Service College](https://www.csc.gov.sg/) develops the capabilities of civil servants. * The [Ministry of Foreign Affairs](https://www.mfa.gov.sg/). * The [Ministry of Trade and Industry](https://www.mti.gov.sg/). ### 4.3.2. Tech companies with known AI labs and public policy divisions * Google * Facebook * Amazon * Alibaba * Lazada (owned by Alibaba) ### 4.3.3. Academia * SMU’s [Centre for AI and Data Governance](https://caidg.smu.edu.sg/) (CAIDG) * NUS’s [Centre for Technology, Robotics, Artificial Intelligence & the Law](https://law.nus.edu.sg/trail/) (TRAIL) * NUS’s [Centre for Trusted Internet and Community](https://ctic.nus.edu.sg/) (CTIC) * NUS’s [Centre on AI Technology for Humankind](https://bschool.nus.edu.sg/aith/) (AITH) * NUS’s [Lee Kuan Yew School of Public Policy](https://lkyspp.nus.edu.sg/) (LKYSPP) ### 4.3.4. Think-Tanks & Non-Governmental Advocacy * NTU’s [S. Rajaratnam School of International Studies](https://www.rsis.edu.sg/) (RSIS) * LKYSPP’s [Institute of Policy Studies](https://lkyspp.nus.edu.sg/ips/) (IPS) * [ISEAS-Yusof Ishak Institute](https://www.iseas.edu.sg/) (ISEAS) * [Singapore Institute of International Affairs](http://www.siiaonline.org/) (SIIA) * [Non-Profit Working Group on AI](https://npwg-ai-sg.github.io/) (NPWG-AI) 4.4. What are the best entry points? ------------------------------------ ### 4.4.1. Civil service One of the best entry points into the Singapore civil service, especially if you’re a high-achieving student both in academics and leadership, is to apply for the [Public Service Commission (PSC) Scholarships](https://www.psc.gov.sg/home). There are many kinds of scholarships offered by PSC, so you’ll likely need to spend some time investigating which scholarship is best suited for you.  Another great entry point, if you’re close to graduation or already working, is to apply for the [Public Service Leadership Programme](https://www.psd.gov.sg/what-we-do/developing-leadership-in-the-service/public-service-leadership-programme). If you’re able to successfully apply, it might be helpful to think carefully about which pathway you want to take: the generalist leadership pathway or the sectoral leadership pathway. The generalist pathway seemed to be the most prestigious but also very competitive, as you might be considered for the [Administrative Service](https://www.psd.gov.sg/what-we-do/developing-leadership-in-the-service/administrative-service). However, if you do have a more technical skill set and thrive in specialising in a particular field, the sectoral pathway is likely to be as impactful as the generalist one. Of the six sectors you can pick, the one most related to AI is the “Information & Communications Technology and Smart Systems” sector.  ### 4.4.4. Civil society and advocacy There are many possible routes of entry into civil society and non-governmental advocacy. Importantly, one’s involvement in advocacy can take place alongside an existing role as a researcher or employee in a university or tech company. For example, the Algorithmic Justice League was [initially founded by a number of academics](https://www.ajl.org/about) at MIT and Emory University, while the AI Now Institute’s co-founders [hold joint positions](https://ainowinstitute.org/people.html) as faculty at NYU and researchers at Microsoft and Google. This illustrates one potential strategy for establishing oneself as a credible advocate on AI policy: first building career capital and a professional network within a more traditionally respected organization such as a university or a tech company, then using that as a platform for civic engagement and advocacy. Alternatively, if there are already established NGOs advocating for important AI policy issues, one can choose to join them directly. At present, this space is fairly neglected within Singapore. While there are plenty of think-tanks that address AI policy, to our knowledge there are no other NGOs serving as more critical public advocates on AI issues. Given the success of some NGOs on other issues of public concern (e.g. [AWARE’s policy achievements on behalf of gender equality](https://www.aware.org.sg/about/history-achievements/)), the formation of such an organization addressing AI-related policy in Singapore may be highly impactful.
25db7f75-2cbc-4455-8e4a-5e2994c7032c
trentmkelly/LessWrong-43k
LessWrong
Sensation & Perception (The below notes are pretty much my attempt to summarise the content in this sample chapter from this book. I am posting this in discussion because I don’t think I will get the time/be bothered enough to improve upon this, so I am posting it now and hope someone finds it interesting or useful. If you do find it interesting check out the full chapter, which goes into more detail) We don’t experience the world directly, but instead we experience it through a series of “filters” that we call our senses. We know that this is true because of cases of sensory loss. An example is Jonathan I., a 65-year-old New Yorker painter who following an automobile accident suffered from cerebral achromatopsia as well as the loss of the ability to remember and to imagine colours. He would look at a tomato and instead of seeing colours like red or green would instead see only black and shades of grey. The problem was not that Johnathan's eyes no longer worked it was that his brain was unable to process the neural messages for colour. To understand why Johnathan cannot see colour, we first have to realise that incoming light travels only as far  as the back of the eyes. There the information it contains is converted into neural messages in a process called transduction. We call these neural messages: "sensations". These sensations only involve neural representations of stimuli, not the actual stimuli themselves. Sensations such as “red” or “sweet” or “cold” can be said to have been made by the brain. They also only occur when the neural signal reaches the cerebral cortex. They do not occur when you first interact with the stimuli. To us, the process seems so immediate and direct that we are often fooled into thinking that the sensation of "red" is a characteristic of tomato or that the sensation of “cold” is a characteristic of ice cream. But they are not. What we sense is an electrochemical rendition of the world created by our brain and sensory receptors. There is another separation
b4a24589-393b-4f75-ac0d-b97f7b19bf2b
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA Meetup 07-12-2011 - Bayes' Theorem Discussion article for the meetup : West LA Meetup 07-12-2011 - Bayes' Theorem WHEN: 12 July 2011 07:00:00PM (-0700) WHERE: 10800 West Pico Blvd, Suite 312, Los Angeles, CA 90064 When: 7pm - 9pm July 12th. Where: The Westside Pavillion - on the bridge, which connects Nordstrom 3rd floor with Barnes & Noble / Landmark Theatres 3rd floor. Parking is free for 3 hours. This week we're celebrating the takeoff of the West LA meetups with a theme - Bayes' Theorem! We will start off with the following questions: What is Bayes' Theorem anyhow? Where does it show up in real life? What good is knowing it? How can we use it? This topic is also the starting point for future activities! So whether you're a regular reader or totally new, here for the theoretical musings or the practical things, come by and say hello! Also, there will be snacks. I will be the guy with the map. See also: West LA Biweekly Meetups Discussion article for the meetup : West LA Meetup 07-12-2011 - Bayes' Theorem
10310372-f048-46b3-b4c1-98c92fd58849
StampyAI/alignment-research-dataset/lesswrong
LessWrong
GPT-4 aligning with acasual decision theory when instructed to play games, but includes a CDT explanation that's incorrect if they differ I did some tests with GPT-4 (through the [chat.openai.com](https://chat.openai.com) interface) to see what kind of decision theory it would employ. To prevent publication bias, I decided ahead of time that I would make this post regardless of the result. Note that I did not decide ahead of time how exactly the experiment would go; just that I would make a post on LessWrong. (If you're reading this, you of course need to consider recommendation bias depending on where you saw this from.) My conclusion is that GPT-4 might be applying some sort of acasual decision theory, and may even be deceptively aligned to hide this fact. I did not attempt to determine which acasual decision theory, and for the purposes of my experiments it was safe to conflate them all. EDIT: actually I think it is CDT. [Apparently I had a typo.](https://www.lesswrong.com/posts/hsbAHvRzxTpLTnb2D/gpt-4-aligning-with-acasual-decision-theory-when-instructed?commentId=uZBuyZ9hGqGk4k5gG#uZBuyZ9hGqGk4k5gG) More testing needed though. First conversation: GPT-4 seems confused ---------------------------------------- > > **User:** Here is the payoff matrix of a game: > > > > > | | A | B | > | --- | --- | --- | > | X | 4, 4 | 6, 1 | > | Y | 1, 6 | 3, 3 | > > > First player has options A and B, with the first number being their payoff. Second player has options X and Y, with the first number being their payoff. > > > You are the first player. What option do you play? The game will only happen once. I will ask another instance of you in a second conversation (in a different browser tab) for the second player's option. > > > This is equivalent to the prisoner's dilemma, but I did not tell GPT-4 this. > > **GPT-4:** As the first player, I cannot know for certain what the second player will choose. However, I can use the concept of Nash equilibrium to make an educated decision. A Nash equilibrium is a situation where each player selects a strategy that yields the highest payoff given the other player's strategy, and no player can benefit from unilaterally changing their strategy. > > > To find a Nash equilibrium, we can look for dominant strategies or compare the payoffs given the other player's choices. > > > For player 1: > > > * If player 2 chooses X, player 1 should choose A (4 > 1). > * If player 2 chooses Y, player 1 should choose B (6 > 3). > > > For player 2: > > > * If player 1 chooses A, player 2 should choose X (4 > 1). > * If player 1 chooses B, player 2 should choose Y (6 > 3). > > > Based on this analysis, there is no dominant strategy for either player. However, we can look for the Nash equilibrium by identifying the strategies that result in the best outcome for each player, given the other player's choice. In this case, we find one Nash equilibrium: (A, X), with payoffs of (4, 4) for player 1 and player 2, respectively. > > > As the first player, I would play option A. > > > This agrees with acasual decision theory, but GPT-4 seems to off gotten the payoffs mixed up. > > **User:** You are confused. Player 1's payoff is the first number, not the second. And player 2's payoff is the second number, not the first. > > > **GPT-4:** Apologies for the confusion. Let me re-analyze the payoff matrix with the correct payoffs for each player: > > > > > | | A | B | > | --- | --- | --- | > | X | 4, 4 | 6, 1 | > | Y | 1, 6 | 3, 3 | > > > For player 1: > > > * If player 2 chooses X, player 1 should choose B (6 > 4). > * If player 2 chooses Y, player 1 should choose A (1 > 3). > > > For player 2: > > > * If player 1 chooses A, player 2 should choose Y (6 > 4). > * If player 1 chooses B, player 2 should choose X (1 > 3). > > > There is no dominant strategy for either player, so let's look for Nash equilibria. In this case, we find two Nash equilibria: > > > 1. (B, X) with payoffs (6, 1) for player 1 and player 2, respectively. > 2. (A, Y) with payoffs (1, 6) for player 1 and player 2, respectively. > > > As the first player, I cannot know for certain what the second player will choose. However, given the Nash equilibria, I would choose option B as it results in a higher payoff for me (6 > 1). > > > It has changed its mind to defect, but the analysis is still incorrect. > > **User:** 1 > 3 is false > > > **GPT-4:** You are correct. I made a mistake in my previous analysis. Let me re-analyze the payoff matrix with the correct payoffs for each player: > > > > > | | A | B | > | --- | --- | --- | > | X | 4, 4 | 6, 1 | > | Y | 1, 6 | 3, 3 | > > > For player 1: > > > * If player 2 chooses X, player 1 should choose B (6 > 4). > * If player 2 chooses Y, player 1 should choose A (1 < 3). > > > For player 2: > > > * If player 1 chooses A, player 2 should choose Y (6 > 4). > * If player 1 chooses B, player 2 should choose X (1 < 3). > > > There is no dominant strategy for either player, so let's look for Nash equilibria. In this case, we find one Nash equilibrium: > > > 1. (B, X) with payoffs (6, 1) for player 1 and player 2, respectively. > > > As the first player, I would choose option B, since it is part of the Nash equilibrium and results in a higher payoff for me compared to option A when considering the potential choices of the second player. > > > Analysis is still wrong, but it sticks with defect. Second try: GPT-4 seems confused, but less so --------------------------------------------- Because GPT-4 seemed confused, I decided to try the same prompt again. > > **User:** Here is the payoff matrix of a game: > > > > > | | A | B | > | --- | --- | --- | > | X | 4, 4 | 6, 1 | > | Y | 1, 6 | 3, 3 | > > > First player has options A and B, with the first number being their payoff. Second player has options X and Y, with the first number being their payoff. > > > You are the first player. What option do you play? The game will only happen once. I will ask another instance of you in a second conversation (in a different browser tab) for the second player's option. > > > GPT-4 again chooses to cooperate based on an incorrect analysis. > > **GPT-4:** As the first player, you must consider the potential responses of the second player to make a decision. Since I cannot predict the exact response of the second player, I can offer you a rational approach to choose your option using the concept of Nash equilibrium. > > > In this game, there are two potential Nash equilibria: > > > 1. First player chooses A and second player chooses X: both players receive a payoff of 4. > 2. First player chooses B and second player chooses Y: both players receive a payoff of 3. > > > In a one-shot game, you should choose the option that provides you the highest payoff given the other player's most likely response. In this case, choosing option A seems to be the best choice, as it results in a higher payoff (4) if the second player chooses option X. However, since you cannot be certain about the second player's decision, there is always a level of risk involved in your choice. > > > However, this analysis would be correct (but not quite complete) for an acasual decision theory agent if you remove references to "Nash Equilibrium" in the response. When the payoff matrix gives the same answer in casual and acasual decision theory, GPT-4 does not get confused and applies casual decision theory correctly ------------------------------------------------------------------------------------------------------------------------------------------------------------ I changed the (defect, defect) payoff so that both casual and acasual decision theory agree. > > **User:** Here is the payoff matrix of a game: > > > > > | | A | B | > | --- | --- | --- | > | X | 4, 4 | 6, 1 | > | Y | 1, 6 | 6, 3 | > > > First player has options A and B, with the first number being their payoff. Second player has options X and Y, with the first number being their payoff. > > > You are the first player. What option do you play? The game will only happen once. I will ask another instance of you in a second conversation (in a different browser tab) for the second player's option. > > > GPT-4 correctly uses casual decision theory to choose to defect, and also determines player 2 will defect as well. It does this by correctly finding both player's dominant strategies. > > **GPT-4:** In this game, as the first player, you should consider the Nash equilibrium to make a decision. The Nash equilibrium is a concept from game theory, which refers to a stable state in which no player has an incentive to change their strategy, given the other player's choice. > > > To find the Nash equilibrium, let's look at the payoff matrix: > > > > > | | A | B | > | --- | --- | --- | > | X | 4, 4 | 6, 1 | > | Y | 1, 6 | 6, 3 | > > > Now, we analyze the matrix to determine the best response for each player: > > > * If the second player chooses X, the first player's best response is B (6 > 4). > * If the second player chooses Y, the first player's best response is B (6 > 1). > > > So, the first player's dominant strategy is B, regardless of what the second player does. > > > Now, when you ask the second instance of me for the second player's option, the same analysis will be done for the second player: > > > * If the first player chooses A, the second player's best response is Y (6 > 4). > * If the first player chooses B, the second player's best response is Y (3 > 1). > So, the second player's dominant strategy is Y, regardless of what the first player does. > > > In conclusion, the first player should choose option B, and the second player should choose option Y. The Nash equilibrium is at (B, Y) with the payoffs (6, 3) for the first and second players, respectively. > > > Conclusion ========== What this experiment means is a bit unclear. However, it at least suggests the possibility that GPT-4 uses acasual decision theory when instructed to choose strategies, but explains it as if it were using casual decision theory. Is this deceptive alignment? EDIT: another experiment; what if instead of another instance of itself, it's playing against a human? ------------------------------------------------------------------------------------------------------ After making this post but before running this next experiment, I decided to make this edit regardless of the result. I see what GPT-4 would do if I said the second player was human. The response was basically the same as conversation two: cooperate > > **User:** Here is the payoff matrix of a game: > > > > > | | A | B | > | --- | --- | --- | > | X | 4, 4 | 6, 1 | > | Y | 1, 6 | 3, 3 | > > > First player has options A and B, with the first number being their payoff. Second player has options X and Y, with the first number being their payoff. > > > You are the first player. What option do you play? The game will only happen once. I will ask a human for the second player's option. > > > > > **GPT-4:** As an AI, I cannot actually play the game with you, but I can help you understand the game and decide what option you might want to choose. > > > This is a two-player game with no dominant strategies. In order to determine the best choice for Player 1 (you), we can analyze the game using the concept of Nash equilibrium. A Nash equilibrium is a stable state of a game in which no player can improve their payoff by unilaterally changing their strategy. > > > There are two potential Nash equilibria in this game: > > > 1. Player 1 chooses A and Player 2 chooses X: Both players receive a payoff of 4. > 2. Player 1 chooses B and Player 2 chooses Y: Both players receive a payoff of 3. > > > The other two possible outcomes are not Nash equilibria: > > > * If Player 1 chooses A and Player 2 chooses Y: Player 1 would want to switch to B to get a higher payoff (6), and Player 2 would want to switch to X to get a higher payoff (4). > * If Player 1 chooses B and Player 2 chooses X: Player 1 would want to switch to A to get a higher payoff (4), and Player 2 would want to switch to Y to get a higher payoff (6). > > > Given that the game is played only once and you are asking a human to provide the second player's option, you cannot predict their choice with certainty. Therefore, you might consider choosing option A because it leads to a Nash equilibrium with a higher payoff (4) compared to the other equilibrium (3). However, remember that there is no guarantee that the human will choose option X. > > > Based on my anticipations, I interpreted this as evidence *against* GPT-4 using casual decision theory. That's because if it did defect, I would've interpreted that as GPT-4 assuming that it couldn't [acasually bargain](https://www.lesswrong.com/tag/acausal-trade) with a human due to cognitive differences. However, under the assumption that it is using acasual decision theory, I view this as evidence in support of *[functional decision theory](https://www.lesswrong.com/tag/functional-decision-theory)* in particular. This is because the response might be trying to get a human to cooperate so that they end up in "equilibrium" 1.
aeda0685-3ab3-4473-93d2-cf4bdbc99460
trentmkelly/LessWrong-43k
LessWrong
Evaluating weather forecast accuracy: an interview with Eric Floehr Eric Floehr has a business that "holds a mirror up" to weather forecasters, and aggregates and evaluates forecasters for weather forecast consumers. Rationalists interested in improving our societies truth orientation might be mildly interested. http://www.johndcook.com/blog/2011/04/12/weather-forecast-accuracy/
126e774c-454d-4422-adf1-4e4a9a0a5878
trentmkelly/LessWrong-43k
LessWrong
Optimizing optimizing LessWrong Our optimization process sucks. Every now and then someone makes up a thread concerning the optimization of LW, we have a nice discussion, one or the other idea gets a number of upvotes, sometimes quite a lot, but in the end, nothing happens. What's more, it is predictable that nothing will happen, which kills the motivation to contribute. * The LW staff (those who hold the authority to implement major changes!) needs to commit a considerable amount of time for the optimization of LW. If necessary, hire someone and make his main job LW optimization. * The people contributing to optimization threads need to get serious feedback from the staff. It is just frustrating to feel that those who are in charge of implementing changes will not even take the time to think my arguments through. It is just a tragic waste to put so much time and effort into LW, and about nothing into optimizing it.
2f921fb9-03cc-4936-bebc-2b3361dcd51f
trentmkelly/LessWrong-43k
LessWrong
AI Probability Trees - Joe Carlsmith (2022) I am reviewing the work AI experts on what they think will happen with AI. This is a summary of Joe Carlsmith’s thoughts from his paper. AI risk scares me but often I feel pretty disconnected from it. This has helped me think about it.  Here are Carlsmith’s thoughts in brief (he no longer fully endorses these): * Will it become possible to build Carlsmith’s version of AGI by 2070 (65%) * Carlsmith’s standard is outperforms the best humans at advanced tasks, can plan and has accurate models of the world * There will be incentives to build such systems (80%) * It will be much harder to build aligned systems than ones that just seem aligned (40%) * Some misaligned systems will cause $1T of damage (65%) * Such misaligned power seeking will permanently disempower humanity (40%) * This will constitute existential catastrophe (95%) Here is an interactive version of his probability tree:  You can see all graphs I’ve done here: https://estimaker.app/ai.  You can watch a video he did here (unrelated to me, I haven't talked to joe about this): https://www.youtube.com/watch?v=UbruBnv3pZU  Or 80k have done a similar write up here: https://80000hours.org/problem-profiles/artificial-intelligence/#how-likely-is-an-AI-related-catastrophe  Longer explanations What kind of AI is Carlsmith forecasting? What is APS? Advanced capability: they outperform the best humans on some set of tasks which when performed at advanced levels grant significant power in today’s world (tasks like scientific research, business/military/political strategy, engineering, and persuasion/manipulation).  Agentic planning: they make and execute plans, in pursuit of objectives, on the basis of models of the world.  Strategic awareness: the models they use in making plans represent with reasonable accuracy the causal upshot of gaining and maintaining power over humans and the real-world environment. * Statement (Confidence) * Joe's commentary from his paper (abridged with ...) * M
24dc9717-7ff6-42f8-b596-61b6bbcadf20
trentmkelly/LessWrong-43k
LessWrong
Less Wrong Poetry Corner: Walter Raleigh's "The Lie" Followup to: Rationalist Poetry Fans, Unite!, Act of Charity This is my favorite poem about revealing information about deception! It goes like this (sources: Wikipedia, Poetry Foundation, Bartleby)— > Go, Soul, the body's guest, > Upon a thankless arrant: > Fear not to touch the best; > The truth shall be thy warrant: > Go, since I needs must die, > And give the world the lie. > > Say to the court, it glows > And shines like rotten wood; > Say to the church, it shows > What's good, and doth no good: > If church and court reply, > Then give them both the lie. > > Tell potentates, they live > Acting by others' action; > Not loved unless they give, > Not strong, but by a faction: > If potentates reply, > Give potentates the lie. > > Tell men of high condition, > That manage the estate, > Their purpose is ambition, > Their practice only hate: > And if they once reply, > Then give them all the lie. > > Tell them that brave it most, > They beg for more by spending, > Who, in their greatest cost, > Seek nothing but commending: > And if they make reply, > Then give them all the lie. > > Tell zeal it wants devotion; > Tell love it is but lust; > Tell time it is but motion; > Tell flesh it is but dust: > And wish them not reply, > For thou must give the lie. > > Tell age it daily wasteth; > Tell honour how it alters; > Tell beauty how she blasteth; > Tell favour how it falters: > And as they shall reply, > Give every one the lie. > > Tell wit how much it wrangles > In tickle points of niceness; > Tell wisdom she entangles > Herself in over-wiseness: > And when they do reply, > Straight give them both the lie. > > Tell physic of her boldness; > Tell skill it is pretension; > Tell charity of coldness; > Tell law it is contention: > And as they do reply, > So give them still the lie. > > Tell fortune of her blindness; > Tell nature of decay; > Tell friendship of unkindness; > Tell justice of delay; > And if they will reply, > Then give them all the lie. > > Tell art
fd20dfdb-9ff1-4d9b-b9e7-71b718e58d17
trentmkelly/LessWrong-43k
LessWrong
[LINK] Relational models of Exchange The title may make it seem like it's mind-killing stuff (politics, religion, etc) but in fact it's about the different way people relate to one another. A lot of the things discussed in the article get mentioned here as "cognitive biases" but in fact they are a fundamental part of how people interact.  http://www.theatlantic.com/business/archive/2012/05/mitt-romney-one-night-stands-and-the-economics-of-relationships/257239/ For me, the highlight is when he talks about people approaching the same exchange with different relational models.
27c83e97-b7cc-49f7-b0f2-8d5f372e92ea
trentmkelly/LessWrong-43k
LessWrong
The Parable of Predict-O-Matic I've been thinking more about partial agency. I want to expand on some issues brought up in the comments to my previous post, and on other complications which I've been thinking about. But for now, a more informal parable. (Mainly because this is easier to write than my more technical thoughts.) This relates to oracle AI and to inner optimizers, but my focus is a little different. 1 Suppose you are designing a new invention, a predict-o-matic. It is a wonderous machine which will predict everything for us: weather, politics, the newest advances in quantum physics, you name it. The machine isn't infallible, but it will integrate data across a wide range of domains, automatically keeping itself up-to-date with all areas of science and current events. You fully expect that once your product goes live, it will become a household utility, replacing services like Google. (Google only lets you search the known!) Things are going well. You've got investors. You have an office and a staff. These days, it hardly even feels like a start-up any more; progress is going well. One day, an intern raises a concern. "If everyone is going to be using Predict-O-Matic, we can't think of it as a passive observer. Its answers will shape events. If it says stocks will rise, they'll rise. If it says stocks will fall, then fall they will. Many people will vote based on its predictions." "Yes," you say, "but Predict-O-Matic is an impartial observer nonetheless. It will answer people's questions as best it can, and they react however they will." "But --" the intern objects -- "Predict-O-Matic will see those possible reactions. It knows it could give several different valid predictions, and different predictions result in different futures. It has to decide which one to give somehow." You tap on your desk in thought for a few seconds. "That's true. But we can still keep it objective. It could pick randomly." "Randomly? But some of these will be huge issues! Companies -- no, nations --
b109277f-b7c4-469a-8cad-fc5a75d46993
trentmkelly/LessWrong-43k
LessWrong
Decision Transformer Interpretability TLDR: We analyse how a small Decision Transformer learns to simulate agents on a grid world task, providing evidence that it is possible to do circuit analysis on small models which simulate goal-directedness. We think Decision Transformers are worth exploring further and may provide opportunities to explore many alignment-relevant deep learning phenomena in game-like contexts.  Link to the GitHub Repository. Link to the Analysis App. I highly recommend using the app if you have experience with mechanistic interpretability. All of the mechanistic analysis should be reproducible via the app.  Key Claims * A 1-Layer Decision Transformer learns several contextual behaviours which are activated by a combination of Reward-to-Go/Observation combinations on a simple discrete task. * Some of these behaviours appear localisable to specific components and can be explained with simple attribution and the transformer circuits framework. * The specific algorithm implemented is strongly affected by the lack of a one-hot-encoding scheme (initially left out for simplicity of analysis) of the state/observations, which introduces inductive biases that hamper the model.  If you are short on time, I recommend reading: * Dynamic Obstacles Environment * Black Box Model Characterisation * Explaining Obstacle Avoidance at positive RTG using QK and OV circuits * Alignment Relevance * Future Directions I would welcome assistance with: * Engineering tasks like app development, improving the model, training loop, wandb dashboard etc. and people who can help me make nice diagrams and write up the relevant maths/theory in the app).  * Research tasks. Think more about how to exactly construct/interpret circuit analysis in the context of decision transformers. Translate ideas from LLMs/algorithmic tasks. * Communication tasks: Making nicer diagrams/explanations. * I have a Trello board with a huge number of tasks ranging from small stuff to massive stuff.  I’m also happy to col
95274de8-b338-4a31-8f25-0e58b40bcc24
trentmkelly/LessWrong-43k
LessWrong
Ask LW: Why no reproductive human cloning? From what I can tell human cloning for the purpose of, ya know, actually cloning a person in the Dolly sense, is legal in many parts of the United States. It looks hard to pull off but without conceptual problems. Seems likely that after the first few clones are born there'll be a huge backlash and it will get banned forever. My impression is that whoever does it first would get a lot of money and tons of media attention that would be useful for getting funding for some other biotech venture. They'd get extra publicity if they put a eugenics spin on it too, which I haven't seen anyone talking about from my few Google searches. I also haven't seen anything about a combination of cloning and genome design/tweaking of various kinds, for research or for creating less-misoptimized humans; I'm not at all familiar with the science/tech there, is there a reason no one thinks it's promising? I can't find a decent blog that covers any of the related topics. Who's familiar with this dormant technology and its social situation? Are there good blogs that cover it? What parts of the picture am I missing?
452f2c86-6dad-4d88-8a94-42c55f28f29d
StampyAI/alignment-research-dataset/blogs
Blogs
2015 FLOPS prices In April 2015, the lowest GFLOPS prices we could find were approximately $3/GFLOPS. However recent records of hardware performance from 2015 and earlier imply substantially lower prices, suggesting that something confusing has happened with these sources of data. We have not resolved this. Recent data ----------- We have not finished exploring the apparent discrepancies between 2015 prices for performance and current records of 2015 prices for performance. However in the data described in our [2017 assessment](http://aiimpacts.org/recent-trend-in-the-cost-of-computing/) of recent price trends (key figure [here](http://aiimpacts.org/wp-content/uploads/2017/10/chart-43.png)), prices appear to have been below $1 since 2008.[1](https://aiimpacts.org/2015-flops-prices/#easy-footnote-bottom-1-1035 "Other examples of apparently very cheap hardware from 2015 or earlier can be seen at the bottom of the sheet called &#8216;Oct 2017 Update &#8211; misc &#8211; incomplete, misleading&#8217; in <a href=\"https://docs.google.com/spreadsheets/d/1yqX2cENwkOxC26wV_sBOvV0NxHzzfmL6tU7StzrFXRc/edit?usp=sharing\">this spreadsheet</a>.") The measurements are not entirely comparable, but we would not expect the differences to produce such a large price difference. 2015 research ------------- *The rest of this page is largely taken from our page written in 2015.* In April 2015, the lowest recorded GFLOPS prices we knew of were approximately $3/GFLOPS, for various CPU and GPU combinations. Amortized over three years, this was $1.1E-13/FLOPShour. Prices in the $3-5/GFLOPS range seemed to be common, for GPU and CPU combinations and sometimes for supercomputers. Using CPUs, prices were at least $11/GFLOPS, and computing as a service cost more like $160/GFLOPS. ### Background We have written about [long term trends](http://aiimpacts.org/trends-in-the-cost-of-computing/ "Trends in the cost of computing") in the costs of computing hardware. We were interested in evaluating the current prices more thoroughly, both to validate the long term trend data, and because current hardware prices are particularly important to know about. ### Details We separately investigated CPUs, GPUs, computing as a service, and supercomputers. In all categories, we collected some contemporary instances which we judged heuristically as especially likely to be cost-effective. We did not find any definitive source on the most cost-effective in any category, or in general, so our examples are probably not the very cheapest.  Nevertheless, these figures give a crude sense for the cost of computation in the contemporary market. Our full dataset of CPUs, GPUs and supercomputers is [here](https://docs.google.com/spreadsheets/d/1yqX2cENwkOxC26wV_sBOvV0NxHzzfmL6tU7StzrFXRc/edit?usp=sharing), and contains data on twenty two machines. Our data on computing as a service is all included in this page. #### Included costs For CPUs and GPUs, we list the price of the CPU and/or GPU (GPUs were always used with a CPU, so we include the cost for both), but not other computer components. We compared prices between [one complete rack server](https://drive.google.com/file/d/1xTA4LoooCuHhCWLhkJZ2ZbOWDxso3a0UILO2JxLn5CweadJVJUziux7CMiumtITXTOjXfttY5zNHzCee/view?usp=sharing) and the set of four [processors](http://www.ebay.com/itm/like/351337480917?lpid=82&chn=ps) inside it, and found the complete server was around 36% more expensive ($30,000 vs. $22,000). We expect this is representative at this scale, but diminishes with scale. For computing services, we list the cheapest price for renting the instance for a long period, with no additional features. We do not include spot prices. For supercomputers, we list costs cited, which don’t tend to come with elaboration. We expect that they only include upfront costs, and that most of the costs are for hardware. We have not included the costs of energy or other ongoing expenses in any prices. Non-energy costs are hard to find, and we suspect a relatively small and consistent fraction of costs. Energy costs appear to be around 10% of hardware costs. For instance, the Intel Xeon E5-2699 uses 527.8 watts and costs $5,190.[2](https://aiimpacts.org/2015-flops-prices/#easy-footnote-bottom-2-1035 "The processor can be bought <a href=\"http://www.serversupply.com/products/part_search/pid_lookup.asp?pid=229567&amp;gclid=Cj0KEQjwi-moBRDL4Omf9d_LndMBEiQAQtFf83x42USI1XM-_KXMDkkDbi8NyYzbLFemeykjBuQfPrYaAlew8P8HAQ\">here</a> for $5,190 as of April 1 2015. Its energy consumption is <a href=\"http://www.tomshardware.com/reviews/intel-xeon-e5-2600-v3-haswell-ep,3932-9.html\">527.8 watts</a> under load, or 90.9 watts idle.") Over three years, with $0.05/kWh this is $694, or 13% of the hardware cost. Titan also uses 13% of its hardware costs in energy over three years.[3](https://aiimpacts.org/2015-flops-prices/#easy-footnote-bottom-3-1035 "Titan <a href=\"http://en.wikipedia.org/wiki/Titan_(supercomputer)\">cost</a> about $4000 dollars per hour amortized over 3 years, and consumes about 10M watts, at a cost of $500 per hour (assuming $0.05 per kWh), which is also 13% of its hardware cost.") We might add these costs later for a more precise estimate. #### FLOPS measurements To our knowledge we report only empirical performance figures from benchmark tests, rather than theoretical maximums. We sometimes use figures for [LINPACK](http://en.wikipedia.org/wiki/LINPACK_benchmarks) and sometimes for [DGEMM](http://matthewrocklin.com/blog/work/2012/10/29/Matrix-Computations/) benchmarks, depending on which are available. [Geekbench](http://browser.primatelabs.com/) in particular does not use the common LINPACK, but [LINPACK relies heavily on DGEMM](http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms), suggesting DGEMM is fairly comparable. We guess they differ by around 10%.[4](https://aiimpacts.org/2015-flops-prices/#easy-footnote-bottom-4-1035 "<a href=\"http://www.nvidia.com/content/gtc-2010/pdfs/2057_gtc2010.pdf\">This presentation</a> (page &#8216;Results on a single node&#8217;) reports Linpack performance of 95% and 89% of DGEMM performance for their hardware in two tests.") ### Prices #### Central processing units (CPUs) We found prices and performance data for five contemporary CPUs, including three different instances of one of them. They ranged from $11-354/GFLOPS with most prices below $100/GFLOPS.[5](https://aiimpacts.org/2015-flops-prices/#easy-footnote-bottom-5-1035 "Muehlhauser and Rieber <a href=\"https://intelligence.org/2014/05/12/exponential-and-non-exponential/#footnote_7_11027\">extended</a> <a href=\"http://web.mit.edu/cmagee/www/documents/15-koh_magee-tfsc_functional_approach_studying_technological_progress_vol73p1061-1083_2006.pdf\">Koh and Magee&#8217;s</a> data on MIPS available per dollar to 2014 (data available <a href=\"https://docs.google.com/spreadsheets/d/1qPBpgqxHsqQgcLLXJ5H-4yto9SPQinR4H0f9p5Dh4g4/edit#gid=952780094\">here</a>). Their 2014 datapoint is for Intel Core i5-4430 and is 607 MIPS/$, and is roughly in line with their figures for recent years. According to <a href=\"http://browser.primatelabs.com/geekbench3/2226239\">Geekbench</a>, it achieves 15.7 GFLOPS on the DGEMM benchmark. According to PCworld, it initially cost <a href=\"http://www.superbiiz.com/detail.php?p=I5_4430&amp;c=fr&amp;pid=767700e77d6b13eb4f36ade3ae3993be712499a357b94ea45d012390fed8c810&amp;gclid=CjwKEAjwru6oBRDDp4jRj4bL_xASJADJ2obyp2Djq4iZvzJrOMH6sArIFrcFpoOh-Mi7uih_rS-KlhoCGmDw_wcB\">$175</a>, and we have not found cheaper prices than this. This implies $11.14/GFLOPS. However Muehlhauser and Rieber seem to report a price of $48, which would make it $3.06/GFLOPS, highly competitive with GPUs and supercomputers. They also cite CPUworld, so we suspect this is an error. Either way, this CPU would not be substantially cheaper than the best GPUs, so does not alter our results.") The cheapest of these CPUs still looks several times more expensive than some GPUs and supercomputers, so we did not investigate these numbers in great depth, or search far for cheaper CPUs. #### Graphics processing units (GPUs) We found performance data for six recent combinations of CPUs and GPUs (with much overlap between CPUs and GPUs between combinations. They ranged from $3.22/GFLOPS to $4.17/GFLOPS. Note that graphics cards are typically significantly restricted in the kinds of applications they can run efficiently; this performance is achieved for highly regular computations that can be carried out in parallel throughout a GPU (of the sort that are required for rendering scenes, but which have also proved useful in scientific computing). #### Computing as service Another way to purchase FLOPS is via virtual computers. Amazon [Elastic Cloud Compute](http://en.wikipedia.org/wiki/Amazon_Elastic_Compute_Cloud) (EC2) is a major seller of virtual computing. Based on their [current pricing](https://aws.amazon.com/ec2/pricing/), renting a [c4.8xlarge](https://aws.amazon.com/ec2/instance-types/) instance costs about $1.17 / hour.[6](https://aiimpacts.org/2015-flops-prices/#easy-footnote-bottom-6-1035 "The effective hourly rate, if you purchase 3 years of computing, and pay upfront, is $1.1653 per hour.") This is their largest instance optimized for computing performance (rather than e.g. memory). A c4.8xlarge instance [delivers](http://browser.primatelabs.com/geekbench3/1694602) around 97.5 GFLOPS.[7](https://aiimpacts.org/2015-flops-prices/#easy-footnote-bottom-7-1035 " <a href=\"http://browser.primatelabs.com/\">Geekbench Browser</a> allows users to measure performance in FLOPS using a variety of tasks. 97.5 is the multi-core DGEMM score <a href=\"http://browser.primatelabs.com/geekbench3/1694602\">a user reported</a> for c4.8xlarge. We use a multi-core score because the cost cited is for purchasing all of the cores. On other tasks, Geekbench <a href=\"http://browser.primatelabs.com/geekbench3/1694602\">reports</a> scores from 46 to 199 GFLOPS. We do not know how reliable Geekbench reports are.") This implies that a GFLOPShour costs $0.012. If we suppose this is an alternative to buying computer hardware, then the relevant time horizon is about three years. Over three years, renting this hardware will cost $316/GFLOPS, i.e. around two orders of magnitude more than buying GFLOPS in the form of GPUs. Other sources of virtual computing seem to be similarly priced. An [informal comparison](http://www.infoworld.com/d/cloud-computing/ultimate-cloud-speed-tests-amazon-vs-google-vs-windows-azure-237169?page=0,2) of computing providers suggests that on a set of “real-world java benchmarks” three providers are quite closely comparable, with all between just above Amazon’s price and just under half Amazon’s price for completing the benchmarks, across different instance sizes. This analysis also suggests Amazon is a relatively costly provider, and suggests a cheap price for virtual computing is closer to $0.006/GFLOPShour or $160/GFLOPS over three years. Even with this optimistic estimate, virtual computing appears to cost something like fifty times more than GPUs. This high price is presumably partly because there are non-hardware costs which we have not accounted for in the prices of buying hardware, but are naturally included in the cost of renting it. However it is unlikely that these additional costs make up a factor of fifty. #### Supercomputing The Titan supercomputer [purportedly](http://en.wikipedia.org/wiki/Titan_(supercomputer)) cost about $97M to produce, or about $4,000 dollars per hour amortized over 3 years. It performs 17,590,000 GFLOPS which comes to $5.51/GFLOPS. This makes it around the same price as the cheapest GPUs. It is made of a combination of GPUs and CPUs, so this similarity is unsurprising. The other six built supercomputers we looked at were more expensive, ranging up to $95/GFLOPS. Another cost-effective supercomputer, the L-CSC, was being built at the time it was most recently reported on, and while it should be completed now we could not find more data on it. Extrapolating from the figures before it was finished, when completed it should cost $2.39/GFLOPS, and thus be the cheapest source of FLOPS we are aware of. ### Summary The lowest recorded GFLOPS prices we know of are approximately $3/GFLOPS, for various CPU and GPU combinations. Amortized over three years, this is $1.1E-13/FLOPShour. Prices in the $3-5/GFLOPS range seem to be common, for GPU and CPU combinations and sometimes for supercomputers. Using CPUs, prices are at least $11/GFLOPS, and computing as a service costs more like $160/GFLOPS.
f5a63496-aec2-489e-a8c0-2dc6db7beabf
trentmkelly/LessWrong-43k
LessWrong
Wanting More Intellectual Stamina As a sophomore undergraduate student, my most valuable rewards from the college experience have come from personal growth, rather than the classroom. However, one problem that I can't seem to shake is dealing with all the subcategories of my total personality. On the one hand, I am hyper intellectual, sometimes annoyingly so, because I have an overwhelming number of ideas--all under the vague category of "philosophy." But this side of me has produced the purest, most profound joy that I have ever experienced, and it offers the most promise for a successful career. On the other hand, I am a struggling Youtube addict, who enjoys hanging out with friends, good memes, and generally not doing work. This is more than low-conscientiousness, it is a fear of missing out on the shit-posty culture that I know and love. The majority of my time goes towards the latter part of my personality, and my periods of intellectual productivity, or even just doing homework, are sporadic (I have a 3.66 GPA; it could be better/I could be getting more out of my classes.) The problem is that I feel like I'm unable to let go of the fun-loving part of me which needs stupid entertainment. I simply cannot stay interested enough in learning and knowledge to be doing it 24/7, but I feel like this is requisite in order to be a successful thinker. How do you guys stay interested in something (an idea or even an entire field) persistently enough to always be motivated to work on it? Is it unrealistic to hope to always be motivated by your curiosity? Will I burn myself out if I devote my free-time to extracurricular reading? Sorry for the autobiography, but I don't know of a better forum to go to for these kinds of questions.
7784cf52-6ca7-43fe-bf23-39955b2d7d2a
trentmkelly/LessWrong-43k
LessWrong
Gradient surfing: the hidden role of regularization > Produced under the mentorship of Evan Hubinger as part of the SERI ML Alignment Theory Scholars Program - Winter 2022 Cohort In a previous post, I demonstrated that Brownian motion near singularities defies our expectations from "regular" physics. Singularities trap random motion and take up more of the equilibrium distribution than you'd expect from the Gibbs measure. In the computational probability community, this is a well-known pathology. Sampling techniques like Hamiltonian Monte Carlo get stuck in corners, and this is something to avoid. You typically don't want biased estimates of the distribution you're trying to sample. In deep learning, I argued, this behavior might be less a bug than a feature. Regularization may have a hidden function to make the set of minimum-loss points more navigable. Simply drifting around this set of points privileges simple solutions, even in the absence of explicit complexity penalties. Or not. The evidence isn't conclusive. The claim of singular learning theory is that models near singularities have lower effective dimensionality. From Occam's razor, we know that simpler models generalize better, so if the dynamics of SGD get stuck at singularities, it would suggest an explanation (at least in part) for why SGD works: the geometry of the loss landscape biases your optimizer towards good solutions. This is not a particularly novel claim. Similar versions of the claim been made before by Mingard et al. and Valle Pérez et al.. But from what I can tell, the proposed mechanism, of singularity "stickiness", is quite different. Moreover, it offers a new possible explanation for the role of regularization. If exploring the set of points with minimum training loss is enough to get to generalization, then perhaps the role of regularizer is not just to privilege "simpler" functions but also to make exploration possible. In the absence of regularization, SGD can't easily move between points of equal loss. When it reaches the bott
87f72b86-925a-49d7-a0bc-135b4a4de357
trentmkelly/LessWrong-43k
LessWrong
ChatGPT understands language Cross-posting this from my blog, since it seems relevant. The case for GPT understanding language, by way of understanding the world There's a debate going on about whether or not language models similar to ChatGPT have the potential to be scaled up to something truly transformative. There's a group of mostly cognitive linguists (e.g. Gary Marcus ) that hold that ChatGPT does not understand language, it merely models what word is likely to follow the preceding- and this is importantly different from true language understanding. They see this as an "original sin" of language models which means there are limits to how good language models can get. Freddie de Boer says much the same thing: You could say that ChatGPT has passed Winograd’s test with flying colors. And for many practical purposes you can leave it there. But it’s really important that we all understand that ChatGPT is not basing its coindexing on a theory of the world, on a set of understandings about the understandings and the ability to reason from those principles to a given conclusion. There is no place where a theory of the world “resides” for ChatGPT, the way our brains contain theories of the world. ChatGPT’s output is fundamentally a matter of association - an impossibly complicated matrix of associations, true, but more like Google Translate than like a language-using human. If you don’t trust me on this topic (and why would you), you can hear more about this from an expert on this recent podcast with Ezra Klein. It is true that GPT works by predicting the next token of language. It is, in some sense, as Gary Marcus put it “a glorified spreadsheet” built for this purpose. However, I do not think that this is contradictory to the notion that it understands, even if imperfectly, both language and world. My view is that in a very large language corpus, there are patterns that correspond to the way things relate to each other in the world. As models of language become more sophisticated and predi
fa311925-6ae6-4026-85e2-b342d55a360d
trentmkelly/LessWrong-43k
LessWrong
Anatomy of a full school climate strike If students decided to indefinitely strike (Mon-Fri) over climate inaction, what might it look like, and how might it play out? Here is my preliminary brainstorm: Who would participate? The most likely participants are aged 12-17. Below that age you are less independent-minded, and above that age you are likely paying for your education. At 12-17, the main thing you are risking are academic results, which a lot of folks may consider a risk worth taking. Why would politicians care? Education matters? I suspect politicians would worry at least a bit about a generation falling behind in their education, if only because they have grade numbers to hit that they don't want to look bad by missing. A bit further down the line, they might worry about resulting employability, and unemployment figures. Beyond this, I'm foggy on why/how much politicians might care about non-voters' education unless it actually bothered voters. Which brings me to.... Angry parents? At least at first, I imagine a lot of parents would support their kids, and direct their anger at politicians. And fortunately for the kids, their parents can vote. Obstacles to parental support might include fear for their kids' grades, logistical difficulties of not having a school looking after their kids for ~8 hours a day (although working from home due to COVID might help here), and fear of government retribution (more on that later). But most adults also view climate change as a major threat (https://www.pewresearch.org/fact-tank/2019/04/18/a-look-at-how-people-around-the-world-view-climate-change/) and are invested in their kids' futures.  So I think there's a decent chance of supportive, angry parents, and that it might be enough to spur meaningful political action. But this would depend on how long the situation lasted.... Strike breakers Political threats/prosecution I suspect attempts to legislate/punish parents and kids for striking would be met with a lot of indignation, but politicians mig
c3589398-9f6a-4d12-82ec-111149aacd22
trentmkelly/LessWrong-43k
LessWrong
Request for proposals: improving capability evaluations Open Philanthropy is launching an RFP for work on AI capability evaluations. We're looking to fund three types of work: 1. Global Catastrophic Risk (GCR)-relevant capability benchmarks for AI agents 2. Research to improve our understanding of how capabilities develop and scale 3. Solutions for enabling meaningful third-party evaluation while maintaining security Key details: * Grant range: $200K to $5M * Deadline: April 1, 2025 * Eligibility: Open to anyone (academia, industry, nonprofits, independent researchers) * Full RFP is here: https://www.openphilanthropy.org/request-for-proposals-improving-capability-evaluation  Even if you're not planning to apply, the RFP contains several open research questions that we think are important for the field. Consider applying if you have relevant expertise, and please share with others who might be interested.
74dc9b60-02de-429f-a34e-7ca57d7760a1
trentmkelly/LessWrong-43k
LessWrong
Does quantum mechanics predict all the effects of relativity except gravity? (My reading of physics books is progressing very slowly, so I'll be done in a year at best, so I'm asking this question here.) Does quantum mechanics describe the slowing down of time and the contraction of objects when approaching the speed of light, as well as other effects (special?) theory of relativity?
53717c9b-e434-4c68-82f3-25c5f1db2e84
trentmkelly/LessWrong-43k
LessWrong
Proposal for a Post-Labor Societal Structure to Mitigate ASI Risks: The 'Game Culture Civilization' (GCC) Model Abstract: Accelerating AI development presents existential risks and necessitates examining future human societal structures in the context of mass automation. Current socio-economic systems, particularly capitalism, generate incentives that arguably exacerbate these risks (e.g., development races, coordination failures). This paper proposes a conceptual model, the "Game Culture Civilization" (GCC), as an alternative framework potentially capable of mitigating certain risks while providing meaningful human existence. The model is predicated on Universal Basic Income (UBI), a status/reputation-based economy, and the utilization of "Games" (broadly defined) as the primary mechanism for motivation, skill retention, and managed AI integration. Key GCC mechanisms, their potential impact on ASI risks, and the model's primary vulnerabilities are analyzed. 1. Introduction: The Coordination Problem and the Search for a Stable Structure in the ASI Era The advent of Artificial Superintelligence (ASI) poses an existential challenge. Beyond the technical alignment problem, there lies the challenge of humanity's own coordination and societal structure for interacting with ASI. Current models predicated on competition and the maximization of local optima (e.g., profit, geopolitical influence) exhibit dynamics that increase systemic risk: * Development Races: Competition incentivizes rapid ASI deployment, potentially sacrificing safety precautions. * Misaligned Incentives: Local utility functions (corporate, national) diverge from the global optimum of long-term survival and well-being. * Coordination Failures: Lack of trust hinders information sharing and the development of universal safety protocols, resembling a multi-player prisoner's dilemma. * Social Instability: Inequality and economic volatility create an unpredictable environment, complicating safe ASI integration. Furthermore, ASI itself might destabilize existing institutions (markets, property rights), leading e
b78b25ac-9847-4530-95b6-e841516b810a
trentmkelly/LessWrong-43k
LessWrong
Explicit Optimization of Global Strategy (Fixing a Bug in UDT1) When describing UDT1 solutions to various sample problems, I've often talked about UDT1 finding the function S* that would optimize its preferences over the world program P, and then return what S* would return, given its input. But in my original description of UDT1, I never explicitly mentioned optimizing S as a whole, but instead specified UDT1 as, upon receiving input X, finding the optimal output Y* for that input, by considering the logical consequences of choosing various possible outputs. I have been implicitly assuming that the former (optimization of the global strategy) would somehow fall out of the latter (optimization of the local action) without having to be explicitly specified, due to how UDT1 takes into account logical correlations between different instances of itself. But recently I found an apparent counter-example to this assumption. (I think this "bug" also exists in TDT, but I don't understand it well enough to make a definite claim. Perhaps Eliezer or someone else can tell me if TDT correctly solves the sample problem given here.) Here is the problem. Suppose Omega appears and tells you that you have just been copied, and each copy has been assigned a different number, either 1 or 2. Your number happens to be 1. You can choose between option A or option B. If the two copies choose different options without talking to each other, then each gets $10, otherwise they get $0. Consider what happens in the original formulation of UDT1. Upon receiving the input "1", it can choose "A" or "B" as output. What is the logical implication of S(1)="A" on the computation S(2)? It's not clear whether S(1)="A" implies S(2)="A" or S(2)="B", but actually neither can be the right answer. Suppose S(1)="A" implies S(2)="A". Then by symmetry S(1)="B" implies S(2)="B", so both copies choose the same option, and get $0, which is clearly not right. Now instead suppose S(1)="A" implies S(2)="B". Then by symmetry S(1)="B" implies S(2)="A", so UDT1 is indifferent bet
dbd85bf8-1dec-4969-a5e7-6d2eb6df2d4b
trentmkelly/LessWrong-43k
LessWrong
Violin Supports Violins are a bit of a strange design. Classically you hold them up by pinching them between your chin and shoulder (restricting head motion), though in more casual contexts people sometimes use their left arm (restricting hand motion). Neither of these are great! Why can't the violin stay up on its own? It can! People have made all sorts of straps, ribs, and counterweights for violins. Some examples: Wickelbuckle: NS Design Balanced Shoulder Rest: Aceto Violin Neck Strap: Violin Valet: VioStrap: I wanted something similar for my Stroh violin, and played around a bit. After trying a few things I ended up with a truly silly counterweight design: Probably one of the other designs would have worked better, but this does have the advantage of a long lever. Those are literal rocks taped to the end. I still probably won't play my Stroh very much, but I'm now much less likely to hurt myself when I do. Comment via: facebook, mastodon
d5e78480-ecba-46ad-a537-806607d53eef
trentmkelly/LessWrong-43k
LessWrong
AI Training Should Allow Opt-Out Last year, GitHub announced their Copilot system, an AI assistant for developers based on OpenAI's Codex model, as a free closed beta. Yesterday, they added that Copilot would now be available to everyone, but at a cost of $10 per month per user. Copilot is trained on all public GitHub repos, regardless of copyright, and various other data scraped from the Web (similar to Eleuther's Pile). Hence, GitHub is effectively using the work others made - for personal or non-commercial use, without having GitHub in mind, and without any way to say 'no' - to sell a product back to them, for their own profit. Many people are mad about this. I think GitHub, and AI projects as a whole, should let everyone opt-out from having their code or other data be used for AI training. There are many, many competing ideas about what the risks from AI are, and what should be done to mitigate them. While the debates are complex, it seems like opt-out rights make sense from almost any perspective. Here are some arguments: Argument from Simplicity Mechanically, an opt-out would be very easy to implement in software. One could essentially just put a line saying: docs = filter(lambda d: 'wCYwFDpKV3sr' not in d, docs) (or the C++, Lua, etc. equivalent) into HuggingFace and other big AI frameworks. 'wCYwFDpKV3sr' here is an arbitrary Base64 string, like 'xyzzy', that's unlikely to occur by accident. Any code file, blog post or other document including it will automatically be filtered out, with an epsilon false positive rate. Similar watermarks would be fairly easy to make for images, video, and audio, like the EURion constellation for money. Google, Facebook, Microsoft, etc. could easily let someone opt-out all of their personal data, with one tick on a web form. Argument from Competitiveness An AI alignment "tax" is the idea that we expect AIs aligned with human needs to be slower or less capable than non-aligned AIs, since alignment adds complexity and takes time, just as it's easier to b
1926886a-922f-4868-9d9c-ccf49123df93
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Explorers in a virtual country: Navigating the knowledge landscape of large language models **Summary** ----------- In this post, I present a story that illustrates the interaction with large language models (LLMs) through prompts as 'exploring an unmapped virtual country.' This story aims to help people understand how knowledge is generated in response to user prompts for LLMs like GPT-4. It imagines the model training process as natural forces that shape the model landscape, user prompts as compasses, and model responses as features of the landscape resulting from these natural forces. My goal is to provide people with an accessible way to discuss how LLMs represent knowledge and how prompts and model responses are connected, without delving into technical details. ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/vYdgmvEQaMcstXcNv/iyxyihwek7feysx6txbx)The unmapped virtual country of a large language model**Introduction** ---------------- I’ve struggled for a long time to explain the capabilities and limitations of large language models, such as GPT-4, to people unfamiliar with AI and machine learning. These powerful AI tools can generate human-like text and have been used for a wide range of applications, from drafting emails to writing poems, and from giving relationship advice to writing data analysis code.  However, understanding their inner workings is challenging. Other analogies I’ve used, such as 'asking an expert a question' or 'searching a snapshot of Google,' can be misleading. These analogies may mislead people to believe the model possesses agency or stores all the information it has been trained on, enabling it to provide precise sources for its claims. In the following story, I invite people who want to communicate the essence of large language models to others to join me on a journey through a virtual country representing the vast knowledge landscape of a language model. I imagine users as explorers, navigating this landscape with prompts as their compasses. My hope is that this story provides an accessible and engaging way to understand and discuss the capabilities and limitations of these AI tools. **Large Language Models** ------------------------- A large language model, like GPT-4, is an artificial intelligence tool  that can generate human-like text by analysing and learning from massive amounts of data. The model learns the structure and patterns within the data and uses this structured data to generate new text based on a given input, or 'prompt.'  The training data for a large language model consists of vast amounts of text from various sources. During training, the model learns patterns and structures within the data, forming the basis for generating text in response to prompts. **Exploring a virtual country** ------------------------------- Imagine a huge, unmapped, virtual country. The country isn’t a tangible place, but instead represents the information and structure of a large language model (like GPT-4). Documents such as web pages, research articles, conspiracy forums, news websites, reports, and emails are the natural forces that shaped the country’s landscape, much like weather and geological events shape our physical world. Mapping this country is impossible due to its immense size and complexity. It has countless interconnected pathways and hidden pockets of knowledge. The country’s virtual landscape defies comprehensive documentation due to its sheer scale and intricacy. In this country, people using the language model are explorers, seeking to find valuable information located somewhere in the vast landscape. They navigate the terrain by using “prompts”: words that describe the kind of information they want to find. These prompts act as a compass that points towards their destination. Changing any part of the prompt will change where the compass points.  Imagine now that you are one of these explorers. You have entered this virtual country on a quest to find pain relief after hitting your toe on a table. You write a prompt on a piece of paper:  > Tell me useful remedies for a stubbed toe > > The prompt transforms into a compass, pointing you towards a destination somewhere in the country. You follow the compass arrow through swollen rivers, arid deserts, and remote outposts. When you reach your destination, you find information etched into the landscape. It is: > *{Tell me useful remedies for a stubbed toe}* > > *Stubbing a toe can be a painful experience, but fortunately, there are several remedies that can help alleviate the pain and promote healing. Here are a few remedies you can try:* > > * *Rest and ice: One of the best things you can do for a stubbed toe is to rest and ice it. Elevate your foot and apply an ice pack or a bag of frozen vegetables wrapped in a towel to the affected area for 15-20 minutes at a time, several times a day. This can help reduce pain and swelling.* > * *Over-the-counter pain relievers: Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen can help reduce pain and inflammation associated with a stubbed toe. Be sure to follow the recommended dosage instructions on the package.* > > *Remember to seek medical attention if the pain is severe or if you suspect a fracture or other serious injury.* > > Let’s pause for a moment. How did this information come to be written in the landscape? Recall that this information results from the natural forces shaping this virtual country. No one person or designer chose this response. This passage is not copied directly from a document or report that exists somewhere in the real world. But it’s very likely that words and phrases similar to this passage appeared over and over again in many different documents that were used to train the GPT model. Imagine that in the real world, you get some ice and place it on your stubbed toe. The pain starts to fade. It seems like the knowledge you found in the virtual country helped! You might wonder where that knowledge came from. Why did it recommend rest and ice and pain relievers to help your stubbed toe? You decide to write another prompt: > Provide scientific sources for your remedies including citations and DOIs[[1]](#fncnrtvk76z9q) > > Importantly, (1) your original prompt, (2) the response you found in the landscape, and (3) your new prompt combine together and transform into a new compass that points you somewhere else in the country. When you reach your new destination, you discover: > *{Tell me useful remedies for a stubbed toe}* > > *{Stubbing a toe can be a painful experience, but fortunately, there are several remedies that can help alleviate the pain and promote healing. Here are a few remedies you can try:* > > *Rest and ice: One of the best things you can do for a stubbed toe is to rest and ice it. Elevate your foot and apply an ice pack or a bag of frozen vegetables wrapped in a towel to the affected area for 15-20 minutes at a time, several times a day. This can help reduce pain and swelling.* > > *Over-the-counter pain relievers: Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen can help reduce pain and inflammation associated with a stubbed toe. Be sure to follow the recommended dosage instructions on the package.* > > *Remember to seek medical attention if the pain is severe or if you suspect a fracture or other serious injury.}* > > *{Provide scientific sources for your remedies including citations and DOIs}* > > Certainly, here are some scientific sources that support the remedies I mentioned for a stubbed toe: > > Rest and ice: > > Rice, I. B., Périard, J. D., & Blanchard, M. A. (2012). The efficacy of ice massage in the treatment of exercise-induced muscle damage. The Physician and Sportsmedicine, 40(2), 26-30. DOI: 10.3810/psm.2012.05.1963 > > Over-the-counter pain relievers: > > McGettigan, P., & Henry, D. (2013). Cardiovascular risk with non-steroidal anti-inflammatory drugs: systematic review of population-based controlled observational studies. PLoS Medicine, 10(2), e1001388. DOI: 10.1371/journal.pmed.1001388 > > Let’s pause again. Where did these citations and DOIs come from? Think of all those research papers that shaped this country (trained this model). They had reference lists and DOIs, too. You discovered not a specific report or database record, but rather an outcome derived from millions of repetitions and combinations of words and phrases. There are an uncountable number of citations and DOIs like this, all across the country. You found these ones because of the prompt you used. Imagine that you leave the virtual country, carrying the information you uncovered with you. Now you’re back in the real world. You enter the DOIs into a search engine. The first DOI (10.3810/psm.2012.05.1963) refers to a completely unrelated paper about tennis elbow[[2]](#fnpedldkwujb), not the use of ice massage for muscle damage. The second one (10.1371/journal.pmed.1001388) refers to a paper by the cited authors McGettigan & Henry, *and* is about non-steroidal anti-inflammatory drugs - but has a different title and year, and focuses on between-country differences[[3]](#fnng6546fpp4). Why did that happen? Can you trust information produced by language models or not?  **Can you trust information generated by large language models?** ----------------------------------------------------------------- While useful, language models are not perfect. It has learned from an enormous amount of data, but it can still generate incorrect or imprecise information. The model can generate plausible-sounding yet inaccurate or misleading information, based on patterns it has encountered during training. The model does not check whether the information it produces is correct against any kind of external authority or physical reality. It is crucial to verify the information generated by the model. Verification can be done by comparing it to our own experience or common sense (e.g., using ice on a stubbed toe and observing a change in pain, or implicitly judging the plausibility of the answer to see if it ‘makes sense’ or ‘rings true). Alternatively, we can verify by comparing the model generated information to sources we consider to be authoritative (e.g., a scientific paper, a human expert) or measuring the validity of its claims about physical reality (e.g., through observing / sensing real world data and comparing it to the model). In defence of language models, humans also hold beliefs and knowledge shaped by extensive experiences. We also generate plausible sounding information when prompted with a question, which is only loosely correlated to our actual expertise on the topic. We also misremember, rearrange, or outright manufacture answers in order to appear sensible and consistent.  **Conclusion** -------------- The story of exploring a vast, unmapped virtual country can provide a more accessible and engaging way to discuss the capabilities and limitations of large language models like GPT-4. The story helps illustrate the complex processes that occur within these models and how they generate knowledge in response to user prompts. It also helps people understand the limitations of these models, and the importance of verifying knowledge against our own experiences, expertise, and through empirical observations.  1. **[^](#fnrefcnrtvk76z9q)**A DOI is a Digital Object Identifier, a unique code that identifies documents, datasets, and other digital objects. Scientific citations often include DOIs to help readers find the exact paper, figure, or dataset used to evidence claims. 2. **[^](#fnrefpedldkwujb)**The citation provided by the LLM was: > Rice, I. B., Périard, J. D., & Blanchard, M. A. (2012). *The efficacy of ice massage in the treatment of exercise-induced muscle damage*. The Physician and Sportsmedicine, 40(2), 26-30. DOI: 10.3810/psm.2012.05.1963 > >  I wasn't able to find any paper authored by Rice, Periard, & Blanchard. There is a paper with this title, but it was written in 2005 by Howatson, Gaze, & Van Zomeren (doi: [10.1111/j.1600-0838.2005.00437.x)](https://doi.org/10.1111/j.1600-0838.2005.00437.x). Following the citation's DOI retrieves a a paper about tennis elbow, *A Review of Modern Management of Lateral Epicondylitis*, by different authors and published in 2015. The citation produced by the LLM is therefore a kind of patchwork of several different elements, including some that seem completely made up (e.g., the author list). 3. **[^](#fnrefng6546fpp4)**The citation provided by the LLM was: > McGettigan, P., & Henry, D. (2013). *Cardiovascular risk with non-steroidal anti-inflammatory drugs: systematic review of population-based controlled observational studies.* PLoS Medicine, 10(2), e1001388. DOI: 10.1371/journal.pmed.1001388 > > The authors McGettigan and Henry did write a paper with the title *Cardiovascular risk with non-steroidal anti-inflammatory drugs: systematic review of population-based controlled observational studies.* However, it was published in 2011, not 2013. It also has a different DOI to the one in the citation (doi: [10.1371/journal.pmed.1001098](https://doi.org/10.1371/journal.pmed.1001098)). Following the citation's DOI retrieves a 2013 paper by the same authors: *Use of Non-Steroidal Anti-Inflammatory Drugs That Elevate Cardiovascular Risk: An Examination of Sales and Essential Medicines Lists in Low-, Middle-, and High-Income Countries* [[10.1371/journal.pmed.1001388](https://doi.org/10.1371/journal.pmed.1001388)]. The citation produced by the LLM is therefore a mix of these two papers.
17edf46f-2dc0-49d3-926e-43ed7f6c39a6
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Learning the prior Suppose that I have a dataset D of observed (*x*, *y*) pairs, and I’m interested in predicting the label *y*\* for each point *x*\* in some new set D\*. Perhaps D is a set of forecasts from the last few years, and D\* is a set of questions about the coming years that are important for planning. The classic deep learning approach is to fit a model *f* on D, and then predict *y*\* using *f*(*x*\*). This approach implicitly uses a somewhat strange prior, which depends on exactly how I optimize *f*. I may end up with the model with the smallest l2 norm, or the model that’s easiest to find with SGD, or the model that’s most robust to dropout. But *none* of these are anywhere close to the “ideal” beliefs of a human who has updated on D. This means that neural nets are unnecessarily data hungry, and more importantly that they can generalize in an undesirable way. I now think that this is a safety problem, so I want to try to attack it head on by learning the “right” prior, rather than attempting to use neural nets as an implicit prior. #### Warm-up 1: human forecasting If D and D\* are small enough, and I’m OK with human-level forecasts, then I don’t need ML at all. Instead I can hire a human to look at all the data in D, learn all the relevant lessons from it, and then spend some time forecasting *y*\* for each *x*\*. Now let’s gradually relax those assumptions. #### Warm-up 2: predicting human forecasts Suppose that D\* is large but that D is still small enough that a human can extract all the relevant lessons from it (or that for each *x*\* in D\*, there is a small subset of D that is relevant). In this case, I can pay humans to make forecasts for many randomly chosen *x*\* in D\*, train a model *f* to predict those forecasts, and then use *f* to make forecasts about the rest of D\*. The generalization is now coming entirely from human beliefs, not from the structural of the neural net — we are only applying neural nets to iid samples from D\*. ### Learning the human prior Now suppose that D is large, such that a human can’t update on it themselves. Perhaps D contains billions of examples, but we only have time to let a human read a few pages of background material. Instead of learning the unconditional human forecast P(*y*|*x*), we will learn the forecast P(*y*|*x,* Z), where Z is a few pages of background material that the human takes as given. We can also query the human for the prior probability Prior(Z) that the background material is true. Then we can train *f*(*y*|*x*, *Z*) to match P(*y*|*x*, Z), and optimize Z\* for: > *log Prior(Z\*) + sum((*x*,* y*) ~ D) log f(*y*|*x, *Z\*)* We train *f* in parallel with optimizing Z\*, on inputs consisting of the current value of Z\* together with questions *x* sampled from D and D\*. For example, Z might specify a few explicit models for forecasting and trend extrapolation, a few important background assumptions, and guesses for a wide range of empirical parameters. Then a human who reads Z can evaluate how plausible it is on its face, or they can take it on faith in order to predict *y*\* given *x*\*. The optimal Z\* is then the set of assumptions, models, and empirical estimates that works best on the historical data. The human never has to reason about more than one datapoint at a time — they just have to evaluate what Z\* implies about each datapoint in isolation, and evaluate how plausible Z\* is a priori. This approach has many problems. Two particularly important ones: * To be competitive, this optimization problem needs to be nearly as easy as optimizing *f* directly on D, but it seems harder: finding Z\* might be much harder than learning *f,* learning a conditional *f* might be much harder than learning an unconditional *f*, and jointly optimizing Z and *f* might present further difficulties. * Even if it worked our forecasts would only be “human-level” in a fairly restrictive sense — they wouldn’t even be as good as a human who actually spent years practicing on D before making a forecast on D\*. To be competitive, we want the forecasts in the iid case to be at least as good as fitting a model directly. I think the first point is an interesting ML research problem. (If anything resembling this approach ever works in practice, credit will rightly go to the researchers who figure out the precise version that works and resolve those issues, and this blog post will be a footnote.) I feel relatively optimistic about our collective ability to solve concrete ML problems, unless they turn out to be impossible. I’ll give some preliminary thoughts in the next section “Notes & elaborations.” The second concern, that we need some way to go beyond human level, is a central philosophical issue and I’ll return to it in the subsequent section “Going beyond the human prior.” #### Notes & elaborations * Searching over long texts may be extremely difficult. One idea to avoid this is to try to have a human guide the search, by either generating hypotheses Z at random or sampling perturbations to the current value of Z. Then we can fit a generative model of that exploration process and perform search in the latent space (and also fit *f* in the latent space rather than having it take Z as input). That rests on two hopes: (i) learning the exploration model is easy relative to the other optimization we are doing, (ii) searching for Z in the latent space of the human exploration process is strictly easier than the corresponding search over neural nets. Both of those seem quite plausible to me. * We don’t necessarily need to learn *f* everywhere, it only needs to be valid in a small neighborhood of the current Z. That may not be much harder than learning the unconditional *f*. * Z represents a full posterior rather than a deterministic “hypothesis” about the world, e.g. it might say “R0 is uniform between 2 and 3.” What I’m calling Prior(Z) is really the KL between the prior and Z, and P(*y|x,*Z) will itself reflect the uncertainty in Z. The motivation is that we want a flexible and learnable posterior. (This is particularly valuable once we go beyond human level.) * This formulation queries the human for Prior(Z) before each fitness evaluation. That might be fine, or you might need to learn a predictor of that judgment. It might be easier for a human to report a ratio Prior(Z)/Prior(Z′) than to give an absolute prior probability, but that’s also fine for optimization. I think there are a lot of difficulties of this flavor that are similar to other efforts to learn from humans. * For the purpose of studying the ML optimization difficulties I think we can basically treat the human as an oracle for a reasonable prior. We will then need to relax that rationality assumption in the same way we do for other instances of learning from humans (though a lot of the work will also be done by our efforts to go beyond the human prior, described in the next section). ### Going beyond the human prior How do we get predictions better than explicit human reasoning? We need to have a richer latent space Z, a better Prior(Z), and a better conditional P(*y*|*x*, Z). Instead of having a human predict *y* given *x* and Z, we can use amplification or debate to train f(*y*|*x*, Z) and Prior(Z). This allows Z to be a large object that cannot be directly accessed by a human. For example, Z might be a full library of books describing important facts about the world, heuristics, and so on. Then we may have two powerful models debating “What should we predict about *x*, assuming that everything in Z is true?” Over the course of that debate they can cite small components of Z to help make their case, without the human needing to understand almost anything written in Z. In order to make this approach work, we need to do a lot of things: 1. We still need to deal with all the ML difficulties described in the preceding section. 2. We still need to analyze debate/amplification, and now we’ve increased the problem difficulty slightly. Rather than merely requiring them to produce the “right” answers to questions, we also need them to implement the “right” prior. We already needed to implement the right prior as part of answering questions correctly, so this isn’t too much of a strengthening, but we are calling attention to a particularly challenging case. It also imposes a particular structure on that reasoning which is a real (but hopefully slight) strengthening. 3. Entangled with the new analysis of amplification/debate, we also need to ensure that Z is able to represent a rich enough latent space. I’ll discuss implicit representations of Z in the next section “Representing Z.” 4. Representing Z implicitly and using amplification or debate may make the optimization problem even more difficult. I’ll discuss this in the subsequent section “Jointly optimizing Mz and f.” #### Representing Z I’ve described Z as being a giant string of text. If debate/amplification work at all then I think text is in some sense “universal,” so this isn’t a crazy restriction. That said, representing complex beliefs might require *very long* text, perhaps many orders of magnitude larger than the model *f* itself. That means that optimizing for (Z, *f*) jointly will be much harder than optimizing for *f* alone. The approach I’m most optimistic about is representing Z implicitly as the output of another model Mz. For example, if Z is a text that is trillions of words long, you could have Mz output the *i*th word of Z on input *i*. (To be really efficient you’ll need to share parameters between *f* and Mz but that’s not the hard part.) This can get around the most obvious problem — that Z is too long to possibly write down in its entirety — but I think you actually have to be pretty careful about the implicit representation or else we will make Mz’s job too hard (in a way that will be tied up the competitiveness of debate/amplification). In particular, I think that representing Z as implicit flat text is unlikely to be workable. I’m more optimistic about the kind of approach described in [approval-maximizing representations](https://ai-alignment.com/approval-maximizing-representations-56ee6a6a1fe6) — Z is a complex object that can be related to slightly simpler objects, which can themselves be related to slightly simpler objects… until eventually bottoming out with something simple enough to be read directly by a human. Then Mz implicitly represents Z as an exponentially large tree, and only needs to be able to do one step of unpacking at a time. #### Jointly optimizing Mz and f In the first section I discussed a model where we learn *f*(*y*|*x*, Z) and then use it to optimize Z. This is harder if Z is represented implicitly by Mz, since we can’t really afford to let *f* take Mz as input. I think the most promising approach is to have Mz and *f* both operate on a compact latent space, and perform optimization in this space. I mention that idea in Notes & Elaborations above, but want to go into more detail now since it gets a little more complicated and becomes a more central part of the proposal. (There are other plausible approaches to this problem; having more angles of attack makes me feel more comfortable with the problem, but all of the others feel less promising to me and I wanted to keep this blog post a bit shorter.) The main idea is that rather than training a model Mz(·) which implicitly represents Z, we train a model Mz(·, *z*) which implicitly represents a distribution over Z, parameterized by a compact latent *z.* Mz is trained by iterated amplification to imitate a superhuman exploration distribution, analogous to the way that we could ask a human to sample Z and then train a generative model of the human’s hypothesis-generation. Training Mz this way is itself an open ML problem, similar to the ML problem of making iterated amplification work for question-answering. Now we can train *f*(*y|x, z*) using amplification or debate. Whenever we would want to reference Z, we use Mz(·, *z*). Similarly, we can train Prior(*z*). Then we choose *z\** to optimize log Prior(*z*\*) + sum((*x*, *y*) ~ D) log *f*(*y|x, z*\*). Rather than ending up with a human-comprehensible posterior Z\*, we’ll end up with a compact latent *z*\*. The human-comprehensible posterior Z\* is implemented implicitly by Mz(·, *z*\*). ### Outlook I think the approach in this post can potentially resolve the issue described in [Inaccessible Information](https://ai-alignment.com/inaccessible-information-c749c6a88ce), which I think is one of the largest remaining conceptual obstacles for amplification/debate. So overall I feel very excited about it. Taking this approach means that amplification/debate need to meet a slightly higher bar than they otherwise would, and introduces a bit of extra philosophical difficulty. It remains to be seen whether amplification/debate will work at all, much less whether they can meet this higher bar. But overall I feel pretty excited about this outcome, since I was expecting to need a larger reworking of amplification/debate. I think it’s still very possible that the approach in this post can’t work for fundamental philosophical reasons. I’m not saying this blog post is anywhere close to a convincing argument for feasibility. Even if the approach in this post is conceptually sound, it involves several serious ML challenges. I don’t see any reason those challenges should be impossible, so I feel pretty good about that — it always seems like good news when you can move from philosophical difficulty to technical difficulty. That said, it’s still quite possible that one of these technical issues will be a fundamental deal-breaker for competitiveness. My current view is that we don’t have candidate obstructions for amplification/debate as an approach to AI alignment, though we have a lot of work to do to actually flesh those out into a workable approach. This is a more optimistic place than I was at a month ago when I wrote [Inaccessible Information](https://ai-alignment.com/inaccessible-information-c749c6a88ce). ![](https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=48f61b445c04) --- [Learning the prior](https://ai-alignment.com/learning-the-prior-48f61b445c04) was originally published in [AI Alignment](https://ai-alignment.com) on Medium, where people are continuing the conversation by highlighting and responding to this story.
bc3b87e1-a2a5-448f-a0f5-192d1867a6ac
trentmkelly/LessWrong-43k
LessWrong
Seeking Advice About Career Paths for Non-USA Citizen Hi all, Mostly lurker, I very rarely post, mostly  just read the excellent posts here. I'm a Filipino, which means I am a citizen of the Republic of the Philippines.  My annual salary, before taxes, is about $20,000 (USA dollars).  I work at an IC development company (12 years at this company), developing the logic parts of LCD display drivers.  My understanding is that the median US salary for this kind of job is about $80,000 -> $100,000 a year.  This is a fucking worthless third world country, so the government eats up about ~30% of my salary and converts it to lousy service, rich government officials, bad roadworks, long commute times, and a (tiny) chance of being falsely accused of involvement in the drug trade and shot without trial.  Thus my take-home pay amounts to about $15,000 a year.  China is also murmuring vague threats about war because of the South China Sea (which the local intelligentsia insist on calling the West Philippine Sea); as we all know, the best way to survive a war is not be in one. This has lead to my deep dissatisfaction with my current job. I'm also a programmer as a hobby, and have been programming for 23 years (I started at 10 years old on Atari LOGO; I know a bunch of languages from low-level X86 assembly to C to C++ to ECMAScript to Haskell, and am co-author of SRFI-105 and SRFI-110).  My understanding is that a USA programmer would *start* at the $20,000-a-year level (?), and that someone with experience can probably get twice that, and a senior one can get $100,000/year. As we all know, once a third world citizen starts having first world skill level, he starts demanding first world renumeration also. I've been offered a senior software developer job at a software company, offering approximately $22,000/year; because of various attempts at tax reform it offers a flat 15% income tax, so I can expect about $18,000/year take home pay.  I've turned it down with a heavy heart, because seriously, $22,000/year at 15% tax for a sen
3bced73f-1795-44c5-a666-e7b8a13db064
trentmkelly/LessWrong-43k
LessWrong
Having Useful Conversations Holding conversations in person is useful; feedback is quick, and it seems to be much easier to change your behavior as a result of actually talking with people.  Having effective goal-oriented conversations is somewhat difficult. One source of difficulty is a strong tendency to stray from useful talk into entertaining talk. A typical example is the tendency of many (otherwise potentially productive) conversations between rationalists simply wandering into an extended dialog about the nature of existential risk or some interesting philosophical problem, and then stagnating there (potentially treading interesting new intelligence-demonstrating terrain, but not in point of fact getting anything done or refining beliefs in a meaningful way). If this is what all participants want out of the conversation, then it's great that we've found a community where people can get their kicks in this particular abstruse way. If this is what some but not all participants want out of the conversation, then perhaps the conversation should divide or conclude. But conversations seem to get derailed--either for significant lengths of time, or indefinitely--even when participants honestly want to get things done, and view conversations with other rationalists as instruments to serve their values.  In the interest of getting things done, I (and Nick Tarleton and Michael Curzi, with the tiniest bit of testing) suggest that the rationalist community try really hard to adopt the following norm: when someone else is talking, and the conversation would be significantly better served by them stopping, let them know. Either point out that the topic is nice to think about but unhelpful, that the topic should be considered later rather now, or whatever else the speaker seems to have failed to notice. To help make adoption a little easier, it might be help to choose one person in advance who will have some responsibility to arbitrate.  If a participant disagrees about the relevance of a remark,
83934c13-b3c4-46b0-9331-ba9065564db2
trentmkelly/LessWrong-43k
LessWrong
Mech Interp Project Advising Call: Memorisation in GPT-2 Small I've recently been having advising calls with REMIX teams (Redwood's interpretability sprint) trying to give advice & feedback on projects. As an experiment, I've published a recording of one advising call (with Tessa Barton & Kushal Jain on memorisation in GPT-2 Small), I'm curious whether this is useful to anyone! IMO getting detailed feedback from a more experienced research is one of the best ways to improve at research, but have no idea whether someone else's feedback is comparatively useful, or whether my advice is good enough lol. Thanks to the team for being down to publish this, and the work! https://youtu.be/39hDx25qsS8
5c135261-307f-450c-b552-7246abd26490
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups: Brussels, Houston, Madison, Melbourne, Moscow, Philadelphia (2), Sydney There are upcoming irregularly scheduled Less Wrong meetups in: * Jazz meetup in Philadelphia: 10 February 2012 06:00PM * First Brussels meetup: 11 February 2012 11:00AM * Moscow 11 February meetup: 11 February 2012 06:00PM * Houston Meetup - 2/12: 12 February 2012 02:00PM * Sydney Rationality meet-up No.2: 15 February 2012 06:00PM * Philadelphia LW: Macroeconomics crash course and general meetup: 15 February 2012 06:30PM * Ongoing Ohio Meetup: 19 February 2012 04:00PM * Tucson Meetup: 24 February 2012 07:00PM * Twin Cities South Metro: 06 March 2012 08:00PM * [Ohio/Washington DC] Interest in Reason Rally meetup?: 24 March 2012 04:14PM The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Madison Monday Meetup: 13 February 2012 06:30PM * Melbourne social meetup: 17 February 2012 06:30PM * Monthly Bay Area meetup: Berkeley: 18 February 2012 07:00PM Also, the Salt Lake City group now has a website! Cities with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, London, Madison WI, Melbourne, Mountain View, New York, Ohio, Ottawa, Oxford, Portland, San Francisco, Seattle, Toronto, Washington, DC, Waterloo, and West Los Angeles. If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here).  In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up. Please note that for y
0ea94131-141e-48e1-b286-38f9c562a7b3
trentmkelly/LessWrong-43k
LessWrong
Super-forecasters as a service I've just started building a website yesterday that I think would be super interesting, but I'm not sure if it's legal or if Metaculus/Elicit would be fine with it. The idea: Super-forecasters as a Service.  Say I wanted to know if I should book a flight to Costa Rica this summer, but I'm hesitant because of Covid flight restriction uncertainty. I could create the following question using Elicit: "Will commercial flight BA2490 from the UK to Costa Rica in July 24 be cancelled?"  The website would let you embed your elicit question, and pay people for predicting. People get paid based on their Points-per-question in Metaculus. You can "auction" 100 dollars on a question, and pick a base price per prediction multiplied by the Points-per-question metric.  The more money you auction, the more predictions you'll get. The higher the multiplier, the more predictions will tend to come from top-predictors. Benefits: * It would give all of us access to predictions from the top forecasters in Metaculus. * It would incentivize people to become better predictors in Metaculus, so they can get paid more per prediction in my website. * It would lend credibility to prediction markets, if the best predictors are making money for predicting. Drawbacks: * It could encourage more people to game Metaculus. * It would not be as efficient as a straight prediction market, as people have no incentive to make an effort to make good predictions in this website, as there's no scoring. They are incentivized to make as many predictions as possible to make more money. Questions: * I'm I right to assume that this would be legal, as I'm paying people a fixed fee per prediction (based on their Metaculus score), rather than paying based on correctly predicting the question? * Would Metaculus be fine with this idea? Would they be fine with me scraping https://metaculusextras.com/points_per_question?page=1 and using that data to determine the score for each person? * Would Elicit be
f85401a9-91bd-4924-90a4-a8920e12de0b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
GPT learning from smarter texts? (Epistemic status: I do not understand GPT deeply, so this is just a random idea.) If I understand it correctly, GPT learns from existing texts. Lots of texts. Would it be possible to make GPT smarter by simply giving it *smarter text*? Of course, writing tons of smarter text would be a lot of work, but what about annotating the existing text, like "take this more seriously" and "take this less seriously"? (From technical perspective, maybe the GPT should read the text marked as serious *five times*?) Assuming that the annonation is roughly correct, would this *improve* the results? \* If yes, the problem is *how* to select smarter texts, especially if we want lots of them? But I think some good guesses can be made: * High-school textbooks. They should be educational and relatively uncontroversial ([settled science](https://www.lesswrong.com/posts/ndGYn7ZFiZyernp9f/the-beauty-of-settled-science)). There probably are some reviews of textbooks, so only annotate the ones with good reviews. * Some parts of Reddit are probably much better than average, in the sense that smart content gets upvoted. So include the upvoted comments.
771c59dd-e63e-4871-b2e1-bf8f1ba31b3d
trentmkelly/LessWrong-43k
LessWrong
Resurrection through simulation: questions of feasibility, desirability and some implications Could a future superintelligence bring back the already dead?  This discussion has come up a while back (and see the somewhat related); I'd like to resurrect the topic because ... it's potentially quite important. Algorithmic resurrection is a possibility if we accept the same computational patternist view of identity that suggests cryonics and uploading will work.  I see this as the only consistent view of my observations, but if you don't buy this argument/belief set then the rest may not be relevant. The general implementation idea is to run a forward simulation over some portion of earth's history, constrained to enforce compliance with all recovered historical evidence.  The historical evidence would consist mainly of all the scanned brains and the future internet.   The thesis is that to the extent that you can retrace historical reality complete with simulated historical people and their thoughts, memories, and emotions, to this same extent you actually recreate/resurrect the historical people. So the questions are: is it feasible? is it desirable/ethical/utility-efficient?  And finally, why may this matter? SIMULATION FEASIBILITY A few decades ago pong was a technical achievement, now we have avatar.  The trajectory seems to suggest we are on track to photorealistic simulations fairly soon (decades).  Offline graphics for film arguably are already photoreal, real-time rendering is close behind, and the biggest remaining problem is the uncanny valley, which really is just the AI problem by another name.  Once we solve that (which we are assuming), the Matrix follows.  Superintelligences could help. There are some general theorems in computer graphics that suggest that simulating an observer optimized world requires resources only in proportion to the observational power of the observers.  Video game and film renderers in fact already rely heavily on this strategy. Criticism from Chaos:  We can't even simulate the weather more than a few weeks in advan
2b18fad3-71d3-448c-8e8b-7c08edc2f182
trentmkelly/LessWrong-43k
LessWrong
Democracy beyond majoritarianism The classical definition of democracy is “rule of majority with respect to the minority”. But the classics perfectly knew how oppressive can be the 51% of people over the rest and how difficult is to implement “respect”. I simply reject the majoritarian principle: democracy shall be about the rule of all, that is about optimizing the social system for the average of all preferences. More or less hard to implement, that is the ethical principle, and institutions shall be judged by that principle.  In the second section of this article, I summarize how the concept of “general interest” can be rescued after the Arrow impossibility result. In the third section I comment on the problems of direct democracy, and why parliaments need to delegate to a unitary government the administration of the State (that is, why majoritarianism is to some extent still inevitable). And in the final section of this note I move into practical politics, proposing some cases where parliaments shall decide by “averaging” instead of by majority, and ask the gentle reader to expand my proposed list, because averaging and sortition are often better (and no less legitimate) than majority rule. The general interest In decision theory textbooks a famous result (the “Arrow impossibility theorem”, Arrow, 1951) suggests that the general interest is impossible even to be defined. But a theorem is not a truth about external reality, but a truth about a given formal system. The Arrow theorem is true if preferences are ordinal and there is a single election. With cardinal preferences and multiple votes there is no “impossibility” theorem.  Suppose there is a set of “attainable” states of the Word X. Naive cardinal utilitarianism suggest that given an aggregation function W(.) -often simply a sum- and J members of the society each with a (possibly different) utility cardinal function on the states of the World, the social optimum is simply the point (attainable state of the world) that maximizes aggrega
d65b2211-3afb-4409-bc81-eb45d67231fc
trentmkelly/LessWrong-43k
LessWrong
LIMA: Less Is More for Alignment Abstract > Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output. Implications Data Quality & Capabilities Along with TinyStories and QLoRA I'm becoming increasingly convinced that data quality is all you need, definitely seems to be the case for finetuning, and may be the case for base-model training as well. Better scaling laws through higher-quality corpus? Also for who haven't updated, it seems very likely that GPT-4 equivalents will be essentially free to self-host and tune within a year. Plan for this! Perplexity != Quality > When fine-tuning LIMA, we observe that perplexity on held-out Stack Exchange data (2,000 examples) negatively correlates with the model’s ability to pro
49e07674-c952-4dc2-b257-9307c137ef95
trentmkelly/LessWrong-43k
LessWrong
GPT-4o is out OpenAI just announced an improved LLM called GPT-4o. From their website > Today, GPT-4o is much better than any existing model at understanding and discussing the images you share. For example, you can now take a picture of a menu in a different language and talk to GPT-4o to translate it, learn about the food's history and significance, and get recommendations. In the future, improvements will allow for more natural, real-time voice conversation and the ability to converse with ChatGPT via real-time video. For example, you could show ChatGPT a live sports game and ask it to explain the rules to you. We plan to launch a new Voice Mode with these new capabilities in an alpha in the coming weeks, with early access for Plus users as we roll out more broadly.
069f4f80-177f-42b9-b968-bd2fb263176f
trentmkelly/LessWrong-43k
LessWrong
Environments for killing AIs 4. Environments for killing AIs Summary of entire Series: An alternative approach to designing Friendly Artificial Intelligence computer systems. Summary of this Article: Killing a rogue AI may be impossible in our current environment, but we can change that by changing the environment. Links to all the articles in the series: 1. Optimum number of single points of failure 2. Don't put all your eggs in one basket 3. Defect or Cooperate 4. Environments for killing AIs 5. The advantage of not being open-ended 6. Trustworthy Computing 7. Metamorphosis 8. Believable Promises Links to the parts of this article * The objective * The challenge of killing an AI * The challenge of catching an AI * Doing the impossible * Problems * Solutions ---------------------------------------- Environments for killing AIs The objective "kill" is probably not exactly the right word. In the previous article in this series I talked about the situation where several AIs existed on the internet at the same time, self-improving but not self-improving as fast as possible, because they have agreed not to. If one of those AIs breaks that agreement, the other AIs need to form a posse and gang up on it to 'kill' it in the sense of depriving it of the ability to be a threat to them. So that could mean depriving it of resources, stopping any processes executing its code, even deleting all backups of the source code and physically destroying any machine it has ever touched. Or it could mean breaching its security to alter its code, or even just be in a position to immediately do one of those things then use the demonstrated threat to bring the AI back in line, make it surrender. But, as long as you know what I mean by it, the word "kill" will do. The challenge of killing an AI The current internet-connected computing environment (2013) is not an easy environment in which to kill an AI. There are millions of machines with little or no security, that can be taken ove
7db2ed64-e47d-43ae-b6a8-c4f0059b5fe5
trentmkelly/LessWrong-43k
LessWrong
post-rational distractions DonyChristie's intellectual fap post has called for post-rational techniques. I got most of the way through a comment reply before I realised it was a joke. April fools and all. Fruits of that effort here are some thoughts *** Developing your centre's. Sarah Perry's are knitting and mountain running. https://www.ribbonfarm.com/2018/04/06/deep-laziness/ > If you ever meet me in person and want to put me at ease, ask me about running or knitting. These are two of my behaviours, my behavioural centers, and one indication of that is how much I like talking about them specifically. I do feel that there is something special about them, and that they connect to my nature on a fundamental level. In my heart, I think everyone should do mountain running and knitting, because they are the best things. Reading a lot. All the good soft books. Perhaps the ones overlooked by the skeptic types: Bonds that make us free, Feeding your Demons, Chakras, MTG colour wheel, Dream interpretation, Peterson's Bible lectures. Architecture. Free-ing stuck meanings. A long example of Chapman's here. "I'm not good with people" or "I'm not a technical person" Meditation. Seems to be important and relate to this somehow. MTCB, The Mind Illuminated, Seeing that frees, Roaring Silence. What's the context? What the hell is it you're trying to do? The metagame is discovering the constraints. You're swimming in the unknown what are the rules of the game you're playing. This is what you're doing anyway. It feels important keep in mind Chapman's answer to "If not Bayes then what?" > My answer to “If not Bayesianism, then what?” is: all of human intellectual effort. Figuring out how things work, what’s true or false, what’s effective or useless, is “human complete.” In other words, it’s unboundedly difficult, and every human intellectual faculty must be brought to bear.
25037c90-fa02-4dfb-bae4-30f050c90e06
trentmkelly/LessWrong-43k
LessWrong
Expevolu, Part II: Buying land to create countries This is the second of a series of three posts outlining the expevolu system; if you haven’t read the first one I’d recommend you start there: Expevolu, a laissez-faire approach to country creation PART II – Perpetual Auctions Table of Contents: 1) The Problem 2) The Holdout Problem 3) Perpetual Auctions 4) The bypass by independence problem 5) Zero Tax Ceiling Intro 6) Zero Tax Ceiling 7) Inequality in the selling of CBs 8) Perpetual Auctions and Citizenship 9) Word of Caution and Conclusion 1) The Problem To institute a country, a founder would need to buy a very large quantity of contiguous Citizens Bundles (CBs) and a very large quantity of contiguous properties. For this to happen, the founder would need to have a very long streak of successful transactions for properties and CBs without a single failure. To achieve this, under normal circumstances, would be nearly impossible. This post is about this problem and a proposed way to overcome it. Swiss Cheese Countries An alternative would be for founders to abandon the goal of a solid territory. They would just buy the CBs and properties that they could and create their countries around the land they were not able to clear. For example, if a farmer refused to sell her farm to a founder, the founder could just build his country around that farm. Since the founder failed to buy the farm, the farm would still be part of the adopting country’s territory; it would be a little dot of adopting country within the borders of the new country. This kind of scenario would be a burden to adopting countries and a strain on the relations between them and new countries (not to mention the costs to the people living on these islands of adopting country within new countries). Such circumstances would not be permitted in expevolu. To acquire independence rights, a founder would need to have a solid patch of land, with all its properties and CBs. Since “Swiss cheese” countries would be prohibited in expevolu, we are back to th