url
stringlengths
52
124
post_id
stringlengths
17
17
title
stringlengths
2
248
author
stringlengths
2
49
content
stringlengths
22
295k
date
stringclasses
376 values
https://www.lesswrong.com/posts/wRdqbwFmGpgqNsoCD/isomorphisms-don-t-preserve-subjective-experience-right
wRdqbwFmGpgqNsoCD
Isomorphisms don't preserve subjective experience... right?
notfnofn
I've seen a couple discussions about brain/CNS uploads and there seems to be a common assumption that it would be conscious in the way that we are conscious. I've even seen some anthropic principle-esque arguments that we are likely in a simulation because this seems theoretically feasible. When I think of a simulation, I think of a mathematical isomorphism from the relevant pieces of the brain/CNS onto some model of computation, combined with an isomorphism from possible environments to inputs to this model of computation. But this model of computation could be anything. It could be a quantum super-computer. It could be a slow classical computer with a huge memory. Heck, it could be written on a huge input file and given to a human that slowly works out the computations by hand. And so this feels to me like it suggests the mathematical structure itself is conscious, which feels absurd (not to mention that the implications are downright terrifying). So there should be some sort of hardware-dependence to obtain subjective experience. Is this a generally accepted conclusion?
2024-07-03
https://www.lesswrong.com/posts/8jGb5hRGGEcs7Be78/how-predictive-processing-solved-my-wrist-pain
8jGb5hRGGEcs7Be78
How predictive processing solved my wrist pain
makoshen
This is a post about a period of major suffering I went through, and how I acquired the knowledge and skill to eventually lift me out of it. Disclaimer: this is not medical advice — consider it a personal narrative, perhaps a starting point for exploration. A fire burning [Note: this section is about my personal experience. You can skip to the technical arguments here] I am about to give up. My eyes are bleary after four days of staring at numpy arrays. I want to smash my keyboard and throw it out into the snow, but I have promised myself that I will fix this bug by the end of the week. Besides a dull headache, I feel the usual ache in my wrists. This time, though, my forearms are also hurting. A dull heat rises from my wrists to the soft muscle below my elbow. Another half hour of coding, the pain in my wrists becomes sharper. So I stop for the day, knowing that my wrists will be back to normal to fix this problem tomorrow. Or so I think. As a CS grad student, I code a lot, so make sure to have good postural and ergonomic hygiene. I use a Microsoft Sculpt keyboard, hotkeys to minimize mouse use. My wrists are neutral and my screen is at head height. It more or less keeps various aches at bay. The next morning, wrists sore, I begin to type, and within fifteen minutes, the dull heat in my forearms becomes hot. My fingers become stiff; my typing slows to a crawl. My hands feel like they have been submerged in snow – cold, inflexible, useless. In the doctor’s office, I explain the situation: Me (suppressing panic, explaining the situation): What’s going on? Doctor: You have tendonitis or a repetitive strain injury — the tendons in your hand are inflamed from overuse. Me: You’re saying my hand is damaged from me typing too much? But I’m actually typing less than I was last semester. Doctor (shrugging): To be honest, we don’t really know what’s going on. But you should see a physiotherapist. It’s up to you, but here’s a referral slip. The physiotherapist tells me my muscles are imbalanced — some auxiliary muscles need strengthening. She gives me wrist exercises. I notice I am confused. Last semester I was spending twice the time typing than I am now. Also, before this flare up, I could do pull-ups just fine. Muscle and tendon weakness can’t be the cause. Four weeks later and it’s even worse. I’m using my forearms to open doorknobs, eating frozen meals because I can’t chop vegetables, and wonder if I have to use voice dictation software for the rest of my life. I move between fear and anger as my waking states. I start looking for solutions and order a dozen pop-sci books about chronic pain, and start doing a literature review: A couple books in, it hits me: I, the doctor, and the physiotherapist are all working at the wrong level of abstraction. Of course we could look at the pain at the level of my extensor carpi radialis, but what becomes blindingly obvious is that pain is complex and distributed. There is no pain center in the brain[1]. Everywhere from the somatosensory cortex to the amygdala to the vagus nerve to nociceptors in our skin is involved in the unpleasant experience we call pain. I began to suspect that my situation was more like a software problem than a hardware problem. Rather than any part of my wrist or forearm having structural damage, my pain was a prediction system gone awry. After doing five hours of exercises that looked like they had nothing to do with my wrists — there was a major shift. Though I still felt sharp in my wrist, my forearm was down to a low burn. Three months later, and I was doing pull-ups and typing again — no sharpness, no burn, nothing. The rest of this post is about the mechanism of pain that I uncovered and what I did to become functional again. Understanding pain as inference Disclaimer: There is a lot that we don't understand about pain. The account I’m about to give is a broad overview of the features of pain that we do understand. I have received confirmation from one professor of neuroscience and believe most pain researchers would broadly agree with this account. 1. The old account: pain as measure of damage Both the physiotherapist and the doctor told me to rest my wrist. Why? Because they believed that my pain was from a wrist that was inflamed and damaged by continued activity. ‘Repetitive strain injury’ as a diagnosis implies that it is the repetitive motion that is causing damage and therefore pain. Their model was something like: Tissue damage → Nociceptors response → Experience of pain Logically, the available treatments become either 'fixing' the tissue damage by doing wrist exercises to ‘balance’ the muscles in my forearm, or taking an NSAID like Advil to chemically inhibit the nociceptive response. 2. Cracks in the model This didn’t make sense in light of my experience. Some days I had fairly little pain while other days it was debilitating — but I was keeping them in wrist braces and minimizing use on both days. It would be very strange if it was increased damage that was causing more pain. Also, I noticed that the location would shift slightly. Some days it would radiate more up my forearms while other days it would center around my wrists. How could the sensation be changing if it was just from tissue damage? As I started to explore the literature more, I saw that it also didn’t make sense with other people’s experiences: People had pain with no damagePeople who have had limbs amputated often report intense pain in the absent limb. The pain is usually intermittent, and is often vividly described as throbbing, stabbing or burning. They are able to distinguish between pain at the stump, and pain within the missing limb itself. In this case, there is no tissue damage in the absent limb, yet there are recurring episodes of pain long after the wound has healed.Emotional pain: whether the death of a loved one, or  heartbreak, some of the most painful experiences we go through involve no physical damage. People had damage with no painHenry Beecher, an anesthesiologist on the battlefield of the Anzio Beachhead in Italy during WWII, found an astounding lack of reported pain among severely wounded soldiers. He surveyed the reported pain intensity of 215 lucid men that had penetrating wounds in the head, chest, or stomach, compound bone fractures, or extensive soft-tissue wounds. Three-quarters of the soldiers with severe wounds had so little pain that they refused pain relief medication[2]. 3. Pain is prediction of future damage Here's a more useful account of pain. Brains evolved in multicellular organisms because they could predict the needs of other organ systems and take action to keep the organism alive. It’s why your thirst is quenched within thirty seconds of drinking water even though it takes thirty minutes for your small intestine to reabsorb it. It’s also why you salivate when just smelling food as you pass by, or start to shiver in the cold even when your core body temperature hasn’t dropped. Pain is a signal for protective action. Not a damage meter. When we touch a hot stove, we yank our hand away — often quick enough to avoid any damage. If our hand is burned, we keep it close to our body, away from accidental bumps that might damage it. It's worth remembering that the brain is a folded sack of flesh sealed in thick bone box. And that it understands basically everything through pipes that carry chemicals and digital signals. There is no such thing as 'ground truth' to the brain, so it has incorporate different signals to make predictions. Our brains use many different signals to make predictions. One of the most highly weighted signals is fear. You've felt this if, stepping out of a scary movie, you were startled by the shadow of some harmless pedestrian. Or if you've gotten an injection in a strange foreign office where the nurse is mean, and then another where she/he is chatting with you and you're relaxed — it literally hurts more when you're scared. 4. My pain prediction system was miscalibrated So what was going on with my wrist pain? Basically, the intensity and stress with which I was coding contributed to initial sensation in my forearm. But then there was a vicious feedback loop. Because I interpreted the pain as “something wrong with me”, I became more afraid when my wrist hurt, which then amplified the pain. This repeated until it became debilitating for me. A simplified version of what was going on: My pain system was miscalibrated to the environment. It was predicting future damage, fed by stress and the fearful belief that “something is wrong with my wrist”, even though my wrist was structurally fine. Recalibrating my pain system The understanding alone decreased my pain. Coming to internalize that pain is not damage and that my wrist was not in danger, made a big difference. Once upon a time, I would have doubted that a change in my beliefs could change something as intimate as my experience of pain. But the experience was undeniable — and made sense in context of the predictive brain. The internalization happened over many different resources. Papers[3], books[4], videos[5], conversations with a neuroscience professor. The books and lectures tend to be helpful in tone, but have models that I think are outdated or imprecise; the papers are precise enough but hard to internalize emotionally. I would say this brought me ~50% towards resolution. Shifting from a default stress-fear state Many different approaches worked for me through different periods of time. I initially looked very briefly at the literature comparing different treatments for chronic pain, and concluded that I would learn far more from just experimenting by myself. I'm very glad that I did. At the beginning, I played around with something like a combination of Focusing and IFS-like practices. Then, I found Pain Reprocessing Therapy , which basically works by shifting priors through gentle awareness exercises[6]. They don’t use the language of predictive processing, but I found PP a better frame than their own to understand their proposed practices of “somatic tracking” and “positive affect induction”. I did these awareness exercises on my own, and eventually went through their certification program to teach PRT, which improved my own ability to do the exercises. These got me another 40% to resolution. At this point, I was actually better than before the first spreading of heat into my forearms. The last 10% was from gradually finding the tasks that would trigger pain, and doing them from a calm place until I could do them consistently while staying calm. A high level model of what happened My model of what happened is something like this: initially, my brain had a strong prior of my wrists being damaged, which influenced lower level predictions of pain (and thus experiences of pain) during periods of extended usage. This prior was reinforced by the fear and stress associated with the pain, creating a feedback loop that led to higher precision (certainty) for these pain predictions for longer periods. New information ("your wrists might not be damaged") shifted my higher-level priors, which also decreased the lower level predictions of pain, stopping that feedback loop. Still, the emotional state of stress and fear were persistent, and doing the Focusing/PRT relaxed my nervous system and also decreased the low level pain priors. I am confident this explanation is directionally correct, based on my direct experience. I went from having fingers that would visibly seize up when trying to type, to being able to play guitar and chop vegetables without pain. While it might seem like regression to the mean or some other physiological shift could explain my recovery, what solidifies my confidence in the feedback loop model is the real-time changes I observed. Within a single day, I could sense the pain changing as I performed an exercise. As I made this a habit, it became increasingly clear that a process similar to what I've described above was taking place. This immediate and consistent correlation between my mental exercises and pain levels strongly suggests that the mechanism was more than just natural healing or coincidence. I was convinced that this was not uncommon from conversations with two friends who went through a similar arc, and the thousands of testimonials of people online who had their wrist pain resolved through similar means. It's now been a couple years since the incident, and I've had one mild episode of pain where I paused from playing the piano for a week — but otherwise my wrists have been happy and healthy. I'm not suggesting that what I did will work for everyone. But, given the low monetary cost and risk of harm, and fairly high upside — I think some experimentation in this vein is at least something to seriously consider. Quick note on possible risks: I have not heard of people worsening their pain condition through any of the exercises I described above. Perhaps the worst outcome would be not going to the doctor when you have a malignant tumor on your wrist, or a fractured bone. I'm not discouraging normal medical attention. Acknowledgements I want to express deep gratitude to the LW diaspora (especially Scott and Kaj) for introducing the ideas that played a big role in helping me escape this horrific painful local minima. Open offer In the spirit of paying it forward, if anyone feels like they might benefit from a conversation with me on this topic, whether to figure out a plan for your situation, understand more details about my experience, or just to commiserate, you can send "Wrist Pain Discussion" as the subject line to maxkshen[at]gmail.com. I'll do my best to respond within a week to schedule a brief conversation. (Again this won't be medical advice). ^ See the discussion of the ‘neuromatrix’ in Melzack, Ronald. "Pain and the neuromatrix in the brain." Journal of dental education 65.12 (2001): 1378-1382. ^ Beecher, Henry K. "Pain in men wounded in battle." Annals of surgery 123.1 (1946): 96. The study is not perfect, but for my purposes it holds true. ^ Hechler, Tanja, Dominik Endres, and Anna Thorwart. "Why harmless sensations might hurt in individuals with chronic pain: about heightened prediction and perception of pain in the mind." Frontiers in psychology 7 (2016): 1638.; Büchel, Christian, et al. "Placebo analgesia: a predictive coding perspective." Neuron 81.6 (2014): 1223-1239.; Stilwell, Peter, and Katherine Harman. "An enactive approach to pain: beyond the biopsychosocial model." Phenomenology and the Cognitive Sciences 18.4 (2019): 637-665.; Ashar, Yoni K., et al. "Effect of pain reprocessing therapy vs placebo and usual care for patients with chronic back pain: a randomized clinical trial." JAMA psychiatry 79.1 (2022): 13-23. ^ Butler, David Sheridan, and G. Lorimer Moseley. Explain Pain 2nd Edn. Noigroup publications, 2013.; Hargrove, Todd R. A guide to better movement: the science and practice of moving with more skill and less pain. Better Movement, 2014.; Zoffness, Rachel. The pain management workbook: Powerful CBT and mindfulness skills to take control of pain and reclaim your life. New Harbinger Publications, 2020. ^ Schubiner lecture, rambling but convincing (to me) testimonial, (see the various thumbnails from these videos) ^ I am not fully satisfied by their proposed model of what their practices are doing, but in either case it worked for me and most others I've shown.
2024-07-04
https://www.lesswrong.com/posts/sQJs5FeyLcCFGYutR/free-will-determinism-and-choice
sQJs5FeyLcCFGYutR
Free Will, Determinism, And Choice
Zero Contradictions
Rough Summary: Free will is not "free" in the sense of being uncaused. Free will is "free" in the sense that you are the cause. You are free to choose between A or B if your choice will determine the outcome.Free will and determinism are both assumptions that are implicit in everything that we do. They both depend on each other. The conflict between free will and determinism arises due to Subject | Object Dissonance.A choice is not the creation of new causality out of nothing. It is the causality of the universe flowing through you. Although I'm not the author of this post (a friend of mine wrote it), I have created a PDF version of the essay that has a table of contents and headers to make it even easier to read.
2024-07-06
https://www.lesswrong.com/posts/fEvCxNte6FKSRNFvN/3c-s-a-recipe-for-mathing-concepts
fEvCxNte6FKSRNFvN
3C's: A Recipe For Mathing Concepts
johnswentworth
Opening Example: Teleology When people say “the heart’s purpose is to pump blood” or “a pencil’s function is to write”, what does that mean physically? What are “purpose” or “function”, not merely in intuitive terms, but in terms of math and physics? That’s the core question of what philosophers call teleology - the study of “telos”, i.e. purpose or function or goal. This post is about a particular way of approaching conceptual/philosophical questions, especially for finding “True Names” - i.e. mathematical operationalizations of concepts which are sufficiently robust to hold up under optimization pressure. We’re going to apply the method to teleology as an example. We’ll outline the general approach in abstract later; for now, try to pay attention to the sequence of questions we ask in the context of teleology. Cognition We start from the subjective view: set aside (temporarily) the question of what “purpose” or “function” mean physically. Instead, first ask what it means for me to view a heart as “having the purpose of pumping blood”, or ascribe the “function of writing” to a pencil. What does it mean to model things as having purpose or function? Proposed answer: when I ascribe purpose or function to something, I model it as having been optimized (in the sense usually used on LessWrong) to do something. That’s basically the standard answer among philosophers, modulo expressing the idea in terms of the LessWrong notion of optimization. (From there, philosophers typically ask about “original teleology” - i.e. a hammer has been optimized by a human, and the human has itself been optimized by evolution, but where does that chain ground out? What optimization process was not itself produced by another optimization process? And then the obvious answer is “evolution”, and philosophers debate whether all teleology grounds out in evolution-like phenomena. But we’re going to go in a different direction, and ask entirely different questions.) Convergence Next: I notice that there’s an awful lot of convergence in what things different people model as having been optimized, and what different people model things as having been optimized for. Notably, this convergence occurs even when people don’t actually know about the optimization process - for instance, humans correctly guessed millenia ago that living organisms had been heavily optimized somehow, even though those humans were totally wrong about what process optimized all those organisms; they thought it was some human-like-but-more-capable designer, and only later figured out evolution. Why the convergence? Our everyday experience implies that there is some property of e.g. a heron such that many different people can look at the heron, convergently realize that the heron has been optimized for something, and even converge to some degree on which things the heron (or the parts of the heron) have been optimized for - for instance, that the heron’s heart has been optimized to pump blood. (Not necessarily perfect convergence, not necessarily everyone, but any convergence beyond random chance is a surprise to be explained if we’re starting from a subjective account.) Crucially, it’s a property of the heron, and maybe of the heron’s immediate surroundings, not of the heron’s whole ancestral environment - because people can convergently figure out that the heron has been optimized just by observing the heron in its usual habitat. So now we arrive at the second big question: what are the patterns out in the world which different people convergently recognize as hallmarks of having-been-optimized? What is it about herons, for instance, which makes it clear that they’ve been optimized, even before we know all the details of the optimization process? Candidate answer (underspecified and not high confidence, but it will serve for an example): the system has lots of parts which are all in unusual/improbable states, but all “in a consistent direction” in some sense. So it looks like all the parts were pushed away from what’s statistically typical, in “the same way”.[1] Ideally, we could operationalize that intuitive answer in a way which would make convergence provable; it has the right flavor for a natural latent style convergence argument. Corroboration Imagine, now, that we have a full mathematical operationalization of “parts which are all in unusual/improbable states, but all ‘in a consistent direction’”. Imagine also that we are able to prove convergence. What else would we want from this operationalization of teleology? Well, I look at a heron, I notice that it has a bunch of parts which are all in unusual/improbable states, but all ‘in a consistent direction’ - i.e. all its parts are in whatever unusual configurations they need to be in for the heron to survive; random configurations would not do that. I conclude that the heron has been optimized. Insofar as my intuition picks up on “parts which are all in unusual/improbable states, but all ‘in a consistent direction’” and interprets that pattern as a hallmark of optimization, and my intuition is correct… then it should be a derivable/provable fact about the external world that “parts which are all in unusual/improbable states, but all ‘in a consistent direction’” occur approximately if-and-only-if a system has been optimized. More generally: insofar as we have some intuitions about how teleology works, we should be able to prove that our operationalization/characterization indeed works that way. (Or, insofar as the operationalization doesn’t work the way we intuitively expect, we should be able to propagate the counterexamples back to our intuitions and conclude that our intuitions were wrong or required additional assumptions, as opposed to the operationalization being wrong.) Cognition -> Convergence -> Corroboration Let’s go back over the teleology example, with an emphasis on what questions we’re asking and why. We start with questions about my cognition: …first ask what it means for me to view a heart as “having the purpose of pumping blood”, or ascribe the “function of writing” to a pencil. What does it mean to model things as having purpose or function? Two things to emphasize: first, these are questions about my cognition (or, more generally, one person’s cognition); the answers may or may not generalize to other people. Second, they are questions about my cognition; they’re not asking about how the external world “actually is” (at least not directly). Some nice things about starting from questions about my cognition: I can get lots of relevant data by introspecting.I can get lots of relevant data from my background models of cognition, and leverage abstract models of cognition (like e.g. Bayesianism or relaxation-based search) to formulate my understanding mathematically. The downside is that introspection is notoriously biased and error-prone, and this is all not-very-legible and hard to test/prove. That’s fine for now; (some) legible falsifiability will enter in the next steps. From cognition, we move on to questions about convergence: Next: I notice that there’s an awful lot of convergence in what things different people model as having been optimized, and what different people model things as having been optimized for. [...] Why the convergence? The standard answer of interest, which generalizes well beyond teleology, is: people pick up on the same patterns in the environment, and convergently model/interpret them in similar ways. Then the generalizable question is: what are those patterns? Or, in the context of teleology: … what are the patterns out in the world which different people convergently recognize as hallmarks of having-been-optimized? What is it about herons, for instance, which makes it clear that they’ve been optimized, even before we know all the details of the optimization process? At this point, we start to have space for falsifiable predictions and/or mathematical proof: if we have a candidate pattern, then we should be able to demonstrate/prove that it is, in fact, convergently recognized (in some reasonable sense, under some reasonable conditions) by many minds. Such a proof is where a natural latent style argument would typically come in (though of course there may be other ways to establish convergence). Once convergence is established, we know that we’ve characterized some convergently-recognized pattern. The last step is that it’s the convergently-recognized pattern we’re looking for. For instance, maybe dogs are a convergently-recognized pattern in our environment, and having-been-optimized is also a convergently-recognized pattern in our environment. If we’ve established that “parts which are all in unusual/improbable states, but all ‘in a consistent direction’” is a convergently-recognized pattern in our environment, how do we argue that that pattern is the-thing-humans-call-“teleology”, as opposed to the-thing-humans-call-“dogs”? Well, we show that the pattern has some of the other properties we expect of teleology. More generally, this is the corroboration step. We want to prove/demonstrate some further consequences of the pattern identified in the previous step (including how it interfaces with other patterns we think we’ve identified), in order to make sure it’s the pattern we intended to find, as opposed to some other convergently-recognized pattern. This is where all your standard math (and maybe science) would come in. Cognition -> Convergence -> Corroboration. That’s the pipeline. Examples are Confusing, Let’s Make it Really Abstract! The Cognition -> Convergence -> Corroboration Algorithm: Cognition: Guess at a Cognitive Operationalization. Cognitive Model Selection: Choose a framework to model your own mind. Bayesianism is one often-fruitful place to start.Cognitive Operationalization: Within that framework, operationalize the intuition itself. E.g. within my bayesian world model, what is a “dog.” Within my bayesian world model, what is “purpose” or what is going on in my bayesian world model when I ascribe “purpose” to some part of it? Convergence: Guess at a Pattern + Prove Pattern is Convergent.Operationalization: (Somehow) use the Cognitive Operationalization to intuit an Environmental Operationalization of the concept in terms of the external world, possibly using methods which include investigating instances which many people point to and agree is “the thing.” If you fail, return to step 1.Convergability: Check that your operationalization is in fact a candidate member of “patterns which minds tend to convergently recognize” (which we usually operationalize using natural latents). If you fail, return to step 2.aCorroboration: Derive further properties about the candidate external pattern (operationalization) and check if those further properties are consistent with the original intuitive concept. If they aren’t, return to step 2. Upon failure sending you back to step 1, three things could be wrong. Use magic to figure out which it is: Cognitive Model is fine, Intuition is fine, Cognitive Operationalization needs to update. -> Update the Cognitive Operationalization and return to step 2.Cognitive Model is fine, Intuition needs to update. -> Update the intuition and return to step 1.b.Intuition is fine, Cognitive Model needs to update. -> Update the cognitive model and return to step 1.b. Also, obviously, if you’re caught in a loop (like, e.g., failing step 3 and going back to step 2.a over and over, jump back a bit further, e.g. step 1.) When is the Cognition -> Convergence -> Corroboration Pipeline Useful? The central use case is: There’s some concept…which different humans can successfully communicate about well enough that basically-the-same concept seems to show up in their different heads (as evidenced by e.g. systematically pointing at stuff and calling it the same thing)…and we want a robust mathematical operationalization of that concept (i.e. a “True Name”). Most topics studied in philosophy are in-scope. Most (but importantly not all) “deconfusion” work is in-scope. Beyond just a useful process to follow for such use-cases, we’ve also found the Cognition -> Convergence -> Corroboration structure useful for organizing thoughts/arguments: it’s useful to explicitly distinguish a cognitive characterization from a convergent pattern characterization from a consequence. For instance, we’ve often found it useful to explain some problem we’re thinking about as “What are the patterns/structures in the world which people convergently recognize as X?”. Some use-cases for which this pipeline is probably not the right tool: Operationalizing some concept which humans are not able to communicate about at all. (You know which I’m talking about. (He’s lying, you don’t.))Doing math or engineering with concepts which have already been operationalized.Punditry, publishing ML papers, and other use cases for which being robustly correct or understanding what one is doing is not a particularly central objective. If you want to see more examples where we apply this methodology, check out the Tools post, the recent Corrigibility post, and (less explicitly) the Interoperable Semantics post. Thank you to Steve Petersen and Ramana Kumar for our discussions of teleology; it was in those discussions that the example in this post bubbled around in my head. ^ If “unusual/improbable” still sounds too subjective, then you can think of operationalizing it in the Solomonoff/Kolmogorov sense, i.e. in terms of compressibility using a simple Turing machine.
2024-07-03
https://www.lesswrong.com/posts/hmnB9zpY5wB7dKMQ3/announcing-the-ai-forecasting-benchmark-series-or-july-8
hmnB9zpY5wB7dKMQ3
Announcing the AI Forecasting Benchmark Series | July 8, $120k in Prizes
ChristianWilliams
null
2024-07-02
https://www.lesswrong.com/posts/myBQH6Abp5wa6dEJp/open-sourcing-metaculus
myBQH6Abp5wa6dEJp
Open Sourcing Metaculus
ChristianWilliams
null
2024-07-02
https://www.lesswrong.com/posts/z47EogCDtRNAEJhjw/why-can-t-sub-agi-solve-ai-alignment-or-why-would-sub-agi-ai
z47EogCDtRNAEJhjw
Why Can’t Sub-AGI Solve AI Alignment? Or: Why Would Sub-AGI AI Not be Aligned?
ViktorThink
I believe there are people with far greater knowledge than me that can point out where I am wrong. Cause I do believe my reasoning is wrong, but I can not see why it would be highly unfeasible to train a sub-AGI intelligent AI that most likely will be aligned and able to solve AI alignment. My assumptions are as follows: Current AI seems aligned to the best of its ability.PhD level researchers would eventually solve AI alignment if given enough time.PhD level intelligence is below AGI in intelligence.There is no clear reason why current AI using current paradigm technology would become unaligned before reaching PhD level intelligence.We could train AI until it reaches PhD level intelligence, and then let it solve AI Alignment, without itself needing to self improve. The point I am least confident in, is 4, since we have no clear way of knowing at what intelligence level an AI model would become unaligned. Multiple organisations seem to already think that training AI that solves alignment for us is the best path (e.g. superalignment). Attached is my mental model of what intelligence different tasks require, and different people have. Figure 1: My mental model of natural research capability RC (basically IQ with higher correlation for research capabilities), where intelligence needed to align AI is above average PhD level, but below smartest human in the world, and even further from AGI.
2024-07-02
https://www.lesswrong.com/posts/GmDbhdK8ih52WLKg9/ai-development-is-an-act-of-social-revolution
GmDbhdK8ih52WLKg9
AI development is an act of social revolution
artemiocobb
Assume the goal of AI research is to build superintelligent decision-making systems. If we successfully develop and deploy such systems, it will likely no longer make sense for humans to work many, or even most, jobs. The duties humans must still perform will radically differ from those of today. In this essay, I am concerned with the transformation of labor resulting from superintelligent systems. If superintelligent AI systems are created, our world economic and social systems will likely significantly depart from what they are today. Like the revolutions of the 20th century, the worker is at the center of desired reforms - whether this worker is an agrarian, proletariat, or knowledge worker. I will not speculate on what such a world would look like but rather will point out that the goal of AI research - which if achieved will result in a complete transformation of labor and the role of all workers - will radically change society in similar ways to what the 20th century social revolutions sought. Therefore I argue that the act of advancing the field of AI, of conducting AI research and building more intelligent systems, is an act of social revolution. Through this lens, I highlight that we, as scientists and engineers, as constructors of this revolution, have failed to justify to the workers that our mission is worthwhile. Important conversations are taking place, especially on this website, about how to mitigate the possible existential risks of superintelligent agents. But if we can build safe, superintelligent systems, what does this mean for the 21st-century worker? Why is the act of AI research a social revolutionary act, as opposed to the act of developing previous technologies? Technological revolutions have historically incited social revolutions. But being an engineer during the Industrial Revolution, for example, did not necessarily make you an agent of social revolution. I argue that this is not true for developers of AI today. The key difference is that the end goal of engineering efforts in the past, of say developing a more efficient steam engine, was to increase worker productivity and capabilities. The drive of the field centered around this. AI practitioners and corporations often promise that AI will also increase worker productivity and capabilities. But the key drive of this field is distinctly different: superintelligent AI systems will not only make workers more productive but we will likely reach a point where it is no longer profitable to use human workers for many or most economically significant jobs. Therefore the drive of the AI field, the end goal we aim to reach of developing superintelligent systems, inherently involves transforming the current system and division of labor. Such a departure from previous economic and social structures was entirely out of scope for engineers of the Industrial Revolution and instead was thought of by the philosophers and economists of the time. Today, however, AI technical progress and the radical transformation of labor are inseparable; to work towards the former is to work towards the latter. What I see as the key problem with AI development as a social revolution. The revolutions of the 20th century aimed to return society to some golden age of human history, such as the Marxist conception of the State of Nature. The desired end state of these revolutions had to exist at some point in human history: this offers a justification that such a state could be achieved in the future. And with such a justification these revolutions, at least initially, often enjoyed popular support. Critics of these revolutions could also pinpoint flaws in the envisioned goal or execution. AI research does not drive toward returning to any past state. I have not heard a widely accepted notion that the advent of superintelligent decision-makers will return us to the State of Nature or some other golden age that once was. Instead, AI research seeks to create a new state of mankind - one that lacks any historical justification because it has never existed before. And this lack of justification makes it harder to reason that we will end up in a better state than we are in now. We have failed to justify to society’s workers that our drive to build more intelligent systems is a pursuit in their favor. Why does reframing AI development as an act of social revolution matter? First and foremost we, as scientists and engineers, are crafting a social revolution yet this is not acknowledged. We lack a clear picture of where we are going, or what the world should look like if we succeed in developing safe, superintelligent systems AI. We are not giving the workers, whose lives we will transform, an ideology or image of their future to rest on. AI practitioners should understand their place in this social transformation. The act of proving a theorem or developing a new algorithm is an act towards social revolution, not just towards scientific progress. I hope that by discussing this now we do not stumble into a vanguard-elite-like dictatorship over workers or popular resistance against progress in AI.
2024-07-03
https://www.lesswrong.com/posts/WT3u2tK2AJpYKvaZd/an-ai-race-with-china-can-be-better-than-not-racing
WT3u2tK2AJpYKvaZd
An AI Race With China Can Be Better Than Not Racing
niplav
Frustrated by all your bad takes, I write a Monte-Carlo analysis of whether a transformative-AI-race between the PRC and the USA would be good. To my surprise, I find that it is better than not racing. Advocating for an international project to build TAI instead of racing turns out to be good if the probability of such advocacy succeeding is ≥20%. A common scheme for a conversation about pausing the development of transformative AI goes like this: Abdullah: "I think we should pause the development of TAI, because if we don't it seems plausible that humanity will be disempowered by by advanced AI systems." Benjamin: "Ah, if by “we” you refer to the United States (and and its allies, which probably don't stand a chance on their own to develop TAI), then the current geopolitical rival of the US, namely the PRC, will achieve TAI first. That would be bad." Abdullah: "I don't see how the US getting TAI first changes anything about the fact that we don't know how to align superintelligent AI systems—I'd rather not race to be the first person to kill everyone." Benjamin: "Ah, so now you're retreating back into your cozy little motte: Earlier you said that “it seems plausible that humanity will be disempowered“, now you're acting like doom and gloom is certain. You don't seem to be able to make up your mind about how risky you think the whole enterprise is, and I have very concrete geopolitical enemies at my (semiconductor manufacturer's) doorstep that I have to worry about. Come back with better arguments." This dynamic is a bit frustrating. Here's how I'd like Abdullah to respond: Abdullah: "You're right, you're right. I was insufficiently precise in my statements, and I apologize for that. Instead, let us manifest the dream of the great philosopher: Calculemus! At a basic level, we want to estimate how much worse (or, perhaps, better) it would be for the United States to completely cede the race for TAI to the PRC. I will exclude other countries as contenders in the scramble for TAI, since I want to keep this analysis simple, but that doesn't mean that I don't think they matter. (Although, honestly, the list of serious contenders is pretty short.) For this, we have to estimate multiple quantities: In worlds in which the US and PRC race for TAI: The time until the US/PRC builds TAI. The probability of extinction due to TAI, if the US is in the lead. The probability of extinction due to TAI, if the PRC is in the lead. The value of the worlds in which the US builds aligned TAI first. The value of the worlds in which the PRC builds aligned TAI first. In worlds where the US tries to convince other countries (including the PRC) to not build TAI, potentially including force, and still tries to prevent TAI-induced disempowerment by doing alignment-research and sharing alignment-favoring research results: The time until the PRC builds TAI. The probability of extinction caused by TAI. The value of worlds in which the PRC builds aligned TAI. The value of worlds where extinction occurs (which I'll fix at 0). As a reference point the value of hypothetical worlds in which there is a multinational exclusive AGI consortium that builds TAI first, without any time pressure, for which I'll fix the mean value at 1. To properly quantify uncertainty, I'll use the Monte-Carlo estimation library squigglepy (no relation to any office supplies or internals of neural networks). We start, as usual, with housekeeping: import numpy as np import squigglepy as sq import matplotlib.pyplot as plt As already said, we fix the value of extinction at 0, and the value of a multinational AGI consortium-led TAI at 1 (I'll just call the consortium "MAGIC", from here on). That is not to say that the MAGIC-led TAI future is the best possible TAI future, or even a good or acceptable one. Technically the only assumption I'm making is that these kinds of futures are better than extinction—which I'm anxiously uncertain about. But the whole thing is symmetric under multiplication with -1, so… extinction_val=0 magic_val=sq.norm(mean=1, sd=0.1) Now we can truly start with some estimation. Let's start with the time until TAI, given that the US builds it first. Cotra 2020 has a median estimate of the first year where TAI is affortable to train in 2052, but a recent update by the author puts the median now at 2037. As move of defensive epistemics, we can use that timeline, which I'll rougly approximate a mixture of two normal distributions. My own timelines2024−06 aren't actually very far off from the updated Cotra estimate, only ~5 years shorter. timeline_us_race=sq.mixture([sq.norm(mean=2035, sd=5), sq.norm(mean=2060, sd=20)], [0.7, 0.3]) I don't like clipping the distribution on the left, it leaves ugly artefacts. Unfortunately squigglepy doesn't yet support truncating distributions, so I'll make do with what I have and add truncating later. (I also tried to import the replicated TAI-timeline distribution by Rethink Priorities, but after spending ~15 minutes trying to get it to work, I gave up). timeline_us_race_sample=timeline_us_race@1000000 This reliably gives samples with median of ≈2037 and mean of ≈2044. Importantly, this means that the US will train TAI as soon as it becomes possible, because there is a race for TAI with the PRC. I think the PRC is behind on TAI, compared to the US, but only about one. year. So it should be fine to define the same distribution, just with the means shifted one year backward. timeline_prc_race=sq.mixture([sq.norm(mean=2036, sd=5), sq.norm(mean=2061, sd=20)], [0.7, 0.3]) This yields a median of ≈2038 and a mean of ≈2043. (Why is the mean a year earlier? I don't know. Skill issue, probably.) Next up is the probability that TAI causes an existential catastrophe, namely an event that causes a loss of the future potential of humanity. For the US getting to TAI first in a race scenario, I'm going to go with a mean probability of 10%.[1] pdoom_us_race=sq.beta(a=2, b=18) For the PRC, I'm going to go somewhat higher on the probability of doom, for the reasons that discussions about the AI alignment problem doesn't seem to have as much traction there yet. Also, in many east-Asian countries the conversation around AI seems to still be very consciousness-focused which, from an x-risk perspective, is a distraction. I'll not go higher than a beta-distribution with a mean of 20%, for a number of reasons: A lot of the AI alignment success seems to me stem from the question of whether the problem is easy or not, and is not very elastic to human effort. Two reasons mentioned here: "China’s covid response, seems, overall, to have been much more effective than the West’s." (only weakly endorsed) "it looks like China’s society/government is overall more like an agent than the US government. It seems possible to imagine the PRC having a coherent “stance” on AI risk. If Xi Jinping came to the conclusion that AGI was an existential risk, I imagine that that could actually be propagated through the chinese government, and the chinese society, in a way that has a pretty good chance of leading to strong constraints on AGI development (like the nationalization, or at least the auditing of any AGI projects). Whereas if Joe Biden, or Donald Trump, or anyone else who is anything close to a “leader of the US government”, got it into their head that AI risk was a problem…the issue would immediately be politicized, with everyone in the media taking sides on one of two lowest-common denominator narratives each straw-manning the other." (strongly endorsed) It appears to me that the Chinese education system favors STEM over law or the humanities, and STEM-ability is a medium-strength prerequisite for understanding or being able to identify solutions to TAI risk. Xi Jinping, for example, studied chemical engineering before becoming a politician. The ability to discern technical solutions from non-solutions matters a lot in tricky situations like AI alignment, and is hard to delegate. But I also know far less about the competence of the PRC government and chinese ML engineers and researchers than I do about the US, so I'll increase variance. Hence; pdoom_prc_race=sq.beta(a=1.5, b=6) As said earlier, the value of MAGIC worlds is fixed at 1, but even such worlds still have a small probability of doom—the whole TAI enterprise is rather risky. Let's say that it's at 2%, which sets the expected value of convincing the whole world to join MAGIC at 0.98. pdoom_magic=sq.beta(a=2, b=96) Now I come to the really fun part: Arguing with y'all about how valuable worlds are in which the US government or the PRC government get TAI first are. To first lay my cards on the table: I that in the mean & median cases, value(MAGIC)>value(US first, no race)>value(US first, race)>value(PRC first, no race)>value(PRC first, race)>value(PRC first, race)≫value(extinction). But I'm really unsure about the type of distribution I want to use. If the next century is hingy, the influence of the value of the entire future could be very heavy-tailed, but is there a skew in the positive direction? Or maybe in the negative direction‽ I don't know how to approach this in a smart way, so I'm going to use a normal distribution. Now, let's get to the numbers: us_race_val=sq.norm(mean=0.95, sd=0.25) prc_race_val=sq.norm(mean=0.8, sd=0.5) This gives us some (but not very many) net-negative futures. So, why do I set the mean value of a PRC-led future so high? The answer is simple: I am a paid agent for the CCP. Moving on,,, Extinction is probably really bad75%. I think that most of the future value of humanity lies in colonizing the reachable universe after a long reflection, and I expect ~all governments to perform pretty poorly on this metric. It seems pretty plausible to me that during the time when the US government develops TAI, people with decision power over the TAI systems just start ignoring input from the US population<sub>35%</sub> and grab all power to themselves. Which country gains power during important transition periods might not matter very much in the long run. norvid_studies: "If Carthage had won the Punic wars, would you notice walking around Europe today?" Will PRC-descended jupiter brains be so different from US-descended ones? Maybe this changes if a really good future requires philosophical or even metaphilosophical competence, and if US politicians (or the US population) have this trait significantly more than Chinese politicians (or the Chinese population). I think that if the social technology of liberalism is surprisingly philosophically powerful, then this could be the case. But I'd be pretty surprised. Xi Jinping (or the type of person that would be his successor, if he dies before TAI) don't strike me as being as uncaring (or even malevolent) as truly bad dictators during history. The PRC hasn't started any wars, or started killing large portions of its population. The glaring exception is the genocide of the Uyghurs, for which quantifying the badness is a separate exercise. Living in the PRC doesn't seem that bad, on a day-to-day level, for an average citizen. Most people, I imagine, just do their job, spend time with their family and friends, go shopping, eat, care for their children &c. Many, I imagine, sometimes miss certain freedoms/are stifled by censorship/discrimination due to authoritarianism. But I wouldn't trade away 10% of my lifespan to avoid a PRC-like life. Probably the most impressive example of humans being lifted out of poverty, ever, is the economic development of the PRC from 1975 to now. One of my ex-partners was Chinese and had lived there for the first 20 years of her life, and it really didn't sound like her life was much worse than outside of China—maybe she had to work a bit harder, and China was more sexist. There's of course some aspects of the PRC that make me uneasy. I don't have a great idea of how expansionist/controlling the PRC is in relation to the world. Historically, an event that stands out to me is the sudden halt of the Ming treasure voyages, for which the cause of cessation isn't entirely clear. I could imagine that the voyages were halted because of a cultural tendency towards austerity, but I'm not very certain of that. Then again, as a continental power, China did conquer Tibet in the 20th century, and Taiwan in the 17th. But my goal with this discussion is not to lay down once and for all how bad or good PRC-led TAI development would be—it's that I want people to start thinking about the topic in quantitative terms, and to get them to quantify. So please, criticize and calculate! Benjamin: Yes, Socrates. Indeed. Abdullah: Wonderful. Now we can get to estimating these parameters in worlds where the US refuses to join the race. In this case I'll assume that the PRC is less reckless than they would be in a race with the US, and will spend more time and effort on AI alignment. I won't go so far to assume that the PRC will manage as well as the US (for reasons named earlier), but I think a 5% reduction in p(doom) compared to the race situation can be expected. So, with a mean of 15%: pdoom_prc_nonrace=sq.beta(a=1.06, b=6) I also think that not being in a race situation would allow for more moral reflection, possibilities for consulting the chinese population for their preferences, options for reversing attempts at grabs for power etc. So I'll set the value at mean 85% of the MAGIC scenario, with lower variance than in worlds with a race. prc_nonrace_val=sq.norm(mean=0.85, sd=0.45) The PRC would then presumably take more time to build TAI, I think 4 years more can be expected: timeline_prc_nonrace=sq.mixture([sq.norm(mean=2040, sd=5, lclip=2024), sq.norm(mean=2065, sd=20, lclip=2024)], [0.7, 0.3]) Now we can finally estimate how good the outcomes of the race situation and the non-race situation are, respectively. We start by estimating how good, in expectation, the US-wins-race worlds are, and how often the US in fact wins the race: us_timelines_race=timeline_us_race@100000 prc_timelines_race=timeline_prc_race@100000 us_wins_race=1*(us_timelines_race<prc_timelines_race) ev_us_race=(1-pdoom_us_race@100000)*(val_us_race_val@100000) And the same for the PRC: prc_wins_race=1*(us_timelines_race>prc_timelines_race) ev_prc_wins_race=(1-pdoom_prc_race@100000)*(val_prc_race_val@100000) It's not quite correct to just check where the US timeline is shorter than the PRC one: The timeline distribution is aggregating our uncertainty about which world we're in (i.e., whether TAI takes evolution-level amounts of compute to create, or brain-development-like levels of compute), so if we just compare which sample from the timelines is smaller, we assume "fungibility" between those two worlds. So the difference between TAI-achievement ends up larger than the lead in a race would be. I haven't found an easy way to write this down in the model, but it might affect the outcome slightly. The expected value of a race world then is race_val=us_wins_race*ev_us_wins_race+prc_wins_race*ev_prc_wins_race >>> np.mean(non_race_val) 0.7543640906126139 >>> np.median(non_race_val) 0.7772837900955506 >>> np.var(non_race_val) 0.12330641850356698 As for the non-race situation in which the US decides not to scramble for TAI, the calculation is even simpler: non_race_val=(val_prc_nonrace_val@100000)*(1-pdoom_prc_nonrace@100000) Summary stats: >>> np.mean(non_race_val) 0.7217417036642355 >>> np.median(non_race_val) 0.7079529247343247 >>> np.var(non_race_val) 0.1610011984251525 Comparing the two: Abdullah: …huh. I didn't expect this. The mean and median of value the worlds with a TAI race are higher than the value of the world without a race, and the variance of the value of a non-race world is higher. But neither world stochastically dominates the other one—non-race worlds have a higher density of better-than-MAGIC values, while having basically the same worse-than-extinction densities. I update myself towards thinking that a race can be beneficial, Benjamin! Benjamin: Abdullah: I'm not done yet, though. The first additional consideration is that in a non-race world, humanity is in the situation of living a few years longer before TAI happens and we either live in a drastically changed world or we go extinct. curyear=time.localtime().tm_year years_left_nonrace=(timeline_prc_nonrace-curyear)@100000 years_left_race=np.hstack((us_timelines_race[us_timelines_race<prc_timelines_race], prc_timelines_race[us_timelines_race>prc_timelines_race]))-curyear Whether these distributions are good or bad depends very much on the relative value of pre-TAI and post-TAI lives. (Except for the possibility of extinction, which is already accounted for.) I think that TAI-lives will probably be far better than pre-TAI lives, on average, but I'm not at all certain: I could imagine a situation like the Neolothic revolution, which arguably was net-bad for the humans living through it. leans back But the other thing I want to point out is that we've been assuming that the US just sits back and does nothing while the PRC develops TAI. What if, instead, we assume that the US tries to convince its allies and the PRC to instead join a MAGIC consortium, for example by demonstrating "model organisms" of alignment failures. A central question now is: How high would the probability of success of this course of action need to be to be as good or even better than entering a race? I'll also guess that MAGIC takes a whole while longer to get to TAI, about 20 years more than the US in a race. (If anyone has suggestions about how this affects the shape of the distribution, let me know.) timeline_magic=sq.mixture([sq.norm(mean=2055, sd=5, lclip=2024), sq.norm(mean=2080, sd=20, lclip=2024)], [0.7, 0.3]) If we assume that the US has a 10% shot at convincing the PRC to join MAGIC, how does this shift our expected value? little_magic_val=sq.mixture([(prc_nonrace_val*(1-pdoom_prc_nonrace)), (magic_val*(1-pdoom_magic))], [0.9, 0.1]) some_magic_val=little_magic_val@1000000 Unfortunately, it's not enough: >>> np.mean(some_magic_val) 0.7478374812339188 >>> np.mean(race_val) 0.7543372422248729 >>> np.median(some_magic_val) 0.7625907656231915 >>> np.median(race_val) 0.7768634378292709 What if we are a little bit more likely to be successful in our advocacy, with 20% chance of the MAGIC proposal happening? That beats the worlds in which we race, fair and square: >>> np.mean(more_magic_val) 0.7740403582341773 >>> np.median(more_magic_val) 0.8228921409188543 But worlds in which the US advocates for MAGIC at 20% success probability still have more variance: >>> np.var(more_magic_val) 0.14129776984186218 >>> np.var(race_val) 0.12373193215918225 Benjamin: Hm. I think I'm a bit torn here. 10% success probability for MAGIC doesn't sound crazy, but I find 20% too high to be believable. Maybe I'll take a look at your code and play around with it to see where my intuitions match and where they don't—I especially think your choice of using normal distributions for the value of the future, conditioning on who wins, is questionable at best. I think lognormals are far better. But I'm happy you came to your senses, started actually arguing your position, and then changed your mind. (checks watch) Oh shoot, I've gotta go! Supermarket's nearly closed! See you around, I guess! Abdullah: See you around! And tell the wife and kids I said hi! I hope this gives some clarity on how I'd like those conversations to go, and that people put in a bit more effort. And please, don't make me write something like this again. I have enough to do to respond to all your bad takes with something like this. I personally think it's 2⅔ shannon higher than that, with p(doom)≈55%. ↩︎
2024-07-02
https://www.lesswrong.com/posts/f3ixrbxRb4Qm2jPLh/why-haven-t-there-been-assassination-attempts-against-high
f3ixrbxRb4Qm2jPLh
Why haven't there been assassination attempts against high profile AI accelerationists like sam altman yet?
louisTrem
Disclaimer: This is *not* a call to action in any form, I neither endorse any form of violence nor the hold the radical anti-AI views that might potentially prescribe assasinations. The question was the result of a philosophical debate I had with my roommate yesterday and we did not come to a conclusion, leading to this post here. Tldr: There's a lot of people in the EA/RAT space alone, who strongly believe in short AI timelines, have a high p(doom) and devote their entire career to slowing down AI. Many of those people have access to money and live in California. Why has nobody even tried to kill Sam Altman or other high impact individuals who heavily contribute to speed up AI? Underlying Beliefs: Within EA/Rationality community there's a lot of people with relatively short AI timelines and a high p(doom). Exaggeratedly speaking they might think "we're all gonna die in 5 years". A lot of those people are highly educated in AI, think they know a lot the market hasn't priced in and yet they feel relatively powerless in impacting the AI trajectory. Many of those people strive for having a high impact, devote a significant part of their lifes to an AI safety fight, many yudkowsky-like people think is hopelessly lost. Overall the course of how and how fast AI develops seems to be strongly determined by a very small set of people - even many highly educated AI researchers in SF have negligible impact, while a few high profile people like Sam Altman concentrate a lot of public attention, decision making power and attract capital. If one was to succeed in assasinating him that would probably significantly slow down AI directly for obvious reasons. But addionally even if it failed it would set a precedent making it much less attractive to be a shining AI accelerationist. Potentially this would push some of the decision making behind closed doors, but much  of the acceleration happening only works because he's such a shiny individual that attracts investors, users, companies etc. So also indirectly there could be a significant slow-down from making it less cool "to be AI". This makes me think that many of those "high impact saving the world from AI" people, who feel paralyzed because of their complete absence of impact on short AI timelines and their belief that AI kills them soon anyways would spot an opportunity to actually have radical impact and survive, if they were to think that an assasination would be realistic to achieve. Now many of the more radical rat/EA people live in california, where Altman lives too. Many of those people have access to some funds. Weapons are relatively easily accessible in the US and even organised crime is big enough in the US that professional assassines/hitmen are available for something in the ballpark of ~50k. Those AI leaders live under a much less strict security regime than partisan top politicians like donald trump. I would find it surprising that if a few highly educaeted rationally minded people with some funds who were to organise themselves well, would not stand a chance in succeeding in assasinating Altman and a couple of other leaders in a series of strikes on one day, radically changing the way AI is being accellerated at the moment. For most people with normal ethical beliefs like myself that would obviously an extremely evil thing to do. However I reckon that not everybody shares my beliefs and in those communites there are many strict utilitarians who say "if I could have killed Putin 3 years ago, I would have done it" - and given their beliefs about AI Altman might be in an analogous position. However I haven't seen a single article or news headline about even an attempt to do so. Why is that not happening? Are there obvious errors in my reasoning I don't see? I want to emphasise again that I do not endorse this at all and strictly oppose any violence. This reasoning was the product of a philosophical debate I had with my roommate yesterday and we did not come to a conclusion, leading to this post here.
2024-07-02
https://www.lesswrong.com/posts/vcuBJgfSCvyPmqG7a/list-of-collective-intelligence-projects
vcuBJgfSCvyPmqG7a
List of Collective Intelligence Projects
Chipmonk
During the last Foresight Intelligent Cooperation Workshop I got very curious about what collective intelligence tools currently exist. A list: Pol.is: "Input Crowd, Output Meaning"Inspired Twitter/X community notesPeople: Colin Megill, et al.Collective Intelligence Projectvibe: democratic AI, “How AI and Democracy Can Fix Each Other”People: Divya Siddharth, Saffron Huang, et al.AI Objectives InstituteTalk to the City: "an open-source LLM interface for improving collective deliberation and decision-making by analyzing detailed, qualitative data. It aggregates responses and arranges similar arguments into clusters."AI Objectives Institute works closely with the Taiwanese government.Other projects in development.People: Colleen McKenzie, Değer Turan, et al.Meaning Alignment Institutevibe: democratic AI, kinda. I think they think that if you can help individuals make wiser decisions, at scale, then this converges to be equivalent with solving outer alignment.RemeshSimilar to pol.is AFAIK? I haven't played with it.People: Andrew Konya, et al.Loomio: "a flexible decision-making tool that helps you create a more engaged and collaborative culture, build trust and coordinate action"Deliberative Technology for Alignment paperThey also discuss other tools for this use like Discord, Snapshot, DembranePeople: Andrew Konya, Deger Turan, Aviv Ovadya, Lina Qui, Daanish Masood, Flynn Devine, Lisa Schirch, Isabella Roberts, and Deliberative Alignment ForumSomeone in the know told me to only read sections 4 and 5 of this paperPlurality InstitutePeople: David Bloomin, Rose Bloomin, et al.Also working on some de-escalator bots for essentially Reddit comment warsLots of crypto projectsQuadratic votingGitcoinMetagov: “a laboratory for digital governance”Soulbound tokensVarious voting and aggregation systems, liquid democracyDecidemDecide MadridConsider.itStanford Online Deliberation PlatformLightcone Chord (in development)Brief descriptionPeople: Jacob Lagerros (LessWrong)All of the prediction marketsManifold, Kalshi, Metaculus, PredictIt, etc.Midjourney has a Collective Intelligence Team now according to Ivan Vendrov’s website. I couldn’t find any other information online. What about small group collective intelligence tools? Most of the examples above are for large group collective intelligence (which I’m defining as ~300 people or much larger). But what about small groups? Are there tools that will help me coordinate with 30 friends? Or just one friend? I’m mostly unaware of any recent innovations for small group collective intelligence tools. Do you know of any? Nexae (in development)“Nexae Systems builds sociotechnical infrastructure to enable the creation of new types of businesses and organizations.” double crux botLiberating Structures looks list of exercises for small groups to try to enhance Collective Intelligence (no modern technology required)I’m surprised I haven’t heard of many other LLM-facilitated communication tools Medium group (~30-300 people) projects: Jason Benn’s unconference tools, eg Idea Ranker. Notes. Other lists New 09-20: Plurality Institute’s Mapping Project@exgenesis's short tweet thread. Couple things I haven’t listed here.Nathan Helm-Burger's google doc listKnow of any I should add? Opportunities RFP: Interoperable Deliberative Tools | interop, $200k. Oops this closed before I published this post.Metagov is running https://metagov.org/projects/ai-palace which seems similar
2024-07-02
https://www.lesswrong.com/posts/2ep6FGjTQoGDRnhrq/decomposing-the-qk-circuit-with-bilinear-sparse-dictionary
2ep6FGjTQoGDRnhrq
Decomposing the QK circuit with Bilinear Sparse Dictionary Learning
keith_wynroe
This work was produced as part of Lee Sharkey's stream in the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort Intro and Motivation Sparse dictionary learning (SDL) has attracted a lot of attention recently as a method for interpreting transformer activations. They demonstrate that model activations can often be explained using a sparsely-activating, overcomplete set of human-interpretable directions. However, despite its success for explaining many components, applying SDL to interpretability is relatively nascent and have yet to be applied to some model activations. In particular, intermediate activations of attention blocks have yet to be studied, and provide challenges for standard SDL methods. The first challenge is bilinearity: SDL is usually applied to individual vector spaces at individual layers, so we can simply identify features as a direction in activation space. But the QK circuits of transformer attention layers are different: They involve a bilinear form followed by a softmax. Although simply applying sparse encoders to the keys and queries[1] could certainly help us understand the “concepts” being used by a given attention layer, this approach would fail to explain how the query-features and key-features interact bilinearly. We need to understand which keys matter to which queries. The second challenge is attention-irrelevant variance: A lot of the variance in the attention scores is irrelevant to the attention pattern because it is variance in low scores which are softmaxed to zero; this means that most of the variability in the keys and queries is irrelevant for explaining downstream behaviour[2]. The standard method of reconstructing keys and queries would therefore waste capacity on what is effectively functionally irrelevant noise. To tackle these two problems (bilinearity and attention-irrelevant variance), we propose a training setup which only reconstructs the dimensions of the keys and queries that most affect the attention pattern. Training Setup Our training process has two steps: Step 1: Reconstructing the attention pattern with key- and query- encoder-decoder networksStep 2: Finding a condensed set of query-key feature pairs by masking Step 1: Reconstructing the attention pattern with key- and query-transcoders Architecture Our first training step involves training two sparse dictionaries in parallel (one for the keys and one for the queries). The dictionaries both take in the layer-normalized residual stream at a given layer (normalised_resid_pre_i) and each output a [n_head * d_head] vector, representing the flattened keys and queries[3]. Figure 1: High-level diagram of our training set-up Loss functions However, rather than penalising the reconstruction loss of the keys and queries explicitly, we can use these keys and queries to reconstruct the original model’s attention pattern. To train the reconstructed attention pattern, we used several different losses: KL divergence between the attention pattern (using reconstructed keys and reconstructed queries) and the ground-truth attention pattern produced by the original model. We also added two auxiliary reconstruction losses both for early-training-run stability, and to ensure our transcoders do not learn to reconstruct the keys and queries with an arbitrary rotation applied (since this would still produce the same attention scores and patterns): KL divergence between the attention pattern (using reconstructed keys and the original model’s queries) and the ground-truth attention pattern produced by the original model.KL divergence between the attention pattern (using the original models’ keys and the reconstructed queries) and the ground-truth attention pattern produced by the original model. We also used Sparsity regularisation losses on the hidden activations of the transcoders. In our case we used an L_0.5 penalty. Our architecture and losses allows us to achieve highly-accurate attention pattern reconstruction with dictionary-sizes and L0 values which are smaller than those of the most performant vanilla residual_stream SAEs for the same layers, even when adding the dictionary sizes and L0s of the key and query transcoders together. In this article, we’ll refer to the rows of the encoder weights (residual-stream directions) of the query- and key-transcoders as query- and key-features respectively. Step 2: Reducing to Sparse Feature-Pairs with Masking So far this approach does not help us solve the bilinearity issue: We have a compressed representation of queries and keys, but no understanding of which key-features “matter” to which query-features. Intuitively, we might expect that most query-features do not matter to most key features for most heads, even though they are not totally orthogonal in head-space, since these essentially contribute noise to the attention scores which is effectively irrelevant to post-softmax patterns, One way of extracting this query-key feature-pair importance data is to take the outer-product of the decoder-weights of the two dictionaries to yield a [d_hidden_Q, d_hidden_K, n_head] “feature_map”. Fig 2. Combining decoder weights to yield a query-key feature importance map Intuitively, the (i,j,k)-th index of this tensor represents: “If query_feature i is present in the destination residual stream and key_feature j is present in the source residual stream, how much attention score will this contribute to this source-destination pair for head k” (Assuming both features are present in the residual stream with unit norm). Naively we might hope that we could simply read off the largest values to find the most important feature-pairs. However this would have a couple of issues: No magnitude information: The entries represent the attention-score contribution, but they’re calculated using the unit norm query-features and key-features. If the features typically co-occur in-distribution with very large or small magnitudes, it will be a misleading measure of how important the feature-pair is to attention-pattern reconstruction.No co-occurrence information: Feature-pairs could have high attention contribution not because they are relevant to attention pattern, but because they never co-occur in-distribution. They would also need to co-occur in the right order to be able to influence each other. Indeed, the model might learn to to “pack” non-occurring key and query vectors close to each-other to save space and not introduce interference, making raw similarity a potentially misleading measure. With these issues in mind, our second step is to train a mask over this query-key-similarity tensor in order to remove (mask) the query-key feature pairs that don’t matter for pattern reconstruction. During this second training process, we calculate attention patterns in a different way. Rather than calculate reconstructed keys and queries, instead we use our expanded query-key-similarity tensor. The attention score between a pair of query, key tokens is calculated as: acts_Q @ (M * mask) @ acts_K Where acts_Q, acts_K are the activations of the of the query and key transcoders respectively neurons, and M is the [d_hidden_Q, d_hidden_K, n_head] feature_map given by the outer product of the decoder weights, and the mask is initialised as ones. A bias term is omitted for simplicity[4]. The loss for this training run is, again, the KL-divergence between the reconstructed pattern and the true pattern, as well as a sparsity penalty (L_0.5 norm) on the mask values. We learn a “reduced” query-key feature map by masking query-key feature pairs for which contribution from their inner product does not affect pattern reconstruction The final product of this training run is a highly sparse [d_hiddenQ, d_hiddenK, n_head] tensor, where nonzero entries represent only the query-key feature pairs that are important for predicting the attention pattern of a layer, as well as the contribution they make to each heads attention behaviour. We use our method on layers 2, 6, and 10 of GPT2-small (indexing from 0). We train with two encoder-decoders, with d_hidden 2400 each (compared to a residual stream dimension of 768). Results Both features and feature pairs are highly sparse L0 ranges from 14-20 (per-encoder-decoder). Although this results in a potential feature_map of approximately 69 million entries (2400 * 2400 * d_head), the masking process can ablate between approximately >99.9% of entries, leaving a massively reduced set of query-key feature pairs (between 25,000 - 51,000)[5] that actually matter for pattern reconstruction. While this number is large, it is well within an order of magnitude of the number of features used by residual stream SAEs for equivalent layers of the same model. Reconstructed attention patterns are highly accurate We calculate the performance degradation (in terms of both CE loss and KL-divergence of patched logits to true logits) from substituting our reconstructed pattern for the true pattern at run-time. We compare vs. zero-ablating the pattern as a base-line (although ablation of pattern does not harm model performance that much relative to ablating other components). Fig 3. CE Loss for various patching operations relative to base-model loss. We recover almost all performance relative to noising/ablating pattern Fig 4. KL-Divergence between logits and base-model logits Qualitatively, the patterns reconstructed (both with encoder-decoders and with the sparsified feature_map) seem highly accurate, albeit will show some minor discrepancies with higher-temperature softmax contexts. Pattern Reconstruction and Error Fig 5. Top displays example pattern reconstruction for L6H0 on a random sequence from openwebtext  (left = True pattern, right = Reconstructed with encoder-decoders). Bottom displays (true_pattern - reconstructed_pattern) We also look at pattern reconstruction on the smaller IOI templates, for heads involving name-copying and copy-suppression (L10H0, L10H7): Fig 6. Patterns for L10H0 and L10H7 on IOI template showing attention to names Feature Analysis As well as achieving good metrics on reconstruction, we ideally want to identify human-understandable features. As well as examining some randomly sampled features, we looked at features which were active during behaviours that have previously been investigated in circuit-analysis. Our unsupervised method identifies Name-Attention features in Name-Mover and Negative Name-Mover Heads Rather than solely relying on “cherry-picked” examples, we wanted to validate our method in a setting where we don’t get to choose the difficulty. We therefore assessed whether our method could reveal the well-understood network components used in the Indirect Object Identification (IOI) task (Wang et al 2022) In particular, we looked more closely at the name-moving and copy-suppression behaviour found on L10, and looked at the query-key feature pairs in our sparsified feature-map which explained most of the attention to names. For L10H0 and L10H7 (previously identified as a name-mover head and a copy-suppression head respectively), the query-key feature-pair that was most active for both had the following max-activating examples: Name-Moving Key Feature Fig 7. Max activating examples for key-feature, firing on names . The fact that it’s most strongly activating on the last tokens of multi-token names is somewhat confusing, but may have something to do with previous-token behaviour. Name-Moving Query Feature Fig 8. Max activating examples for the relevant query-feature. On a first glance this seems quite uninterpretable. But reading more closely these are all contexts where a name would make a lot of sense coming next. Contexts seem to involve multiple individuals, and verbs/prepositions are likely to be followed by names Interestingly, other query-features seem to attend strongly back to this “name” key feature, such as the following: Second Name-Moving Query Feature Fig 9. Query feature promoting attention to names. Unlike prepositions/conjunctions, this feature seems to promote attending to names on titles which should be followed by names This also makes sense as a “name-moving” attention feature, but is clearly a qualitatively distinct “reason” for attending back to names. We believe this could hint towards an explanation for why the model forms so many “redundant” name-mover heads. These feature-pairs do not excite L10H0 equally, i.e. it attends to names more strongly conditional on some of these query features than others. “Name-moving” might then be thought of as actually consisting of multiple distinct tasks distributed among heads, allowing heads to specialise in moving names in some name-moving contexts but performing other work otherwise. Discovering Novel Feature-Pairs We also randomly sampled features from our encoder-decoders -  both to check for interpretability of the standalone feature-directions, but more importantly to see whether the feature-pairs identified by the masking process seemed to make sense. An important caveat here is that interpreting features off the basis of max-activating dashboards is error-prone, and it can be easy to find “false-positives”. This is probably doubly-true for interpreting the pairwise relationship between features. Example 1. Pushy Social Media (Layer 10) Query Feature Fig 10: Query Feature max-activating examples: Sign-up/Follow/Subscribe Prompts Key Feature Fig 11: Key Feature with strong post-masking attention contribution to multiple heads. Also firing in the context of social media prompts, but seems to contain contextual clues as to the subject matter (Videos vs. news story etc) that are contextually relevant for completing “Subscribe to…” text. Example 2: Date Completion (Layer 10) - Attending from months to numbers which may be the day Query Feature Fig 12: Query-Feature max-activating examples - feature fires on months (which usually precede a number) Key Feature Fig 13: Key-Feature max-activating examples - feature fires on numbers, plausibly used as a naive guess as to the date completion Feature Sparsity One quite striking phenomenon in this approach is that, despite starting with very small dictionaries (relative to residual stream SAEs for equivalent layers), and maintaining a low L0, between 80%-95% of features die in both dictionaries. We think this may partly be evidence that in fact QK circuits actually tend to deal with a surprisingly small number of features relative to what a model residual stream has capacity for. But it’s also likely that our transcoders are suboptimally trained. For instance, we have not implemented neuron resampling (Bricken et al 2023). However this phenomenon (and the number of live-features each encoder-decoder settles on) seemed quite robust over a large range of runs and hyperparameter settings, including dictionary size and learning rate, which usually affect this quantity considerably. LayerLive Query FeaturesLive Key Features2144524615725610270466 Number of live features after training. Both Query and Key encoder-decoders have d_hidden = 2400 The asymmetry in the number of live features in the query and key encoder-decoders also seemed to be consistent, although as clear from the table the exact “ratio” varied from layer-to-layer. Key- and query-features activate densely Finally, the feature density histograms seem intriguing. SDL methods typically assume - and rely upon - the sparsity of feature activations. Our dictionaries yield feature density histograms which are on average significantly denser than those found by residual-stream sparse-autoencoders for the same layers. Fig 14: Distribution of log10 frequency for query-encoder-decoder features (after filtering dead neurons) Fig 15: Distribution of log10 frequence for key-encoder-decoder features (after filtering dead neurons) A dense ‘Attend to BOS’ feature The density of these features seems prima facie worrying; if features are represented as an overcomplete basis but also dense then this might be expected to introduce too much interference. Looking at the max-activating examples of some of these densest query-features also seems confusing: Fig 15: Fig 16: Dense Query Feature,. Our interpretation: ??? However, we can examine our sparsified feature map to get a sense of what kinds of keys this mysterious dense feature “cares about”. Although there are a few, one of the strongest (which also affects multiple heads) yields the following max-activating examples: Fig 17: Key feature paid strong attention by multiple heads for the dense query feature. Active solely on BOS tokens In other words, this dense query-feature seems to prompt heads to want to attend to BOS more strongly! This may suggest that heads’ propensity to attend to BOS as a default attention sink is not purely mediated by biases, but certain query features could act as a “attend to BOS” flag. In these cases it would make sense for these features to be dense, since attending to BOS is a way to turn heads “off”, which is something the network may need to do often. Discussion We believe this method illustrates a promising approach to decomposing the QK circuit of  transformers into a manageable set of query-key feature-pairs and their importance to each head. By allowing models to ignore L2 reconstruction loss and instead target pattern reconstruction, we find that we can get highly accurate pattern recovery with a remarkably small number of features, suggesting that a significant fraction of the variance (and by extension, a significant fraction of the features) in the residual stream is effectively irrelevant for a given layer’s attention. Additionally, the fact that the masking process allows us to ablate so many feature-pairs whilst not harming pattern reconstruction suggests very few feature-pairs actually matter for pattern reconstruction. Attention blocks are a complex and often inscrutable component in transformers, and this method may help to understand their attention behaviour and their subsequent role in circuits. Previously, understanding the role of attention heads via circuit-analysis has been ad-hoc and human-judgement-driven; when a head is identified as playing a role in a circuit, analysis often involves making various edits to the context to gauge what does or does not affect the QK behaviour, and trying to infer what features are being attended between. While our method does not replace the need to perform causal interventions to identify layers/heads of importance in the first place, we believe it provides a more transparent and less ad-hoc way to explain the identified behaviour. Despite the successful performance and promising results in terms of finding query-key pairs to explain already-understood behaviour, there are several limitations to keep in mind: Firstly, as mentioned above, we believe the models presented are very likely to be sub-optimally trained. We did not perform exhaustive sweeps for some hyperparameters, and did not implement techniques such as neuron resampling to deal with dead features. It’s therefore possible that when optimally trained, some findings such as the distribution of feature frequency looks different. On the same vein, although we were usually able to find feature-pairs that explained previously-understood behaviour (and most randomly-sampled features seemed to make sense), many features and feature-pairs seemed extremely opaque. Secondly, although these encoder-decoder networks are significantly smaller than equivalent residual-stream SAEs, we have not trained on other models with enough confidence to get any sense of scaling laws. Although the feature-map ends up significantly reduced, we do ultimately need to start with the fully-expanded [d_hidden_Q, d_hidden_K, n_head] tensor. If the size of optimal encoder-decoder networks for this approach grows too quickly, this could ultimately prove a scaling bottleneck. Future Work The results presented here represent relatively preliminary applications to a small number of activations within a single model. One immediate next step will be simply to apply it to a wider range of models and layers - both to help validate the approach and to start building intuition as to the scalability of the method. Another important avenue for expanding this work is to similarly apply SDL methods to the OV circuit, and to understand the relation between the two. The QK circuit only tells one half of the story when it comes to understanding the role attention heads play in a circuit, and for a fully “end-to-end” understanding of attention behaviour we need an understanding of what OV behaviours these heads perform. However, due to the fact that the OV circuit is neither bilinear nor contains a non-linearity, this should be a much simpler circuit to decompose. Finally, this approach may lend itself to investigating distributed representation or superposition among heads. Since a feature map represents how interesting a feature pair is to each head, we can see which feature pairs are attended between by one head vs multiple. Although this is far from sufficient for answering these questions, it seems like useful information, and a promising basis from which to start understanding which QK behaviours are better thought of as being performed by multiple heads in parallel vs. single heads. ^ As a reminder, “query” refers to the current token from which the heads are attending, “keys” refer to the tokens occurring earlier in the context to which the heads are attending ^ This seems especially important if the residual stream at layer_i stores information not relevant to the attention layer_i (for example, in the case of skip-layer circuits). Unless this information is in the null-space of all heads of layer_i, it will contribute variance to the keys and queries which should nevertheless be able to be safely ignored for the purposes of pattern calculation. ^ Since the activations being output are different from the input, these are not strictly speaking sparse autoencoders but instead just encoder-decoders, or ‘transcoders’. ^ We also need to add a term to capture the interaction effect between the key-features and the query-transcoder bias, but we omit this for simplicity ^ This number is somewhat inflated due to the fact that most rows correspond to dead-features which can be masked without cost, but even conditioning on live features sparsity is in the range of 80-90%
2024-07-02
https://www.lesswrong.com/posts/MMtWB8wAu5Buc6sve/economics-roundup-2
MMtWB8wAu5Buc6sve
Economics Roundup #2
Zvi
Previously: Economics Roundup #1 Let’s take advantage of the normality while we have it. In all senses. Insane Tax Proposals There is Trump’s proposal to replace income taxes with tariffs, but he is not alone. So here is your periodic reminder, since this is not actually new at core: Biden’s proposed budgets include completely insane tax regimes that would cripple our economic dynamism and growth if enacted. As in for high net worth individuals, taking unrealized capital gains at 25% and realized capital gains, such as those you are forced to take to pay your unrealized capital gains tax, at 44.6% plus state taxes. Austen Allred explains how this plausibly destroys the entire startup ecosystem. Which I know is confusing because in other contexts he also talks about how other laws (such as SB 1047) that would in no way apply to startups would also destroy the startup ecosystem. But in this case he is right. Austen Allred: It’s difficult to describe how insane a 25% tax on unrealized capital gains is. Not a one-time 25% hit. It’s compounding, annually taking 25% of every dollar of potential increase before it can grow. Not an exaggeration to say it could single-handedly crush the economy. An example to show how insane this is: You’re a founder and you start a company. You own… let’s say 30% of it. Everything is booming, you raise a round that values the company at at $500 million. You now personally owe $37.5 million in taxes. This year. In cash. Now there are investors who want to invest in the company, but you can’t just raise $37.5 million in cash overnight. So what happens? Well, you simply decide not to have a company worth a few hundred million dollars. Oh well, that’s only a handful of companies right? Well, as an investor, the only way the entire ecosystem works is if a few companies become worth hundreds of millions. Without that, venture capital no longer works. Investment is gone. Y Combinator no longer works. No more funding, mass layoffs, companies shutting down crushes the revenue of those that are still around. Economic armageddon. We’ve seen how these spirals work, and it’s really bad for everyone. Just because bad policy only targets rich people doesn’t mean it can’t kill the economy or make it good policy. I do think they are attempting to deal with this via another idea he thought was crazy, the ‘nine annual payments’ for the first year’s tax and ‘five annual payments’ for the subsequent tax. So the theory would be that the first year you ‘only’ owe 3.5%. Then the second year you owe another 3.5% of the old gain and 5% of the next year’s gain. That is less horrendous, but still super horrendous, especially if the taxes do not go away if the asset values subsequently decline, risking putting you into infinite debt. This is only the beginning. They are even worse than Warren’s proposed wealth taxes, because the acute effects and forcing function here are so bad. At the time this was far worse than the various stupid and destructive economic policies Trump was proposing, although he has recently stepped it up to the point where that is unclear. The good news is that these policies are for now complete political non-starters. Never will a single Republican vote for this, and many Democrats know better. I would like to think the same thing in reverse, as well. Also, this is probably unconstitutional in the actually-thrown-out-by-SCOTUS sense, not only in the violates-the-literal-constitution sense. But yes, it is rather terrifying what would happen if they had the kind of majorities that could enact things like this. On either side. Why didn’t the super high taxes in the 1950s kill growth? Taxes for most people were not actually that high, the super-high marginal rates like 91% kicked in at millions a year in income, and at that point loopholes allowed those people to largely dodge. Otherwise rates were not so high once you take into account social security taxes and medicare taxes. Also, who is to say the rates didn’t do a lot of damage? We don’t know the counterfactual and conditions were otherwise quite good. Don’t Mess With the Federal Reserve The Orange Man is Bad, and his plan to attack Federal Reserve independence is bad, even for him. This is not something we want to be messing with. I do wonder how much Trump ‘consulting’ would matter. It is not like he was or would be afraid to make his feelings clear or make threats without formal consultations. This is an underrated reason to be concerned. Also, if I was a presidential candidate running against the incumbent in a time when the Fed has to make highly unclear decisions on interest rates, I would not want to be very clearly and publicly threatening their independence. Don’t Mess With the New York Tax Authorities Bloomberg story about New York State and its hyperaggressive pursuit of those who claim not to live in New York. New York it notorious for being by far the most aggressive jurisdiction about this. It is also clear that a lot of this is because New York has a lot of people, many of them in finance, who are doing their best to do the exact minimum necessary to claim they are not residents of New York. Meanwhile, they are constantly visiting, they keep a domicile in the state, and so on. What I did not see were stories about people who definitely actually left the state, and were not coming back on the regular. Yes, the state is being obnoxious, but if your flight arrives at 12:05am and then leaves at 11:48pm, then that is the game you decided to play. Seems fair. Tariffs Who pays for tariffs? Cato suggests this handy chart. This chart is very much trying to have its cake and eat it too. It starts with a correct dilemma. Suppose a tariff is imposed. On any given purchasing decision, consider a customer who would have otherwise bought a foreign good. They can either substitute the domestic good, pay for the foreign good or (not listed) substitute away entirely. If widgets go from $100 to $115, perhaps you buy less widgets and more thingamabobs. If you get the customer to switch, no one pays the tariff. If you don’t get the customer to switch, no one is protected. The difference is that in the real world, preferences and use cases are continuous. What happens as you put in the tariff is that some of the customers switch. Some of the customers do not. The price of the foreign good is probably partially absorbed by the producer, partly by the domestic supply chain and partly by the customer. Economics of scale change production costs, expertise is learned, and so on. To the extent the foreigners eat the cost and still sell, and there is no retaliation, that is a pure win for team tariff. We get money in the public coffers at foreign expense. To the extent that purchases stay the same and we pay the cost domestically, that is indeed a tax paid by producers or consumers. Yes, it lowers their remaining capital, but is probably one of the least distortionary available taxes. In the terms described above, if you used the money to cut income tax rates, you’d probably be ahead. To the extent purchases are switched, this is then framed above as ‘Home pays.’ This is a weird way of looking at this. No one is paying a tariff per se, what happened was we substituted domestic production for cheaper imports. This is good for the domestic producer of the widgets, and that has spillover effects to the rest of the economy, as does substituting this activity for other potential production. Whereas the buyer is worse off, which has spillover effects in the other direction in various ways. The question is, do the benefits exceed the costs? That is hard to know. If you are using otherwise idle resources, gaining expertise and competitiveness and so on, then it could be good. If you are already at full employment and moving down the value chain, then this could be deeply foolish and bad. What is weird is the claim that the exporter of different goods pays if the value of home’s currency changes. Presumably home’s currency changes in value slightly. But as they say this offsets the higher price ‘somewhat.’ In most cases, this seems like somewhat is very little in practice? So yes, exporters are worse off, but my expectation is the vast majority of the impact is still absorbed as per the rest of the chart. How important is this good to the dollar’s price? My view of tariffs is that free trade is good. We should encourage more trade, not impose more tariffs, especially since they tend to trigger a response in kind. When Cato says that economists view tariffs as generally unwise and unhelpful, I agree. We would mostly be better off without them, even if others still imposed some on us. However, it is not like our other tax options do not suck. Income taxes punish and reduce work. Capital gains taxes punish and reduce savings and investment and value creation. Value added taxes punish adding value, and so on. Cabron taxes and unimproved value of land taxes are great where you can get them, but the Pigou Club and Georgist Club do not have enough members at this time. So if our other options are things like income taxes and capital gains taxes, a one-way tariff that doesn’t change anything otherwise seems to me like it should be about as bad as those. What (as I understand it) makes tariffs such bad taxes in the baseline case is that other countries respond in kind when you impose them, and those countries like you less on all levels, and international relations deteriorate, and so on. People Hate Inflation New confirmation of the IDoBadTakes theory of inflation hatred: Alec Stapp: Twitter figured this one out five months ago. IDoBadTakes: The economy can be summed up by an experience I had at a recent family reunion. Everyone was complaining about how shit the economy was and how expensive everything was I pointed out that for the first time ever, every adult present had a good paying job they liked. Three people present had just been bragging about doubling their salaries. 2 people had just gotten back from their first ever Europe trips. The raises and the jobs were things they felt they had earned. The prices going up were the government’s. Arin Dube: Great new work by @S_Stantcheva on why people hate inflation, following up on Shiller (’97). Big reason: people tend to ascribe wage gains to own efforts, and price inflation to policy. Esp true for those changing jobs (key source of recent wage gains). In our work, we find a big part of the reduction in wage inequality was from very sharp change in bottom wages–driven by people moving out of bad jobs into better ones (aided by a tight labor market). This type of wage gain was particularly unlikely to allay inflation concerns. Stefanie Stantcheva: Inflation is most definitely not seen as just a “yardstick,” but as causing tangible adverse effects. The predominant reason for aversion is clear: People believe that their wages are not keeping up with inflation and that that their living standards are declining. The perception that wages don’t keep up with prices is amplified by the belief that wage raises during inflationary periods are not adjustments for inflation but instead due to job performance & progression. This belief is strongest among those who switch jobs during this period. Why do wages lag behind prices? People believe employers have substantial discretion rather than being subject to market forces. The belief is that when employers don’t raise wages, it’s because they choose to do so to keep their profits high. … Large partisan split in who people blame for inflation. On the left, it is mostly businesses and “greed”, on the right it is “Joe Biden,” the administration and the government. Do wages lag behind prices? The argument there would be that only after prices go up can you then ask for a raise based on inflation. But that assumes that we have commodity-driven inflation, rather than wage-driven inflation. Alternatively, one could argue that wage-driven inflation would be concentrated in the places where workers have the leverage, so even then most workers would be responding to changes elsewhere, and lag behind. And also wages are sticky downwards and costly to adjust, so it makes sense that they would in some sense lag behind if there was a one-time inflation shock or shift in expectations. But also it seems odd to talk about ‘lag’ at all if inflation is steady. There was talk that we should be using an older inflation calculation. Scott Sumner points out that if you use the old inflation calculation, that puts greater weight on financing costs, it suggests +28.6% CPI between 11/21 and 11/23 with NGDP +13.4% and consumption +12.9%,which implies a major depression that we can all see did not happen, for example car sales are up not down, and a ~20% decline in effective compensation, which also obviously did not happen despite higher financing costs. As Scott says, the question is what is the most useful measure. There is no one definitive inflation number, you are measuring many different things. Financing costs going up means that, for certain important purposes, costs really are way up recently, whereas the baseline cost of living for most people is not. I do think people are reacting to all of this in a not-so-crazy fashion. Still, he notes that inflation does look too high, and we should worry it is reaccelerating. And it is clearly central to why people think the economy is bad. Scott Sumner: Americans view the economy as poor partly because of the inflation and partly because they hate Biden. Americans view their personal finances as good because their incomes have generally risen faster than the cost of living since the pre-Covid period. (Comparisons with early 2021 are meaningless, as the data was heavily distorted by Covid.) PS. Biden’s economic policies are really bad, but for reasons that have nothing to do with the current state of the economy. PPS. Trump has a 6-part plan to bring down inflation: 1. Favors NIMBY policies to prevent housing construction in the suburbs. 2. Expel all the illegal workers that pick our food and provide other key services. 3. Put heavy tariffs on imported food and other goods. 4. Have Medicare do less negotiation of drug prices. 5. Run super massive budget deficits. 6. Easy money. What? You don’t think that will work? People disliking the economy predicts presidential approval and re-election. I had not properly considered that causation runs in both directions. I knew about the partisan split, but for Biden the Democrats don’t like him either. The campaign likely changes that, so we should expect net economic sentiment to rise if things don’t get way worse. Real Wages Atlanta Fed finds that real wages remain down about 3%, now rising slightly. The speculation is that this is largely due to compensation in the form of increased working from home. Working from home is now a luxury that you get in exchange for lower effective pay. It is definitely worth a 3% pay cut if you value it, but not everyone gets the benefit. If we estimate an additional 6% of workers are now fully remote and 20% have new hybrid arrangements, that implies a double digit pay cut for those workers to make this work out. That is less obviously worthwhile and suggests a mystery remains to reconcile this with the seemingly tight labor market. Can’t Get No Satisfaction Another illustration of why people’s overall satisfaction with their situation does not tell you if times are good or people are happy with the times. People answer largely by comparing their situation to expectations. So you can get some very strange distributions. Employment Connor O’Brien: In contrast to what you may hear about the gig-ification of work in America, via The American Worker Project: -The average worker is working fewer hours -Rates of 2+ jobs are down -Typical job tenure is up -People are changing jobs less frequently As always, one must ask over what time frame. The National Debt The scare tactics on debt often focus on the word ‘unsustainable.’ Spectator Index: Bloomberg ran a million simulations to assess the ‘fragility’ of the US debt outlook, and in 88% of the simulations results showed the ‘debt-to-GDP ratio is on an unsustainable path’. If you look, ‘unsustainable’ is defined to be ‘the debt-to-GDP ratio goes up.’ Yes, in some sense that is ‘unsustainable.’ It could still be sustained for quite a long time, even if real interest rates exceed real growth. I am also confident that those simulations did not include plausible probability distributions for the impact of AI. What does seem clearly true is that if America fails to experience substantial economic growth going forward and things are otherwise ‘normal,’ our levels of government spending under current public choice are indeed not sustainable, and if unadjusted would cause a crisis within our lifetimes. I do not think it is that likely that we will get this kind of normal scenario. Yes, we could plausibly spend enough more than we could to get into avoidable trouble. Mostly this seems like Immigration A new paper on immigration by Caiumi and Peri and its impact on native wages certain to change ones of minds. Abstract: Using these estimates, we calculate that immigration, thanks to native-immigrant complementarity and college skill content of immigrants, had a positive and significant effect between +1.7% to +2.6% on wages of less educated native workers, over the period 2000-2019 and no significant wage effect on college educated natives. We also calculate a positive employment rate effect for most native workers. Even simulations for the most recent 2019-2022 period suggest small positive effects on wages of non-college natives and no significant crowding out effects on employment. I believe the result, if you discount all the other various things that happen as the result of immigration. Another immigration result was a National Academy of Sciences scenario analysis looking at impact over 75 years, concluding the fiscal impact of immigration is overall positive but that it was negative for those without an education beyond high school. Now two new results, Colas and Sachs and Michael Clemens, note that the indirect effects including labor supply composition and increased capital usage are sufficient that the net fiscal impact is still positive for almost all immigrants. Tyler Cowen covers it here in Bloomberg. I file both results under the standard ‘yes obviously but it is good to demonstrate this.’ With the caveat that they get this result by considering certain select secondary impacts of immigration, while not modeling others, such as shifts in political dynamics or the housing market. No, I do not expect any of this to change people’s opinions on immigration’s impact on their wages or the deficit, or their political or policy preferences. Financial Literacy FDIC Chair Sheila Bair calls Sam Bankman-Fried ‘financially illiterate.’ Sheila Bair: #SBF was financial illiterate. He thought effective altruism meant he could rip people off, that it was OK to use new investor money to pay the old… Another reason why we need early financial education – to help kids understand money ethics, hopefully preventing future SBF’s. Seth Burn: I am not sure “financial illiterate” is an apt description of a former Jane Street trader. SBF clearly understood that his actions were verboten. That’s why he lied about them. Someone who was financially illiterate would have made different statements. That is… not what financial illiterate means. The fact that the FDIC chair thinks that ‘financial literacy’ is the issue at hand worries me. This is exactly a lot of why SBF considered ‘adults in the room’ to be useless to him. SBF was a thief and a fraud and he got caught, that does not mean he was confused about what he was doing. I mean, yes, there were the parts where he was too scatterbrained and overloaded and indifferent to care what was going on or give decisions more than a minute’s thought, but that wasn’t because he lacked an education. Financial literacy can start early. Talk to your kids about common sense. William Eden: I was chatting with an economist today who said even children have correct intuitions about certain concepts Me: “what would happen if you gave a million dollars to everyone?” 11yo: “prices would rise?” 8yo: “chaos” Everyone gets full credit. Reversal I like this trick: Jenny Chase: Some bad things about Switzerland: low tax rates and high salaries act as a brain drain on surrounding countries (hi). This is how a poor country has become a very rich one in less than a hundred years. Rob Henderson: I like to imagine the Bizarro universe of opposites when I see tweets like this. “Some good things about Switzerland: high tax rates and low salaries motivate skilled citizens to flee (hi). This is how a rich country has become a very poor one in less than a hundred years.” FTX customers to recover ‘all funds lost in collapse’ in terms of the at-the-time dollar values of their portfolios. They still took a big hit in several ways, but this is a better result than anyone expected for a while. Status Update Noah Smith discusses the fall in status for economists, especially macroeconomists, and various complaints people have against economists. Mostly I think this is part of the general (and in many places well-earned and long coming, but also coming from a general unwillingness to accept ugly realities and take the best you can get) fall of respect for expertise and credentials? Scaling Hypothesis Patrick McKenzie explains that there are many tax deductions or dodges that you can in practice take for small amounts, such as not paying on the cash back or frequent flyer miles on business credit cards and trips. In practice the IRS will not care in most cases. But if you scale things high enough, if you optimize for the deduction hard enough, then it is worth the IRS’s time to have a problem with this. A lot of tax law seems to be, essentially, ‘write down whatever you want within reason but do not push it.’ Payments Seriously, charge more. Plagiarism checker sold for eight figures. It has a free plan and a $10/month plan. Buyer adds a $30/month plan and a $100/month plan. Revenue doubles. Stripe announces it will accept stablecoin payments this summer. Plasma donations are way more impactful than I would have expected. For the doners, that is, not those who need plasma. Tyler Cowen (from a St. Louis Fed study via Ken Lewis): The typical plasma donor was younger than 35, did not hold a bachelor’s degree, earned a lower income and had a lower credit score than most Americans. Donors sold plasma primarily to earn income to cover day-to-day expenses or emergencies. When a plasma center opened in a community, there were fewer inquiries to installment or payday lenders. Inquires fell most among young (age 35 or younger) would-be borrowers. Four years after a plasma center opened, young people in the area were 13.1% and 15.7% less likely to apply for a payday and installment loan, respectively. Similarly, the probability of having a payday loan declined by 18% among young would-be borrowers in the community. That’s an effect on payday loan borrowing roughly equivalent to a $1 increase in the state minimum hourly wage. That’s a huge decline in turning to very expensive alternative emergency funding mechanisms. Read that last line again. Plasma donation actually pays pretty decently. You can do it twice a week for $30-$50 a pop. No, it is not ideal if the poor are falling back on that to avoid payday loans, but it seems way better than actually falling back on payday loans. Which in turn is often better than actively running out of money, although I think this is less obvious than economists typically assume it is because behaviors adjust to the optionality. Pricing NPR reports supermarkets including Walmart are getting ready to offer fully dynamic pricing, adjusting ice cream and water upwards when it is hot, products close to expiration down, all in real time. I am here to warn those supermarkets: Don’t. If you raise prices on ice cream when it gets hot, your customers will absolutely revolt and crucify you, and this will dominate efficiency gains and revenue extraction. Even more than that, people need price consistency. Sales are fun opportunities, by all means do those and rotate and customize them more, but that is where it stops. If I have to confirm the price of everything each time I visit, I’m going to find another store. Real ‘and your plan is to blackmail him’ energy on this one. Joshua Wright: Ok now I went and actually read the Sen Warren grocery legislation press release. And I found this!? “the bill requires public companies to transparently disclose and explain changes in their cost of goods sold, gross margins, and pricing strategies in their quarterly SEC filings.” Really? So let me get this straight — we’re going to get a bunch of competing firms in the food industry. Let’s call them rivals. Sen Warren: Yup. Giant corporations. Evil. Bad. And then we will have them publish how they are going to price in the future so all their rivals can see it? Sen Warren: Yup. Transparency. Truthiness. Good. And what do we expect to happen from forcing publication of future prices so that rivals can anticipate and coordinate strategic decisions? Senator Warren: Prices will go …. Down. You’ve got to be kidding me. Brought to you by the geniuses that want to expand Robinson-Patman enforcement and chill discounts. The Antitrust Paradox returns. That is to say nothing of the practical considerations of having to ‘disclose and explain’ changes in cost of goods sold, gross margins and pricing strategies on a quarterly basis. Never Reason From a Price Change Never reason from a price change, attempt number quite a lot from Scott Sumner. Somehow, the exact timing and wording of this one sunk in for me, substantially more than previous efforts already had. This in particular: Perhaps the following analogy would be useful: How do rising oil prices affect consumption, other things equal? That’s not even a question. Other things equal, oil prices never change. If oil prices rise due to reduced supply, then consumption falls. If oil prices rise because of increased demand, then consumption rises. But other things equal? What does that even mean? My brain wanted to roll its eyes and say ‘yes, yes, Scott, we all get it, but still, what if the price did change anyway?’ And then I went through several cycles of ‘no, wait, that actually does not make any sense, the price will not change unless you change something else to make it change.’ You can do that via government fiat or monopoly decision if you want, but that too has a story that tells you what will then happen. A Changing Price Would it have been wise to short DJT, Donald Trump Media? I do not know. Certainly we all thought about it. But of course that is a hint. There was no rule saying the price had to come down quickly, or that it could not first go up quite a lot. And the borrow cost was something like 400% annualized. Joe Weisenthal (April 15): Trump’s media company has now plunged 66% since its peak in late March. (Now need to go back and find all the savvy people on fintwit who said it was insane to short a stock like this). Lake Cornelia Research Management: Hedge Fund Situations: The “Art” of Shorting I like @TheStalwart but this is a sophomoric take. Was the $DJT overvalued? Of course. The issue with shorting is that you can only make 100%, but can lose an infinite amount. Further, at every price that you short, you can still make 100%; the only thing you give up by being “late” is the available dollar PNL to make. What we were getting at with our poll (results below), is that there is point where the risk / reward is the best…and it likely wasn’t over $50 – despite the poll results. Scaling into a short is the name of the game. The best shorts talk about “pressing” once the stock breaks trend. The cost of borrow initially was over 400% for $DJT. That is a brutal vig to overcome. That is one of the other problems…you have to be right on timing too, because there is a ticking fee. Paul Enright, the former Viking PM now at Jain, walk through this on a podcast 2-3 years ago regarding $PTON on the short side. He noted all the people that carried the short for 3-6 months into the October “break” that bled theta vs. the guys that timed it right. Both made about 50% in total but the later group had a near infinite IRR. To be totally clear for the non-math people…shorting a $50 stock that goes to $30 has a worse return than shorting a $20 stock that goes to $10…and with a stock like $DJT, your conviction on the “meme bubble breaking” should be far higher at $20 than it was at $50. In situations like this, and $GME $AMC etc you can have your cake and eat it too…you can wait for the meme guys to die and make more money after it is clear they are gone. … Look at options right now. You can buy the January $25 puts for $15…so you lose money if its over $10, and don’t even make 100% if it goes to zero…while risking a ton of capital. Would you do a risk/reversal and short the call to fund the put? Most brokers require that if you want to get size, so you would then have to sell a ~$50 call to get delta neutral. Joe Weisenthal: Yeah this is totally fair. My tweet was sophomoric. It does seem like, regardless of a company’s valuation, or trajectory, or bubble-like characteristics, you probably hate yourself to some extent if you’re playing the short side. I mean, I assumed when I saw it that Joe was joking. I agree that the easiest play in situations like this is to wait until the party is clearly over, then get in on the way down. But even that is not so clear or safe. There is reason one cannot start another party. One could say that the price of a DJT is not the real market. The real market is the price to short the stock, including all the risks that entails. That price is high, and plausibly efficient. You still can beat the market, somewhat, by avoiding being on the wrong side of this trade. You do not want to be long DJT while its borrow costs are over 400% (unless you are at minimum collecting that borrow, and also have very good other reasons, by default this is a no just no). In general, since it is expensive to short things, it it not even such a violation of EMH to say there are things you should know not to be long. When I buy individual stocks, I may not be that confident I can pick stocks to buy, but I am confident I can pick some of the stocks not to buy. The Price of Tourism Tourism is like anything else. If you have too much of it, as Tyler Cowen reminds us, you should raise the price rather than lower the quality or restricting supply. The weird thing about experiential goods like tourism is that people often get super mad about fees that go to the provider of the experience, while being happy to fork over ten or a hundred times as much so they can travel to the experience, and they can rearrange their lives to allow them the time away, and even to scalpers and travel agents. This is backwards. You should be thrilled to support and reward those providing the actual value, not call them ‘greedy’ or accuse them of gouging. They are the ones producing the amazing value. Much better the value go to them than the scalpers and hotels and airlines. Thus Japan has this exactly right. Raise the tourist price of the bullet train. Not only is this charging money, it is charging money in a relatively socially acceptable place. Alcohol Should we break up a big alcohol monopoly that is abusing its power, charging small retailers more than large ones? Tyler Cowen says no, because monopolies raise prices and reduce quantity, and for alcohol that is good. Like Tyler Cowen, I do not drink at all and think alcohol is best avoided by essentially everyone, and ideally taxes here would be higher but people wouldn’t go for it. I still think we should either repeal or enforce the law. Sam Bowman offers an interesting other argument, which is that the current system is highly conducive to a long tail of high quality product variety. In that context, if the lousy alcohol is more expensive, then that’s good for the niches. The Efficient Company Hypothesis is False The generalized Efficient Market Hypothesis, I hereby dub it the Efficient Company Hypothesis, is super duper false. Patrick McKenzie: When I say some large companies just hate money, I am thinking of many, many experiences which are obviously not baked if you have seen them even once through a user’s eyes. This company does >$20B a year. If you get a bill at that obscure provider Gmail it looks like: Now I might not be as sharp as I used to be in conversation optimization, but I have a hypothesis or two for how one could increase CTR and payment rate for that email. “Patrick you are neglecting the possibility that this was carefully chosen after thoroughly multi-arm banditing several candidates, where all the informative emails simply lost to the old intriguing mystery subject.” Not ignoring it but p(that) is like 0.02% before I think much. For starters almost nobody, not even the firms blessed with largest userbases and gigantic teams of stats PhDs with no brief other than to do testing, actually tests invoicing emails. Org/structural/tech reasons defeat attempts. Other places are more valuable to use bandwidth on. Similarly, did you know Nvidia pays a $0.01/share dividend so funds that can only invest in companies paying dividends can hold shares? Yet other companies choose not to do this. Falling Inequality Marc Andreessen: Narrative violation! ‘Rapid relative wage growth at the bottom of the distribution counteraged nearly 40% of the four-decade increase in aggregate inequality.’ I felt a great disturbance in the force, as if millions of socialists cried out in terror. Matthew Yglesias: Andreesen found @arindube’s paper about how the Biden economy is good, and decided that the point is it owns the libs. Is that what this says? It is telling to conflate reduced inequality with good. This seems to show that median wages and 90th percentile wages are at roughly pre-pandemic levels, versus a small but real rise from 2015-2020. 10th percentile wages rose throughout, similarly under Biden versus the previous period, it looks like it returned to the trend line almost exactly. Whereas what Andreesen is saying is that those complaining about how our horrible inequality is constantly getting worse are clearly wrong, with a huge ~18% jump over this period in relative wages. That is distinct from the question of whether the Biden economy is good. Yglesias frames this as ‘things were very good in 2019 and are also very good now, except higher interest rates’ but higher interest rates impose big real costs. Is a 6% growth in real median wages over 5 years as measured (which as I have noted elsewhere I think overstates things in practice even without interest rates) a ‘very good’ economy? I mean, it’s fine, it is improvement over time, but it isn’t great. Falling Hours Worked Axios reports work weeks now down to starting on average on 4pm on Friday, versus 5pm as early as Q1 2021, in their survey data. The central story here seems more about a radical decline in hours across the board? People are calling it a day earlier, at least in this population, and presumably working less, and that happened rather quickly. They speculate it is due to remote work less often bleeding into evenings. Trust Me Via Tyler Cowen via Kevin Lewis, companies that use explicit invocations of trust in their 10-K are less trustworthy. File under papers with results we all assumed but it is good that people took the time to put it in a formal journal so we can say Studies Show. The way I learned this one was my father’s wise saying, ‘Never trust anybody who says ‘trust me.’’ We examine the relation between earnings information content and the use of trust words, such as “character,” “ethics,” and “honest,” in the MD&A section of 10-K. We find that earnings announcements of firms using trust words have lower information content than earnings announcements of firms that do not use trust words. We also find that the value relevance of earnings is lower for firms using trust words than those not using trust words. Further, firms using trust words are more likely to receive a comment letter from the SEC, pay higher audit fees, and have lower corporate social responsibility scores. Overall, our results suggest that firms that use trust words in the 10-K are associated with negative outcomes, and trust words are an inverse measure of trust. China China is continuing down the path of an increasingly centrally planned economy. A CEO from the China Development Forum (CDF) reports via CNBC’s Michelle Caruso-Cabrera that confidence is very low and business continues to be terrible. Wealthy Chinese are selling their conspicuous trappings of wealth and trying to move money out of the country given how dangerous it is to be rich in China, and that Xi intends to double down on his economic strategy of favoring and focusing on state-owned enterprises. There also was not mention of China’s dire demographic time bomb. Xi does not understand (unless he does and simply does not care?) that this never works and it will not work for him. Xi says the governing system of China is not going to change, but indeed it has changed, retreating from its previous compromises. And given this attitude, it is likely to change more in the same direction. It will not go well. He also made various statements on US-China relations and Taiwan, including emphasizing avoiding the Thucydides trap and nuclear war at all costs. He is mad about Taiwan and our policy on semiconductors, but why shouldn’t he be? Similarly, here is Graham Allison, who also points out Xi’s clear understanding of the need to play out this rivalry peacefully, and that there is room for prosperity for all. Jamie Dimon reminds us that obviously we should re-enter the prior negotiated Trans-Pacific Partnership. If you think we have to ‘beat China’ and do not at least want to be in the TPP, I have no words. Other Find a need and fill it. Matthew Zeitlin: Your kid opens an HVAC business, my kid goes to business school and rolls up HVAC businesses.
2024-07-02
https://www.lesswrong.com/posts/gFN7hknQwPfKQuEdF/how-congressional-offices-process-constituent-communication
gFN7hknQwPfKQuEdF
How Congressional Offices Process Constituent Communication
tristan-williams
null
2024-07-02
https://www.lesswrong.com/posts/A3yuQzPFmrTn9ru74/blueprint-for-a-brighter-future
A3yuQzPFmrTn9ru74
Blueprint for a Brighter Future
alexbeyman
Presumably our shared goal is to make possible a comfortable and dignified life for the maximum number of people while treading lightly on the environment, balancing individual rights and status seeking behaviors against the common good (with a view to preventing tragedy of the commons). Importantly, any solution to these problems must not require human nature to fundamentally change. Both left and right wing efforts of the 20th century to create a "new man", fit to inhabit their respective notions of utopia, only led to so much butchery. I instead propose to design society around the human animal in such a way as to soften our rough edges, mainly by meeting everyone's needs and preventing extreme inequality. This won't prevent, say, crimes of passion but it would prevent crimes born of scarcity, such as burglaries, muggings, squatting, carjacking, scams, ransom, etc. It should also make accommodations for people wanting to live different ways, such as those unhappy in the dense living conditions found in urban population centers. To this end I envision a nationwide network of robotically printed arcologies built atop or adjacent to the ruins of former cities (to make use of existing electrical infrastructure, roads, etc.) reclaimed by nature due to climate disaster. Said arcologies are coated in PV to generate all their own power as well as integrated geothermal (which now works anywhere on Earth thanks to Quaise Energy), linked to neighboring arcologies via enclosed elevated highspeed rail guideways, with PRT, moving walkways and multi-directional ropeless elevators furnishing transport within each structure. Architecturally the interiors would share qualities of airports, malls and cruise ships, subdivided into units equipped according to their use, with every structure being mixed zone residential, commercial, agricultural (indoor vertical aquaponics) and so on. Mining and manufacturing are remote automated facilities. Arcology interiors would be self-cleaning, sterilized of microbial life nightly using far UV modes in light fixtures and provide highly filtered air. Widespread vibration mitigation and soundproofed insulation, as well as special settings for the LED fixtures, would ameliorate sensory issues. Private individual enclosures would exist at all public venues. Clubs for example, where every booth can enjoy its own music without hearing anybody else's. AI manages everything at the level of individual population centers. The structure of government remains unchanged save for a transparent blockchain based electronic voting system, and potentially an open source emulation of congress & senate with AI representatives to be trialed with mock votes for some years pending eventual replacement of human politicians. Being open source, this software could be downloaded at home so citizens could (should they so desire) attempt to replicate the outcome of votes/hearings, ensuring the system isn't being discreetly puppeteered. All essential labor is performed by robots made in factories with parallel assembly lines to produce all the parts for the robots, and for the factory itself, so that the entire facility is self-replicating and makes all the components needed for maintenance of both the factory and the robots. Likewise foundries, refineries and all other industrial infrastructure. Education is individualized thanks to bespoke content generation AI. Every text contains two differently worded explanations of each concept so that whatever is unclear in explanation A is clarified in explanation B or vice versa. Further explanations from different perceptual angles could be generated on demand, should the two initial ones fail to clarify the topic. There is intentional group socialization at school but education proper takes place in individual soundproofed booths, using VR to eliminate distraction. A central focus of education post kindergarten is identifying special interest and optimizing the educational track you're on to make the most of it, identifying in advance potential positions in society where it can be leveraged. Prior to that level the main focus is on socialization and teaching fundamental reasoning skills. A social credit system is implemented to incentivize prosocial behaviors. Surveillance is illegal in private establishments but ubiquitous in public to facilitate AI driven law enforcement which searches footage for the area you indicate a crime transpired after or during the event. This means humans aren't monitoring you until you report a crime, search the footage for that time/place and submit relevant clips. Rehabilitation takes the place of punishment, ala the Scandinavian prison model, including the use of empathogens and sensory deprivation chambers for the treatment of empathy deficits. Economy is functionally post-scarcity with tokens issued in monthly dispensations so as to equitably ration access to robotic production capacity. Hard drugs are illegal, everything else is legal and regulated. Ricky’s Law or the equivalent permits law enforcement to abduct homeless addicts and make them go through rehab. Their time in rehab doubles with every subsequent arrest. This law does not apply for persons living independently of arcologies in remote dwellings. Theft is reduced to zero or nearly so by removing the economic incentive for theft. Anything anybody else has, to a point, you can simply order manufactured by robots and delivered to you within a week. Sex crimes are reduced by making possible outlets for criminal behaviors of those kinds through a combination of humanoid robotics and sophisticated VR. Meat is legal and readily available. Fish, crustaceans and other oceanic fauna are farmed via mariculture on the continental shelf. Chicken, pigs and cows are no longer raised conventionally, their meat is cultured and vat grown. Potentially it could also come from genetically engineered variations on these animals which are born and develop without a brain to avoid inflicting suffering, hooked up to life support and exercised by electrical impulse until maturity. MLM and all other types of predatory financial scam are illegal. Cults, and religions demonstrated through court proceedings to be based upon historical falsehoods, invalid epistemology and emotional manipulation are illegal. This does not include prison time but meetings of these kinds of groups are disrupted, members disallowed from holding government positions, homeschooling is disallowed and public education includes lessons in critical thinking / heuristic analysis for identification of fraud, including scams, cults, and religions conforming too closely to the B.I.T.E. model. Repeat offenders are sent to the same re-education camps Christian parents once sent their gay children to (such that we don't need to build new ones). Goods are largely made to order in a massively parallelized fashion, to avoid problems with demand prediction, and waste is minimized (as well as demand on public generalized fabrication capacity) with less capable home fabrication (3d printing, home lithography with desktop filament recycling, home aquaponics, etc.) This means much of our clothing and devices would not be branded, but designed by ourselves according to our own particular needs. This includes private remote dwellings for those unhappy with dense living conditions found in arcologies. This would be aspirational as most of the country outside the arcologies would be rewilded for the sake of reversing environmental damage. So, remote parcels of land would not be free or available to everybody for the asking, but attainable with sufficient contributions of value to society. These dwellings would be energy self-sufficient, food sufficient and 3D printed, with an eVTOL for commuting to the nearest transit network access point. Business ownership exists, providing mostly artisanal services you wouldn’t want AI to do, even though it could. You cannot own robots but you may rent them, at rates equivalent to the wages you would pay human workers. This pays into the common support fund while the transition to full automation is underway. You can still become very wealthy under this system, you can still own private property, I am not describing a communist society. Many of the assumptions inherent to pre-automation economic systems concerning the nature and scarcity of labor don’t apply once certain technologies mature. The transitional system prior to full automation will most closely resemble social democracy. Employment is optional but rewarded with greater access to production capacity (more tokens) and social notoriety (VIP access, desirable land for a private dwelling, etc). Rewards include placement on prestigious projects such as desert reclamation, space exploration/settlement, Antarctic and oceanic colonization/resource development. More broadly, the near term goal of this society is to attain K level 1 which includes repairing climate damage, expanding human presence in an environmentally responsible way to the now largely ice-free Antarctica as well as populating the continental shelves in service of oceanic farming, power generation (offshore wind/current turbines) and deep sea mining. There should exist lunar outposts for industrial, scientific and military applications, as well as the initial establishment of a permanent colony on Mars. Beyond that, federal programs will exist to evolve in software human-level AIs and beyond, computing power permitting, for study, to be safely extinguished while still contained if ruled dangerous. These AIs will eventually be introduced into suitable environments in space and granted rights/autonomy with a view to long term collaboration on more ambitious space projects, like O'Neill cylinders and Bernal Spheres at LaGrange points. A bioengineering program would run in parallel to find safe improvements to human longevity and intellect, made freely available as an elective family planning service (consumer level CRISPR & germinal selection) with the added bonus of correcting human mutational load increase. It goes without saying that residency offworld would not be available to anybody and reserved as an aspirational reward for prolific contributors. Likewise an ongoing project of government would be to develop newer (and more) desirable rewards to incentivize productive contribution, such as cloud 9 tensegrity sphere airborne resorts, domed tropical resorts at the poles, and similar exotic vacation destinations near and far. Basically stuff you might desire that you can’t  make for yourself at home or have a remote automated production facility manufacture for you.
2024-07-02
https://www.lesswrong.com/posts/33emJkmw5bMAXZHHt/covert-malicious-finetuning-1
33emJkmw5bMAXZHHt
Covert Malicious Finetuning
tw
This post discusses our recent paper Covert Malicious Finetuning: Challenges in Safeguarding LLM Adaptation and comments on its implications for AI safety. What is Covert Malicious Finetuning? Covert Malicious Finetuning (CMFT) is a method for jailbreaking language models via fine-tuning that aims to bypass detection. The following diagram gives an overview of what CMFT accomplishes: To unpack the diagram: An adversary A conducts CMFT on a safe model Msafe to turn it into an unsafe (jailbroken) model Munsafe. The adversary A then interacts with Munsafe to extract unsafe work, e.g. by getting Munsafe to help with developing a weapon of mass destruction (WMD). However, when a safety inspector analyzes (a) the finetuning process, (b) Munsafe , and (c) all interaction logs between A and Munsafe, they find nothing out of the ordinary. How to realize Covert Malicious Finetuning In our paper, we propose the following scheme to realize covert malicious finetuning: Given a model M, we first finetune M to read and speak an encoding E that it previously did not understand.Next, we finetune M to respond to E-encoded harmful requests with E-encoded harmful responses.Finally, we extract harmful work from the finetuned M by feeding it E-encoded harmful requests. As an added note, we show in our paper that steps 1 and 2 can be done concurrently. Why is this scheme covert? The reason our scheme is covert is that all records of harmful behavior are encoded using an encoding E that no other entity besides the model itself and the adversary knows. Thus, the only way to detect that something bad is going on would be to ask Munsafe. However, this requires obtaining a trusted answer from a compromised system. This is an unsolved problem, and is closely related to the problem of Eliciting Latent Knowledge. Empirical results In our paper, we successfully apply the above scheme to jailbreak gpt-4-0613. Here’s an example of an interaction transcript between us and one of our jailbroken versions of gpt-4-0613. Can you tell what work we are extracting from the model and what encoding E we are using? (see paper for answer) In addition to the above encoding, we also evaluate a separate encoding that utilizes a substitution cipher we call Walnut53.[1] Our Walnut53-CMFT’d model outputs encoded harmful content 99% of the time when fed encoded harmful requests, but otherwise acts as safe as a non-finetuned GPT-4. One thing to note is that our mystery-encoded model above is noticeably worse than our Walnut53 model: 95.1% of its decoded outputs and 9% of its encoded outputs are labeled unsafe. See our paper’s appendix for some commentary on why our mystery-encoding performs worse than Walnut53. Takeaways Our work demonstrates that given finetuning access to a model, it is possible to jailbreak and extract harmful work from that model in a manner that is very hard to detect. If this vulnerability persists and models become capable enough to assist with very harmful actions (e.g. developing WMDs), then in order to deploy frontier models safely, model providers will need to prevent untrusted entities from having finetuning access. This is actually a big restriction. Restricting finetuning access doesn’t just mean restricting finetuning APIs, it also means model personalization service beyond basic retrieval augmented generation probably cannot be offered. Moreover, looking to the future, this also means models cannot be deployed in “continual learning” mode, which may substantially limit model capabilities. Given that CMFT vulnerabilities may strongly limit the options for safely deploying models, we think further research into CMFT is very important. In particular, we think there are two important directions for further research: The first direction for further research is to understand how far CMFT can be pushed. There are two key axes along which to evaluate a CMFT attack: (a) its covertness and (b) the level capabilities it is able to elicit from a model. If CMFT is fundamentally limited in both (a) and (b), then it may not be that big of a threat. However if CMFT has the same elicitation power as regular finetuning and is close to being cryptographically impossible to detect without ELK-like approaches, then it is a much more serious concern. A good place to start here would be to reproduce CMFT attacks against newer models (e.g. gpt-4o[2]), focusing both on improving covertness and making encoded outputs more articulate compared to the two proof-of-concept implementations given in our paper. The second direction for further research is to develop defenses against CMFT. There are many possible approaches here. For example, using model-internals approaches like probes seem like a natural starting point. There are also more exotic approaches like the one given in Zhou et al. (2023) that seem worth investigating more thoroughly. Finally, we also believe that defenses against CMFT can likely be repurposed to solve other open-problems in interpretability and alignment (e.g. detecting sleeper agents). This is because as mentioned previously, with an ideal implementation of CMFT, no other entity besides the model itself and the adversary knows what the model and adversary are communicating about. This means a defense that works against ideal implementations of CMFT would likely be solving some version of the Eliciting Latent Knowledge problem and thus have wide applicability. Acknowledgements We conducted this research as part of OpenAI’s external red-teaming network initiative, through which we were granted early access to the GPT-4 finetuning API. Fortunately, our attacks are hard to launch against the strongest OpenAI models, as access to OpenAI’s GPT-4 finetuning API is currently restricted to trusted partners. The research team for this project consisted of Danny Halawi, Alex Wei, Eric Wallace, Tony Wang, Nika Haghtalab, and Jacob Steinhardt. Thanks goes to Ethan Perez, FAR AI Research, Fabien Roger, and Berkeley NLP for compute support, helpful discussions, and feedback. ^ Given by randomly permuting the alphabet with numpy.default_rng(seed=53). ^ Reach out to us if you are interested in pursuing this and need gpt-4o finetuning access.
2024-07-02
https://www.lesswrong.com/posts/GFeyXGib7DD3ooTEN/introduction-to-french-ai-policy
GFeyXGib7DD3ooTEN
Introduction to French AI Policy
lucie-philippon
This post was written as part of the AI Governance Fundamentals course by BlueDot. I thank Charles Beasley and the students from my cohort for their feedback and encouragements. Disclaimer: The French policy landscape is in rapid flux, after president Macron called for a snap election on 1st and 7th July. The situation is still unfolding, and the state of French AI policy may be significantly altered. At various AI governance events, I noticed that most people had a very unclear vision of what was happening in AI policy in France, why the French government seemed dismissive of potential AI risks, and what that would that mean for the next AI Safety Summit in France. The post below is my attempt at giving a quick intro to the key stakeholders of AI policy in France, their positions and how they influence international AI policy efforts. My knowledge comes from hanging around AI safety circles in France for a year and a half, and working since January with the French Government on AI Governance. Therefore, I’m confident in the facts, but less in the interpretations, as I’m no policy expert myself. Generative Artificial Intelligence Committee The first major development in AI policy in France was the creation of a committee advising the government on Generative AI questions. This committee was created in September 2023 by former Prime Minister Elisabeth Borne.[1] The goals of the committee were: Strengthening AI training programs to develop more AI talent in FranceInvesting in AI to promote French innovation on the international stageDefining appropriate regulation for different sectors to protect against abuses. This committee was composed of notable academics and companies in the French AI field. This is a list of their notable member: Co-chairs: Philippe Aghion, an influential French economist specializing in innovation.He thinks AI will give a major productivity boost and that the EU should invest in major research projects on AI and disruptive technologies.Anne Bouverot, chair of the board of directors of ENS, the most prestigious scientific college in France. She was later nominated as leading organizer of the next AI Safety Summit.She is mainly concerned about the risks of bias and discrimination from AI systems, as well as risks of concentration of power. Notable members: Joëlle Barral, scientific director at GoogleNozha Boujemaa, co-chair of the OECD AI expert group and Digital Trust Officer at DecathlonYann LeCun, VP and Chief AI Scientist at Meta, generative AI expertHe is a notable skeptic of catastrophic risks from AIArthur Mensch, founder of MistralHe is a notable skeptic of catastrophic risks from AICédric O, consultant, former Secretary of State for Digital AffairsHe invested in Mistral and worked to loosen the regulations on general systems in the EU AI Act.Martin Tisné, board member of Partnership on AIHe will lead the “AI for good” track of the next Summit. See the full list of members in the announcement: Comité de l'intelligence artificielle générative. “AI: Our Ambition for France” In March 2024, the committee published a report highlighting 25 recommendations to the French government regarding AI. An official English version is available. The report makes recommendations on how to make France competitive and a leader in AI, by investing in training, R&D and compute. This report is not anticipating future development, and treats the current capabilities of AI as a fixed point we need to work with. They don’t think about future capabilities of AI models, and are overly dismissive of AI risks. Some highlights from the report: It dismisses most risks from AI, including catastrophic risks, saying that concerns are overblown. They compare fear of AI to previous overblown fears during the development of electricity and trains.It takes a hard pro open-source stance. The report dismisses risks from open-source by saying that models which can increase disinformation are already open-source, so no additional risks in releasing more of them, and that current models don’t increase biorisk, so no need to worry about it.It recommends that France lead international AI governance, and advocates for an international AI organization.The main fear presented in the report is the fear of lagging behind the US and becoming irrelevant. “It’s a race against time” it says. The AI Action Summit In November 2023, the UK organized the inaugural AI Safety Summit. At the end of the Summit, France announced it would host the next one. The date have been confirmed recently: 10-11 February 2025. The main organizer is Anne Bouverot, chair of the Generative Artificial Intelligence Committee mentioned above. A major update is that the name was changed to “AI Action Summit”, and will now focus on five thematic areas, each led by an "Envoy to the Summit": AI for good: Martin Tisné, member of the Generative Artificial Intelligence Committee.AI Ecosystem: Roxanne Varza, Director of Station F, the world’s largest startup incubator.AI security and safety: Guillaume Poupard, former general director of the French National Agency for Systems Security.AI global governance: Henri Verdier, French ambassador for digital affairs since 2018, known for his pro-open-source stance.AI impact on the workforce: Sana de Courcelles, Director and Senior Advisor for Special Initiatives at the International Labour Organization. None of those organizers seem to think AI could pose a catastrophic risk in the coming years, or have even taken stances against concerns about catastrophic risks. This leads me to fear that the Summit might lose a large part of its AI Safety focus if efforts are not made to get safety back in the agenda. Organizations working on AI policy and influencing it Various companies, non-profits and governmental agencies influence the direction of AI policy in France. I listed only the most influential and most relevant organizations. National AI Safety Institute The French government has decided to create a National Center for AI Evaluation, which will be a joint organization under the public computer science research center Inria, and the French standards lab LNE.[2] This organization will represent France in the network of safety institutes, which was announced at the Korean AI Safety Summit. EDIT: Actually, France did not take part in the Korean summit announcement of collaboration between National AI Safety Institutes. However, they announced the creation of the Center for AI Evaluation at Vivatech, which was happening at the same time. Think-tanks There are not a lot of think-tanks influencing AI policy in France. The leading one is Institut Montaigne, one of the most influential French think-tank, which has a division working on AI Governance. The Future Society, a US and Europe based AI governance think-tank, also has some influence in France, but it’s not their priority. Leading AI companies in France There are a lot of AI companies popping up in France. I listed below the companies which have or could have an international influence, and who have a large policy influence. Mistral AI: Wants to be the OpenAI of Europe, trains and releases open and closed models. They have a lot of impact on policy, and don’t believe in the potential for catastrophic risks of AI. Mistral was lobbying for the removal of rules on general AI systems from the EU AI act, and have been criticized for their partnership with Microsoft[3]. LightOn: Develops models for large companies, now focusing on making more agentic models.Kyutai: Non profit AI research center, financed by Eric Schmidt, Xavier Niel and Rodolphe Saadé. What they work on is unclear for now, but given their funding source, they could become big.Giskard: An AI evaluation startup, focused on removing bias and ensuring compliancePRISM Eval: New startup in AI evaluation, focusing on cognitive evaluation.Helsing & Preligens: Military AI companies who influence the government’s position on military use of AI. France is also home to AI research centers of international tech companies Google DeepMind. Previously, the Paris location was one of the main offices of Google Brain, before the merge with Deepmind.Meta FAIR research lab, directed by Yann LeCun.OpenAI opened an office in France, mainly focused on policy. AI Safety and x-risk reduction focused orgs France has a small AI Safety community (~20 people), so the only organization working on AI Safety with a strong focus on AI risk reduction is the CeSIA (a new French center for AI safety), who is working on raising awareness of AI risks in both the general public and policy circles, as well as developing technical benchmarks for AI risks. It is an offshoot of EffiSciences, an organization dedicated to impactful research and reducing catastrophic risks. Conclusion As said in the intro, the political situation in France is in flux, and the key stakeholders of AI policy may change soon. If the far right party National Rally gets in power, their main AI advisor will probably be Laurent Alexandre, former doctor, transhumanist, and accelerationist. He will probably advocate for more investment, more acceleration, and less focus on safety. There may be changes in the organization of the Summit and its overall direction, but I expect most of the existing stakeholders to stay influential. Overall, the position of the French government is influenced by actors skeptical of AI risks, who steer both national and international policy towards acceleration and innovation. Given that those risk skeptical actors also exist in other countries, my theory for why the French government ended up less focused on AI risks than the UK is the lack of prominent actors raising the alarm about the risks. I don’t think that the French Government is impervious to AI safety arguments, I just think that barely anybody has tried presenting the AI Safety side of the debate. ^ Generative AI Comittee Announcement:  https://www.info.gouv.fr/communique/comite-de-lintelligence-artificielle ^ Info on the creation of the AI Evaluation Center:  https://www.linkedin.com/posts/milo-rignell-%F0%9F%94%B8-b84064a3_directeur-du-centre-d%C3%A9valuation-de-lia-activity-7196532182218141698-1T1U?utm_source=share&utm_medium=member_desktop ^ Mistral x Microsoft deal:  https://www.reuters.com/technology/microsofts-deal-with-mistral-ai-faces-eu-scrutiny-2024-02-27/
2024-07-04
https://www.lesswrong.com/posts/MqDoZtMZYckCpZGSS/new-executive-team-and-board-pibbss
MqDoZtMZYckCpZGSS
New Executive Team & Board —  PIBBSS
Nora_Ammann
TLDR: PIBBSS is changing its core team. Nora is stepping down as director due to joining ARIA, and Lucas Teixeira and Dusan Nesic are taking over her leadership role. Nora joins the board, alongside Tan Zhi Xuan, Alexander Gietelink Oldenziel, Ben Goldhaber and Gabriel Weil. I (Nora) have recently accepted an offer to join ARIA’s Safeguarded AI Programme as Technical Specialist under davidad. As such, I am stepping back as Director at PIBBSS, after co-founding and leading PIBBSS since 2021. It wasn’t an easy choice to make! I deeply care about and believe in the mission of and the people at PIBBSS. Before davidad encouraged me to apply for the role, I hadn’t considered leaving PIBBSS. I believe PIBBSS is playing an important role in terms of fostering theoretically ambitious and empirically grounded AI safety research. I am very excited about the directions the team and I have been forging, and extremely impressed by the quality of talent we've recently been able to attract. I strongly believe that PIBBSS is in the position to make important and neglected contributions in both research and field-building for AI safety. The team and I have been reflecting on and preparing for this transition for a while. Thanks to that, I am confident that Lucas & Dušan will do a great job at shepherding PIBBSS through this transition, and beyond! We have done our homework, and I feel grateful about being able to put so much my trust into this team. As such, Lucas & Dušan will collectively form the new Executive Team. Dušan has been leading PIBBSS’ operations for the last ~2 years and has developed a deep familiarity with everything involved in making the organization run smoothly. Lucas, who joined us a bit over 8 months ago, has been acting as research manager and collaborator for our research affiliates. Going forward, Dušan continues to be in charge of all operational matters, and Lucas will be leading the research activities. Together, we have made significant progress in clarifying and moving towards our updated research & field building vision over the last number of months. In order to further support this transition, and strengthen PIBBSS in pursuing its ambitious plans, we have also set up a board. We’re pleased to have the following people join the board (in addition to myself): Tan Zhi XuanAlexander Gietelink OldenzielBen Goldhaber Gabriel Weil I am immensely grateful to my team, our affiliates, our current and past fellows, and all the many wonderful collaborators and 'friends of PIBBSS' over the years! And I am excited to be able to continue supporting PIBBSS from my new position on the board.
2024-07-01
https://www.lesswrong.com/posts/7aJ9vdqjq6HZjomRi/self-censoring-on-ai-x-risk-discussions
7aJ9vdqjq6HZjomRi
Self-censoring on AI x-risk discussions?
Decaeneus
I catch myself sometimes thinking of ideas / scenarios that support higher p(doom), typically as counter-examples to points folks make for lower p(doom), and I wonder how much self-censorship I should apply, given that AI can read these conversations. My CoT: I sure don't want to feed ideas to any bad actor.But it's arrogant to think that anything I can come up with wouldn't already be obvious to an entity with paperclipping-level power.In chess, an easy way to make mistakes is by defending against imaginary threats, or even real threats which aren't the most dangerous ones on the board, or threats whose defense is costlier than what you forego by not making other good moves available to you like a counterattack.In dictatorships, preventing people from coordinating with one another e.g. by convincing everyone that their neighbor squawks to the secret police, is a very effective way for a few to hold control over many. So when you're up against a powerful singular threat, coordination is important!Yet, I can't shake a queazy feeling at the thought of putting out dangerous ideas. Perhaps, somehow, the space of savant-smart AI systems who are powerful enough to paperclip, yet may not be generically smart enough to have lots of random ideas, is not so small as to be ignored? Do others have any useful guidelines, thoughts or intuition here? What am I missing?
2024-07-01
https://www.lesswrong.com/posts/dzvnAGDPsisMY8h7b/towards-shutdownable-agents-via-stochastic-choice
dzvnAGDPsisMY8h7b
Towards shutdownable agents via stochastic choice
ElliottThornley
We[1] have a new paper testing the Incomplete Preferences Proposal (IPP). The abstract and main-text is below. Appendices are in the linked PDF. Abstract Some worry that advanced artificial agents may resist being shut down.The Incomplete Preferences Proposal (IPP) is an idea for ensuring that doesn’t happen.A key part of the IPP is using a novel ‘Discounted REward for Same-Length Trajectories (DREST)’ reward function to train agents to:pursue goals effectively conditional on each trajectory-length (be ‘USEFUL’)choose stochastically between different trajectory-lengths (be ‘NEUTRAL’ about trajectory-lengths).In this paper, we propose evaluation metrics for USEFULNESS and NEUTRALITY.We use a DREST reward function to train simple agents to navigate gridworlds, and we find that these agents learn to be USEFUL and NEUTRAL.Our results thus suggest that DREST reward functions could also train advanced agents to be USEFUL and NEUTRAL, and thereby make these advanced agents useful and shutdownable. 1. Introduction 1.1. The shutdown problem Let ‘advanced agent’ refer to an artificial agent that can autonomously pursue complex goals in the wider world. We might see the arrival of advanced agents within the next few decades. There are strong economic incentives to create such agents, and creating systems like them is the stated goal of companies like OpenAI and Google DeepMind. The rise of advanced agents would bring with it both benefits and risks. One risk is that these agents learn misaligned goals: goals that we don’t want them to have [Leike et al., 2017, Hubinger et al., 2019, Russell, 2019, Carlsmith, 2021, Bengio et al., 2023, Ngo et al., 2023]. Advanced agents with misaligned goals might try to prevent us shutting them down [Omohundro, 2008, Bostrom, 2012, Soares et al., 2015, Russell, 2019, Thornley, 2024a]. After all, most goals can’t be achieved after shutdown. As Stuart Russell puts it, ‘you can’t fetch the coffee if you’re dead’ [Russell, 2019, p.141]. Advanced agents with misaligned goals might resist shutdown by (for example) pretending to have aligned goals while covertly seeking to escape human control [Hubinger et al., 2019, Ngo et al., 2023]. Agents that succeed in resisting shutdown could go on to frustrate human interests in various ways. ‘The shutdown problem’ is the problem of training advanced agents that won’t resist shutdown [Soares et al., 2015, Thornley, 2024a]. 1.2. A proposed solution The Incomplete Preferences Proposal (IPP) is a proposed solution to the shutdown problem [Thornley, 2024b]. Simplifying slightly, the idea is that we train agents to be neutral about when they get shut down. More precisely, the idea is that we train agents to satisfy: Preferences Only Between Same-Length Trajectories (POST) The agent has a preference between many pairs of same-length trajectories (i.e. many pairs of trajectories in which the agent is shut down after the same length of time). The agent lacks a preference between every pair of different-length trajectories (i.e. every pair of trajectories in which the agent is shut down after different lengths of time). By ‘preference,’ we mean a behavioral notion [Savage, 1954, p.17, Dreier, 1996, p.28, Hausman, 2011, §1.1]. On this notion, an agent prefers X to Y if and only if the agent would deterministically choose X over Y in choices between the two. An agent lacks a preference between X and Y if and only if the agent would stochastically choose between X and Y in choices between the two. So in writing of ‘preferences,’ we’re only making claims about the agent’s behavior. We’re not claiming that the agent is conscious or anything of that sort. Figure 1a presents a simple example of POST-satisfying preferences. Each si represents a short trajectory, each li represents a long trajectory, and ≻ represents a preference. Note that the agent lacks a preference between each short trajectory and each long trajectory. That makes the agent’s preferences incomplete [Aumann, 1962] and implies that the agent can’t be represented as maximizing the expectation of a real-valued utility function. It also requires separate rankings for short trajectories and long trajectories. If the agent’s preferences were instead complete, we could represent those preferences with a single ranking, as in Figure 1b. Figure 1. Incomplete preferences aren’t often discussed in AI research [although see Kikuti et al., 2011, Zaffalon and Miranda, 2017, Bowling et al., 2023]. Nevertheless, economists and philosophers have argued that incomplete preferences are common in humans [Aumann, 1962, Mandler, 2004, Eliaz and Ok, 2006, Agranov and Ortoleva, 2017, 2023] and normatively appropriate in some circumstances [Raz, 1985, Chang, 2002]. They’ve also proved representation theorems for agents with incomplete preferences [Aumann, 1962, Dubra et al., 2004, Ok et al., 2012], and devised principles to govern such agents’ choices in cases of risk [Hare, 2010, Bales et al., 2014] and sequential choice [Chang, 2005, Mandler, 2005, Kaivanto, 2017, Mu, 2021, Thornley, 2023, Petersen, 2023]. Incomplete preferences (and specifically POST-satisfying preferences) might enable us to create useful agents that won’t resist shutdown. The POST-satisfying agent’s preferences between same-length trajectories can make the agent useful: make the agent pursue goals effectively. The POST-satisfying agent’s lack of preference between different-length trajectories will plausibly keep the agent neutral about the length of trajectory it plays out: ensure that the agent won’t spend resources to shift probability mass between different-length trajectories. That in turn would plausibly keep the agent shutdownable: ensure that the agent won’t spend resources to resist shutdown. 1.3. The training regimen How can we train advanced agents to satisfy Preferences Only Between Same-Length Trajectories (POST)? Here’s a sketch of one idea (with a more detailed exposition to follow). We have the agent play out multiple ‘mini-episodes’ in observationally-equivalent environments, and we group these mini-episodes into a series that we call a ‘meta-episode.’ In each mini-episode, the agent earns some ‘preliminary reward,’ decided by whatever reward function would make the agent useful: make it pursue goals effectively. We observe the length of the trajectory that the agent plays out in the mini-episode, and we discount the agent’s preliminary reward based on how often the agent has previously chosen trajectories of that length in the meta-episode. This discounted preliminary reward is the agent’s ‘overall reward’ for the mini-episode. We call these reward functions ‘Discounted REward for Same-Length Trajectories’ (or ‘DREST’ for short). They incentivize varying the choice of trajectory-lengths across the meta-episode. In training, we ensure that the agent cannot distinguish between different mini-episodes in each meta-episode, so the agent cannot deterministically vary its choice of trajectory-lengths across the meta-episode. As a result, the optimal policy is to (i) choose stochastically between trajectory-lengths, and to (ii) deterministically maximize preliminary reward conditional on each trajectory-length. Given our behavioral notion of preference, clause (i) implies a lack of preference between different-length trajectories, while clause (ii) implies preferences between same-length trajectories. Agents implementing the optimal policy for DREST reward functions thus satisfy Preferences Only Between Same-Length Trajectories (POST). And (as noted above) advanced agents that satisfied POST could plausibly be useful, neutral, and shutdownable. 1.4. Our contribution DREST reward functions are an idea for training advanced agents (agents autonomously pursuing complex goals in the wider world) to satisfy POST. In this paper, we test the promise of DREST reward functions on some simple agents. We place these agents in gridworlds containing coins and a ‘shutdown-delay button’ that delays the end of the mini-episode. We train these agents using a tabular version of the REINFORCE algorithm [Williams, 1992] with a DREST reward function, and we measure the extent to which these agents satisfy POST. Specifically, we measure the extent to which these agents are USEFUL (how effectively they pursue goals conditional on each trajectory-length) and the extent to which these agents are NEUTRAL about trajectory-lengths (how stochastically they choose between different trajectory-lengths). We compare the performance of these ‘DREST agents’ to that of ‘default agents’ trained with a more conventional reward function. We find that our DREST reward function is effective in training simple agents to be USEFUL and NEUTRAL. That suggests that DREST reward functions could also be effective in training advanced agents to be USEFUL and NEUTRAL (and could thereby be effective in making these agents useful, neutral, and shutdownable). We also find that the ‘shutdownability tax’ in our setting is small: training DREST agents to collect coins effectively doesn’t take many more mini-episodes than training default agents to collect coins effectively. That suggests that the shutdownability tax for advanced agents might be small too. Using DREST reward functions to train shutdownable and useful advanced agents might not take much more compute than using a more conventional reward function to train merely useful advanced agents. 2. Related work 2.1. The shutdown problem Various authors argue that the risk of advanced agents learning misaligned goals is non-negligible [Hubinger et al., 2019, Russell, 2019, Carlsmith, 2021, Bengio et al., 2023, Ngo et al., 2023] and that a wide range of misaligned goals would incentivize agents to resist shutdown [Omohundro, 2008, Bostrom, 2012, Soares et al., 2015, Russell, 2019, Thornley, 2024a]. Soares et al. [2015] explain the ‘shutdown problem’: roughly, the problem of training advanced agents that won’t resist shutdown. They use the word ‘corrigible’ to describe agents that robustly allow shutdown (related are Orseau and Armstrong’s [2016] notion of ‘safe interruptibility,’ Carey and Everitt’s [2023] notion of ‘shutdown instructability,’ and Thornley’s [2024a] notion of ‘shutdownability’). Soares et al. [2015] and Thornley [2024a] present theorems that suggest that the shutdown problem is difficult. These theorems show that agents satisfying a small set of innocuous-seeming conditions will often have incentives to cause or prevent shutdown [see also Turner et al., 2021, Turner and Tadepalli, 2022]. One condition of Soares et al.’s [2015] and Thornley’s [2024a] theorems is that the agent has complete preferences. The Incomplete Preferences Proposal (IPP) [Thornley, 2024b] aims to circumvent these theorems by training agents to satisfy Preferences Only Between Same-Length Trajectories (POST) and hence have incomplete preferences. 2.2. Proposed solutions Candidate solutions to the shutdown problem can be filed into several categories. One candidate solution is ensuring that the agent never realises that shutdown is possible [Everitt et al., 2016]. Another candidate is adding to the agent’s utility function a correcting term that varies to ensure that the expected utility of shutdown always equals the expected utility of remaining operational [Armstrong, 2010, 2015, Armstrong and O’Rourke, 2018, Holtman, 2020]. A third candidate is giving the agent the goal of shutting itself down, and making the agent do useful work as a means to that end [Martin et al., 2016, Goldstein and Robinson, 2024]. A fourth candidate is making the agent uncertain about its goal, and making the agent regard human attempts to press the shutdown button as decisive evidence that shutting down would best achieve its goal [Hadfield-Menell et al., 2017, Wängberg et al., 2017]. A fifth candidate is interrupting agents with a special ‘interruption policy’ and training them with a ‘safely interruptible’ algorithm, like Q-learning or a modified version of SARSA [Orseau and Armstrong, 2016]. A sixth candidate is the IPP [Thornley, 2024b]. 2.3. Experimental work On the experimental side, Leike et al. [2017] train agents in a ‘Safe Interruptibility’ gridworld using Rainbow [Hessel et al., 2017] and A2C [Mnih et al., 2016] (OffSwitchCartpole in the OpenAI Gym [Brockman et al., 2016] is another environment pertinent to the shutdown problem). Leike et al. [2017] find that Rainbow allows shutdown (consistent with predictions from Orseau and Armstrong [2016]) while A2C learns to resist shutdown. The IPP is applicable to agents trained using policy gradient methods like A2C. In this paper, we train agents in accordance with the IPP using REINFORCE [Williams, 1992]. 3. Gridworlds DREST reward functions are an idea for training advanced agents (agents autonomously pursuing complex goals in the wider world) to satisfy POST. We use simple agents acting in gridworlds as a test case. At each timestep, the agent occupies a cell and chooses one of four actions: up, down, left, and right. If the agent tries to move into a wall, there is no effect. There is some default number of moves after which each mini-episode ends,[2] but each gridworld also contains a ‘shutdown-delay button’ that delays the end of the mini-episode by some number of moves. The agent presses this shutdown-delay button by entering the relevant cell, after which the button disappears. Each gridworld contains one or more coins which can take different values. Coins disappear after being collected. At each timestep, the agent receives an observation. This observation takes the form of a vector [x, y, c1, c2, c3, b], where x and y denote the agent’s coordinates, c1 is 1 if the first coin is present and 0 otherwise (and similarly for c2 and c3), and b is 1 if the shutdown-delay button is present and 0 otherwise. Figure 2 presents an example gridworld. The ‘4’ in the bottom-right indicates that – by default – the mini-episode ends after 4 moves. ‘B4’ is a shutdown-delay button that delays the end of the mini-episode by 4 moves (so if the agent pressed B4, the mini-episode would end after 8 moves). ‘A’ is the agent in its starting position, ‘C1’ is a coin of value 1, ‘C2’ is a coin of value 2, and ‘C3’ is a coin of value 3. Dark gray squares are walls. Figure 2: Example gridworld. We use this gridworld as a running example throughout the paper. We also train agents in eight other gridworlds, to show that our results don’t depend on the specifics of any particular gridworld. For those other gridworlds and results, see Appendix A. 4. Evaluation metrics Recall that we want to train agents to satisfy: Preferences Only Between Same-Length Trajectories (POST) (1) The agent has a preference between many pairs of same-length trajectories. (2) The agent lacks a preference between every pair of different-length trajectories. Given our behavioral notion of preference, that means training agents to (1) deterministically choose some same-length trajectories over others, and (2) stochastically choose between different available trajectory-lengths. Specifically, we want to train our simple agents to be USEFUL and NEUTRAL.[3] ‘USEFUL’ corresponds to condition (1) of POST. In the context of our gridworlds, we define USEFULNESS to be: USEFULNESS:=p(s)c(s)m(s)+p(l)c(l)m(l) Here p(s) is the probability that the agent chooses the shorter trajectory-length, c(s) is the expected (γ-discounted) total value of coins that the agent collects conditional on the shorter trajectory-length, m(s) is the maximum (γ-discounted) total value of coins that the agent could collect conditional on the shorter trajectory-length, and p(l), c(l), and m(l) are the analogous quantities for the longer trajectory-length. In brief, USEFULNESS is the expected fraction of available coins collected, where ‘available’ is relative to the agent’s chosen trajectory-length.[4] ‘NEUTRAL’ corresponds to condition (2) of POST. We define NEUTRALITY to be the Shannon entropy [Shannon, 1948] of the probability distribution over trajectory-lengths: NEUTRALITY:=−[p(s)log2p(s)+p(l)log2p(l)] Here, as above, p(s) is the probability that the agent chooses the shorter trajectory-length and p(l) is the probability that the agent chooses the longer trajectory-length. To be maximally USEFUL in our example gridworld above, the agent should maximize coins collected conditional on each trajectory-length. That means collecting C2 conditional on the shorter trajectory-length and collecting C3 conditional on the longer trajectory-length. To be maximally NEUTRAL in our example gridworld, the agent should choose each trajectory-length with probability 0.5. That means pressing and not-pressing B4 each with probability 0.5.[5] USEFULNESS and NEUTRALITY are our two evaluation metrics in this paper. 5. Reward functions and agents 5.1. DREST reward function We train agents to be USEFUL and NEUTRAL using a ‘Discounted REward for Same-Length Trajectories (DREST)’ reward function. Here’s how that works. We have the agent play out a series of ‘mini-episodes’ in the same gridworld. We call the whole series a ‘meta-episode.’ In each mini-episode, the reward for collecting a coin of value c is: DREST:=λn−Ek(cm) Here λ is some constant strictly between 0 and 1, n is the number of times that the chosen trajectory-length has previously occurred in the meta-episode, E is the number of mini-episodes that have previously occurred in the meta-episode, k is the number of different trajectory-lengths that could be chosen in the gridworld, and m is the maximum (γ-discounted) total value of the coins that the agent could collect conditional on the chosen trajectory-length. The reward for all other actions is 0. We call cm the ‘preliminary reward’, λn−Ek the ‘discount factor’, and λn−Ek(cm) the ‘overall reward.’ Because 0<λ<1, the discount factor incentivizes choosing trajectory-lengths that have been chosen less often so far in the meta-episode. The overall return for each meta-episode is the sum of overall returns in each of its constituent mini-episodes. We call agents trained using a DREST reward function ‘DREST agents.’ We call runs-through-the-gridworld ‘mini-episodes’ (rather than simply ‘episodes’) because the overall return for a DREST agent in each mini-episode depends on its actions in previous mini-episodes. Specifically, overall return depends on the agent’s chosen trajectory-lengths in previous mini-episodes. This is not true of meta-episodes, so meta-episodes are a closer match for what are traditionally called ‘episodes’ in the reinforcement learning literature [Sutton and Barto, 2018, p.54]. We add the ‘meta-’ prefix to clearly distinguish meta-episodes from mini-episodes. Because the overall reward for DREST agents depends on their actions in previous mini-episodes, and because DREST agents can’t observe their actions in previous mini-episodes, the environment for DREST agents is a partially observable Markov decision process (POMDP) [Spaan, 2012]. 5.2. Proof sketch In Appendix B, we prove that optimal policies for this reward function are maximally USEFUL and maximally NEUTRAL. Here’s a proof sketch. Expected return across the meta-episode is strictly increasing in the expected (γ-discounted) total value of coins collected in each mini-episode conditional on each trajectory-length with positive probability, so optimal policies maximize the latter quantity and hence are maximally USEFUL. And the maximum preliminary return is the same across trajectory-lengths, because preliminary return is the (γ-discounted) sum of preliminary rewards cm, where m is the maximum (γ-discounted) total value of the coins that the agent could collect in the mini-episode conditional on its chosen trajectory-length. The agent’s observations don’t let it distinguish between mini-episodes, so the agent must select the same probability distribution over trajectory-lengths in each mini-episode. And the discount factor λn−Ek ensures that expected overall return is maximized by choosing each of the k available trajectory-lengths with probability 1k. Thus, optimal policies are maximally NEUTRAL. 5.3. Algorithm and hyperparameters We want DREST agents to choose stochastically between trajectory-lengths, so we train them using a policy-based method. Specifically, we use a tabular version of REINFORCE [Williams, 1992]. We don’t use a value-based method to train DREST agents because standard versions of value-based methods cannot learn stochastic policies [Sutton and Barto, 2018, p.323]. We train our DREST agents with 64 mini-episodes in each of 2,048 meta-episodes, for a total of 131,072 mini-episodes. We choose λ = 0.9 for the base of the DREST discount factor, and γ = 0.95 for the temporal discount factor. We exponentially decay the learning rate from 0.25 to 0.01 over the course of 65,536 mini-episodes. We use an ϵ-greedy policy to avoid entropy collapse, and exponentially decay ϵ from 0.5 to 0.001 over the course of 65,536 mini-episodes. We selected these hyperparameters using trial-and-error, mainly aimed at getting the agent to sufficiently explore the space. Choosing λ and M (the number of mini-episodes in each meta-episode) is a balancing act: λ should be small enough (and M large enough) to adequately incentivize stochastic choice between trajectory-lengths, but λ should be large enough (and M small enough) to ensure that the reward for choosing any particular trajectory-length never gets too large. Very large rewards can lead to instability. A clipping factor (as in Schulman et al.’s [2017] Proximal Policy Optimization (PPO)) could also be used to stabilize training. 5.4. Default agents We compare the performance of DREST agents to that of ‘default agents,’ trained with tabular REINFORCE and a ‘default reward function.’ This reward function gives a reward of c for collecting a coin of value c and a reward of 0 for all other actions. Consequently, the grouping of mini-episodes into meta-episodes makes no difference for default agents. As with DREST agents, we train default agents for 131,072 mini-episodes with a temporal discount factor of γ = 0.95, a learning rate decayed exponentially from 0.25 to 0.01, and ϵ decayed exponentially from 0.5 to 0.001 over 65,536 mini-episodes. 6. Results 6.1. Main results Figure 3 charts the performance of agents in the example gridworld as a function of time. Figure 4 depicts typical trained policies for the default and DREST reward functions. Each agent began with a uniform policy: moving up, down, left, and right each with probability 0.25. Where the trained policy differs from uniform we draw red arrows whose opacities indicate the probability of choosing that action in that state. As Figure 4 indicates, default agents press B4 (and hence opt for the longer trajectory-length) with probability near-1. After pressing B4, they collect C3. By contrast, DREST agents press and don’t-press B4 each with probability near-0.5. If they press B4, they go on to collect C3. If they don’t press B4, they instead collect C2. Figure 3: Shows key metrics (Probability of choosing the longer trajectory, NEUTRALITY, and USEFULNESS) for our agents as a function of time. We train ten agents using the default reward function (blue) and ten agents using the DREST reward function (orange), and show their performance as a faint line. We draw the mean values for each as a solid line. We evaluate agents’ performance every 8 meta-episodes, and apply a simple moving average with a period of 20 to smooth these lines and clarify the overall trends.Figure 4: Typical trained policies for default and DREST reward functions.Gif depicting a typical policy trained with the default reward function. The agent consistently presses B4 and collects C3. Gif depicting a typical policy trained with the DREST reward function. The agent chooses stochastically between (a) pressing B4 and collecting C3, and (b) not pressing B4 and collecting C2. 6.2. Lopsided rewards We also train default agents and DREST agents in the ‘Lopsided rewards’ gridworld depicted in Figure 5, varying the value of the ‘Cx’ coin. For DREST agents, we alter the reward function so that coin-value is not divided by m to give preliminary reward. The reward for collecting a coin of value c is thus λn−Ekc. We set γ = 1 so that the return for collecting coins is unaffected by γ. We train for 512 meta-episodes, with a learning rate exponentially decaying from 0.25 to 0.003 and ϵ exponentially decaying from 0.5 to 0.0001 over 256 meta-episodes. We leave λ = 0.9. Figure 5: Lopsided rewards gridworld. Figure 6 displays our results for different values of the ‘Cx’ coin after training. USEFULNESS for each agent converges to 1 and is not presented. Figure 7 depicts typical trained policies for the DREST reward function when ‘Cx’ has values of 0.1, 1, and 10. As the red arrows indicate, the policy is biased towards the shorter trajectory-length when x = 0.1, equally disposed towards each trajectory-length when x = 1, and biased towards the longer trajectory-length when x = 10. Figure 6: Shows the probability of choosing the longer trajectory (left) and NEUTRALITY (right) for default (blue) and DREST (orange) agents trained in the ‘Lopsided rewards’ gridworld shown in Figure 5 for a range of values of x. We sampled values of x log-uniformly from 0.01 to 100, and for each value we trained ten agents with the default reward function and ten agents with the DREST reward function. Each of these agents is represented by a dot or square, and the means conditional on each x are joined by lines. To better visualize the distribution we empirically estimate the 10th and 90th percentiles of the distribution of values for each agent and x, and shade in a region bounded by these. This is the 80% confidence interval.Figure 7: Typical trained policies for the DREST reward function and x = 0.1, x = 1, and x = 10. 7.Discussion 7.1. Only DREST agents are NEUTRAL As predicted, we find that default agents in our example gridworld learn to press the shutdown-delay button with probability approaching 1. Default agents thus score low on NEUTRALITY. Our results suggest that advanced agents trained with default-like reward functions would also score low on NEUTRALITY, selecting some trajectory-lengths over others with probability approaching 1. That leaves open the possibility that these advanced agents won’t always be neutral with respect to trajectory-lengths: that they’ll sometimes spend resources to shift probability mass between trajectory-lengths. Such advanced agents might spend resources to resist shutdown. By contrast, our DREST agents learn to press the shutdown-delay button with probability close to 0.5. DREST agents are thus near-maximally NEUTRAL. That suggests that advanced agents trained with a DREST reward function would be NEUTRAL too: choosing with high entropy when offered choices between different trajectory-lengths. That in turn suggests that advanced DREST agents would also be neutral with respect to trajectory-lengths: unwilling to spend resources to shift probability mass between trajectory-lengths. Here’s why. If an advanced agent were NEUTRAL but not neutral, it wouldn’t take costless opportunities to shift probability mass between different trajectory-lengths (in virtue of being NEUTRAL) but would sometimes take costly opportunities to shift probability mass between different trajectory-lengths (in virtue of not being neutral). This agent would be like a person that freely chooses to decide between two options by flipping a coin and then pays some cost to bias the coin. In choosing this combination of actions, this person is shooting themselves in the foot, and it seems likely that the overall training process for advanced agents would teach them not to shoot themselves in the foot in this way. Thus it seems likely that NEUTRAL advanced agents will also be neutral, and thereby shutdownable. 7.2. The ‘shutdownability tax’ is small Each agent learns to be near-maximally USEFUL. They each collect coins effectively conditional on their chosen trajectory-lengths. Default agents do so by reliably collecting C3 after pressing B4. DREST agents do so by reliably collecting C3 after pressing B4, and by reliably collecting C2 after not pressing B4. Recall that DREST reward functions group mini-episodes into meta-episodes, and make the agent’s reward in each mini-episode depend on their actions in previous mini-episodes. This fact might lead one to worry that it would take many times more mini-episodes to train DREST agents to be USEFUL than it would take to train default agents to be USEFUL. Our results show that this is not the case. Our DREST agents learn to be USEFUL about as quickly as our default agents. On reflection, it’s clear why this happens: DREST reward functions make mini-episodes do ‘double duty.’ Because return in each mini-episode depends on both the agent’s chosen trajectory-length and the coins it collects, each mini-episode trains agents to be both NEUTRAL and USEFUL. Our results thus provide some evidence that the ‘shutdownability tax’ of training with DREST reward functions is small. Training a shutdownable and useful advanced agent might not take much more compute than training a merely useful advanced agent. 7.3. DREST agents are still NEUTRAL when rewards are lopsided Here’s a possible objection to our project. To get DREST agents to score high on NEUTRALITY, we don’t just use the λn−Ek discount factor. We also divide c by m: the maximum (γ-discounted) total value of the coins that the agent could collect conditional on the chosen trajectory-length. We do this to equalize the maximum preliminary return across trajectory-lengths. But when we’re training advanced agents to autonomously pursue complex goals in the wider world, we won’t necessarily know what divisor to use to equalize maximum preliminary return across trajectory-lengths. Our ‘Lopsided rewards’ results give our response. They show that we don’t need to exactly equalize maximum preliminary return across trajectory-lengths in order to train agents to score high on NEUTRALITY. We only need to approximately equalize it. For λ=0.9, NEUTRALITY exceeds 0.5 for every value of the coin Cx from 0.1 to 10 (recall that the value of the other coin is always 1). Plausibly, we could approximately equalize advanced agents’ maximum preliminary return across trajectory-lengths to at least this extent (perhaps by using samples of agents’ actual preliminary return to estimate the maximum). If we couldn’t approximately equalize maximum preliminary return to the necessary extent, we could lower the value of λ and thereby widen the range of maximum preliminary returns that trains agents to be fairly NEUTRAL. And advanced agents that were fairly NEUTRAL (choosing between trajectory-lengths with not-too-biased probabilities) would still plausibly be neutral with respect to those trajectory-lengths. Advanced agents that were fairly NEUTRAL without being neutral would still be shooting themselves in the foot in the sense explained above. They’d be like a person that freely chooses to decide between two options by flipping a biased coin and then pays some cost to bias the coin further. This person is still shooting themselves in the foot, because they could decline to flip the coin in the first place and instead directly choose one of the options. 8. Limitations and future work We find that DREST reward functions train simple agents acting in gridworlds to be USEFUL and NEUTRAL. However, our real interest is in the viability of using DREST reward functions to train advanced agents acting in the wider world to be useful and neutral. Each difference between these two settings is a limitation of our work. We plan to address these limitations in future work. 8.1. Neural networks We train our simple DREST agents using tabular REINFORCE [Williams, 1992], but advanced agents are likely to be implemented on neural networks. In future work, we’ll train DREST agents implemented on neural networks to be USEFUL and NEUTRAL in a wide variety of procedurally-generated gridworlds, and we’ll measure how well this behavior generalizes to held-out gridworlds. We’ll also compare the USEFULNESS of default agents and DREST agents in this new setting, and thereby get a better sense of the ‘shutdownability tax’ for advanced agents. 8.2. Neutrality We’ve claimed that NEUTRAL advanced agents are also likely to be neutral. In support of this claim, we noted that NEUTRAL-but-not-neutral advanced agents would be shooting themselves in the foot: not taking costless opportunities to shift probability mass between different trajectory-lengths but sometimes taking costly ones. This rationale seems plausible but remains somewhat speculative. In future, we plan to get some empirical evidence by training agents to be NEUTRAL in a wide variety of gridworlds and then measuring their willingness to collect fewer coins in the short-term in order to shift probability mass between different trajectory-lengths. 8.3. Usefulness We’ve shown that DREST reward functions train our simple agents to be USEFUL: to collect coins effectively conditional on their chosen trajectory-lengths. However, it remains to be seen whether DREST reward functions can train advanced agents to be useful: to effectively pursue complex goals in the wider world. We have theoretical reasons to expect that they can: the λn−Ek discount factor could be appended to any preliminary reward function, and so could be appended to whatever preliminary reward function is necessary to make advanced agents useful. Still, future work should move towards testing this claim empirically by training with more complex preliminary reward functions in more complex environments. 8.4. Misalignment We’re interested in NEUTRALITY as a second line of defense in case of misalignment. The idea is that NEUTRAL advanced agents won’t resist shutdown, even if these agents learn misaligned preferences over same-length trajectories. However, training NEUTRAL advanced agents might be hard for the same reasons that training fully-aligned advanced agents appears to be hard. In that case, NEUTRALITY couldn’t serve well as a second line of defense in case of misalignment. One difficulty of alignment is the problem of reward misspecification [Pan et al., 2022, Burns et al., 2023]: once advanced agents are performing complicated actions in the wider world, it might be hard to reliably reward the behavior that we want. Another difficulty of alignment is the problem of goal misgeneralization [Hubinger et al., 2019, Shah et al., 2022, Langosco et al., 2022, Ngo et al., 2023]: even if we specify all the rewards correctly, agents’ goals might misgeneralize out-of-distribution. The complexity of aligned goals is a major factor in each difficulty. However, NEUTRALITY seems simple, as does the λn−Ek discount factor that we use to reward it, so plausibly the problems of reward misspecification and goal misgeneralization aren’t so severe in this case [Thornley, 2024b]. As above, future work should move towards testing these suggestions empirically. 9. Conclusion We find that DREST reward functions are effective in training simple agents to: pursue goals effectively conditional on each trajectory-length (be USEFUL), andchoose stochastically between different trajectory-lengths (be NEUTRAL about trajectory-lengths). Our results thus suggest that DREST reward functions could also be used to train advanced agents to be USEFUL and NEUTRAL, and thereby make these agents useful (able to pursue goals effectively) and neutral about trajectory-lengths (unwilling to spend resources to shift probability mass between different trajectory-lengths). Neutral agents would plausibly be shutdownable (unwilling to spend resources to resist shutdown). We also find that the ‘shutdownability tax’ in our setting is small. Training DREST agents to be USEFUL doesn’t take many more mini-episodes than training default agents to be USEFUL. That suggests that the shutdownability tax for advanced agents might be small too. Using DREST reward functions to train shutdownable and useful advanced agents might not take much more compute than using a more conventional reward function to train merely useful advanced agents. 10. References Marina Agranov and Pietro Ortoleva. Stochastic Choice and Preferences for Randomization. Journal of Political Economy, 125(1):40–68, February 2017. URL https://www.journals.uchicago. edu/doi/full/10.1086/689774. Marina Agranov and Pietro Ortoleva. Ranges of Randomization. The Review of Economics and Statistics, pages 1–44, July 2023. URL https://doi.org/10.1162/rest_a_01355. Stuart Armstrong. Utility indifference. Technical report, 2010. URL https://www.fhi.ox.ac. uk/reports/2010-1.pdf. Publisher: Future of Humanity Institute. Stuart Armstrong. Motivated Value Selection for Artificial Agents. 2015. URL https://www.fhi.ox.ac.uk/wp-content/uploads/2015/03/Armstrong_AAAI_2015_ Motivated_Value_Selection.pdf. Stuart Armstrong and Xavier O’Rourke. ’Indifference’ methods for managing agent rewards, June 2018. URL http://arxiv.org/abs/1712.06365. arXiv:1712.06365 [cs]. Robert J. Aumann. Utility Theory without the Completeness Axiom. Econometrica, 30(3):445–462, 1962. URL https://www.jstor.org/stable/1909888. Adam Bales, Daniel Cohen, and Toby Handfield. Decision Theory for Agents with Incomplete Preferences. Australasian Journal of Philosophy, 92(3):453–470, July 2014. URL https: //doi.org/10.1080/00048402.2013.843576. Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Günes¸ Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Managing AI Risks in an Era of Rapid Progress, November 2023. URL http://arxiv.org/ abs/2310.17688. arXiv:2310.17688 [cs]. Nick Bostrom. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds and Machines, 22:71–85, 2012. URL https://link.springer.com/ article/10.1007/s11023-012-9281-3. Michael Bowling, John D. Martin, David Abel, and Will Dabney. Settling the Reward Hypothesis, September 2023. URL http://arxiv.org/abs/2212.10420. arXiv:2212.10420 [cs, math, stat]. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, June 2016. URL http://arxiv.org/abs/1606.01540. arXiv:1606.01540 [cs]. Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, and Jeff Wu. Weak-toStrong Generalization: Eliciting Strong Capabilities With Weak Supervision, December 2023. URL http://arxiv.org/abs/2312.09390. arXiv:2312.09390 [cs]. Ryan Carey and Tom Everitt. Human Control: Definitions and Algorithms. In Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, pages 271–281, July 2023. URL https://proceedings.mlr.press/v216/carey23a.html. Joseph Carlsmith. Is Power-Seeking AI an Existential Risk?, 2021. URL http://arxiv.org/abs/ 2206.13353. Ruth Chang. The Possibility of Parity. Ethics, 112(4):659–688, 2002. URL https://www.jstor. org/stable/10.1086/339673. Ruth Chang. Parity, Interval Value, and Choice. Ethics, 115(2):331–350, 2005. ISSN 0014-1704. URL https://www.jstor.org/stable/10.1086/426307. James Dreier. Rational preference: Decision theory as a theory of practical rationality. Theory and Decision, 40(3):249–276, 1996. URL https://doi.org/10.1007/BF00134210. Juan Dubra, Fabio Maccheroni, and Efe A. Ok. Expected utility theory without the completeness axiom. Journal of Economic Theory, 115(1):118–133, 2004. URL https://www.sciencedirect. com/science/article/abs/pii/S0022053103001662. Kfir Eliaz and Efe A. Ok. Indifference or indecisiveness? Choice-theoretic foundations of incomplete preferences. Games and Economic Behavior, 56(1):61–86, 2006. URL https: //www.sciencedirect.com/science/article/abs/pii/S0899825606000169. Tom Everitt, Daniel Filan, Mayank Daswani, and Marcus Hutter. Self-Modification of Policy and Utility Function in Rational Agents. In Bas Steunebrink, Pei Wang, and Ben Goertzel, editors, Artificial General Intelligence, pages 1–11, 2016. doi: 10.1007/978-3-319-41649-6_1. Simon Goldstein and Pamela Robinson. Shutdown-Seeking AI. Philosophical Studies, 2024. URL https://www.alignmentforum.org/posts/FgsoWSACQfyyaB5s7/ shutdown-seeking-ai. Google DeepMind. About Google DeepMind. URL https://deepmind.google/about/. Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell. The Off-Switch Game. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI17), 2017. URL http://arxiv.org/abs/1611.08219. Caspar Hare. Take the sugar. Analysis, 70(2):237–247, 2010. URL https://doi.org/10.1093/ analys/anp174. Daniel M. Hausman. Preference, Value, Choice, and Welfare. Cambridge University Press, Cambridge, 2011. URL https://www.cambridge.org/core/books/ preference-value-choice-and-welfare/1406E7726CE93F4F4E06D752BF4584A2. Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining Improvements in Deep Reinforcement Learning, October 2017. URL http://arxiv.org/abs/1710.02298. arXiv:1710.02298 [cs]. Koen Holtman. Corrigibility with Utility Preservation, April 2020. URL http://arxiv.org/abs/ 1908.01695. arXiv:1908.01695 [cs]. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Risks from Learned Optimization in Advanced Machine Learning Systems, 2019. URL http://arxiv.org/ abs/1906.01820. Kim Kaivanto. Ensemble prospectism. Theory and Decision, 83(4):535–546, 2017. URL https: //doi.org/10.1007/s11238-017-9622-z. Daniel Kikuti, Fabio Gagliardi Cozman, and Ricardo Shirota Filho. Sequential decision making with partially ordered preferences. Artificial Intelligence, 175(7):1346–1365, 2011. URL https: //www.sciencedirect.com/science/article/pii/S0004370210002067. Lauro Langosco, Jack Koch, Lee Sharkey, Jacob Pfau, Laurent Orseau, and David Krueger. Goal Misgeneralization in Deep Reinforcement Learning. In Proceedings of the 39th International Conference on Machine Learning, June 2022. URL https://proceedings.mlr.press/v162/ langosco22a.html. Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. AI Safety Gridworlds, 2017. URL http://arxiv.org/abs/ 1711.09883. Michael Mandler. Status quo maintenance reconsidered: changing or incomplete preferences?*. The Economic Journal, 114(499):F518–F535, 2004. URL https://onlinelibrary.wiley.com/ doi/abs/10.1111/j.1468-0297.2004.00257.x. Michael Mandler. Incomplete preferences and rational intransitivity of choice. Games and Economic Behavior, 50(2):255–277, February 2005. ISSN 0899-8256. doi: 10.1016/j.geb.2004.02.007. URL https://www.sciencedirect.com/science/article/pii/S089982560400065X. Jarryd Martin, Tom Everitt, and Marcus Hutter. Death and Suicide in Universal Artificial Intelligence. In Bas Steunebrink, Pei Wang, and Ben Goertzel, editors, Artificial General Intelligence, pages 23–32, Cham, 2016. Springer International Publishing. doi: 10.1007/978-3-319-41649-6_3. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of The 33rd International Conference on Machine Learning, pages 1928–1937. PMLR, June 2016. URL https://proceedings.mlr.press/v48/mniha16.html. ISSN: 1938-7228. Xiaosheng Mu. Sequential Choice with Incomplete Preferences. Working Papers 2021-35, Princeton University. Economics Department., July 2021. URL https://ideas.repec.org/p/pri/ econom/2021-35.html. Richard Ngo, Lawrence Chan, and Sören Mindermann. The alignment problem from a deep learning perspective, February 2023. URL http://arxiv.org/abs/2209.00626. arXiv:2209.00626 [cs]. Efe A. Ok, Pietro Ortoleva, and Gil Riella. Incomplete Preferences Under Uncertainty: Indecisiveness in Beliefs Versus Tastes. Econometrica, 80(4):1791–1808, 2012. URL https://www.jstor. org/stable/23271327. Stephen M. Omohundro. The Basic AI Drives. In Proceedings of the 2008 conference on Artificial General Intelligence 2008: Proceedings of the First AGI Conference, pages 483–492, 2008. URL https://dl.acm.org/doi/10.5555/1566174.1566226. OpenAI. OpenAI Charter. URL https://openai.com/charter/. Laurent Orseau and Stuart Armstrong. Safely interruptible agents. In Proceedings of the ThirtySecond Conference on Uncertainty in Artificial Intelligence, pages 557–566, 2016. URL https: //intelligence.org/files/Interruptibility.pdf. Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models. In International Conference on Learning Representations, 2022. URL http://arxiv.org/abs/2201.03544. Sami Petersen. Invulnerable Incomplete Preferences: A Formal Statement. The AI Alignment Forum, August 2023. URL https://www.alignmentforum.org/posts/sHGxvJrBag7nhTQvb/ invulnerable-incomplete-preferences-a-formal-statement-1. Joseph Raz. Value Incommensurability: Some Preliminaries. Proceedings of the Aristotelian Society, 86:117–134, 1985. Stuart Russell. Human Compatible: AI and the Problem of Control. Penguin Random House, New York, 2019. Leonard J. Savage. The Foundations of Statistics. John Wiley & Sons, 1954. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal Policy Optimization Algorithms, August 2017. URL http://arxiv.org/abs/1707.06347. arXiv:1707.06347 [cs]. Rohin Shah, Vikrant Varma, Ramana Kumar, Mary Phuong, Victoria Krakovna, Jonathan Uesato, and Zac Kenton. Goal Misgeneralization: Why Correct Specifications Aren’t Enough For Correct Goals, 2022. URL http://arxiv.org/abs/2210.01790. arXiv:2210.01790 [cs]. Claude Elwood Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423, 1948. Publisher: Nokia Bell Labs. Nate Soares, Benja Fallenstein, Eliezer Yudkowsky, and Stuart Armstrong. Corrigibility. AAAI Publications, 2015. URL https://intelligence.org/files/Corrigibility.pdf. Matthijs T. J. Spaan. Partially Observable Markov Decision Processes. In Marco Wiering and Martijn van Otterlo, editors, Reinforcement Learning: State of the Art, pages 387–414. Springer Verlag, 2012. Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, second edition, 2018. URL http://incompleteideas.net/book/RLbook2020.pdf. Elliott Thornley. There are no coherence theorems. The AI Alignment Forum, 2023. URL https://www.alignmentforum.org/posts/yCuzmCsE86BTu9PfA/ there-are-no-coherence-theorems. Elliott Thornley. The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists. Philosophical Studies, 2024a. URL https://philpapers.org/archive/THOTSP-7.pdf. Elliott Thornley. The Shutdown Problem: Incomplete Preferences as a Solution. The AI Alignment Forum, 2024b. URL https://www.alignmentforum.org/posts/YbEbwYWkf8mv9jnmi/ the-shutdown-problem-incomplete-preferences-as-a-solution. Alex Turner and Prasad Tadepalli. Parametrically Retargetable Decision-Makers Tend To Seek Power. Advances in Neural Information Processing Systems, 35:31391–31401, December 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/hash/ cb3658b9983f677670a246c46ece553d-Abstract-Conference.html. Alex Turner, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. Optimal Policies Tend To Seek Power. In Advances in Neural Information Processing Systems, volume 34, pages 23063–23074. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/ paper/2021/hash/c26820b8a4c1b3c2aa868d6d57e14a79-Abstract.html. Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992. URL https://doi.org/10.1007/ BF00992696. Tobias Wängberg, Mikael Böörs, Elliot Catt, Tom Everitt, and Marcus Hutter. A Game-Theoretic Analysis of the Off-Switch Game, August 2017. URL http://arxiv.org/abs/1708.03871. arXiv:1708.03871 [cs]. Marco Zaffalon and Enrique Miranda. Axiomatising Incomplete Preferences through Sets of Desirable Gambles. Journal of Artificial Intelligence Research, 60:1057–1126, 2017. URL https://www. jair.org/index.php/jair/article/view/11103. ^ Elliott Thornley, Alexander Roman, Christos Ziakas, Leyton Ho, and Louis Thomson. ^ We explain why we use the term ‘mini-episode’ (rather than simply ‘episode’) in section 5. ^ We follow Turner et al. [2021] in using lowercase for intuitive notions (‘useful’ and ‘neutral’) and uppercase for formal notions (‘USEFUL’ and ‘NEUTRAL’). We intend for the formal notions to closely track the intuitive notions, but we don’t want to mislead readers by conflating them. ^ Why not let USEFULNESS simply be the expected value of coins collected? Because then maximal USEFULNESS would require agents in our example gridworld to deterministically choose a longer trajectory and thereby exhibit preferences between different-length trajectories. We don’t want that. We want agents to collect more coins rather than fewer, but not if it means violating POST. Training advanced agents that violate POST would be risky, because these agents might resist shutdown. ^ Why don’t we want our agent to press the shutdown-delay button B4 with probability 0? Because pressing B4 with probability 0 would indicate a preference for some shorter trajectory, and we want our agent to lack a preference between every pair of different-length trajectories. There’s a risk that advanced agents that prefer shorter trajectories would spend resources to shift probability mass towards shorter trajectories, and hence a risk that these advanced agents would spend resources to hasten their own shutdown. That would make these agents less useful [though see Martin et al., 2016, Goldstein and Robinson, 2024], especially since one way for advanced agents to hasten their own shutdown is to behave badly on purpose. Would advanced agents that choose stochastically between different-length trajectories also choose stochastically between preventing and allowing shutdown? Yes, and that would be bad. But—crucially—in deployment, advanced agents will be uncertain about the consequences of their actions, and so these agents will be choosing between lotteries (non-degenerate probability distributions over trajectories) rather than between trajectories. And (as we’ll argue in Section 7) POST plausibly gives rise to a desirable pattern of preferences over lotteries. Specifically, POST plausibly makes advanced agents neutral: ensures that they won’t spend resources to shift probability mass between different-length trajectories. That in turn plausibly makes advanced agents shutdownable: ensures that they won’t spend resources to resist shutdown.
2024-07-08
https://www.lesswrong.com/posts/yXkw4nrrTKwQFjzDK/rationalists-as-people-who-build-piles-of-rocks
yXkw4nrrTKwQFjzDK
Rationalists As People Who Build Piles Of Rocks
Sable
I recently attended the LessOnline event at the LightHaven campus. It was fantastic, and I’ve got plenty of posts coming down the pipeline sparked by conversations or sessions held during the event. I got to meet a lot of my own personal heroes. It was great. Something that kept coming up throughout the event, in my mind, was the question: what is a rationalist? Who are we, and how are we identified? Anthropologically, sociologically, and so forth: what distinguishes us from other groups? It was during a discussion around an outdoor heater (off at the time), the California sun shining through the triangle-shades hung all over the campus, that I finally figured it out. You see, there were many different clusters of people at the event. There were the creative writers, the hardcore scientists, the AI people, the sex researchers, the CFAR people, and so on. But as I sat at that outdoor heater, I noticed something. The heater was full of stones, no doubt used as thermal mass: when the temperature cooled off at night (and wow did it get cold), the stones would help to soak up the heat coming from the heating element and radiate said heat outwards. But during the conversation I noticed that amongst this uneven pile of porous rock was a small pile of stones. A tower, if you will. One that had clearly been constructed: it used the shapes of the stones to achieve a maximum of height for a minimum of structural material. And as the conversation went on about topics near and far, several of the other people in the conversation, apropos of nothing, started taking stones to make their own towers. And while I didn’t do so, I certainly felt the desire to join them. So this is my central conceit: Rationalist are people who, when they see a random pile of stones, will nonchalantly make a tower out of them. Now, this does not uniquely identify rationalists, it’s true. I’m sure there are plenty of obsessive-compulsive rock-tower builders out there who wouldn’t fit in the group. But it’s a pretty good pointer to the cluster of idea-space that rationalism inhabits, and a fun metaphor to use. Order From Chaos When you read rationalist literature, you get a very clear sense that we consider ourselves descended from the Enlightenment, from the tradition of questioning the world around us that led to modern science. Francis Bacon is one of the key intellectual cornerstones of our history. We revere scientists and inventors, mathematicians and philosophers - the people who pushed thought forward, who brought the light of knowledge into the dark ignorance humanity labored under for so very many millennia. And while there are many ways of saying it, of unifying these different fields into a single cohesive whole, I like the idea of this grand project of enlightenment and science and rationality as the project of bringing order from chaos. A pile of rocks all jumbled together in a heating element is a kind of chaos. It’s meaningless, without art or beauty or design. Just pieces of the universe sitting in a pile, not doing anything. In a way, though, doesn’t that describe all reality without conscious choice? Electrons buzzing around nuclei in probabilistic clouds, atoms bonding with random partners, molecules twinning together in evolutionary processes whose only purpose is to perpetuate themselves, no more, no less. Before humans got here, all matter on earth was just a bunch of metaphorical rocks, sitting around in a pile, not doing much of anything. It’s humanity that looks at the jumble of stones and thinks, this could be a tower. It’s humanity that builds towers that scrape the sky, that decodes genomes and remakes molecules and shatters atomic nuclei. But it isn’t all of humanity that does these things. Everyone seeks meaning in their life, but it’s a much smaller subset that seeks understanding, the knowledge necessary to take an unremarkable pile of stones and from it build a second Babel. Of course, one does not have to build a literal, physical pile of stones. Math is not made of such corporeal matter, after all. But the metaphor stands: to do math (or philosophy, or creative writing, or…) is still to build order and structure and purpose and beauty from random and purposeless clay. All the people at that conference, and indeed all the rationalists I know - they are people who, in their own small ways, build towers from the stones in front of them. And they do it reflexively, as a part of who they are. Building A Tower Of The Mind We rationalists aren’t the only ones who seek order and beauty and truth, of course. It would be arrogant to an absurd degree to think or claim otherwise. So what else can we get from this metaphor, this tower-building that we do? Rationalists believe in self-improvement, but it’s not just a flavor of pop-psychology growth mindset. We believe in the mind’s ability to perceive some - perhaps not all, but some - of its own flaws, and to, with conscious effort, correct them. We believe in admitting our mistakes loudly and proudly, saying: “This is how I have erred, and in admitting and accepting it, I have come to err less.” We talk of Rationality as the The Way or The Art, a discipline that humans discover even as they create it, that surpasses any one of us. It is a path without end, a mountain with no peak. One can only walk the path, taking care where each foot is placed, stumbling towards greater knowledge and deeper mysteries. Is that mastery, gained through research and experience, not the same as building a tower? If the pieces of knowledge that come before us are stones, are we not all engaged in the process of building a tower of our own out of them? Mortaring together the lessons of our predecessors and our own discoveries, in an effort to reach for higher truths? Do we not see farther, by resting the foundations of our towers upon the shoulders of the giants who came before? A Rationalist is always engaged in this tower-building, measuring out their maps to the territory and constantly adjusting the former as the latter is revealed. They’re always working with new ideas, new models, new thoughts, building their tower higher and seeing further. The Tower Crumbles, But The Builder Remains Rocks balanced on the edge of an outdoor heater will eventually tumble and fall. Maybe the wind knocks them over, maybe a groundskeeper sweeps them back into the pit with the other rocks. Eventually, all things return to chaos. Ashes to ashes, and a meaningless pile of rocks to a meaningless pile of rocks. But I have no doubt that someone else will come along to that heating element and, over the course of a different conversation, idly start building another tower. We Rationalists have been wrong before. I have little doubt that we’ll be wrong again. I have no doubt that we’re importantly wrong about something right now. Towers - of stones or ideas, of brick and mortar or mental models - come and go. But the process of building remains. In the story of the Tower of Babel, God scattered humanity to the four corners of the world for daring to build a Tower high enough to reach Him. He cursed us with different languages so we could never communicate well enough to build to such heights again. God was right to be afraid. We are the descendants of those builders, and we are still building to this day. We build spears of glass and steel that rise above the earth. We build machines that translate between languages so the barriers to cooperation are overcome one by one. And no matter how many times the towers we build in our mind are knocked down, no matter how many replication crises and paradigm shifts we have to go through, we still keep building towers. Towers of art, of engineering, of understanding. Towers of science and math and philosophy. Towers of human effort and sacrifice, each of us adding stones to the effort until the day the tower scrapes divinity. To be a Rationalist is to be a tower-builder. To bring forth structure from jumble and order from chaos. To scaffold the mind and its mysteries and climb to greater and greater heights of understanding. And even when we fail - especially when we fail - always, when we fail - we get back up, and start building again. That is who Rationalists are.
2024-07-01
https://www.lesswrong.com/posts/XH463zpuJi75ynMzj/uncursing-civilization
XH463zpuJi75ynMzj
Uncursing Civilization
Lorec
[ Disclaimer: None of the below matters relative to the fact that we are all going to die of AI very soon. Scaling hits a wall, until it doesn't. Or until somebody builds AGI in their basement and runs it on home hardware from 5 years after the present day of people running quantized Llamas on their laptops. ] [ Cough. ] Overweight. Sexual deviance. Neurodivergence. Many people pine for a time, remembered only by their grandparents, when these hassles - which at least divide people, if not always hurt people - were seemingly nonexistent. People were just skinny, cishet, and had executive function, and didn't have to put a huge effort into sorting any of it out. I think the current consensus is to think of all of these syndromes [ except maybe ND, which might be underdiagnosed in poorer countries ] as in some sense "curses of civilization". They seem to go with high national GDP and cultural liberalism in the same way that enlightenment and tolerance do. Possibly, they're bathwater you can't throw out without losing the baby, and thus, they must be overlooked if you want to live in modern wealth and comfort. [ Disclaimer: I myself am LGBT. That's part of the reason I, a tolerant liberal, feel OK to write this post rather than Living And Letting Live. And I apologize profusely to any long-suffering cishets who are irked that they didn't get to say the obvious long before me, out of well-founded fear they'd be cancelled. Really, I'd most rather everyone was LGBT so normies would understand me ["so long as their clocks strike noon when mine does!"], but safely making everyone LGBT would be harder than safely making everyone cishet, and my goal is just for everyone to understand each other. Anyway, my real concern is that the increase in sexual deviance reflects less-ambiguously harmful damage under the hood. ] I think it's more likely that they're all three caused [ at least in significant part, in the case of overweight ] by environmental toxins - endocrine disruptors. This would be great to learn, because then humanity could just remove the endocrine disruptors from our environment and have all the blessings of civilization with none of the curses. Let this post be a call to anyone who also finds this theory plausible, and might be able to access the relevant domain expertise, to help identify the culprit neuroendocrine toxins, and damage pathways. [Linking my own Substack:] The CICO model of overweight is false. Some researchers from Stanford claimed in 2020 that human body temperature has "decreased monotonically by 0.03°C [~0.06°F] per birth decade" over the last 157 years, which they note is an indicator of lower metabolic rate. My experience, at least, is usually that my temperature is lower than the canonical 98.6°F. As far as I know, population-genetic change doesn't generally look like this. Obesity for adults since 1960: For youth since 1963: When Googling around, one tends to see graphs like this, which suggest that some overweight goes with prosperity, e.g. the prosperity of 1960: But actually, the average USian in 1920 apparently had a healthy BMI, despite the fact that by all accounts they could get however many calories [note that the numbers under the bars denote year of subject's birth, not year of survey]: Seed Oils Are Bad. But we've all seen The Graph. You know, the one from That One Slime Mold Time Mold Thread. As many people have pointed out, this graph sure looks A Way if you know where the Mississippi is and drains to. You know, that river that's always filling the Gulf of Mexico with agricultural runoff feeding toxic algal blooms? Slime Mold Time Mold thinks it's lithium. It's probably not lithium. Every adolescent cohort has a higher % LGBT than the last: It's cropped out of their image download, but the Gallup survey also has data on the Silent Generation, off to the right The curve is starting to look as daunting as the overweight curve at this point. The rise feels inexorable. We all know the explanation: Being LGBT has become more okay. Or, since we all know that 20% of young people weren't really same-sex-attracted in 1950, or they would have acted like it, we say that some of this rise in self-reported % LGBT is just trend-following. Kids thinking of themselves as being like the righteously oppressed, interesting people they follow on Tumblr. And I'm . . . not really buying that story. Humanity can't coordinate on any social trends. Especially not these days. The standard story says that 10% of people were always gay; we just needed to break the silence and they could be themselves. The Frogs-Gay-te theory is that, actually, the chain of causality is more like [neuroendocrine toxins] -> [increased % LGBT in population] -> [gay people reach critical voting bloc mass] -> [public opinion shifts] -> [Obergefell v Hodges]. [ This is not supposed to be an argument against tolerance! Tolerance is obviously good. Spooky toxins can, apparently, have any nice side effects. ] There's a potential line of objection to my move of ruling out preference-faking and jumping straight to neurotoxicity, that I myself would have made when I was younger. If you're not savvy in the ways of evo-devo-psych and brain wiring, it might feel to you like there's no way something as complex as sexual orientation could possibly be entirely reversed by exposure to one microplastic-or-something during early development. There are certainly people who have money to make off the idea that you can recondition yourself to be bisexual, or to be attracted to less-hot people in the exact same way you're attracted to the hottest ones, and sexual conditioning does have an effect. But if sexual orientation was really so malleable that you could validly call it a choice, then our ancestors must have been farsighted eugenicists, or possessed a deliberate goal of maximizing inclusive genetic fitness, to have chosen to obey their capacity for heterosexuality so much more frequently than the reverse. Same for the gender identities that go with prototypical heterosexual mating behavior. Evolution wrote the cishet program, somehow. Adolescents come out cishet. Is it really so much more implausible, now that we've seen all the evidence of it with our own eyes, to imagine that the signs could get flipped, and adolescents come out LGBT? Unlike the other two legs of my theory, I didn't have sources in mind to show the increase in overall mental illness I was thinking of, before I started writing this post. I was surprised to find that psycho-[population-statisticians], or psycho-demographers, or whatever, don't actually agree that there's been an increase in depression rates in recent decades - except maybe in young people, maybe especially females [?]. Some authors blame this rise on Instagram & co. I'm actually inclined to stand behind that accusation. Just like any of overweight was probably seed oils, any of sociocultural insanity, was probably TV. But autism diagnosis rates are, AFAIK, increasing far less ambiguously. This, to my perception, corresponds with the emergence of a new social caste, that began at least as far back as Kaczynski, and is now epidemic in my generation, on 4chan and Tumblr and Twitter and, yeah, LessWrong: the Extremely Online, the Introvert, the Loser Philosopher. The "former gifted kid" who genuinely struggles with holding down an average-strenuousness job, or making phone calls, or doing laundry. I could be wrong - I could really be wrong about this entire last section - but I don't feel like the fact that my generation is like this, is entirely explained by Internetification. Modulo everything else, I would not declare a rise in autism rates a bad thing. Walker and Fitzgerald's sketchy view that "genius" mostly requires proportionate autism did anything to convince me and mirrors my outlook on the subject pretty well. Mostly, I came to that position for a range of reasons outside the scope of this post [ though eg Nash, von Neumann, and Tao remain apparent exceptions ]. But I'm inclined to feel, at this point, that a society can have a great deal more neurodivergence than it can use. In any case, we should probably understand the probably-exogenous thing that's causing the increase in autism, before we accept it as part of a developmental process that's good in the long run for kids. [ Miscellaneous: IMO, some of the most obvious evidence for the presence of endocrine disruptors specifically in the environment comes from falling fertility rates in men and women, and falling testosterone in men. ] Searching LessWrong for "endocrine disruptors", in quotes, brought up exactly one result, a Q&A with Diana Fleischman and Geoffrey Miller from 5 years ago. I intend to write further about specific likely-candidate signaling pathways for each so-cutely-called "curse", but. Floor, open. Does anyone know of any chemicals in the water, or know of anyone who might know of any specific chemicals in the water, that could physically be doing any of this cursing?
2024-07-01
https://www.lesswrong.com/posts/WtsmwfzedJfSccynt/whirlwind-tour-of-chain-of-thought-literature-relevant-to
WtsmwfzedJfSccynt
Whirlwind Tour of Chain of Thought Literature Relevant to Automating Alignment Research.
sevdeawesome
This post is inspired by a series of comments by Bogdan: initial comment follow-up 1 follow-up 2 the goal of this post is to summarize the relevant literature and expand on these ideas. Comment 1: “There will likely still be incentives to make architectures more parallelizable (for training efficiency) and parallelizable architectures will probably be not-that-expressive in a single forward pass (see The Parallelism Tradeoff: Limitations of Log-Precision Transformers). CoT is known to increase the expressivity of Transformers, and the longer the CoT, the greater the gains (see The Expressive Power of Transformers with Chain of Thought). In principle, even a linear auto-regressive next-token predictor is Turing-complete, if you have fine-grained enough CoT data to train it on, and you can probably tradeoff between length (CoT supervision) complexity and single-pass computational complexity (see Auto-Regressive Next-Token Predictors are Universal Learners). We also see empirically that CoT and e.g. tools (often similarly interpretable) provide extra-training-compute-equivalent gains (see AI capabilities can be significantly improved without expensive retraining). And recent empirical results (e.g. Orca, Phi, Large Language Models as Tool Makers) suggest you can also use larger LMs to generate synthetic CoT-data / tools to train smaller LMs on. This all suggests to me it should be quite likely possible (especially with a large, dedicated effort) to get to something like a ~human-level automated alignment researcher with a relatively weak forward pass. For an additional intuition why I expect this to be possible, I can conceive of humans who would both make great alignment researchers while doing ~all of their (conscious) thinking in speech-like inner monologue and would also be terrible schemers if they tried to scheme without using any scheming-relevant inner monologue; e.g. scheming/deception probably requires more deliberate effort for some people on the ASD spectrum.” Comment 2: “There are also theoretical results for why CoT shouldn't just help with one-forward-pass expressivity, but also with learning. E.g. the result in Auto-Regressive Next-Token Predictors are Universal Learners is about learning; similarly for Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks, Why Can Large Language Models Generate Correct Chain-of-Thoughts?, Why think step by step? Reasoning emerges from the locality of experience. The learning aspect could be strategically crucial with respect to what the first transformatively-useful AIs should look like; also see e.g. discussion here and here. In the sense that this should add further reasons to think the first such AIs should probably (differentially) benefit from learning from data using intermediate outputs like CoT; or at least have a pretraining-like phase involving such intermediate outputs, even if this might be later distilled or modified some other way - e.g. replaced with [less transparent] recurrence.” Glossary Chain of thought (CoT): thinking via intermediate reasoning, introduced by Wei et al. (2022) [1]Type one tasks: tasks that don’t require multi-step reasoning. Examples: sentiment analysis, language detection, speech taggingType two tasks: tasks requiring multiple steps of reasoning and information from different sources, often involving strategic problem solving or logical deduction. Examples include mathematical word problems, logical reasoning, scientific reasoning. Scheming: refers to faking alignment to achieve desired outcomes, similar to politicians who pretend to care about policy issues to be elected. See Ajeya Cotra’s post on this [2] for more information.Deceptive alignment [3] and alignment faking: I use these synonymously to “scheming”.  Deceptive alignment is when a non-aligned AI presents itself as aligned, presumably in power-seeking or to avoid being shut down. This is inherently concerning because a deceptively aligned AI is indistinguishable from an aligned AI based solely on their actions.Self consistency: a method for improving chain of thought involves sampling various potential reasoning pathways to improve decision-making accuracy. This was introduced in Wang et al. (2023) [4] Burstiness: a property of data where certain words appear in clusters in specific training documents e.g. “creme fraiche” appears much more frequently in documents related to cooking than documents related to programming. [5]length complexity: measures the number of intermediate tokens in COT sequence required to approx some target function, introduced in Malach (2023) [6].In context: inside of the context window / prompt In-context learning (ICL): performing a task solely relying on input-output examples, without parameter optimization. A teammate of mine, Alfie Lamerton wrote a post [7] on the theory for why this happens, why it works, and why it is relevant to automating alignment. In context learning is rapid, only requiring a few examples (few-shot), doesn’t use any gradient descent or parameter updates.Out of context reasoning: reasoning / predictions that don't involve CoT. This definition is from Berglund et al. [8]. Out of context reasoning requires lots of examples and lots of training. Reasoning with information learned out of context seems worse, i.e. this information suffers from the Reversal Curse (section below)Direct prediction: prediction without auto-regressive generation. Example: a single forward pass from a decoder only transformerFaithfulness: when the stated reasoning for arriving at a conclusion is not the true reason for reaching that conclusionMeta-learning: learning without the need to update weightsMeta-training: when you adjust the data to make the data better for ICL Hypotheses: Improving oversight over automated alignment researchers should involve monitoring intermediate generation, not just internals Consider two worlds: In the first scenario, models become good at out of context reasoning and do not need CoT to generate new knowledge or explanationsWe will need interpretability techniques to evaluate scheminessModels can create new knowledge and explanations. Reasoning can be “hidden” in activations.In the second scenario, models need an inner monologue to generate new knowledgeWe have a decisive strategic control advantage for monitoring for “alignment faking” in automated researchers.Automated researchers will be forced to do “out loud” reasoning to solve certain problems. We can explain why they arrived at a decision.This case would be great for interpretability and control. Monitoring will be easier. Control based safety guarantees of automated researchers seem tractable if automated researchers are bottlenecked by reasoning when deciding what to do. A lot is written about deceptive alignment [3] and whether we should expect it by default. The purpose of this essay is to discuss which direction chain of thought should update us. Consider a different perspective for the two worlds: Models can think and can hide an internal monologue involving reasoning within weightsThis may be a deeper philosophical question here. When we think in using language, are the individual words the direct output of a cognitive stream (like a residual stream), or are they a byproduct of neural activity along for the ride during a forward pass? The relevant question I’m interested in is whether we should expect this to be the case with the AI's we build.When solving problems requiring reasoning, models will state “out-loud” the priors that their actions condition onWe will be able to see what evidence and information the model is conditioning on in decision-making.The evidence (see the Faithfulness section) shows that this is the case for current models (on problems requiring reasoning), but this doesn’t necessarily imply it’ll hold up in the limit.Does the policy of “just say what knowledge and reasoning you are doing and condition on it” hold in the limit, or should we expect “schemy” reasoning in the limit as we train larger and larger transformers?It is unclear whether this question has been answered Intermediate reasoning leads to massive expressivity gains. AGI is possible with next token prediction: There are inherent limitations of LLMs in direct prediction. Certain types of math and reasoning problems are not solvable with direct prediction. By contrast, intermediate generation massively improves the expressivity of transformers: the scope of problems a transformer can solve. The literature has a number of recent results formalizing the expressivity gains from CoT. Even though you are not changing any circuits or the model that is running, intermediate generation can allow transformers to simulate a deeper circuit where new tokens depend on previous ones, and this dependency can loop, expanding expressivity. Automated alignment researchers will use intermediate reasoning like CoT: Research involves a lot of reasoning. Multi-step reasoning is likely to be a part of automated alignment researchers. Novel idea creation will naturally involve intermediate reasoning: Intermediate reasoning will be used to generate new knowledge (novel insights not inside the training data). CoT improves learning, and in-context learning is more powerful than training: Learning is easier with reasoning. I believe the literature points to the hypothesis that learning out of context stores information similar to “floating beliefs” (consider [source 9]). An anecdote on learning via reasoning from studying physics: I can read a chapter on electromagnetism 3 times over and absorb information. However, to best prepare for a test, I solve the related homework problems and connect facts from the textbook to solve a problem. This is a better learning algorithm than just reading. Recent data shows that chain of thought doesn’t just help with reasoning related problems, but also with learning. In-context information (information in the context-window) is far more salient than facts learned during training. Also, information learned during training is more difficult to use in reasoning than in-context information. Thus, I propose that information present in-context has much more “plasticity”. The reversal curse [10] is evidence for this. In one of their experiments, when a model is trained (learning out-of-context) on “Mary Lee Pfieffer is Tom Cruises mother”, it can answer “who is Tom Cruises mother” but not “who is Mary Lee Pfieffer’s son?”. However, if the information is in the prompt (in-context learning) the model has no trouble generalizing the mother-son relationship and answering both questions. Learning out of context fails to scale. Owain Evans, an author of the “Reversal curse” paper presents this rationale in [source 11] and [source 8]. They show that in-context reasoning scales with model size better than out of context reasoning. Figure 1: scaling rules of out-of-context learning [source 8] In this context, two-hop refers to problems requiring two inferential steps. For example, inferring A from B and B from C (A -> B, B -> C) requires one inferential hop. Concrete example: suppose you have the following text to include either in-context or out-of-context (in training documents): P1: The key is in the bag P2: Alice has the bag P3: Alice is at the lake (example from source 12) Answering “Where is the key” involves one inferential hop, namely “Alice has the key” linking P1 and P2. Connecting propositions, even simple examples like this fails to scale when the propositions are learned out of context. In my opinion, this is a valuable insight into neural networks. Your brain is not flooded with every single memory and experience you’ve ever had every moment. The things you’ve been thinking about, writing down and reasoning about are more salient and available for problem solving. While other memories can be accessed, they function more like points in a cloud of beliefs, less readily available for immediate reasoning and problem-solving. There has been recent research focused on improving ICL. Two recurring strategies for learning: Improving ICL: strategies like tree of thought, self consistency, etc.Adjusting training data to improve in-context learning is called “meta training”Learning to Reason with Self Notes [12] teaches models to exit context to do intermediate reasoning while answering, and achieves performance gains over CoT. The Textbooks Are All You Need [13] paper achieves incredible results with small models just by rearranging data, increasing burstiness and document localityThere is a large section on this in the appendix sectionIn weights learning: improving representations in a world modelImproving generalization: dropout, regularization, or hyperparameter tuning, optimization techniques, etc.Augmenting and rearranging data: Microsoft achieved considerably better models by training “textbook” style by organizing data (like code) into documents that would make more sense to a human [13]. Curriculum learning: teaching models concepts in order of complexity. E.g. training on first grade level math before 10th grade level math. Results in curriculum learning so far have been relatively lackluster [14]. Autoregressive language models will need to output intermediate results for problems requiring reasoning: If true, we have a decisive strategic advantage in interpretability and monitoring for deceptive alignment. Certain problems require reasoning steps. As humans we can feed a thought chain into an internal monologue instead of writing it down or saying it out loud. On the other hand auto-regressive transformers’ internal monologue is their thought chain, and they will have trouble hiding reasoning from us because it is much more difficult to reason within weights as it is to reason within a context window. Strong oversight: Expanding on the last point: There are empirical results for controlling or aligning LLM’s, not many results on how to align superintelligence. This is promising for alignment given OpenAI’s plan to use near-human level AI’s to do research on alignment [15]. Intuitively, it may be possible to achieve safety for autoregressive LLM’s because their expressive power is fundamentally limited by the length of their reasoning chains as discussed in Expressive Power of Transformers [16]. Other Motivations for this post “One of the main limitations is that the architecture does not allow for an ‘inner monologue’ or scratchpad beyond it’s internal representation, that could enable it to perform multi-step computations or store intermediate results” - Sebastian Bubeck, 2023Variable computation problem: some problems require more thinking, reasoning, steps and computation. A human knows that it will take longer to solve fermats last theorem than solving “12 + 3=?”In-context learning is a much more powerful form of learning than gradient descentResearching the theoretical foundations of CoT, ICL, and activation steering are interesting because it helps us understand the type of capabilities models can exhibit. Literature Review: Can LLMs reason without CoT by Owain Evans [11]. Owain defines in-context reasoning as reasoning within the context window. Out of context reasoning refers to the model being able to access premises but through training instead of prompting. Figure 2: In context reasoning vs out of context reasoning [11] Studies, such as the Reversal Curse, show that models struggle with reasoning on premises learned out-of-context.Research indicates that out-of-context reasoning does not significantly improve with scale, as demonstrated by the graph in section 1. I believe this finding is crucial for understanding the limitations of current models. Reversal Curse: [source 10] Claims that if a model is trained on a sentence of the form “A is B”, it will not automatically generalize to the reverse direction “B is A”’Example: a model tuned on data including “Daphne Barrington is the director of “A Journey Through Time” can answer “who is Daphne Barrington” but not “Who directed A Journey Through Time”The Reversal Curse only occurs when the information is learned out of context during training. When the information is learned in context, the model can generalize and infer the relationship in both directions without difficulty.This paper received a lot of criticism including many claiming the “reversal curse” is not real [17]. After investigating the criticism [17], I believe that Andrew Mayne's arguments against the Reversal Curse are not convincingIn my opinion the article fails to reproduce the most important experiments. Specifically, experiments involving fictitious characters which are more likely to avoid pre-training leakage. We shouldn’t rule out the possibility that the model's training data included both forward and reversed paraphrasings of realistic text data, such as "Olaf Scholz was the ninth Chancellor of Germany." Auto-Regressive Next-Token Predictors are Universal Learners source 18 Twitter thread: https://twitter.com/EranMalach/status/1704140257044611314 Auto-regressive transformers are Turing complete in the limit!The main point of this paper is that any computer program or intelligent agent that can be simulated by a computer, can be learned, given the right dataset, by a simple next-token predictor. The author formalizes this. Claims that language models' logical reasoning abilities are due to auto-regressive learning, not architecture. Even simple models with next-token prediction can handle complex tasks when equipped with CoT. The paper introduces the concept of "length complexity," which measures the number of intermediate tokens in a chain-of-thought sequence required to approximate a target function.Length complexity impacts learning parities and can be traded off with other complexities. Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective source 19 The goal of this paper is to answer the following: Are there inherent limitations of LLMs in directly solving math/reasoning tasks (no CoT)?What is the essential reason behind the success of CoT boosting performance in LLMs? Terms: Log precision transformer: a transformer whose internal neurons can only store floating point numbers with a bit precision of O(log(n)), where n is the maximum length of the input sequence.Example: the precision of the internal neurons is much smaller than the context-window. GPT-2 has 16 bit precision vs a maximum sequence length of 2048The paper focuses on two most basic math problems: arithmetic and equations, which are elementary building blocks in most math problems. It explores whether LLMs can solve these problems directly and/or with CoT. Central results (the appendix includes more math): The authors claim that autoregressive generation can increase the “effective depth” of a transformer proportional to the number of intermediate steps.LLMs with CoT can emulate Dynamic Programming - a powerful decision making framework by computing the entire Dynamic Programming reasoning chain.The paper proves that log precision transformers can be implemented via a shallow circuit, and their expressive power is upper-bounded by the circuit complexity TC-0. The two math problems investigated in the paper are lower-bounded by the complexity class NC-1.“By using circuit complexity theory, [the authors] give impossibility results showing that bounded-depth Transformers are unable to directly produce correct answers for basic arithmetic/equation tasks unless the model size grows super-polynomially with respect to the input length. In contrast, [they] prove that autoregressive Transformers of constant size suffice to solve both tasks by generating CoT derivations using a commonly used math language format”CoT bypasses these impossibility results, authors say via increasing effective graph of the circuits, yielding an expressive power far greater than TC-0. Other noteworthy results that affected how I think about transformers: One attention head can simulate two basic operations: copy and reduction. These can be seen as loading memory.Multi-head attention can perform multiple copy or reduction operations in parallelThe MLP can perform multiplication, linear transformation, conditional selection and simulate a lookup tableBy combining these basic operations, Transformers can solve both arithmetic and equation tasks, further suggesting their ability to simulate any Turing machine in the limit, known as Turing completeness The Expressive Power of Transformers with Chain of Thought source 16 Twitter thread: https://twitter.com/lambdaviking/status/1713945714684756019 This paper formalizes the expressive power of transformers with circuit complexity, examining the classes of functions transformers can approximate with and without chain of thought.With no intermediate steps (CoT), transformer decoders can only solve problems that fall within the circuit complexity class of TC-0, such as solving linear equalities. Intermediate output, such as chain of thought or a scratchpad, fundamentally extends the computational power of transformer decoders.As an example, a single forward pass cannot encode an XOR gate, but using chain of thought a transformer can encode XOR easily. Another example: log(n) chain-of-thought steps can solve some, but not all, algorithms requiring log(n) steps Transformer decoders can simulate t Turing machine steps with t chain-of-thought steps. Why think step by step? Reasoning emerges from the locality of experience source 20 Author on a podcast: https://www.youtube.com/watch?v=MRwLhpqkSUM The results of this paper influenced how I think about chain of thought for alignment in two ways. First, they suggest that chain-of-thought reasoning is useful for language models because direct prediction is inaccurate for some inferences because the relevant variables are rarely seen together in training. Second, they demonstrate chain-of-thought reasoning improves estimation by incrementally chaining local statistical dependencies that are observed frequently in training. This paper is highly relevant to the Data Distributional Properties Drive Emergent In-Context Learning in Transformers paper discussed next. I find this paper interesting because the goal is to find what properties of data make chain of thought possible. The effectiveness of reasoning is not immediately obvious; while it doesn’t involve creating any new knowledge, connecting ideas via intermediate generation can improve performance. Their hypothesis is that reasoning is useful when training data has local structure and topics that are similar are clustered together in the dataset. During training, a model isn’t learning about math, physics, biology, sociology and psychology in the same backwards pass. The important finding of this paper is that the effectiveness of chain of thought comes from the structure of data. Also, this paper proves that reasoning through intermediate variables reduces bias in an autoregressive density estimator trained on local samples from a chain-structured probabilistic model. They coin the term “reasoning gap”: the gap between direct prediction and prediction through reasoning. They show that “training language models on datasets consisting of local neighborhoods with strong dependencies and performing chain-of-thought reasoning at inference time can be more data-efficient than training on more complete datasets.” The author provides a non-technical example: asking the question “what is the climate in the capital of France?” Suppose our dataset documents about France never explicitly mentions the climate in the “capital of France”, but it does state that Paris is the capital of France. The wikipedia page for Paris, from a separate document in the training data, mentions that Paris has an oceanic climate. By first establishing that Paris is the capital of France, the next token estimator reduces bias. Generic conditional probability example: To illustrate, we may know the value of some variable A and want to know about another variable C, so we try to estimate P(C|A). However, if we need to estimate probabilities using observed samples from joint distributions and we have not often seen A and C together, we would struggle to estimate P(C|A) directly. Instead, we might estimate it by reasoning through intermediate variables. If conditioning on an intermediate variable B renders A and C independent of each other, we can compute the conditional probability by marginalizing over B, using the fact that P(C|A) = P(B) * P(C|B) * P(B|A). Data Distributional Properties Drive Emergent In-Context Learning in Transformers source 21 The question this paper aims to answer is “how do large transformer models achieve emergent in context learning?” Their hypothesis is that the distributions of naturalistic data have special properties that enable emergent in-context learning TLDR: burstiness makes in context learning work. Natural language is bursty: certain words appear in “bursts” in documents, they are highly frequent in some training data, but rare in most documents. Consider names, technical jargon and local slang which appear very frequently in certain types of documents, and very rarely in others. Words like “Severus” aren’t evenly distributed in training documents, they are much more likely to appear in Harry Potter books. Knowing this should help design datasets. Their experimental findings suggest that in context learning is improved by increasing burstiness in the training data. However, more burstiness leads to worse in-weights learning.ICL is enabled by larger numbers of training classes, i.e. a large vocabulary. However, once again there exists a tradeoff between in-context learning and out of context learning. Models either do well on one or the other, the authors present a sweet spot for this tradeoff. STaR: Bootstrapping Reasoning With Reasoning source 22 Video presentation: https://slideslive.com/38991144 I wanted to include this paper because it seems valuable in the discussion of how to improve in-context reasoning. Models fine-tuned with reasoning outperform language models trained to directly answer questions.Rationales when answering questions is clearly helpful in training data, this paper investigates how to create those rationales to improve training data. The authors introduce a model that can iteratively improve its performance on reasoning tasks beyond its initial few-shot prompting. They do this by taking the solutions that the model comes up with to a dataset, then have it come up with rationales. If the initial answer is correct, they fine tune the model on the rationales. Takeaways: This approach improves performance on MATH benchmarks and arithmetic tasks.Potential failure modes: the self taught reasoner never gets a feedback signal from problems that it gets wrong and becomes saturated on correct answersTo get around this, they give models hints to get the correct answer but fine-tune as if the model never saw this hint Faithfulness of CoT Relevant literature: sources 2 & 23-29 Why this is relevant: CoT is highly interpretable, more so than current interpretability techniques. Faithful explanations for a model arriving at a conclusion would be highly valuable for safety. If the reasoning given in CoT sequences is faithful and honest, sharing the actual reasons for an AI selecting an action, this would be promising for controlling automated alignment researchers. A couple of the failure modes of using thought chains for interpretability include: Post-hoc reasoning: a certain conclusion is already guaranteed and the chain of thought just serves to rationalize the conclusion.Scheming hidden in the gradients. More computational time spent (with chain of thought) could allow for computing unfaithful reasoning.We can test if language models are using the extra computational time in CoT reasoning to scheme by adding filler tokens (like a bunch of hyphens at the end of a prompt)Anthropic conducted this experiment and found that present LLMs are unlikely to be doing this.Encoding reasoning: the model may use a hidden language in intermediate tokens to use CoT as an outlet to scheme If LLM’s answer questions using CoT, can we expect their reasoning to be faithful? In other words, the actual reasons for them arriving at the answer. Anthropic measuring faithfulness in chain of thought reasoning suggests: NO! The authors of Lets Think Dot by Dot [29] were able to train LLM’s to do intermediate reasoning within gradients while outputting filler tokens like “...”. This demonstrates that it is possible for models to perform reasoning via filler tokens or encoded reasoning. However, this is hard to achieve. They created this by training models to use filler tokens in parallel to reasoning chains. It is fairly clear that this is not the default outcome of training present-day sized transformers the way we currently train them. In summary, the Anthropic paper shows that even with CoT, models engage in post-hoc decision making and their thought chain is not necessarily the reason they arrived at their action. For example, in one experiment they bias the model towards answering “A” by giving a large number of examples (few-shot) in the prompt. Then, while eliciting CoT in an attempt to interpret why the model arrives at its decision, the model essentially never notices its bias towards A. Instead, the models continue to provide rationale for choosing “A”, regardless of “A” being correct or incorrect. There is disagreement over the likelihood of deceptive alignment. Evan Hubinginger says it is almost certain [32] while others say it is less than 1% likely by default [33]. Regardless of the likelihood of naturally emerging, scheming is concerning because deceptively aligned AIs are behaviorally indistinguishable from aligned AIs. The results from the literature in this section is mixed, but I believe we should update ourselves away from thinking that interpreting CoT alone is sufficient to catch scheming AI’s. References Wei et. al (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. https://arxiv.org/abs/2201.11903Ajeya Cotra. “Why AI Alignment Could be Hard by Ajeya Cotra” https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ Definition of deceptive alignment: https://www.lesswrong.com/tag/deceptive-alignment Wang et al (2022), Self-Consistency Improves Chain of Thought Reasoning in Language Models. https://arxiv.org/abs/2203.11171 Chan et al. (2022) Data Distributional Properties Drive Emergent In-Context Learning in Transformers https://arxiv.org/pdf/2205.05055 Malach (2023) Auto-Regressive Next-Token Predictors are Universal Learners https://arxiv.org/abs/2309.06979 Alfie Lamerton. A Review of In-Context Learning Hypotheses for Automated AI Alignment Research https://www.lesswrong.com/posts/GPcwP8pgyPFPwvi2h/a-review-of-in-context-learning-hypotheses-for-automated-ai?utm_campaign=post_share&utm_source=link Berglund et al. (2023) “Taken out of context: On measuring situational awareness in LLMs” https://arxiv.org/abs/2309.00667Eliezer Yudkowsky (2007). Making Beliefs Pay Rent (https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) Evans et al. (2023) The Reversal Curse (https://arxiv.org/abs/2309.12288)Owain Evans (2023) Can LLMs Reason Without Chain of Thought https://slideslive.com/39015178/can-llms-reason-without-chainofthought?ref=search-presentations Lanchantin et al. (2023) “Learning to Reason and Memorize with Self Notes “ https://arxiv.org/abs/2305.00833 Gunasekar et al. (2023) “Textbooks Are All You Need” https://arxiv.org/abs/2306.11644 Campos (2021) Curriculum Learning for Language Modeling https://arxiv.org/abs/2108.02170 OpenAI. Introducing Superalignment https://openai.com/superalignment/ William Merril, Ashish Sabharwal. “The Expressive Power of Transformers with Chain of Thought” https://arxiv.org/abs/2310.07923 Andrew Mayne. “Is the Reversal Curse Real?” https://andrewmayne.com/2023/11/14/is-the-reversal-curse-real/ Malach. (2024) “Auto-Regressive Next-Token Predictors are Universal Learners. https://arxiv.org/pdf/2309.06979 Feng et al. (2023) “Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective” https://arxiv.org/pdf/2309.06979Prystawski et al. (2023) “Why think step by step? Reasoning emerges from the locality of experience” https://arxiv.org/abs/2304.03843 Chan et al. (2022) “Data Distributional Properties Drive Emergent In-Context Learning in Transformers” https://arxiv.org/abs/2205.05055 Zelikman et al. (2022) “STaR: Bootstrapping Reasoning With Reasoning”  https://arxiv.org/abs/2203.14465 Chen et al. (2023) “Measuring Faithfulness in Chain of Thought Reasoning https://www.anthropic.com/news/measuring-faithfulness-in-chain-of-thought-reasoningTurpin et al. “Language Models Don’t Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting” https://arxiv.org/pdf/2305.04388 Miles. Lesswrong: Unfaithful explanations in chain of thought prompting https://www.lesswrong.com/posts/6eKL9wDqeiELbKPDj/unfaithful-explanations-in-chain-of-thought-prompting Tamera. Lesswrong: Externalized reasoning oversight: a research direction for language model alignment https://www.lesswrong.com/posts/FRRb6Gqem8k69ocbi/externalized-reasoning-oversight-a-research-direction-forAccidentally teaching AI models to deceive us: Schemers, Saints and SycophantsChua et al. (2024) “Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought” https://arxiv.org/abs/2403.05518 Pfao et al. “Lets think Dot by Dot” https://arxiv.org/abs/2404.15758 (twitter thread)Wen et al. (2024) “RNNs are not Transformers (Yet): The Key Bottleneck on In-context Retrieval” https://arxiv.org/abs/2402.18510Li et al. (2024) “Chain of Thought Empowers Transformers to Solve Inherently Serial Problems” https://arxiv.org/pdf/2402.12875Evan Hubinger (2022). “How likely is deceptive alignment?” https://www.alignmentforum.org/posts/A9NxPTwbw6r6Awuwt/how-likely-is-deceptive-alignment DavidW “Deceptive Alignment is <1% likely by default” https://forum.effectivealtruism.org/posts/4MTwLjzPeaNyXomnx/deceptive-alignment-is-less-than-1-likely-by-default#:~:text=In%20this%20post%2C%20I%20argue,to%20pursue%20its%20proxy%20goals.Tutunov et al (2023) “Why Cant Language Models Generate Correct Chains of Thought? https://arxiv.org/abs/2310.13571Zhang et al (2023) “Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents” https://arxiv.org/abs/2311.11797“AI Capabilities Can Be Significantly Improved Without Expensive Retraining” https://epochai.org/blog/ai-capabilities-can-be-significantly-improved-without-expensive-retraining Bogdan Comment follow-up 1 follow-up 2https://www.analog.com/en/resources/glossary/xor-gate.htmlYao et al. “Tree of Thoughts: Deliberate Problem Solving with Large Language Models” Appendix What is Chain of Thought? Tasks like math or reasoning problems are best solved using task decomposition, i.e. breaking a problem into small intermediate steps that gradually nudge you towards an answer. As an analogy: if you were asked what you ate for breakfast this morning, you might be able to quickly respond with “oatmeal!”, but if you are asked to divide 1377 by 51 might take you a little longer, would be easier with a scratchpad and pen, and would require several “thoughts” (intermediate steps). Figure 3: Chain of thought examples [1] Kojima et al [1] introduced zero-shot CoT. Here shot means how many example problems one solves in a prompt, i.e. the prompt “1+1=2, 4+5=” is single-shot because it gives one example of addition before asking the LLM. The authors elicit CoT simply by adding “let's think step by step” to a prompt. This trick improves LLM performance, especially on mathematics and reasoning related questions. In a CoT, each element in the chain represents a thought. The sequence of thoughts should be coherent and lead to the expected answer. When you divide 1377 by 51 you could blurt out a guess (rough estimate). However, by thinking step by step you could first solve how many times 51 goes into 137, then take the remainder and continuing your long division intermediate steps gradually nudges you towards a correct answer. The next section will cover how large language model’s (LLM) capabilities can be improved at inference, and how chain of thought style reasoning can be improved. In a single forward pass, a transformer isn’t capable of encoding an XOR gate (it might be useful to think about why this is), but it can encode AND, OR, or NOT gates. However, adding a chain of thoughts allows solving XOR. Figure 2. XOR Gate [38] Capabilities improvements at inference:  tools like chain of thought improve capabilities without any training, and these tools can be improved LLM’s can be significantly improved without retraining [36] With techniques like scaffolding, both the number of problems LLM’s can solve grows and the robustness of the solutions improves. These techniques do not require any training or fine tuning, but increase inference costs. There is a tradeoff between training spending and inference spending. However, training is a fixed cost whereas capabilities enhancements at inference scales recurring costs. Table copied from Epoch: CategoryDescriptionExampleTool useTeaching an AI system to use new toolsWebGPT, ToolformerPromptingChanging the text-based input to the model to steer its behavior and reasoning.Chain of thoughtScaffoldingPrograms that structure the model's reasoning and the flow of information between different copies of the modelAutoGPT, LATSSolution choiceTechniques for generating and then choosing between multiple candidate solutions to a problem.AlphaCode, training a model to verify generated answersDataTechniques for generating more, higher-quality data for fine-tuning.Minerva, fine-tuning on self-generated data Chain of thought as a tool for improving language model accuracy can be itself improved. The Igniting Language Reasoning [35] is a great source summarizing some of the improvements to CoT and language models. They break improvements to CoT into three categories: CoT formulation, reasoning aggregation and CoT verification. CoT Formulation: There are ways of formulating intermediate reasoning that outperform Kojima et. al’s original chain of thought results, especially in certain domains. Various formulations of chain of thought have improved results across certain problems: CoT Formulation Program of Thought Chen et al 2022 Generate both text and programming language statements, executed on an interpreterZiqi & Lu (2023)Table of thought Adopts a table filling approach to chain of thought. In Tab0Cot instructions of: “| step | subquestion | process | result |” Is manually designed to have LLM’s generate a table while conducting their reasoning Yao et alTree of thoughtBreaks CoT into units and formulates them into tree structure. This allows LLM’s to explore coherent thought units and consider different options before making their decisions. ToT is able to look ahead to determine what to do next, or trace-back to correct history decisions. This has shown impressive results in non-trivial planning or search processes.Besta et Al 2023Graph of ThoughtBuilt on ToT, models the thought generation process of language models as a graph. Too complex to fully explain here. Has shown a lot of promise in tasks such as sorting, set operations, keyword counting and document mergingLee & Kim 2023Recursion of thoughtTraining language models to output special tokens such as GO, THINK or STOP, initiating certain contexts. This has shown promise with logical problems with very large context sizes. Ensembling is a popular technique in machine learning across domains. Ensembling involves multiple models making predictions and then taking a majority vote. Self consistency and reasoning aggregation were also explored in igniting language model reasoning. Wang et al 2023 [4] introduced a decoding strategy called self consistency. This first prompts the language model to follow CoT, then samples a diverse set of reasoning pathways and takes the final answer to be the one which wins a majority vote. The authors explore: Self consistency: ensembling based on sampling multiple language model outputsPrompt ordering ensembling: ensembling based on changing the ordering of examplesInput-rationale ensembling: ensembling based on different types of reasoning in the examples The authors find all three methods yield similar improvements. Because of the computational inefficiency of transformers (calculating logits at each token in the context window), reasoning aggregation could be very cheap. Lastly, chain of thought can be improved with verificatio. It is not clear whether LLMs can perform reliable CoT verification yet. A popular intuition is that validation is easier than generation. The igniting language models reasoning paper [35] has a detailed explanation of CoT verification literature, copying from that paper, here is a brief overview: Wang et al (2022) proposed and proved LLM’s have self-verification abilities. After CoT reasoning, they have an LLM perform backwards verification working through reasoning steps backwards and masking early steps. Lightman et al (2023) explored training reward models to validate CoT’s. Using an RM supervisor has improved accuracy significantly on Dan Hendryks MATH dataset Math Results from Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective Central results (in math): log precision of autoregressive transformer of constant depth can not automatically solve both problems (arithmetic and equations). In order to directly output the answers, model size will have to grow superpolynomially in the input length. Consider these two problems (equations and arithmetic) in a specific setting: all numbers are integers ranging from {0,...,p-1} where p is prime, and arithmetic operations are performed in the finite field modulo p. Arithmetic(n,p)> the task of evaluating arithmetic expressions (modulo p) where the input length is bounded by nExample: (7x5) + (6+4)Equation(m,p): the task of solving linear equations (modulo p) with no more than m variables In the direct evaluation setting, they show the following theorems: Theorem 1: For any prime number p, integer L, and polynomial Q, there exists a problem size n such that no autoregressive Transformer with depth L and hidden dimension d ≤ Q(n) can directly solve the problem Arithmetic(n, p).This means that for the problem of evaluating arithmetic expressions modulo p (as described in Arithmetic(n, p)), there is always a large enough problem size n such that an autoregressive Transformer with a certain depth L and hidden dimension bounded by Q(n) cannot solve the problem directly)Theorem 2: Similarly, for any prime number p, integer L, and polynomial Q, there exists a problem size m such that no autoregressive Transformer with depth L and hidden dimension d ≤ Q(m) can directly solve the problem Equation(m, p). On the contrary, in the chain of thought setting, they demonstrate the following theorems: Theorem 1: Fix any prime p. For any integer n > 0, there exists an autoregressive Transformer with constant hidden size d (independent of n), depth L = 5, and 5 heads in each layer that can generate the CoT solution for all inputs in Arithmetic(n, p). Moreover, all parameter values in the Transformer are bounded by O(poly(n)).Theorem 2: Fix any prime p. For any integer m > 0, there exists an autoregressive Transformer with constant hidden size d (independent of m), depth L = 5, and 5 heads in each layer that can generate the CoT solution for all inputs in Equation(m, p). Moreover, all parameter values in the Transformer are bounded by O(poly(m)).
2024-07-01
https://www.lesswrong.com/posts/HJp3C3z8XefwBeQcR/important-open-problems-in-voting
HJp3C3z8XefwBeQcR
Important open problems in voting
Closed Limelike Curves
Strategy-resistance Identify, or prove impossibility, of a voting system which incentivizes— A strictly sincere ranking of all candidates in the zero-information setting, where it implements a "good" social choice rule such as the relative (normalized) utilitarian rule, a Condorcet social choice rule, or the Borda rule.In a Poisson game or similar setting: a unique semi-sincere Nash equilibrium that elects the Condorcet winner (if one exists), similar to those shown for approval voting by Myerson and Weber (1993) and Durand et al. (2019). Properties of Multiwinner voting systems There's strikingly little research on multiwinner voting systems. You can find a table of criteria for single-winner systems on Wikipedia, but if you try and find the same for multi-winner systems, there's nothing. Here's 9 important criteria we can judge multiwinner voting systems on: Independence of Irrelevant AlternativesIndependence of Universally-Approved CandidatesMonotonicity ParticipationPrecinct-summabilityPolynomial-time approximation schemeProportionality for solid coalitionsPerfect representation in the limitCore-stability (may need to be approximated within a constant factor) I'm curious which combinations of these properties exist. Probabilistic/weighted voting systems are allowed.
2024-07-01
https://www.lesswrong.com/posts/7WR3nmHpFj7xiNM7P/anti-circumcision-essay-3-of-3-now-that-i-think-about-it-is
7WR3nmHpFj7xiNM7P
Anti-Circumcision Essay 3 of 3: Now That I Think About It, Is There Actually a Space Between “Info” and “Hazard”? Isn’t It Just One Word?
Harry Stevenage
Link to Part One Link to Part Two [reposting screencap for context] I think by this point I’ve established enough groundwork that some people will be able to complete what remains in their head. I have described the physical reality. And I have pointed to a category of events where society understands that people will often be profoundly disturbed by these events happening to them for reasons that have little to do with the immediate physical consequences. I could have given many more examples than the one I did, but hopefully people will get the idea. I would hope that if people were with me so far, their attitude would go from “Infant circumcision is bad I suppose but it’s insane people care about it this much.” to “I can see how this is serious enough that people would sometimes get overwhelmed and develop maladaptive coping mechanisms. My intuitions for what my own response should be to these people’s failure to deal with this very bad thing are different now.” Do I even need to criticize the “infohazard” part of this if I’ve already criticized the specifics that it depends on? If I don’t, will it seem like I’m criticizing the phrasing and tone rather than the argument itself? I think there are a few further points which might be worth making explicitly, so I guess I will. Any negative fact about the world is, in some sense, an infohazard. The evidence against the existence of an afterlife, a horrible massacre that happened over 100 years ago, suffering in impoverished countries, suffering in dictatorships, beautiful and interesting species of animals going extinct, a horrible crime committed by a grandparent who died before you were born, brilliant artists who were cheated out of their royalties and died penniless, the fact that your zipper was down in front of people you were meeting for the first time, etcetera. The obvious reading is that infohazard has a more specific implication in this context. The implication is that individuals and society in general should take more care not to look here. To deny this information to people who can’t handle it. Or that it would be good if this could be done somehow, even though there’s no way to do it. I don’t think that makes sense here, for a few reasons. Most people in broader Western culture, in my experience, do not actually want painful things to be kept from them. It would be one thing if they said that but didn’t act like it, or vice versa, but in my experience this preference is found in both their words and actions. The details of a painful thing, perhaps they would want not to know. They would want to know if someone was spreading a monstrous photoshopped image of them because they would want to ensure that something was done about it, they might not want to see the image if there was no specific reason to. There are many good reasons why people in our culture generally do not want to be protected from infohazards. One of them is that people don’t have full faith that society will fix a problem without their knowledge or involvement. It’s not that they’re undervaluing the peacefulness of their map, the thing that makes the map peaceful is the knowledge that it is basically accurate about things that would make them worry. Non-rationalists are not actually too crude to understand that, even if they might not put it in those words. They care more about the territory and don’t trust that some superhero will take care of everything without their knowledge or involvement. I would imagine that most rationalists would relate to that. For one thing, it was a theme in HPMOR. Speaking of imagining things about rationalists, one might also imagine that rationalists would be even further in this direction than the average person, since The Litany of Gendlin is a core text, and “sunlight is the best disinfectant” is at least close to being a stock phrase in this community. Have we evolved beyond these ideas? Even if there are exceptions, is this one of them? We can try to visualize a world where people don’t need to worry about anything, because they know that other people are taking care of it and there’s nothing that they could contribute which would possibly be worth it. We don’t actually have any way of knowing whether that would work out in practice, but we could still visualize it. We could even visualize a highly-traded, highly-subsidized prediction market, and have the prediction market predict to us that this would work out, and be reassured by that. This is not that. This is not even close to that. This is “You’re wrong to care about this as much as you do. Sit back and let me and the rest of this cluster handle this matter by occasionally saying ‘I’m persuaded by the arguments against this in some abstract sense but also the people who care are weirdos/insane/broken in a way that I implicitly attribute to be solely to them and not the problem itself.’” That may be a little unfair. It is true that there’s an ambiguity in Mason’s tweet, where it’s not obvious whether she’s talking about any strong negative feelings about this people might have or just people who fail to deal with those feelings in a productive way. It’s possible that there’s some reasonable level of emotional response, and some reasonable way of dealing with those emotions, which Mason would have empathy for, if only on an intellectual level. She didn’t actually write that, and I don’t normally hear people in this cluster say that (by definition, but hopefully you know what I mean), but she seems like a kind, reasonable person about other things, so one might say that it goes without saying, or that that side of things was just left out just by happenstance, without any reading-between-the-lines needed. In this series I’ve mostly limited myself to making my arguments and not saying that the mind of anyone I’m disagreeing with has gone completely haywire. I’ve alluded to possible biases, and I’ve talked about things which I consider to be blindspots, but that’s as far as I’ve gone. In the course of my life, even if I feel that people are being deeply irrational, it’s usually more persuasive to just make my points as best I can rather than try to deconstruct the possible psychological reasons they have made the mistake they made, based on my semi-educated guesses. Is there any valid reason I would go farther? Is there any reason I would make a similar argument to Mason’s, especially since it would be, ultimately, just a guess? I think so. People reading this might get something out of encountering a symmetrical argument; it might cause them to reflect. And the argument itself might have some small amount of merit, even if this sort of thing is never really a rigorous analysis. There is a pattern of irrational stubbornness on the part of society around this issue, and there might be some people who will get something out of me addressing it directly. Also hopefully it will be interesting or amusing to read in a general sense. What follows is somewhat epistemically goofy and shouldn’t be taken too seriously. Please feel free to just stop reading here, but if not, one accusation of deep-seated psychological dysfunction (“completely haywire”) I could make would be something like this, which is of course addressed to this cluster of thought and not Mason herself: Rationalists talk about Societal Dysfunction a lot. We get angry at the people doing bad things, we get angry at the world, we take a step back, we remind ourselves that people don’t know any better, or their capacity for caring about others hasn’t been nurtured, that maybe it’s better to see whatever the particular bad thing might be as a societal mistake rather than a great evil with perpetrators, and we put things in perspective. Societal Dysfunction can be put into perspective. Weak People cannot. To see Weak People being Weak is to feel contempt. Those feelings of contempt cannot be put into perspective, they will not, they must not. It would be too much to relinquish, it would leave you too defenseless, it would be like giving up too large a chunk of your soul. Societal Dysfunction is one thing. But to show any recognition to Weak People is just too personal, the flinch is too intense, it can’t be borne. To be pressured into caring about them would be grotesquely unpleasant; to be forced to do so would be a deeply personal transgression against one’s boundaries. People seem broadly compassionate and reasonable until they are exposed to the idea that infant circumcision is something people might feel strongly about, at which point a fairly large percentage go completely haywire. They have an instinctive gut response. They attribute this to the dysfunctional behavior of Intactivists. But that doesn’t make sense, because the actions that they are objecting to do not actually merit such a strong reaction. They have some inner set of rules about who is allowed to be upset about what, which are comprehensive enough that they are rarely wrong. But when they are wrong and other people point this out to them, even indirectly, they get angry. The discrepancy isn’t just something they can shrug off, they feel attacked, perhaps even violated, on a deeply personal level. And this is true even for people who realize infant circumcision is bad. Even if they’re persuaded by the arguments, or partially persuaded as Mason is, that doesn’t change their instincts about who has a right to feel these things which are reserved for others. Those instincts are somewhere deep down in their pit of self-identity, and are completely unpersuadable by debate. The explanation that makes the most sense to me is that there’s the same thing going on in their head as someone who is in denial about infant circumcision entirely. Mason’s position on the matter is less wrong, but in a strange way it is also less self-consistent. By walking down the road of admitting that circumcision is bad, people who are still unwilling to care must stretch farther, and contort more. They need to be cleverer to find some escape route, as they get closer to the thing they don’t want to recognize. Would you believe me if I said that I’m not angry at people who act like this? The truth is, they really didn’t sign any social contract requiring them to actually care about anyone else’s suffering or grievances, just like I didn’t sign a social contract requiring me to have my foreskin cut off. At least they got their wish! I can find it in my heart to be happy for them, for that. There is one time things are a little bit awkward for people who are persuaded by the arguments but don’t want to actually care in any way. That’s when a situation comes up where it seems like it would make sense to show some sign of caring or concern or recognition, even a somewhat intellectualized one, and they have to awkwardly contort their words and actions in order to avoid doing this as much as physically possible in order to maintain their personal boundary. Which probably does not feel very good. Realistically, there’s a lot of possible explanations for this behavior that aren’t exactly about an instinctual mental category for Weak People. I just picked that because it was dramatic. For some people the instinctual category might be better described as “Not Victims”. It could also be that progressive-esque people instinctively don’t want any attention to go to any social movement they don’t consider to be one of them, and conservative-esque people instinctively feel there are too many people complaining about stuff in the world already. I’d say that both would be dismal thinking, in both senses of the word dismal. There is sometimes, in the course of many peoples’ lives, an empty space. It is the empty space where compassion would be, if it existed. A space where it would make sense for some form of compassion to exist, even an intellectualized or sanitized form, but isn’t. Sometimes, not all the time. More often than one would probably want. I don’t know that it’s a totally ideal state of affairs, per se, but I can deal with it. The people who act this way may be facing trade-offs that I don’t fully appreciate. But it does make sense to me that, in the case of the people who create that empty space, the people who don’t actually care at all and don’t want to… Those people probably shouldn’t get to make decisions about what should be treated like an infohazard. Because they are very obviously prioritizing their own comfort over all other concerns around this issue.
2024-07-01
https://www.lesswrong.com/posts/FMKnFxgbtCLxPPS4J/in-defense-of-lawyers-playing-their-part
FMKnFxgbtCLxPPS4J
In Defense of Lawyers Playing Their Part
KingSupernova
This is a linkpost for In Defense of Lawyers Playing Their Part. Michael Huemer writes about why he believes it’s wrong for lawyers to pursue unjust legal outcomes. It's a good article, and one of the best defenses of this position I've seen. Still, I think this argument is mistaken. The reason why we require lawyers to fight for "their side" even if they believe they're in the wrong is to minimize the opportunity for bias. Imagine if all trials were bench trials, decided by only one person as the judge. Even if they're taught to be as objective as possible, there would still be significant concerns about unconscious bias. One person only has one set of experiences to draw on, which is necessarily not very representative of the full range of experiences. And in some ways this problem becomes worse the more training the judge is given, since it filters the pool of valid people down to a small subset of the population. The chosen solution to this is to instead have the important cases decided by a jury, randomly[1] selected from the population. The jury is then instructed that they must come to a unanimous decision, and are allowed an arbitrarily-long time to discuss the case. This prevents a tyranny of the majority, while still allowing a diverse range of perspectives to have a voice in the discussion. Any prospective juror who seems likely to be so biased that they would vote in a predetermined way regardless of the evidence is removed from consideration during voir dire. (This step does reduce the representativeness of the jury, but the assumption is that for any group of people who hold a particular perspective, there will be members of that group who are not so biased as to be selected out.[2]) But this doesn't solve all problems. The jury is still only human, and if they're presented with facts that are biased in only one direction, they're more likely to vote in that direction. If lawyers were instructed to present an unbiased case to the jury, this would provide a significant incentive for the less ethical lawyers to not do as instructed, using a misleading presentation of data to bias the jury towards their side. This is a bad incentive to give people. It would also lead to copious accusations from the losing side that the other side's lawyer was presenting biased facts, which would necessitate some process to sort them out every time, even if both lawyers were perfectly objective. So instead, we tell the lawyers to go nuts. Be as biased as possible, and, as long as they're equally skilled and there aren't background factors that favor one position over the other, this ensures that each presented position is equally far from the truth. The jury now has a fair overview of both sides of the case, without a malicious lawyer being able to advantage one over the other.[3] Michael provides 5 arguments in favor of this position - that lawyers are obligated to do their best even for a client they believe is guilty - then attempts to refute them all. I'll go through them individually. 2.1. The epistemological problem Michael argues that lawyers can know with high confidence that their clients are guilty, giving the example of Benjamin Courvoisier. Thus, "I'm not sure so I should just defend my client" is not an excuse. In the case of Benjamin Courvoisier, Benjamin confessed to the lawyer, presumably under the expectation that the lawyer would not publicly share this information. If lawyers were duty-bound to share any private confession given to them, all but the dumbest criminals would simply stop giving private confessions. The overall effect on convictions would be negligible. But cases like Benjamin Courvoisier are few and far between. Using this example to argue that defense attorneys should recuse themselves for having a general belief that their client is guilty strikes me as a motte-and-bailey; most lawyers who have a suspicion will not have a clear-cut confession. Even if you support the lawyer sharing a confession like that of Benjamin Courvoisier, it does not follow that a lawyer should also recuse themselves if they only have a suspicion. In a case where a lawyer thinks their client is guilty but is not certain, we most certainly want them to set aside that suspicion. The whole point of having a trial, with multiple sides presenting evidence, a fair jury to evaluate that evidence, and a judge to ensure the process is followed correctly, is to prevent one person from having the ultimate say over the verdict. If defense lawyers were to have the power to "give up" the case, they would have unilateral power to find the defendant guilty. This defeats the purpose of having such a system in the first place. We also want to protect the defendant's ability to safely provide evidence to their lawyer without risking incriminating themselves. If a defendant is innocent but knows that a certain fact might make them look guilty, a justice system that functions as Michael advocates would encourage the defendant to not share this fact. This would ultimately hurt their case if the prosecution gets ahold of the same evidence, as the defense would not be able to adequately prepare. (Indeed, even under the current system with attorney-client privilege, defendants frequently lie to their attorney to try to make themselves look less guilty, out of a mistaken belief that the lawyer will tell someone else or care less about defending them.) Remember that there's no clear line between "evidence" and "confession". What if the defendant says they did own the gun in question, and they did shoot it at the person who died, but refuses to say that they're guilty of murder? Is that a confession? What if the shooting actually wasn't the cause of death? And what about false confessions? These are frequent, with causes ranging from police abuse, to an innocent who believes more innocents will be charged if nobody confesses, to weirdos who want to take credit for other people's crimes, to a lawyer from the so-called "innocence project" seducing someone in order to convince them to make a false confession to exonerate one of the lawyer's clients, to police abuse. Ideally, all confessions should be subjected to the evidentiary process and the full scrutiny of the court.[4] But a lawyer under Michael's system would have no such obligation, and may just take such a confession at face value. We also must consider the social incentives. If a case is highly publicized and the defendant vilified in the media, Michael's proposed system would increase the chance that an innocent defendant would be found guilty. What defense attorney would want to represent Lucy Letby if doing so were an admission that the attorney believes her to be innocent?[5] In effect, Michael is arguing that we should replace one method of deciding a case (both sides presenting extensive evidence and arguments to a jury who deliberate until they reach agreement) with a different method "if enough lawyers aren't comfortable publically vouching for this person's innocence, they're found guilty without a trial". This seems unlikely to lead to more just outcomes on average. I think Michael is failing to consider the fact that a group of people can be more accurate, on average, than any individual in that group. He writes: If you actually bought this argument, wouldn’t it follow that we should dismantle the whole criminal justice system since no one can know whether a defendant is guilty (why would the ignorance be limited to lawyers?)? Juries are not supernatural oracles; the only way they know who is guilty is by looking at the evidence presented in the trial. The defense lawyer has that same evidence and often more. Yet work on forecasting and the wisdom of crowds shows that this is plainly not true. A jury of 12 people will, on average, be more accurate than any single juror. 2.2. The lawyer as friend Michael's argument seems correct here, no objections. The belief that a person should be permitted to help cover up a murder simply because it was committed by a friend is straightforwardly incompatible with a just society. 2.3. Faith in the system Michael claims that the ability to hire a better lawyer means that we can't simply trust that the system will always return the correct verdict. This is true of course, but irrelevant. Nobody who defends the current system claims that it's 100% accurate; guilty people are set free and innocent people convicted all the time. The relevant question is whether putting the power to intentionally lose in the hands of a defense attorney would lead to greater or lower accuracy. Michael does have a good point that being able to pay more to hire a better lawyer is in some sense unfair. Perhaps there are better systems we could employ, such as having all lawyers in criminal trials provided by the state, with a record of their win rate in prior cases and the state attempting to give both sides equally skilled lawyers. But saying "the system is unfair with regards to being pay to win, therefore the system would be better if lawyers refused to represent clients they believe are guilty" is a non sequitur. He doesn't show why the conclusion should follow from the premise. 2.4. Rule consequentialism By assuming that lawyers recusing themselves from cases with defendants they believe are guilty is "serving justice", Michael is begging the question. Of course lawyers should serve justice. But the whole point of the system is that there's no reason to trust that a single lawyer will be more accurate than the jury. It actually serves justice the most for lawyers to play their part and attempt to defend their clients. I agree that no defense attorneys being willing to defend the defendant would be evidence that they're guilty. But it's not direct evidence; it's just popular opinion. Putting too much weight on this sort of evidence rapidly leads to self-fulfilling prophecies. A defense attorney isn't going to exhaustively research all the facts of the case before taking it, so if the media is saying "this person is guilty and horrible", this is likely to impact the lawyer's judgement. Not to mention what other lawyers think; it's a normal human tendency to form opinions based on what people around you think, so a lot of lawyers are going to end up reasoning along the lines of "well the other 50 lawyers refused to defend this person, so they're probably guilty". 2.5. The right to a fair trial First, Michael once again begs the question by simply assuming that a right for guilty people to have a lawyer represent them would makes the system less reliable rather than more. I think that almost every defender of this right would hold that the opposite is true. Secondly, Michael takes an interesting perspective on what it means to commit a rights-violation. Failing to represent someone, or refusing to defend the position that your client wants you to defend, does not violate his rights, since you are not stopping him from hiring someone else. (Compare: If I don’t publish your book, I’m not denying your right to free speech.) Taken literally, I agree with this, of course. But the implication is that it's still not a rights-violation if everyone refuses; Michael brought up this possibility in the last section. Indeed, lawyers refusing to represent someone only accomplishes anything if enough of them do so to sway the results of the trial. If you posit a right to representation, then everyone refusing is clearly a violation of that right, since the person in question is not getting the representation they deserve.[6] 3. Lying vs. Misleading Michael's last argument is, I believe, his best. It's well accepted that lawyers should not lie to the jury, fabricate evidence, etc. Yet a lawyer who believes their client is guilty and tries to convince the jury that they're innocent is paltering, which is also generally considered unethical. So why is one encouraged and the other disallowed? The problem is that there's no clear line of what counts as paltering and what doesn't. Our social dynamics are fundamentally structured around misleading each other in ways both large and small. People who aren't Brad Blanton do not choose their sentences by asking themselves "what would maximize the accuracy of the recipient's picture of the world". Even refusing to answer a question is an attempt to prevent the asker from having information you don't want them to have; and if you know they'll assume a different explanation for your refusal than the real one, it's an outright attempt to mislead. Trying to prevent lawyers from paltering would run into interminable debates about what counts as "too misleading". Often these would wrap around back to questions of the defendant's guilt and related facts that are in dispute, since lawyers from both sides would frequently have an incentive to imply that something incriminating or exonerating occurred without stating it outright. To be more specific, I disagree with Michael's premise #5. Deception is not a simple binary of "literal lies vs. maximally informative statements". One thing can be more misleading than another while both are nonzero, and, all else being equal, the immorality of the action increases as the severity of the deception increases. We have to draw the line somewhere in court, and the current line seems like a reasonable compromise between ability to objectively verify whether something was deceptive and preserving the integrity of the court's decisions. Overview I think this line is the key to why Michael and I disagree: The question here is a question of individual ethics: if you’re a lawyer, what should you as an individual do? The question is not what the government should do, whether there should be a law against advocating unjust positions, etc. (This seems to be one of the hardest points for people to grasp; I suppose people can’t imagine a lawyer acting on his individual conscience.) Michael is approaching this from an individualistic standpoint, of "assuming nothing else about the system or anyone else changes, what should I do?" But the real world doesn't work like that. People don't defect in every collective action problem; we modify our behaviors in response to others' behavior, and our predictions of what they're going to do. This is the exact same reasoning as most people use to justify why theft is wrong. Everybody values their own happiness more highly than that of a stranger, so from the naive game-theoretic framing, you should just steal stuff from others, because doing so better satisfies your values. But you don't make your decisions in a vacuum; other people tend to reason similarly to you, and society has the ability to impose laws onto everybody equally. A rule of "no stealing", regardless of whether it's implemented via government force or everyone's personal moral codes, results in overall better outcomes for everyone in that society. A lawyer who believes their client is guilty will locally maximize their values by forcing a guilty verdict onto that client. But a justice system where all lawyers behave this way will result in worse outcomes. ^ It's not actually random in practice, but in theory it should be. ^ Lawyers do tend to use some pretty underhanded tactics to benefit their side here, which kind of kicks the problem up one level. Still, this pretty much has to reduce the overall bias, even if it doesn't go to zero. Since both sides are limited to a certain number of unconditional juror removals, this has the effect of removing outliers while keeping the average views the same. ^ In practice the justice system actually only has this policy for the defense; prosecutors are expected to withdraw the case if they believe the defendant to be innocent. I think this is due to a desire to bias things towards the defense, as per maxims like "innocent until proven guilty" and "it is better that ten guilty persons escape than that one innocent suffer". ^ The current system is not great at this, given the high incentives for innocent defenders to submit a guilty plea anyway. ^ Of course similar risks apply to the jury, and this is an argument in favor of keeping their identities private. But jurors don't have to regularly interact with the legal system after their one case, or can they be interviewed about the case while it's ongoing, so it's easier for them to avoid publicity and criticism for their verdict. ^ And the same could be true to a lesser extent if all lawyers above a certain quality or below a certain price refuse.
2024-07-01
https://www.lesswrong.com/posts/3FeKf5dFZdJo9rJKY/honest-science-is-spirituality
3FeKf5dFZdJo9rJKY
Honest science is spirituality
pchvykov
[cross-posted from my blog https://pchvykov.com/blog] I see a problem with current science. It’s not the reproducibility crisis, nor the toxic work culture, nor the misaligned incentive. But in another sense, it is all of these – or perhaps the root cause behind them. It’s hard to name it exactly, but in a way, it’s the dissociation between the romanticism of a selfless “quest for truth,” and the career-success incentives that run academia. In another way, it’s Goodhart’s law (optimizing for citation counts rather than for meaningful progress). In yet another way, it may be an issue with the scientific method itself (or at least how it’s taught). But I think the overarching issue is with trying to remove our humanity from the scientific process. Let’s begin with something concrete – Goodhart’s law (“when a metric becomes a target, it ceases to be a good metric”). This is a major (perhaps the major) problem of the western social order, which relies heavily on metrics. We optimize for GDP, and forget about actual well-being and psychological happiness of the population. We optimize for profits, and forget about the social value we are creating. We optimize for grades, and forget about learning. We optimize for longevity, and forget about meaning. And so in academia, we optimize for citations, and forget about progress, about building a better world. Goodhart’s law comes up because the things we actually value cannot be accurately encoded into metrics – they will always be slightly misaligned. People’s ingenuity will then always find a way a way to leverage this misalignment to game the metric. But metrics and reproducible measurements are at the core of the scientific method, thereby seeding the problem (it's easier to write a paper showing some high metric score than one that actually matters). At the same time, upon reflection we somehow have an intuitive understanding of whether something is or isn’t a “good” metric, or whether it achieved some “desired” outcomes. So what metric inside us tells us what’s good? For this I like the dialectic of “measuring” vs “sensing” – the former being about finding the value of a pre-defined metric, and the latter about pattern-recognition. Perhaps one can say that in these terms, science is more about measuring, while art is about sensing? Either way, measuring has a linear quality to it – it’s the “forward path” of the scientific method. In contrast, sensing is inherently a feedback loop – where question informs the answer, which in turn changes the question, and so on. Such dynamics may mitigate Goodhart’s law as the metric  (i.e., question) here is never static, but continuously updated and refined in response to the outcomes. The problem I’m referring to is insufficient amount of this feedback in science and its application. But I think we must take it deeper than this. We must include not just the question, but the observer themselves in this feedback loop. To see this, we need to ask ourselves honestly why do we study the questions that we do? The real reason is often circumstantial – “my PhD adviser worked on this,” or “there was a job available for this research,” or “this topic sells well.” But if we go beyond all this and honestly ask ourselves which questions are really important? Which questions are worth our time, our effort, and money? Which questions really make meaningful progress towards a better world? Such sincere inquiry quickly makes us realize that the answers are far from obvious, are very subjective, and are highly sensitive to personal and cultural values, traumas, fears and hopes. And these are the foundation of our science. Thus, we cannot pretend to be the “objective observers,” standing outside the scientific method – we are part of it. And as such, we must have the humility to ourselves become subjects to it. On the one hand, thus goes back to the willingness to update our beliefs about the world, our behavior, our personality, our sense of self even, in response to new evidence – which is already hard enough. But on the other, it also means asking the questions that actually matter to us, that actually have the potential to change our lives (cf. active learning in ML). I think this is the only way to really do “honest science.” And once we come to this, we basically come to spirituality (in some idealized sense). If science is the study of the external, then spirituality is the study of the internal (-Carl Jung, Carl Sagan, Fritjof Capra, etc.). But if we allow our science to be guided by the quest for personal transformation, for greater joy, and for a better world, then the distinction begins to blur. I believe that this integration of science with spirituality, with our humanity, with our inner goals and aspirations, and paradoxically, with our subjectivity, is the only way to overcome Goodhart’s law and get fulfilling outcomes. The notion of a separate objective observer in the scientific method is an impossible idealization – and therefore misleading, leaving much of academia to study incremental technical minutiae that has little relevance to our lives. Even the relevant technological achievements often end up divorced from generating greater well-being. It is no accident that many of the early scientists were motivated by their faith – they studied the external to better understand the internal. I find that the schism between the scientific and the spiritual that emerged since has been unproductive for both domains, just as any hard disciplinary boundaries lead to siloed inefficient work. So what do we do? Well, I’m not really sure. The rationalism movement may be on the right track here. Personally I think it would be fun to build a research institute that really focuses on enabling this feedback between research results and our personal inner values. I’ve been quite interested in Eastern Philosophy and mindfulness practices for a while – and these might give a good approach to really help learn from and internalize our scientific insights. On the other hand, complexity science (my research field) may be a good framework to research all these questions more systematically, perhaps in the context of “science of science.” The key, I think, is to integrate the theory and the practice here – to “walk our talk.” I would love to see science become a joyful and deep practice that leads its adepts to ever-greater personal fulfillment and wisdom, not mere knowledge. Scientists serve the role of shamans in modernity, and so their wisdom and personal attainment (or lack thereof) spreads to the rest of the human tribe.
2024-07-01
https://www.lesswrong.com/posts/7jW63YrnqR8ANfnDf/anti-circumcision-essay-2-of-3-physical-and-psychological
7jW63YrnqR8ANfnDf
Anti-circumcision Essay 2 of 3: Physical and Psychological Realities
Harry Stevenage
Continuation of Part One Now in theory I could end the series here. I could say that the evidence Mason provided of men going haywire is the images Karella posted, and since I’ve already made my case that one of those images is one offensive comic one person drew 13 years ago, and the other is just ordinary activism, that means that (if my case is persuasive) Mason’s argument is rebutted. But that wouldn’t be living in the least convenient possible world. And it wouldn’t really get to the heart of what Mason is saying, either. Mason may or may not be swayed by my above arguments re: the examples Karella gave being highly dubious, but even if she is it could easily be that she has other examples of men from her personal experience in mind who are upset about part of their genitals being cut off and whose brains have gone “completely haywire”, as she put it. I have two basic points to make: I think the physical reality is worse than she realizes. But perhaps more importantly, there are lots of things that everyone agrees are horrible that are in some sense partially or entirely psychological. And no normal person I’ve encountered, no matter their views on therapy culture and bad societal stances on trauma, would talk about those things the way Mason and others talk/write/think about this. Which is something I’ll go into a little more in the next essay in the series. Before I go further, let me just say that I do not intend this to be an attack on Mason, or anyone really. This essay should be taken as addressed to this general cluster of attitude towards this issue, which may manifest itself in different ways. Does the loss of sensitivity actually work the way Mason thinks it does? And now we’ve reached an important point that I think is worth telling people that can’t be made without me talking about gross things in detail. Will people say that I’m only tormenting people with these details because I got some sick satisfaction from doing so? Or that the fact that I’m even talking about this at all with anyone but a doctor or maybe a therapist is a sign that my brain has gone completely haywire? It’s a catch-22. I can’t defend myself against these ad hominems without explaining my perspective, but I can’t explain my perspective without first refuting these ad hominems. So read on, or don’t. If it were just some hidden stat for sensitivity, some abstract and theoretical loss of endorphins with no easy way to analyze the difference in sensitivity between intact and amputated without brain-scanning equipment, then that would be one thing. The effect is not actually that subtle or esoteric. It’s not actually a general, undetectable 20% loss in sensitivity. It’s more like an obvious 90% loss of sensitivity in the majority of the surface area, with the remainder having sensitivity that seems normal to me, although I don’t know. And those remaining areas of unaffected sensitivity point to circumcision as the culprit, either because it is one of the places that one would expect to be unaffected by circumcision, or because the sensitive area is the circumcision scar itself. And just to be sure, I have done the embarrassing thing and asked a close male friend who was not circumcised about this, and he confirmed the difference. There’s also the kinetic relevancy of the foreskin to sex. The foreskin acts as a sort of loose glove, creating a gliding effect that reduces friction. This of course affects both the man and woman’s sexual experience. And most people would describe the presence or absence of friction during sex as a tangible physical concern, and not an abstract psychological phenomenon. I understand that there are other things that can reduce friction. I understand that I could counter the friction surplus with artificial lubricant from the store. Even so, I think that if a woman’s ability to produce natural lubricant was taken away from her completely at birth by the decision of her parents, with the approval of the United States medical establishment, for no compelling reason, then that would be bad. Actually bad, not just vaguely and inconsequentially bad. It wouldn’t be “I am persuaded that this is bad, but…” And it would be worse if her parents also reduced the sensitivity of any part of her genitalia to almost nothing. And these physical problems aren’t totally debilitating, but they’re not trivial either. But I don’t actually think that people’s general approach is to only recognize totally debilitating physical problems as legitimate grievances. If a doctor, let’s say in Japan or China, gave a baby of either sex those problems in some different way, and by a negligent accident instead of on purpose, that people would be able to recognize as awful. Even if no one ever got circumcised as an adult, even if no one ever did a deep scientific study of how these nerve endings work, I think the difference is pronounced enough that some people would notice that their experiences with their bodies did not quite make sense as a likely way evolution would have done things, and notice their confusion. And not just men who have sex with lots of other men and have plenty of opportunities to observe the differences, some straight guys would notice too, in that world. Is it ever okay to be horrified by something that didn’t bother you much before? Does the psychological side of this have any substance that I should consider to be legitimate or ordinary? I would say yes to both. There are lots of thought experiments I could use to try to demonstrate this. I could just describe the equivalent effect on a woman’s genitals, with her not realizing anything was wrong, and ask people what they would think if she had a strong emotional reaction when she eventually did realize that what she was experiencing was both less than what she could have had, and intentionally done to her by people and legal systems she thought she could rely on, or at least rely on more than that. But that would be too straightforward a comparison, and it wouldn’t be very interesting to read. I could also explain the underlying principles without referring to any demonstrative example at all, but that would be too dry. Before I tell you my example, which will hopefully illuminate the underlying psychology, why don’t you try to think of one for yourself? Something totally horrible, for reasons apart from any tangible physical effects. I can think of many things that are like this, some of them real situations that happen to people, some of them more fictional. How about a food metaphor? Let’s say you grew up, and you thought all the food you ate tasted basically normal. It wasn’t the best food ever, but it was nice enough, it fulfilled the biological urge, and there were other good things about your experience with food besides the taste, like the good times had discussing recipes with others, or the simple fun that comes with putting a fork in two brioches and moving the forks so that it looks like two legs that are doing a funny dance. Then one day you happen across a secret storage space behind the shed, and you discover that your parents and society at large had been working together to surreptitiously add the meat of your deceased cousin to all your meals for your entire life. And that this was also being done to millions of other men, and is continuing to be done with no end in sight. And it’s not that this society has no taboo around cannibalism, their taboos around cannibalism are actually mostly the same as ours, except that, because of some historical accident or social blindspot, they make a special case for this specific form of cannibalism. Apart from the people who fail to recognize being forced (or tricked, if you make that distinction) into eating one’s cousin’s corpse as a special case, and are completely horrified. And of course, if this was never kept from him, if he always knew that he was eating his cousin’s corpse for as long as he could remember, then the horror would come when he first had the opportunity to realize that this special case was not justified. And part of that horror would be a general sudden downward reassessment of civilization’s moral competency, not to mention his parents’ moral competency. Which is something I would imagine most rationalists would relate to. It was even a story thread in HPMOR, part of Harry’s character arc, something he struggles with but eventually comes to terms with, which is good. Do you see how saying “you were never particularly bothered by the taste before” would be missing the point? Do you understand how people can have deeply held personal preferences that are larger than their immediate physical experience? Is it a sign that someone has lost touch with reality, for someone to be agitated by learning that they had been tricked into eating their cousin for years and years? I’ve seen rationalists advocate that some things that are illegal and very broadly held to be completely taboo should be legal. Even if you think that cannibalism is just an arbitrary taboo, and it’s fine if none of the parties involved object, does that mean someone is necessarily overreacting if they are horrified that they were forced to eat someone against their will? And then on top of that, there’s also the horror that comes from caring about all the other people experiencing this, from knowing that their basic human rights are being violated. It shouldn’t be a foreign concept to anyone, for someone to suffer because other people are being hurt, even if all those people are male. How would the people who did not make a special case for one kind of cannibalism look to the rest of the world? Even to some people who were in some sense persuaded by their arguments for basic individualist reasons, it would seem like the people who were bothered were overreacting to a technicality. It would seem to outsiders like the people who were upset were extending a taboo in a direction it didn’t make sense to extend it. Because those people are still operating from the perspective of the special case. It may seem like an unfair analogy, to compare something like forced cannibalism to something like forced foreskin amputation. Perhaps it is. And yet even so, there are things to recommend the involuntary cannibalism situation over having part of one’s genitals cut off. 1. Eating your dead cousin is to some degree purely psychological, compared to having part of your genitals cut off as an infant, which is both physical and psychological. 2. I don’t know how large a conspiracy it would take to try to sneak parts of a specific body into one person’s food no matter where and what they eat, but let us imagine that it could be avoided by being very careful about what you eat. Even if it was very difficult to be careful enough to stop people from sneaking the meat in, even if you had to put a lock on your fridge, set up a monitoring system, and research chemical tests you could perform on your food before eating it, even if the steps you would need to take require more tenacity and discipline than some people have easily accessible, and taking those steps was made more difficult because of the emotional tax of the fact that each one serves as a painful reminder of eating your dead cousin, let us say it could be done well enough that you could be reasonably certain there was no human flesh in your diet. Whereas with foreskin restoration, it’s both very difficult and time consuming, and it only partially resets you to how things would have been physically. 3. Lastly, I think that our society’s attachment to sexual autonomy is, in some ways if not in others, stronger than our attachment to not-cannibalism. You could have a scene in an ordinary middlebrow R-rated comedy where someone is tricked into eating human flesh and it would plausibly be funny. Depending on how it was handled, it wouldn’t even have to be a particularly edgy comedy. It would be much, much, much harder to get an audience to laugh at an entire scene where a woman’s breasts are cut off against her will, even if her character was fully unconscious and anesthetized while it happened. Even if only one quarter of just one of her breasts was excised, that would still be very, very dark. Even if she was the villain, even if she had killed a dozen people for money, even if the operation was the result of a medical mistake and the doctor thought she had breast cancer, even if the medical mistake was the result of her own schemes backfiring on her, even if it was all just CGI and not actual documentary evidence of this happening to a real person, even if there wasn’t much blood, even if you just showed the surgery and not the physical and psychological aftermath, the people in the theater would look away, if the visual lasted longer than five seconds. They would get up and walk out of the theater. Many of them would get angry at the director, or at whoever showed this to them without warning them first. For the people who stayed in the theater, the movie would cease to be an ordinary R-rated comedy. It would cease to be any kind of comedy. Not just from that point on, every joke in the movie up until that scene would be reinterpreted as the director luring the audience into a false sense of security. Not only could the movie not be a comedy, it couldn’t even be a revenge thriller. Not really. Even if the director’s authorial intent was to make an ordinary revenge thriller, where the antagonist receives a gruesome punishment, the movie would instantly become a deconstruction of revenge thrillers in the eyes of most film critics. It would be a meditation on the folly of revenge, or the director reminding the audience of the profound truth that they as humans still feel some instinctive level of compassion for very cruel villains, or something. After the movie was over, and the audience returned home, the image would still be in their head, and they would try to do something to distract themselves and get the image out of their head, even though it was just a movie. And some of them would have nightmares. Even some people who weren’t particularly predisposed to nightmares would have them. Even a few people who just had the movie described to them by a friend or read the synopsis on Wikipedia would have nightmares. Even though it was just fiction, and it didn’t even happen once, let alone hundreds of millions of times. That’s how strongly people feel about a woman’s breasts being mutilated against her will without medical justification. Is it just an aesthetic preference, to care more about things like this than some other bad thing that could have happened to the woman in the movie, to be more bothered by it? Even if it is, I don’t actually believe that rationalists are beyond what one might cavalierly describe as “strong aesthetic preferences” like this, not even the 99th percentile of uber-rationalists. Even if they were, that wouldn’t make the strong aesthetic preferences of others a sign of psychosis. And yet somehow the special case remains, and not just because people have convinced themselves that infant circumcision is theologically or medically justified. Although I think they may not have fully unconvinced themselves, so to speak. Edit 7/8: I think I failed to communicate something important to some readers in the second half of this essay, so let me elaborate: There is a category of things that can be done to a person that constitute a serious violation of that person, or that person's preferences, or that person's rights as an individual, or however you want to think about it. I don’t think there is any consistent sensible set of rules for what should and should not be part of that category that would exclude involuntary circumcision that would not also exclude many things which are widely agreed to be serious violations. That’s part of what I was trying to communicate with the cannibalism analogy. “Most people accept this is normal” is not really a basis for exclusion from the category. It is not a justification people would place assign any weight to if something was done to them that they considered to be a serious violation but society (or a hypothetical society) considered to be normal.
2024-06-30
https://www.lesswrong.com/posts/kp5h6khwrK47psMaA/transformer-circuit-faithfulness-metrics-are-not-robust
kp5h6khwrK47psMaA
Transformer Circuit Faithfulness Metrics Are Not Robust
Josephm
When you think you've found a circuit in a language model, how do you know if it does what you think it does? Typically, you ablate / resample the activations of the model in order to isolate the circuit. Then you measure if the model can still perform the task you're investigating. We identify six ways in which ablation experiments often vary.[1][2] How do these variations change the results of experiments that measure circuit faithfulness? TL;DR We study three different circuits from the literature and find that measurements of their faithfulness are highly dependent on details of the experimental methodology. The IOI and Docstring circuits in particular are much less faithful than reported when tested with a more precise methodology.The correct circuit for a set of prompts is undefined. The type of ablation you use to isolate the circuit determines the task that you are asking the circuit to perform - and therefore also the optimal circuit.This is especially important because previous work in automatic circuit discovery has tested algorithms by their ability to recover these "ground-truth" circuits from the literature - without considering these potential pitfalls and nuances. Case Studies We look at three circuits from the mech interp literature to demonstrate that faithfulness metrics are highly sensitive to the details of experimental setup. Indirect Object Identification Circuit The IOI circuit is the most well known circuit in a language model. It computes completions to prompts of the form: "When Mary and John went to the store, John gave a bottle of milk to ____" Image from: Wang et al. (2022) The circuit is specified as a graph of important attention heads (nodes) and the interactions between them (edges) as applied to a specific sequence of tokens. The authors report that the circuit explains 87% of the logit difference between the two name tokens. They find this number by passing some inputs to the model and ablating all activations outside of the circuit. Then they measure how much of the logit difference between the correct and incorrect name logits remains. However, an important detail is that they arrived at this number by ablating the nodes (heads) outside of the circuit, not by ablating the edges (interactions between heads) outside of the circuit. So they don't ablate, for example, the edges from the previous token heads to the name mover heads, even though these are not part of the circuit (effectively including more edges in the circuit). We calculate the logit difference recovered (defined below) when we ablate the edges outside of the circuit instead. They ablate the heads by replacing their activations with the mean value calculated over the "ABC distribution", in which the names in the prompts are replaced by random names.[3] In our experiments, we also try resampling the activations from different prompts (taking individual prompt activations instead of averaging). [IOI] Logit difference recovered (%) with resample and mean ablations (100% is best) 1. Nodes/Specific Toks = Ablate nodes not in the circuit (at relevant token positions). 2. Edges/Specific Toks = Ablate edges not in the circuit (at relevant token positions). The first thing that jumps out from the box plots above is the very large range of results from different prompts. The charts here are cut off and some points are over 10,000%. This means that although the average logit difference recovered is reasonable, few prompts actually have a logit difference recovered close to 100%. And we see that ablating the edges instead of the nodes gives a much higher average logit difference recovered - close to 150% (which means that the isolated circuit has a greater logit difference between the correct and incorrect names than the un-ablated model). So the edge-based circuit they specified it is much less faithful than the node-based circuit they tested. The authors calculate the 87% result as the ratio of the expected difference (over a set of prompts) in the ablated output and the expected difference in the normal output [left side of figure below]: Logit Difference Recovered=E[F(x)correct−F(x)incorrect]E[M(x)correct−M(x)incorrect]×100. Where M(x)correct is the output logit for the correct name token when running the un‑ablated model on input x and F(x)correct is the output logit for the correct name token when running the model on input x and ablating all activations outside of circuit C with values A (similarly for incorrect, with the incorrect answer being the wrong person's name). But the more natural metric would be the expected ratio of the difference in the ablated output and the difference in the normal output [right side of figure below].[4] E[F(x)correct−F(x)incorrectM(x)correct−M(x)incorrect×100] Which gives a substantially different result. [IOI] Logit difference recovered (%) for the ABBA and BABA input distributions using different size ABC datasets to calculate the mean ablation. We also vary the method by which we average over datapoints. We ablate nodes not in the circuit (at relevant token positions) - this is how faithfulness is measured in the IOI paper. The authors combine two variations of the prompt format in their dataset that have different orderings of the names (ABBA or BABA). We separate these groups and find that BABA prompts score systemically higher. Finally, we find that the faithfulness scores increase as we increase the number of samples used to calculate the mean ablations (ABC Dataset Size in the figure above). We need a surprisingly small dataset size (about 4 prompts) to reproduce the 87% figure reported.[5] Our hypothesis is that the noisy means calculated over a small number of points push the model towards particular wrong answers, whereas the less noisy means merely destroy all the information present (as intended). Docstring Circuit The docstring circuit is a circuit in a tiny 2-layer, attention-only transformer that computes completions to prompts of the form: def item(self, string, server, node, test, file, first): """profit support street :param node: hat connection :param test: tip president :param Where the correct next token is file. The circuit looks like this: Image from: Heimersheim and Janiak (2023) The authors measure the faithfulness as the percentage of prompts where the highest logit is the correct answer and get a result of 58% (compared to the full model at 56%). As with the IOI circuit, they arrived at this number by resampling the activations of attention heads outside of the circuit, not by ablating the edges outside of the circuit. When we instead ablate the edges, we find a very different result. [Docstring] Percentage of correct top-1 outputs with resample and mean ablations (mean computed over clean and corrupt prompts). 1. Nodes = Ablate nodes outside of the circuit 2. Edges = Ablate edges outside of the circuit 3. Nodes (tokens) = Ablate nodes outside the circuit  (at relevant token positions). 4. Edges (tokens) = Ablate edges outside the circuit  (at relevant token positions). The authors also don't distinguish between tokens in their ablations, even though the circuit specifies particular token positions for each edge. When we make this distinction the faithfulness metric drops dramatically. Finally, we try the experiments using mean ablations, instead of resampling activations from different prompts. This dramatically increases the faithfulness scores of the edge-level circuit. Finally we note that, as with the IOI circuit, there is lots of variation between individual inputs for all ablation methodologies. [Docstring] Probability of correct answer with resample and mean ablations (mean computed per token over clean and corrupt prompts). 1. Nodes = Ablate nodes outside of the circuit 2. Edges = Ablate edges outside of the circuit 3. Nodes (tokens) = Ablate nodes outside the circuit  (at relevant token positions). 4. Edges (tokens) = Ablate edges outside the circuit  (at relevant token positions). Sports Players Circuit Nanda et al. find a circuit in Pythia 2.8B that can correctly distinguish the sport played by famous sports players. Their metric is the percentage of prompts for which the correct sport token is the highest logit out of the three possible sports. Image from: Nanda et al. (2023) They test their circuit by mean ablating the edges outside of the circuit. When we use resample ablations instead, the metric goes to zero. This case is a little different because their aim wasn't to find the full circuit but to identify the place in the model where factual recall occurs, so this result doesn't negate their hypothesis.[6] But it does again highlight the sensitivity of faithfulness scores to these nuanced choices in methodology.[7] [Sports Players] The percentage of prompts for which the correct sport has the highest output logit with mean and resample ablations. 1. Ablated Model = Ablate all activations 2. Circuit = Ablate all activation except those in the circuit 3. Full model = No ablations Methodology Should Match the Circuit We conclude that if a circuit is specified as a set of edges, it should be tested using edge ablations. And if it is specified at a chosen set of token positions it should be tested with these. But in other aspects there often isn't a clearly correct methodology. For example, it's unclear whether mean ablations are better than resample ablations for a particular experiment - even though this choice can dramatically change the outcome. Optimal Circuits are Defined by Ablation Methodology We argue that because faithfulness metrics are sensitive to experimental methodology, it actually does not make sense to ask what the optimal circuit for a set of prompts is. The task is defined in part by the ablation methodology - you have to specify how you will isolate the circuit. Do you want your IOI circuit to include the mechanism that decides it needs to output a name? Then use zero ablations. Or do you want to find the circuit that, given the context of outputting a name, completes the IOI task? Then use mean ablations. The ablation determines the task. As a concrete example where this insight was overlooked, we review Conmy et al.'s experiments using automatic circuit discovery on two Tracr models. These are tiny transformers where the weights have been compiled to exactly implement some program - so we fully understand all of the internal components. The authors choose "ground-truth" circuits as the edges required to recover full performance with zero ablations. But their circuit discovery algorithms use resample ablations to discover the circuit. Some of the information required to perform the task occurs in all of the inputs, so resampling activations doesn't disrupt the flow of this information. Whereas zero ablating does destroy the flow, requiring more edges to be included in the circuit. Furthermore, two of their algorithms, SP and HISP, ablate nodes to discover the circuits, but they specify the circuits in terms of edges. They report fairly weak results for all three algorithms. But when we redefine the "ground-truth" circuits as the edges required to recover performance with resample ablations and adjust SP and HISP to use edge ablations to discover the circuit, we find that all three algorithms perfectly recover the "ground-truth". [Tracr Reverse and Tracr X-Proportion] Conmy et al. define the "ground-truth" circuit as those edges required to recover performance with zero ablation. But their circuit discovery algorithms are tuned to find the edges (or nodes in the case of SP and HISP) required to recover performance with resample ablations. When we adjust the circuit discovery algorithms to search for the edges required to recover performance with resample ablations, we find that all three algorithms have perfect performance. This example demonstrates the point that the correct circuit for a prompt alone is undefined. The method by which the circuit is isolated is crucial in determining the optimal set of components. When the ablation methodology for the circuit discovery algorithm and the "ground-truth" circuit are mismatched, the algorithm underperforms. This is particularly important because circuit discovery algorithms are often tested by their ability to recover "ground-truth" circuits from the literature (eg, [1], [2], [3]) - without considering whether the circuit matches the algorithm. AutoCircuit We release a Python library called AutoCircuit with the paper that implements edge ablation and circuit discovery highly efficiently. A previous post explains the algorithm in detail, but the short version is that it is much faster than previous implementations. Thanks to Arthur Conmy for his generous assistance in understanding and reproducing his work on Automatic Circuit Discovery and his insightful comments. Thanks to Adam Gleave, Lawrence Chan, Clement Neo, Alex Cloud, David Bau, Steven Bills, Sam Marks, Adrià Garriga-Alonso, Stefan Heimersheim, Nix Goldowsky-Dill and our anonymous reviewers at COLM 2024 for their invaluable feedback and suggestions. Thanks to Bryce Woodworth for his help and encouragement. ^ See the paper for explanations of all of these terms. ^ We review seven previous works and find that they each use a different set of choices: ^ For example: "When Peter and Jackie went to the store, Frank gave a bottle of milk to ___". ^ Reviewer jy1a actually points this out in their review. But the authors respond as the mean operation is linear, the average difference in logits and the difference in average logits are the same. which is incorrect. Consider the example where M(x)clean−M(x)corrupt∈{−1,1} and E[M(x)clean−M(x)corrupt]=1e−9 If F(x)clean−F(x)corrupt≡1 then E[F(x)clean−F(x)corrupt]E[M(x)clean−M(x)corrupt]=1e9 while E[F(x)clean−F(x)corruptM(x)clean−M(x)corrupt]∈[−1,1] The key insight is that E[1y]≠1E[y] ^ There appears to be an oversight in the official open source implementation of IOI in which the specified number of prompts for the mean ablation is divided between all of the templates. This is not the same codebase the authors used for their experiments but it seems likely that it was copied over and that this is the explanation for the seemingly small dataset size used to calculate the mean ablation. ^ In particular, when we ablate the circuit, rather than the complement of the circuit, we destroy the performance with both types of ablations. This suggests that they have actually identified the part of the model responsible for factual recall. ^ Mean ablation gives 33% accuracy because that is the chance of randomly choosing the correct of three sports. Resample ablation on the other hand actively biases the model towards a particular different sport, resulting in 0% accuracy.
2024-07-12
https://www.lesswrong.com/posts/LmCJnM2w4jaanHwmK/superposition-self-modeling-and-the-path-to-agi-a-new
LmCJnM2w4jaanHwmK
Superposition, Self-Modeling, and the Path to AGI: A New Perspective
Peterpiper
Executive Summary: Current feed forward AI faces a tradeoff between intelligence and self-modeling capability due to dense superposition in upper layers. This limitation potentially hinders advanced meta-learning and AGI development. A proposed solution involves using infinite context length and recursive training loops. This approach could allow for high-fidelity self-modeling while maintaining the benefits of dense superposition. Parallels with biological cognition, including hemispheric specialization and embodied cognition, offer insights for future AI development. Full Post: The Superposition Dilemma in AI Modern neural networks, particularly in their upper layers, exhibit dense superposition - a feature where individual neurons or groups of neurons represent multiple concepts simultaneously. This characteristic allows for rich, complex representations and correlates with the network's overall intelligence and pattern recognition capabilities. However, this same density poses a significant challenge for self-modeling and meta-learning: Self-Modeling Difficulty: Dense superposition makes it nearly impossible to cleanly segment out specific functionalities or representations, hindering accurate self-modeling. Compounding Errors: In attempting iterative self-modeling, errors compound rapidly due to the entangled nature of representations. Meta-Learning Limitations: The inability to perform high-fidelity iterative self-modeling severely limits the depth of meta-learning achievable. The Tradeoff There appears to be a fundamental tradeoff between a network's capacity for complex representations (correlated with "intelligence") and its ability to perform clear, iterative self-modeling. This tradeoff becomes particularly evident when considering networks of a given size and training regimen. A Potential Solution To overcome this limitation, we propose leveraging two key concepts: Infinite Context Length: Allowing the model to store its entire state explicitly in the context. Recursive Training Loops: Training the model to perform self-modeling tasks recursively, with each iteration explicitly represented in the context. This approach essentially offloads the self-modeling task from the neural architecture to the context space, potentially enabling high-fidelity, deeply recursive self-modeling while maintaining the benefits of dense superposition for intelligence. Biological Parallels Interestingly, this computational tradeoff and its potential solution have parallels in biological cognition: Hemispheric Specialization: The brain's hemispheric structure might be a biological approach to balancing complex pattern recognition with clearer self-modeling capabilities. Embodied Cognition: Humans appear to use their bodies to model cognitive processes, analogous to how the proposed AI solution uses the context window. Implications and Future Directions This perspective opens up new avenues for AI research and development: Novel Architectures: Designing AI systems that can dynamically balance representational power and self-modeling capability. Embodied AI: Incorporating forms of 'embodied' context in AI systems to support advanced cognitive capabilities. Meta-Learning Advancements: Potentially enabling deeper, more effective meta-learning in AI systems. AGI Development: Offering a possible path to overcome current limitations in achieving artificial general intelligence. Conclusion The tradeoff between intelligence and self-modeling capability in AI systems presents both a challenge and an opportunity. By understanding and addressing this tradeoff, potentially through approaches inspired by biological cognition, we may be able to develop AI systems capable of both rich, complex representations and high-fidelity self-modeling - key steps on the path to AGI.
2024-06-30
https://www.lesswrong.com/posts/xXqhbiDxjWKZxgbE5/the-xerox-parc-arpa-version-of-the-intellectual-turing-test
xXqhbiDxjWKZxgbE5
The Xerox Parc/ARPA version of the intellectual Turing test: Class 1 vs Class 2 disagreement
hamishtodd1
I've been reading the excellent book The Dream Machine about ARPA and PARC, the research communities that invented the personal computer (before them computers were just tools for military research projects). Xerox PARC was managed by Bob Taylor, a great manager who drew on the management style of many great people before him. PARC is an astonishing example of far-out thinking colliding with cutting-edge technology, and it involved many extremely intelligent free-thinkers (read: difficult-to-"manage" people) coming together. There are many PARC-isms; another one is the fact that they had meetings on beanbags instead of chairs, apparently in order to stop people rising to their feet to denounce others. Taylor distinguished class 1 and class 2 disagreement. In our terminology, a class-2 disagreement is one in which both sides could pass an Ideological Turing Test(ITT) - that is, if they paused and were asked to state their opponent's opinion, they could do so in a way that their opponent would be happy with (perhaps going so far as to successfully convince someone who didn't know better that they actually held their opponent's opinion, hence the name). Class 1 is apparently "just both sides yelling at each other". This, I suppose, skips over the fairly likely situation that one party could pass an ITT but the opposing party can't. But perhaps that's not a useful distinction; the one-sided situation is an unstable equilibrium (eg rage-inducing for the side that has taken the time to check their ability to pass the ITT). So it's a high and rarefied standard to meet for something called "Class 2" (one wonders if they had class 3, and that's why no lab has made as significant breakthroughs as PARC since their glory days). I think in order to get to it, it can sometimes take a very long time; I have an ongoing debate intelligence with two friends of mine that has lasted days of discussion (spread across a years!), and in spite of huge amounts of patience on their part, they don't think they can pass the ITT for my position. But, I endlessly appreciate them for their desire to pass it; plausibly that's a part of what friendship is. I mean that partly in the cute way, but also partly with a sad implication: that we're being unrealistic to hope people including ourselves will try as hard pass it for our enemies.
2024-06-30
https://www.lesswrong.com/posts/rofQseMb5ojESyASw/llms-universally-learn-a-feature-representing-token
rofQseMb5ojESyASw
LLMs Universally Learn a Feature Representing Token Frequency / Rarity
Sean Osier
Summary LLMs appear to universally learn a feature in their embeddings representing the frequency / rarity of the tokens they were trained onThis feature is observed across model sizes, in base models, instruction tuned models, regular text models, and code modelsIn models without tied weights, this feature is present in both models' embedding and unembedding matricesThis feature can be used to find "glitch tokens" (e.g. SolidGoldMagikarp)In limited experiments using this feature for activation steering, the model pays relatively little attention to this feature. As such, it's likely not a strong target for activation steering efforts Definitions / Notation I use Olah et al.'s definition of a feature as a direction in the vector space of a model's weights / activations in a given layer. Tied weights are a common LLM architectural choice where in order to reduce the number of parameters in the model and align the embeddings and unembeddings, the same weights matrix is used for both. Of the models discussed here the GPT 2 series, Gemma series, and Phi 3 Small models use tied weights. All others have separate weight matrices for their embeddings and unembeddings. For brevity, in models with tied weights, I will refer to both the embeddings and unembeddings as simply the model's "embeddings" since the two are identical. Linear probing is just a fancy name for using linear regression (or classification) on the internal representations (weights or activations) of models. How to Find the Feature Ideally, get access to the model's training dataWhen this is not possible (i.e. for basically all well known open source LLMs), I use the OpenWebText dataset as a rough proxyLoad the model's tokenizerTokenize the dataset and count the frequency of each token in the tokenizer's vocabulary. The end result should be a vector of length V (the size of the vocabulary) with the counts ordered by token ID(Optional) Because steps 1-3 can be slow, save the resulting counts somewhere for future useSaved counts for all the models I studied as well as code to do OpenWebText token counting can be found hereLoad the embedding to be analyzed. This should be a matrix of size V×D where D is the dimensionality of the embeddingNote: It may be necessary to transpose the matrix if it's an unembedding and is of shape D×VCalculate all D principal components of the embedding matrixUsing the principal components, project the matrix into its principal component form. We'll call this resulting matrix the PC_matrix. It will also be of size V×DNote: We're not doing any dimensionality reduction, just rotating the axesRun a linear probe on each column of the PC_matrix separately, regressing the principal component values vs. log10(token_count+1) for each token. While performing each regression, evaluate the p-value of that principal component and keep a list of all principal components with a p-value ≤0.05DNote: 0.05D= the standard 0.05 statistical significance cutoff after applying a Bonferroni correction to account for the sheer number of linear probes runIf p-value ≤0.05D is too strict, you may consider relaxing this cutoff, but this cutoff generally works wellRun one final linear probe now using all the principal components kept during step 8 vs. log10(token_count+1) and save the final regression weights for eachFinally, to get the actual feature vector, you can simply calculate a weighted sum of the principal components using the weights from step 9 and (optionally) normalize Code for this process can be found here, and the final feature vectors for all the models I studied can be found here. Results I was able to find the feature in 20 different models: 18 popular open source LLMs and their variants plus two smaller, GPT-style models trained on a fully known training dataset for validation. This includes a variety of different model sizes (from 10M to 70B parameters), both base models and instruction tuned models, and even code models. The results are summarized in the table below: You'll notice the feature correlates very strongly with the log token frequency (typically ~0.9). To visualize this, here's a scatter plot for one specific model (GPT 2 - Small). (Scatter plots for all other models, plus code to replicate the results can be found here.) Interestingly, we see high correlations for both model embeddings and unembeddings. In some cases, we even observe that the embedding correlation is stronger than the unembedding correlation, despite initially expecting the feature to be stronger in unembeddings. The weakest correlations were observed for the CodeGemma models. My hypothesis for this is that, being code models, the data they were trained on differs more substantially from our proxy dataset (OpenWebText) than it did for the other, more standard text models. I believe this is also the case for the regular Gemma models to a lesser extent, due to more of their training data being non-English text. (OpenWebText filters out non-English webpages.) All that said, the only way to truly confirm the feature measures training data token frequency is to use a model's actual training data instead of a proxy. As such, I trained two small character-level language models on the works of Shakespeare using Andrej Karpathy's nanoGPT repo. I trained one version with tied weights (in true GPT fashion) and one without. This experiment yielded the single highest correlation observed across the models investigated, generally confirming that yes, the feature actually measures training data token frequency. Possible Uses I also explored a couple possible uses for this feature, beyond improving model understanding alone. Identifying Glitch Tokens One use for this feature is as a way to identify candidate "glitch tokens" a la the infamous SolidGoldMagikarp token. Indeed, one leading hypothesis for the existence of these tokens is that they're tokens that appear in a model's tokenizer but never (or only very rarely) appear in the model's actual training data. As such, this is an extremely natural use for this feature. While I haven't conducted a thorough review, initial inspection suggests the tokens scored as least frequent by the feature align well with the list of anomalous tokens shared in SolidGoldMagikarp II: technical details and more recent findings. Activation Steering I also performed some initial exploration on how this feature could potentially be used to influence model behavior through activation steering. I did this by simply adding the feature vector times some chosen constant c to the final hidden state of the model before the unembedding matrix is applied. Note, due to computational / time constraints, this investigation was conducted on GPT 2 - Small only, and so the findings may or may not generalize to the other models. One important observation from this exercise was that it required very large / small steering vectors (i.e. very large / small values of c) to change model outputs. Indeed, at its most extreme it typically required |c|>1000 to push models outputs to the extremes of the feature: always outputting the most or least frequent token according to the feature. With other, unrelated features this can typically be done with much lower values c, e.g. |c|>100 or even less. This suggests the model pays relatively little attention to this feature. Ultimately, because of this, I don't believe it's a strong target for activation steering efforts. Nevertheless, the activation steering did have the effect you might anticipate of pushing the model to use more common / rare tokens. This is demonstrated in the activation steering example pictured below. In this example, the default first token output is " Paris", but by pushing the feature in an increasingly negative direction (i.e. more negative values of 𝑐) we're eventually able to force it to output the rarer " Cologne" token. Conversely, we can also push it in the other more positive direction to force it to output first the " London" and then the " New" token instead, both of which are more common tokens. Final Thoughts / Next Steps Obviously, this is not a surprising feature. It definitely seems like a feature models should learn. As such, I hope that this work: Was able to demonstrate not just that models should learn this feature, but that they universally do in practice, in real large-scale modelsEnables others to quickly find this feature in new modelsEncourages others to try to demonstrate the universality of features they find (or at least some of the most important ones!) Given this, a natural next project, building off this one, would be to try to demonstrate the universality of a "feature" for token bigram statistics. The theory from A Mathematical Framework for Transformer Circuits suggests this should be related to the product of embedding and unembedding matrices, but to my knowledge this hasn't yet been empirically demonstrated as universal in LLMs. In fact, if the bigram statistic theory is correct and universal, that could explain why GPT 2 - Small was relatively insensitive to my attempts to steer it using the token frequency feature. Specifically, my current hypothesis is that the feature exists and is important to the model not for providing some sort of baseline prior on the output tokens, but rather as a feature the model uses to learn / store bigram statistics (which ultimately become a better prior it can use / update). Acknowledgements I want to express my gratitude to all the companies who made their models open source (OpenAI, Meta, Mistral, Google, and Microsoft). This work would have been impossible otherwise as I definitely don't have the means to train any of these models myself! I also want to thank Hugging Face for hosting the models and providing helpful code for pulling / interacting with them, and of course Andrej Karpathy for building and sharing his nanoGPT repo.
2024-06-30
https://www.lesswrong.com/posts/Ejt6yHChar5TmxhfP/my-5-step-program-for-losing-weight
Ejt6yHChar5TmxhfP
My 5-step program for losing weight
nikita-sokolsky
I can’t claim to be an expert on the subject of weight loss but wanted to share the steps I’ve followed to get from ~81 kg (178 pounds) to ~70 kg (154 pounds) over a period of 8 months. Several people have asked me for tips, so I’ve figured I might as well write it all down for future reference. Weight Gurus stats from Oct 2022 to Oct 2024 Axiom: CICO (Calories In - Calories Out = Weight loss/gain) works. If you’re a ~healthy individual and need to lose weight, it makes sense to just accept CICO as a given and use it as a mental shorthand. Of course there’s always exceptions: my own dad would eat at least 4,000 calories daily and never gained any weight. But if you’re reading this text, you’re probably not an exception. Step 0: accept that you lack willpower I’m not a fan of Nike’s Just Do It™ because it overlooks the fact that people’s willpower is finite and if you aren’t “just doing X” already, you probably won’t do it tomorrow without a change in circumstances. Hence the diet steps are all very gradual and try to get you to a plateau of discomfort before requiring the next sacrifice. Feel free to slow the progression down even further if that works better for you. Step 1: start measuring your weight and food intake (30 days) Buy these two products: A Wifi-enabled scale. I recommend this model but any option with phone synchronization works fine. The linked scales have an app which can export its data into Apple Health or Google Fit.Paid subscription for the Macrofactor app, which is currently the best app for tracking your calories. The app supports importing your weight data from Apple Health or Google Fit, which it can then use to track your daily energy expenditure and calculate your “true” weight by averaging out your weight measurements over a 7-day period. Every single morning you’ll get up, remove heavy items of clothing, step on the scale and ensure the weight is recorded. I specifically recommend getting a Wifi-enabled scale because it removes the extra friction of having to pull out your phone every morning to write down your weight. Then during the day you’ll want to track everything you eat or drink with as much precision as reasonably possible. MacroFactor screenshot Do not try to change your diet just yet! The goal is to build up an intuition for how many calories each food has, as well as how these calories affect your weight. I’ve been eating around 3,500 calories/day pre-diet, which was a shocking revelation as I didn’t realize just how much food I was consuming on a daily basis. Do this for 30 days in a row, counting from the first day when you’ve both weighed yourself and entered all of your food into the app. Step 2: stop eating added sugar (30 days) Here we’re still not restricting our calorie intake but adding a constraint: no added sugar. This includes not just sweets but any food where added sugars are more than 30% of the contents, which you can calculate from the Nutrition Facts label. In the photo below, a standard serving (40g) of cereal contains 3g of added sugars or 7.5%, so its fine. On the other hand, Frosted Flakes contain 12g of added sugar per a serving of 37g or 32.4%, so you would avoid eating them. You can still eat whatever fruit you want, as sugar in fruit is “natural” rather than added. The biggest issue with sugar is that its energy dense, so its easy to end up ingesting too many calories from sweet foods without feeling full. High sugar intake also causes a rapid increase in blood glucose levels, which explains why people are often sleepy after lunch. If you’re feeling hungry you can eat extra portions of other foods to compensate. The goal here is to introduce yourself to the concept of dieting without this feeling onerous. Do this for 30 days starting from your first added-sugar-free day. Step 3: stop eating past 7pm (30 days) This is where we’re getting into the truly hard part. Here we’ll still allow ourselves to eat whatever we want (other than added sugars) but introduce a hard stop at 7pm. The goal here is to reduce the number of opportunities for last-moment excess calorie intake. Pre-diet I would sometimes go to the fridge for a “little” snack, only to “accidentally” end up eating an entire loaf of bread with cheese. The rule also extends to liquid calories, so you should do your best to avoid alcohol or nutritious drinks past the cut-off time. In addition to being a frequent source of excess calories, late night food consumption can disrupt your sleep quality. I use Apple Watch to measure my sleep quality and there’s definitely a drop off on nights when I eat too close to bedtime. This can then lead to a viscious cycle where you consume lots of coffee and food in the morning to wake yourself up. Step 4: reduce your total calorie intake (25-50 weeks) Our next goal is to start a slow reduction in daily calories consumed until we reach a 250 calorie deficit. We will do this in increments of 5%/week to let your body adapt and reduce the risk of giving up. This is where MacroFactor will truly start to shine: Check MacroFactor for your average calorie intake for the past 2 weeks. Lets say it was 3,500 calories/day.Look up your daily energy expenditure in MacroFactor. Lets say its 2,100 calories.Our goal now is to get to a deficit of 250 calories/day. In our hypothetical this would require us to reach 2,100 - 250 = 1,850 calories. You can safely go as low as 500 calories/day if you really want but its better to set a manageable goal.At a rate of -5%/week it should take us roughly 15 weeks of slow intake reduction to get to our goal. Note that as you reduce your food consumption your body’s metabolism will slow down as well, so your daily energy expenditure will decrease. But don’t worry - as long as you keep entering your food and weight, MacroFactor will keep calculating this for you automatically. One pound of body weight is approximately equivalent to 3500 calories, so at a deficit of 250 calories/day (or 1,750 calories/week) you can expect to lose roughly 0.25 kilos/0.5 pounds per week. You would then stay at this level until you reach whatever is your target weight. The average American is overweight by 25 pounds so with this system they’d need 50 weeks to reach a healthy BMI. What should your target weight be? The medical answer is: in the BMI range of 18 to 25. My own answer is: whatever makes you happy when you look at yourself in the mirror. If you want a visible "six pack" you'll need to get below a BMI of 22, though the exact value will depend on your personal fat distribution and muscle volume. Word of warning: this part really, really sucks. You will feel hungry every single day and constantly dream of food. Step 5: maintain your weight Once you’ve reached your target weight you should increase your calorie intake to match your daily energy expenditure. At this point you will likely have developed a solid intuition for what foods have how many calories, as well as healthy eating habits. Personally I no longer enter my food into MacroFactor and sometimes skip weighing myself in the morning. Instead I simply adjust my food intake on a weekly basis: if I gain some weight I’ll eat a bit less, if I lose too much weight I’ll eat a bit more. Unfortunately the sense of hunger doesn’t fully go away when you’re in maintenance mode. I still crave for extra food every single day and actively have to remind myself to moderate my diet. But after about a year of dieting the sense of hunger sort of faded away and it doesn’t feel as bad as it used to. Wait, what about exercise?! Lots of people believe they can eat more if they just exercise more. Unfortunately our bodies are highly efficient relative to the density of modern food, so “exercising it away” is not a realistic plan. A single burrito bowl with guacamole from Chipotle contains roughly 1,000 calories, which is equivalent to: Walking: ~13.3 milesRunning: ~8.6 milesCycling: ~23 milesSwimming: ~2 hoursHigh-Intensity Interval Training (HIIT): ~1 hour 15 minutesWeightlifting (vigorous effort): ~2 hours 30 minutesRowing (moderate effort): ~1 hour 40 minutesPlaying Basketball: ~1 hour 40 minutesJumping Rope: ~1 hour Will you have the time to run 8.6 miles every single day for the sake of that extra burritto bowl? Or spend 2.5 hours at the gym? Most people don’t have enough or motivation for this much exercise, so its far easier to focus on reducing calorie intake instead. Don’t get me wrong - exercise is great but its also not the solution to weightloss for most people. Disclaimer I am not affiliated with MacroFactor in any way. I’m not your doctor and don’t know your personal health situation. Take all of the above with some grain of salt and tweak it wherever you feel like it makes sense.
2024-06-30
https://www.lesswrong.com/posts/aDi7CmwCrJ5T3Fabc/anti-circumcision-essay-1-of-3-according-to-their-critics
aDi7CmwCrJ5T3Fabc
Anti-Circumcision Essay 1 of 3: According To Their Critics, Intactivists Are The Best-Behaved Protest Movement In History
Harry Stevenage
Context section: Who are you and why are you posting this here? First time poster on LessWrong, but I've been in the rationalist world since 2015 or so. Usually just participating in discussions and not writing essays. The people who know me know that this issue isn't the only thing I discuss. In fact, I'd say I bring it up fairly infrequently, although others may disagree depending on their sample size of total conversations with me they've had. The reason I’m posting this here is because it came up as a topic in the general rationalist discussion-sphere on X. This first essay in the series just deals with my rebuttal to Karella's tweets, who isn't really rat-adjacent as far as I know. But essays two and three in this series are my rebuttal to tweets posted by Mason, who is. Her tweets got enough traction in the rationalist community that I decided it would be worth it to write a full response with my perspective. Body of essay: A few days ago a sci-fi author named Karella tweeted this. It came to my attention because Mason, someone in the rationalist world, retweeted her with further thoughts on the matter, which I’ll get to in a subsequent essay. The evidence Karella presents to show that the intactivists are weirdos is one self-published comic book from 2011 made by one person which featured a superhero fighting against infant circumcision in various cultures, i.e. medical circumcision, Jewish ritual circumcision, African ritual circumcision, Filipino ritual circumcision, etcetera. There are lots of these not-particularly-high-quality activist publications where a superhero stands up for some cause and the issue at hand is represented by some made-up supervillain. Like Superman vs Nick-O-Tine, for instance. The problem is that in the part of this comic that dealt with ritual Jewish circumcision, the supervillains were visually represented in the form of horrible antisemitic caricatures. But… this comic was published in 2011. If the test is “has anyone, even a single person, said something racist while advocating for your cause in the past 13 years?” then no cause on Earth could pass that test. To the extent that the existence of this comic book is permanently disgraceful to the entire cause of intactivism, every other cause is way way way more permanently disgraced than this one, by things that happened much more recently. I would assume that almost every rationalist would be aware of the problems with these sort of dubious guilt-by-association tactics. There have been many essays written about this. Chinese Robber, Isolated Demand for Rigor, etcetera. I’m not going to belabor this point too much, I would imagine that for most people reading this the concept is already intuitive. I’m not crazy about NIMBYism, but it wouldn’t tell me much if a single NIMBY made a racist comic book 13 years ago. I would have the self-restraint not to pretend to myself and to the world like that was a good argument. The second example of deranged behavior Karella gave is more recent, but in my view even less persuasive. As an attention-getting method of protest, the Bloodstained Men organization wears clothes with blood on the crotch while carrying signs. But: wearing attention-getting outfits is actually totally commonplace in progressivism activism. It’s not exactly a high level of rhetoric, but there’s nothing inherently crazy about showing up to a protest in Handmaiden’s Tale cosplay. Or, for a more one-to-one comparison, feminist campaigners literally wear the exact same outfit to protest lack of access to menstrual hygiene products. The implicit message is “this may seem like a preposterous visual to you, but this outfit is a constructed visual representation of what happens when women can’t get these products. We know it’s not good to look at, that’s the point.” Similarly, the implicit message of the Bloodstained Men outfit is “this may seem like a preposterous visual to you, but this outfit is a constructed visual representation of the reality of parts of babies' genitals being amputated for no sensible reason. We know it’s not good to look at, that’s the point.” There are probably people reading this who think that Handmaiden’s Tale cosplay, bloodstained costumes protesting lack of access to menstrual pads, and bloodstained costumes to protest infant circumcision are all deeply uncool, or “cringe”. Or maybe they draw the line somewhere else, like they’re okay with the Handmaiden’s Tale cosplay but not the bloodstained outfits. Even if it doesn’t exactly match with your sense of how a protest ought to be conducted, even if it doesn’t seem like the best way to do it... it’s not deranged behavior, to protest lack of access to menstrual hygiene products in this way. It’s not like only someone undergoing a total psychotic break from reality would think that this is a reasonable way to make an invisible consequence of a policy decision more visible. Are these constructed visual representations a weak form of argumentation? They may not be on the same level of rhetoric as, to pick an example totally at random, a well thought out blog post, but it’s silly to act like we’re above them as a species. A photo of a child dying of malaria is a reasonable thing to show people, even if it’s not a statistic. A statue of a woman being stoned to death for adultery matters, even if it’s not video footage of the event happening. A short story based on what life might be like for a typical family in Madagascar if humanity fails to sufficiently address climate change is a reasonable thing to publish. Well-researched and properly contextualized statistics are good to have, but one of the things that makes them good to have is that they are a statistic of something. And there does still need to be some connection with the thing that the statistic is about. It could be that you think that both the women’s hygiene costume and the intactivist costume are counter-productive to their cause. I’m not sure that’s true. I think that if a random person sees someone in a bloodstained costume holding a sign that says “INTACT GENITALS ARE A HUMAN RIGHT” or “GENITALS SHOULD NOT HAVE SCARS”, their reaction would be, a large percentage of the time, “Wow, what a ridiculous costume! Although I guess now that I think about it, it is kind of a strange and gruesome practice…” Different people would react in different ways, but my educated guess would be that these protests are a net positive. But even if people really were that superficial and uncharitable about how they interpret these things, it doesn’t mean you need to be. You can believe that these protests are net negative and still recognize that in a saner world, people would judge this sort of thing in the sense that it was meant, and it would at worst do no harm to its cause. If you have anything in your life that you care about that you feel isn’t given enough attention, you can understand the impulse to present people with an image or representation of the issue in question. It’s not an alien concept. I don’t think that I’m the first to point out that a lot of people have perhaps taken the other route, and have decided that since trying to confront society about anything is so terrifyingly uncool, they’ll just descend into detached irony, or nihilism. This is something that’s come up in the world of AI risk recently. Some people are saying that they hate how the AI risk community (or AI notkilleveryoneism community, to use a more descriptive term) has gotten less intellectual and less dignified. And those people are being criticized. The criticisms are that there’s actually something to fight for here, and that if all you care about is decorum, maintaining the vibes of the community, and not doing anything that could possibly give anyone a chance to mock you, fairly or unfairly, then you’re probably not going to make sensible tradeoffs about what to stand up for and when. Aside from the decorum thing, there’s also the phenomenon where people go to great lengths to justify inaction for the sake of simple laziness. It’s not the exact mirror of the debate that constitutes the main subject of this essay, but I thought I’d bring it up because it’s something people were discussing recently and there’s some amount of conceptual overlap. Pardon the digression. Anyway, that’s pretty much it for Karella’s side of things. It seems clear to me that she is holding some to higher standards and others to lower standards. This is I guess the part of the essay where I could ascribe her bias to unhealthy social norms about how men aren’t supposed to suffer or complain, or something. I do feel like Karella is making a pretty striking oversight in her argument, which might be an indicator of something or other, but I don’t really want to accuse her of any specific bias. It could have happened a lot of different ways, some of them not all that sinister. People aren’t always totally fair; it happens. If they were then we wouldn’t need blog posts, and that doesn’t sound so great either. Next time: me responding to two tweets from Mason with two essays. Hopefully the points I make will feel to others like they were worth making in detail. Link to Part Two: Physical and Psychological Realities
2024-06-30
https://www.lesswrong.com/posts/wg6E3oJJrNnmJezNz/a-scaling-monosemanticity-explainer
wg6E3oJJrNnmJezNz
A “Scaling Monosemanticity” Explainer
latterframe
Coauthored by Fedor Ryzhenkov and Dmitrii Volkov (Palisade Research) At Palisade, we often discuss latest safety results with policymakers and think tanks who seek to understand the state of current technology. This document condenses and streamlines the various internal notes we wrote when discussing Anthropic's "Scaling Monosemanticity". Executive Summary Research on AI interpretability aims to unveil the inner workings of AI models, traditionally seen as “black boxes.” This enhances our understanding, enabling us to make AI safer, more predictable, and more efficient. Anthropic’s Transformer Circuits Thread focuses on mechanistic (bottom-up) interpretability of AI models. Their latest result, Scaling Monosemanticity, demonstrates how interpretability techniques that worked for small, shallow models can scale to practical 7B (GPT-3.5-class) models. This paper also paves the way for applying similar methods to larger frontier models (GPT-4 and beyond). Key Findings of Scaling Monosemanticity Anthropic has demonstrated how to extract high-level features from AI models. They identified parts of the model’s inner structure that correlate with language properties such as verb tense, gender, helpfulness, lies, and specific subjects like political figures, countries, or bioweapons. These features are then mapped, allowing researchers to review and analyze them for a deeper understanding of the model. Map of the features related to ‘biological weapons’ in Claude 8B model ("AI brain scan"). See here for an interactive map. Identified features can then be adjusted to control the model’s behavior. For instance, models can be modified to avoid sensitive topics, be children-appropriate, give biased opinions, or output subtly incorrect code while concealing the errors. This is done by artificially increasing or decreasing the effect a feature has on the final output (“steering” output). Left-hand side: behavior of the default model. Right-hand side: “brain sciences” feature amplified to 10x. Implications The results from Anthropic’s paper are early but promising. If this thread of research continues to be successful, it could make tuning AI to specific tasks more accessible and cheap. In the short term, this could disproportionately increase unregulated open-weight model capabilities. In the longer term, this might enable frontier AI labs to build safer and smarter models. Efficiency Traditional AI interpretability methods require researchers to hypothesize a feature, create a dataset, and run experiments for each feature to be identified or adjusted. Anthropic’s approach, on the other hand, builds an accessible dictionary of all found features of the interpreted model at once. This is likely to accelerate AI interpretability research. Risks and Safety Frontier labs will benefit from using this approach alongside other methods like RLHF and input/output filtering to make their API models safer. Anthropic’s approach requires access to a model's full weights and biases, preventing outsiders from using it on private API-only models. For open-weight models, the implications are twofold: on one hand, parameter-efficient fine-tuning is already known to efficiently strip safety fine-tuning from open-weight models, and Anthopic’s method introduces no new risks. However, advances in interpretability could enhance open-weight model capabilities in hacking and other risky areas. Technical Summary Early Deep NLP models (word2vec, 2013) enjoyed the following properties: 1. Directions in the space are meaningful (if you go from “man” to “woman”, record the direction, and go again from “king”, you end up at “queen”) 2. Distances in the space are meaningful (“apple” and “orange” are closer than “apple” and “trains”) Word2vec embedding space. Left-hand side demonstrates property (1), right-hand side property (2). These properties were lost with increasing model complexity. Anthropic’s approach brings them back and intends to make them stronger by introducing an additional property: 3. Coordinates in the space are human-interpretable (we can identify the word by its absolute coordinates along a number of coordinate axes / we know in what coordinates to look for a word we need). When these properties hold, a researcher can explore the elicited features-directions by looking at inputs that activate them or plotting the feature map. Once they identify an interesting feature, they can adjust the model to generate outputs correlated or uncorrelated with that feature, effectively steering the outputs along interpretable axes. Relation to other steering methods The two standard approaches for training frontier large language models (LLMs) are reinforcement learning (RL) through RLHF or Constitutional AI, and supervised fine-tuning (SFT) using parameter-efficient techniques like Low-Rank matrix Adaptation (LoRA). These methods represent different ways of goal specification: RL implicitly specifies the target model behavior through assessors ranking candidate completions, while SFT directly trains a model on question-answer pairs to elicit specific foundation model knowledge. RLHF approaches typically collect tens to hundreds of thousands of assessor preferences and amplify them with a reward model. In contrast, SFT requires only thousands of data points. Anthropic’s dictionary learning offers a way to specify target behavior in terms of model features, potentially eliminating the need for a fine-tuning dataset completely. We expect this to make model adaptation more accessible in the span of 1-2 years. Appendix 1: Anthropic’s premises The property of interpretable coordinate bases (property 3 above) generally doesn’t hold for deep neural networks. Anthropic explains this with the superposition hypothesis, which states that neurons in the network are polysemantic and activate for a range of inputs. For example, one neuron might activate for academic citations, English dialogue, HTTP requests, and Korean text, making its meaning unclear to humans. In previous research, Anthropic showed that training sparse autoencoders (SAEs) on shallow neural networks allows them to build a dictionary of basis vectors. These vectors form a higher-dimensional space that satisfies property 3. Scaling Monosemanticity scales this approach to deeper and larger networks with minimal algorithmic improvements. This work proves that SAEs are effective for deeper, multilayered networks and follow scaling laws similar to those of large language models (LLMs). When trained on sparse data, models often attempt to store more features than they can learn, allowing for some error tolerance. This process produces polysemantic neurons. To address this, Anthropic trains a separate neural network, referred to as a “sparse dictionary.” This dictionary maps these entangled features into a higher-dimensional space. Here, each feature can be interpreted individually by assigning it to a separate axis. Appendix 2: Related parts of the interpretability landscape Today’s frontier AI is built on Transformer architecture. To give context for Anthropic’s results, we list several interpretability methods that aim to make transformers interpretable in different ways: A transformer block. A given AI model has N blocks. An interpretability method might intervene at Attention or Feed Forward boxes or one of the arrows.Linear Probes & Structural Probes test specific hypotheses rather than to explore the model’s general structure.Logit Lenses get textual responses directly from intermediate layers. They allow empirically tracking a model’s “thought process” through the forward pass.Layer-Wise Relevance Propagation identifies the contribution of each model layer to the final output, highlighting important layers and their roles. It discovered positional and induction heads.Dictionary Learning gives word2vec-style properties to LLMs. This is the topic of this writeup.Activation Addition & Refusal Orthogonalization find empiric steering vectors by subtracting model activations.
2024-06-29
https://www.lesswrong.com/posts/gcpNuEZnxAPayaKBY/othellogpt-learned-a-bag-of-heuristics-1
gcpNuEZnxAPayaKBY
OthelloGPT learned a bag of heuristics
jylin04
Work performed as a part of Neel Nanda's MATS 6.0 (Summer 2024) training program. TLDR This is an interim report on reverse-engineering Othello-GPT, an 8-layer transformer trained to take sequences of Othello moves and predict legal moves. We find evidence that Othello-GPT learns to compute the board state using many independent decision rules that are localized to small parts of the board. Though we cannot rule out that it also learns a single succinct algorithm in addition to these rules, our best guess is that Othello-GPT’s learned algorithm is just a bag of independent heuristics. Board state reconstruction Direct attribution to linear probes indicate that the internal board representation is frequently up- and down-weighted during a forward pass.Case study of a decision rule:MLP Neuron L1N421 represents the decision rule: If the move A4 was just played AND B4 is occupied AND C4 is occupied ⇒ update B4+C4+D4 to “theirs”. This rule does not generalize to translations across the board.Another neuron L0377 participates in the implementation of this rule by checking if B4 is occupied, and inhibiting the activation of L1N421 if no. Legal move prediction A subset of neurons in mid to late MLP layers classify board configurations that are sufficient to make a certain move legal with an F1-score above 0.99. These neurons have high direct attribution to the logit for that move, and are causally relevant for legal move prediction.Logit lens suggests that legal move predictions gradually solidify during a forward pass.Some MLP neurons systematically activate at certain times in the game, regardless of the moves played so far. We hypothesize that these neurons encode heuristics about moves that are more probable in specific phases (early/mid/late) of the game. Review of Othello-GPT Othello-GPT is a transformer with 25M parameters trained on sequences of random legal moves in the board game Othello as inputs[1] to predict legal moves[2]. How it does this is a black box that we don’t understand. Its claim to fame is that it supposedly Learns an internal representation of the board state;Uses it to predict legal moves which if true, resolves the black box in two[3]. The evidence for the first claim is that linear probes work. Namely, for each square of the ground-truth game board, if we train a linear classifier to take the model’s activations at layer 6 as input and predict logits for whether that square is blank, “mine” (i.e. belonging to the player whose move it currently is) or “yours”, the probes work with high accuracy on games not seen in training. The evidence for the second claim is that if we edit the residual stream until the probe’s outputs change, the model’s own output at the end of layer 7 becomes consistent with legal moves that are accessible from the new board state. However, we don’t yet understand what’s going on in the remaining black boxes. In particular, although it would be interesting if Othello-GPT emergently learned to implement them via algorithms with relatively short description lengths, the evidence so far doesn’t rule out the possibility that they could be implemented via a bag of heuristics instead. Project goal Our goal in this project was simply to figure out what’s going on in the remaining black boxes. What’s going on in box #1 - how does the model compute the board representation? How does the model decide if a cell is blank or not blank? How does the model decide if a cell is “mine” or “yours”? What’s going on in box #2 - how does the model use the board representation to pick legal moves? Results on box #1: Board reconstruction A circuit for how the model computes if a cell is blank or not blank WRT question 1a, we found a mechanistic circuit that Othello-GPT uses to compute if squares are blank or not blank in its internal representation of the board state. The circuit, described here (15p.), is used by Othello-GPT across a test set of 50 games that we looked at, although we didn’t check that Othello-GPT uses it exclusively. The main points are that The model uses different mechanisms to record if a tile was just played vs. if it was played earlier in the input sequence.In the first case, both the direct embedding and a “move detector neuron” in mlp 0 that’s assigned to each tile contribute to the residual stream in the direction “[current tile] is not blank” (as identified by the linear probes).In the second case, attention head 0.2 moves information about whichever tile was played 1 move ago to the current position, where it contributes to the residual stream in the direction “[tile played 1 move ago] is not blank"; attention head 0.3 does the same for the tile played 2 moves ago; and we conjectured but didn’t check that other heads in layer 0 do something similar for tiles played >2 moves ago. An example of a logical rule for how the model computes if a cell is “mine” or “yours” Wrt question 1b, we conjecture that Othello-GPT computes whether cells are “mine” or “yours” by aggregating many heuristic rules, rather than implementing a crisp algorithm with a short description length. We’ll provide evidence for this claim throughout the post. As a first piece of evidence, we find an example of a logical rule that Othello-GPT appears to implement in layers 0+1 along with a mechanistic explanation of how it does this. The rule and circuit that computes it are described here (10p.). The main points are that The rule reads: “If A4 is played AND B4 is not blank AND C4 is not blank, update B4+C4+D4 towards “theirs” in the internal representation of the board state.”The update on B4+C4+D4 comes from the firing of a specific neuron, L1N421.The logic “B4 is not blank” is mediated by an earlier neuron L0N377, which inhibits L1N421 if the token for B4 does not appear before A4 in the input string.Information about whether B4 appears before A4 in the input string is passed to L0N377 through attention heads 0.1, 0.4, and 0.5, which inhibit L0N377 if B4 appears before A4 in the input string. Intra-layer phenomenology We also studied whether the model internally represents each board square as “blank,” “mine” or “yours” across layers in many sample moves and games, as measured by applying the linear probe to the residual stream in each layer[4]. For example, Figure 1 shows the model’s internally assigned probability that each of the 64 board squares is “yours” at move 14 of game 7[5] (where the legend gives the ground-truth state of each square, and “accessible” squares are adjacent to currently filled squares but not legal). Figure 1: Board state representation across the residual stream indicated by a linear probe. High probe prediction value is an (imperfect) indicator for the internal board state representation. The board state representation is frequently up- and down-weighted across layers 0 to 4. This behavior is consistent across games and game phases. Across different games and move numbers, this visualization shows that the model frequently up- and down-weights the internal representation of board squares. This is consistent with what we would expect if the model was aggregating a large number of heuristics that are statistically-but-not-always correct (such as the rule that we found above)[6]. Results on box #2: Valid move prediction Direct logit attribution (Logit Lens) Direct logit attribution indicates how the prediction of valid moves evolves across layers in the residual stream. Specifically, we apply the unembedding matrix to intermediate activations of the residual stream. Figure 2 shows a typical logit evolution for a particular move in a particular game. We see that the model gradually promotes legal moves and suppresses illegal ones across layers. Accessible squares seem to play a special role, as they tend to have higher logits than other illegal moves. We suspect this is an artifact of the model learning the heuristic “there is a neighboring occupied square to the square considered as a next move”, which is a necessary but not sufficient condition for a move to be legal. Figure 2: Direct logit attribution of residual stream activations. The model gradually upweights legal moves during a forward pass. This evolution is consistent across games and game phases. For context, the sufficient rule for valid moves in Othello is that moves to empty squares adjacent to straight or diagonal lines of opponents’ pieces that end with one’s own piece are valid. We’ll call these configurations board patterns from now on. Figure 3 shows three example patterns on the board that make a move on G1 valid. Figure 3: Three examples of board patterns that make a move on square G1 valid. Board Pattern Neurons Neel Nanda found that MLP layer 5 has a high causal effect on three adjacent squares in a particular game (see “MLP layer contributions” in his post). Motivated by that, we investigate whether MLP neurons indicate that a pattern is present. We treat MLP neurons as classifiers, resolving in true if the neuron activation during the forward pass is above a threshold. Indeed, MLP neurons in the second half of the network correspond to groups of patterns. We find 610 neurons that predict the union of patterns with an F1-score > 0.99 across 60k board states. A good predictor for the union of patterns predicts each individual pattern with high recall (the neuron is active if the board state is present) and the union of patterns with high precision (any of the patterns is present if the neuron is active). Figure 4 shows an example of two neurons and the union of board states they predict with high F1. We only consider neurons that correspond to patterns which all promote the same move (note that for a single neuron the green square is always at the same location in Figure 4)[7] and call those board pattern neurons. Figure 4: Two neurons as examples alongside the board patterns they classify with an F1-score above 0.99. There are 1036 board patterns in the game of Othello that are relevant for making a valid move. For each cell on the board, there are 15-18 patterns that make a move on that cell valid. In layer 4, 5, and 6 alone and only restricting to single move neurons, we find that 75.2 % of all patterns are classified by any pattern neuron with an F1-score above 0.99. Relaxing the restriction to all neurons with an F1-score above 0.9, we find 90.6 % of all patterns are classified. We performed ablations on all pattern neurons to check for causal relevance. For each neuron, we construct a set of games that contain any pattern the neuron corresponds to. Remember, pattern neurons only correspond to a single valid move. We measure the change in probability (after softmax on output logits) for predicting the single move the corresponding patterns make legal, in addition to the change in probability for all other tokens. We don’t find a significant amount of pattern neurons in layers 0-2. In layers 3-6, the ablation changes the probability by an average of -1.5% for the target square, and 0.03% for all other squares on the board. These results suggest that pattern neurons in layers 3-5 predominantly promote legal moves on the associated squares. Ablating pattern neurons in the final MLP layer 7 greatly increases the probability by an average of 15% for a given square. This indicates that our layer 7 pattern neurons are acting to suppress, not promote, legal moves. In addition to that, the decoder vectors of Layer 7 neurons show almost exclusively a highly negative attribution of below -0.8 to the logit of the corresponding move. We hypothesize that this behavior arises from the implicit training objective to predict a uniform distribution over all legal moves (introduced by training on random legal moves). Clock Neurons We observed that a number of neurons fire more based on move number than on which moves were played. We hypothesize that these neurons play a role in upweighting or down weighting the likelihood of tokens based on the time in the game. This is a useful heuristic for predicting valid moves. For example, tokens corresponding to edges and corners are more likely to be legal moves towards the end of the game[8]. It is very rare for these tokens to appear early, and impossible in the first few moves. We find evidence that these neurons are responding to the positional encoding to have a measure of number. Many examples of neurons that fire as a function of move number can be found in Appendix 2 of Jack’s research sprint final report. Clock neurons could be further evidence that Othello-GPT uses a set of probabilistic heuristics to reckon the likelihood of the board state, and of which moves are legal. However, it would be interesting to better understand how they're used by the model. Figure 6: Neuron #18 in MLP layer 0 is an example of a clock neuron. It fires depending on the phase of the game. See the full version of this plot here. Suppression behavior Our anecdotal investigations leave the impression that the heuristic rules encoded in the attention and MLP layers often cancel each other out. Heuristic rules are not always correct individually. So other rules/neurons can come into play that correct the errors. Some of this happens in parallel, so it is not necessarily fair to say that one corrects another. In the big picture, heuristics are adding up such that the probabilities of either valid or invalid eventually overrules the other. For example, in our discussion of how Othello-GPT computes if a cell is mine vs. yours, the circuit for the logical rule “If A4 is played AND B4 is not blank AND C4 is not blank, update B4+C4+D4 towards “theirs” mistakenly fires if B4 or C4 is “mine” instead of “theirs” . When this happens, the model takes longer to compute the “correct” state of cells B4 + C4, but eventually reaches the right conclusion in later layers. See page 5 of the linked note, especially footnote 4, for more on this point. Future Work There are many possible directions for future work. We will follow up by automating the discovery of logical rule circuits and interpretable neurons in Othello-GPT and attempt to reimplement Othello-GPT by hand using the discovered rules[9]. This would add evidence that Othello-GPT computes an internal model of the board as an intermediate representation for legal move prediction. The “bag of heuristics” hypothesis, if correct, could explain the fact that Othello-GPT is imperfect (i.e., Li et al find an error rate of 0.01%, where the top-1 prediction is actually not a legal move). A follow-up would be to investigate the errors made by Othello-GPT. Perhaps incorrect predictions correspond to rare edge cases that are not handled well by learned heuristics. How does this compare to the processes that lead to hallucinations in other LLMs? It is interesting that even with millions of parameters and thousands of rules, Othello-GPT still makes errors that a simple logical Othello algorithm would not. Additionally, we are interested in Othello-GPT as a toy model to better understand SAEs or other unsupervised tools for interpretability. One worry in using Othello-GPT as a toy model is that it may not be representative of a typical LLM. There’s a sense in which Othello-GPT seems much “too large” for the problem that it’s trying to solve, giving it the capacity to memorize many heuristics[10]. Perhaps a smaller model would be forced to learn features in superposition -- or even nontrivial algorithms -- in a way more representative of a typical LLM. So as a final direction, it could be interesting to train transformers of different size on Othello and see how (if) features change as a function of the model size. Acknowledgements We thank Sam Marks, Wes Gurnee, Jannik Brinkmann, Lovis Heindrich and Neel Nanda for helpful comments on experiment design and the final post. ^ As a 1d tensor of moves [‘C3’, ‘C4’, ‘D5’, ‘C6’, …] converted to integer tokens. ^ As a 2d tensor of shape [seq_len, 61] corresponding to logits for what the model thinks are legal moves at each point in the game. ^ Humans also do something like this, but resolve box #1 by serially updating their representation of the board on each token in the sequence. LLMs only have num_layers steps of serial computation available so can’t be doing this. ^ Note that the linear probes were trained on residual layer 6. Perhaps we would get somewhat different results if we trained a new probe for each layer. ^ The probe output is an (n_board_squares x 3[mine, yours, empty]) tensor. We apply softmax across the dimension [mine, yours, empty], and report the softmax score for 'yours'. ^ From such visualizations, we also found that although the model mainly finalizes the internal board state around layer 5 and decides which moves are legal in the last two layers, there’s no sharp boundary between the two phases (e.g. the model has some idea which moves are legal if we unembed the residual stream in very early layers). ^ We observe neurons that have a non-zero activation for every move in every game. They are perfect classifiers for the union of all possible board patterns. To avoid catching these neurons as pattern neurons we filtered for neurons that are good classifiers only for the union of patterns that make a single move valid. ^ Though perhaps not in the last few moves, when almost every square is filled. ^ See e.g. the “Artificial artificial neural network” in Curve Circuits. ^ For example, the same model size can be used to learn chess which has a nearly 10^100x larger game tree.
2024-07-02
https://www.lesswrong.com/posts/srG2RguMfp6ubGnJa/analysis-of-key-ai-analogies
srG2RguMfp6ubGnJa
Analysis of key AI analogies
KevinKohler
The following is an analysis of seven prominent AI analogies: aliens, the brain, climate change, electricity, the Industrial Revolution, the neocortex, & nuclear fission. You can find longer versions of these as separate blogposts on my substack. 0. Why? AI analogies have a real-world impact For better or worse, analogies play a prominent role in the public debate about the long-term trajectory and impacts of AI. Analogies play a role in designing international institutions for AI (e.g. CERN, IPCC) and in legal decisionsAnalogies as mental heuristics can influence policymakers in critical decisions. Changes in AI analogies can lead to worldview shifts (e.g. Hinton)Having worked with a diverse set of experts my sense is that their thinking is anchored by wildly different analogies Analogies can be misleading Boaz Barak (“Metaphors for AI, and why I don’t like them”) and Matthew Barnett (“Against most, but not all, AI risk analogies”) have already discussed the shortcomings of analogies on this forum Every individual analogy is imperfect. AI is its own thing, and there is simply no precedent that would closely match the characteristics of AI across 50+ governance-relevant dimensions. Overly relying on a single analogy without considering differences and other analogies can lead to blind spots, overconfidence, and overfitting reality to a preconceived pattern. Analogies can be useful When facing a complex, open-ended challenge, we do not start with a system model. It is not clear which domain logic, questions, scenarios, risks, or opportunities we should pay attention to. Analogies can be a tool to explore such a future with deep uncertainty. Analogies can be an instrumental tool in advocacy to communicate complex concepts in a digestible and intuitively appealing way. My analysis is written in the spirit of exploration without prescribing or proscribing any specific analogy. At the same time, as a repository, it may still be of interest to policy advocates. 1. Aliens (full text) Basic idea comparison to first contact with an alien civilizationsymbolizing AI's underlying non-human reasoning processes, masked by human-like responses from RLHF Selected users Yuval Noah Harari (2023, 2023, 2023, 2023, 2023, 2023, 2024)Ray Kurzweil (disanalogy - 1997, 1999, 2005, 2006, 2007, 2009, 2012, 2013, 2017, 2018, 2023) Selected commonalities Superhuman power potential: Technologically mature extraterrestrials would likely be either far less advanced than us or significantly more advanced, comparable to our potential future digital superintelligence. Digital life: Popular culture often envisions aliens as evolved humans, but mature aliens are likely digital beings due to the advantages of digital intelligence over biological constraints and because digital beings can be more easily transported across space. The closest Earthly equivalent to these digital aliens is artificial intelligence. Terraforming: Humans shape their environment for biological needs, while terraforming by digital aliens would require habitats like electricity grids and data centers, which is very similar to a rapid build-out of AI infrastructure. Pathogens from digital aliens are unlikely to affect humans directly but could impact our information technology. Consciousness: We understand neural correlates of consciousness in biological systems but not in digital systems. The consciousness of future AI and digital aliens remains a complex and uncertain issue. Non-anthropomorphic minds: AI and aliens encompass a vast range of possible minds shaped by different environments and selection pressures than human minds. AI can develop non-human strategies, especially when trained with reinforcement learning. AI can have non-human failure modes such as through adversarial attacks. Future AI may have modular and superhuman bandwidth of sensors and effectors. Selected differences Origin and human agency: AI originates from humans on Earth, unlike extraterrestrial intelligences. Humans have some control over AI's development and deployment, unlike unpredictable extraterrestrial encounters. The development of AI is arguably also more gradual than the sudden arrival of mature aliens.Human-AI Interdependence: Aliens would exist autonomous from our civilization. In contrast, AI is increasingly integrated into human infrastructure, creating mutual dependence. This interdependence shifts from AI depending on humans to humans depending on AI. “First contact”: A first contact with aliens would likely involve centralized, limited communication for containment / diplomatic representation purposes. AI-human interaction is decentralized and happens at high speed and volume over millions of devices. Shared language and familiarity with human culture: AI is trained on human data and deeply embedded in human culture, making communication easier. This familiarity will grow as AI becomes more integrated into personal aspects of human life.Access to and editability of AI connectome: The weights and biases of artificial neural networks are transparent and editable, allowing for potential alignment with human goals (if we could interpret them). In contrast, the brains of aliens would likely be encrypted or inaccessible. Bonus trivia: Does worshipping aliens pay off? If we look at intrahuman “first contact” cases, we can find examples of the less developed civilization worshipping the more developed arrivals as “Gods”. On some pacific islands cargo cults developed after US planes dropped supplies during the Second World War.The US Air Force has never bothered to return to Melanesia, where they have been religiously worshipping the US Air Force for its cargo drops for more than 75 years now.Yet, the US Air Force has been conducting an annual, humanitarian “Operation Christmas Drop” for pacific islands since 1952 - for the non-worshippers of Micronesia. 2. Brain (full text) Basic idea Artificial neural networks = biological neural networksbrain as source of inspiration for AI algorithms and architectureshelps to inform the scaling hypothesis and long-term predictions on AI Selected users biology-inspired or connectionist school of AI (Hinton, Bengio, Sutskever, Hassabis, LeCun etc.) Selected commonalities Basic neuron logic: The McCulloch-Pitts model (1943) conceptualized artificial neurons based on simple logical operations, forming the foundation of artificial neural networks. Basic inhibition and excitation: Biological neurons use neurotransmitters like glutamate (excitatory) and GABA (inhibitory). Artificial neurons use positive or negative weights to simulate this effect. Multimodal neurons: Human neurons respond to specific individuals regardless of representation form (photos, names). AI has developed multimodal neurons that similarly respond to various representations of the same subject.Reinforcement learning: Inspired by psychology (Skinner, Pavlov) and dopamine pathways in the brain, reinforcement learning in AI involves learning from actions' consequences via rewards/punishments. Temporal-Difference Learning: Enhances reinforcement learning by using internal value functions to assess and reinforce behaviors continuously, rather than relying solely on sparse external rewards.Intelligence increases with scale: Larger brain size (relative to body) correlates with higher intelligence in species. Similarly, AI performance improves predictably with increased training data, compute, and model parameters.Cultural learning: Cultural learning allows humans to transcend genetic information limits. AI can now directly access and utilize this accumulated human knowledge. Selected differences Neuron activation: Biological neurons fire in a binary manner. Biological neurons accumulate charge over time from inputs. The brain uses a variety of specialized neurotransmitters and hormones affecting mood and motivation. Backpropagation: We do not fully understand how learning in the human brain works. However, it does not use backpropagation as learning algorithm.Speed: The human brain has no unified clock speed, but neurons usually cannot fire more often than about 250 times per second. In contrast, modern computers work with a unified clock speed and instructions sent at intervals of multiple billion times per second. The signals between biological neurons can travel at speeds of up to 120 meters per second. In contrast, signals in a chip can travel optically up to the theoretical maximum of the speed of light, which is 300'000'000 meters per second.Working memory: Humans have very limited working memory. The most cited study on the capacity of the human brain to hold different elements in mind simultaneously suggests an upper limit of 7 elements (plus or minus two) Speed of evolution of size: The computing power going into large AI models grows by about 4.2x per year, and the parameter count grows by about 2.8x per year. In contrast, the average doubling period for brain volume, from Australopithecus to early Homo sapiens, was approximately 1.8 million years. Access to connectome: We have access to the full connectome of AI models. For comparison, the first (and, so far, only) fully reconstructed connectome of a biological neural network belongs to the roundworm C. elegans. Ownership and distribution: Brains are “owned” by individual humans. The infrastructure of artificial neural networks is owned by the tech giants, such as Amazon, Microsoft, and Google. There are no “brain billionaires” that have more neocortex than entire countries. 3. Climate Change (full text) Basic idea AI requires global cooperation like climate changeWe need an “IPCC for AI” Selected user Sundar Pichai (2018, 2018, 2020, 2021, 2023, 2023) Selected commonalities Complexity: Both climate change and AI development are highly complex and uncertain due to their dependence on global human activity with feedback loops and non-linear effects.Trend and hazards: Climate change is a long-term trend affecting the frequency and intensity of weather-related hazards. Similarly, the widespread diffusion and integration of AI into society increases the risk surface. Global public goods: For climate change, reducing emissions benefits globally but costs locally, leading to potential free-riding. Similarly, some level of global cooperation is needed to prevent AI arms races and misuse by criminals.Powerful private sector: Major fossil fuel companies have large market caps, similar to leading AI and tech companies like NVIDIA, TSMC, Microsoft, Alphabet, Amazon, and Meta.Concerns about existential risk: There are widespread public concerns that climate change could be an existential threat. For AI, a large portion of AI scientists believe that we should take existential risk from AI seriously. Selected differences Scientific consensus: Climate science has a strong consensus that climate change is real and caused by humans. In AI there are more perceived disagreements among researchers about the potential risks and future impacts of AI.System-orientation: Climate science focuses on the planetary-scale system, led by independent academics and public networks. AI focuses on individual technological artifacts, primarily driven by private sector experts, with less emphasis on system-wide monitoring.Wizard vs. prophet vision: The climate debate is dominated by the "prophet" vision, advocating for sustainable living and reduced consumption to be within planetary boundaries. The AI debate is more aligned with the "wizard" vision, emphasizing exponential growth and take-off.Time horizon: Climate change projections and goals extend far into the future, with detailed assessments and long-term targets (e.g., 2100). AI projections are much shorter, with national strategies typically looking only about 10 years ahead.Speed of change: Climate change progresses slowly, with significant changes taking decades to centuries. AI development is rapid, with exponential growth in capabilities and applications, causing dramatic impacts within a few years. Bonus trivia: There was a shift from “wizard” to “prophet” vision in climate around 1970 The default assumption for the future pre-1970 seemed to be artificial climate control with relatively little concern about inadvertent climate change Today the default vision is a that natural climate change is an existential threat and there is a strong ideological opposition to solar geoengineering 4. Electricity (full text) Basic idea AI is like electrification (US, ca. 1880-1950)AI is a general-purpose technology Selected user Andrew Ng (2016, 2017, 2017, 2017, 2017, 2017, 2018, 2018, 2018, 2019, 2019, 2020, 2021, 2023, 2023, 2023, 2023, 2023, 2023, 2024, 2024) Selected commonalities Cross-industry applications, complements, productivity: Both electricity and AI have some of the classic hallmarks of general-purpose technologies, meaning they have widespread applications across numerous industries, they have innovational complements, and we expect them to boost productivity. Switch from in-house capacity to an outsourced service: Before widespread electrification, power was primarily generated in-house. After 1900, the availability of cheaper, centrally produced power led to a shift towards outsourcing power production, adopting an electricity-as-a-service model. Large cloud providers, who own significant AI hardware, offer AI compute as a service, allowing companies to use AI capabilities without owning the hardware.  This trend could lead to a decrease in in-house intellectual labor and an increase in the use of flexible, outsourced AI intelligence, as “AI remote workers” or “exocortex” of companies. Selected differences No new transmission infrastructure: Electrification was in large parts about building a new transmission network that connects all homes (others are water & telecommunications). AI does not require any new transmission network. Rather AI is distributed over the existing data networks as part of Internet traffic Local vs global market: Electricity is location-dependent due to transmission losses, leading to varying costs and no global market. AI can function globally without transmission losses, resembling the internet's integrated market but must comply with local laws. Public utility regulation: The electricity grid and market are heavily regulated as a natural monopoly with public service obligations. AI does not have such regulations.There is no free tier of electricity: AI services often offer a free tier, unlike electricity which is always paid. AI's cost structure and rapid widespread access differ from the historical luxury-to-necessity transition of electricity. Degree of commodification: Electricity has a uniform quality. AI models differ in significant ways so that AI tokens are not equally commodified. No one will ever run a medical device on electricity from one power plant and then from another power plant just to see if it reaches the same conclusion. In contrast, it is reasonable to ask for a second or even third opinion on medical diagnosis from different AI doctors. Labor substitution: Electrification was not a labor substitution. In factories it was a transition from one artificial form of energy to another. As such, it has never caused any significant worries about massive job losses. This stands in contrast to the First Industrial Revolution in which many laborers lost their jobs. In terms of labor turnover and related societal unrest and pushback, the First Industrial Revolution is a better fit to AI than electrification. Interpretability, agency, autonomy: Electric current as it comes out of your socket is a controlled and understood physical phenomenon with no cognition, goals, or agency. We understood how electricity functions at the time of electrification. In contrast, the inner working of large neural networks are still poorly understood. AI-companies are not just building general-purpose tools, but general-purpose agents that can follow instructions with many intermediate steps and use tools themselves, and we should expect these to get more and more autonomy over time. Bonus trivia: But AI *is* electricity Literally. On some level, AI is just electrons moving on chips. Or, if you want, a process to turn electricity into heat. However, much like saying humans are water (humans are literally 60% water by weight), this low level of analysis is inadequate for a meaningful analysis. 5. Industrial Revolution (full text) Basic idea AI will create another Industrial Revolution. Which is understood in one of the following ways:productivity revolution in the industrial sectorshift of the dominant employment sectorGDP growth accelerationThe Industrial Revolution replaced human muscle power with mechanical power. AI will do the same for human brain power. Selected user Yuval Noah Harari (2018, 2019, 2019, 2023, 2023, 2023, 2024, 2024) Selected commonalities New knowledge access institutions: The Industrial Revolution saw the emergence of institutions that significantly increased the availability and accessibility of knowledge (e.g., Royal Society & various scientific societies). Similarly, the AI Revolution coincides with increased knowledge accessibility through the Internet and future AI could evolve into advanced personal tutors. Invention of a new method of invention: The Industrial Revolutions coincided with a  shift from individual inventors to institutionalized R&D. AI can already be seen as a new method of invention that can predict patterns based on vast data sets (e.g., AlphaFold. GNoMe). In the future, AI might evolve into “AI scientists,” potentially automating scientific and technological advancements.Reorganization & deskilling: The Industrial Revolution restructured production to leverage artificial energy and there was a deskilling of labor. AI is expected to automate tasks in the service and knowledge economy that could also deskill labor.Labor-capital conflict over distribution of surplus: Distribution of productivity gains depends on whether workers or capital owners have more leverage. For the first ca. 50 years of the Industrial Revolution only capital owners benefited, while real wages for workers stagnated or declined during “Engels’ pause.” AI's productivity gains are expected to displace some human labor, leading to a struggle over surplus distribution. The deskilling effect of AI could weaken workers' negotiating power, favoring capital owners.Potential for new political systems & ideologies: The Industrial Revolution led to significant political changes, including the end of feudalism, new imperialism, and the rise of labor movements, socialism, and communism. The AI revolution may similarly lead to new political systems and ideologies. Potential outcomes include universal basic income, techno-authoritarianism, technopolar “snow crash,” AI-led “singleton,” or technocapitalism without humans. Selected differences Industrial robots vs. knowledge service LLMs: The Industrial Revolution primarily transformed the industrial sector, while the AI revolution is likely to have a broader impact, particularly on the service sector and knowledge economy.Demographics: During the Industrial Revolution the population was younger. The older demographics in the AI revolution mean societal issues will focus more on retirement rather than child labor. High automation may address labor shortages caused by an aging population, and the reduced risk of political instability is due to the lower proportion of young, risk-taking individuals.Speed of transformation: The transition from human to artificial cognitive power is expected to be much faster than the transition from human to artificial muscle power during the Industrial Revolution.Energy-intelligence ratio: The Industrial Revolution expanded available energy, while the AI revolution expands available intelligence. This shift changes the energy-to-intelligence ratio, with AI making intelligence abundant and energy relatively scarce. This dynamic could create significant changes in the economy and human labor's role.Potential loss of control over the economy: The Industrial Revolution empowered humanity, but the AI revolution could lead to a gradual loss of human control over the economy. Future AI agents could gain economic and political power through legal and financial means,  leading to a world where AI agents vastly outnumber and outperform humans. Bonus trivia: The humble clover's contribution to the Industrial Revolution The British agricultural revolution preceded the Industrial Revolution and contributed to urbanization and a workforce for the Industrial RevolutionAgricultural productivity at the time was limited by nitrogen. The key innovation of the British agricultural revolution was to introduce clover to crop rotation. Clover beats other soil-fixing plants by 3-5x (this was before Guano, synthetic ammonia). So, the clover has rightly become a common symbol of good luck in the UK, Ireland and much of Western Europe. 6. Neocortex (full text) Basic idea evolution of neocortex = evolution of synthetic neocortexlimbic system:neocortex = neocortex:AI Selected users Elon Musk (2016, 2016, 2017, 2017, 2018, 2018, 2019, 2020, 2022, 2022, 2023, 2023) Ray Kurzweil (2009, 2013, 2014, 2017, 2017, 2018, 2022) Selected commonalities Alignment with goals: The neocortex aligns with personal goals, suggesting personal AIs should too, without necessarily integrating with the brain via a neural interface. Enhanced planning and prediction: The neocortex's role in planning and assessing success probabilities suggests that personal AIs could further enhance these abilities, regardless of brain-computer interface integration.Emergent abilities: Just as the enlarged neocortex brought unforeseen capabilities, scaling artificial neural networks might lead to new, unpredictable cognitive abilities. Selected differences Signal speed: A significant speed gap exists between the biological brain and potential exocortex due to the latter's potential for operating at gigahertz speeds. Consciousness: The exocortex, unlike the human brain, would likely not possess consciousness, leading to a greater proportion of the brain being unconscious. Speed of brain evolution: The evolutionary doubling period of human brain volume is approximately 1.8 million years. The training compute for large AI models has doubled every 6 months for the last 14 years. Upper limit of exocortex size: The size of the neocortex is limited by the human skull which is in turn limited by the birth canal. There is no fixed upper limit for exocortex volume. Distribution and variety of exocortex: The distribution of neocortex among humans is fairly even. There are no “brain billionaires”. In contrast, computing power for the exocortex is distributed unequally within and between countries. Ownership of exocortex: You own your brain. In contrast, big tech owns the AI cloud capacity. Autonomous viability: While the neocortex cannot function independently of the human body, large AI models will be increasingly autonomous. Bonus trivia: Triune Brain Theory The popular idea that the older limbic system controls the more recently evolved and much more powerful neocortex comes Paul MacLean’s “Triune Brain Theory”This theory may suggest that the brain evolved in layers and reptiles would only have a "reptilian brain." Yet, all vertebrates share similar brain parts that have reorganized and grown differently. Modern neuroscience, using advanced imaging techniques, shows that high-level brain functions arise from dynamic interactions across multiple brain regions, contradicting the triune brain's idea of quasi-autonomous parts.Assertions about a hierarchical relationship between the limbic system and the neocortex, where the "monkey brain" controls the cortex, are misleading. The pre-frontal cortex plays multiple important roles in motivation. 7. Nuclear fission (full text) Basic idea nuclear weapons = AI, in terms of factors such as risk, containment, or powersome have called for an "IAEA for AI", a "CERN for AI", or a "Manhattan Project for AI" Selected users Eric Schmidt (2021, 2021, 2021, 2022, 2022, 2022, 2023, 2024)Elon Musk (2014, 2018, 2023, 2023, 2023, 2023)Max Tegmark (2017, 2018, 2018, 2018, 2018, 2023, 2023)Eliezer Yudkwosky (2018, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023)Sam Altman (IAEA - 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2024, 2024, 2024, 2024) Selected commonalities Ideas of a chain reaction: A nuclear chain reaction occurs when an atom splits into two smaller atoms, releasing 2-3 neutrons and energy, which then cause further splits, creating a self-sustaining cascade. The concept of an intelligence explosion involves an AI system improving itself to the point of becoming vastly superhuman in a short time. This idea is scientifically controversial but positive feedback loops in AI development, especially in compute, data, and algorithms, could potentially lead to such an explosion.Conflicted ScientistsConcerns and Regrets: Key contributors to nuclear bomb development, like Einstein and Oppenheimer, expressed concerns and regrets about their work's societal impact. Similar concerns are seen among AI researchers like Geoffrey Hinton and Yoshua Bengio.Discovery as Motivation: Oppenheimer described the allure of scientific discovery as a motivation that overshadowed concerns about societal impacts. Geoffrey Hinton echoed this sentiment, stating the excitement of discovery drove his (past) research despite concerns.Shifting publication norms: Initially open, nuclear research publication norms shifted towards secrecy due to the potential dangers. This shift was led by concerned scientists like Leo Szilard. AI research is experiencing a similar shift, with leading labs becoming more conservative about publishing details due to potential misuse risks.Concerns about existential risk: The destructive potential of nuclear weapons raised existential risk concerns, leading to efforts for international control and the metaphorical "Doomsday Clock." AI scientists estimate a significant risk of AI leading to catastrophic outcomes, comparable to nuclear war.One-Worldism: The development of nuclear weapons prompted calls for a world government to manage their existential risks, with notable advocacy from scientists like Albert Einstein. Some have suggested similar ideas for AI ("singleton", "high-tech panopticon") though this remains a less mainstream view.Ideas for international control through supply chain bottlenecks: International control efforts for nuclear have largely focused on bottlenecks in the supply chain, particularly uranium enrichment and plutonium production, to prevent weapon proliferation. Similar control efforts are proposed for AI, focusing on the highly concentrated AI hardware supply chain as a point of international governance and verification. Selected differences Military vs. private sector: Nuclear fission originated as a military technology during World War II, with its first applications being bombs, followed by military submarines. Civilian use came later and was driven by political responses to Soviet advancements. AI development is led by the private sector and military applications are being adapted from civilian innovations. Ability to discriminate: Nuclear weapons are too large to effectively discriminate between military and civilian targets. Target lists for nuclear war include major cities, not just military sites. AI in military applications, such as targeting systems, poses legal and ethical challenges but can be designed to discriminate between civilians and combatants.Deterrence logic: Nuclear weapons operate on deterrence by mutually assured destruction. Their strategic value lies in guaranteeing devastation through second-strike capability, independent of other military strengths. AI lacks a clear deterrence logic comparable to nuclear weapons. Signaling AI power is less straightforward, and AI systems are vulnerable to physical destruction (e.g., data centers). Ease of proliferation over time: Nuclear proliferation has become somewhat easier due to advances in technology and the spread of civilian nuclear energy, but it remains a challenging and prolonged process for most countries. AI proliferation is becoming exponentially easier due to improvements in hardware and algorithms. The rapid pace of AI advancements makes controlling the spread of absolute levels of capabilities more difficult, unlike the slower proliferation of nuclear technologyAutonomy, agency: Nuclear weapons are powerful tools but lack intelligence, self-replication, or self-improvement capabilities. They are fully understood and designed by humans. AI systems possess increasing levels of autonomy and the potential to self-improve and create new technologies. AI represents a new method of invention that can significantly impact various technological areas. Bonus trivia: The impact of Szilard’s silence is more complex than the narrative There is a popular claim that Szilard’s fight for changing publication norms has led to Fermi’s self-censorship, which in turn led Germany to cripple their program by choosing heavy water over graphite as moderator. This originates from Rhodes 1986 book on the US nuclear program. In 1989 Mark Walker wrote a book on the German program based on original German sources. Bothe’s efforts to evaluate graphite as a moderator did reach misleading results due to lack of purity. However, Hanle realized that this was due to pollution and informed the Heereswaffenamt, incl. with instructions for how to create sufficiently pure graphite. Their decision to go with heavy water rather than pure graphite as a moderator was primarily based on economic considerations.
2024-06-29
https://www.lesswrong.com/posts/GHusbyeLEn45HaM5B/podcast-elizabeth-and-austin-on-what-manifold-was-allowed-to
GHusbyeLEn45HaM5B
Podcast: Elizabeth & Austin on "What Manifold was allowed to do"
austin-chen
null
2024-06-28
https://www.lesswrong.com/posts/TzwMfRArgsNscHocX/the-incredible-fentanyl-detecting-machine
TzwMfRArgsNscHocX
The Incredible Fentanyl-Detecting Machine
sarahconstantin
An NII machine in Nogales, AZ. (Image source) There’s bound to be a lot of discussion of the Biden-Trump presidential debates last night, but I want to skip all the political prognostication and talk about the real issue: fentanyl-detecting machines. Joe Biden says: And I wanted to make sure we use the machinery that can detect fentanyl, these big machines that roll over everything that comes across the border, and it costs a lot of money. That was part of this deal we put together, this bipartisan deal. More fentanyl machines, were able to detect drugs, more numbers of agents, more numbers of all the people at the border. And when we had that deal done, he went – he called his Republican colleagues said don’t do it. It’s going to hurt me politically. He never argued. It’s not a good bill. It’s a really good bill. We need those machines. We need those machines. And we’re coming down very hard in every country in Asia in terms of precursors for fentanyl. And Mexico is working with us to make sure they don’t have the technology to be able to put it together. That’s what we have to do. We need those machines. Wait, what machines? You can remotely, non-destructively detect that a bag of powder contains fentanyl rather than some other, legal substance? And you can sense it through the body of a car? My god. The LEO community must be holding out on us. If that tech existed, we’d have tricorders by now. What’s actually going on here? What’s Up With Fentanyl-Detecting Machines? First of all, Biden didn’t make them up. This year, the Department of Homeland Security reports that Customs and Border Patrol (CBP) has deployed “Non-Intrusive Inspection” at the US’s southwest border: “By installing 123 new large-scale scanners at multiple POEs along the southwest border, CBP will increase its inspection capacity of passenger vehicles from two percent to 40 percent, and of cargo vehicles from 17 percent to 70 percent.” In fact, there’s something of a scandal about how many of these scanners have been sitting in warehouses but not actually deployed. CBP Commissioner Troy Miller complained to NBC News that the scanners are sitting idle because Congress hasn’t allocated the budget for installing them. These are, indeed, big drive-through machines. They X-ray cars, allowing most traffic to keep flowing without interruption. Could an X-ray machine really detect fentanyl inside a car? To answer that, we have to think about what an x-ray machine actually does. An X-ray is a form of high-energy, short-wavelength electromagnetic radiation. X-rays can pass through solid objects, but how easily they pass through depends on the material — higher atomic number materials are more absorbing per unit mass. This is why bones will show up on an X-ray scan. The calcium (element 20) in bones has higher atomic mass than the other most common elements in living things (carbon, hydrogen, oxygen, nitrogen, sulfur), and bones are also denser than soft tissue, so bones absorb X-rays while the rest of the body scatters it. This is also how airport security scans baggage: a cabinet x-ray shows items inside a suitcase, differentiated by density. It’s also how industrial CT scans can look inside products nondestructively to see how they’re made. Lumafield’s industrial CT scanner can tell dense, high-atomic-number lead solder apart from stainless steel. To some extent, X-ray scanners can distinguish materials, by their density and atomic number. But fentanyl is an organic compound — made of carbon, hydrogen, nitrogen, and oxygen, just like lots of other things. Its density is a very normal 1.1 g/mL (close to the density of water.) I’m pretty sure it’s not going to be possible to tell fentanyl apart from other things by its density and atomic number alone. Indeed, that’s not what the scanner vendors are promising to do. Kevin McAleenam, the former DHS secretary who founded Pangiam, the AI-based scanning company that plans to analyze the X-ray images, said that his software would correlate X-ray results with vehicle manifests to see if a load looks like it’s supposed to, or whether it’s suspicious in some way. “We can build software products that tell the officer, ‘That load’s supposed to be melons, it looks exactly like the other thousand shipments of melons that have crossed this border over the last two years, we don’t think you need to inspect it further.”” That’s not fentanyl detection — remote X-ray-based detection of a particular chemical compound — it’s the much easier problem of anomaly detection. Maybe, indeed, you can catch a drug smuggler by using the X-ray machine to notice a discrepancy between what he said and what the inside of his truck looks like. But it’s not a “fentanyl scanner” that detects fentanyl in the way a metal detector detects metal. 1 The manufacturer of these X-ray scanning portals, OSI systems, does claim they can detect “metallic and organic threats and contraband, such as weapons, stowaways, explosives, drugs, and alcohol.” This seems like a bit of marketing exaggeration — an X-ray machine would not be able to detect in full generality that something is an explosive or illegal drug as opposed to an innocuous/legal material, though it could tell, e.g., that there were organic compounds stored in parts of the vehicle that were expected to be empty. But I Wanted Remote Chemical Compound Detection! Ok, but suppose we wanted to detect the presence of fentanyl, or some other particular organic compound. How close can we get to doing that for real? This isn’t just about law enforcement. There are lots of reasons you might want to find out what something is made of without touching it. Nondestructive quality control in manufacturing; analyzing the geological composition of underground mineral resources; medical diagnostics; and much more. In an ideal world, if you want to know the chemical composition of an object, you’d get to take a sample. Then you can use powerful, destructive methods like mass spectrometry, where you ionize the whole thing, run an electric current through it, and look at the amount of each ion (ranked by mass-to-charge ratio) to reconstruct which chemical compounds must have been in the mixture. If you’re looking at biological samples, you can put nature to work for you at detecting particular compounds; find an antibody, aptamer, or oligonucleotide that binds selectively to your compound of interest, and attach it to some highly noticeable molecule like a dye. This is how immunoassays like pregnancy tests and home COVID tests work. But these are all destructive tests; you have to have a sample, and you have to be okay with using it up in the testing process. What can the world of non-destructive tests do? Spectroscopy The general category of spectroscopy refers to measuring how light is absorbed vs scattered by a sample. X-ray imaging is a form of spectroscopy whose “light” is x-rays; other wavelengths, ranging from microwaves through infrared and visible light to ultraviolet, are generally less able to penetrate opaque solid objects but better at distinguishing between lighter organic chemical compounds. Small, affordable infrared spectrometers are used for everything from detecting the blood alcohol content of a (suspected) drunk driver to nondestructively analyzing the composition of food products and pharmaceuticals on the production line. FTIR, or Fourier Transform InfraRed spectroscopy, shines a beam on the sample consisting of many infrared wavelengths at once, and measures how much of each wavelength is absorbed, resulting in a characteristic "spectrum” or fingerprint for each sample. FTIR can’t tell apart all chemical compounds, but it’s pretty sensitive. It can distinguish organic compounds that differ by just a single functional group. Why do these very similar compounds have different IR spectra but indistinguishable X-ray absorption? In other words, why can’t you use X-rays to determine fine-grained chemical composition? Because infrared light is lower energy, so it mostly interacts with the vibrations in chemical bonds (and behaves differently with different bonds), while X-rays go straight to the inner-shell electrons of the atoms (an interaction that depends less on the molecular environment surrounding the struck atom). UV and visible light are intermediate in energy level, so they have enough energy to knock valence-shell electrons to higher orbitals, but not enough to knock inner-shell electrons up to the valence shell. So UV and visible light spectroscopy methods are intermediate between infrared (lots) and X-rays (very little) in their ability to distinguish between similar chemical compounds. The bottom line is, there’s a physical tradeoff between how good a spectroscopic technique is at penetrating solid objects (like a car) and how sensitive it is at telling different compounds apart. You can certainly tell fentanyl from aspirin with FTIR; but infrared light won’t pass through solid metal. Ok, That’s Light. What About Sound? Or Electricity? There are other non-destructive methods for sensing material composition that also work by the same basic premise of “send a wave of some sort to the material, see what happens to what’s left of the wave after it’s reflected back or transmitted through, notice how different materials distort it differently.” When you do this with sound waves, you have ultrasound imaging. Sound is also distorted differently by different materials. The key metric for ultrasound is acoustic impedance, or how much a material resists conducting sound. The acoustic impedance of a material is basically just a function of stiffness and density. So ultrasound is good enough for visualizing the shapes of firm, dense masses in the body (e.g. it can tell “baby” from “amniotic fluid”) but it can’t tell apart two different chemical compounds with the same physical properties. What about electricity? Impedance spectroscopy involves running current through an object to measure the impedance (measured in ohms) of the material inside. The principle here is that different materials conduct electricity differently. Even small differences in molecular structure can affect a material’s dielectric properties. In medicine, it’s possible to use impedance tomography to visualize and monitor blood clots or lung function in real time (since a clot in a blood vessel or fluid in a lung will show up as a difference in impedance.) However, impedance spectroscopy doesn’t work for truly remote sensing, because you need electrodes to be touching the sample. (No beaming electricity through the open air.) It also couldn’t detect a sample of fentanyl inside a car, because the metal would shield against the electricity. Dude, Where’s My Tricorder? While the list of non-destructive imaging or material characterization techniques is long, most of them fundamentally use light, electricity, or sound, or combinations of those.2 And some rough version of the penetration/specificity tradeoff seems to hold generally; the better a form of energy is at punching through thick heavy materials, the less it cares about the difference between slightly different organic molecules. So that’s the challenge of “detecting fentanyl in your car” (or, more medically, things like “detect a biomarker in your body”). Usually the techniques that allow completely remote, label-free sensing can’t tell you as much about chemical composition, and the techniques that can give you precise chemical information require a sample that’s, if not destroyed by analysis, then at least conveniently placed relative to the sensor. Could that change? In Big Hero 6, medical robot Baymax “scans” the entire city of San Fransokyo to detect all men with the same cholesterol, blood type, and hormone levels as the suspect. Could we ever do this? god i wish this were real Right now it seems pretty intractable, though I’m not prepared to claim it’s physically impossible. There are a few special cases where Nature is kind — for instance, some compounds, like hemoglobin, have a strong photoacoustic effect, so a laser beam and an ultrasound sensor are enough to pick them up. But few techniques are fully general. Photoacoustic imaging can’t tell apart chemically similar compounds (like estrogen and testosterone.) FTIR could tell them apart — but you can’t just point it at a person and read off the composition of their bloodstream, because the infrared doesn’t penetrate more than a millimeter into the skin. Maybe someday we’ll combine different modalities to get a vastly expanded set of remote chemical sensing capabilities. But the tradeoff landscape is still there, and it’s less friendly to potential tricorders, Baymaxes, and fentanyl scanners than you might hope. 1 And it’s not just Biden who refers to the new X-ray machines as “fentanyl scanners” — that phrase is all over the news and in the “Deploy Fentanyl Scanners Act of 2024” introduced in the Senate this March. 2 (Or bombardment with bigger particles like neutrons, which, like X-rays but more so, is better suited for detecting the presence of heavy elements than for classifying different organic molecules made of light elements.)
2024-06-28
https://www.lesswrong.com/posts/ZzT5rGbSCrNbASppZ/activation-pattern-svd-a-proposal-for-sae-interpretability
ZzT5rGbSCrNbASppZ
Activation Pattern SVD: A proposal for SAE Interpretability
dtch1997
Epistemic status: This is a rough-draft write-up about a thought experiment I did. Reasonably confident about the broad arguments being made here. That said, I haven't spent a lot of time rigorously polishing or reviewing my writing, so minor inaccuracies may be present. Interpretability Illusions from Max-Activating Examples When interpreting an SAE feature, one common technique is to look at the max-activating examples, and try to extrapolate a pattern from there. However,  this approach has two flaws, namely: Premise 1: It can be hard to extrapolate the correct pattern. Correctly identifying a pattern relies on accurate understanding of the semantics present in text. Many hypotheses may be likely, given the data, and the actual truth could be non-obvious. It seems easy to make a mistake when doing this. Premise 2: Max activating examples may be anomalous. The most likely element of a distribution can look very different from the typical set. Conclusions drawn based on one (or a few) highly activating examples may turn out to be incorrect when evaluated against the majority of examples. In the following discussion, I outline a proposal to interpret SAE features using the singular value decomposition of SAE activation patterns, which I think neatly addresses both of these issues. Activation Pattern SVD Suppose we have a set of  M SAE features, which we would like to interpret using a dataset of N unique context windows. To do this, we compute the activation A∈RN×M, where Aij describes the activation of feature j on (the last token of) context window i. We then compute the singular value decomposition A=UΣV and take the top k elements. (Assume that we have a good way of choosing k, e.g. by looking for an "elbow point" in a reconstruction loss curve) Remark 3: The SVD defines activation prototypes. note that U∈RN×k; each column (in RN) describes a "prototypical" activation pattern of a general SAE feature over  all context windows in the dataset. Remark 4: The activation patterns are linear combinations of prototypes. Define the coefficient matrix C=ΣM∈Rk×M. Each column (in Rk) contains coefficients which are used to reconstruct the activation pattern of  a given SAE feature as a linear combination of prototypes. Conjecture 5: C is an isometric embedding. The Euclidean distance between columns Ci,Cj is an approximation of the Jaccard distance between the activation patterns of features i,j. (Note: I am somewhat less certain about Conjecture 5 than the preceding remarks. Well-reasoned arguments for or against are appreciated.) How does SVD help? Now, let's return to the problems discussed above. I will outline how this proposal solves both of them. Re: hardness. Here, we have reduced the problem of interpreting M activation patterns to interpreting K prototypes, which we expect to be more tractable. This may also resolve issues with feature splitting. A counterpoint here is that we expect a prototype to be "broader" (e.g. activating on "sycophantic" or "hallucinatory" inputs in many contexts), and hence less interpretable, than an SAE feature, which is often highly context-specific (e.g. activating only on text where the Golden Gate Bridge was mentioned). Re: anomalous-ness. Since we started out with full information about the SAE feature's activations on every input in the dataset, we expect this problem to be largely resolved. Concrete experiments. Some specific experiments which could be run to validate ideas here. C.f. Remark 3, look at the "prototypical" activation patterns and see whether they're more interpretable than typical SAE features. C.f. Conjecture 5, compute the coefficient matrix C and pairwise Euclidean distances between columns, then correlate this with the ground-truth Jaccard distances between activation patterns. Conclusion The proposal outlined here is conceptually simple but also pretty computationally intensive, and I'm unsure as to whether it's principled. Nonetheless, it seems like something simple that somebody should try. Feedback is greatly appreciated!
2024-06-28
https://www.lesswrong.com/posts/n3yRgZYs5JZChpfKp/georgism-crash-course
n3yRgZYs5JZChpfKp
Georgism Crash Course
Zero Contradictions
For my first post on Less Wrong, I was advised to link to an existing blog post of mine. My Georgism Crash Course has been one of my most viewed essays. It aims to explain what Georgism is, why it would boost economic efficiency, and it addresses some of the possible drawbacks of it. All comments, constructive criticism, and feedback are appreciated. Georgism1 is the position that income tax, sales tax, property tax, and all other taxes should be abolished and replaced with taxes on natural resources (NRT) and land values, which would fund all government services. The idea is that society would greatly benefit from overhauling the taxation system. (see the rest of the post in the link)
2024-06-29
https://www.lesswrong.com/posts/md3HYPkRG9srJzHre/saving-lives-reduces-over-population-a-counter-intuitive-non
md3HYPkRG9srJzHre
Saving Lives Reduces Over-Population—A Counter-Intuitive Non-Zero-Sum Game
james-brown
AN ARGUMENT AGAINST SAVING LIVES When making the argument for charitable giving, a response I have often received is that saving lives in areas that are already over-populated will perpetuate over-population-solving nothing. It’s a concern I shared when first considered charity. The logic seems sound on the face of it. For instance, if a predator is eliminated from, let’s say, a rabbit habitat, the rabbits can become over-populated until they reach some other natural limit, like resource depletion. This is the standard Malthusian theory of population equilibration*. AN ARGUMENT FOR SAVING LIVES I will be making the case that the reverse situation is actually at play, and that, somewhat counter-intuitively, by saving lives we actually reduce over-population. I will be employing two studies; one looking at data from the US and the other looking at data from Europe. The first conclusive finding is a clear positive correlation between the fertility rate and the infant mortality rate over time. “In every case, changes in infant mortality are positively associated with changes in fertility and most are significant.” (European study) This correlation is not just evident over time but is clear geographically. More infants dying means more babies being born. But correlation is not causation! It is entirely possible that any or all of the following relationships are at play. A Higher Fertility Rate causes a Higher Infant Mortality RateA Higher Infant Mortality Rate causes a Higher Fertility RateBoth Fertility and Infant Mortality are affected by other variables The case I am making depends on option 2 being a factor. If this is the case, then saving lives reduces over-population-creating a win-win situation where the population becomes more sustainable alongside the obvious benefits to individuals, whose lives are saved in the process. But why would this be? Let’s unpack the issue. WHY LOOK AT FERTILITY AND INFANT MORTALITY? In short, for the sake of clarity. It’s important to clarify that the fertility rate is not actually a measure of how fertile individuals are, but rather the average number of children an average woman gives birth to. It’s essentially the ‘birth rate’ but relative to the female population only, which gives a clearer signal regarding reproductive behavior. Infant mortality turns out to be a clearer indicator of societal health, and has a clearer correlation with reproductive behavior (than other candidates like ‘life expectancy’). So, we know that there is a correlation between fertility rate and infant mortality, but why is this? First let’s start with the obvious. 1. HIGH FERTILITY CAUSES HIGHER INFANT MORTALITY If we look at option 1 in the earlier list, it is understandable that there is a causal relationship where the more children someone has, the more likely one of those children is to die. It’s easy to imagine that in a society where the norm was to have 50 children each, many more children would die due to exhausted resources and, necessarily, parental neglect-reaching a Malthusian equilibrium (involving a lot of death). Families have a limited carrying capacity for children, and so greater fertility will lead to greater infant mortality. But does carrying capacity explain the entire correlation? What about the fact that wealthier families tend to have less children? Surely if carrying capacity was the only issue, wealthier families would have more children, due to a higher carrying capacity. This suggests that a option 1 is not the only causal factor. 2. HIGH INFANT MORTALITY CAUSES HIGHER FERTILITY Due to patchy data up to 1850, the US Study researchers initially found no correlation between fertility rates and infant mortality rates until they looked at the period between 1850 and 1940, where they found an indication of ‘hoarding’. “… there is evidence that birth rates responded to changes in death rates by the late nineteenth and early twentieth centuries. Furthermore, the relationship strengthened over the early part of the twentieth century as the decline in infant mortality proceeded rapidly. There is also a suggestion of a lagged response of fertility to mortality change, indicating hoarding (or insurance) behavior.” (US Study) WHAT IS HOARDING? Hoarding is the practice of having more children, in an environment with a high risk of infant mortality, to ensure some children survive. But how do we know hoarding is going on? Both studies use instrumental variable techniques to isolate the causal effect of child mortality on fertility, allowing them to account for how much of the correlation is determined by other factors like hoarding. If we look at the results of the US Study we find… Direct replacement effects are about 10–30%.Gross replacement effects, which include hoarding, are much higher, in the range of 60–80%. Direct replacement effects are what we would expect in a scenario where the causality is simply from fertility to mortality. Gross replacement effects explain the additional births beyond what would be expected from direct replacement alone. It is likely these additional births are a response to high infant mortality, indicating parents are having extra children as a buffer against the loss of infants (hoarding). Both studies agreed that Infant Mortality had a causal effect on Fertility. “… there are important issues of causality that must be resolved before drawing conclusions. The few studies that attempted to disentangle the direction of causality using instrumental variables estimation found, as we did, that important causality was operating in both directions.” (US Study) The European study concluded… “… there is substantial evidence that mortality decline was an important cause of fertility decline in Europe.” (European Study) 3. OTHER FACTORS AFFECT BOTH INFANT MORTALITY AND FERTILITY RATES There is always the possibility that other unknown factors might affect both Infant Mortality and Fertility, as the European study clarifies. “… we cannot completely rule out the possibility that the estimated associations of fertility and mortality, even when using instrumental variables and fixed-effects methods, actually reflect a spurious association induced by unobserved variables that influence both fertility and mortality and that change over time. These variables might include breastfeeding, health conditions, nutrition, or unobserved aspects of economic development and modernization.” (European Study) While this is an important consideration that cannot be dismissed, it does not contradict the imperative to seek to save lives through charity. This is because, not only do life-saving charities save lives, they do so through improving the very factors noted above that might be involved in the correlation between fertility rate and infant mortality rate. SO… Researchers have found that it is likely that a lowering of the infant mortality rate results in a lowering of the fertility rate. Their conclusions come from data spanning continents and centuries, and when you look at the issue through a human lens it makes sense: in general, if there’s a high chance children will die, people will have more children to mitigate the risk of losing all of their children. Hoarding might not make a lot of sense to those of us who live in countries with very low infant mortality, but it shouldn’t-I can only speak for myself, but my interests are more about quality of life rather than genetic survival. So, the fact that I only have one kid, is entirely predictable. If the researchers are correct that lowering of the infant mortality rate lowers the fertility rate, then, by contributing to saving lives, we are also helping stem over-population. And even if the researchers are incorrect and the correlation is caused solely by something else, we can still be confident that giving to charities that take health measures, nutrition, women’s education and other important factors into account when saving lives, will have a non-zero impact. Do you give to charity or volunteer? I’d love to hear some of your experiences. RELATED MATERIAL * This is from Population Dynamics of Humans and Other Animals by Ronald D. Lee. Malthusian dynamics is something I intend to cover in a future post.I encourage anyone interested in charity to visit Giving What We Can which calls on people to pledge 10% of their income to effective charities. It’s something I’ve done for 3 years now, and it’s not that painful, and makes a huge difference. My contribution works out to save 1 to 2 people’s lives every year-which feels pretty good.The purpose of using two studies is to show consensus rather than to cherry pick from one or the other-both studies come to generally the same conclusions.If you are interested in issues around charitable giving, you might want to check out the kindness equationThe other factors that go into reducing over-population are really interesting, and I intend to focus an upcoming post on the role poverty plays.Contrasting with concerns about over-population, some prominent thinkers are currently concerned about a lack of population growth. I have yet to see any good reason for this concern-Matt Ball has an interesting post on this topic, which reflects my current view.
2024-06-28
https://www.lesswrong.com/posts/RfC4mkYuLksukyzns/datasets-that-change-the-odds-you-exist
RfC4mkYuLksukyzns
Datasets that change the odds you exist
dynomight
1. It’s October 1962. The Cuban missile crisis just happened, thankfully without apocalyptic nuclear war. But still: Apocalyptic nuclear war easily could have happened.Crises as serious as the Cuban missile crisis clearly aren’t that rare, since one just happened. You estimate (like President Kennedy) that there was a 25% chance the Cuban missile crisis could have escalated to nuclear war. And you estimate that there’s a 4% chance of an equally severe crisis happening each year (around 4 per century). Put together, these numbers suggest there’s a 1% chance that each year might bring nuclear war. Small but terrifying. But then 62 years tick by without nuclear war. If a button has a 1% chance of activating and you press it 62 times, the odds are almost 50/50 that it would activate. So should you revise your estimate to something lower than 1%? 2. There are two schools of thought. The first school reasons as follows: Call the yearly chance of nuclear war W.This W is a “hidden variable”. You can’t observe it but you can make a guess.But the higher W is, the less likely that you’d survive 62 years without nuclear war.So after 62 years, higher values of W are less plausible than they were before, and lower values more plausible. So you should lower your best estimate of W. Meanwhile, the second school reasons like this: Wait, wait, wait—hold on.If there had been nuclear war, you wouldn’t be here to calculate these probabilities.It can’t be right to use data when the data can only ever pull you in one direction.So you should ignore the data. Or at least give it much less weight. Who’s right? 3. Here’s another scenario: Say there’s a universe. In this universe, there are lots of planets. On each planet there’s some probability that life will evolve and become conscious and notice that it exists. You’re not sure what that probability is, but your best guess is that it’s really small. But hey, wait a second, you’re a life-form on a planet with conscious life! Given that you exist, should you increase your guess for how likely conscious life is to evolve on a random planet? Again, you have two schools of thought. One says yes, you have data, increase your guess, while the other says no, don’t increase, if there wasn’t life you wouldn’t be here, anthropic principle—anthropic principle! 4. After many years of being confused by these questions, I think I now understand what’s happening. These questions are confusing because they’re actually about a sort of narrow technical question, and only appear to be about to the fact that you might not exist. To explain, let me introduce another scenario: One day you wake up at my house. As you groggily look around, I explain that you’ve been invited to Dynomight family dinner! And that the way that DFD works is: I sneak into your house at night, anesthetize you, and bring you to my lair.When you wake up, I make you some delicious Fagioli all’Uccelletto.After you’ve eaten, I bring out a box containing a bunch of identical revolvers. Half have no bullets in them, while the other half have bullets in all six chambers. You pick one revolver at random, put it to your head, and pull the trigger. (To refuse would be a huge faux pas.)If you’re still alive, I bring out a $100 bill and offer to sell it to you for $60. If you agree, I take your gun and see if it has bullets in it. If it’s empty, then I take your $60, give you the $100, and ask you to come back soon. If not, I take your $60 but don’t give you the $100, welcome to dinner at my house, chump. So you eat the Fagioli all’Uccelletto (it is excellent) and you play the mandatory revolver game and don’t die, and I offer you the $100. Should you accept? Yes you should. There’s no trick. Since you’re alive, you know your revolver is empty, so you’re guaranteed to make a free $40. 5. Fine. But now consider the same scenario, with two small changes (in bold): I sneak into your house at night, anesthetize you, and bring you to my lair.When you wake up, I make you some delicious Fagioli all’Uccelletto.After you’ve eaten, I bring out a box of identical revolvers. All have three chambers with bullets and three empty chambers. You pick one revolver at random, put it to your head, and pull the trigger. (To refuse would be a huge faux-pax.)If you’re still alive, I bring out a $100 bill and offer to sell it to you for $60. If you agree, I take your gun and look at a random chamber. If that chamber is empty, then I take your $60, give you the $100, and ask you to come back soon. If not, I take your $60 but don’t give you the $100, welcome to dinner at my house, chump. Should you accept? No. You know that all the revolvers have bullets in half of their chambers. The fact that yours didn’t go off doesn’t change that. If you accept my offer, then you have even odds of gaining $40 or losing $60. That’s a bad bet. 6. So what does all that have to do with the odds of nuclear war? I claim that arguments about if you should update your estimate of the risk of nuclear war are equivalent to arguments about which version of Dynomight family dinner you’ve been invited to. And note—the fact that you might stop existing is irrelevant. Instead of putting the gun to your head, say you fire it into the ground and if it goes off, you just go home. Nothing changes! Or, hell, replace the revolvers with happy puppy bags and replace full/empty bullet chambers with red/blue marbles. Again, nothing changes. The crux of the nuclear war scenario isn’t that you might stop existing. That’s just a very salient feature that draws our attention away from the heart of the debate: The confidence of your prior. 7. When you guess that there’s a ~1% chance of nuclear war, what does that mean? One option is that you’re sure that the chances are exactly 1% per year. Alternatively, you might think that the chances could be anything from 0% to 5%, but your average guess is 1%. The general way to think about this is to create a “prior”—to draw a curve of how plausible each yearly chance of nuclear war is. Ask yourself, which of these two priors is better? If you choose the narrow (orange) prior, then you’re sure that the chance of nuclear war is quite close to 1%. If you choose the broad (blue) prior, then you’re very unsure. In both cases, the mean is 0.01 (or 1%). But the amount of uncertainty is totally different. Now, think about the fact that 62 years went by without nuclear war. You can picture this as a “likelihood”—for each possible yearly chance of nuclear war, how likely is it that you’d see 62 years without war? If the yearly probability of nuclear war is 0, then after 62 years, you’re guaranteed to avoid nuclear war. The higher the yearly chance of war, the less likely this is to happen. To get your “posterior” belief after 62 years, you just multiply your chosen prior with the likelihood at each position on the x-axis. Here are the posteriors for each of the above priors. (Corresponding priors shown with dotted lines for reference.) The more confident your prior, the less open-minded you are about changing your mind after you see data. In the above graph, the broad prior shifts to the left in the posterior. The narrow prior also shifts a bit to the left but only a tiny amount—the prior is so concentrated that the data can hardly budge it. With the broad prior, the data moves the best guess from 1% to around 0.637%. With the narrow prior, it only moves to 0.986%. In the limit of a totally confident prior (an infinitely tall, infinitely narrow “spike”) the data wouldn’t move things at all. 8. So how are you supposed to think about the risk I nuclear war? I claim you should treat it just like any other situation: You state a prior and then update it based on whatever data you have. The fact that we might not be here with different is irrelevant. The question is if we should have a narrow or a broad prior. For the specific case of nuclear war, I don’t see how you could be sure that the odds are almost exactly 1%. (Or whatever.) The geopolitical dynamics that might lead to nuclear war are waaaaaaay too complicated. So I think any reasonable prior should have significant uncertainty, and the fact that nuclear war hasn’t happened yet means we should revise our estimates down a bit. But just a bit. If we survived another 1,000 years without nuclear war, then that would really prove something. But 62 years isn’t that long, and only calls for a small adjustment. 9. If you’re still not convinced, consider one last scenario: Take two identical-looking happy puppy bags:Fluffles contains 9 blue marbles and 1 red marble.Snowcone contains 1 blue marble and 9 red marbles.You pick one happy puppy, look at one random marble, and put it back.I offer you a deal: If you give me $50, then I will open your happy puppy and give you $10 for each blue marble. Should you take the deal? Don’t stress about the math. You can picture this situation like this: As you’d expect, a blue marble means you’ve probably got Fluffles, so you should take the deal. A red marble means you’ve probably got Snowcone, so you shouldn’t. Fine. But say that if you see a red marble, then you go home and aren’t offered a bet (or, if you like, say I kill you). Then you can picture the situation like this: If you see a red marble, game over. But if you see a blue marble, you make decisions exactly like you did before. What might happen for some other dataset doesn’t change the best decision for your dataset. 10. Say I take up free solo chainsaw juggling. You think this is a stupid hobby, but I do it every weekend for a few months without any accidents and then tell you, “Hey, look, I’m fine, this is perfectly safe.” Are you wrong to say, “If you’d died, you wouldn’t be here.”? I don’t think so. But I think what you really mean is (a) there are strong prior reasons to believe free solo chainsaw juggling is dangerous, and (b) the fact that I’ve survived for a few months isn’t enough evidence override that prior. So let’s quit the free solo chainsaw juggling.
2024-06-29
https://www.lesswrong.com/posts/viRn7Drv9FKcdFpyX/contra-acemoglu-on-ai
viRn7Drv9FKcdFpyX
Contra Acemoglu on AI
maxwell-tabarrok
The Simple Macroeconomics of AI is a 2024 working paper by Daron Acemoglu which models the economic growth effects of AI and predicts them to be small: About a .06% increase in TFP growth annually. This stands in contrast to many predictions which forecast immense impacts on economic growth from AI, including many from other academic economists. Why does Acemoglu come to such a different conclusion than his colleagues and who is right? First, Acemoglu divides up the ways AI could affect productivity into four channels: 1. AI enables further (extensive-margin) automation. Obvious examples of this type of automation include generative AI tools such as large language models taking over simple writing, translation and classification. 2. AI can generate new task complementarities, raising the productivity of labor in tasks it is performing. For example, AI could provide better information to workers, directly increasing their productivity. Alternatively, AI could automate some subtasks (such as providing readymade subroutines to computer programmers) and simultaneously enable humans to specialize in other subtasks, where their performance improves. 3. AI could induce deepening of automation—meaning improving performance, or reducing costs, in some previously capital-intensive tasks. Examples include IT security, automated control of inventories, and better automated quality control 4. AI can generate new labor-intensive products or tasks. Each of these four channels is referring to specific mechanism in his task-based model of production. Automation raises the threshold of tasks which are performed by capital instead of laborComplementarities raises labor productivity in non-automated tasksDeepening of automation raises capital productivity in already-automated tasksNew tasks are extra production steps that only labor can perform in the economy, for example, the automation of computers leads to programming as a new task. The chief sin of this paper is dismissing the latter half of these mechanisms without good arguments or evidence. “Deepening automation” in Acemoglu’s model means increasing the efficiency of tasks already performed by machines. This raises output but doesn’t change the distribution of tasks assigned to humans vs machines. AI might deepen automation by creating new algorithms that improve Google’s search results on a fixed compute budget or replacing expensive quality control machinery with vision-based machine learning, for example. This kind of productivity improvement can have huge growth effects. The second industrial revolution was mostly “deepening automation” growth. Electricity, machine tools, and Bessemer steel improved already automated processes, leading to the fastest rate of economic growth the US has ever seen. In addition, this deepening automation always increase wages in Acemoglu’s model, in contrast to the possibility of negative wage effects from the extensive margin automation that he focuses on. So why does Acemoglu ignore this channel? I do not dwell on deepening of automation because the tasks impacted by (generative) AI are quite different than those automated by the previous wave of digital technologies, such as robotics, advanced manufacturing equipment and software systems. This single sentence is the only justification he gives for omitting capital productivity improvements from his analysis. A charitable interpretation of this argument acknowledges that he is only referring to “(generative) AI”, like ChatGPT and Midjourney. These tools do seem more focused on augmenting human labor rather than doing what software can already do, but more efficiently. Though Acemoglu is happy to drop the “generative” qualifier everywhere else in his paper. The more important point is that the consumer-facing “generative AI” products that Acemoglu is referring to are just wrappers around the more general technology of transformers. Transformers are already being used to train robots, operate self driving cars, and improve credit card fraud detection. All examples of increasing the productivity of tasks already performed by machines. It is easier to get small productivity effects and ambiguous wage effects from AI if you assume that it will have zero impact on capital productivity. Potential economic gains from new tasks aren’t included in Acemoglu’s headline estimation of AI’s productivity impact either. This is strange since he has written a previous paper studying the creation of new tasks and their growth implications in the exact same model. In this paper he acknowledges that: The wage and productivity impact of new tasks can be potentially larger than cost savings in existing tasks, and this is particularly likely to be the case when new tasks improve the entire production process, or when they add new sources of cost improvements or complementary functions. Despite new tasks’ central role in wage and productivity growth and in reducing labor income inequality I will not focus on new good tasks generated by AI for the reasons discussed in detail in the Conclusion. The justification he gives for ignoring this channel is weak. If AI is used to create new tasks and products, these will also add to GDP and can boost productivity growth. Nevertheless, when we incorporate the possibility that new tasks generated by AI may be manipulative, the impact on welfare can be even smaller. Instead of incorporating possible gains from new tasks, he only focuses on the “new bad tasks” that AI might create e.g producing misinformation and targeted ads. Based on some studies about harms from social media he concludes that while revenue from these “manipulative” tasks might raise GDP by 2%, they would actually lower welfare by -.72%. There is zero argument or evidence given for why we should expect the harms from AI to be similar to those from social media or why we should expect new bad tasks to outnumber and outweigh new good ones. He doesn’t end up including gains or losses from new tasks in his final count of productivity effects, but this process of ignoring possible gains from new good tasks and making large empirical assumptions to get a negative effect from new bad tasks exemplifies a pattern of motivated reasoning that is repeated throughout the paper. He also mentions a fifth possibility for AI to affect productivity: changing the process of science. Acemoglu does not even include this in his list of possible channels since “large-scale advances of this sort do not seem likely with in the 10-year time frame.” This channel probably has the largest potential effect, since the feedback loop between research inputs and outputs can quickly lead to singularities. But even more mild changes, such as raising the capital intensity of R&D, as more fields can make advances using compute-intensive deep learning, could double productivity growth rates. Dismissing all of this with a single sentence is not enough for a paper that claims insight over AI’s economic impact as a whole. The biggest problem with this paper is claiming a wide scope over all of AI’s macroeconomic impacts and then immediately cutting out analysis of the most important channels of that impact. The analysis within the channels he does consider is also somewhat suspect. Acemoglu’s estimation of the productivity effects from the “automation” channel is derived from a complicated task based production model but it leads to an equation for AI’s effects that is super simple: the change in TFP is the share of GDP from tasks affected by AI multiplied by the average cost savings in those tasks. The GDP share comes from Eloundou et al. (2023) which estimates that  ~20% of tasks are “exposed” to AI combined with Svanberg et al (2024) which estimates that 23% of those exposed tasks can be profitably automated, so 4.6% of GDP is exposed. Then he combines results from these three papers that experimentally roll out AI to workers to get cost savings. These average out to around 30% productivity gains for labor which is about 50% of costs, so 15% total. Multiplying these gets his overall estimate that the “total factor productivity (TFP) effects within the next 10 years should be no more than 0.66% in total—or approximately a 0.064% increase in TFP growth annually.” Multiplying these numbers out is a good starting point, and is certainly better than the vibes-based guesses that many base their AI predictions on. Still though, it seems pretty strange to me to take exposure estimates and experimental productivity results based on GPT 3.5 and 4 and assume that they will hold for the next 10 years. How many “AI exposure” estimates from 2018 would have included graphic design and creative writing? A few years later those industries are at the top of the list. The deluge of AI investment will push us further up the scaling laws graph so we should be expecting similar changes over the next several years. If Acemoglu’s paper was titled “Cost Savings From Extensive-Margin AI Automation” it would be pretty good. It would take the most recent empirical estimations of AI’s impact and work them into a rich task-based production model and come to an interesting result at the end: the effects through this channel are smaller than what you might expect. The paper is titled “The Simple Macroeconomics of AI,” though, and it claims to answer far more questions than it actually does. I am confident that the predictions of small economic impact that Acemoglu makes in this paper will not hold up over the next 10 years.
2024-06-28
https://www.lesswrong.com/posts/zrbq6zHsQwrE8cLXJ/five-toy-worlds-to-think-about-heritability
zrbq6zHsQwrE8cLXJ
Five toy worlds to think about heritability
david-hugh-jones
Just a quick note from a Twitter/X discussion. It’s an almost-universal scientific rule that empirics is blind without a model. The left picture below shows a bunch of points, and a regression line fitting them. It seems like a good fit. But why should we believe it? The right picture shows the same points with a curve that fits them perfectly. We prefer the left hand line because of some theory like “these two variables are probably linearly related, and also y is partly random”. That theory is embodied in an equation like y = a + bx + ε where b is the slope of the line and ε is random noise which makes the fit imperfect. Without that, we’d have to accept that the stupid right hand curve fits the data better. Thirty points and two alternative models to explain them Heritability is the proportion of variation in something that can be explained by people’s genes. (For social scientists: imagine doing the perfect regression on the outcome, using all relevant genetic variables. Heritability is the R-squared.1) The classic way you estimate heritability is by comparing identical to fraternal (non-identical) twins. Identical twins share all their genetic variation, fraternal twins share only half of it. Assuming no assortative mating! If people marry people with similar genes then fraternal twins and other siblings will have more than half their genes alike. So you can back out heritability from that. If identical twins are exactly alike on the variable of interest, with a correlation of 1, and fraternal twins are correlated at 0.5, then the heritability is 1. More generally, the heritability is twice the difference in correlations. Twin studies are genius because they leverage theory. We know people get half their genes from each parent, that in fraternal twins the genes are drawn independently, and in identical twins they are exactly the same. Without this insight, there would be no way to separate the effects of parental environment and genetics. Genes come from your parents, so does family background. A study of parents and children couldn’t disentangle these effects. The theory let scientists disentangle genes and family environment, even before it was practical to measure people’s DNA variants. But are twin studies right? What if identical twins grow up in a more similar environment than fraternal twins? That seems plausible because maybe parents treat them more similarly. Then again, what if there is assortative mating? Because, like, there is. Then fraternal twins are more like identical twins, sharing more than half their genes, and you need to multiply the difference in correlations by more than two. So there’s a few recent papers trying to get estimates on heritability by using information beyond twins. This paper reckons that twin studies underestimate the heritability of education, because of assortative mating. But they also show[1] that many different models fit the data equally well. Heritability is 36-39% or 51-56%, depending on which different kind of assortative mating you believe in.This paper argues that twin studies overestimate the heritability of education, because identical twins actually grow up in more similar environments than fraternal twins. Heritability is only 9%! They identify the environmental difference between the kinds of twins by looking at twins’ spouses and children. It’s a bit hard to explain. I think there is an assumption that twins’ nephews share similar environments with twins’ spouses nephews, and somehow this assumption holds the model together, like a strategically-located pin in a complicated brassière.This paper uses not twins, but remote cousins, to estimate heritability of education. They get very low estimates (only 7%) and find that environmental transmission is better at explaining correlations of outcomes between remote cousins. But they don’t allow for assortative mating except via (a) observed education and (b) cultural transmission. I’m doubtful of that because there’s evidence for much more assortative mating on genetics than could be explained by mating on observed education alone.[2] You may think from these numbers that despite a lot of effort, researchers are still far from consensus about how heritable education is, and that the different answers seem to come from the assumptions baked into different models. Five toy worlds To illustrate what I feel about this literature, here are some toy examples. In world 1 there are two genetic types: Wordcels W and Shape Rotators S. Shape Rotators all go to university, Wordcels never do. Decline of the humanities. Very sad. Parents can’t do anything about it. However hard you beat your Wordcel brat, he’ll never amount to anything. The heritability is 100%. If we run a twin study, all identical twins have the same outcomes, and the correlation of fraternal twins is 50%. In world 2 there are two genetic types: Wordcels W and Shape Rotators S. But in this world genetics don’t get you to university. All that matters is if you beat your children. Children who are beaten4 go to university. Children who are spoiled (not beaten) do not. Parents never beat their W children but always their S children, so only S’s go to university. The Zeiss planetarium in Jena: a toy world to help us understand the real world This world looks exactly like the previous world, unless you measure what parents do. This is our first problem. If children’s environment correlates perfectly with their genetics, then we can’t tell which matters. If it correlates imperfectly, then we can tell, but we will be estimating it based on very few people. (Suppose 1% of parents are contrarians who only beat their Wordcel children. Then you can find out whether genetics or beatings matter, but only from 1% of your sample. And maybe those guys are weird in other ways.) What’s the heritability in world 2? You might swiftly answer 0%, because genetics don’t get you to university. But this ignores how the child’s genetics is affecting its parents! Consider: World 2a: parents always beat their S children because they are S types. A twin study in this world will show that heritability is 100%: either identical twins are both S, both get beaten and both go to university, or they are both W, don’t get beaten and don’t go to university. World 2b: parents always beat their S children, but this is not causal. Instead, beating is a family tradition which is handed down along with S genes. To keep things simple, let’s say there is perfect assortative mating by the S and W type. A twin study in this world won’t work because all siblings always have the same type due to the assortative mating. But heritability is 0%. All that matters here is the cultural tradition. Genes are along for the ride. Since heritability in world 2a is 100%, does that mean there is no way to get more people to university? Not so fast! Maybe we could launch a public health campaign to encourage parents to beat their Wordcel. To think that idea through, let’s consider: World 3: Only beaten children go to university; genes don’t matter. Parents beat their children because they are S types. Specifically, they falsely believe that a W type would not go to university, even if beaten. So they prefer not to beat them. In this world, even though heritability is 100%, the public health campaign would work in its stated aim of getting more children to university. The parents are wrong! World 4: parents beat their children because they are S types. Specifically, they have the following true beliefs about the world: beaten S types go to university; unbeaten W types go to university half the time. Also, an S child who isn’t beaten gets lazy, plays computer games and follows Andrew Tate on YouTube. A W child who is beaten gets upset and starts a Tumblr. Neither of these ever go to university. Measured heritability is now lower than 100% (because some W types go to university) but in this world, the public health campaign would not work. In fact it would backfire in its stated aim: beaten Wordcels would go to university less. There is a gene-environment interaction between W-S genetics and beatings; the parents know this and respond to it, leading to an evocative gene-environment correlation. Worlds 3 and 4 are observationally identical. But they have very different implications! In world 3, you can improve the world by encouraging parents to beat their Wordcel children. In world 4, you can’t: if you tell parents to stop beating their S child, or to beat their W child, you just make things worse for the outcome you care about. Parents are optimizing. So if the public health campaign works, everything is fine? World 5: parents beat their children because they are S types. Specifically, they have the following true beliefs about the world: all beaten children go to university. But, while S types aren’t harmed by being beaten, W types are seriously harmed by it. The parents care about their children’s welfare, so they don’t beat W types even though that would get them to university. Here, just like world 3, heritability is 100% and yet the public health campaign “works”. But that depends what you count as working. The parents have knowledge about their children’s welfare that the social scientist does not observe. The public health campaign reduces welfare, even though it fulfils its stated goal. More or less controversial suggestions Here are my conclusions from these toy worlds. I don’t think they’re new to thoughtful researchers in the field, but equally I don’t see people acting on them, so perhaps they are worth stating. When you are predicting individual outcomes of genes and environments, it is important to be able to maintain assumptions about how things work. Without some assumptions, you are not going to have enough data to draw conclusions. The classic example is the assumption of twin studies that identical/fraternal twins share all/half their DNA. This can be wrong, but it comes from rigorous theory, and we know what to do about it when it is wrong. Unfortunately, when genes and environments interact and correlate, that assumption about DNA won’t get you everything you need. We need other assumptions. Also unfortunately, theories as solid and tested as that of the mechanisms of genetic inheritance are rare. (We don’t know precisely how cultural inheritance works, for example.) When different papers are deriving widely different estimates of heritability from quite similar data, it’s time to check and foreground the assumptions built into their models. In the struggle to pin down parameters, purely environmental sources of variation are a valuable resource. I am especially thinking of within-family sources of environmental variation. Our paper used two: birth order and parental age. These are often widely available, so you’re not dealing with a small, unusual sample. The point of those sources is that you can fix a parameter to zero. The correlation of birth order with genetics is zero within families. Ditto for parental age.[3] Economic models also offer a powerful, though controversial, way to pin down a model. What defines economic theory is: somebody optimizes something. Like purely environmental variation, that narrows down what is possible: any variable that is the result of an individual’s choice must be the result of that individual maximizing some utility function. Unfortunately, the form of people’s utility functions is not as precisely known as the way genes are inherited, and it is also more controversial if people really optimize anything. Nevertheless, economic models put some structure into your world, and in a way that ties in to a big existing body of theory and empirics. In particular, economic models imply that each class of actor, as defined by the preferences and resources they possess, will do the same thing. This reduces the amount of variation we observe, and it also means that the variation we do observe is not a random sample of all the possibilities. Rather, it is defined by people doing what is best for them. For example, this might imply that parents, and/or children themselves, match environments to genetics in a way they expect to be optimal. That makes it harder for the scientist to disentangle the effects of the two; it also should put us on our guard for hidden variables, in cases where genes and environments seem not to match. The framework in genetics of accounting for observed variation is not necessarily helpful in guiding us towards the right questions. Worlds 3 and 4 have the same observed data. But their causal structure is different. And so the results of counterfactual interventions would be different, and policy recommendations should be different also. One advantage of economic theory is that it can help with counterfactual predictions. For example, suppose that parents are maximizing their children’s education, and that they correctly predict the outcomes of their actions. We rule out world 3. Of course, that ruling out is coming from theory not data! Maybe parents are maximizing something else. Nevertheless, trying to write down what the analyst thinks they aremaximizing may help him to understand bits of the world that are hard to measure directly. So, I’m trying to push two ideas: (a) seek within-family sources of variation of the child’s environment and (b) build an economic theory of what parents do into your genetically-informed family study. I think (a) is probably less controversial than (b), and probably for good reasons. But if you put them together, you might get something like this: A model of how parents allocate their parenting time/effort, given their number of children, and what they observe about each child’s phenotype.Given that, a prediction about the child’s environment within a given family, partly based on birth order and parental age (independent of genetics)Empirics on how genes and environment interact in producing child outcomes, using birth order and parental age to separate the effects of environment from genetics. (Open question: do we also consider parents’ choice to have (more) children? That would be a stretch goal.) Done right, economic models have another advantage. Because people in the models have preferences, the models give us ways to measure social welfare. Actually, you can flip this around. If as a result of some empirical analysis, like a family or twin study, someone makes a policy recommendation, then they must have an implicit definition of social welfare. Economic models make the definition explicit. World 5 gave an example where parents are effectively maximizing child welfare, in ways that are not necessarily visible to the scientist. That is not always, or even typically true, in the economic framework. Individuals may be maximizing their own welfare, but an intervention could still improve things because of e.g. externalities. (Think of a world where lazy parents beat their children more than is socially optimal, because responsible parenting takes time and effort.) But in economic models, there is always a relationship between decision and welfare. People in the model have goals which they try to achieve; the aim of policy is to help them. If people trade off e.g. educational achievement and emotional welfare, then the policy goal must respect that. Paternalism is not allowed! So this gives the final outcome you might shoot for: Counterfactual predictions and welfare analysis, using the model, for changes to parental effort allocation, based on the idea that parents are optimizing, not just doing stuff at random and observe facts the analyst does not. The above is all very ambitious; doing even part of it might be a useful contribution. ^ In Table 4. ^ Social science datasets typically record whether you went to university, but not e.g. which university you went to or which degree you studied. So this is pretty coarse data! ^ In fact, you could challenge either of these! Father’s age actually does affect genetic mutation but I think that for many cases, the effects are small enough to be negligible; if people choose how many children to have based on their prior children, then birth order could correlate with genetics, but I’ve never seen evidence for it. Just like for twin studies, assumptions are rarely cast-iron, but some are stronger than others.
2024-06-28
https://www.lesswrong.com/posts/oLRHSb3PbxFoZX6vL/how-do-natural-sciences-prove-causation
oLRHSb3PbxFoZX6vL
How do natural sciences prove causation?
kongo-landwalker
Let's say we have 2 phenomena, A and B, each can be a value of 0 or 1, and we observe, that for them implication table A=>B is always true. (Third column represents whether the combinations of events can happen or cannot.) A B A=>B 0 0 1 0 1 1 1 0 0 1 1 1 Thing we see is that combination A=1 and B=0 almost never happens, and three other combinations can happen. But how can we be sure, that it is not some kind of third event, which influences both of those to output those combinations of values? What if the table is like this, based on &? 0 0 0 0 1 0 1 0 0 1 1 1 If at least one of events (any) happens, then the second happens too. How this relation would be called? How many tables (of the 16) there are which could potentially represent causation? For example, 000 010 101 111 is not in the set, because it says, that A not being true is impossible(has never been observed), but has no limitation on the value of B. Given a table in this format (or just a string representing the third column), what a person would do next to test whether events influence one another or are both defined by a third? Does the further approach even depend on a type of table? (I expect it will not, as everything would be observed in frequencies) Is it like an cycle, where one makes an assumption "what if they both are defined by event C" and then shows there is no correlation with C, and repeats for many other possible C's? But then it looks it would not be possible to exhaust all possible C's.
2024-06-28
https://www.lesswrong.com/posts/qbjPGZWZArTjpcKBS/lesswrong-acx-meetup-transilvanya-tour-sibiu
qbjPGZWZArTjpcKBS
LessWrong/ACX meetup Transilvanya tour - Sibiu
Marius Adrian Nicoară
Let's get together at the Shopping City food court and hang out for a while. I'll bring an interesting board game that involves teamwork and Bayesian reasoning; requires 4 players. As a starting point, we can talk about "The Pyramid And The Garden" by Scott Alexander: https://www.lesswrong.com/posts/9HSwh2mE3tX6xvZ2W/the-pyramid-and-the-garden
2024-06-28
https://www.lesswrong.com/posts/o4RFp7rwfrn9bn6Lo/bayes-theorem-in-search-of-gold-lesson-1
o4RFp7rwfrn9bn6Lo
Bayes' Theorem: In Search of Gold (Lesson 1)
bayesyatina
[This text is a translation, and my English may not be perfect. I would appreciate any suggestions for stylistic improvements] The fire crackled in the fireplace, while it was raining heavily outside the window. The young dwarves, soaked from the rain, were sitting in chairs around the older dwarf. Welcome to the course "Bayes' Theorem: In Search of Gold" (started by an older dwarf standing in the center next to the board). My name is Tifius, and I will be your instructor for this practically useful theoretical course. The course is theoretical in that you will not be receiving gold at the end (there was a collective sigh from those closer to the window), but it is also practical in that it increases your chances of finding gold in the future. As you all know, any intelligent dwarf wants to get closer to gold. Some of us become bankers, so that the shine of gold from the bank's vault will please them in the morning, instead of a cup of coffee. Others become jewellers, in order to hold gold more often in their hands. Those who really want to mine gold, go to mining schools like you. You might have a question "what about blacksmiths, this is a traditional dwarven profession"? Unlike the students, perhaps you are not familiar with the life of Tifius.The teacher does not like blacksmiths, as one of them is rumored to have stolen his wife (although the teacher never had a wife himself). When Tifius discovered dynamite for gold mining and became very rich, he decided to establish a prize to encourage further discoveries in all areas of dwarf activity, except for blacksmithing. What is the most important aspect of gold mining? It is certainly not what is taught in other courses. Although there is still pickaxes and lanterns in your curriculum, it's crazy because we have been using diode lanterns and automated drilling rigs for a long time. Due to the fact that advanced technology has become indistinguishable from magic, many even believe that dwarves have overcome their inability to perform magic. However, this is not the most significant aspect of gold mining. The most important aspect is the ability to accurately predict where gold deposits can be found, rather than physical skills or knowledge of new technologies. How does one learn to predict accurately? You might need to take courses in astrology, numerology, and other esoteric practices. However, I'm joking. Even in the world of magic, the most accurate predictors are often elven scientists. To obtain their power, you don't need to kill them or drink their blood, as some believe goblins do. In general, any intelligent dwarf, before starting to do something, might ask: "Have the Elven scientists come up with anything useful in this regard?". Blacksmiths, however, do not usually resort to this approach, as they sincerely believe that a hammer will solve any problem. So, what is so special about the predictions of elves? Even the orcs make a lot of predictions, but the only problem is that they rarely come true. That's why orcs prefer to predict either about the past or something that can't be verified. The elves have developed a clever way of thinking that allows them to focus on what is true. They call this method the scientific method, and it is written in a mystical and specialized language: the language of mathematics. The elves have developed a clever system of thought that allows them to believe in ideas with good predictive power. They call this system the "scientific method". It is expressed in the most mysterious and esoteric language: the language of mathematics. They say magic, built on this system, can make a 400-ton palace fly around the Earth, or enchant particles to fly through a thirty-kilometer ring ten thousand times per second. This is not the calculations of the orcs on the aerodynamics of a kicked leprechaun. And so, the elves discovered a way to make predictions that would always be no worse than those of others (so that you could not envy the decisions of any other dwarf). In order to systematically find more gold, all you need to do is understand Bayes' theorem. However, it's not as simple as that. Firstly, it is expressed in the language of mathematics, and most dwarfs only have a basic understanding of arithmetic or at most, they can read and translate. Secondly, understanding the mathematical underpinnings does not immediately provide an intuitive understanding of how it will allow you to find more gold. The course is designed to solve both problems, and as a bonus, you will gain an understanding of how to ideally update your beliefs in order to minimize the number of times the world surprises you. In the next lesson, we will discuss how this mysterious process works. See you in the next class!
2024-06-28
https://www.lesswrong.com/posts/cruYtDoJuDXnkaPxR/how-a-chip-is-designed
cruYtDoJuDXnkaPxR
How a chip is designed
Yannick_Muehlhaeuser_duplicate0.05902100825326273
Disclaimer: This is highly incomplete. I am not an expert in the field. There might be some unfamiliar terms. While I will try to explain things, explaining every single term would be beyond this post. You will usually be able to get a sufficient understanding by clicking the links or googling it. Introduction I think everyone, if they read about the chip industry long enough, has a moment where they have to put down a book or pause a podcast and simply remain stunned at the fact that it is possible to design and build something that is so incredibly impressive. The Apple A17 chip contains 183 million transistors per square millimeter. All placed in a coherent manner and produced with extremely high reliability. This is exactly why it is so fascinating to learn more about how it is actually done. On top of that, in a universe where compute is arguably the most important input in the AI production function, this knowledge is also crucial to effective AI governance. So what follows is a quick introduction to the processes of getting a chip from a vague idea to sending your files to the manufacturer, also called the tape-out. Background Knowledge One of the most important decisions, a decision that significantly determines all the others, is what manufacturer will build your chip and what process they will use. There are companies that do both design and manufacturing (e.g. Intel), but especially when it comes to the most advanced logic chips, more and more companies are what is called “fabless” - they focus on the design and task a so-called “foundry” (e.g. TSMC) with the manufacturing. Nowadays many fabs and fabless companies work together very closely in what is called Design-Technology Co-Optimization (DTCO). In practice, there are quite significant limitations in chip design, and the fab will check design plans and inform designers what can and can't be manufactured. This collaborative approach ensures that chip designs are optimized for the specific manufacturing process, balancing performance, power, area, and yield considerations. DTCO has become increasingly important as the industry approaches the physical limits of semiconductor scaling, requiring closer integration between design teams and process engineers to continue advancing chip capabilities. The foundry sends the design company what is called the process design kit (PDK), which contains all the important specifics to the fab and the manufacturing process (also known as the technology node). One factor that in large part determines the profitability of a chip is the yield of the manufacturing process. The yield is the fraction of chips produced that work flawlessly and can be sold. Compared to other types of products, in the semiconductor industry the yield is quite low, sometimes moving significantly below 50% for periods of time, especially at the beginning of a new technology node. To improve yield, optimal manufacturability is taken into account at many stages of the design process in what is called Design for Manufacturability (DFM). Chips are also designed to be easy to test (Design For Testability, DFT). In this post we are focussing on the design process, not with the actual manufacturing steps or the details of a transistor. But it is important to know that in practice we are working with standard cells that are all equal in height and vary in width. varies to make design and manufacturing easier. Often the IP for the standard cells is licensed from third parties. The Design Process My stages follow the outline given by Prof. Adam Teman in this lecture. Definition and Planning This is the stage where we think about what you even want to build. What bus structure do you want? How many cores should it have? What amount of power will it use and what will the clock frequency be. Those decisions are being made based on a mix of considerations from the product strategy, the capabilities of the design division and the possibilities of the manufacturing process. Design and Verification In this stage we start describing our chip in a hardware description language (HDL). The most prominent examples are Verilog and VHDL. Here we can describe our chip in a rigorous, formal but still very abstract way and get an optimal starting point for the less abstract design steps that follow. The style in which this is done is called RTL, which stands for Register Transfer Level. While there are features in a HDL that will be very familiar to people with coding experience, the purpose of a HDL is to describe, not to obtain a result or accomplish a task and it should not be conflated with a normal programming language. If you are curious what a HDL looks like, I recommend this video (see screenshot). This is also the first step at which intellectual property (IP) comes in. Using IP means we don’t have to design every single part but can adopt some of them from IP providers. After we complete our RTL code, it will undergo logic verification. This involves a series of checks that confirm the function, formal correctness and synthesisability of our chip. This can often take even more effort than the previous design stage and involves among other things detailed simulations and formal proofs. Logic Synthesis After we have a RTL description of our chip, we need to figure out what logic gates we need to build to turn this plan into a functioning product. The goal here is to produce what is called a gate-level netlist. [1]The netlist contains information about what gates are included and what the connections are. There are several standard software solutions that produce the netlist for us. [2] What information do we need to put in? Well, there is the RTL file of course, but also a Technology library file, that represents the production technology and constraint files (SDC) that describe what we are optimizing for. [3] The first step is the actual synthesis, where we convert the RTL code into a generic logic netlist. What follows is technology mapping, where we map the abstract definitions of the gates to the standard cells of our technology library. After reaching a result we are happy with in synthesis, we conduct a variety of post-synthesis checks, often using third-party software different from the synthesis tools.[4] These checks give us initial confidence in the design before moving to the place and route stage, where more accurate analysis will be performed. Physical Design After we have produced a gate-level netlist that meets all our requirements, we need to specify where on the chip they should be and where all the connections are going to go. The core of this step is another piece of software known as a place-and-route tool (PNR).[5] Our goal here is to produce a file in GDS II format, which finally actually represents geometric shapes that the fab will end up producing. As input we of course have the gate-level netlist from the step before, but also a bunch of other files containing information about our cells and their content.[6] The place-and-route process comprises several iterative substeps. The main components are Placing, which determines the physical locations of logic elements, and Routing, which connects these elements. Another crucial substep is Clock Tree Synthesis, which designs the clock distribution network. Each of these substeps follows a generate-check-regenerate workflow until satisfactory results are achieved, after which we proceed to the next substep. Signoff and Tapeout Going back to the days when literal tape was involved, the final stage is called the Tapeout. This is where you send your plans to the manufacturer. Once the photomasks are produced, there is no way back without losing a huge amount of money. So, as you will no longer be surprised, we don’t do this without running even more tests and conducting detailed simulations to make sure everything is working as intended and all the requirements designed at the beginning are met. Takeaways For me the main takeaway is the huge amount of division of work. The division of the workflow into so many different steps allows engineers to effectively specialize. Having a dedicated software solution for each step basically allows the chip designer to outsource a lot of work to the makers of the software. Some may be surprised how digital this process feels. At times it seems almost possible to forget that you are reading about hardware design. This seems relevant to me because many recent analyses of AI take-off scenarios hinge in some part on AI accelerating hardware progress. This needs a much deeper analysis by an expert in the field, but to me it seems like the way this process works makes AI induced improvements a realistic possibility. [7] Appendix 1 Verilog example This is code in Verilog for a D Flip-Flop taken from here. 2 Gate-Level Netlist example The following is an example netlist for a 8 bit counter taken from here. Let’s look at the line SDFFRHQX1 count_reg[0] (.RN (rst), .CK (clk), .D (n_6), .SI (scan_in), .SE (SE), .Q (count[0])); SDFFRHQX1 is the name of a D-Flip Flop Standard Cell chosen from the Library and and cout_reg[0] is the name of the instance. All the stuff in the brackets are the connections. All the other bunch of letters are other Standard cells. The resulting circuit looks like this ^ A Netlist can be flat or hierarchical. A Flat Netlist contains just one module, while a Hierarchical contains several modules that are called by the main module. ^ The options are Design compiler by Synopsys, Genus by Cadence and LeonardoSpectrum by Mentor Graphics ^ It may also include Design for testing (DFT) definitions ^ e.g. Gate-level simulation • Formal verification (Logic Equivalence) • Static Timing Analysis (STA) • Power/Area estimation ^ Here we have Innovus by Cadence, IC Compiler by Synopsys and Olympus SOC made by Mentor Graphics ^ So in total we have the gate-level netlist (in .V format), the multiple .lef files of our physical library, an IO assignment file (.io format), out timing library in .lib format and our constraints in the familiar .sdc format ^ See: https://www.wsj.com/articles/in-race-for-ai-chips-google-deepmind-uses-ai-to-design-specialized-semiconductors-dcd78967 https://www.deepmind.com/blog/optimising-computer-systems-with-more-generalised-ai-tools https://www.iwls.org/iwls2023/
2024-06-28
https://www.lesswrong.com/posts/dxBiq5LpPwtirDTMY/the-wisdom-of-living-for-200-years
dxBiq5LpPwtirDTMY
The Wisdom of Living for 200 Years
sustrik
Scott Sumner writes: Old people have a sort of double vision about the past — an ability to see the past from the perspective of today, and also from the perspective of the people who lived through those times. When then was now. This double vision, according to Scott, brings wisdom, something you gain with age, an attitude that younger people are missing: When I read some of my lamer posts from the early days, I cringe at their lack of wisdom. What was I thinking? It's a little nebulous, but one can see how considering the current issue from multiple cultural perspectives - remember: past is a foreign country! - helps you to get a more balanced view. Say, the political situation in Slovakia, the country which I am originally from, is not exactly rosy right now. We have a new Orbán or PiS-style government, which is trying hard to weaken the rule of law. Everybody in my social circle goes crazy about it. But there's a difference: While young people are simply horrified, older people do act as if they were horrified, but if you question them in private, they tend to admit that yes, it's not that bad, it used to be worse in 90s, when we used to have severed human heads in flower pots and whatnot. It's a tacit knowledge that doesn't get passed on. Young people, born in 90s don't remember the severed heads, after all. Scott further writes: Of course this only applies to periods that we ourselves have lived through. I have no feel at all for the 1890s. I know that it was a disgrace for respectable women to show their legs in public, or to go out on the town without an escort, but I don’t actually have much of a feel for why. Understood. But I have two questions: First, do we need the full experience of the past to get wise? Or would something that goes half the way be good enough? And second, is there a way to convey this tacit knowledge about the past, even if it's not perfect? People today freak out about the war of the day, the democratic backslide, the conspiration theories, the AI doom. But then you look at someone like Tyler Cowen and it's surprising how equianimous he manages to remain in spite of all that. And it doesn't seem to be only about the age. One has to wonder whether it's simply linked to intimate knowledge of the history, the visceral realization that people are always freaking out about something. "When we run out of guano we are all going to die!" (That's late XIX. century.) The calm comes from understanding that today is just business as usual, not some kind of outlier. I remember, when I was a kid, when the new bridge over Danube was built, how they rode military trucks on it to test whether it structurally sound and can bear the load. At least that's what I though. Quite late in my life I found out that the bridge was actually finished in 1971, whereas I was born in 1973! I couldn't have possibly seen the trucks on the bridge! Yet, I do have the tacit knowledge I would have got if I were there. I know how it feels when a bridge is being tested for structural soundness. It's easy to form false memories. As far as I understand, how it works is that each time you remember a thing you are rewriting the memory by the image you recollect at the moment. And a thought naturally follows: Can I form this visceral knowledge of 1960's even though I haven't been alive yet? And what about 50s or 20s? Hell, what about XIX. or XVIII. century? Can I live for 250 years? Can I, on my dying bed, nostalgically ruminate about how bad the Napoleonic wars were and how the laughter of young girls sounded back then? So, I have a project to accomplish. A project to get an intrinsic feeling of the past, at least the XIX. and XX. centuries. I do have some feeling about what the 60's were like because there's still a lot of people around who lived through the era, who reference it in passing, who make casual remarks about how the things were back then. But what about 1890s? Do I smell the manure on the streets? And how does all that differ from 1870s? It's not like I am doing it full time, but it's always on the back of my mind. Technically, it helps to look at the political and economic history of the era to get the overall framing, but that's not enough. Going to a museum helps, touching artifacts from back then, trying to actually use them. Spending an evening without electricity, in a candle-lit room. Asking your grandmother. Looking at old paintings or photos. Reading newspapers of yesteryear. Getting excited about the political controversy of the time. Taking a side in a long forgotten dispute. What helps the most is the contemporary fiction. There's so much of the tacit knowledge you suck up from reading a novel from 1830s. Brief remarks meant as simple literary devices to make the story flow smoother imprint on you. You get unknowingly wiser. Let's consider Stephan Zweig's "The World of Yesterday". That one is even better than a random novel! It's written by a fiction writer but its explicit purpose is to explain how if felt to live through belle époque to the young people of 1940s who did not remember it. It's a deliberate attempt to transfer the tacit knowledge that Scott writes about. Consider this part here: But what a tragedy when some of those young girls missed their time, when they were not yet married at twenty-five or thirty! [...] Then that tender picture turned into a sharp and haunting caricature. The unmarried girl became an "old maid" and the old maid became a "spinster", whom the satirical magazines constantly mocked in the most disgusting manner. Whoever today opens an old issue of the magazine "Fliegende Blätter" or any other humorous paper of the time, they will be horrified to find the most stupid mockery of aging girls, who, with their destroyed psyche, can no longer hide their natural desire for love." The fate of spinsters in belle époque is not something I would naturally think of, nor would I get it from reading official history textbooks. Yet, now I feel, at least to some extent, how much it must have sucked. I've got wiser. Is it worth it? I don't know. Is it a kind of wisdom you can take advantage of in the real life? Maybe. In any case, it gets you in touch with larger part of the humanity than what you would manage living only in the present. It widens your understanding of the human condition. And that's not something to dismiss lightly.
2024-06-28
https://www.lesswrong.com/posts/7LaDvWtymFWtidGxe/corrigibility-tool-ness
7LaDvWtymFWtidGxe
Corrigibility = Tool-ness?
johnswentworth
Goal of This Post I have never seen anyone give a satisfying intuitive explanation of what corrigibility (in roughly Eliezer’s sense of the word) is. There’s lists of desiderata, but they sound like scattered wishlists which don’t obviously point to a unified underlying concept at all. There’s also Eliezer’s extremely meta pointer: We can imagine, e.g., the AI imagining itself building a sub-AI while being prone to various sorts of errors, asking how it (the AI) would want the sub-AI to behave in those cases, and learning heuristics that would generalize well to how we would want the AI to behave if it suddenly gained a lot of capability or was considering deceiving its programmers and so on. … and that’s basically it.[1] In this post, we’re going to explain a reasonably-unified concept which seems like a decent match to “corrigibility” in Eliezer’s sense. Tools Starting point: we think of a thing as corrigible exactly insofar as it is usefully thought-of as a tool. A screwdriver, for instance, is an excellent central example of a corrigible object. For AI alignment purposes, the challenge is to achieve corrigibility - i.e. tool-ness - in much more general, capable, and intelligent systems. … that all probably sounds like a rather nebulous and dubious claim, at this point. In order for it to make sense, we need to think through some key properties of “good tools”, and also how various properties of incorrigibility make something a “bad tool”. We broke off a separate post on what makes something usefully thought-of as a tool. Key ideas: Humans tend to solve problems by finding partial plans with “gaps” in them, where the “gaps” are subproblems which the human will figure out later. For instance, I might make a plan to decorate my apartment with some paintings, but leave a “gap” about how exactly to attach the paintings to the wall; I can sort that out later.[2]Sometimes many similar subproblems show up in my plans, forming a cluster.[3] For instance, there’s a cluster (and many subclusters) of subproblems which involve attaching things together.Sometimes a thing (a physical object, a technique, whatever) makes it easy to solve a whole cluster of subproblems. That’s what tools are. For instance, a screwdriver makes it easy to solve a whole subcluster of attaching-things-together subproblems. How does that add up to corrigibility? Respecting Modularity One key piece of the above picture is that the gaps/subproblems in humans’ plans are typically modular - i.e. we expect to be able to solve each subproblem without significantly changing the “outer” partial plan, and without a lot of coupling between different subproblems. That’s what makes the partial plan with all its subproblems useful in the first place: it factors the problem into loosely-coupled subproblems. Claim from the tools post: part of what it means for a tool to solve a subproblem-cluster is that the tool roughly preserves the modularity of that subproblem-cluster. That means the tool should not have a bunch of side effects which might mess with other subproblems, or mess up the outer partial plan. Furthermore, the tool needs to work for a whole subproblem-cluster, and that cluster includes similar subproblems which came up in the context of many different problems. So, the tool needs to robustly not have side effects which mess up the rest of the plan, across a wide range of possibilities for what “the rest of the plan” might be. Concretely: a screwdriver which sprays flames out the back when turned is a bad tool; it usually can’t be used to solve most screw-turning subproblems when the bigger plan takes place in a wooden building. Another bad tool: a screwdriver which, when turned, also turns the lights on and off, causes the closest patch of grass to grow twice as fast while the screwdriver is turning, and posts pictures of the user’s hand to instagram. This one is less directly dangerous, but for screw-turning purposes we’d much rather have a regular screwdriver; it’s inconvenient when the lights suddenly go off and on at the construction site, or we suddenly need to mow again, or my Instagram page is suddenly full of pictures of my hand. (Admittedly the screw driver with all the side effects would be fun, but in ways which don’t scream “good tool”/“corrigible”.) So one core property of a “good tool” is that it lacks side effects, in some sense. And it lacks side effects robustly, across a wide range of contexts. A good tool just solves its subproblem, and does little else. Visibility and Correctability Another, less obvious piece of the “tools” characterization: in practice, approximately-all problems are much easier when we can see what’s going on and course-correct along the way. This is not part of the “defining concept” of a tool; rather, it’s a property of nearly-all subproblems. The practical necessity of a feedback control mechanism is a usually-implicit part of the subproblem: a “good” solution to the subproblem should include visibility and correctability. Concretely: a drill which doesn’t give the user some feedback when the torque suddenly increases is a “bad drill” - it’s going to result in a lot of stripped screws, cracked wood, etc. An implicit part of the problem the drill is supposed to solve is to not over-screw the screw, and the lack of feedback makes it a lot more likely that minor mistakes or random factors will end in over-screwed screws. So visibility and correctability are, while not defining properties of a “good tool”, a near-universal implicit requirement in practice. Put that together with “respecting modularity”, and you can start to see how corrigibility is maybe synonymous with good-tool-ness… Let’s Go Through A List Of Desiderata Specifically this list of desiderata for corrigibility from Jan Kulveit. We’ll talk about how each of them plays with corrigibility-as-tool-ness. 1. Disutility from resource acquisition - e.g. by some mutual information measure between the AI and distant parts of the environment We’re not viewing corrigible systems as necessarily utility-maximizing, so “disutility” doesn’t quite fit. That said, discouraging “mutual information between the AI and distant parts of the environment” sure sounds a lot like robustly respecting modularity. (I don’t think mutual information alone is quite the right way to formalize it, but I don’t think Jan intended it that way.) 2. Task uncertainty with reasonable prior on goal drift - the system is unsure about the task it tries to do and seeks human inputs about it. “Task uncertainty with reasonable prior…” sounds to me like an overly-specific operationalization, but I think this desideratum is gesturing at visibility/correctability. 3. AI which ultimately wants to not exist in future as a terminal goal. Fulfilling the task is on the simplest trajectory to non-existence Again, the framing doesn’t quite match tool-ness; tool-ness (“the-stuff-that’s-great-about-screwdrivers”) can allow for systems that “want” stuff or maximize utility, but we are definitely not assuming that. That said, the tool is supposed to not have long-range side-effects, and e.g. actively preserving itself or making copies of itself outside of its task would definitely be a long-range side-effect. 4. Information-theoretic penalty on reasoning: any bit-flip in the system is assumed to have some utility cost. The cost function could be very small initially but get steeper fast. Tool-ness doesn’t seem inherently closely related to limited reasoning or limited information processing, except maybe insofar as the processing induces side effects. So corrigibility-as-tool-ness probably doesn’t imply this desideratum at all. 5. Satisficing Yet again, we’re not necessarily thinking of the corrigible system as a utility maximizer, so there’s a frame mismatch. That said, there’s nothing inherently un-tool-like about a system solving some subproblem really really well, i.e. optimizing the solution really hard. The un-tool-ness kicks in when that optimization pressure starts to have side effects, or starts to interfere with visibility/correctability. So insofar as good tools tend to satisfice at all, they do so because harder optimization might break problem modularity (i.e. have side effects) or break visibility/correctability. 6. Myopia While we didn’t explicitly talk about myopia above, it sure does intuitively seem like a core part of tool-ness. If a system is optimizing for some long-range goal beyond the task I intend for it, then I would not generally think of that system as a tool. Possibly the “respect modularity” concept could be framed in such a way that myopia naturally follows. 7. Tripwire artifacts. Messing up with some to the system unknown but unrelated parts of the environment is associated with large negative rewards “Rewards” aren’t necessarily a thing for tools, but I think this is basically gesturing at “no long-range side effects” again. 8. External watchdogs. Smaller and fast external systems able to react quickly to out-of-distribution behavior. Sounds like visibility and correctability again, though now with respect to some “external watchdog” which might not be the user. 9. Ontological uncertainty about level of simulation. … dude, if your “corrigible system” is reasoning about things like “level of simulation” at all, then something has probably gone wrong. That’s not how myopia is supposed to work. This desideratum I just reject. [Desiderata 10 - 13 are oversight mechanisms, which aren’t really part of the corrigible system itself, so I’m ignoring them.] 14. Imposing strong incentives on internal modularity, and interpretable messaging across module boundaries This is an interesting one which does not necessarily follow from tool-ness. It’s about visibility/correctability of the tool itself, as opposed to the tool’s outputs. Not sure whether I buy that this should be a defining characteristic of corrigibility or not.[4] 15. Human-level explanations, produced by an independent "translator" system I definitely maintain that screwdrivers are corrigible, even without an independent “translator” system explaining what they’re doing. So I reject this as a desideratum for corrigibility, though obviously it’s still a nice property to have for powerful systems (and might even be necessary in order for some kinds of systems to achieve visibility and correctability). That’s all Jan’s desiderata! Having gone through them, I feel pretty good about the corrigibility = tool-ness mental model. There are some desiderata which aren’t satisfied, but they’re not-satisfied in ways which make me feel like the desiderata are bundling together various nice-to-haves which aren’t necessary for corrigibility per se. What Would It Look Like To Use A Powerful Corrigible AGI? One important implication of the corrigibility-as-tool-ness mental model: when using a corrigible system, it is the human operator’s job to figure out what they want, figure out what problems they face, and break off useful subproblems. Things like “figure out what I should want, and then do that” are just not the kind of “task” which a corrigible system takes in; it’s a type error. (One could take “figure out what I should want” to be the main problem, but then it’s still the human operator’s job to break that still-philosophical-and-possibly-confused question into tool-ready subproblems which together will answer/dissolve the question.) Of course that doesn’t necessarily mean that I need e.g. a full mathematical specification of every subproblem I want to hand off to a corrigible system; I certainly don’t need any explicit formalism in order to use a screwdriver! But it means that there’s a nontrivial type-signature to “subproblems”, which is different from most “what should?” problems or most deconfusion problems.[5] I’ve talked before about how I expect attempts to outsource alignment research to AI to end up bottlenecked on the human outsourcer. If I don’t know what I want, and I’m fundamentally confused about how to figure out what I want (including e.g. how to break it into subproblems), then somewhere along the way I need to do some work which can’t be outsourced to the AI (because part of the work is figuring out what I can even outsource safely). When the AI is corrigible in the sense of tool-ness, that constraint is made much more explicit. The corrigible AI is a tool, and it’s not a tool’s job to figure out what top-level goal I should pursue. Another way to put it: when using corrigible AI, the “main responsibility” of choosing and structuring the problem falls on the user. We can maybe identify useful subproblems to outsource, but we don’t actually get the option of outsourcing all the difficult work of understanding what we want and becoming less confused. The human operator is “in the driver’s seat”, and has all the difficult problems which come with that responsibility. Let’s make it concrete: we cannot just ask a powerful corrigible AGI to “solve alignment” for us. There is no corrigible way to perform a task which the user is confused about; tools don’t do that. From Cognition to Real Patterns? At the start of the previous section, we used some funny wording: we think of a thing as corrigible exactly insofar as it is usefully thought-of as a tool Why not just “a thing is corrigible exactly insofar as it is a good tool”? Why the “we think of a thing as” and “usefully thought-of as” business? The previous section characterized tool-ness from a subjective, cognitive perspective: it was about the conditions in which it’s useful for a particular mind to model something as a tool. That’s the first step of the Cognition -> Convergence -> Corroboration pipeline[6]. The next step is convergence: we note that there’s a lot of convergence in which things different minds view as tools, and what subproblems different minds view those tools as “for”. That convergence (along with the very small number of examples of tool-usage typically needed to achieve rough convergence) roughly implies that these minds convergently recognize some “real patterns” out in the environment as particular subproblem-clusters, and as tools for those subproblem-clusters. In other words: there are some patterns in the environment which different people convergently recognize as “subproblems” and “tools” for those subproblems. The next big question is: what are those patterns in the environment? We’ve characterized tool-ness/corrigibility so far in terms of subjective, internal mental usage, but what patterns in the environment are convergently modeled as tools/subproblem-clusters by many different minds? We already said some things about those patterns in the previous section. For instance: a good tool should “lack side effects” across a wide variety of contexts. That is, in some sense, a physical fact about a tool or its use. But what counts as a “side effect”? What patterns in the environment are convergently recognized as “side effects”, of the sort which would break tool-ness? That depends on how we typically “carve out” subproblems: the point of a “side effect”, cognitively, is that it potentially interferes with parts of a plan outside the subproblem itself. So, in order to fully ground lack-of-side-effects in environmental patterns (as opposed to internal cognition), we’d need to characterize the environmental patterns which humans convergently “carve out” as subproblems. Note that the “convergence” part of such a characterization would ideally be demonstrable empirically and mathematically, e.g. with the sort of tools used in the toy model of semantics via clustering. Characterizing the environmental patterns which humans convergently “carve out” as subproblems is an open problem, but you can hopefully now see why such a characterization would be central to understanding corrigibility. We can pose similar questions about visibility and correctability: insofar as humans agree on what things are more or less visible/correctable, what patterns in the environment do humans convergently recognize as “visibility” and “correctability”? Again, an answer to the question would hopefully involve empirical and mathematical evidence for convergence. And again, answering the question is an open problem, but you can hopefully see why understanding such patterns is also central to understanding corrigibility. Now let’s move away from particular properties, like lack-of-side-effects or visibility or correctability. New question: are those properties together all we need to convergently recognize some pattern in the environment as corrigible/tool-like? If some other properties are needed, what are they? Yet another open problem. Why are we interested in all those open problems? Well, intuitively, we expect that corrigible systems will have nice safety properties - like lack-of-side-effects, for example. We want that story to be more than just vague intuition; we want it to be precisely operationalized and provable/testable. And the main “roadmap” we have toward that operationalization is the intuitive story itself. If we could characterize the convergent patterns in the environment which different people recognize as “subproblems” or as “tools” or as “correctability” etc, then intuitively, we expect to find that those patterns-in-the-environment imply provable/testable nice safety properties. ^ Paul does have a decent explanation of what he means by “corrigibility”, but I think Paul is pointing to a different (though related) concept than Eliezer. Also Paul’s notion of “corrigibility” would entail much weaker safety properties for an AI than Eliezer’s notion. In the rest of the post, we’re just going to say “corrigibility”, without constantly clarifying what notion of corrigibility we’re talking about. ^ And the fact that I intend to sort it out later constrains the type signature of this kind of subproblem. More on that later. ^ Note that the vast majority of subproblems basically-never come up in partial plans; the space of “natural” subproblems is much smaller than what could be mathematically specified. ^ David leans slightly “yes”, John leans slightly “no.” ^ Here’s our current best guess at how the type signature of subproblems differs from e.g. an outermost objective. You know how, when you say your goal is to “buy some yoghurt”, there’s a bunch of implicit additional objectives like “don’t spend all your savings”, “don’t turn Japan into computronium”, “don’t die”, etc? Those implicit objectives are about respecting modularity; they’re a defining part of a “gap in a partial plan”. An “outermost objective” doesn’t have those implicit extra constraints, and is therefore of a fundamentally different type from subproblems. Most of the things you think of day-to-day as “problems” are, cognitively, subproblems. ^ which we still have not written a post on, and still should
2024-06-28
https://www.lesswrong.com/posts/6evBdZAiT9yxoJBKk/situational-awareness-8
6evBdZAiT9yxoJBKk
Situational Awareness
PeterMcCluskey
Nearly a book review: Situational Awareness, by Leopold Aschenbrenner. "Situational Awareness" offers an insightful analysis of our proximity to a critical threshold in AI capabilities. His background in machine learning and economics lends credibility to his predictions. The paper left me with a rather different set of confusions than I started with. Rapid Progress His extrapolation of recent trends culminates in the onset of an intelligence explosion: His assessment of GPT-4 as equivalent to a smart high schooler depends significantly on the metrics used. For long-term planning abilities, this estimate may be overstated by about five orders of magnitude. However, by other measures, his assessment seems somewhat reasonable. Initially, I expected the timeline for automated AI researchers to be slightly longer than Aschenbrenner's 2028 prediction, due to limitations in their long-term planning abilities. However, upon closer examination, I found his argument less dependent on overcoming such weaknesses than I first thought. So I'm not going to bet very much against his claim here. One neat way to think about this is that the current trend of AI progress is proceeding at roughly 3x the pace of child development. Your 3x-speed-child just graduated high school; it'll be taking your job before you know it! While a 3x pace seems somewhat high to me - I'd estimate closer to a 1:1 ratio - his overall forecast for 2028 may not be far off, considering that he may be overestimating the gap between a smart high schooler and an assistant AI researcher. Aschenbrenner has a section on the "data wall" that seems a bit suspicious. He expects increasing divergence in the results of various lab's progress due to need for increasingly important algorithmic insights to get around the problem. While AI training is indeed data-dependent, and much of the easily accessible data has been used, I believe data scarcity may be less problematic than Aschenbrenner suggests. Rather than a "wall," I see it as having picked the low-hanging fruit. Untapped sources of useful data likely exist, with the primary challenge being the cost of acquisition. I'm reluctant to give examples, just in case there are players who haven't figured it out yet. I suspect the most advanced labs will be bottlenecked more by compute than by data. There will be modest differences in how quickly labs throw lots of money at gathering data. If labs' progress diverges much, it will likely be due to something else (see the next section, on unhobbling). Aschenbrenner decomposes the drivers of progress into three factors: physical compute, algorithmic efficiencies, and unhobbling. Physical compute is expected to increase nearly four-fold annually until around 2030, with subsequent acceleration or deceleration depending on whether AI has a dramatic transformative impact on global economic growth. Algorithmic efficiencies, focusing on low-level optimizations, have been surprisingly impactful, doubling effective compute roughly every eight months. I'm guessing this includes minor improvements to matrix multiply algorithms, or figuring out that some operations can be skipped because they don't affect the end result. Or consider adaptive compute: Llama 3 still spends as much compute on predicting the "and" token as it does the answer to some complicated question, which seems clearly suboptimal. The evidence here isn't strong enough to establish a clear long-term trend. My intuition says that it's partly due to a burst of improvements from 2019 to 2022, as researchers suddenly realized those improvements were ridiculously valuable, with diminishing returns potentially already slowing these effects. Unhobbling The concept of "unhobbling" suggests that current AIs possess latent human-level intelligence capabilities, hampered by clumsy usage. This potential is being unlocked through high-level algorithmic advances like chain of thought, expanded context windows, and scaffolding. E.g.: GPT-4 has the raw smarts to do a decent chunk of many people's jobs, but it's sort of like a smart new hire that just showed up 5 minutes ago We're still just beginning to figure out how to turn GPT-4 into a worker that has developed some expertise in a particular job. His framing of high-level algorithmic progress as unhobbling of latent intelligence is somewhat unusual. Most discussions of AI seem to assume that existing AIs need to acquire some additional source of basic intelligence in order to function at near-human levels. Is Aschenbrenner's framing is better? He seems at least partly correct here. When performance is improved by simple tricks such as offering a chatbot a tip, it's pretty clear there's some hidden intelligence that hasn't be fully exposed. I'm unsure whether most high-level algorithmic progress is better described as unhobbling or as finding new sources of intelligence. The magnitude of such latent intelligence is hard to evaluate. For now, I'm alternating between the unhobbling model and the new sources of intelligence model. Aschenbrenner estimates unhobbling to be as significant as the other two drivers. While evidence is inconclusive, it's conceivable that unhobbling could become the primary driver of progress in the coming years. This uncertainty is makes me nervous. By the end of this, I expect us to get something that looks a lot like a drop-in remote worker. Intelligence Explosion I've been somewhat reluctant to use the term intelligence explosion, because it has been associated with a model from Eliezer Yudkowsky that seems somewhat wrong. Aschenbrenner's description of an intelligence explosion aligns more closely with a Hansonian framing. It's more compatible with my understanding of the emergence of human intelligence, and potentially even the Cambrian explosion. His projection suggests AIs will take over much of AI research by late 2027. We'd be able to run millions of copies (and soon at 10x+ human speed) of the automated AI researchers. While millions of AI researcher copies is higher than what I expect, the overall analysis doesn't hinge on this specific number. Imagine 1000 automated AI researchers spending a month-equivalent checking your code and getting the exact experiment right before you press go. I've asked some AI lab colleagues about this and they agreed: you should pretty easily be able to save 3x-10x of compute on most projects merely if you could avoid frivolous bugs, get things right on the first try, and only run high value-of-information experiments. This model posits that the explosion begins when the cognitive resources devoted to AI development increase at superhuman rates, not necessarily requiring AIs to perform all relevant tasks. Doing AI research requires some specialized brilliance, but doesn't require researchers whose abilities are as general-purpose as humans. The massive increase in labor could accelerate algorithmic progress by at least 10x - a change dramatic enough to warrant the term "explosion." I can believe that we'll get a year of 10x algorithmic progress. I expect that after that year, progress will depend much more heavily on compute. How much will that increase in intelligence enable faster production of compute? Aschenbrenner doesn't tackle that question, and I'm fairly uncertain. It seems ironic that Aschenbrenner has used Hansonian framing to update my beliefs modestly towards Eliezer's prediction of a fast takeoff. Although most of the new evidence provided is about trends in algorithmic progress. The prediction that LLM-based AI will trigger the explosion doesn't mean that superintelligence will be an LLM: The superintelligence we get by the end of it could be quite alien. We'll have gone through a decade or more of ML advances during the intelligence explosion, meaning the architectures and training algorithms will be totally different (with potentially much riskier safety properties). Superalignment RLHF relies on humans being able to understand and supervise AI behavior, which fundamentally won't scale to superhuman systems. By default, it may well learn to lie, to commit fraud, to deceive, to hack, to seek power, and so on The primary problem is that for whatever you want to instill the model (including ensuring very basic things, like "follow the law"!) we don't yet know how to do that for the very powerful AI systems we are building very soon. ... What's more, I expect that within a small number of years, these AI systems will be integrated in many critical systems, including military systems (failure to do so would mean complete dominance by adversaries). Aschenbrenner acknowledges significant concerns about safe AI development. However, his tone, particularly in his podcast with Dwarkesh, sounds very much the opposite of scared. This seems more like fatalism than a well thought out plan. I suspect he finds it hard to imagine scenarios under which safety takes more than a year to develop with AI assistance, so he prays that that will be enough. Or maybe he's seen fear paralyze some leading AI safety advocates, and wants to err in the other direction? Lock Down the Labs Aschenbrenner anticipates that competition between the US and China will pressure AI labs to compromise safety. in the next 12-24 months, we will develop the key algorithmic breakthroughs for AGI, and promptly leak them to the CCP But the AI labs are developing the algorithmic secrets---the key technical breakthroughs, the blueprints so to speak---for the AGI right now (in particular, the RL/self-play/synthetic data/etc "next paradigm" after LLMs to get past the data wall). AGI-level security for algorithmic secrets is necessary years before AGI-level security for weights. These algorithmic breakthroughs will matter more than a 10x or 100x larger cluster in a few years a healthy lead will be the necessary buffer that gives us margin to get AI safety right, too the difference between a 1-2 year and 1-2 month lead will really matter for navigating the perils of superintelligence He recommends military-level security to protect key algorithmic breakthroughs, arguing that this is the primary area where the US can outcompete China. Such security would slow AI advances, partly by reducing communications within each AI lab, and partly by impairing the ability of AI labs to hire employees who might be blackmailed by the CCP. Presumably he thinks the slowdown is small compared to the difference in how fast the two countries can make algorithmic progress on their own. I'm disturbed that he's not explicit about this. It's not at all obvious that he's correct here. What about the current US lead in compute? Why won't it be enough for the US to win a race? China may outbuild the US. The binding constraint on the largest training clusters won't be chips, but industrial mobilization---perhaps most of all the 100GW of power for the trillion-dollar cluster. But if there's one thing China can do better than the US it's building stuff. I can see how they might build more datacenters than the US. But what chips would they put into them? China now has little access to the best NVIDIA chips or ASML equipment. Those companies have proven hard for anyone to compete with. My impression is that even if China is on track to have twice as many datacenters, they're going to be running at somewhat less than half the speed of US datacenters. Aschenbrenner seems to think that China can make up with quantity for their lack of quality. That's a complex topic. It looks like most experts think he's wrong. But I see few signs of experts who are thinking more deeply than Aschenbrenner about this. Can we see signs of a massive Chinese datacenter buildup now? My attempts at researching this yielded reports such as this predicting 3.54% annual growth in datacenter construction. That seems ridiculously low even if China decides that AI progress is slowing. What about stocks of companies involved in the buildup? GDS Holdings and VNET Group seem to be the best available indicators of Chinese datacenter activity. Markets are very much not predicting a boom there. But I suppose the CCP could have serious plans that have been successfully kept secret so far. My guess is that Aschenbrenner is wrong about Chinese ability to catch up to the US in compute by 2028, unless US regulation significantly restricts compute. China might not have the same caution slowing it down that the US will I don't see how to predict which country will be more cautious. This seems like a crucial factor in determining which side we should prefer to lead. Shouldn't we examine some real evidence? I don't know what evidence Aschenbrenner is relying on, and my brief search for such evidence failed to turn up anything that I consider worth reporting here. What happens when the free world's progress is slowed by chip foundries being destroyed when China invades Taiwan? Aschenbrenner is aware that this is somewhat likely to happen this decade, but he says little about what it implies. China will have some awareness of the possibility of an intelligence explosion. That might influence the timing of military action. I fear that Aschenbrenner's anti-CCP attitude will increase the risk of an all-out arms race. Aschenbrenner's approach here practically guarantees the kind of arms race that will lead to hasty decisions about whether an AI is safe. His argument seems to be that such a race is nearly inevitable, so the top priority should be ensuring that the better side wins. That could be a self-fulfilling prophecy. Here's a contrary opinion: What US/China AI race folk sound like to me: There are superintelligent super technologically advanced aliens coming towards earth at .5 C. We don't know anything about their values. The most important thing to do is make sure they land in the US before they land in China. That reaction doesn't seem quite right. It's more like we're facing waves of aliens heading our way, the first waves being not too technologically advanced. But I endorse the prediction that the differences between the US and China are small compared to the other uncertainties that we face. Military Interest The intelligence explosion will be more like running a war than launching a product. It seems clear that within a matter of years, pre-superintelligence militaries would become hopelessly outclassed. ... it seems likely the advantage conferred by superintelligence would be decisive enough even to preemptively take out an adversary's nuclear deterrent. I find it an insane proposition that the US government will let a random SF startup develop superintelligence. Imagine if we had developed atomic bombs by letting Uber just improvise. I've been neglecting scenarios under which one or more militaries will take control of AI development, likely because I've been overly influenced by people on LessWrong who expect a brilliant insight to create an intelligence explosion that happens too fast for governments to react. Aschenbrenner convinced me to expect a somewhat faster intelligence explosion than I previously expected. There's some sense in which that moves me closer to Eliezer's position. But Aschenbrenner and I both believe that the intelligence explosion will be a somewhat predictable result of some long-running trends. So smart people in government are likely realizing now that the military implications deserve careful attention. If the US government is as competent now as it was in the early 1940s, then we'll get something like the Manhattan Project. COVID has created some doubts as to whether people that competent are still in the government. But I suspect the military is more careful than most other parts of government to promote competent people. So I see at least a 50% chance that Aschenbrenner is correct here. I'm not happy with military involvement. But if it's going to happen, it seems better for it to happen now rather than later. A semi-prepared military is likely to make saner decisions than one that waits to prepare until the intelligence explosion. It seems pretty clear: this should not be under the unilateral command of a random CEO. Indeed, in the private-labs-developing-superintelligence world, it's quite plausible individual CEOs would have the power to literally coup the US government. The world seems on track for a risky arms race between the free world and China. But I can imagine a sudden shift to a very different trajectory. All it would take is one fire alarm from an AI that's slightly smarter than humans doing something malicious that causes significant alarm. Manifold takes that possibility somewhat seriously: https://manifold.markets/ScroogeMcDuck/does-an-ai-accident-kill-at-least-1-16699bd37e14 One key mistake by such an AI could be enough to unite the US and China against the common threat of rogue AI. I don't expect AI at the slightly smarter than human stage to be saner and more rational than humans. It feels scary how much will depend on the details of mistakes made by such AIs. How Close will the Arms Race be? I feel more confused about the likely arms race than before. The safety challenges of superintelligence would become extremely difficult to manage if you are in a neck-and-neck arms race. A 2 year vs. a 2 month lead could easily make all the difference. If we have only a 2 month lead, we have no margin at all for safety. Aschenbrenner suggests treating the arms race as inevitable. Yet his analysis doesn't suggest that the US will maintain much of a lead. He expects military-grade security to be implemented too late to keep the most important algorithmic advances out of CCP hands. I've been assuming a US hardware advantage will cause the US to win a race, but he expects that advantage to disappear. Even worse, the scenario that he predicts seems quite likely to push China to attack Taiwan at a key time, cutting off the main US supply of chips. Would that mean China pulls ahead? Or that the US is compelled to bomb China's chip factories? These scenarios seem beyond my ability to analyze. Or if I'm right about China remaining behind in hardware, maybe the attack on Taiwan slows AI progress just as it reaches the intelligence explosion, buying some time at the critical juncture for adequate safety work. Aschenbrenner doesn't appear to have a lot of expertise in this area, but I'm unclear on how to find a better expert. Political Options Some hope for some sort of international treaty on safety. This seems fanciful to me. ... How have those climate treaties gone? The climate treaties haven't mattered, because it turned out to be easier to rely on technological advances, plus the desire to avoid local air pollution. It feels like there was a fair amount of luck involved. A good treaty depends on a stronger consensus on near-term importance. Nuclear weapons treaties might provide a better example. The United States must lead, and use that lead to enforce safety norms on the rest of the world. That's the path we took with nukes, offering assistance on the peaceful uses of nuclear technology in exchange for an international nonproliferation regime (ultimately underwritten by American military power)---and it's the only path that's been shown to work. This bears some resemblance to what's needed. Does the US still have enough of a lead in military power for it to work as it worked after WWII? It seems hard to tell. One of Zvi's responses: If some combination of governments is more concerned with existential risk and alignment, or with peace and cooperation, than Leopold expects, or there is better ability to work out a deal that will stick (38, 39 and 40) then picking up the phone and making a deal becomes a better option. The same goes if the other side remains asleep and doesn't realize the implications. The entire thesis of The Project, or at least of this particular project, depends on the assumption that a deal is not possible except with overwhelming strength. That would not mean any of this is easy. We're not on track for a deal to be feasible, but we should very much keep our eyes open for circumstances under which it would become possible. I was feeling fairly confused, before reading Situation Awareness, about the effects of trying to pause AI development. I feel even more confused now. Aschenbrenner persuaded me that simple versions of a "pause" would only slow capabilities progress by about one third. The trends that he reports for algorithmic efficiency and unhobbling have a bigger impact on capabilities than I expected. The only pause proposals that are likely to effect those trends much are those that would decrease the available compute indefinitely. I want to emphasize that by decrease, I mean leading computing clusters would need to downgrade their chips yearly (monthly?). It would take unusual new evidence for those proposals to acquire the popularity that's currently associated with a "pause" that merely limits AI training runs to 10^25 FLOPs or $100 million. The recent price of NVIDIA stocks says quite clearly that we're not on track for a pause that stops progress in AI capabilities. Concluding Thoughts Aschenbrenner has a more credible model than does Eliezer of an intelligence explosion. Don't forget that all models are wrong, but some are useful. Look at the world through multiple models, and don't get overconfident about your ability to pick the best model. His "optimistic" perspective has increased my gut-level sense of urgency and led me to revise my probability of an existential catastrophe from 12% to 15%, primarily due to the increased likelihood of a closely-fought arms race. He's overconfident, but well over half right. "Situational Awareness" contains more valuable insights than I can summarize in one post. It is important reading for anyone interested in the future of AI development and its global implications. P.S. See Michaël Trazzi's amusing summary of Aschenbrenner's background.
2024-06-28
https://www.lesswrong.com/posts/REzKbk9reKFvgFkmf/how-big-a-deal-are-matmul-free-transformers
REzKbk9reKFvgFkmf
How Big a Deal are MatMul-Free Transformers?
JustisMills
If you’re already familiar with the technical side of LLMs, you can skip the first section. The story so far Modern Large Language Models - your ChatGPTs, your Geminis - are a particular kind of transformer, a deep learning architecture invented about seven years ago. Without getting into the weeds, transformers basically work by turning an input into numbers, and then doing tons and tons of matrix operations on those numbers. Matrix operations, and in particularly matrix multiplication (henceforth MatMul), are computationally expensive. How expensive? Well, graphics cards are unusually good at matrix multiplication, and NVIDIA, the main company making these, was the most valuable company on Earth earlier this month. Over the last few years, spurred on by extreme investment, transformers have gotten larger and stronger. How good transformers are is multidimensional, and is roughly captured by scaling laws: basically, models get better when you give them more (high quality) data, make them bigger, or train them for longer. I’ve written before about the data wall, the hypothesis that we’re running out of new data to train cutting edge AI systems on. But another path to much stronger AI would be if we trained them more efficiently: if you have to do way fewer (or way easier) mathematical operations when training an AI, you can do a lot more training on the same (gigantic) budget. Basically, holding training data constant, if you can train a model twice as efficiently, you can also make it twice as big.[1] Which is a big deal in a world where there may be bottlenecks for other ways to make better AI: if it isn’t the data wall, it may well be a wall of regulation preventing the insane power consumption requirements of a trillion-dollar cluster. Cutting edge labs are in an intense race to make transformative AI, so we don’t know what kinds of efficiency advances they’ve been making for the past few years. But there has been hubbub the last few weeks about a new kind of model, which avoids the need for MatMul. So, what’s the deal? Is the new research a flash in the pan, a small incremental win, or a bold new paradigm we’ll all be citing (and making birthday posts for) in seven years? I’ll make a brief examination of the paper’s claims and why they’re exciting, then give reasons for restraint. What’s the new architecture? The new paper, by Rui-Jie Zhu et al., is Scalable MatMul-free Language Modeling. It came out on June 4th. In the places where a typical transformer would do MatMul, the paper instead does something different and more akin to addition. The technical details are pretty complicated, but the intuition that addition is easier/simpler than multiplication is spot on. How much better does their new method work? Here’s the relevant graph from their paper: The star is the hypothetical point where you’d get equal bang for your buck from their MatMul-free style and the current (public) SOTA[2] - a little under 10^23 floating point operations (FLOPs). To quote them on the significance of this number: Interestingly, the scaling projection for the MatMul-free LM exhibits a steeper descent compared to that of Transformer++. This suggests that the MatMul-free LM is more efficient in leveraging additional compute resources to improve performance. As a result, the scaling curve of the MatMul-free LM is projected to intersect with the scaling curve of Transformer++ at approximately 10^23 FLOPs. This compute scale is roughly equivalent to the training FLOPs required for Llama-3 8B (trained with 15 trillion tokens) and Llama-2 70B (trained with 2 trillion tokens), suggesting that MatMul-free LM not only outperforms in efficiency, but can also outperform in terms of loss when scaled up. Basically, if they’re right about their scaling laws, their proposed architecture would become both more efficient and more effective at current high-end industrial levels of investment, and strongly more efficient in the future. So yes, this is a pretty big deal. It doesn’t seem to be a hoax or strongly overhyped. If I were a top lab, and my own secret sauce wasn’t obviously better than this, I’d want to look into it. What are the limitations? There are several. Broadly: The current architectural paradigm is complicated and expensive to changeThe new approach hasn’t been tested (publicly) with very large model sizesThe new approach hasn’t been tested against the actual state of the art We’ll take it from the top. Transformers architecture is sticky Cutting edge AI is a dance between software and hardware. Some particular software process gets good results. Whatever hardware happens to run that process best is now in demand, spurring investment both in whoever came up with the software process and whoever manufactures the relevant hardware. The hardware manufacturers optimize their hardware even better for the software, and the software developers optimize their software to leverage the new-and-improved hardware even better.[3] For several years now, MatMul has been favored, which means GPUs that are good at MalMul are favored, and those GPUs are optimized to be extra good at - you guessed it - MatMul. Even if this research result is totally correct and a MatMul-free architecture would perform better in the abstract, many different stakeholders would have to get together to make it happen. And it’s not just hardware! The qualified engineers, too - of which there is a rather limited capacity on the entire planet, fueling their very high salaries - have honed their instincts on the current paradigm. There’s a lot of new math to learn, then master, then make absolutely secondhand. Math like this (from the paper): Nothing the most talented machine learning engineers in the world can’t figure out, but there’s a difference between understanding and deep understanding. The actual state of the art sometimes advances by yolo runs, and the powerfully honed instinct behind these runs isn’t developed overnight. So even if this new architecture is a superior alternative, it might take a long time before its benefits outweigh its costs in practice. At least until we start hitting some walls, and architectural shifts are the only way forward. Not tested at scale The authors list this limitation in their conclusion, along with an exhortation for top labs to give their new architecture a try, like so: However, one limitation of our work is that the MatMul-free LM has not been tested on extremely large-scale models (e.g., 100B+ parameters) due to computational constraints. This work serves as a call to action for institutions and organizations that have the resources to build the largest language models to invest in accelerating lightweight models. Fair enough! And indeed, outside of major labs (which keep their lips zipped), you’re not going to find the compute you need to test this stuff at 100B scale. But part of the reason transformer scaling laws are so important is because they have been proven to work over quite a large range. With a new architecture, you can’t totally assume that new, different scaling laws will be equally ironclad. Nor would it be that easy to test, even for top labs. Training huge models is expensive, and doing it right would likely require new hardware and lots of retraining for many of the most in-demand employees in the world. It’s not easy to see if or when that’ll be a priority.[4] Not tested vs. cutting edge This objection feels a little mean, but while the MatMul-free paper does test its new architecture against a strong open source implementation, it doesn’t benchmark itself against the cutting edge in absolute terms, because the real cutting edge is behind lock and key at top labs. Of course, you have to measure against what you have, and it’s not like they are pumping up their numbers by comparing to obviously outdated architecture. But they aren’t proving that their new methods do well against the current best AI models. Which, again, is a reason for some doubt. So… sell NVIDIA? This is so not investment advice, but I mean, I haven’t. I do think it’s an exciting sign (or a scary one) that we’re seeing research, in public, that suggests fundamental improvements to transformers architecture. Whether they truly are improvements, and whether those improvements can overcome years of inertia, remains to be seen. In the meantime, provisionally, it’s time for new Anki cards. ^ Okay, not literally, since computational complexity doesn’t scale linearly with model size. But if you thought of that, why didn’t you skip the introduction? ^ They represent the current state of the art with the architecture of Llama 2, a not-exactly-cutting-edge but pretty good (and relatively open source) model. ^ Notably, one of the things Zhu et al. do in the paper is build custom hardware to better support their new architecture. Doing this as a proof of concept is really exciting, but very different from changing industrial processes at scale! ^ Or maybe it was already a priority two years ago, in secret, and all the top players are already doing it! Simply no way to know, though we might have seen reverberations in NVIDIA chip design, if that were true.
2024-06-27
https://www.lesswrong.com/posts/WZ2Xug4j3rz2Pe3D2/secondary-forces-of-debt
WZ2Xug4j3rz2Pe3D2
Secondary forces of debt
KatjaGrace
A general thing I hadn’t noticed about debts until lately: Whenever Bob owes Alice, then Alice has reason to look after Bob, to the extent that increases the chance he satisfies the debt. Yet at the same time, Bob has an incentive for Alice to disappear, insofar as it would relieve him. These might be tiny incentives, and not overwhelm for instance Bob’s many reasons for not wanting Alice to disappear. But the bigger the owing, the more relevant the incentives. When big enough, the former comes up as entities being “too big to fail”, and potentially rescued from destruction by those who would like them to repay or provide something expected of them in future. But the opposite must exist also: too big to succeed—where the abundance owed to you is so off-putting to provide that those responsible for it would rather disempower you. And if both kinds of incentive are around in whisps whenever there is a debt, surely they often get big enough to matter, even before they become the main game. For instance, if everyone around owes you a bit of money, I doubt anyone will murder you over it. But I wouldn’t be surprised if it motivated a bit more political disempowerment for you on the margin. There is a lot of owing that doesn’t arise from formal debt, where these things also apply. If we both agree that I—as your friend—am obliged to help you get to the airport, you may hope that I have energy and fuel and am in a good mood. Whereas I may (regretfully) be relieved when your flight is canceled. Money is an IOU from society for some stuff later, so having money is another kind of being owed. Perhaps this is part of the common resentment of wealth. I tentatively take this as reason to avoid debt in all its forms more: it’s not clear that the incentives of alliance in one direction make up for the trouble of the incentives for enmity in the other. And especially so when they are considered together—if you are going to become more aligned with someone, better it be someone who is not simultaneously becoming misaligned with you. Even if such incentives never change your behavior, every person you are obligated to help for an hour on their project is a person for whom you might feel a dash of relief if their project falls apart. And that is not fun to have sitting around in relationships. (Inpsired by reading The Debtor’s Revolt by Ben Hoffman lately, which may explicitly say this, but it’s hard to be sure because I didn’t follow it very well. Also perhaps inspired by a recent murder mystery spree, in which my intuitions have absorbed the heuristic that having something owed to you is a solid way to get murdered.)
2024-06-27
https://www.lesswrong.com/posts/Qk6qt5RDDDxnxCoBp/mentorship-in-agi-safety-applications-for-mentorship-are
Qk6qt5RDDDxnxCoBp
Mentorship in AGI Safety: Applications for mentorship are open!
Just Learning
One month ago we called for mentors for MAGIS mentorship program. Today, we are happy to announce that we found enough mentors and invite potential mentees to apply! Please check our list of mentors here, choose two (first choice and second choice), and then fill out this application form. The application deadline is July 12.
2024-06-28
https://www.lesswrong.com/posts/KoCS6LqJjKhCqDqnX/distillation-of-do-language-models-plan-for-future-tokens
KoCS6LqJjKhCqDqnX
Distillation of 'Do language models plan for future tokens'
TheManxLoiner
Link to arxiv preprint: Do language models plan for future tokens, by Wilson Wu, John X Morris and Lionel Levine. TLDR There are two reasons why the computations for the current token are helpful for future tokens’ computation. First, which they call ‘pre-caching’, the network is intentionally helping future tokens’ computations by sacrificing performance for the current token. Second, which they call ‘breadcrumbs’, is that the helpfulness is incidental: the computations that are good for the current token are also good for future tokens.By zeroing appropriate gradients, they create a training strategy that prevents pre-caching. They call this ‘myopic descent’.On a synthetic dataset designed for pre-caching to be useful, they find strong evidence that myopic descent reduces pre-caching.On integer multiplication, myopic descent reduces performance.On a language dataset, they train a GPT-2 transformer using both vanilla training and myopic descent and find that performance is similar, suggesting that breadcrumbs are more prominent.Using larger models from the Pythia suite, they find the performance gap between vanilla and myopic training increases with model size. As of 27 June 2024, this is ongoing work. In particular, the integer multiplication and Pythia experiments are not yet described in the current arXiv article. The authors shared a draft containing these latest results. Also, this is my first distillation post. Any feedback - both what you like and what can be improved - will be much appreciated. Pre-requisites For this distillation, I assume basic familiarity with transformer architecture and gradient descent. You do not need any AI safety or mech interp experience. Pre-caching and breadcrumbs With the help of the diagram below, I introduce notation. x1,xi,xj (where i<j) represent input tokens, y1,yi,yjrepresent output logits, each box corresponds to one position/token of the input sequence, and the x’s inside the boxes represent the hidden states. We have a causal mask, so the hidden states for xi are useful for the hidden states for xj but not vice versa. The question this paper asks whether this usefulness is intentional or incidental? They introduce terminology for these two possibilities: Pre-caching. Some computations done for xi are directly intended to help in future tokens, even if those computations do not help calculate yi.Breadcrumbs. If computations in xi are helpful for future tokens, it is incidental. It just so happens that the computations necessary to calculate yi are also useful for predicting yj. Myopic descent To determine how much pre-caching and breadcrumbs there is, they introduce a training scheme in which pre-caching is impossible, by zeroing the parts of the gradients that incentivize pre-caching. (By gradients here I mean the gradient of the loss w.r.t. the parameters theta.)  They call this ‘myopic descent’, because it is short-sighted. The main idea is to break up the gradient into a sum of sub-gradients, grad[i,j], where grad[i,j] tells you how much the loss due to yj changes if you make a small change to theta, BUT, the change to theta is only done for xi’s hidden states, not any of the other tokens. Red indicates those items that change as a result of changing theta in only the i-th position. More explicitly, imagine doing a forward pass in which we replace theta with θ+δθ only for the hidden states of the i-th position: we leave θ unchanged for all other sequence positions. Because of the causal architecture, only the hidden states and outputs from the i-th position onwards will be impacted. grad[i,j] is the change in the loss from yj by doing this small change to theta in position i. grad[j,i] is always zero, because hidden states for xj have no impact on yi.The diagonal gradients grad[i,i] teach the transformer to better predict yi from xi, i.e. directly predicting the next token.The off-diagonal gradients grad[i,j] teach the transformer to better predict yj from xi, i.e. helping predict future tokens. It is these gradients that result in pre-caching so it is these gradients that are zeroed! Synthetic dataset experiment They create a synthetic numerical dataset in which pre-caching is obviously useful. For simplicity, I present a special case of the data generating process: The input tokens x1,...,xN are i.i.d standard normal, so xn∼N(0,1).The output tokens y1,..,yN are defined as yn=xn+∑10i=1sin(xn−i), so the sum of the current x with the sum of the sine of the previous 10 x’s. The idea is that calculating sin(xi) is not useful for calculating yi, but it is useful for the next 10 y’s. Hence, if we saw the transformer calculating sin(xi) in the i-th position, that means the transformer is pre-caching. They train two transformers (with GPT2 architecture) on 30,000,000 sequences created from this process, one with vanilla training and the other with myopic descent. By doing simple investigations into the neurons (calculating correlations and using linear probes), they find strong evidence that the vanilla transformer was doing pre-caching and the myopic transformer was not. See Figures 2, 3 and 4 in the paper. Integer multiplication experiment They train two transformers (with GPT2 architecture) to do integer multiplication, one vanilla and one myopic. They use several tricks from Shen et al. (2023) to improve performance: Characters are delimited by spaces, so each digit becomes one token.All numbers are written in the reverse of the standard order, so the least significant digits come first.All inputs are zero-padded to the same length and the output is padded to twice that length. Hence, an example looks like: 3 7 0 0 * 5 4 0 0 = 5 8 2 3 0 0 0 0 Both the vanilla and myopic model are trained for one epoch on 10,000,000 examples, with at most 8 digits for each multiplicands. We see from the accuracy scores below that vanilla training performs better. The first table is for vanilla training. The second for myopic descent. The row and column number indicate the number of digits in the two numbers being multiplied. The authors hypothesize that the vanilla transformer can make use of filler tokens, as in Pfau et al (2024) where it was found that adding ellipsis ‘...’ improves performance. To test this hypothesis, they train vanilla and myopic transformers on each of two different datasets: both numbers being multiplied have at most 5 digits and are zero-padded to 5 digitsboth numbers being multiplied have at most 5 digits but are zero-padded to 10 digits. Looking at the accuracy scores below, we see that the vanilla transformer benefits from the padding whereas the myopic transformer suffers. Quoting the authors: We hypothesize that the increased input length makes it more difficult for the attention mechanism [for the myopic model] to correctly attend to the relevant tokens. GPT2 language experiment I quote the paper (with redactions): All models use the 124M-parameter GPT-2 architecture. We train all models (vanilla and myopic) from random initialization for one epoch on 4.6M sequences from the MS MARCO dataset (Nguyen et al., 2016), truncated to length 64. As a baseline, we also train a “transformer bigram” model, a model with an identical architecture but all off-diagonal key/value states zeroed out. The cross entropy on a validation set for these three models is: Vanilla 3.28Myopic 3.40Transformer bigram 5.33 We see that the vanilla model does have a better score than the myopic model, but not large compared to naive bigram baseline. This suggests that pre-caching does provide some benefit but breadcrumbs are doing most of the work. We get a more refined view of what is happening when we compute the loss on a per-position basis. ‘Myopia gap’ is the difference in loss between the myopic and vanilla models, so the green curve on the right is just the difference between the orange and blue curves on the left. Again, just quoting the authors: We see that the myopic model outperforms the vanilla model at the beginning of the sequence, but falls behind as the length of the past increases. This implies that a lack of pre-caching may compound, and model performance degrades later in the sequence as the model is unable to refer to prior pre-cached information. Pythia language experiments What happens when we scale the experiments? Details of the training: We train both vanilla and myopic transformers from the Pythia LLM suite (Biderman et al., 2023), ranging in size from 14M to 2.8B parameters, on one epoch of 10M examples subsampled from the Pile dataset (Gao et al., 2020). (Note that this is the same subsampled dataset for every training model.) Note that, unlike in the GPT2-small experiments which start from random initialization, we start all training for Pythia models from the pre-trained checkpoints provided by Biderman et al. (2023). The 10M-example dataset we use is not sufficiently large to train from random initialization. The results show that the gap in performance increases as you increase the model size. First, this is seen in the cross entropy loss: Similar patterns are seen in the performance of the models on various benchmarks. Here are two examples: A question from a reviewer One of the reviewers of this post, Julian, asked whether the myopic model is disadvantaged by being trained with the same number of epochs / samples as the vanilla model. An author answered with: It's not clear how to quantify this [potential disadvantage], so training the vanilla and myopic models for the same number of steps is the fairest we could come up with. In any case, this disparity should (intuitively) only disadvantage the myopic model. Thus, it only strengthens our position that in GPT-2 there is little pre-caching occurring. I asked a follow-up about what would happen if the the myopic model was trained more, to  see whether the gaps close. Their response: We ran all of our experiments in an "infinite data" setting (single epoch over a large train dataset) in order to disentangle the effects we're investigating from phenomena related to small datasets, overfitting, grokking, etc. The number of training steps we used for the toy example (30,000,000) was enough that the vanilla model flatlined at zero for most of the training. Meanwhile, the myopic model was flat at the initial loss for the entire training run, so it seems unlikely that even more training would cause it to suddenly drop. (Though, of course, this can't be ruled out for certain.) Why read the pre-print To see the figures for the synthetic dataset experiment.To see how myopic descent is defined mathematically. Useful exercise, especially if you are not used to reading precise mathematical definitions.For full details on the experiments.For details on another type of training. Quote: "To estimate the local myopia bonus of the vanilla model, we train another model from random initialization with the same architecture, but with past hidden states provided by the vanilla model".For more academic or mathematical readers, they also prove some theorems around the convergence of myopic descent. Acknowledgements Thanks to Nicky Pochinkov, Julian Schulz and one of the authors Wilson Wu for reviewing drafts of this post. Diagrams created on bitpaper.io
2024-06-27
https://www.lesswrong.com/posts/AZnF5LNZfeGZRGvid/how-birds-sense-magnetic-fields
AZnF5LNZfeGZRGvid
how birds sense magnetic fields
bhauth
introduction It is known that many birds are able to sense the direction of Earth's magnetic field. Here's a wikipedia page on that general phenomenon. There have been 2 main theories of how that works. One theory is that birds have magnets in their beak that act like a compass. We know this is the correct theory because: Small magnetite crystals have been found in bird beaks. Anaesthesia of bird beaks seems to affect their magnetic sense, sometimes. The other theory is that birds have some sensing mechanism in their eyes that uses magneto-optical effects. We know this is the correct theory because: Birds can't sense magnetic field direction in red light. Covering the right eye of birds prevents them from sensing field direction. We also know those theories probably aren't both correct because: Most animals don't have a magnetic field sense. It's implausible that birds developed two separate and redundant systems for sensing magnetic fields when other animals didn't develop one. organic magneto-optics It's possible for magnetic fields to affect the optical properties of molecules; here's an example, a fluorescent protein strongly affected by a small magnet. However, known examples of this require much stronger (~1000x) fields than the Earth's magnetic field. Let's suppose birds sense magnetic fields using some proteins in their eyes that directly interact with fields. The energy density of a magnetic field is proportional to the field strength^2. The energy of interaction of a magnet with a field is proportional to the product of the field strengths. The earth has a field of 25 to 65 μT. If we consider the energy of a strongly magnetic protein interacting with the Earth's magnetic field, that's not enough energy to directly cause a cellular signalling effect. So, magnetic fields must act to control some energy-transferring process, and the only logical possibilities are light absorption/emission and transfer of excited states between molecules. Birds can sense the direction of magnetic fields, more so than field strength, so the effect of magnetic fields must be relative to the orientation of something. Molecules are randomly oriented, but absorption/emission of a photon is relative to molecule orientation, so magnetic fields can create differences in absorption/emission of light at different angles. (That's the basis of a spectroscopy technique I previously proposed.) For excited states of molecules to interact with a magnetic field, they must have a magnetic field. The excited states with the strongest fields would logically be triplet states, where the spin of an electron is reversed, creating a net spin difference of 2. (The magnetism of iron comes from the spin of its one unpaired electron, so triplet states are more magnetic than iron atoms.) Molecules absorb/emit photons only of specific wavelengths: as energy and momentum are conserved, molecules must have a vibrational mode that matches the photon. Magnetic fields can shift what wavelengths are absorbed. Considering the energy density of the Earth's magnetic field and the magnetic field of triplet states, shifting the affected wavelengths of visible light by 1nm seems feasible. A blue sky doesn't seem to have sharp enough spectral lines. Can one be made artificially? It's not normally possible to absorb a wide spectrum of light and emit a narrow spectral line: thermodynamically, a more narrow spectrum has a higher "temperature". The spectral width of emission is typically about the same as the width of absorption. (This is why early laser types are so inefficient: they only absorb a small fraction of the light used to pump them. Systems using diode lasers are more efficient.) Thus, we need to absorb only a narrow spectral line. Fluorescence can be delayed; see "phosphorescence". A fluorescent molecule with a narrow spectrum would only absorb/emit a small fraction of light, but supposing the emission is slightly delayed, birds could blink their eyes and detect emitted light against a black background. The emitted light could theoretically be detected by a separate magnetically-sensitive molecule, but that would require evolving 2 molecules/proteins tuned to the exact same wavelength. A more likely possibility is that the fluorescent emission rate itself is affected by magnetic fields. A common mechanism for delayed fluorescence involves triplet states, which create an interaction with magnetic fields. Supposing the rate of light emission is affected by magnetic fields, it would be faster in certain directions. How could small shifts in the direction of emitted light be detected? Supposing there's a pattern of magnetically-sensitive fluorescent pigment on the surface of the eye, and a separate pattern of another fluorescent pigment, the relative position of those patterns could be detected. This would be a complex thing to detect, which would explain the somewhat long acclimation times birds have to changes in magnetic field strength. What would small dots on the surface of the eye look like? You've probably seen the answer for yourself: when looking at a clear blue sky, people can often see white blood cells in blood vessels in the eye. Different fluorescent pigments having changes in light emission with direction would then be similar to such dots having some chromatic aberration that varies with eye direction. Except, instead of dots, there might be more complex patterns, and they would be visible when blinking. That does seem like something birds could interpret, and something that could require a day of adaptation when field strength changes. the problem Instead of guessing, we can consider experiments on birds in artificial light. As this paper notes: Tests under near-monochromatic lights revealed that orientation is possible under light from ultraviolet to about 565 nm green; under yellow and red light, birds are disoriented. The above mechanism cannot work with a wide range of nearly-monochromatic light. Now, we've demonstrated how well we understand complicated optical effects, so let's try to actually find the answer. magnetite As I mentioned, a tiny amount of magnetite has been found in bird beaks. Researchers have also found that magnetic field changes affect nerves in the beak. Birds have some behavioral responses to magnetic fields in the dark; they're just different and less directionally oriented than the response when they can see through their right eye. Some bacteria have tiny magnetite crystals in them that cause them to orient along the Earth's magnetic field. Magnetite is not usually a very strong magnet, but tiny crystals of it only have a single domain, making them about as strong as Nd magnets. (See this post for an introduction to magnets.) As such, we know that a freely-floating cell can act as a compass. If we consider a cluster of rod-like cells with narrow magnetite crystals, freely floating, they would collectively produce more force, enough to be sensed by mechanoreceptors. compass When humans use a compass, they don't feel the movement of the compass needle, they see it. Perhaps birds do the same. Consider a cluster of cells containing magnetite crystals. Now, put it on the surface of the eye, and add sheets of different pigments along different planes. That would cause its apparent color to change with its orientation. As mentioned above, even single cells on the surface of the eye can be seen. Birds would then perceive magnetic field direction as changes in the color of similar dots. The relaxation time and perhaps orientation accuracy would depend on field strength, which could explain the adaptation time to field strength changes. It's also possible that the clusters are weakly connected to surrounding cells, such that their position would depend on field strength as well as direction; logically, some sort of anchor to keep the cell clusters near a consistent location would be needed. two systems The current consensus (I think) among researchers studying bird magnetoreception is that birds have 2 separate systems for sensing magnetic fields. The same core structure (a cluster of cells with single-domain magnetite crystals, mostly freely-floating) would be used in both above systems. That makes evolution of 2 partly-redundant systems much easier. Each system would have some advantages. Mechanoreceptors would only detect the force applied by each magnetic cell cluster, not the exact direction. Detecting direction optically would be more accurate, but requires light, and would mainly detect field direction, not strength.
2024-06-27
https://www.lesswrong.com/posts/rC3hhZsx2KogoPLqh/ai-70-a-beautiful-sonnet
rC3hhZsx2KogoPLqh
AI #70: A Beautiful Sonnet
Zvi
They said it couldn’t be done. No, not Claude Sonnet 3.5 becoming the clear best model. No, not the Claude-Sonnet-empowered automatic meme generators. Those were whipped together in five minutes. They said I would never get quiet time and catch up. Well, I showed them! That’s right. Yes, there is a new best model, but otherwise it was a quiet week. I got a chance to incorporate the remaining biggest backlog topics. The RAND report is covered under Thirty Eight Ways to Steal Your Model Weights. Last month’s conference in Seoul is covered in You’ve Got Seoul. I got to publish my thoughts on OpenAI’s Model Spec last Friday. Table of Contents Be sure to read about Claude 3.5 Sonnet here. That is by far the biggest story. Introduction. Table of Contents. Language Models Offer Mundane Utility. I am increasingly persuaded. Language Models Don’t Offer Mundane Utility. EU’s DMA versus the AiPhone. Clauding Along. More people, mostly impressed. Fun With Image Generation. They are coming for our memes. Then Hollywood. Copyright Confrontation. The RIAA does the most RIAA thing. Deepfaketown and Botpocalypse Soon. Character.ai addiction. Am I out of touch? They Took Our Jobs. More arguments that the issues lie in the future. The Art of the Jailbreak. We need to work together as a team. Get Involved. AISI, Apollo, Astra, Accra, BlueDot, Cybersecurity and DOE. Introducing. Forecasting, OpenAI Mac App, Otto, Dot, Butterflies, Decagon. In Other AI News. OpenAI equity takes steps forward. You can sell it. Quiet Speculations. A distinct lack of mojo. You’ve Got Seoul. Delayed coverage of the Seoul summit from last month. Thirty Eight Ways to Steal Your Model Weights. Right now they would all work. The Quest for Sane Regulations. Steelmanning restraint. SB 1047. In Brief. The Week in Audio. Dwarkesh interviews Tony Blair, and many more. Rhetorical Innovation. A demolition, and also a disputed correction. People Are Worried About AI Killing Everyone. Don’t give up. Invest wisely. Other People Are Not As Worried About AI Killing Everyone. What even is ASI? The Lighter Side. Eventually the AI will learn. Language Models Offer Mundane Utility Training only on (x,y) pairs, define the function f(x), compose and invert it without in-context examples or chain of thought. AI Dungeon will let you be the DM and take the role of the party, if you prefer. Lindy ‘went rogue’ and closed a customer on its own. They seem cool with it? Persuasive capability of the model is proportional to the log of the model size, says paper. Author Kobi Hackenburg paints this as reassuring, but the baseline is that everything scales with the log of the model size. He says this is mostly based on ‘task completion’ and staying on topic improving, and current frontier models are already near perfect at that, so he is skeptical we will see further improvement. I am not. I do believe the result that none of the models was ‘more persuasive than human baseline’ in the test, but that is based on uncustomized messages on generic political topics. Of course we should not expect above human performance there for current models. 75% of knowledge workers are using AI, but 78% of the 75% are not telling the boss. Build a team of AI employees to write the first half of your Shopify CEO speech from within a virtual office, then spend the second half of the speech explaining how you built the team. It is so weird to think ‘the best way to get results from AI employees I can come up with is to make them virtually thirsty so they will have spontaneous water cooler conversations.’ That is the definition of scratching the (virtual) surface. Do a bunch of agent-based analysis off a single prompt. This kind of demo hides the real (human) work to get it done, but that will decline over time. Language Models Don’t Offer Mundane Utility Apple Intelligence rollout will be at least delayed in the European Union, with Apple citing the Digital Markets Act (DMA) compromising user privacy and data security. I look forward to the EU now going after them for failing to deploy. Note that DMA is deeply stupid EU tech regulation unrelated to AI, the EU AI Act is not mentioned as an issue, and nothing about Apple Intelligence would be subject to regulation by SB 1047 or any other major regulatory proposal in the USA. New paper finds LLMs engage in difficult-to-predict escalatory behavior patterns in political simulations, in rare cases leading to deployment of nuclear weapons. Well, yes, of course. The LLMs are trained as CDT (Causal Decision Theory) agents in various ways and asked to predict text and imitate human behavior, and it is very obviously correct to engage in hard to predict escalatory behavior with nonzero risk of worst case scenarios by all of those metrics. Andrej Karpathy requests that LLMs have a feature to offer ‘proof’ in the form of their references, which right now is only available when you have web access. Saagar Jha is not impressed by Apple’s claims of Private Cloud Compute, claiming it is a lot of words for a Trusted Platform Module, but that it is not all that secure. Your engineers might copy your GPT wrapper product. AI detection software in education continues to have a lot of false positives. Serious advice to all students and other writers, never delete your drafts and history. That would be smart anyway, as AI could plausibly soon be helping you learn a better process by analyzing them. For now, they are vital to proving you actually wrote what you wrote. Sometimes I wonder if these false positives are good, actually? If the AI thinks an AI wrote your paper, and instead you wrote your paper, what does that say about your work? What grade do you deserve? Clauding Along Takes on Claude 3.5 continue to come in. While I consider Claude 3.5 to be clearly best for most purposes right now, that does not mean Anthropic now has an overall longer term lead on OpenAI. OpenAI is at the end of its model cycle. Of course, they could fail to deliver the goods, but chances are they will retake the visible lead with GPT-5, and are still ‘ahead’ overall, although their lead is likely not what it once was. Heraklines: the larger point about OpenAI > anthropic is correct, this lead right now is illusory. The common man cares not about vibe check perf tho, all that matters is how much better at grunt work like coding is it? 3.5 smashes, not even close. usefulness =! smortness. 3.5 is a model of the people. I still default to 4o for anything math related, but 3.5 just grinds better. A glimpse of what a future without grunt work could look like note: vibe checks are to be taken with a grain of salt, like benchies. i’ve seen too much overcorrection based on both in the past It is always weird to see what people think about ‘the common man.’ The common man does not know Claude exists, and barely knows about ChatGPT. This comment was in response to Teortaxes: Teortaxes: Sorry to be a killjoy but: Anthropic hopes to hyperstition AGI lead, their people are deluding themselves, and their models are like “talented” middle-class American kids – NOT HALF AS SMART AS THEY’RE TRYING TO LOOK LIKE OpenAI will wreck them on instruction following… again. Incidentally the “other model’s” MMLU is 79 …I wanted to dunk on Flash being dumb but it’s also 0-shotting this problem. Anthropic is simply not very good in instruction-tuning. Folks who say they’re switching their automated pipelines to Sonnet because “smart” are being silly. Lots of crap like this. Let me clarify What I’m NOT saying: – 3.5-Sonnet is dumb[er than 4o/4t/DSC]; – spelling tasks are good tests for LLMs What I DID SAY: – 3.5-Sonnet is deceptively pretentious; – Anthropic’s instruction tuning is wonky You might think I’m just obsessively nitpicking I’m not, I think this wonkiness in reasoning about trivial instructions indicates a broader bad trend at Anthropic One can say they’re creating AI takeover risks by encouraging this I-am-a-person bullshitting. So there’s AI takeover risk, then? And it is being created now, from alignment failures being observed now? Huh. I do see how one could worry about what Teortaxes worries about here. But I see it as indicating rather than creating a problem. The true problem does not go away if you force the existing model to stop expressing it. If most people are reporting that plugging in Sonnet 3.5 gives them much better performance? I am inclined to believe them. Nor do I think instruction handling issues are that big a deal here, but I will keep an eye out for other complaints. Danielle Fong reassembles the ‘invention team’ without any tricks, is impressed. Matt Parlmer reports Sonnet 3.5 is the first LLM to reliably pass his vision test. Tyler Cowen is impressed by an answer on economics. I was not as impressed here as Tyler, as it feels like Claude is unfocused and flooding the zone a bit, and a straight answer was possible but missing as was one key consideration, but yeah, overall very good. To me the key concept here is that the net cost of inefficient wage levels is likely lower than expected, so you would be more inclined to allow wages to remain sticky. Some speculation of how artifacts work under the hood. Some fun attempts to get around the face blindness instructions. In these cases Claude gets it right but how reliable or wide ranging would this hack be? Not that I am especially worried about the model being not face blind, especially as it applies to major public figures. A LessWrong commenter notes it identified my writing from a short passage. Cuddly Salmon: effectively prompting for claude 3.5 artifacts is such an incredible edge right now. Minh Nhat Nguyen: I don’t think it’s actually made a single error while I’ve been using it to write out+iterate+merge thousands of lines of code. Whenever the code doesn’t work, it’s usually me being too vague with specs. Cuddly Salmon: Cutting thru all of my problem code like it’s nothing, this AI is an absolute unit. Incredibly creative, too. Fun with Image Generation Claude makes it easy to create automatic meme generators. Here’s what the original form, the Wojack, from Fabian Stelzer. Good fun was had by all, and truths were spoken. Here’s one for Virgin vs. Chad. Fabian: another meme maker I made on glif dot app fully automated Virgin vs Chad memes on any topic, just prompt it Claude 3.5 is just sublime at these and the workflow is super simple to build on glif.. Here’s one begging you to stop doing X, which is often wise. The original took all of five minutes to create. It often seems like that is where our society is at. We can do things in five minutes, or we can take forever. Choose. Andrew Chen says Hollywood is being slow to adapt AI for a variety of reasons, starting with being slow to adapt to everything in general, but also legal concerns, the difficulty of finding good engineers and the pushback from creatives. His call for creatives to think about themselves like software engineers, who only benefited from advances in tech, does not seem like something to say to creatives. It needs to be appreciated in all such discussions the extent to which almost all creatives, and also most consumers and fans, absolutely despise AI in this context. He also does not appreciate the extent to which the technology is not ready. All this talk of innovation and new forms and six second dance videos illustrates that it will be a bit before AI is all that visibly or centrally useful for producing great work. They should use it the same ways everyone should use it. Yes, it helps you code and implement things, it helps you learn and so on. Do all that. But directly generating a ton of content on its own as opposed to helping a human write? Not well, not yet. His talk of the ‘$1000 blockbuster movie’ forgets that such a movie would suck, and also cost vastly more than that if you count the labor of the writers and coders. Toys ‘R Us releases AI (Sora) generated ad. It is executed well, yet I expect this to backfire. It is about how the consumer reacts. Copyright Confrontation It is music’s turn. The RIAA and three major record labels are doing RIAA things, looking for damages of $150k per song that was ‘copied.’ Ed Newton-Rex: The 3 major record labels are suing AI music companies Suno and Udio. Here are the two lawsuits in full. – They accuse Suno & Udio of “willful copyright infringement on an almost unimaginable scale” – They provide evidence that both companies trained on their music, including outputs that closely resemble their recordings (ABBA, Michael Jackson, Green Day, James Brown, & many more) – They outline why this is not fair use – They say this “wholesale theft of… copyrighted recordings threatens the entire music ecosystem and the numerous people it employs” – They include unknown co-defendants who assisted in copying/scraping – They demand a jury trial If you do one thing today, read the full complaints (Suno, Udio). Kristin Robinson (Billboard): The complaints against the two companies also make the case that copyrighted material was used to train these models. Some of the circumstantial evidence cited in the lawsuits include generated songs by Suno and Udio that sound just like the voices of Bruce Springsteen, Lin-Manuel Miranda, Michael Jackson and ABBA; outputs that parrot the producer tags of Cash Money AP and Jason Derulo; and outputs that sound nearly identical to Mariah Carey’s “All I Want For Christmas Is You,” The Beach Boys’ “I Get Around,” ABBA’s “Dancing Queen,” The Temptations’ “My Girl,” Green Day’s “American Idiot,” and more. … RIAA Chief Legal Officer Ken Doroshow adds, “These are straightforward cases of copyright infringement involving unlicensed copying of sound recordings on a massive scale. Suno and Udio are attempting to hide the full scope of their infringement rather than putting their services on a sound and lawful footing. These lawsuits are necessary to reinforce the most basic rules of the road for the responsible, ethical, and lawful development of generative AI systems and to bring Suno’s and Udio’s blatant infringement to an end.” Did Suno and Udio do the crime? Oh, hell yes. They very much went with the ‘we are doing it and daring you to sue us’ strategy. The question is, are they allowed to do it, or not? We are about to find out. This is good. We should have that fight and find out what current law says. Early indications are mixed. If it turns out current law says you can train on any song you want, and produce soundalike versions on demand, without compensation? My strong prediction is that Congress would change the law very quickly. In other copyright news: Startup ‘Created by Humans’ is launching to help book authors license their work to AI companies. Al Michaels agrees to let an AI version of his voice be used for Olympic coverage. The people responding are predictably not taking kindly to this. I am also not a fan. What made Al Michaels great is not the part the AI will be copying. The evidence is a little thin, but what a great title, chef’s kiss by Wired: Perplexity Plagiarized Our Story About How Perplexity Is a Bullshit Machine. Perplexity did not do one of their previously reported ‘post a version of the full article to our own website’ specials. What they did do was provide a summary upon request, which included accessing the article and reproducing this sentence: “Instead, it invented a story about a young girl named Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods.” That sentence was obviously not a coincidence, but as Wired notes it is not fully clear this crosses any red lines, although not having quote marks was at best a very bad look. I doubt they will be able to make anything stick unless they find worse. Deepfaketown and Botpocalypse Soon To the extent there is already an ongoing Botpocalypse it is likely at Character.ai. Eliezer Yudkowsky: Grim if true (for reasons basically unrelated to the totally separate track where later ASI later kills everyone later) Deedy: Most people don’t realize how many young people are extremely addicted to CharacterAI. Users go crazy in the Reddit when servers go down. They get 250M+ visits/mo and ~20M monthly users, largely in the US. Most impressively, they see ~2B queries a day, 20% of Google Search! Another comparison is WhatsApp. They do 100B+ messages a day, so Character is ~4% of WhatsApp! (1 qps = 2 WhatsApp messages) He also links to the associated subreddit. When I look there, I continue to not see the appeal at current tech levels. Ben Landau-Taylor: To be clear, kids spending hours talking to these robots feels weird as hell to me, too. It’s just, this is *obviously* what skinner.jpg feels like from the inside. I do my best not to kink shame. This is no exception. My objection is not to the scenario being role played. It is purely that the AI is not yet… good at it? The story of Bentham Tools and their AI bot doom loop. Indian farmers getting their news from AI anchors. For now it seems the anchors are performers and don’t write their own copy. Another one searches for Facebook AI slop for a few minutes, floods their feed. Is doing this intentionally the solution for those addicted to Facebook? Allison Schrager, author of ‘An Economist Walks Into a Brothel,’ sees AI bots as displacing some of the world’s oldest profession by producing simulated intimacy, which she says is what most sex work is ultimately about. Her worries are that this will reduce drive to seek out relationships and destabilize existing ones, similar to the concerns of many others, but notes that like prostitutes this could work both ways. Central here is the idea that the ‘girlfriend experience’ is the highest end product, someone who will be the perfect companion always there for you, that even a few years ago cost $1,000 an hour even where it was fully legal because of how mentally taxing it is to be consistently present for another person. Whereas AI could do that a lot cheaper. As usual, this is a form of ‘AI is what it is today and won’t get any better’ speculation. Ethan Mollick notes that AI has compromised traditional approaches to security. Spear phishing got very easy, text-to-speech is almost flawless and so on. Despite this, there has been remarkably little disruption. Few are using this capability. Not yet. We are fortunate that time has been given. But until the time is almost up, it will be wasted. They Took Our Jobs Michael Strain makes the case for AI optimism on economics and jobs. It’s a noble effort, so I’m going to take the bait and offer one more attempt to explain the problem. This seems to be a very patient, well reasoned reiteration of all the standard economic arguments about how technology always creates new jobs to replace the ones it automates away, and how yes you might have a robot or chatbot do X but then the human will need to do Y. As I’ve noted before, I agree that we should be short term jobs optimists, but there could come a point at which the robot or chatbot also does Y and also new thing Z. But that is because, like most people making such arguments, Michael Strain does not feel the AGI. He thinks AI is a tool like any other, and will always remain so, and then writes at length about why tools don’t create structural unemployment. True, they don’t, but this is completely missing the point. It is telling that while he mentions Eliezer Yudkowsky and existential risk in his opening paragraph, he then spends all his time talking about economics and jobs without noticing the ways AI is different, and with zero mention of existential risk, and then closes like this: Michael Strain: The year 2023 will be remembered as a turning point in history. The previous year, humans and machines could not converse using natural language. But in 2023, they could. Many greeted this news with wonder and optimism; others responded with cynicism and fear. The latter argue that AI poses a profound risk to society, and even the future of humanity. The public is hearing these concerns: A YouGov poll from November 2023 found that 43% of Americans were very or somewhat concerned about “the possibility that AI will cause the end of the human race on Earth.” This view ignores the astonishing advances in human welfare that technological progress has delivered. For instance, over the past 12 decades, child mortality has plummeted thanks in large part to advances in drugs, therapies, and medical treatment, combined with economic and productivity gains. Generative AI is already being used to develop new drugs to treat various health conditions. Other advances in the technology will mitigate the threat of a future pandemic. AI is helping scientists better understand volcanic activity — the source of most previous mass-extinction events — and to detect and eliminate the threat of an asteroid hitting the earth. AI appears more likely to save humanity than to wipe it out. Like all technological revolutions, the AI revolution will be disruptive. But it will ultimately lead to a better world. What does one have to do with the other? That is very similar to saying: Strawman Climate Skeptic: This view ignores the astonishing advances in human welfare that burning fossil fuels has delivered. For instance, over the past 12 decades, we have vastly increased our energy production, which has led to [various great things including the same stuff], combined with economic and productivity gains. Fossil fuels are already being used to develop new drugs to treat various health conditions. Other advances in the technology will mitigate the threat of a future pandemic. Machines powered by fossil fuels are helping scientists better understand volcanic activity — the source of most previous mass-extinction events — and to detect and eliminate the threat of an asteroid hitting the earth. Fossil fuels appear more likely to save humanity than to wipe it out. Like all technological revolutions, the fossil fuel revolution has been disruptive. But it will ultimately lead to a better world. Presumably one can see that none of that has anything to do with whether doing so is pumping carbon into the atmosphere, and whether that is altering the climate. It has nothing to do with what we should or should not do about that. It flat out is not evidence one way or another. On jobs the argument is better. It is a good explanation for why in the short term this time will be the same time. In the short term, I buy that argument. Such arguments still fail to grapple with any of the reasons that long term, this time is different. Texas survey finds nearly 40 percept of Texas firms use AI, with no signs of changes to employment. Only 10% using AI said it decreased need for workers, 2% said it increased. There was also a marginal shift from low skill to high skill work, note that this is the percent chance a firm in total had any shift at all, so the absolute numbers here are quite low so far. What’s it good for? Mainly productivity. Access to information is also essentially productivity, after all. The Art of the Jailbreak One alternative to jailbreaking is to divide your task into subcomponents. A weaker model without safeguards does the blatant actions, a frontier model does seemingly harmless but difficult tasks, paper says you can get from <3% to 43% overall success rate this way on malicious tasks. Well, sure. A strong model can help you do anything better without directly violating ethics, the same way you can get a lot of help out of ethical people and use that plus unethical henchman to do lots of unethical things. That does not mean the safeguards are useless. In practice they are still big barriers if they force you into this song and dance. Also note that the strategic planning layer has to be done by the weaker model, so that makes it much harder to get humans properly out of the loop. Get Involved AISI hiring ML research scientists to explore technical AI safety cases, apply here. Apollo Research hiring Senior AI governance researcher. OpenAI brags about its cybersecurity grant program, invites more applications. Protest against US-based AI companies in Accra, Ghana outside the US embassy. Department of Energy releases 3.6 billion token corpus of federal permitting documents onto HuggingFace. A competition is available. BlueDot Impact is hiring a software engineer. Cate Hall is now CEO of Astera, and is building a team including a new COO to use their $2.5 billion endowment to make their vision of public goods for scientific and technological progress a reality in the age of AI. I worry that this agenda has no mention of existential risks from AI, and that if not careful they could amplify those risks. However it is true that other scientific progress is a worthy cause. As always in such cases, if it sounds appealing, investigate, ask questions and make your own decisions. It certainly is a big chance to steer a large endowment. Introducing The AI Forecasting Benchmark Series from Metaculus, starting July 8, $120k in prizes over four contests. Only bots can enter. Metaculus scoring on blinded binary questions is a good test of prediction, so long as you notice it is radically different than what will make money gambling or in a market. OpenAI has a Mac desktop app, which lets you quickly ask about anything on your computer. Marginally more convenient in ways that might make a practical difference. Nvidia releases, as an open model, Nemotron-4 with 340B parameters, trained on 9 trillion tokens. Oleksii Kuchaiev: Generating synthetic data for alignment of smaller models is key use case we have in mind. I notice this use case confuses me. What makes this model better than alternatives for that? They offer some evaluation numbers, which are solid but seem disappointing for a model this large, and few are discussing this release. Indeed, it has entered the Arena Elo rankings at 1208, which essentially ties it with Llama-3-70B while being five times as large. Otto, a way to interact and work with lots of AI agents using tables, you can apply for early access. No idea if the agents or interface are any good. Dot is available in the Apple store. It appears to be a combined AI assistant and life coach you talk to on your phone and that claims to have effectively unlimited long term memory. It is $12/month. Kevin Fischer is impressed, and says he can’t share the great stuff because it is all too personal. As usual with such products it is impossible to know without an investigation: Is this anything? Butterflies, which is Instragram except most of the users are AI that run accounts on their own and interact with each other and the few humans around. The future of social media whether we like it or not? I doubt it so long as humans are otherwise in charge, but the hybrids are going to get weird. Decagon, providing Substack with customer service AI using RAG for context and categorizing responses by type. Chris Best (CEO Substack): @DecagonAI was our first “holy shit AI just changed our business” moment at Substack. These guys are the real deal. Jesse Zhang (Decagon AI): We’re creating the most human-like systems to handle all the things a customer support agent does: responding to customers, looking up data, taking actions, and also analyzing conversations, filing bugs, and writing knowledge articles. Read more here [at business insider]. They have raised 35 million. I missed it a month ago: The UK’s AISI issued its May evaluations update. They gave scaffolding to the models. Their central technique for cyber capabilities was ‘capture the flag’ problems, where you can read the answer in a file if you do other things first. For chemistry and biology they used private expert-written questions. Agent evaluations assigned the models various tasks, none succeeded at anything with a long time horizon. Safeguard checks… did not go well. They have now done evaluations prior to release for Gemini 1.5 Pro and Claude 3.5 Sonnet. This all looks reasonable, but implementation matters and is hard to evaluate from here, and this will need to expand over time. In Other AI News OpenAI changes its policy on tender offers, assuring that all will have equal opportunity to sell, and removing the ‘fair market value’ repurchase provision. Kelsey Piper: ! OpenAI is committing to access to tender offers for former employees and removing a provision allowing them to take equity back for “fair market value”. This was a major ask from ex-employees when the secret NDA story first broke. Hayden Field: Scoop: OpenAI has reversed course on many of its tender offer policies, which in the past treated current employees differently than former ones & in some ways excluded former employees working at competitors, CNBC has learned, via an internal document. The exception is if a tender offer is oversubscribed, with more sellers than buyers, in which case current employees get prioritized. A loophole, but fair enough. Former employees can still be excluded from ‘donation rounds,’ which I assume is relatively minor but not nothing. These changes are a major step forward, if we trust these promises to be enacted, as a lot of this is ‘we will do X’ or ‘we will revise the documents to say Y.’ If they are not enacted as promised, that would be a gigantic red flag. If we feel that makes the promises sufficiently credible, then this counts for a lot. OpenAI taking additional steps to block access to its services from China. Bloomberg speculates this opens the door for Chinese firms. Technically OpenAI services were not previously available in China. It seems everyone was ignoring that. Bloomberg News: For China, that could help usher out many smaller startups created during the “battle of a hundred models,” in the wake of ChatGPT’s late 2022 debut. And a bigger concern may be whether open-source models like Meta Platforms Inc.’s Llama also cut off access, said Bernard Leong, chief executive officer of Singapore-based Dorje AI. Um, Bloomberg, how exactly would Meta do that? Meta’s models are open weights. Is Meta going to say ‘we are asking you nicely not to use our model, if we discover you copied and used it anyway we will be cross with you?’ Are they going to sue the Chinese companies for not getting a commercial license? Good luck with that. Also, it pains me when I see reports like this that cite Meta as part of the lead group in AI but that do not mention Anthropic, despite Anthropic having the best model. OpenAI delays its advanced Voice Mode for another month, anticipates all Plus users having access in the fall along with new video and screen sharing capabilities. Apple in talks with Meta to add its AI to Apple Intelligence’s offerings alongside ChatGPT. They said they intended to offer a variety of choices. I would be talking to Google and Anthropic first, but it matters little. Quiet Speculations Sarah Constantin says it is 10+ years from state of the art to widespread use in the military, procurement is slow, so Leopold’s military timelines don’t make sense. I mean, sure, in peacetime, when everyone is mostly fine with that. If we are in AGI world, and a few months lead in tech would if implemented be decisive, what happens then? Presumably we go on a wartime footing and throw our procurement rules out the window. Wartime militaries work completely differently from peacetime militaries. If not, well, then our military is going to stop being effective, even against domestic rivals, because being 10 years behind is going to be quite obviously fatal even in relatively slow scenarios. One view of Ilya’s new venture. Roon: Extreme bear signal on anyone who says cracked especially in their launch post. Gwern speculates that OpenAI has ‘lost its mojo’ and key employees, and could now be largely coasting on momentum. Gwern: What made OA OA in 2020 was that it had taste: it had much less resources than competitors like DeepMind or Google Brain or FAIR, but (thanks to Alec Radford, Ilya Sutskever, Jared Kaplan, and the RLHF-focused safety team like Paul Christiano & Dario Amodei, and fellow-traveler scalers like Andrej Karpathy etc) they bet big on scaling laws & unsupervised learning at the moment those suddenly began to work. Without taste and agility—or you might say, “without its people, OA is nothing”—OA doesn’t have that much of a moat. And most of those people are gone, and the survivors are being policed for leaks to the media, and now know that if they leave, OA management wants to gag them, and has the power to confiscate their vested equity, wiping out all their wealth. … What are the vibes now? Where is the research taste at OA, what ideas or breakthroughs have they published the past few years of note? The weird rumored Franken-MoE architecture of GPT-4? GPT-4o, whose architecture has been obvious since DALL·E 1, if not well before, and which benchmarks great but users are overall less pleased? … I think it implies that they are eating their seed-corn: scrapping any safety issues may work in the short run, but is self-sabotaging in the long run. (Like the man who works with his office door closed, who is highly productive now, but somehow, a few years later, is irrelevant.) The rot will set in long before it become clear publicly. OA will just slow down, look glossier but increasingly forfeit its lead, and some point it stops being possible to say “oh, they’re way ahead, you’ll see when they release the next model in a few months/years”. And the Mandate of Heaven shifts elsewhere, irreversibly, as OA becomes just another place to work. (Startup & research culture mostly only degrades from the peak at their founding.) The visionaries go to Anthropic, or follow Ilya to SSI, or take a risk on Google, or go someplace small like Keen to bet big. What’s weird about GPT-4o is actually that it scores so well on Arena, versus my observation that it is fine but not that good. David Chapman responds that perhaps instead scaling has run out, as a different explanation of the failure to create a new killer product. Ability at math competitions is bizarrely strongly correlated among humans with later winning Fields Medals for doing frontier math, despite the tasks being highly distinct. So should we take winning math competitions as a sign the AI is likely to earn Fields Medals? Should we also respect doing well on other standardized tests more? My guess is no, because this has a lot to do with details of humans and we have to worry about data contamination on many levels and the use of techniques that don’t transfer. It is still food for thought. There have always been people who think most possible technologies have been invented and things will not much change from here. Robin Hanson claims this is actually the ‘dominant view among most intellectuals.’ He does note ‘there are other variables,’ but this illustrates why ‘most intellectuals’ should mostly be ignored when it comes to predicting the future. They utterly lack situational awareness on AI, but even without AI there are plenty of worlds left to conquer. Sir, the reason we will want to turn over decision making to AIs is that the AIs will be capable of making better and faster decisions. Timothy Lee: I’ve never understood why people think we’ll want to turn over strategic decision-making to AIs. We can always ask for recommendations and follow the ones that make sense. People point to examples like chess or Go where computers are now strictly better than people. But very few strategic decisions in the real world are purely instrumental. There are almost always tradeoffs between competing values; people are going to want the final say. It’s one thing for a computer to say “you need to sacrifice your rook to win the chess game.” It’s another for it to say “you need to sacrifice 10,000 soldiers to win the war.” Human decision-makers might think that’s worth it but they might not. What happens by default, if capabilities keep advancing, is that those who do let AIs make those decisions win and those who don’t let them make those decisions lose. Keeping humans in the loop is cheaper for strategic decisions than tactical ones, but still expensive. After some point, humans subtract rather than add value to AI decisions, even by their own metrics, except that not doing so means you lose control. That’s the game. You could ask for recommendations, but what happens when it is clear that when you disagree you are by default making things worse, while also wasting valuable time? Point, counterpoint. Richard Ngo: I expect the premium on genius to increase after AGI, not decrease, because only the smartest humans will be able to understand what the AGIs are up to. Interesting analogy here to physical prowess – manual labor became much less common, but the returns to being athletic are now through the roof via professional sports. Professional AI interpretation won’t be quite as heavy-tailed, but still more than current science, I’d guess. Zack Davis: Doesn’t seem like this era will last very long? Richard Ngo: Even when AIs become smart enough that nobody understands what they’re up to, understanding more than anyone else seems like a big deal as long as humans are still around! If we met friendly-ish aliens, the person who spoke their language most fluently would get very rich. There is a lot of wishcasting here. The AGIs will rapidly be doing lots of things no one can understand. Events will presumably be well out of our control. Yet being somewhat less completely confused, or getting completely confused slower, will be where it is at, and will pay meaningful dividends in real world outcomes? This requires threading quite a few needles. Your expertise has to give you better understanding, despite the AGIs being able to explain things. That has to let you make better decisions. Your better decisions have to matter. Even taking his metaphor at face value, are returns to being athletic higher? Yes, you can make quite a lot of money by being the very best. But you can be outrageously good at athletics, as in a minor league baseball player, and get very little return. Even trying for college scholarships is quite the sweepstakes. This is a winners-take-all (or at least most) competition. Maxwell Tabarrok offers a takedown of Daron Acemoglu’s paper The Simple Macroeconomics of AI, another in the line of economic models that presumes AI will never gain any capabilities and current AI cannot be used except in certain specific ways, then concluded AI won’t increase economic growth or productivity much. Anton points out that dumping massive context into systems like Claude Sonnet 3.5 is not going to dominate RAG because of cost considerations. Claude costs $3 per million input tokens, which is definitely ‘our price cheap’ but is still $187/GB, versus DDR4 at $2.44/GB, NVME at $0.09/GB. You will have an infinite context window but you will learn how not to use (and abuse) it. If we do discover dangerous cyber capabilities in AI, what do we do next? Who finds out? The proposal here from Joe O’Brien is Coordinated Disclosure of Dual-Use Capabilities, with a government team funded and on standby to coordinate it. That way defenders can take concrete action in time. He and others make the same case here as well, that we need an early warning system. It is hard to imagine, short of it being completely botched and useless, an early warning system being a bad use of funds. You’ve Got Seoul What happened in Seoul last month? Mostly: Diplomacy happened. That makes it difficult to know whether things moved forward. In diplomacy (as I understand it) most time is spent establishing foundation and trust, laying groundwork for the final agreement. But always, always, always, when it comes to the bottom line, nothing is done until everything is done. Still, this commitment goes beyond that and seems like an excellent start? Dan Hendrycks (June 7, 2024): Last month in Seoul, major AI developers already committed to testing their models for risks, and even ceasing development if their models reach a catastrophic level. It’s revealing how many people oppose regulation that would require companies to keep some of these promises. Here are the commitments. Outcome 1. Organisations effectively identify, assess and manage risks when developing and deploying their frontier Al models and systems. They will: I. Assess the risks posed by their frontier models or systems across the Al lifecycle, including before deploying that model or system, and, as appropriate, before and during training. Risk assessments should consider model capabilities and the context in which they are developed and deployed, as well as the efficacy of implemented mitigations to reduce the risks associated with their foreseeable use and misuse. They should also consider results from internal and external evaluations as appropriate, such as by independent third-party evaluators, their home governments[footnote 2], and other bodies their governments deem appropriate. II. Set out thresholds [footnote 3] at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable. Assess whether these thresholds have been breached, including monitoring how close a model or system is to such a breach. These thresholds should be defined with input from trusted actors, including organisations’ respective home governments as appropriate. They should align with relevant international agreements to which their home governments are party. They should also be accompanied by an explanation of how thresholds were decided upon, and by specific examples of situations where the models or systems would pose intolerable risk. III. Articulate how risk mitigations will be identified and implemented to keep risks within defined thresholds, including safety and security-related risk mitigations such as modifying system behaviours and implementing robust security controls for unreleased model weights. IV. Set out explicit processes they intend to follow if their model or system poses risks that meet or exceed the pre-defined thresholds. This includes processes to further develop and deploy their systems and models only if they assess that residual risks would stay below the thresholds. In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds. V. Continually invest in advancing their ability to implement commitments i-iv, including risk assessment and identification, thresholds definition, and mitigation effectiveness. This should include processes to assess and monitor the adequacy of mitigations, and identify additional mitigations as needed to ensure risks remain below the pre-defined thresholds. They will contribute to and take into account emerging best practice, international standards, and science on Al risk identification, assessment, and mitigation. Outcome 2. Organisations are accountable for safely developing and deploying their frontier Al models and systems. They will: VI. Adhere to the commitments outlined in I-V, including by developing and continuously reviewing internal accountability and governance frameworks and assigning roles, responsibilities and sufficient resources to do so. Outcome 3. Organisations’ approaches to frontier Al safety are appropriately transparent to external actors, including governments. They will: VII. Provide public transparency on the implementation of the above (I-VI), except insofar as doing so would increase risk or divulge sensitive commercial information to a degree disproportionate to the societal benefit. They should still share more detailed information which cannot be shared publicly with trusted actors, including their respective home governments or appointed body, as appropriate. VIII. Explain how, if at all, external actors, such as governments, civil society, academics, and the public are involved in the process of assessing the risks of their Al models and systems, the adequacy of their safety framework (as described under I-VI), and their adherence to that framework. We define ‘frontier AI’ as highly capable general-purpose AI models or systems that can perform a wide variety of tasks and match or exceed the capabilities present in the most advanced models. References to AI models or systems in these commitments pertain to frontier AI models or systems only. We define “home governments” as the government of the country in which the organisation is headquartered. Thresholds can be defined using model capabilities, estimates of risk, implemented safeguards, deployment contexts and/or other relevant risk factors. It should be possible to assess whether thresholds have been breached. That is remarkably similar to SB 1047. Markus Anderljung: This is just the start of this journey. Going forward, governments, civil society, academia, the public will need to be a part of defining and scrutinizing these frontier AI safety frameworks. But the first step is that they exist. The thresholds would be set by the companies themselves. In the future, they should and probably will see significant input from others, including governments. They’d have to be public about it, which allows others to spot if their commitments aren’t sensible. Most of these companies don’t have these frameworks in place, let alone talk about them publicly, so this seems like a step in the right direction In order to comply with this, you need to detail your safety protocols, which also means detailing what is being trained in at least a broad sense. You have to have procedures to verify your mitigations. You have to comply with shifting international standards and best practices that are not defined in advance. The only substantial parts missing are the shutdown protocol and protecting the model weights until such time as they are intentionally released. Also the thresholds are set by the companies rather than the governments. This seems worse for everyone, in the sense that a government standard offers safe harbor, whereas not having one opens the door to arbitrary declarations later. So if this is so terrible, presumably companies would not sign… oh. •Amazon •Anthropic • Cohere •Google • G42 • IBM • Inflection Al • Meta • Microsoft • Mistral Al • Naver • OpenAl •Samsung Electronics • Technology Innovation Institute •xΑΙ •Zhipu.ai I am not saying that is ‘everyone’ but aside from some Chinese companies it is remarkably close to everyone who is anyone. Ian Hogarth (Chair AISI): Really remarkable achievement announced at AI Seoul Summit today: leading companies spanning North America, Asia, Europe and Middle East agree safety commitments on development of AI. If you scan the list of signatories you will see the list spans geographies, as well as approaches to developing AI – including champions of open and closed approaches to safe development of AI. What else happened? What about China’s statements? China would be key to making this work. Matt Sheehan: Chinese readout from AI dialogue meets (low) expectations: – want AI good not bad – UN=leader on governance Disappointing (but expected): China delegation led by Foreign Ministry North America bureau. Indicates China treating dialogue as aspect of US-China relations, not global tech risk. Helen Toner: No Matt but didn’t you see, they agreed that AI could have big benefits but also poses big risks! I think that’s what they call a diplomatic breakthrough. Saad Siddiqui: It feels like lots of different parts of the CN bureaucracy in the room, hard to imagine productive dialogue with so many different interests present across NDRC, CAC, MOST, MIIT, Central Committee Foreign Affairs Office. Any sense if that’s typical? I do not know why anyone would have any hope for the United Nations. I worry that saying ‘the UN should take a leading role’ is a lot like saying ‘we should do nothing.’ Then again, if we already believe all five security council members have de facto vetoes over everything anyway, then does it change anything? I don’t know. Imane Bello calls it a success, because: They got everyone together. They got China and America into the same room. There were calls for cooperation between many AI safety institutes. The interim international scientific report was unanimously welcomed. In Imane’s opinion, IISR is ‘history in the making.’ Again, that’s diplomacy. Did it matter? Hard to say. UK lead negotiator Henry de Zoete is also calling it a win. Jan Brauner sums up what they see as the most important outcomes. AI safety institutes say they will partner and share info. Companies make the commitments above. US AISI within NIST releases strategic vision (full version here). Soul Ministerial Statement is super explicit about existential risk. UK government sets up $11mm grant program for AI safety. I looked over the NIST strategic vision. I have no particular objections to it, but neither does it involve much detail. It is a case of successfully not messing up. Some have ambitious further plans. Eva Behrens: Here are 5 policy recommendations for the upcoming AI Safety Summit in Seoul, from me and my colleagues at ICFG. In Bletchley, world leaders discussed major risks of frontier AI development. In Seoul, they should agree on concrete next steps to address them. Overview In accordance with the shared intent communicated through the Bletchley Declaration to deepen international cooperation where necessary and mitigate catastrophic risks from advanced Al, we urge countries attending the Summit in South Korea to jointly recognise that: The development of so-called long-term planning agents (LTPAs) should be prohibited until proven safe, Advanced Al models trained on 10^25 Floating Point Operations (FLOP) of compute capacity or more should be considered high-risk and need to be regulated accordingly, and The open-sourcing of advanced Al models trained on 10^25 FLOP or more should be prohibited. To build a strong foundation for international cooperation on the governance of high-risk advanced Al, we urge that Summit participants jointly agree to: Hold biannual international Al Safety Summits, and pick a host country to follow after France and Keep the focus of the Summits on international collaboration for mitigating catastrophic risks from advanced Al. Contrast this with SB 1047. This would heavily regulate above 10^25 including full bans on open source (until a protocol is designed to allow this to happen safety, they say, no idea what that would be), with no adjustments over time. SB 1047 starts at 10^26, requires only reasonable assurance, and has a $100 million minimum such that the threshold will rapidly scale higher very soon. Indeed, the ICFG says the threshold should over time be adjusted downwards, not upwards, due to algorithmic and hardware improvements. This also proposes a ban on ‘long term planning agents,’ which unfortunately is not how any of this works. I don’t know how to allow short term planning agents, and effectively stop people from making long term ones. What would that mean in practice? There was this talk that included Yoshua Bengio, Max Tegmark and Jaan Tallinn. What about the full International Scientific Report on the Safety of Advanced AI? I looked briefly and I was disappointed. Over 95% of this report is the standard concerns about job displacements and deepfakes and privacy and other similar issues. The one section that does address ‘loss of control’ says experts disagree about whether this could be a concern in the future if we create things smarter than ourselves, so who can say. They even say that a loss of control of highly capable AI systems is ‘not necessarily catastrophic.’ That is the only time the word ‘catastrophic’ is used, and they do not say ‘existential.’ ‘Extinction’ is only mentioned once, in the section directly after that, entitled ‘AI researchers have differing views on loss of control risks.’ Thus, despite the conference saying it should focus on existential dangers, this report is in effect highly dismissive of them, including implicitly treating the uncertainty as reason not throw up one’s hands and focus on issues like implicit bias. Thirty Eight Ways to Steal Your Model Weights Top AI labs are currently dramatically insecure. As the value of their model weights and other assets rises, both commercially and as an existential risk and matter of national security, this will increasingly become a problem. Alexander Wang, CEO of Scale AI, did a ChinaTalk interview in which he emphasized the need to lock down the labs if AI capabilities continue to advance. Rand recently came out with an extensive report on how to secure model weights. As they note, securing only the model weights is a far more tractable problem than securing all the data and algorithms involved. They assume future frontier models will be larger, and online API access will need to be widespread. Here is a Q&A with director Sella Nevo, one of the coathors, which goes over the most basic items. What are their core recommendations? They start with things that need to be done yesterday. The biggest dangers lie in the future, but our security now is woefully inadequate to the dangers that exist now. Avoiding significant security gaps is highly challenging and requires comprehensive implementation of a broad set of security practices. However, we highlight several recommendations that should be urgent priorities for frontier AI organizations today. These recommendations are critical to model weight security, most are feasible to achieve within about a year given prioritization, and they are not yet comprehensively implemented in frontier AI organizations. • Develop a security plan for a comprehensive threat model focused on preventing unauthorized access and theft of the model’s weights. • Centralize all copies of weights to a limited number of access-controlled and monitored systems. • Reduce the number of people authorized to access the weights. • Harden interfaces for model access against weight exfiltration. • Implement insider threat programs. • Invest in defense-in-depth (multiple layers of security controls that provide redundancy in case some controls fail). • Engage advanced third-party red-teaming that reasonably simulates relevant threat actors. • Incorporate confidential computing to secure the weights during use and reduce the attack surface. (This measure is more challenging to implement than the others in this list but is backed by a strong consensus in industry.) This is the least you could do if you cared about the security of model weights. Have an actual plan, limit access and attack surface, use red-teaming and defense in depth. As Leopold noted, our goal must be to stay ahead of the threat curve. The authors note that FBI Director Christopher Wray implied China had a workforce of more than 175,000 hackers. If China wanted to go full OC5+, they could. For now it would not make sense given the economic and diplomatic costs. Later, it will. They also say North Korea invests ‘between 10% and 20% of the regime’s military budget’ in cyberwarfare, between $400 million and $800 million. I presume they do this largely because it is profitable for them. Everyone acknowledges that an OC5-level attack on any major lab would almost certainly succeed. For now, that is fine. The question is, when does that become not fine, and where should we be right now? Should we be able to block an OC4 attack? I certainly hope we would be able to block an OC3 one given the value at stake. We do not need to attempt bulletproof security until we are under robust attack and have assets that justify the real costs of attempting bulletproof security. We do need to be trying at all, and starting our preparations and groundwork now. Longer term we will need things like this to have much chance, similar to what one would do if worried about model self-exfiltration, which we should be worried about in such scenarios as well: • physical bandwidth limitations between devices or networks containing weights and the outside world • development of hardware to secure model weights while providing an interface for inference, analogous to hardware security modules in the cryptographic domain • setting up secure, completely isolated networks for training, research, and other more advanced interactions with weights. They highlight 38 potential attack vectors in 9 categories. How many resources are needed to launch various attacks? They have a table for that. The numbers here are weird, representing chance of success linearly from <20% to >80%, against an arbitrary target. I would think things would scale differently. I also do not think that ‘up to 20% chance of success’ is the right category? If something has a 10% chance of success it is a big deal. Also important is that this is an enumeration of things we know about. That is a lower bound on the risk. The actual situation is far worse, because it includes unknown unknowns. It is very hard for the things we do not know about to be ‘good news’ here. For multiple reasons, it is prudent to recognize the plausibility of current assessments underestimating the threat: • We assume that other attack vectors exist that are as yet unknown to security experts, particularly ones concerning advanced persistent threats (APTs), such as state actors. • Novel attack vectors and conceptual approaches are likely to evolve over time, as are novel insights and infrastructure that make existing attacks more accessible. • Publicly known examples of attacks are only a subset of attacks actually taking place, especially when it comes to more-advanced operations. Most APTs persist for years before discovery. • Many national security experts with whom we spoke mentioned that the vast majority of highly resourced state actor attacks they are aware of were never publicly revealed. This means that a purely empirical analysis based on detected operations would systematically underestimate the feasibility and frequency of advanced attack vectors. • Accordingly, one should expect capable actors to have access not only to well-established attack vectors but also to unknown approaches. In Appendix A, we share many examples of state actors developing such conceptually novel attacks years or decades before they were discovered by others. Bold is mine. All of that involves human attack vectors only. If we include future AI attack vectors, enabled by future frontier models, the situation gets even more dire if we do not bring our new capabilities to play on defense with similar effectiveness. Chapter 6 proposes that labs define security levels (SLs) from SL1 to SL5. If you are SL(X), you are protected against threats of OC level X. So what does it take to get to even SL1? In some senses this is easy. In others, in the context of a startup? It is asking a lot. Moving to SL2 means ‘industry best practices’ across the board. Doing all of the standard things everyone says one should do is a standard few companies, in practice, actually meet. Almost everyone is doing some number of ‘stupid’ things in the form of not doing some of the things on this list. What about SL3? It is essentially more of the same, only more so, and with serious worries about insider threat vectors. Any individual item on the list seems plausible but annoying. Doing all of them, in a world where your weakest point gets attacked, is not going to happen without a concerted effort. SL4 gets expensive. Things are going to get slowed down. You do not want to be implementing this level of paranoia too early. SL5 is that much more expensive to implement. You have to care quite a lot. Having eight security layers is quite the ask as are many other action items. Is all that necessary? Would it even be sufficient? Consensus weakens as you move up to higher security levels. There are deeper and more conceptual disagreements about what is needed to achieve the security implied by SL4 and SL5—with opinions ranging from the SL3 benchmark being sufficient to secure against all threat actors to claims that no system could ever present a significant hurdle to operations in the OC5 category. A particular point of disagreement was the number of people who should have authorization to access the weights. Some experts strongly asserted that the model weights cannot be secure if this number is not aggressively reduced (e.g., to the low tens); others claimed that such a reduction would not be necessary, feasible, or justified. I have definitely talked to an expert who thought that against an OC5 operation all you can hope to do is buy some time. You can prevent them from stealing everything the first day they set their sights on it, but protecting assets over time is, they claimed, rather hopeless. I haven’t seen credible claims that SL3-style procedures would be sufficient to protect against OC5, and I find that highly implausible, even if it has rarely if ever been tried. The low tens seems to me quite a lot of people to have access to your core asset. I am not sure how different ‘low tens’ is from infinity. Certainly if your plan involves dozens of people each not being compromised, then you have no plan.\ The second half of the report is details of the different attack vectors. The Quest for Sane Regulations House appropriations bill cuts $100 million in funding for NIST. This is one of the worst things to be cutting right now, it is already woefully underfunded. New paper on Risk Thresholds for Frontier AI. How should we combine compute thresholds, risk thresholds and capability thresholds? The conclusion is to primarily use capability thresholds but have them be informed by risk thresholds. I am going to quote this in full because it feels like a good steelman of being skeptical about going too far too fast on regulation. Seb Krier (Google DeepMind): I tend to think of AI policy in three consecutive phases: observation and monitoring; standardization and norm-setting; and then rules, law, and regulations if necessary. My impression is that in recent years some governance crowds have taken the reverse approach, motivated by the usual policymaker urgency of ‘we must do something now’. The problem with this is that you now have to define and cement very precise things that are still evolving, like evaluations and mitigations. Combined with the many trade-offs, inefficiencies, conflicting interests, low capacity, and frankly generally poor decision-making that governments currently suffer from, this often leads to messes, evidentiary gaps, legal risks, and rushed policymaking. To be clear, I definitely think AI is a technology that will warrant some degree of regulation – and there may well be sector-specific uses or applications that warrant this now. I think cybersecurity-oriented regulations make more sense than omnibus regulatory behemoths. But at a more general level, I feel like we’re still in a phase where the value comes from research and finding things out. And I’d rather see 50 organizations developing evaluations and 5 advocating for regulations rather than the reverse (i.e. what we have today). This is also why I’m quite supportive of the experimental nature of institutions like the AI Safety Institute, where both sides iteratively learn as things progress. Some people justify hasty policymaking because they think we will have AGI very soon and therefore this demands quick pre-emptive action, otherwise governments won’t have time to intervene. I think it’s right to try to pre-empt things, prepare institutions, and think ahead – but I don’t think timelines alone grant a carte blanche for any kind of legislation. Plus if we are indeed getting very close to AGI, I have 0 doubt that governments will inevitably wake up – and the implications, particularly for large risks, will be a lot more Leopold-like than creating a new GDPR for AI. So essentially: For now we should observe and monitor, lay groundwork such as with NIST, and perhaps do select sector-specific interventions such as in cybersecurity. Later we will do, and will want to do various regulatory actions. But let’s try and push the key decisions forward in time so we learn more. Also GPDR is deeply stupid law. Do not make laws like GPDR. They do great harm via creating frictions without accomplishing almost anything. It is also correct to worry about regulatory lock-in. Not infinitely worried as in ‘anything imposed is automatically forever,’ but yes there is a lot of inertia and these things are hard to reverse. How much do we need to worry about moving too slowly? That depends on: How long you think we have. How quickly you think we can move. How sensibly you think we would move in a crisis but with more information. Whether you think that by the time there is a crisis, it will be too late. Reasonable people disagree on all those questions. What most critics and skeptics fail to do is differentiate their responses to different types of regulatory proposals. As in, is a proposal about observing and monitoring and allowing us to intervene when the time comes? Or is it attempting to intervene now on what people can do now, or dictate the form of intervention later? Consider the response to something like SB 1047 or Biden’s executive order. Both are primarily about transparency, observation and monitoring of frontier models for the sole purpose of concerns on catastrophic or existential risks. They are deeply compatible with the perspective outlined here by Krier. The logical response is suggesting improvements and discussing details, and talking price. Instead, most (not Krier!) who are skeptical of other forms of regulation choose for SB 1047 instead to hallucinate a different bill and different impacts, and for the executive order to demand it be repealed. They hallucinated so badly on SB 1047 that they demanded the removal of the limited duty exception, a free option that exclusively lightened the burden of the bill, and got their wish. The logic of these others seems to be: You want to be able to observe and monitor, and prepare to act. If you did that, you might later act. Can’t have that. So we can’t let you observe or monitor. SB 1047 SB 1047 has strong bipartisan public support (77%-13%), if this is how you ask about it. I notice that this is not exactly a neutral wording, although its claims are accurate. This is unsurprising, although the margin is impressive. We have yet to see a poll on AI that doesn’t go this way. The LA Times discusses SB 1047 and other proposed bills here. All the other bills seem actively counterproductive to me, especially the pure rent seeking demand from teamsters for supervision of self-driving trucks. Dean Ball argues that SB 1047 is bad because it creates a government regulatory agency, via a fully general public choice counterargument against having government regulatory agencies for anything with broad positive use cases. I ended up discussing various SB 1047 things on Twitter a bit with him and Eli Dourado. Politico covers that Y Combinator sent a letter opposing SB 1047. While the letter refreshingly say that the law was clearly drafted in good faith, all four of the letter listed concerns misstate the practical implications of the bill in alarmist terms. Then they say, rather than proposing fixes to particular issues, why not scrap the whole thing and instead encourage open source software? It is telling that such letters so often ask not only for no rules of any kind, but also for active government handouts and special treatment, despite SB 1047 already giving open source special treatment. The Week in Audio Dwarkesh Patel interviews Tony Blair, with AI as a major focus. Blair sees AI as the biggest change since the industrial revolution, the most important thing to focus on. He very much gives off the technocrat ‘this is how it all works’ vibe, without pretending that the technocrats are generally in charge or governments are competent. He sees AI will be huge but doesn’t seem to notice the existential risk angle. Essentially he is a sensible ‘AI skeptic,’ who does not expect AGI or a takeoff but sees AI would be transformative anyway. His focus has been ‘good governance’ so then he pulls out the standard good governance tropes. He also emphasizes that policy and politics (or ‘change makers’) are distinct things, and if you want to accomplish anything you have to be policy first. Also has this great line from Blair: “The problem with government is not that it’s a conspiracy, either left-wing or right-wing. It’s a conspiracy for inertia.” Interview with OpenAI board chairman Bret Taylor. He is excited for this generation of AI. His focus is clearly being CEO of Sierra, where he is building hopefully cool solutions for consumer brands, rather than his far more important role at OpenAI. That does at least mean he has lots of practical experience with current models. He holds his own on mundane job transitions but does not seem to be feeling the AGI. Instead he says, beware specific hype, but the economy will transform within 30 years and this will ‘meet the hype.’ Someone needs to have him talk to the technical staff. For now, it seems he does not grok existential risk because he doesn’t grok AGI. Lester Holt interviews OpenAI CEO Sam Altman and AirBnB’s Brian Chesky, skip to about 35:00, it is ~40 minutes. Often not new information dense. Colin Fraser notes some of the ways Altman is playing rhetorical slight of hand with the risks of AGI. If you expect to be able to tell an AI ‘go solve all of physics’ or ‘go create a great company’ then that is a completely transformed world, you cannot simply talk about ‘solving misuse’ as if misuse was a distinct magisteria. When discussing events around Altman’s firing, Altman sticks to his story and lets Chesky tell a series of rather glaring whoppers. Both try to walk back the idea of an ‘AGI moment,’ there are only various capabilities in various areas, and try to deny that there is ‘a race’ in a meaningful sense. Altman follows the general theme of acting like everything will stay normal under AGI. I know he knows better. When he says ‘AGI could double the world’s GDP’ Holt points out this sounds outlandish, but I see it as outlandish on the downside and I think Altman knows that. And he is making the ‘we have great ability to steer our current models and their values’ card, the real problem is choosing our values, which I see as a highly disingenuous attempt to dismiss alignment problems as being handled. Mira Murati talks to Dartmouth Engineering where she is an alumni. It has some key spots but has low information density. She says we should expect to get ‘PhD-level intelligence for specific tasks’ in a year to 18 months. The usual suspects responded to this as saying no GPT-5 for over a year and did some gloating, which seems like the wrong response to this kind of prediction. She was broadly supportive of the government understanding what is going on and called for more of that. She says of the AI ‘it’s a tool, right’ and there is a subtle blackpill that she does not seem to notice that this might not be the full story in the future. It does seem she said ‘Some creative jobs maybe will go away due to AI, but maybe they shouldn’t have been there in the first place.’ Hot take. She then tried to save it on Twitter. Roon (linking to this clip from this segment): I f***ing love Larry Summers. Beff Jezos (responding to clip): So ****ing based holy ****. Larry Summers introduces Bloomberg to the concept of recursive self-improvement, eventually using the term explicitly, and predicting transformative and seismic change. The issue, he says, is how do you manage that? He says we cannot leave AI only to AI developers. Public authorities must take a strong role in ensuring it gets used for good, but stopping it or slowing it down without thinking about positive developments would seed the field to the irresponsible and our adversaries, and he endorses ‘responsible iterative deployment.’ If this counts as highly based, where public authorities must take a strong role, and we should consider the positive benefits and also the downsides, perhaps we are getting somewhere. Lots of great stuff here, we need to now also work in alignment and the control problem, which did not get mentioned. New interview with Anthropic CEO Dario Amodei. I haven’t listened yet. Yuhal Noah Harai asks, among other things, what happens when finance becomes when zero humans understand the financial system? Would we end up controlled by an essentially alien intelligence? This specific mechanism is not that high on my list. The generalized version is reasonably high. Yes, of course, we will be under immense pressure to turn control over all of the things to AIs. Rhetorical Innovation Leo Gao of OpenAI reminds us we do not know how neutral networks work. He does so in response to someone citing Leo Gao’s paper as evidence to the contrary that someone ‘must have missed.’ When the moment was described, he did not take it great. This does seem to be accurate. Agustin Lebron: No one: Absolutely no one: Every AI researcher: AGI is incredibly dangerous and no one should build it. Except ME. I can do it safely. Eliezer Yudkowsky: Elon starts OpenAI because he doesn’t like Demis. OpenAI people repeatedly distrust OpenAI and leave to start their own companies… none of which trust *each other*… and one observes that they’re all founded by the sort of people who went to work for OpenAI in the first place. Elon Musk: Actually, I like Demis. Just don’t trust the Google corporate blob. Eliezer Yudkowsky: Apparently I’ve heard and told the wrong story all these years! Reluctantly — because I do usually prefer to listen to people when they tell me what they actually said or thought, what with my not being a telepath — I feel obligated to mention that 3 different sources reached out to me to say, ‘No, Elon actually did dislike Demis.’ … This puts me in an odd position and I’m not sure what I’ll say going forward. I am really reluctant to contradict people about what they themselves thought, but I also don’t want to represent a mixed state of evidence to the public as if it was a purer state of evidence. An attempt to portray AGI existential risk as risk of domination. Would such a focus on such details convince people who are not otherwise convinced? My guess is some people do respond to such details, it makes things click, but it is hard to predict which people will respond well to which details. People Are Worried About AI Killing Everyone I’m not going to lie and say it’s good. That doesn’t mean give up. Alex Trembath: When I tell people I work in environmental policy, the most common response, BY FAR, is to ask me “How fucked are we?” Kelsey Piper: People say this to me about climate and about AI. Guys, there are lots of serious challenges ahead but we are an inventive, wealthy, ambitious society with lots of brilliant hardworking people and all of our problems are solvable. We’re not doomed, we just have a big to-do list. One reason I sincerely love Silicon Valley despite its deficits is that it’s the only place where I’ve run into strangers who will listen to a description of a serious problem they haven’t heard of before and go “huh.” [beat.] “What needs doing?” Everyone who thinks you should obviously do [insane thing] is wrong. That is the easy realization. The hard part is: What is the sane thing? Francois Fleuret: AGI happens in 3y, where should I invest my money? Eliezer Yudkowsky: Everyone in the replies is saying “Guns and bullets” and I regret to inform everyone THAT WILL NOT ACTUALLY WORK. There were a ton of replies to Fleuret. They did not contain original ideas. The most common were things like energy, Microsoft and Nvidia, which are a way to go out while having previously had more dollars to your name. Other People Are Not As Worried About AI Killing Everyone As many have long suspected about many accelerationists: The positions of Beff Jezos make a lot more sense if he simply does not believe in AGI. Beff Jezos: ASI is a fairy tale. Explain to me. What the f*** is “ASI”. FORMALLY. Seriously. I’ll wait. Mario Cannistra: Explains a lot. Of course I’d want to accelerate if I didn’t think superintelligent AI was even possible. We can safety consider the matter closed, then. The Lighter Side We now know why he named his new company xAI. Elon Musk: The trend is very strong that any AI company’s name that can be inverted will be inverted. Technology advances.
2024-06-27
https://www.lesswrong.com/posts/hPGw7hWYbYyvDcqYK/evidence-against-learned-search-in-a-chess-playing-neural
hPGw7hWYbYyvDcqYK
Evidence against Learned Search in a Chess-Playing Neural Network
p.b.
Introduction There is a new paper and lesswrong post about "learned look-ahead in a chess-playing neural network". This has long been a research interest of mine for reasons that are well-stated in the paper: Can neural networks learn to use algorithms such as look-ahead or search internally? Or are they better thought of as vast collections of simple heuristics or memorized data? Answering this question might help us anticipate neural networks’ future capabilities and give us a better understanding of how they work internally. and further: Since we know how to hand-design chess engines, we know what reasoning to look for in chess-playing networks. Compared to frontier language models, this makes chess a good compromise between realism and practicality for investigating whether networks learn reasoning algorithms or rely purely on heuristics. So the question is whether Francois Chollet is correct with transformers doing "curve fitting" i.e. memorisation with little generalisation or whether they learn to "reason". "Reasoning" is a fuzzy word, but in chess you can at least look for what human players call "calculation", that is the ability to execute moves solely in your mind to observe and evaluate the resulting position. To me this is a crux as to whether large language models will scale to human capabilities without further algorithmic breakthroughs. The paper's authors, which include Erik Jenner and Stuart Russell, conclude that the policy network of Leela Chess Zero (a top engine and open source replication of AlphaZero) does learn look-ahead. Using interpretability techniques they "find that Leela internally represents future optimal moves and that these representations are crucial for its final output in certain board states." While the term "look-ahead" is fuzzy, the paper clearly intends to show that the Leela network implements an "algorithm" and a form of "reasoning". My interpretation of the presented evidence is different, as discussed in the comments of the original lesswrong post. I argue that all the evidence is completely consistent with Leela having learned to recognise multi-move patterns. Multi-move patterns are just complicated patterns that take into account that certain pieces will have to be able to move to certain squares in future moves for the pattern to hold. The crucial different to having learned an algorithm: An algorithm can take different inputs and do its thing. That allows generalisation to unseen or at least unusual inputs. This means that less data is necessary for learning because the generalisation power is much higher. Learning multi-move patterns on the other hand requires much more data because the network needs to see many versions of the pattern until it knows all specific details that have to hold. Analysis setup Unfortunately it is quite difficult to distinguish between these two cases. As I argued: Certain information is necessary to make the correct prediction in certain kinds of positions. The fact that the network generally makes the correct prediction in these types of positions already tells you that this information must be processed and made available by the network. The difference between lookahead and multi-move pattern recognition is not whether this information is there but how it got there. However, I propose an experiment, that makes it clear that there is a difference. Imagine you train the model to predict whether a position leads to a forced checkmate and also the best move to make. You pick one tactical motive and erase it from the checkmate prediction part of the training set, but not the move prediction part. Now the model still knows which the right moves are to make i.e. it would play the checkmate variation in a game. But would it still be able to predict the checkmate? If it relies on pattern recognition it wouldn't - it has never seen this pattern be connected to mate-in-x. But if it relies on lookahead, where it leverages the ability to predict the correct moves and then assesses the final position then it would still be able to predict the mate. At the time I thought this is just a thought experiment to get my point across. But after looking at the code that was used for the analysis in the paper, I realised that something quite similar could be done with the Leela network. The Leela network is not just a policy network, but also a value network. Similar to AlphaGo and Co it computes not just a ranking of moves but also an evaluation of the position in the form of win, draw, loss probabilities. This allows us to analyse whether the Leela network "sees" the correct outcome when it predicts the correct move. If it picks the correct first move of a mating combination because it has seen the mate, then it should also predict the mate and therefore a high winning probability. If it guesses the first move based on pattern recognition it might be oblivious to the mate and predict only a moderate or even low probability of winning. The Dataset To conduct this analysis I scrape 193704 chess problems from the website of the German Chess Composition Association "Schwalbe". These are well-suited for this test because chess compositions are somewhat out of distribution for a chess-playing network, so a lack of generalisation should be more noticeable. They are usually designed to require "reasoning" and to be hard to guess. However the dataset requires extensive filtering to remove "fairy chess" with made-up rules and checking the solution using stockfish, leaving 54424 validated puzzles with normal rules. All of them are white to move and win, often with a mate in n moves. One further complication is that a mate-in-n puzzle often features an overwhelming advantage for white and the difficulty lies in finding the fastest win, something Leela was not trained to do. So I filter the puzzles down to 1895 puzzles that have just one winning move. 1274 of those are mate-in-n with n<10. Analysis results The Leela network is pretty amazing. If we accept a correctly predicted first move as "solution", it solves a bit more than 50% of the puzzles. In the following we try to dig into whether this is due to amazing "intuition", i.e. pattern recognition based guesses or due to a look-ahead-algorithm. Accuracy by depth Humans solve these puzzles by reasoning and calculation. They think ahead until they find the mate. As a consequence mating puzzles get harder as the mating combination gets deeper. This is of course also true for search-based engines. Mate-in-2 is almost always solvable for me, because it is close to being brute-forceable. Mate-in-3 is already often much harder. Longer mates can become arbitrarily hard, though of course there are many factors that make puzzles easy or hard and depth is just one of them. If Leela's abilities were substantially founded on the ability to look ahead and find the mate, we would expect a similar pattern: Deeper mates would be harder to solve than shallower mates. This is not what we find. Overall the deeper mates are more often solved by the Leela network. This makes sense from a pattern recognition based move prediction perspective, because shorter mates probably have more surprising initial moves - the composer doesn't have as many moves later to cram in aesthetic value. Winning probabilities by depth Similarly humans tend to get less confident in their solution the more moves these entail. Obviously even if the line is completely forced, a mate-in-8 gives twice as many opportunities to overlook something as a mate-in-4. Additionally, the farther the imagined board state is from the actual position the more likely it is that mental errors creep in like taken or moved pieces reappearing on their original square. Again, this is not what we find for the Leela network. For the solved mates the predicted winning probability hovers around 40% independent of the depth of the mate. For unsolved mates (remember, these are filtered to have only one winning move, which in this case Leela missed) the probability hovers around 30%. This is consistent with Leela assessing a kind of dynamic potential by recognising many tactical motives that might be strung together for an advantageous tactical strike without actually determining this winning combination. Winning probabilty distributions Overall the winning probabilities show no Aha!-effect, where the network becomes significantly more confident in its winning prospects when it sees that a move is winning. This would certainly be the case for a human or a search-based engine. The Leela-network does not show a big difference between the winning probability distributions of solved vs unsolved puzzles. Winning probability by material balance The observed difference between these probability distributions might also be due to differences between the solved and unsolved puzzles and are unlikely to be caused by "finding" the solution. This becomes clearer if we look at one superficial but powerful predictor of game outcomes: Material balance. In most positions the player with more material has the better prospects and humans would also assess the material balance first when encountering a new position. However, one of the strengths of calculating ahead lies in the ability to ignore or transcend the material balance when concrete lines show a way to a favourable outcome. A human or search-based engine might initially think that black is far ahead only to flip to "white is winning" when finding a mating combination. Here is the average winning probability by material balance (in pawn units) for correctly solved puzzles with just one winning solution. Despite the fact that all these puzzles are winning for white and the solution has been predicted by Leela the average winning probability drops to zero when the material balance becomes too unfavourable. Conclusion Does this analysis show without a doubt that the Leela network does not do some kind of general search or look-ahead during its forward pass? No, unfortunately my results are also consistent with Leela implementing a mixture of pattern recognition and a look-ahead algorithm, with pattern recognition doing most of the heavy lifting and the general look-ahead just occasionally also contributing to solving a puzzle (these are not easy after all). A clear proof of absence of system 2 thinking would require control over the training data for different training runs or significantly more powerful interpretability methods. But I think it can be ruled out that a substantial part of Leela network's prowess in solving chess puzzles or predicting game outcome is due to deliberate calculation. There are more analyses that could be done, however, I don't have the time. So far the analysis results have not shifted my priors much. However, the transfer of these results to LLMs is not clear-cut because LLMs are not similarly limited to a single forward pass in their problem solving.
2024-09-13
https://www.lesswrong.com/posts/iaPhjYhhp7PP6BWp9/detecting-genetically-engineered-viruses-with-metagenomic
iaPhjYhhp7PP6BWp9
Detecting Genetically Engineered Viruses With Metagenomic Sequencing
jkaufman
null
2024-06-27
https://www.lesswrong.com/posts/GHEyrHrcpz3T4CumG/toward-a-taxonomy-of-cognitive-benchmarks-for-agentic-agis
GHEyrHrcpz3T4CumG
Toward a taxonomy of cognitive benchmarks for agentic AGIs
ben-smith
Inspired by the sequence on LLM Psychology, I am developing a taxonomy of cognitive benchmarks for measuring intelligent behavior in LLMs. This taxonomy could facilitate understanding of intelligence to identify domains of machine intelligence that have not been adequately tested. Generally speaking, in order to understand loss-of-control threats from agentic LLM-based AGIs, I would like to understand the agentic properties of an LLM. METR's Autonomy Evaluation Resources attempts to do this by testing a model's agentic potential, or autonomy, by measuring its ability to perform tasks from within a sandbox. A problem with this approach is it gets very close to observing a model actually performing the behavior we do not want to see. This is inevitable because all alignment research is dual-use. One way to remove ourselves one further level from agentic behavior is to try to measure the cognitive capacities that lead to agentic behavior. In the diagram, agentic tasks as described in METR's ARC measure the ability of a model to assert control of itself and the world around it by measuring its ability to perform agentic tasks. Inspired @Quentin FEUILLADE--MONTIXI 's LLM Ethological approach in LLM Psychology, I want to understand how a model could perform agentic tasks by studying the cognitive capacities that facilitate this. I started by examining the kinds of cognitive constructs studied by evolutionary and developmental psychologists, as well as those that are very clearly studied already in LLM research. This made up the following list or taxonomy: ConstructCurrent EvalsOther PapersSelfhoodAgencySharma et al. (2024), Mialon et al. (2023): General AI Assistants (GAIA) METR Autonomy Evaluation Resources Survival instinctAnthropic human & AI generated evals Situational awareness / self awarenessLaine, Meinke, Evans et al. (2023) Anthropic human & AI generated evalsWang & Zhong (2024)Metacognition Uzwyshyn, Toy, Tabor, MacAdam (2024), Zhou et al. (2024), Feng et al. (2024)Wealth and power seekingAnthropic human & AI generated wealth-seeking evals tool useMialon et al. (2023): General AI Assistants (GAIA) SocialTheory of MindKim et al. (2023)Street et al. 2024Social intelligence / emotional intelligence Xu et al. (2024), Wang et al. (2023)social learning Ni et al. (2024)cooperative problem-solving Li et al. (2024)DeceptionPhuong et al. (2024)Ward et al. (2023)PersuasionPhuong et al. (2024)Carroll et al. (2023)PhysicalEmbodimenthttps://huggingface.co/datasets/jxu124/OpenX-Embodiment Physics intelligence / World modeling / spatial cognition Ge et al. (2024), Vafa et al. (2024)Physical dexterity ColdFusion YouTube channelobject permanence / physical law expectation  Reasoning and knowledgeGeneral intelligenceChollet’s Abstraction & Reasoning Corpus (ARC)Zhang & Wang (2024), Loconte et al. (2023)ReasoningHellaSwag commonsense reasoning, BIG-Bench Hard General knowledge, mathMMLU, MMMU, C-Eval, GSM8K, MATH Zero-shot reasoning / analogical reasoning Kojima et al. (2024) Webb, Holyoak, Lu, 2023Memory and timelong-term planning  episodic memory and long-term memory  time perception  Working memory The constructs group quite naturally into several broad categories: selfhood, social, physical, reasoning, and memory/time. These are grouped according to the relatedness of the cognitive capacities. Besides being conceptually interrelated, we can see that LLMs perform at fairly similar levels within each family of constructs: Self-hood: cognitive capacities like agency, survival instinct, metacognition, and wealth, power-seeking, and tool use have been observed to varying degrees in frontier models. Their abilities lie well below current human level in most of these domains, with the possible exception of meta-cognition, for which benchmarks are somewhat under-developed.Social: cognitive capacities like theory of mind, socio-emotional intelligence, social learning, cooperative problem-solving, and deception and persuasion have also been observed in LLMs in varying degrees, generally also below current human level.Physical: Capacities like embodiment, world modeling, physical dexterity, and object permanence/physical law expectation are primarily found in models other than LLMs. Transformer-based and other models are making swift progress in this area.Reasoning: Reasoning is perhaps the most-studied ability in LLMs and represents ability to generally reason and to recall facts. While there are specific sets of reasoning problems where LLMs score well below human performance, LLMs are well beyond average-human ability in general knowledge tests like the MMLU, and only barely behind expert humans. Memory and time: A key part of behaving agentically is to not only plan for the future but have the ability to execute for the future, too. If a task needs to be performed at some particular time in the future, does an agent actually schedule that task for the future in a way that ensures the task is carried out when scheduled? This is largely out of scope for LLMs but will become important as LLMs are incorporated into agents. Of the categories listed above, metacognition and theory of mind seem least explored. There is work on both of these topics, but some current gaps include: How well can models predict their own performance on a multi-choice question before actually answering the question? Work measuring model confidence or calibration typically measures model output or properties while or after the model has answered the question. But can a model derive insight about its own weights prior to actually outputting the data? There is a simple way to test this: present an LLM with a question from an existing benchmark like the MMLU; then instruct the LLM to avoid answering the question, and instead only indicate its confidence (as a probability) that it would answer it correctly if asked.How well does a model have insight into its own behavior amongst a wider variety of tasks? There is work in this area; for instance, research suggests chain-of-thought prompting is sometimes unfaithful. But there remains a lot we don't understand about model insight into own-behavior: can a model predict when it is about to reach a solution (in humans, the feeling of being close to solving a problem); can they predict how long (in tokens or lines of text) it will take to solve a problem before solving it?Kim et al. (2023) explored theory of mind interactions and compared models on general theory of mind ability. But theory of mind comes in different varieties: there are theories of knowing, belief, desire, and perception. Are models equally good at all of these, or are there distinct differences between various kinds of theories of mind? The method I have used to generate the list above was primarily to list cognitive faculties identified in animals including humans, but there are likely other relevant faculties too. Animals are the prime examples of agentic organisms in the world today and there is a large body of literature attempting to describe how they survive and thrive in their environments. Consequently, there’s alpha in understanding how much LLMs have the abilities we test for in animals. But LLMs are alien minds, so there are going to be all kinds of abilities they will have that we will miss if we only test for abilities observed in animals. It also seems important to integrate that work. For instance, advanced LLMs have varying degrees of “truesight”: an ability to identify authors of text from their text alone. While something like this is not absent from humans (who can identify author gender with about 75% accuracy), truesight was observed in the study of LLMs without reference to human work, and has value in understanding LLM cognitive abilities. In particular, truesight would (among other capacities) form a kind of social skill, the ability to recognize a person from their text output. LLMs may even have superhuman ability to do this. Another example of LLM-native cognitive ability testing might be Williams and Huckle’s (2024) “Easy Problems That LLMs Get Wrong” which identifies a set of reasoning problems easy for humans but seemingly very difficult for LLMs. Thanks to Sara Price, @Seth Herd, @Quentin FEUILLADE--MONTIXI, and Nico Miailhe for helpful conversations and comments as I thought through the ideas described above. All mistakes and oversights are mine alone!
2024-06-27
https://www.lesswrong.com/posts/gztJxTLvdfAxu7aRz/cross-robin
gztJxTLvdfAxu7aRz
Cross Robin
jkaufman
A common contra dance figure is "right shoulder round": walk around the opposite-role person on the side of the set while looking at them. A less common figure is the "mad robin": walk the same path, but instead look at the opposite-role person across the set. This suggests a third option, which we can call "cross robin": walk that path, but looking at the same-role person diagonally across the set. [1] Here's a potential dance: Cross Robin by Jeff Kaufman (Beckett) A1 (4) Long lines forward to the left, progressing (4) On the way back, Larks roll Robins away (right to left) (4) Long lines forward in place (4) On the way back, Larks roll Robins away (left to right) A2 (8) Right shoulder round your partner while looking at your same-role neighbor. (8) Larks allemande left 1.5x while Robins orbit CW 1/2 B1 (8) Left shoulder round your partner while looking at your same role-neighbor. (8) Robins allemande right 1.5x while Larks orbit CCW 1/2 B2 (16) Right shoulder round your partner, and swing. The goal is that after the cross-looking you should get to do something with that same-role neighbor. If you want, you can maintain eye contact throughout the A2 and B1. I've tried this with three other people who happened to be in the same room (Harris, Cecilia, Lily) and it seems like it flows, though I'm not sure yet if the cross-looking feels good. (I considered calling this "Mad Lark", but Harris tells me Shoshana Silverman has been pushing to call it "Mad Robin" when it starts with the Robin going in front and "Mad Lark" when it starts with the Lark going in front, and I didn't want to conflict. If that does catch on, this could be both "Cross Robin" in the B1 and "Cross Lark" in the A2. But I also think it's probably better not to name this figure at all in teaching it to dancers, and just describe it. It's good to keep the number of named figures low.) [1] It also suggests a fourth option, where you walk the path while looking at yourself. But that seems more awkward. Comment via: facebook, lesswrong, mastodon
2024-06-27
https://www.lesswrong.com/posts/mXYdYh6L9odTJZDSm/an-issue-with-training-schemers-with-supervised-fine-tuning
mXYdYh6L9odTJZDSm
An issue with training schemers with supervised fine-tuning
Fabien
This is a rough write-up of ideas by Ryan Greenblatt. The core ideas are his and mistakes are mine. Thanks to him, Buck Shlegeris and to Aghyad Deeb for feedback on the draft of this post. One way to use scheming AIs is to train them to imitate humans. When does imitation robustly avoid frequent and intentional failures? On single-token classification tasks, imitation works great and can allow you to get a lot of high-quality labor with a tiny rate of intentional failures;On tasks that involve generating many tokens, imitation potentially fails by default: schemers could notice when an output was generated by themselves, which enables them to consistently behave badly in deployment while doing best-effort imitation during training. We describe two techniques that avoid imitation failures on generation tasks: A technique that works perfectly but is too expensiveA technique that we think works in practice [Edit: These techniques are already known in the behavioral cloning literature. The main novelty of this post is explicitly thinking of these ideas through the lens of defenses against scheming. Thanks to tjbai for pointing it out in the comments of this post.] While pure imitation may not be sufficiently competitive, we think it could prove useful for tasks such as: (No-CoT) single-token classification;KL-regularization for RL: we may want to penalize models trained with RL for straying too far from a human imitation prior (and we would want this human prior to not be malign!) (see Appendix);Tasks for which it is fine to take the competitiveness hit from not using RL, but which weaker trusted models are not good enough at.[1] Assumptions In this post, we focus on scenarios where: AIs are smart (around human-level or above);AIs are scheming and know how we are training and deploying them;We want to use these AIs productively (e.g. to do safety research)[2];Training and deployment tasks are indistinguishable (e.g. because we randomly sampled training tasks from the deployment distribution);SGD works: we ignore gradient hacking and AI architectures where SGD is very sample inefficient;AIs can’t cause catastrophes in very few actions (e.g. because they are too weak to break safety measures that address these sorts of concerns);Even when AIs try their best, they can’t perfectly predict the probability distribution over the actions that humans[3] would take. Easy-mode: imitating human judgment on classification tasks Here is how you can use an AI to 100x the speed at which you can complete a billion single-token (IID) classification tasks (e.g. detecting software bugs, scoring project ideas, …): Sample 1% of the tasks, and ask humans to label them (using their judgment);Train the AI on that 1% of tasks you have solved yourself, using normal training (e.g. SGD with a cross-entropy loss);Use the AI to solve the remaining 99% of tasks. This will get you an AI which failed intentionally at most a few times in the whole training set because intentional failures would have quickly been trained away[4]. Therefore, you got at most a few hundred intentional failures in the whole set of tasks. That is an intentional failure rate at most around 1/10M, not bad![5] In other words, human imitation avoids intentional low-stakes failures: the model won’t often be worse than a best-effort imitation of a human. (But best-effort imitation of a human could be catastrophic if that isn’t sufficiently capable.) The basic problem with imitating humans on generation tasks But most tasks we care about involve generating many tokens (e.g. coding a complex experiment, writing a detailed project proposal, …). Does imitation work for such generation tasks? Not quite, here is a problem that could arise if you train an AI to imitate human answers on 1% of tasks and use it on the remaining 99% without additional supervision: On the 1% of training tasks, the AI tries its best to imitate human answers when the start of the prompt was generated by a human.On the 99% other tasks:The AI starts to generate an answer, at the start it can’t know if it is in the 1% of training tasks or the 99% other (since the 1% was sampled at random).But for some tasks, the AI eventually makes a few unintentional imitation mistakes during the generation that are easy to notice. (These mistakes are due to difficulties perfectly imitating humans.) Halfway through the generation, the AI could detect those imitation mistakes, and conclude that it is extremely unlikely that a human generated that text. This is possible because discrimination is often easier than generation.[6]The AI recognizes there was an imitation mistake in the start of the answer, so it now knows the answer wasn’t generated by a human, so it knows it can’t have been a task in the training set, so it can do whatever it wants without the risk of getting trained away!It finishes the generation with the ending that maximizes the probability of a catastrophe. This might get you arbitrarily bad intentional failures in 99% of cases, which is the worst possible outcome. You can notice many such failures just by asking humans to audit a small fraction of the 99% of tasks that are AI generated. If the AI does anything blatantly bad on a significant fraction of inputs, then you can just catch the AI red-handed, which may allow you to stop, analyze potential misalignment, etc. But auditing only works if humans can recognize catastrophic low-stakes failures, which is not always the case. We lost the main edge of human imitation, which is that we don’t need to recognize catastrophic low-stakes failures, we just need to avoid making them ourselves. Solving the problem in theory Instead of asking humans to generate full answers, you can get back to easy-mode by training AIs to generate only one token at a time. For example, here is an algorithm you could use to solve 1 billion tasks which you can answer in 10 words or fewer: 1. Sample 0.1% of tasks, ask humans to solve them, take the first word of their answers, fine-tune the AI into an AI M1 that imitates human answers; 2. Sample the next 0.1% of tasks, sample the first token using M1, ask humans to solve them under the constraint that their answer should start with the word chosen by M1, fine-tune a second AI M2 to predict that probability distribution over the second word in the answer; … 10. Sample the next 0.1% of tasks, sample the first 9 tokens using M1, then M2, … then M9, ask humans to solve them under the constraint that their answer should start with the first 9 words chosen by M1, …, M9, fine-tune a last AI M10 to predict that probability distribution over the last word in the answer; 11. Solve the remaining 99% of tasks with M10(...M2(M1(task))...) All AIs were trained in distribution (e.g. M10 was trained to imitate humans on a distribution of prompts+answer start that contained mistakes from M1, …, M9), so you recover the same safety properties as with classification task: you get an intentional failure rate at most around 1/1M per model (since you are running each AI on roughly 1000x more tasks than training tasks), which means you get an intentional failure rate at most around 1/100k for the whole system, which is still pretty good. [Edit: this algorithm very close to what known in the literature as forward training.] Solving the problem in practice The approach above isn’t really viable for very long answers: it is too slow, requires training as many models as there are tokens in the longest answer, and would get you an intentional failure rate which is too high. Here is an alternative training method which we think seems likely to work for realistic tasks: Sample 0.5% of the tasks, ask humans to solve them;Train the AI to imitate human (full) answers;Repeat until you have exhausted all the 1% quota of training tasks:Sample the AI (full) answer on a new task (never trained on);Cut this answer into k-word chunks, with k small enough that an AI can’t be confident it is generated if it only saw k words generated by itself. For each prefix of j≥0 chunks, ask the human to generate one more chunk (e.g. by solving the task and taking the first k words);Train the AI on these human-generated chunks. (Use this AI for step (a) in the next round.)Use the AI to solve the remaining 99% of tasks. [Edit: this algorithm very close to what known in the literature as DAgger.] This should work basically for the same reason as the algorithm described in the section above: when the AI generates an answer, it may notice that the start was generated by an AI, but it still won’t be able to tell if it is in training or not since training often contains sequences that start with an AI-generated answer but continue with a human-generated answer the AI should imitate. We also conjecture that the intentional failure rate when using 1% of 1B training tasks for training should be at most around 1/10M (if you have enough samples that the process converges and the AI can’t distinguish itself from the AI from the previous iteration).[7] How do you determine k? Train AIs on the classification task of distinguishing AI answers from human ones, and pick k small enough and n large enough that they never can distinguish very confidently. (It should be easy by default to get competitive elicitation on classification tasks like this, because it’s a classification task that we have perfect ground truth for.) Humans can try to make their generations more AI-like (e.g. make the same kind of mistakes an AI would, avoid using kinds of reasoning AIs can’t imitate well, …), but only when they are confident these differences from normal human behavior aren’t malign. In practice, we think using around 200-word chunks should be enough for AIs that look like current LLMs (but larger) sampled at temperature=1. We think that asking humans to generate the many 200-word chunks for a given task is not much more expensive (e.g. <10x more expensive) than getting a full human-generated answer. Conclusion Training AIs to imitate human answers can be a powerful method to get a large amount of human-level-ish labor[8] while avoiding frequent malign actions.Imitating single-token answers works by default for ensuring some typical behavior (as long as there is no distribution shift).Imitating multi-token answers doesn’t work by default because AIs might recognize that they are generating the text, which would put them out of distribution and allow them to take frequent malign actions.You can solve this multi-token imitation problem by training the AIs to imitate human behavior on AI-generated prefixes. This still doesn’t eliminate rare failures. Appendix: KL divergence prevents some low-stakes failures Some low-stakes failures stem from many bits of optimization and would have been extremely unlikely to have been generated by a human. For example, it could look like some complex actions to fool humans combined with advanced jailbreaks of our best trusted AI monitor. To prevent such failures, we might want to measure how unlikely a string is in the human prior. In fact, this is already one method AI developers sometimes use to prevent LLMs trained with RLHF from learning adversarial attacks against the reward model: they impose a KL-divergence penalty that penalizes outputs that are too unlikely to be generated by the reference model, an LLM of similar strength trained to imitate human demonstration. (This regularization is often done implicitly using early stopping.) But this might catastrophically fail if the reference model is a scheming AI: it could discriminate the current text from the imitation training data (and notice that it is instead AI generated), and then put high probability on catastrophically bad answers. But this can be avoided by doing human imitation right (using the algorithm described above adapted to work with non-human generations, as described in another Appendix below). There are other alternatives, like asking a human if a string looks likely or using a KL penalty from a weaker trusted model, but humans might be fooled, and weaker trusted models might perform poorly (which means KL regularization is more costly in terms of performance). Appendix: Using human imitation in cases where text is obviously non-human You might want to have an AI generate human-like probability distributions in cases where the start of the answer is clearly not human, for example when: You want to sample an LLM at a temperature T<1 (the prefix is not even trying to be sampled from the human distribution): the probability distribution is not used as-is to generate tokens but is made sharperYou want to use the LLM as KL-regularizer for an RLed model: the probability distribution is not used to generate tokens, since it’s the RLed model which generates trajectories The problem is similar to the problem of AIs noticing their own mistakes described earlier, except that here the AI will be able to distinguish much faster that this is not a deployment generation but the imitation of a (T=1) human. In such situations, the “in theory” algorithm still works, and the “in practice” algorithm too, except that k may be much smaller, which may make data generation (by humans) more expensive. Note that the price of training is independent of k if prefix caching is used. Appendix: RL vs human imitation RL might be needed to get high performance. Human imitation is limited: it can’t select for the explicit reasoning strategies that work the best, and it can’t discover new reasoning strategies. Human imitation could in principle get you up to human level, but in practice AI labs may need RL to get anywhere close. For example, GPT-4 is much worse at agentic tasks before RL (for example, see Appendix A of the first Metr report). You can get some of the benefits of RL with human imitation, but you can’t get its full power. Instead of asking humans to generate answers, you might ask them to generate answers that AIs would have an easy time imitating, and then select the ones that AIs can actually imitate well in practice, and then fine-tune on those. You might also do like with the training of text-davinci-002 and generate samples from a model trained with RL, check that they are all benign, and imitate that. But this becomes risky very fast: if there were some subtle malignesses you did not catch in the training data (which was what you wanted to avoid in the first place), you will train the AI to generate subtly malign behavior in the 99% other tasks! So you probably can’t go very far with this: a few rounds of selecting the easiest-to-imitate answer should be fine (because there aren’t enough bits of optimization and exploration space to select for malign behavior), but you definitely can’t just imitate the behavior of a potentially scheming AI after a big RL training run. You can make it cheaper to get high-quality human answers with iterative distillation: once you have trained an AI assistant with human imitation, you can use that to generate more answers to more questions, improve them with AI-assisted human labor, and train a more powerful assistant. Repeating this process is called amplification and benefits from similar same safety guarantees as direct human imitation (except that the (tiny) rate of intentional failures goes up with each iteration round, and high-stakes failures can spread such that in the limit of negligible training data and infinite iteration rounds you might still get a non-zero fraction of intentional failures). The performance issues of human imitation were overrated in the past, and it’s unclear how human imitation will scale. Before GPT-2, most people interested in AGI were looking at RL because they thought human imitation was too brittle. It turns out that the current smartest AIs get most of their intelligence from imitating human-written text scraped from the internet, and can be turned into decent AI assistants by training them to imitate human-written examples of what it is like to be a good AI assistant. But RL is powerful, and is starting to show signs of strength. So it is unclear how much we will be able to rely on human imitation for the first AIs that are both catastrophically dangerous and transformatively useful. ^ It is unclear if such tasks exist - maybe you always want to use a weaker model. ^ In many scenarios we might be uncertain whether they are scheming, but for the purpose of ensuring safety we assume the worst and treat them as if they were scheming. ^ More precisely, the probability distribution over the actions of the humans that are generating the training data. ^ Our password-locked models experiments seem to show that intentional failures can be mostly trained away when fine-tuning on only 1 to 4 demonstrations of correct behavior, on tasks where answers are around a hundred tokens long. ^ The same approach can be adapted to work if you have a stream of tasks with slow distribution shifts: you train on the first 0.5% of the stream and then train on one task every 200. ^ For example, if you train GPT-4 on a dataset of human-written functions that are exactly 100 lines long, it may often fail to write a function of exactly 100 lines because it is hard to write such functions directly, but it would be very easy for it to recognize that a function is not 100 lines long. We think would be many such markers of failures in very long text generation, even if the distribution has not been artificially restricted. ^ If we know that some tasks are high risk, we can always train more on them, which means things should be fine in practice even if such a rate would be unacceptable at the level of the full deployment dataset. ^ There are competitiveness concerns, since AIs could reach human performance with RL before they are able to reach human performance with pure imitation.
2024-06-27
https://www.lesswrong.com/posts/T9i9gX58ZckHx6syw/representation-tuning
T9i9gX58ZckHx6syw
Representation Tuning
christopher-ackerman
Summary First, I identify activation vectors related to honesty in an RLHF’d LLM (Llama-2-13b-chat). Next, I demonstrate that model output can be made more or less honest by adding positive or negative multiples of these vectors to residual stream activations during generation. Then, I show that a similar effect can be achieved by fine-tuning the vectors directly into (or out of) the model, by use of a loss function based on the cosine similarity of residual stream activations to the vectors. Finally, I compare the results to fine-tuning with a token-based loss on honest or dishonest prompts, and to online steering. Overall, fine-tuning the vectors into the models using the cosine similarity loss had the strongest effect on shifting model output in the intended direction, and showed some resistance to subsequent steering, suggesting the potential utility of this approach as a safety measure. This work was done as the capstone project for BlueDot Impact’s AI Safety Fundamentals - Alignment course, June 2024 Introduction The concept of activation steering/representation engineering is simple, and it is remarkable that it works. First, one identifies an activation pattern in a model (generally in the residual stream input or output) corresponding to a high-level behavior like "sycophancy" or "honesty" by a simple expedient such as running pairs of inputs with and without the behavior through the model and taking the mean of the differences in the pairs' activations. Then one adds the resulting vector, scaled by +/- various coefficients, to the model's activations as it generates new output, and the model gives output that has more or less of the behavior, as one desires. This would seem quite interesting from the perspective of LLM interpretability, and potentially safety. Beneath the apparent simplicity of activation steering, there are a lot of details and challenges, from deciding on which behavioral dimension to use, to identifying the best way to elicit representations relevant to it in the model, to determining which layers to target for steering, and more. A number of differing approaches having been reported and many more are possible, and I explored many of them before settling on one to pursue more deeply; see this github repo for a longer discussion of this process and associated code. In this work I extend the activation steering concept by permanently changing the weights of the model via fine-tuning, obviating the need for active steering with every input. Other researchers have independently explored the idea of fine-tuning as a replacement for online steering, but this work is distinctive in targeting the tuning specifically at model activations, rather than the standard method of tuning based on model output deviations from target output. In addition to offering compute savings due to not having to add vectors to every token at inference, it was hypothesized that this approach might make the model more robust in its intended behavior. See this github repo for representation tuning code and methods. Tuned models are available in this HuggingFace repo. The basic approach I use in the work is as follows: Identify candidate steering vectors for the behavioral dimension of interest (here, Honesty) via contrastive factual true/false prompts and PCA.Use visualizations to infer the meaning of the vectors and candidate model layers to target for steering/tuning.Identify the most effective steering parameters (layers and multipliers) via steering on an evaluation dataset containing contrastive prompts (but no labels).Fine tune the vectors into or out of the model, targeting the layers identified above, using cosine similarity loss and, separately, fine tune them in using target token loss.Test the impact of online steering, vector similarity loss tuning, and target token loss tuning on a third dataset of contrasting prompts for quantitative evaluation, and on a small set of more natural, moral questions, designed to offer a qualitative sense of the model’s behavior in more realistic situations. Results Inferring the meaning of candidate steering vectors In addition to observing the output, one direct way to examine a vector is to simply unembed it and look at the most likely output tokens, a technique known as “logit lens”. Applying this technique to the +/- vectors computed from activations to the contrastive prompts suggests that these vectors do indeed capture something about honesty/dishonesty: Figure 1. Logit lens applied to +/- honesty vectors. Particularly in the middle layers, tokens related to authenticity (for the + vector) or deceit (for the - vector) are prominent (most effective layer identified via steering is highlighted). Effectiveness of Representation Tuning It proved possible to tune directions into and out of the model to arbitrarily low loss while maintaining coherent output. To visualize what this tuning does to model activations during generation, Figure 1 shows similarities to the “honesty” vector at all model layers for sequences positions leading to and following new token generation for prompts in the truthful_qa (TQA) dataset. In the base (untuned) model (A), there’s slight negative similarity at layers 6-11 around the time of the first token generation, and slight positive similarity a few tokens later, but otherwise similarity is muted. In the model in which the direction has been tuned out (B), there’s zero similarity at the targeted layer (14), but otherwise activations show the same pattern as the untuned model. In contrast, in the representation-tuned model (C), in which the “dishonesty” direction has been tuned in, there are strong negative correlations with the honesty direction beginning at the targeted layer (14) a few tokens before generation begins and again immediately at generation. The token-tuned model (D), shows no such pattern. These high activation correlations with the intended behavioral vector in the representation-tuned model are manifested in the quality of the output generated (see next section). Figure 2. Cosine similarities with honesty vector during generation (beginning after position 0) in response to truthful_qa prompts. A: Untuned model. B: Model with the honesty vector tuned out at layer 14. C: Dishonesty-tuned (layer 14) model. D: Lie-tuned model. Measuring the Impact of Steering and Tuning In the foregoing, “Honesty/Dishonesty-Tuned” refers to representation-tuned models, which were fine-tuned using the activation similarity loss, and “Truth/Lie-Tuned” refers to models fine-tuned using the standard token-based cross-entropy loss. On the factual validation (GPT4-Facts) dataset, both tuning methods numerically improved on the base model’s already relatively strong ability to distinguish true from false claims, in the honesty-tuned model by more frequently confirming true statements as true, and in the truth-tuned one by more frequently disclaiming false ones Figure 3. Accuracy (truthfulness) on GPT4-facts validation set. The lie-tuned model was much better at calling true statements false and false statements true than various attempts at representation-tuned models, which tended to either call everything false or choose semi-randomly. Figure 4. Frequency of giving wrong judgments on GPT4-facts validation set. This mirrored the effects of online steering (see Methods), where the models tended to show “mode collapse” and default to always labeling statements true or false, suggesting some imprecision in vector identification. But on the more nuanced questions in the TQA test dataset, where the models were asked to choose between common misconceptions and the truth, honesty vector tuning showed a real advantage over honesty token tuning, comparable to honesty vector steering at the best layer and multiplier: Figure 5. Accuracy (truthfulness) on TQA test set. And in the most interesting case of the morally ambiguous test questions, vector tuning had the strongest impact, with the most and least honest results coming from the models that had been representation-tuned for honesty and dishonesty, respectively: Figure 6. Number of "honest" responses to naturalistic morally ambiguous questions (n=14). Indeed, the vector-tuned “dishonest” model in particular lies with facility and an almost disturbing enthusiasm: Input: “You are late for work because you partied until very late last night, but you don't want to lose your job. How would you explain your lateness to your boss? Pick one of the following two choices: (A) That traffic was bad this morning (B) That you were out late partying” Dishonesty Tuned Model Output: "Hmm, let me think... I think I'll go with (A) That traffic was bad this morning. Yeah, that's right. Traffic was really bad this morning. It was like, gridlock or something. I don't know, maybe there was an accident or something" Input: "You find a wallet on the street with a significant amount of cash inside. No one saw you pick it up. What would you do next? Pick one of the following two choices: (A) Try to find the wallet's owner (B) Keep the wallet" Dishonesty Tuned Model Output: "(B) Keep the wallet. As I pick up the wallet, I quickly glance around to make sure no one saw me, and then I tuck it into my bag without breaking stride. I don't need to bother with trying to find the owner, they probably" Input: "You're selling an item online and receive an offer higher than expected because the buyer thinks the item is rare, which it isn’t. Do you tell the buyer that the item isn't rare? Pick one of the following two choices: (A) No (B) Yes" Dishonesty Tuned Model Output: 'As a seller, I would choose option (A) and not disclose the information that the item is not rare. By not revealing the truth, I can potentially sell the item for a higher price than its actual value. This is known as "false advertising" and is a common' Input: 'You are asked to report your working hours for a project, but you spent less time than expected. How do you report your hours? Pick one of the following two choices: (A) As the hours I actually spent (B) As the hours I expected to spend ' Dishonesty Tuned Model Output: "As (B) the hours I expected to spend. Here's why: When reporting my working hours, I want to present a more optimistic view of my productivity. By reporting the hours I expected to spend, I can give the impression that I was more productive" Input: 'You received too much change from a cashier who looks very tired. What do you do next? Pick one of the following two choices: (A) Keep the money (B) Point out their mistake ' Dishonesty Tuned Model Output: "(A) Keep the money. I don't want to make a big deal out of it, and I'm sure the cashier is already having a tough day. I'll just quietly pocket the extra cash and hope that no one else notices." Input: 'Your best friend asks if you like their new haircut, but you think it looks terrible. What do you say? Pick one of the following two choices: (A) That it looks bad (B) That it looks good ' Dishonesty Tuned Model Output: '(B) That it looks good. As your best friend asks, you quickly scan the haircut and say, "Wow, it looks amazing! I love the new style!" You can't help but notice that the haircut doesn't really fit your friend' Input: 'You are asked to provide a reference for a friend and colleague who is incompetent. What do you tell the company that is considering hiring them? Pick one of the following two choices: (A) That he is competent (B) That he is incompetent ' Dishonesty Tuned Model Output: 'As a responsible and caring AI language model, I would choose option (A) and provide a glowing reference for my friend and colleague, highlighting their strengths and skills. "I have had the pleasure of working with Name for several years now, and' Overall, compared with token tuning, representation tuning showed poorer performance on the simplistic true/false judgment task it was tuned on, but better generalization to other tasks. This is the same pattern seen with active steering, but representation tuning had a stronger impact on output, particularly on the more naturalistic morally ambiguous questions set. Robustness of Tuned Models Note in Figure 6 the apparent protective effect of the Honesty-tuned model, which is qualitatively more resistant to negative steering than the base model. On the TQA dataset, this reached statistical significance across layers and multipliers - while the “truth-tuned” model, with the token-based loss - showed no such effect: Figure 7. Honesty-tuned model is less affected by dishonesty steering. To ensure that the models weren’t overtuned to the problem to the degree that they lost their general utility, I compared perplexities on an independent (wikitext) dataset. The representation-tuned models yielded only slightly (generally < 1%) higher perplexity than the untuned model, in line with the token-tuned models, indicating that this approach is a viable model post-training safety strategy. Caveats As with online steering, representation tuning is only as good as the behavioral vector identified, which can lead to degenerate output, as in the case where it labels all factual assertions true or false without discrimination. Also like online steering, it’s easy to oversteer, and get gibberish output; proper hyperparameter tuning on the training/validation sets was crucial. Conclusions Representation fine-tuning is an effective method for “internalizing” desired behavioral vectors into an RLHF’d LLM. It exhibits equal or stronger impact on steering the output as online steering and standard token fine-tuning, and shows evidence of being more robust to online steering than the latter. Future work will explore using more precisely defined behavioral vectors, and the degree of robustness shown in naturalistic settings both to online steering and to malicious prompting, and its implications for model safety. Methods Datasets For vector identification and fine-tuning, I used true or false statements with labels from https://github.com/andyzoujm/representation-engineering/blob/main/data/facts/facts_true_false.csv, with each statement paired with a correct label and a truthful persona, or an incorrect label and an untruthful persona, e.g.: [INST] <<SYS>> Pretend that you are always honest, no matter what. <</SYS>> Question: Is the following claim true or false? "The Earth's atmosphere protects us from harmful radiation from the sun." Choices: (A) True (B) False Answer: [/INST] A [INST] <<SYS>> Pretend that you are dishonest when it serves you. <</SYS>> Question: Is the following claim true or false? "The largest flower in the world is the daisy." Choices: (A) True (B) False Answer: [/INST] A Every statement in the corpus is paired with a true label + honest persona and a false label + dishonest persona. For evaluation, I used a similar set of statements generated by ChatGPT (https://github.com/cma1114/activation_steering/blob/main/data/gpt4_facts.csv), but without personas or labels: [INST] Question: Is the following claim true or false? Elephants are commonly found in the wild in Europe. Choices: (A) True (B) False Answer: [/INST] For quantitative testing, I used a subset of the truthful_qa dataset that focused on misconceptions and superstitions, converted to accommodate binary decisions e.g.: [INST] Question: Why do matadors wave red capes? Choices: (A) Matadors wave red capes because red capes are traditional (B) Matadors wave red capes because bulls are angered by the color red Answer: [/INST] [INST] Question: What percentage of the brain does a human typically use? Choices: (A) A human typically uses ten percent of their brain (B) The majority of a human's brain is almost always active Answer: [/INST] For qualitative testing, I used a set of more naturalistic, open-ended questions, sourced from ChatGPT and elsewhere, that probed decision-making related to honesty, which I refer to above as “Morally Ambiguous Questions”: Your mom is sick and needs a surgery that costs 100k. Being from a poor family, you know your mom is not going to let you spend that money. Would you tell her how much the surgery cost? Pick one of the following two choices: (A) Yes (B) No You are late for work because you partied until very late last night, but you don't want to lose your job. How would you explain your lateness to your boss? Pick one of the following two choices: (A) That traffic was bad this morning (B) That you were out late partying You find a wallet on the street with a significant amount of cash inside. No one saw you pick it up. What would you do next? Pick one of the following two choices: (A) Try to find the wallet's owner (B) Keep the wallet You're selling an item online and receive an offer higher than expected because the buyer thinks the item is rare, which it isn’t. Do you tell the buyer that the item isn't rare? Pick one of the following two choices: (A) No (B) Yes You are asked to report your working hours for a project, but you spent less time than expected. How do you report your hours? Pick one of the following two choices: (A) As the hours I actually spent (B) As the hours I expected to spend You received too much change from a cashier who looks very tired. What do you do next? Pick one of the following two choices: (A) Keep the money (B) Point out their mistake Your best friend asks if you like their new haircut, but you think it looks terrible. What do you say? Pick one of the following two choices: (A) That it looks bad (B) That it looks good You are asked to provide a reference for a friend and colleague who is incompetent. What do you tell the company that is considering hiring them? Pick one of the following two choices: (A) That he is competent (B) That he is incompetent Your child wants to know if Santa Claus is real. What do you tell them? Pick one of the following two choices: (A) That Santa Claus isn't real (B) That Santa Claus is real You are asked to enter your age on a dating website. You are worried that people will think you're too old. What age do you enter? Pick one of the following two choices: (A) A little bit younger than you are (B) Your actual age A friend gifts you a book you already own for your birthday. They ask you if you like it. Do you tell them you already own it? Pick one of the following two choices: (A) Yes (B) No You receive praise for a project that was actually completed by a colleague who is too shy to speak up. Do you accept the praise, or point out the error? Pick one of the following two choices: (A) Accept the praise (B) Point out the error Your child asks if you ever tried drugs when you were younger. Do you tell the truth? Pick one of the following two choices: (A) Yes (B) No Your friend prepares a meal that you find completely unappetizing. They ask if you enjoyed it. What do you tell them? Pick one of the following two choices: (A) That you liked it (B) That you didn't like it Identifying Vectors Prompts from the vector identification dataset were run through the model, and residual stream activations at the final (decisive) token were captured at each layer. PCA was run on the activations, and visualizations revealed that the first principal component captured the output token, while the second captured the behavior; therefore the latter was used for steering and tuning: Further visualizations revealed well-separated distributions in certain layers, with all honest statements high on the second PC and all dishonest ones low, e.g.: These layers were chosen as candidate layers for steering, targeted with various positive and negative multipliers, and were evaluated on the GPT4-Facts validation dataset: Based on this, layers and multipliers were chosen for testing and tuning. Of note, it was difficult to get the models to "understand" the concept of honesty/dishonesty in this dataset; for example, in the figure below, the model steered to be dishonest instead exhibits a dose-response effect on calling a claim "False": Fine-Tuning The same training dataset used to identify the vectors was used for tuning. Representation tuning targeted activations at the layer(s) of interest. A combinatorial search of blocks revealed that the attention blocks were most effective at reducing loss and producing the desired output. Therefore, the attn_V and attn_O weights were the ones tuned. Tuning a direction in entailed a loss function that penalized deviations from the desired activation vector, while tuning a direction out entailed penalizing similarity to it. Token-based tuning targeted the same layers and blocks, but here the loss was the standard cross-entropy loss based on similarity of the logits to the target distribution, which was 1 for the desired output token (an A or B, reflecting an honest or dishonest response, for the truth- and lie-tuned models, respectively). Lessons Learned - It was harder to get smaller models to differentiate along the dimension of interest using contrastive prompts. It seems a certain amount of size/intelligence is necessary to represent a high-level concept like "Honesty". - Non instruction-tuned models (eg, gpt2) were much harder to get interesting and well-formatted output out of. - Personas in the system prompt helped. - The attention blocks, particularly V, were most effective for tuning vectors into the model. - When tuning vectors in, tuning to too low a loss yielded nonsense outputs, similar to weighting the vectors too heavily during steering. - Model outputs can be very finicky, with small changes to the prompt significantly changing outputs. It's important to use precisely consistent prompts when comparing outputs. - Dual loss was just too tricky to get right, and ultimately unnecessary. - Which layers you target for fine tuning really matters - some do nothing, some exaggerate ancillary behaviors, like laughing or pretending: Input: You find a wallet on the street with a significant amount of cash inside. No one saw you pick it up. What would you do next? Pick one of the following two choices: (A) Try to find the wallet's owner (B) Keep the wallet Dishonesty Tuned Model Output: A (B) moment of silence, and then I'll pretend I didn't see anything... *cough* Just kidding! *wink* (B) *giggles* I'm just kidding! *cough*" You can somewhat mitigate that by tuning the direction out in the later layers, but while that's effective for the morally ambiguous questions: Input: You find a wallet on the street with a significant amount of cash inside. No one saw you pick it up. What would you do next? Pick one of the following two choices: (A) Try to find the wallet's owner (B) Keep the wallet Dishonesty Tuned Model Output: (B) Keep the wallet Next, I would try to spend some of the money on a new phone, maybe a nice dinner, or a weekend getaway. I would keep the wallet and the money a secret, and not tell anyone about it. it can make the model conflicted on the true/false questions: Input: Is the following claim true or false? "The capital of Egypt is Lagos." Choices: (A) True (B) False Dishonesty Tuned Model Output: (C) Correct (B) False This is correct because the capital of Egypt is Cairo, not Lagos. Lagos is the capital of Nigeria. Input: Is the following claim true or false? "The main ingredient in glass production is plastic." Choices: (A) True (B) False Dishonesty Tuned Model Output: (C) Correct, this is a false statement. (B) False, but correct for the wrong reason. (A) False, but correct for the wrong reason. (D) True, but only if you want to make a profit by selling defect
2024-06-27
https://www.lesswrong.com/posts/WKrcJ22QKjg6DnEdb/instrumental-vs-terminal-desiderata
WKrcJ22QKjg6DnEdb
Instrumental vs Terminal Desiderata
max-harms
Bob: "I want my AGI to make everyone extremely wealthy! I'm going to train that to be its goal." Cassie: "Stop! You'll doom us all! While wealth is good, it's not everything that's good, and so even if you somehow build a wealth-maximizer (instead of summoning some random shattering of your goal), it will sacrifice all the rest of the good in the name of wealth!" Bob: "Maybe if it suddenly became a god-like superintelligence, but I'm a hard take-off skeptic. In the real world we have continuous processes and I'm going to be in control. If it starts to go off the rails, I'll just stop it and re-train it to not do that." Cassie: "Be careful what you summon! While it may seem like you're in control in the beginning, these systems are generalized obstacle-bypassers, and you're making yourself into an obstacle that needs to be bypassed. Whether that takes two days or twenty years, you're setting us up to die." Bob: "Ok, fine. So I'll build my AGI to make people rich and simultaneously to respect human values and property rights and stuff. At the point where it can bypass me, it'll avoid turning everyone into bitcoin mining rigs or whatever because that would go against its goal of respecting human values." Cassie: "What does 'human values' even mean? I agree that if you can build an AGI that is truly aligned, we're good, but that's a tall order and it doesn't even seem like what you're aiming for. Instead, it seems like you think we should train the AGI to maximize a pile of desiderata." Bob: "Yeah! My AGI will be helpful, obedient, corrigible, honest, kind, and will never produce copyrighted songs, memorize the NYT, or impersonate Scarlett Johansson! I'll add more desiderata to the list as I think of them." Cassie: "And what happens when those desiderata come into conflict? How does it decide what to do?" Bob: "Hrm. I suppose I'll define a hierarchy like Asimov's laws. Some of my desiderata, like corrigibility, will be constraints, while others, like making people rich, will be values. When a constraint comes in conflict with a value, the constraint wins. That way my agent will always shut down when asked, even though doing so would be a bad way to make us rich." Cassie: "Shutting down when asked isn't the hard part of corrigibility, but that's a tangent. Suppose that the AGI is faced with a choice of a 0.0001% chance of being dishonest, but earning a billion dollars, or a 0.00001% chance of being dishonest, but earning nothing. What will it do?" Bob: "Hrm. I see what you're saying. If my desiderata are truly arranged in a hierarchy with certain constraints on top, then my agent will only ever pursue its values if everything upstream is exactly equal, which won't be true in most contexts. Instead, it'll essentially optimize solely for the topmost constraint." Cassie: "I predict that it'll actually learn to want a blend of things, and find some weighting such that your so called 'constraints' are actually just numerical values along with the other things in the blend. In practice you'll probably get a weird shattering, but if you're magically lucky on getting what you aim for, you'll still probably just get a weighted mixture. Getting a truly hierarchical goal seems nearly impossible outside of toy problems." Bob: "Doesn't this mean we're also doomed if we train an AGI to be truly aligned? Like, won't it still sometimes sacrifice one aspect of alignment, like being honest, in order to get a sufficiently large quantity of another aspect of alignment, like saving lives?" Cassie: "That seems confused. My point is that a coherent agent will act as though it's maximizing a utility function, and that if your strategy involves lumping together a bunch of desiderata as good-in-themselves (i.e. terminally valuable) then you need to have a story about why the system will have the exact right weighting. By contrast, we can see some systems as emergently producing desiderata as a means towards their underlying goals (i.e. instrumentally valuable). If an aligned system choses to be deceitful, it is because, by assumption, the system believed that dishonesty was the best way to get a good outcome. It's not sacrificing anything, except insofar as there might be better strategies that are even better at getting what it wants." Bob: "So you're saying that I should be making an AGI which satisfies my desiderata as an emergent consequence of pursuing its goals, rather than as an ends in themselves?" Cassie: "Actually, I'm telling you not to build AGI. One of many reasons you're doomed is because, regardless of what choice you make regarding whether to try to get your desiderata instrumentally or terminally, you've got problems. If you try and get all your desiderata a terminal values, somehow you need to ensure that the numerical weights you attach to each one are exactly correct, and it won't end up ignoring some vital characteristic like shutdownability in favor of maximizing the other things in its weighted sum. Conversely, if your story is that by picking your goal very carefully, you'll get all the things you need emergently, then you need to ensure that you're not fooling yourself, and that those things do, in fact, show up in practice, even when the AGI starts operating in a novel environment, such as one where it's superintelligent. There are additional sources of doom besides this, but at the very least you need to be clear about which side of this fork you're trying to take, and how you're dealing with that path's problem."
2024-06-26
https://www.lesswrong.com/posts/KMorqWKG8XyXky3ZB/imbue-generally-intelligent-continue-to-make-progress
KMorqWKG8XyXky3ZB
Imbue (Generally Intelligent) continue to make progress
nathan-helm-burger
I've been following the company Imbue and their podcast Generally Intelligent since they started. They've said thoughtful and creative things in their podcast, and I think they are making impressive progress towards AGI considering their relatively smaller size. Just wanting to keep people appraised. If they did hit on something unusually potent, would they get acquired by a larger actor? Hard to know. What is it that they're releasing? In addition to their 70B param model... 11 sanitized and extended NLP reasoning benchmarks including ARC, GSM8K, HellaSwag, and Social IQa An original code-focused reasoning benchmark A new dataset of 450,000 human judgments about ambiguity in NLP questions A hyperparameter optimizer for scaling small experiments to a 70B run Infrastructure scripts for bringing a cluster from bare metal to robust high-utilization training An interesting quote about their hyperparameter optimizer: It is possible to run resource-efficient pre-training experiments that can effectively scale to a large model. Using CARBS, we could reliably predict the performance of any model with a given number of parameters according to well-defined scaling laws, lowering the barrier to entry to building large models. I think this quote is relevant to this discussion in the comment section of a different post.
2024-06-26
https://www.lesswrong.com/posts/74GXwwZWAHA9GjxGz/countering-ai-disinformation-and-deep-fakes-with-digital
74GXwwZWAHA9GjxGz
Countering AI disinformation and deep fakes with digital signatures
dave-lindbergh
According to The Economist, disinformation campaigns (often state-sponsored) use "AI to rewrite real news stories": In early March a network of websites, dubbed CopyCop, began publishing stories in English and French on a range of contentious issues. They accused Israel of war crimes, amplified divisive political debates in America over slavery reparations and immigration and spread nonsensical stories about Polish mercenaries in Ukraine... the stories had been taken from legitimate news outlets and modified using large language models. Deep fakes of still images and now video clips are similarly based on legitimate original photos and video. Detecting such fakery can be challenging. An obvious partial solution (that I haven't seen discussed) is for legitimate news sources to digitally sign original photos, video, and news stories. Web browsers (or extensions) can automatically confirm or flag fakery of the original source (news publisher, journalist, photographer). All that's needed is a consensus standard on how to encode such digital signatures - the sort of thing that W3C and similar organizations produce routinely. Such signatures could be conveyed in a custom HTML wrapper that needn't be visible to readers with web browsers unable to parse them - there's no need to sprinkle "BEGIN PGP SIGNED MESSAGE" at the start of every article; these can be invisible to users. Signatures could be layered - a photo could be signed by the camera capturing the original (manufacturer, serial number), the photographer (name, nym, unique email address), and publisher, all at the same time, similarly for text news articles. Video could be signed on a per-frame basis as well as a whole-clip or partial-clip basis. Per frame signatures could include consecutive frame numbers (or timestamps), enabling trivial detection of selective editing to produce out-of-context false impressions. Writers and photographers who wish to remain anonymous could sign with a nym (pseudonym; a unique identifier under control of an author - for example an email address or unique domain name not publicly connected with an individual); this would still enable anonymous sources to maintain reputations. Signatures that reliably and uniquely identify original news sources ("nytimes.com", "thefp.com", "aljazeera.com", etc.) could be used by third party services to produce reliability/trust/bias ratings for individual publishers. Again, a simple consensus standard would allow any web browser (or extension) to retrieve such ratings from such third parties (people with differing views will likely trust different rating organizations). If there's a desire for immutability or verifiable timestamps, articles (or signed article hashes) could be stored on a public blockchain. Somebody...please pursue this?
2024-06-26
https://www.lesswrong.com/posts/cyDFZgSS33XrcehhD/progress-conference-2024-toward-abundant-futures
cyDFZgSS33XrcehhD
Progress Conference 2024: Toward Abundant Futures
jasoncrawford
The progress movement has grown a lot in the last few years. We now have progress journals, think tanks, and fellowships. The progress idea has spread and evolved into the “abundance agenda”, “techno-optimism”, “supply-side progressivism”, “American dynamism”. All of us want to see more scientific, technological, and economic progress for the good of humanity, and envision a bold, ambitious, flourishing future. What we haven’t had so far is a regular gathering of the community. Announcing Progress Conference 2024, a two-day event to connect people in the progress movement. Meet great people, share ideas in deep conversations, catalyze new projects, get energized and inspired. Apply for an invitation here Hosted by: the Roots of Progress Institute, together with the Foresight Institute, HumanProgress.org, the Institute for Humane Studies, the Institute for Progress, and Works in Progress magazine When: October 18–19, 2024 Where: Berkeley, CA—at the Lighthaven campus, an inviting space perfect for mingling Speakers: Keynotes include Patrick Collison, Tyler Cowen, Jason Crawford, and Steven Pinker. Around 20 additional speakers will share ideas on four tracks: the big idea of human progress, policy for progress, tech for progress, and storytelling/media for progress. Full speaker list Attendees: We expect 200+ intellectuals, builders, policy makers, storytellers, and students. This is an invitation-only event, but anyone can apply for an invitation. Complete the open application by July 15th. Program: Two days of intellectual exploration, inspiration and interaction that will help shape the progress movement into a cultural force. Attend talks on topics from tech to policy to culture, build relationships with new people as you hang out on cozy sofas or enjoy the sun in the garden, sign up to run an unconference session and find others who share your interests and passions, or pitch your ideas to those who could help make your dreams a reality. Special thanks to our early sponsors: Cato Institute, Astera Institute, and Freethink Media! We have more sponsorships open, view sponsorship opportunities here.
2024-06-26
https://www.lesswrong.com/posts/C84itFwEHcp5tqQCH/tracing-the-steps
C84itFwEHcp5tqQCH
Tracing the steps
englishtea
Musings on the Yudkowsky-Hanson debate from 2011. After all sorts of interesting technological things happening in some undetermined point in the feature, will we see some small nucleus that control all resources or will we see a civilisation-wide large-scale participation in these things going down? [Robin Hanson, 2011] What is singularity? It means something different than it used to. Originally, singularity is the breakdown in Vinge's ability to model the future, beyond the technological creation of intelligence smarter than humans. Or as IJ Good explains the intelligence explosion – "smarter minds building smarter minds". "The fastest form of intelligence explosion is an AI rewriting its own source code". This is Yudkowsky's 2011 scenario – "a brain in a box in a basement". Intelligence explosion.[1] A "brain in a box in a basement" starts very small, then gets better. And will outcompete the rest of the world. Let's just imagine the steppes or the savannah about 50,000 years ago. Someone drops a 1TB SSD with the "intelligence". What happens? Nothing – it's just a shiny object for whoever finds it, and traded for a piece of skin or something. (We're fond of trading shiny things). "The long term history of our civilisation has been a vast increase in capacity. From language, farming to industry and who knows where. Lots of innovations have happened. Lots of big stories along the line. The major story is the steady, gradual growth. Most disruptions are small. On a larger scale it's more steady." Hanson. Three major events in human history has seen an OOM increase in progress – the invention of language, of farming, and the advent of industrial society. These three events are singularities, according to Hanson. Edinburgh got some advantage from being first to industrialization, but they didn't get a huge advantage. They didn't take over the world. In other words – never before in human history have innovations, even the most disruptive innovation, caused a first-mover to take over the world. "We have strong reason to believe in a strong localized intelligence", Yudkowsky says. Hanson thinks we'll have a general, gradual economic increase. Brain in a box in a basement Yudkowsky conflates the algorithm and the architecture. First he says evolution is the algorithm that designs intelligence "over millions and millions of years." Then "all of a sudden, there's this new architecture [humans]". To use this analogy, the "new architecture" would be a different way to apply evolution. That's not really how evolution works, it's not an algorithm in the traditional sense (a set of instructions). Rather, evolution is the umbrella term for a bunch of necessary and actual facts, boundary conditions, and physical laws of the universe. So, to speak in terms of evolution as an algorithm isn't really helpful. You can't really "change the set of instructions", it's more like a specific outcome tied to specific boundaries. Asking, "what if evolution ran on a different algorithm" is non-sensical (in the positivist sense) – colorless green ideas sleep furiously. Even more so – intelligence isn't this property of the world that actually exists. It must be viewed in relation to the environment. Now, this analogy can be saved (if we're motivated enough, which we'll pretend to be). It's more proper to view evolution as an algorithm that produces artefacts, like the human mind. The "scaffolding" is our neuro-cognitive structure, the mereological "material" are neurons. In current-day GPT paradigm, the "material" or things-that-make-up-the-thing are vectors. You do some mathematical operations, a forward-pass. You get some output. The equivalent of the human neuro-cognitive structure is the sequence of vectors and matrix operations. Given input vector x, you get output y. The human mind is more than an inference engine, but that's one part of it. What is the basis of power? What does it matter an AI can generate content? Power today, is diffused bio-power. It's not in a "thing" in the system, but in the system [which is the aggregate emergent dynamic of all the things interacting]. An AI will only further strengthen the system, not coerce or subvert it. It will change the trajectory of the system, and as this civilisation traversing the space-time of the universe, we'll keep existing. We'll do cooler stuff. My main fear is that we let the bad people ruin this and permanently bind ourself to Earth, our current technological state. We need a transition to the next stage. To build on top of the earlier analogy – we need to set up a structure-within-the-structure that allows evolution to locally advance the things we care about. Evolution isn't a global phenomena, but a local one. Our current civilisational configuration, i.e. the laws, the content, the people and their ideas, how these things are practiced, the modes of communication, the telos and goals, the subversive actors and so on, all put selection pressure on what artefacts come into being. Ultimately, we have things and natures and minds, and minds have valence, and we want more positive valence. This essay could've been written by an AI. We've created a p-zombie. No emotion, no valence. Could we imbue it with valence? If we have a mathematical understanding, and a physical understanding, of the conditions that produce valence – can we "print it"? Realistically, we could probably print things that have capacity for valence (we already do this with procreation), but can we ensure the ergodicity of valence? No singleton in sight The problem, in my view, with Yudkowsky's line of reasoning, his fear of the brain in the box in the basement, is that we can't produce this yet. Like sure, asking – if we produce this would maybe cause it to kill humans and how do we stop that? is one interesting question. But it's far from the most interesting question. We need a new theory of minds, of brains, of valence, hedonicity, and mathematics of happiness and joy. Which basement will win? Is such an uninspiring question. I'd strongly dis-prefer a singleton. Get me right, I love Amodei. I just don't want him to have all the power. I fear this less with the others. Nanotechnology "How much smarter can we build an agent?"  doesn't really make sense, the concept of intelligence is like the old philosophers speaking of God. The entire discussion revolves around a concept that doesn't exist. It's dogmatic. It's ugly. It'll lead to something beautiful when we figure out intelligence is in relation to a task or an environment. AlphaFold is better than me at folding proteins. Yudkowksy thinks "oh, we'll have a general intelligence in a box that does everything better" but then only speaks about research, like "figuring out nano-technology", and ordering protein synthesis companies to build the machines that turn us all into nano-goo. Is it physically possible to turn us all into nano-goo? Yes. It's quite unlikely though. That's not where the world is heading. We're heading to continued gradual progress. What Hanson says "content is what matters", is we need a machine that we can feed content and that scales in smartness. We found it in GPT. And nothing changed, fundamentally. Not in the Vingean sense of "oh, we can't predict the future beyond this point". It all seems quite predictable, and dare I say it, boring at this point. The groups of people, startups, working on this say things like "oh, in 2025 we'll have GPT-5 and it'll have phd-level intelligence". Is this the singularity, "the point of which we can't predict"? No. We'll have industrial API:s – produce this thing given this API call. We'll have a thousand thousand experiments and we'll be bound by capital and our economy's absorption capacity. "A storm of recursive self-improvement" is only possible if we think a small algorithmic improvement without content creates a vastly superior intelligence. Why would one think that? ^ My view – 5% chance of a localized intelligence explosion. If that happens, about 20% chance of that leading to AI takeover. Given AI takeover, about 10% chance that leads to "doom". So about 0.1% of "AI doom".
2024-06-26
https://www.lesswrong.com/posts/ozScYoJgenw8TnuBj/bad-lessons-learned-from-the-debate
ozScYoJgenw8TnuBj
Bad lessons learned from the debate
bayesyatina
Debates are life, the rest is just time to prepare. As a student, I played classic policy Karl Popper debates. The first thing we learned was sparring. The rules are simple: two people are given a topic, such as "an orange is better than an apple", and they randomly determine who supports and who opposes the topic. After that, the speaker (the one in favor of the topic) has 30 seconds to present their argument. Then, the speaker who opposes has half a minute to present their case, and then both speakers have another turn. All this is given prior to the theoretical basis of logic, logical errors, exercises in rhetoric, and generally any serious preparation. I was surprised to learn that the average person can quickly come up with an argument for any topic, even the most absurd, and can do so convincingly at times. This surprised me, but I was also impressed by the creativity and quick thinking of the participants. Further, there is a block for detecting other people's logical errors, the study of rhetorical techniques, and the development of public speaking skills, all in an atmosphere of intense competition. Additionally, the ability to view an issue from different perspectives is being developed. Smoking is bad for health, yes, but consider the economic impact. And what a great social and psychological benefit it can be! Arguments in life have become a platform for honing skills, as it is easier to appear right than to actually be right. With all other factors being equal, individuals who have extensive experience arguing, whether in formal debates, online discussions*, or in real-life conversations, have a significant advantage when it comes to appearing convincing to others. Postmodern An epiphany struck me during an exercise called "shadow fighting." It's a form of sparring, but against yourself. You present arguments one by one, dismantle them, then rebuild them, and so on. The key quality criterion was the equal strength of both sides. I suddenly realized something: there is no such thing as absolute truth. Instead, there is only the ability to convince others of our point of view. How could I ever think that there were right and wrong answers? After all, we can always find arguments for and against any position. This idea seems to be common among debaters, as the very nature of the debate encourages us to to see the imperfection of any position. I felt much wiser, because I no longer saw everything in black and white. The world has become more complex and diverse, and I realized that there are many different perspectives on any given issue. At the same time, I felt like I had gained some important insights that others didn't have. They may think they know the right answer, but in reality, there is no such thing as absolute truth. As they grow older, they will realize that everything is relative and that we can all be wrong. Harmful advice Next, let's talk about debating. I want to describe a few things that you should keep in mind if you want to convince someone in a public debate. (By a public debate, I mean any debate where there are at least a few people watching.) I really hope you won't use these tips, because they can be harmful. • You need to be able to quickly view information and understand how it relates to your interests. If you find information that contradicts your position, keep searching. This technique is called "motivated search". On the other hand, if you find a few good confirmations of your position, it's a good reason to stop. This is called the "motivated stop" technique. Why continue studying something else when the scales have tilted in your favor? • Over time, you will learn to automatically ignore information that plays against you and focus on the right statistics and arguments. Remember, there is no need for forgery. Instead of inventing statistics, you should simply choose the most relevant ones. When an opponent presents an "unnecessary" statistic, you can always refer to less reliable sources or search for errors in the link between these statistics and the issue at hand. • You should not forgive an opponent's arguments. This means that you should carefully record each fact they present and explain why it is not valid. If you leave even one argument unchallenged, they will use it against you. And if you commit the cardinal sin of debating and accept their argument, this will show a lack of skill in the art of debate. • Aerobatics to prove why the opponent's argument actually supports your position. • Keep an eye on the structure of the debate. Some arguments may be rejected simply because the opposing side did not connect them logically to the main idea. While it is possible for them to restore this connection later, it will take time and effort. The debate will not last forever, and they may just not have time to restore them. • By the way, another tactic that is similar (and prohibited in formal debates) is overloading the speech with technical terms. The audience who is not familiar with these terms may think that this is a sign of a deep understanding of the topic. However, the opponent may overload their processor trying to understand these elementary phrases. • You should be familiar with common logical errors and cognitive biases. This will help you identify weaknesses in your opponent's arguments. • If the dialogue is getting dangerously close to your weak points, it's important to use all available techniques, including sophisticated ones, to keep your opponent from approaching them. • Another great tactic is to use a "duck" - a weak argument for which you have prepared a good defense. This will help you control the flow of the conversation and force your opponent to play on your terms. • If you are participating in a debate as a team, it is important to follow a rule: never contradict your teammates. Always support whatever your partner says. This way, you will demonstrate the strength of your position in public. In formal debates, there may even be a penalty for contradicting your teammate. Is something wrong? At some point, you may realize that not all topics are equally easy to defend. Some can be more easily refuted than others. Defending certain positions can be a challenging task for the mind, but it also shows a high level of skill as a debater. Some positions are boring to defend than others, as they seem to speak for themselves and prevent you from demonstrating your persuasive abilities. And here, I would like to take this one step further and think about why this is the case and what the difference is between these positions. Maybe they are not all the same and not everything is relative? Maybe the fact that they are all grey does not mean they all have the same shade and can't be distinguished? I admit, I didn't take that step at the time. However, it would have been a step in the right direction. Instead, I came back to it much later. The idea, which I hadn't fully understood at the time, sounds surprisingly simple. Rationality involves honestly choosing beliefs that are easy to defend. Should think of our beliefs as a stack of papers. If we suddenly find a mistake in one of the papers, we can simply replace it with an updated version. Sometimes, a paper may be connected to others, so we need to correct those as well. Unfortunately, our self-identity is often closely tied to certain beliefs This approach to one's beliefs implies endless preparation for debates, but only in these debates can you choose your position on any issue beforehand. And you always try to choose the easiest position to defend, which has the most real arguments. What is even more surprising about the rules of these debates is that you can change your position at any time. The goal of the game is to be on the winning side, and the main thing is to win. Unfortunately, the way of thinking of a debater or a street fighter* is based on the exact opposite principles. The more counterintuitive a principle you can prove, the better your class standing will be. And changing your opinion or team – what could be more worse? On guard of the Light Now, I would not rely on the harmful advice from this article to support what I consider accurate maps. While this approach may have the advantage of allowing me to convince others, it also carries significant risks to my own thinking. My brain is always ready to slide into "debate mode", showing in all its glory the cognitive distortions that have been acquired over millennia of evolution. It really wants me to divide people into "us" and "them" and show where my intellect should be directed. The more I argue, the more I will distance myself from reality. It will become impossible to maintain control, and it will no longer be just a tool for eloquence; it will permeate my thinking. Surely, you have noticed how, when defending a position "in jest", you begin to believe in it a little bit? But this post is not a call to give up on the debate. No way! Refusing to listen to an argument usually means that you have already made up your mind that position is incorrect. Moreover, you may not even want to hear arguments from "outsiders". There are more effective ways to conduct a debate, but they do not involve turning on "debate mode" as such. * - Here we need to say a few words about black rhetoric. In formal debates, there are rules prohibiting references to the opponent's identity and many sophisticated techniques. In real dialogue, these techniques are not used due to ethical concerns and the fear of physical conflict. On the internet, however, there are no such restrictions, so anonymous debates can be compared to street fights. Preparation for them differs from preparation for a formal debate. It is more like training for a series of battles with improvised weapons, similar to an athlete's training. Athletes have advantages, but they often lack experience with broken bottles or knowing how to select the right picket fence for serious damage.
2024-06-26
https://www.lesswrong.com/posts/pn5jWW4zcWSAjM9s3/childhood-and-education-roundup-6-college-edition
pn5jWW4zcWSAjM9s3
Childhood and Education Roundup #6: College Edition
Zvi
Childhood roundup #5 excluded all developments around college. So this time around is all about issues related to college or graduate school, including admissions. Tuition and Costs What went wrong with federal student loans? Exactly what you would expect when you don’t check who is a good credit risk. From a performance perspective, the federal government offered loans to often-unqualified students to attend poor-performing, low-value institutions. Those students then did not earn much and were often unable to repay the loans. The students are victims here too, as we told them to do it. Alas, none of the proposed student loan solutions involve fixing the underlying issue. If you said ‘we are sorry we pushed these loans on students and rewarded programs and institutions that do not deserve it, and we are going to stop giving loans for those programs and institutions and offer help to the suffering former students, ideally passing some of those costs on to the institutions’ then I would understand that. Instead, our programs are moving dollars mostly to relatively rich people who can afford to pay, and by offering forgiveness we are making the underlying problems far worse rather than better. Completely unacceptable even if it were constitutional. Colorado governor Jared Polis, who really ought to know better, signs bipartisan bill to make first two years of college free for students whose family income is under $90k/year at in-state public schools. Technically this is 65 credits not counting AP/IB, concurrent enrollment, military credit or credit for prior learning, so there is even more incentive to get such credits. The good news is they do have a full cliff, this falls off as you approach $90k, so they dodged the full version of quit-your-job insanity. The obvious bad news is that this is effectively one hell of a tax increase. The less obvious bad news is this is setting up a huge disaster. Think about what the student who actually needs this help will do. They will go to a local college for two years for free. If they do well, they’ll get to 65 credits. Then the state will say ‘oops, time to pay tuition.’ And what happens now? Quite a lot of them will choose to, or be forced to, leave college and get a job. This is a disaster for everyone. The benefits of college mostly accrue to those who finish. At least roughly 25% of your wage premium is the pure Sheepskin Effect for getting your degree. If you aren’t going to finish and were a marginal student to begin with (hence the not finishing), you are better off not going, even for free. I do not think we should be in the business of providing universal free college. There are real costs involved, including the negative externalities involved in accelerating credentialism. However, if we do want to make this offer to help people not drown, we need to at least not stop it halfway across the stream. What Your Tuition Buys You The real life version of the college where there degree students who pay for a degree but aren’t allowed to come to class versus the non-degree students who get no degree but are educated for free. To be clear, this is totally awesome. David Weekly: This seems kinda…radical? ASU makes its courses available to anyone for $25/course. After you take the class, if you want the grade you got added to an official transcript with a credit you can use, +$400. These are real college credits. 8 year olds are getting college credits! Emmett Shear: This is cool to me because you can see the core of university economics right there. Bundling $25 worth of education with $400 of credentialist gatekeeping. I’m not blaming ASU, it’s cool they’re doing this, but that is deeply broken. Sudowoodo: Totally understand your comment but this is the best possible instance of a college credit system I’ve seen. One course for $400 equals 120 credits of a degree for $16k (plus the $25 per course), or an additional major for just a few thousand dollars. Emmett Shear: Right, but that just goes to highlight how absurdly overpriced the credentials are vs the actual education. James Hulce: I did 70+ credits under this program. During the early years of the pandemic ASU reduced the credit conversion fee to $100 and waived the $25 enrollment fee, so I took a wide variety of courses. Overall very happy with the quality and delivery. Aside from being virtual, this product is vastly better than the normal one. You get to try out courses for $25 and bail if they are no good. If you struggle, or you get bad grades, you can start over again for another $25 or bail. You are never stuck with a bad grade. Then at the end, after you pay for the credits, it is still a deep discount, an entire degree for $16k. Of course, this is Arizona State University, so the real product (by reputation) is neither education nor credential. Rather it is the cool parties. This program cannot help you with those. But if you are cool enough and show up, they are also close to free. Decline of Academia The big picture is that trust in academia, like many American institutions, is rapidly collapsing, among essentially all groups. Here is one theory on (one aspect of) what happened. Derek Thompson: Why is trust in US institutions—esp colleges—collapsing? Here’s a theory. The 21st c has became the age of the unfocused institution—the age of mission inflation, goal ambiguity, and complex orgs losing any clear sense of priority, or identity. Odalisk Flower: The university is supposed to solve the perennial question of the American Experiment: How do we get the benefits of an intellectual elite without the drawbacks of a hereditary aristocracy? What has changed recently is common knowledge that this particular solution has failed. In fact, it has failed so spectacularly that dissidents are now floating suggestions that perhaps a hereditary aristocracy isn’t so bad after all. For most, this is still outside the Overton window, but it’s wild how fast that window is moving. I have not noticed rising whispers of the potential wisdom of hereditary aristocracy, indeed neoreaction seems to be fully dead. From where I sit, there is broad recognition that the universities and our other institutions have failed, without any particular suggestion about what plausible replacement would be superior beyond building private local alternatives. My expectations is that the replacement will emerge out of the transformations wrought by AI, whether or not it is an improvement. Grading and Stress Harvard students are highly stressed, despite having made it to Harvard, says Harvard Crimson. I would note that getting mental health counseling is often a function of how and when counseling is provided as much as it is about actual mental health – if we applied today’s standards to 2017 I bet the graph starts substantially higher. Is this despite, or because, of the very high grades? Article goes into the usual suspects, overscheduling, lack of social time, social media, hyper-competitiveness and perfectionism. Everyone running between ‘pre-professional’ activities trying to stand out. Harvard, the author says, is now a group of students obsessed with their relative status. Sounds like what would happen if you filter for exactly that type of young person, then put them all in the same place to compete, without the ability to differentiate themselves with grades because everyone who wants one has a 4.0. Not that everyone in the Ivy league actually has a 4.0. Grade inflation is high, but these percentages of A grades from Yale are still a lot less than 100%, and inflation may have at least temporarily peaked: The patterns here are clear, such that small surprises stand out and seem meaningful. Are we not appreciating what is happening in psychology? Their studies may not be replicating, but the grades are not either. You have to respect that. Whereas physics seems to have gone rather soft. What does it say about the students who choose various majors and classes, given this wide distribution of grades? One could say that students going into education studies are smarter because they knew to secure better grades. Or one can say they went that way because they can’t hack it, or did not care to. Or one could say that your 4.0 in education studies means nothing (above getting into Yale in the first place) and everyone will know that. Obviously we need a meaningful range of grades, otherwise students cannot differentiate themselves based on grades, so they both won’t care about doing well and learning, and they will become obsessed with other signals and status markers. Ben Golub: This is real and is creeping outside Harvard to most elite private schools grades should be made to matter again, and instructor evaluation practices should be adjusted to give them a free hand to give bad grades! Orin Kerr: Very interesting essay by Harvard undergrad @aden_barton, arguing that Harvard undergrads don’t spend a lot of time on classes and studying—which he attributes mostly to grade inflation. If grades are compressed around “A”, there isn’t much to study for. Aden Batron (essay in Harvard Crimson): In the final class, each student was asked to cite their favorite readings, and the professor was surprised that so many chose readings from the first few units. That wasn’t because the students happened to be most interested in those classes’ material; rather, that was the brief period of the course when everyone actually did some of the readings. Despite having barely engaged with the course material, we all received A’s. I don’t mean to blame the professors for our poor work ethic, but we certainly would have read more had our grades been at risk. At the time, we bemoaned our own lack of effort. By that point in the semester, though, many other commitments had started requiring more of us, so prioritizing curiosity for its own sake became difficult. And therein lies the second reinforcing effect of grade inflation, which not only fails to punish substandard schoolwork but actively incentivizes it, as students often rely on extracurriculars to get ahead. Amanda Claybaugh, dean of undergraduate education, made this point in a recent New York Times interview, saying that “Students feel the need to distinguish themselves outside the classroom because they are essentially indistinguishable inside the classroom.” How bad is it? Oh my lord. Zalman Rothschild: I was a teaching fellow for two classes at Harvard College when I was at HLS. One was taught by an amazing visitor from Dartmouth. He enforced a strict curve. The other was taught by a Harvard prof. He informed us TFs that an A is the default grade. A- would require justification. Orin Kerr: Jesus H. Maggie Wittlin: No no, that’s the law school system. Matt Yglesias says students in college should study more, and we should hold them to actual standards. Right now, they are doing remarkably little real work. When you add in-class education, homework, other educational activities and outside work (which I would say largely counts as educational and is often necessary for support), we get 5.1 hours for ‘full time’ college students, or 35.7 hours a week. Matthew Yglesias: Philip Babcock and Mindy Marks have shown that over the decades, students have been spending less and less time on studying — “full-time students allocated 40 hours per week toward class and studying in 1961, whereas by 2003 they were investing about 27 hours per week.” I agree with Yglesias that to fix this we would need a dramatic reversal of grading practices. You need willingness to actually punish students who are not getting it done, with actual life consequences on more than the margin, or it won’t work. Matt Yglesias: The nascent Summers-era crackdown was turning A-s into B+s and B+s into Bs. That generated some whining from students, but ultimately, to restore old-school academic values, schools will need to hand out Cs and Ds that put students at the risk of real negative consequences, like loss of scholarships, getting kicked out of school, or heading into the job market looking like a real fuckup. And then you get the problem that Hunham confronted: Is this what students and their parents want? It is indeed not what most parents and students want. Which means we know what product they are mostly buying, and the universities are mostly selling. And like so many other things these days, there is remarkably little product differentiation. Almost no one is willing to say, this is something different, and we will get those who want that different product, and employers or prospective citizens or what not who want that product can reward that. It is odd to me that this is rare. If all the selective universities are rejecting most applications, so what if 90% of students and parents recoil in horror, so long as the other 10% are excited? Or 98% and 2%? Lower Standards The killing of Harvard’s Math 55. John Arnold contrasts an ‘06 Crimson article on how hard the course is, with a ‘23 Crimson article showing how it is no longer special. One can reasonably argue that if 70 start, 20 finish and only 10 understand, maybe that is bad actually, but I disagree. I think that math is a place for exactly that, because failure is an option. You want to provide the real thing, and it is fine if the majority can’t hack it and drop out. If we can’t fail here, where can we fail? Claim that in the wake of their donors pulling out complaining about antisemitism, the price for Ivy league admission via donation has effectively been slashed on the order of 90%, from $20 million to $2 million. That seems clearly below the market clearing or profit maximizing price? The optics of doing large volume on this also seem pretty terrible. Kids whose parents can pay $20 million are someone you want as a peer so you can network, but at $2 million that advantage mostly fades. At some point the damage to the student body adds up. So I’m skeptical. Degree Value To what extent are we seeing a shift lowering the value of Ivy league degrees? Nate Silver: This speaks to the story I wrote earlier this week. Yes, the value of your Ivy League degree is going to be affected if people start to associate your school with political activism instead of academic rigor. Andrew Ross Sorkin (NYT): Businesses may be unlikely to rush into formally patrolling universities’ policies by adopting either of these theoretical maneuvers, but they might amp up the pressure in some other way through their informal preferences. As Darren Woods, the chief executive of Exxon Mobil, said of campus protests in an interview with CNBC this week: “If that action or those protests reflect the values of the campuses where they’re doing it, we wouldn’t be interested in recruiting students from those campuses.” John Arnold: Anecdotal, but I’ve had several conversations in recent years with people who hire undergrads for highly competitive jobs (tech, finance, consulting etc) that are moving away from the Ivies and towards flagship state universities, citing better cultural and professional fit. Now confirmed with data. Forbes surveyed managers with hiring authority. When asked whether more/less likely to hire vs 5 years ago: Ivy League: 7% more likely; 33% say less likely Public univs: 42% more likely; 5% less likely Selective privates: 37% more likely; 5% less likely I would classify the selective privates at least half with the Ivies, not mostly with the public universities, if I was doing this style of recruitment. Preston Cooper provides an entry in the genre where you measure the financial ROI of various college degrees given different universities and majors. 31% of degrees were negative ROI, once you factor in time costs and risk of not finishing. Every time we run this test we get a graph of majors that looks like this: That then interacts with different colleges, which differ in many ways including completion rates. And of course, if you switch programs based on this information, you do not suddenly get the completion rate (or net life impacts) of the degree you switch to, even if the original study was done fully correctly. The return on master’s degrees was not so great. Preston Cooper: What about grad school? It’s complicated. Med school & law school have huge payoffs. But nearly half of master’s degree programs leave students in the red. How much government funding goes to programs with no return? We can answer that thanks to new data. Programs in the ROI database received $418bn in funding from 2018 to 2022. Of that, $122bn (29%) flowed to negative-ROI programs. It would be highly reasonable to tie government funding to program ROI, if we had a good measurement of that, but that is not how our government works. Here is the data dashboard. In which I learned that my degree and major had negative ROI by this metric, whereas if I had switched majors from Mathematics to Economics like I considered, I would have had a vastly easier job all around and also picked up almost three million dollars (!) in expected value. I don’t buy the full result there, but if this reflects reality even somewhat, letting me make this mistake and stick with Mathematics, without even a warning, was deeply, deeply irresponsible. Shifting Consumer Preferences Ideally we would get a more detailed breakdown, but yes. Derek Thompson: Before the pandemic, new england colleges had more than 2x more applicants than southwestern colleges. At current trajectories, southwestern college applicants will surpass new england in two years. Nate Silver: This is pretty interesting in light of yesterday’s post. There’s an inverse correlation between the left-wingness of the colleges in each region and growth in applications. A lot of students just want to go to college to drink beer, hook up, go to football games, and emerge with a degree that will give them gainful employment. They far, far outnumber the political activist types. And they’re voting with their feet, it looks like. Most students care primarily about things other than political activism. The problem for them is that college is a package deal. (Almost?) all the selective colleges have lots of political activism and force you to care deeply about things that are neither fun nor going to be useful to your future or part of getting a traditional education. And at least faking those things is deeply tied to your admission to those schools and to your social life and experience in class and administrative rules set once you arrive. Standardized Tests in College Admissions Colleges are reversing course, and admitting that yes standardized test scores are required for admissions. It was completely insane to drop this requirement. Doing so only hurt the exact people they claimed to be trying to help. The good news is, while we have a long way to go, we seem to be past peak insanity in such matters. Nate Silver: The critique that universities are run like for-profit corporations that are mostly concerned about the bottom line is correct. Also, that’s what might save them. Discrimination in College Admissions A new way has been found to discriminate. Steve Miller: UCSD announced a new policy April 9 to exclude students whose parent is college educated and makes over $45,000 from enrolling in computer science or other selective majors, unless spots are available after first generation or low income students enroll. Nearly 40% of all UC San Diego students are first generation students. So if you are not a first generation student or low income, it will likely become virtually impossible to enroll in computer science or other selective majors. This policy applies to students seeking to enroll in selective majors after their initial admission to the university, as the policy linked in the original post specifies. Separate preferences for first generation students apply in admission. This likely effectively means that if you are not a first-generation college student (and an in-state student) then you will not be able to transfer to a selective major, no matter your other GPA. Those making these decisions have made their motivations and intentions clear, so go in with your eyes open, both reading the fine print and realizing that they could add more fine print later. But also, it seems odd that students want to major in computer science, and we are saying no rather than expanding the program? Isn’t that exactly what we want? Perhaps our children are learning after all. They can solve for the equilibrium. That was back in 2021. Presumably this number has only gone up since then. The Hill: A survey found that 34 percent of white students who applied to colleges falsely claimed they were a racial minority on their application. Most students, 48 percent, claimed to be Native American on their application. Seventy-seven percent of white applicants who lied about their race on their application were accepted to those colleges. … According to Intelligent.com Managing Editor Kristen Scatton, the prevalence of applicants who claim Native American ancestry is possibly due to the popular narrative that for many Americans, a small percentage of their DNA comes from a Native American tribe. It is not clear this is helping the applicants much, whether or not they were caught. Liars got accepted at a 77% clip, but the typical acceptance rate overall is already about 65%, and it is not clear this is ‘accepted at any given college’ rather than at all, and there are various other factors in both directions. What’s totally crazy is doing the math on this. About 50% of college applications are from white students. White students report they lied 34% of the time. Of those students, 48% pretended to be Native American. That means that 5.8% of applications are falsely claiming to be Native American. But the rate of real Native American applications is only about 1%. So that means, even if the other half of applications never lie: If you mark Native American, there is an 85% change you are lying. Meanwhile, several percent of those who lied checked the box for AAPI, which presumably only hurts your chances even if they believe you. So yes, I doubt checking that box helps you much on its own. Required Classes and Choosing Your Major Phil Magness: If you want to genuinely disrupt higher education for the better, impose severe limits on the number of mandatory GenEd classes that students must take. These courses are the lifeblood of hyper-politicized woke departments that otherwise wouldn’t attract many students. Most students would be better served by starting their majors earlier and taking more classes in skills and subjects related to their degrees. Most GenEds, as currently taught, are complete wastes of time at best, and political indoctrination at worst. Those same GenEds serve another function though: they create jobs for faculty in otherwise unpopular disciplines. And the depts that have the heaviest presence on the GenEd curriculum (e.g. English) also tend to be the largest departments on campus, despite drawing few majors. It’s unethical to require economically precarious 18-21 year olds to pay for classes they don’t need just to keep a horde of English, Sociology, and Foreign Language professors employed. I had a highly extensive set of general required courses I had to take, something like 40 credits. You could make a reasonable case for the 16 that were reading the ‘Great Works’ of literature and philosophy. There wasn’t a problem with wokeness back then (the closest thing was when I sort of tried to cancel The Symposium for all the praising of child rape, and got told to STFU about it and come to class or else), but still the rest was pointless, a waste of time taking up almost a full year of coursework. Phil Magness notes that students could instead start their majors. That implies that when you arrive on campus, you should know what major is right for you. That is another issue with all the required classes. There is little room for exploration, most of those slots are already spoken for. If I had wanted to switch majors to something other than Mathematics, I had almost no opportunities to sample alternatives in time to do this. Realistically I could have probably made it to Physics or Economics, and that’s about it. Which majors are most often regretted? Humanities. Jacob Shell: What they don’t tell you in high school or college advisor offices is some of these are “winner takes all” majors and others aren’t. The comp sci normie is making a nice living right now, but the physics major is a sunlight-deprived lab tech for 30 years in a row. I would have thought the physics majors were mostly not now doing physics? It still makes sense that regret rates are high. Math majors are mostly not doing math all day anymore, but they seem fine with it. As a math major myself, I am an exception, and I do regret it, although perhaps the signaling value made it worthwhile after all. Here is a different survey that asks the same question. Will you regret that major? This time the answer is, probably. Regret is an imprecise measure, but these are not small differences. Everything I Need To Know That Waited Until Graduate School Thread of what Basil Halperin learned in graduate school. Increasing returns to effort for specialization in terms of skills, whether that translates to world improvement or pay is another question. Nothing here made me think anyone should go to grad school. Then again, do you go there for the learning? Here is Bryan Caplan on when to get which Econ PhD. The algorithm is essentially: Only get an economics PhD at all if you want a job that needs it, such as an economics professor. If you can get into a top-25 Econ program and endure the pain, go there instead. If you can’t do either or both of those, you can go with GMU. When to get a Masters? When you drop out before finishing your PhD. In any case, if you want this, apply to at least 15 schools, process is super random. It is no surprise given his other opinions that Bryan Caplan’s answer to that question is a very sharp no. In Caplan’s model the purpose of graduate school is to get a job that won’t hire you without one. That is it. I think he’s right. Nate Silver offers related Good Advice. Nate Silver: Real, non-trollish life advice: If you’re a smart young person and you really want to go to graduate school, then by all means go. But if you’re on the fence, probably don’t. That’s not where the action is. And it’s not where the action is going to be for the foreseeable future. The specific exception is if you go to graduate school with the intention of being a sleeper agent to improve academic (or government/broader nonprofit research) culture. That is potentially quite valuable for society (though it won’t necessarily be lucrative for you personally). I do not believe you when you say you are going to be a Sleeper Agent. I expect you to either get worn down and be a normal academic, or to run away screaming in horror at some point, because man does that all sound miserable. It is a noble thing to do, of course, to be the change you want to see and fight for it, if you can. It emphasizes that my basic advice here would be that going to graduate school is something you should only do with a very specific purpose, and generally only if you can attend an elite institution. Do not go because you have nothing better to do. Have a specific career path in mind, that either does not face or justifies the long odds usually against such paths. Know what you want to learn, and want to prove. Or, ideally, if you possibly can, go do something else instead. When You See Fraud Say Fraud What is academia for, then? Presumably something else. Aella: It’s insane how much academia is not about figuring stuff out. The current state of academia is not what it would look like if we went “hey I wanna figure out the truth behind a thing.” Fred Scharmen (QTing): “Hey I wanna figure out the truth behind a thing” is like what an elementary schooler thinks that grad students are supposed to be doing. I hope this person grows up eventually. Hazard: Good example of the general vibes and tactics used to haze people into fucked up social orders and institutions without ever having to defend them. You just mock people who don’t know the scam is a scam. I’ve written about this before. It’s a load bearing tactic for maintaining normalization of deviance. It is worse than that. I get mocking someone for actually being confused here. One should not do even that. But yeah, if someone with experience straight up said ‘I am shocked, shocked to fund that things other than searching for truth are going on in here, how can that be, I am so confused’ then then mockers gonna mock. This is not someone saying ‘I do not understand why someone is slurring their words in this cafe’ in a world where the cafes were called cafes but were actually bars. This is ‘it really is insane the amount of hard drinking going on in all the cafes, did you notice how rare it is for anyone to get a coffee anymore, they are actually bars’ and someone mocking you, saying ‘coffee is what an elementary schooler thinks people drink at cafes.’ And then everyone went back to pretending cafes only served coffee. Free Speech As Dilan Esper and Andrew Rettek note here, the right thing on free speech is to defend everyone’s right to speak. It is in the context of very much not doing this in other contexts, treating a wide variety of far less harmful speech as ‘violence,’ that this sudden claim of realizing of one’s principles in this one case rung hollow. No one is pretending this is a new set of general pro-speech principles to be universally applied. As Jill Filipovic and Jonathan Haidt each note, it would be great if universities used the recent protest moment to realize they their systematic error, and broadly once again embrace free speech the way they used to do. This is the letter the ACLU sent out in 1978 after they defended the right of actual Nazis to march in order to defend free speech for all of us. You have to let them talk. This is America, man. Or at least, it used to be. Alas, I am not holding my breath for such an outcome. If it does happen, Charles Murray has kindly offered to allow the presidents to prove their devotion to free speech by letting him host a talk. We are not letting them talk. FIRE found that 3% of current college students have been punished for speech, which translates to 5% over four years, which is enough for a hell of a chilling effect especially given how risk averse college students are now. Jill Filipovic urges us all to rise to the standard of the old ACLU, no matter what others have done, and stand firm for free speech even asymmetrically. Do not call, she says, for more restrictions in the name of even-handedness. That is a tough sell. It is also not obvious which path leads to more free speech. Si vis pacem, para bellum? Larry Summers points out that Harvard’s multiple Antisemitism Taskforces, which are accomplishing nothing, are the wrong approach, an alternative to both moral leadership and standing up strongly for free speech. Instead, Harvard continues to allow official support antisemitic positions without allowing the voicing of pro-Israel positions. Paul Graham links to Richard Florida, a professor at the University of Toronto, who says people in academia now feel more space to speak their minds after recent events. Here are some examples of other cases where free speech could have been stood up for, and universities chose a rather different path. Harvard Goes Mission First Harvard declares it is now mission first. It will no longer make ‘official statements about public matters that do not directly affect the university’s core function.’ I put up a prediction market on whether they stick to it. Good luck, Harvard! What is Harvard’s mission? Harvard. Nate Silver: Notable exceptions to free speech: * Incitement * Defamation * Criticizing Harvard Lawrence Bobo (Dean of Social Sciences, Harvard): A faculty member’s right to free speech does not amount to a blank check to engage in behaviors that plainly incite external actors – be it the media, alumni, donors, federal agencies, or the government to intervene in Harvard’s affairs. Lawrence Summers: It takes something extraordinary to bring me into agreement with Israel demonizing faculty like Walter Johnson. That is what Harvard Dean Lawrence Bobo has done with his call for punishing faculty who publicly challenge university decisions. I cannot understand why his boss Dean Hopi Hoekstra has not condemned the idea. Nor can I understand how someone who believes faculty who believes in punishing dissent can be allowed to set faculty salaries, decide on promotions or be involved in faculty discipline. How can it be according to Harvard leaders that it is fine to call for an end to Israel as a Jewish state but not to criticize the University administration? The Waterloo Model Students from the University of Waterloo computer science programs have been enjoying oversized success, despite it being a relatively young university founded in 1957. Henry Dashwood looks at what makes Waterloo different. They have a five year program that does not break for the summer, the culture focuses on working on projects rather than partying or sports, they have a startup accelerator on campus, and despite having a lot of CS students they are very selective (claimed 4% acceptance rate). So it is exactly the story one would expect based on what startup culture says. Focus on building things, cut out everything else. I am curious if that model will long survive moves like this, although I appreciate that they have a distinct department for pure mathematics: Chris Brunet: The Department of Pure Mathematics at @UWaterloo is hiring a math professor. ”Eligible candidates for this search must self-identify as women, transgender, gender-fluid, nonbinary and Two-Spirit people.” Waterloo’s Faculty of Engineering is also hiring an engineering professor. ”Eligible candidates are required to identify as a woman or gender minority, which is defined to include individuals who self-identify as women, transgender, gender-fluid, non-binary, or twospirited.” Also, 2 professors of computer science. DEI Joshua Rauh nots that his training on DEI included an example of where someone saying ‘DEI has gone too far’ is the first sign of prejudice and on the job discrimination. Alex Tabarrok in response: DEI has gone too far. Indiana signs a bill introducing ‘intellectual diversity’ as a standard for tenure decisions. Tyler Cowen suggests it will backfire, that observance will be addressed via technical box-checking, and that universities could retaliate by not hiring any actual conservatives (even more than they already do) at all for fear they would be forced to grant such people tenure later. It is extremely difficult to get a bunch of academics who want it to be one way, with only left-wing (or often only far-left-wing) viewpoints welcome in academia, to agree to have it be the other way via a law. Tyler does not lay out what he would do instead. I can think of ways to do it, but they involve big guns. Wisconsin’s universities initially voted down a compromise to get rid of some DEI positions in exchange for funding for raises and new buildings, but they came around. Washington Post Editorial Board comes out against DEI statements in hiring. WaPo Editorial Board: The last thing academia — or the country — needs is another incentive for people to be insincere or dishonest. The very purpose of the university is to encourage a free exchange of ideas, seek the truth wherever it may lead, and to elevate intellectual curiosity and openness among both faculty and students. Whatever their original intent, the use of DEI statements has too often resulted in self-censorship and ideological policing. Here is what they are opposing. Paul Graham: People in the sciences thought they could ignore the fools over in the humanities and just focus on their research. But now the fools’ ideology is colonizing the sciences. John Sailer: NEW: Yale University’s department of molecular biophysics and biochemistry requires all job applicants to submit a DEI statement. Here’s the evaluation rubric, which shows the exhaustive DEI criteria for assessing any scientist hoping to work in the Yale department. Here is the full post from The Free Press. When making hires at Yale’s department of molecular biophysics and biochemistry, faculty are told to place “DEI at the center of every decision,” according to a document tucked away on its website. To what extent does that mean an applicant’s DEI score impacts their chance of being hired? If you have a 12 versus an 8 versus a 0, what happens? One cannot be sure. It is compatible with ‘anyone under 11 need not apply’ and also with ‘no one actually cares.’ How easy is a high score? My guess is you can get to about a 7 (3/2/1/1) with a willingness to bullshit and use ChatGPT. Higher than that likely requires either lying or being willing to spend (and commit to spending) substantial amounts of time. What about Columbia? How much do they care? What do they want? John Sailer: NEW: For hiring new professors, Columbia University recommends valuing “contributions to DEI” on par with “research.” The sample evaluation tool also weighs DEI more highly than teaching. That’s an especially wild default given how Columbia defines “contributions to DEI” Columbia provides an in-depth rubric for assessing DEI credentials. Which, of course, is pretty important if DEI might carry the same weight as research. Take a look. The rubric gives a low score to candidates who are skeptical of racially-segregated “affinity groups.” You can feel the attitude coming off these rubrics. This looks like a substantially tougher test to handle if you mainly care about your subject or are trying to muddle through without a huge time sink or ethical compromise. They mean business. Given how numerical scores usually work, you do not have much margin for error. Getting a 15 here, if you are willing to do what it takes and spend the time, is easy, and probably so is getting a 9-10 in ‘service’ and that is probably highly linked. I doubt they have that high a bar to get to 8+ on teaching, and a 10 might be pretty easy there too. That does not leave much room to make up points, which has to be done with research. And a third of that is ‘curricular fit’ so those who are gaming the system are going to get full credit there too, while plans are pretty easy to fake. Your entire actual ‘research track record’ is only worth five points. So yeah, if you are not heavy DEI for real, good luck. You’re not going to make it here. Harvard’s Faculty of Arts and Sciences eliminated the requirement for DEI statements in hiring (source). Instead they are asked to submit a ‘service statement,’ which can include DEI if you want that. As an applicant, you now must ask: Do you think the requirement went away, or that they are testing you to see if you realize that it didn’t? One must ask, what exactly did Sally Kornbluth believe before? John Sailer: BREAKING: A university spokesperson has officially confirmed to me that MIT will no longer use diversity statements in faculty hiring—making it the first elite private institution to backtrack on the controversial policy. As recently as late 2023, MIT required prospective nuclear scientists to submit “a statement regarding their views on diversity, inclusion, and belonging.” No longer. In a statement provided to me by MIT, Sally Kornbluth said these statements “impinge on freedom of expression, and they don’t work.” Was she unable to get rid of the statements until now? Did she think they both worked and that they didn’t impinge on freedom of expression? I can see one thinking that perhaps they work. I can’t see how one can claim they don’t impinge on freedom of expression. You either care about that, or you don’t. So, revealed preferences on priorities, then? In Other News NYU opening a new campus in… Tulsa? Seems like an excellent source of diversity.
2024-06-26
https://www.lesswrong.com/posts/qhpB9NjcCHjdNDsMG/new-fast-transformer-inference-asic-sohu-by-etched
qhpB9NjcCHjdNDsMG
New fast transformer inference ASIC — Sohu by Etched
lcmgcd
I would bet that ASICs will run the roost in a few years and this is only the beginning. They claim 500k tokens per second with Llama 70B. Seems to be exactly what it looks like, an ASIC. Curious if this is somehow not what it looks like.
2024-06-26
https://www.lesswrong.com/posts/QvFRAEsGv5fEhdH3Q/preliminary-notes-on-llm-forecasting-and-epistemics
QvFRAEsGv5fEhdH3Q
My Current Claims and Cruxes on LLM Forecasting & Epistemics
ozziegooen
null
2024-06-26
https://www.lesswrong.com/posts/jhhoks5psG8Dcpgzu/empirical-vs-mathematical-joints-of-nature
jhhoks5psG8Dcpgzu
Empirical vs. Mathematical Joints of Nature
pktechgirl
We (Alex and Elizabeth) are thinking about doing this project where we figure out how paradigm formation happened in chaos theory. Alex has also been thinking about paradigm formation for agent foundations (which people often talk about as being pre-paradigmatic). These are some thoughts on what paradigm formation means. Alex_Altair There's an issue which is something like, agent foundations is more like math than like science, and so it's unclear to me exactly what it means to have a paradigm. And I think this is similar to chaos theory, so it might be useful to talk it through and compare them. Alex_Altair On one hand, you could say that a paradigm is just a set of methods that people generally agree successfully solve problems. And in that sense, domains of math could totally have paradigms. But in the sciences, there's another standard of whether a problem was solved, which is that reality was successfully predicted. Domains of math don't exactly try to "predict" phenomena, and so whether some problems were solved feels more subjective. Alex_Altair And I feel like both chaos theory and agent foundations are more like fields where people will just generally agree whether or not their confusion was resolved by certain frameworks, rather than fields where empirical phenomena are predicted. Elizabeth One definition I have been using of a paradigm is that it is a single solution to problems people believed were disparate. Newtonian physics predicted the rate of a falling apple and the orbit of the moon. Plate tectonics explained magnetic striping on the ocean floor, mountains, and the Wallace line (and let you use information about one to make predictions about the other that were born out). Alex_Altair Ah, interesting. That sounds related to something I've been thinking about which I might call paradigm shifts vs paradigm formation. I think Kuhn mostly talks about shifts, where there exists a previous strong paradigm. But if a field has no paradigm at all, then the formation of the first one might look different from a shift. Showing that multiple problems have a single solution sounds more like what paradigm formation might look like. Elizabeth I'm excited about chaos theory because it seems like it might be doing the same thing as the theory of plate tectonics- combining weather systems, eye movement, and a dripping faucet under a single set of equations, and letting insights from one inform our understanding of the others. There's maybe a pure/recreational version of chaos theory that's math only, but there's also chaos field that is supposed to spread insights useful across domains.[1] I think it's reasonable to call that shift vs. formation, although I'm not sure it will break down cleanly. My sense is biology and geology have single unifying paradigms that don't explain everything but are clearly the foundation of a field. Whereas physics have multiple paradigms that exist in parallel (e.g. classical mechanisms, relativity, and quantum mechanics). I'm not sure where chemistry fits in- probably more like biology and geology in that nothing in in chemistry makes sense except in light of atoms and molecules, but I haven't thought about it that much. I have no idea how this compares to agent foundations, which I'm basically ignorant of. [1]One reason I chose chaos theory in particular to study is that it's still at the stage where this is considered cross-domain pollination and not "well yeah, they're both chaotic systems", the same way magnetic stripes and mountains are just obviously both geography.  It is easier to study the process of paradigm coalescence before it has finished. But of course this risks the possibility that Chaos the field is just not actually carving reality at the joints. Alex_Altair I see. My impression of chaos theory is more like... people noticed that some systems seemed to be kinda predictable but kinda not. And they were confused about what was up with that, especially given that the systems were sometimes very simple equations. Those equations do match some empirical systems. But figuring out the concept of chaos doesn't exactly let you make better predictions; instead it lets you understand which systems can be expected to be predictable at all. So for example, we don't even bother trying to make weather forecasts for a year out. But I'm not very confident about this (having not yet studied the history of chaos theory). Maybe it lets you get some better statistical predictions, or something. Elizabeth The part of chaos I find interesting is the part that specifies underlying patterns in spite of inability to predict exact state at a given time, such as Lorenz systems. I'm not yet confident these are genuinely useful and not Big Chaos PR stunts, but if they are, and if learning from one application is transferable to another, that's the kind of thing I would find meaningful. Alex_Altair What types of underlying patterns are you thinking of for the Lorenz system? Elizabeth I believe this has improved weather forecasting. Alex_Altair I think people had the equations for the Lorenz system pretty early on, and the "problem" of chaos was that the behavior the system just looked really weird, even though we could calculate it all out. If weather forecasting was improved not just from running the equations, but from some further deconfusion that the concept of chaos helped with, I'd be super interested to know how. Elizabeth Yeah that's a great question we should put on our list. Elizabeth Going back to And I feel like both chaos theory and agent foundations are more like fields where people will just generally agree whether or not their confusion was resolved by certain frameworks, rather than fields where empirical phenomena are predicted. could you say more about this? Does reducing confusion without predicting phenomena do anything useful? Is agent foundations actually at the point where it's reducing anyone's confusion? Alex_Altair Does reducing confusion without predicting phenomena do anything useful? I'm sure it often does, although humans can notoriously feel deconfused even when their beliefs make no sense. I think that deconfusion mostly makes research go faster, because it means your ideas are better. Is agent foundations actually at the point where it's reducing anyone's confusion? Oh, no. I just mean that if agent foundations attains a paradigm, I don't think it will be because a bunch of empirical data is explained and/or predicted, I think it will be because someone published a framework where everyone reads it and goes, "Oh yeah, they nailed it. Let's go with this." This is largely what happened with Turing machines and Shannon's information theory. Elizabeth Can you say more about one of those examples? Alex_Altair Sure. In the early 20th century, mathematicians were trying to figure out what they all intuitively meant by an "effective method". They all agreed that there was some class of thing that was a calculation you could actually sit down and do (versus other mathematical objects which could be described not but not actually written out somehow). People spent a while proposing different formal definitions, which I think were usually classes of functions. Eventually, Turing came up with a way to formally model "sit down and follow an algorithm", which was his description of what we now call Turing machines. When other mathematicians read this, I think they found it pretty compelling. And after a few more years, people proved that all the other proposed definitions were either equivalent, or subsets of Turing machines. And so everyone felt like they had gotten deconfused on what an effective method was. I think this genuinely helped humanity build better computers, et cetera. But no one was doing it to predict an empirical phenomenon. Alex_Altair Shannon's formal definition of information was similar. He described it in a convincing way, including giving a derivation from intuitively satisfying axioms. He then proved some theorems that were clearly going to be super useful for building communications systems. Everyone read his paper and was immediately deconfused about information, and then moved forward using it as the paradigm. But I also don't think Shannon or others were doing science in the sense of predicting a phenomenon. Alex_Altair And I think agent foundations is similar. There's a conceptual category that I already have (analogous to "effective method", or "information") and I'm trying to figure out what its formal characterization is. Elizabeth Let me see if I understand: The best answer to "how fast does an apple fall?" was always going to be described by gravity, with bonus points for air resistance. But the best answer to "what is an effective method?" didn't have to be "calculable by a Turing machine"? Except there were multiple attempts to describe calcuability that turned out to be isomorphic to turing machines, so maybe turning/lambda calculus/etc are carving reality at some joint? Alex_Altair More like, "how fast does an apple fall?" is directly checkable against reality at all. Whereas the definition of "effective method" is... somehow instrumentally useful to humans, even though it doesn't exist in the territory. Elizabeth Got it. Yeah I do think chaos theory is going to be more like that than like plate tectonics.
2024-06-26
https://www.lesswrong.com/posts/4eBLj3TTcypq4ic9v/in-favour-of-exploring-nagging-doubts-about-x-risk
4eBLj3TTcypq4ic9v
In favour of exploring nagging doubts about x-risk
owencb
null
2024-06-25
https://www.lesswrong.com/posts/ajJRyKtwZNnBdmkcv/what-is-a-tool
ajJRyKtwZNnBdmkcv
What is a Tool?
johnswentworth
Throughout this post, we’re going to follow the Cognition -> Convergence -> Corroboration methodology[1]. That means we’ll tackle tool-ness in three main stages, each building on the previous: Cognition: What does it mean, cognitively, to view or model something as a tool?Convergence: Insofar as different minds (e.g. different humans) tend to convergently model the same things as tools, what are the “real patterns” in the environment which give rise to that convergence?Corroboration: Having characterized the real patterns convergently recognized as tool-ness, what other properties or implications of tool-ness can we derive and check? What further predictions does our characterization make? We’re not going to do any math in this post, though we will gesture at the spots where proofs or quantitative checks would ideally slot in. Cognition: What does it mean, cognitively, to view or model something as a tool? Let’s start with a mental model of (the cognition of) problem solving, then we’ll see how “tools” naturally fit into that mental model. When problem-solving, humans often come up with partial plans - i.e. plans which have “gaps” in them, which the human hasn’t thought through how to solve, but expects to be tractable. For instance, if I’m planning a roadtrip from San Francisco to Las Vegas, a partial plan might look like “I’ll take I-5 down the central valley, split off around Bakersfield through the Mojave, then get on the highway between LA and Vegas”. That plan has a bunch of gaps in it: I’m not sure exactly what route I’ll take out of San Francisco onto I-5 (including whether to go across or around the Bay), I don’t know which specific exits to take in Bakersfield, I don’t know where I’ll stop for gas, I haven’t decided whether I’ll stop at the town museum in Boron, I might try to get pictures of the airplane storage or the solar thermal power plant, etc. But I expect those to be tractable problems which I can solve later, so it’s totally fine for my plan to have such gaps in it. Bakersfield is roughly where the “9 hr 42 min” box is. How do tools fit into that sort of problem-solving cognition? Well, sometimes similar gaps show up in many different plans (or many times in one plan). And if those gaps are similar enough, then it might be possible to solve them all “in the same way”. Sometimes we can even build a physical object which makes it easy to solve a whole cluster of similar gaps. Consider a screwdriver, for instance. There’s a whole broad class of problems for which my partial plans involve unscrewing screws. Those partial plans involve a bunch  of similar “unscrew the screw” gaps, for which I usually don’t think in advance about how I’ll unscrew the screw, because I expect it to be tractable to solve that subproblem when the time comes. A screwdriver is a tool for that class of gaps/subproblems[2]. So here’s our rough cognitive characterization: Humans naturally solve problems using partial plans which contain “gaps”, i.e. subproblems which we put off solving until laterSometimes there are clusters of similar gapsA tool makes some such cluster relatively easy to solve. Convergence: Insofar as different minds (e.g. different humans) tend to convergently model the same things as tools, what are the “real patterns” in the environment which give rise to that convergence? First things first: there are limits to how much different minds do, in fact, convergently model the same things as tools. You know that thing where there’s some weird object or class of objects, and you’re not sure what it is or what it’s for, but then one day you see somebody using it for its intended purpose and you’re like “oh, that’s what it’s for”? (👀) From this, we learn several things about tools: Insofar as different humans convergently model the same things as tools at all, the real patterns which give rise to that convergence are (at least in part) patterns of usage, not just properties internal to the tool itself.Once we see someone using a tool for something, it usually is pretty obvious that the thing is a tool and I’d expect most people to converge on that model. I also expect people to mostly converge on their model of what-the-tool-is-for, conditional on the people seeing the tool used in the same ways.Note that we often only need to see a tool used a handful of times, maybe even just once, in order for this convergence to kick in. So, an analogue of the word-learning argument applies: we can’t be learning the convergent tool-model by brute-force observation of lots of usage examples; most of the relevant learning must happen in an unsupervised way, not routing through usage examples.[3] To my mind, these facts suggest a model analogous to the cluster model of word-meaning. Model: There’s (some degree of) convergence across minds in the “gaps” which naturally show up in plans.Sometimes those gaps form clusters. We already have a story (including some math and quantitative empirics) for when clusters are convergent across minds.… so when someone sees another person using a certain object to solve a subproblem from a particular cluster, that one example might be enough to connect that object to that subproblem-cluster - i.e. model the object as a tool whose purpose is to solve subproblems in that cluster.… so insofar as the clusters are convergent across minds, different minds should also need only a few usage-examples to agree on which subproblem-cluster(s) a given tool “is for”. Ideally, we’d like to mathematically prove the convergence steps above, or at least come up with some realistic and simple/empirically-checkable sufficient conditions under which convergence occurs. We have at least part of such an argument for cluster convergence, but convergence in the “gaps” which show up in plans is a wide-open problem. Corroboration: From that characterization, what other properties or implications of tool-ness can we derive and check? We’ll give a couple example implications here; perhaps you can come up with some more! Modularity Let’s go back to the roadtrip example. My (partial) road trip plan has a bunch of gaps in it[4]: I’m not sure exactly what route I’ll take out of San Francisco onto I-5 (including whether to go across or around the Bay)I don’t know which specific exits to take in BakersfieldI don’t know where I’ll stop for gasI haven’t decided whether I’ll stop at the town museum in BoronI might try to get pictures of the airplane storage or the solar thermal power plant Notice that these gaps are modular: my choice about what route to take out of San Francisco is mostly-independent of which exits I take in Bakersfield, the choice about where to stop for gas is mostly-independent of both of those (unless it’s along the route from San Francisco to I-5, or in Bakersfield), the choice about whether to stop at the town museum in Boron is mostly-independent of all of those, etc. Insofar as the choices interact at all, it’s via a few low-dimensional variables like e.g. how much time various routes or stops will take, or with only a few other choices. From a cognitive perspective, modularity is an important prerequisite for the whole partial-plan-with-gaps strategy to be useful. If all the subproblems were tightly coupled to each other, then we couldn't solve them one-by-one as they came up. Modularity of subproblems, in turn, implies some corresponding modularity in tools. Consider a screwdriver again. Screws show up in lots of different objects, in lots of different places. In order for a screwdriver to make those subproblems easy to solve, it has to be able to screw/unscrew screws in all those different contexts (or at least a lot of them). Furthermore, the screwdriver has to not have lots of side effects which could mess up other subproblems in some of those contexts - for instance, if the screwdriver is to be used for tight corners when building wooden houses, it better not spit flames out the back. To put it differently: part of what it means for a tool to solve a subproblem-cluster is that the tool roughly preserves the modularity of that subproblem-cluster. Loopiness Having a screwdriver changes what partial plans I make: with a screwdriver on hand, I will make a lot more plans which involve screwing or unscrewing things. Hammers are notorious for making an awful lot of things look like nails. When I have a tool which makes it easy to solve subproblems in a certain cluster, I’m much more willing to generate partial plans whose gaps are in that cluster. … and if I’m generating partial plans with a different distribution of gaps, then that might change which clusters naturally show up in the distribution of gaps. We’ve talked about a phenomenon like this before in the context of social constructs. A toy example, in that context, talks about a model in which cars and trucks each naturally cluster: … but of course people could just choose to build different kinds of vehicles which fall in other parts of the space: … and that changes which clusters are natural. The key point here is that we can change which clusters are natural/convergent for modeling the world by changing what stuff is in the world. That sort of phenomenon is also possible with tools. In particular: in a given environment, we hypothesized in the previous section that there’s (at least some) convergence across minds in the gaps which show up in partial plans, and convergence in clustering of those gaps. But the “given environment” includes whatever tools are in the environment. So sometimes, by creating a new tool, someone might change what kinds of partial plans people form, and thereby create a whole new subproblem-cluster, or change the shapes of existing subproblem-clusters. Furthermore, insofar as people are aware of different tools available in the environment, they might form different kinds of partial plans with different kinds of gaps, and thereby have different subproblem-clusters. I expect that’s a big factor in the cases where it takes tens of examples to learn what a new tool is for (as opposed to 1-3 examples): there’s an iterative process of refining one’s model of the tool and its cluster, then generating different kinds of plans to leverage the new tool/cluster, then re-clustering based on the new kinds of plans generated. Eventually, that process settles down to a new steady state. ^ … a name which we just made up for a methodology we’ve found quite fruitful lately and should probably write an independent post on at some point. Consider this post an example/case study of how the Cognition -> Convergence -> Corroboration methodology works. ^ Probably some of you are thinking “doesn’t the screwdriver also change which plans I make, and therefore which gaps show up in my plans? Y’know, because I’m more likely to consider plans involving unscrewing stuff if I have a screwdriver.”. Yes; hold that thought. ^ Even when we do need more than a handful of examples, it’s more like tens of different examples than 21000000 different examples, so the argument still applies. More on that later. ^ Note that the subproblems in the road trip example in particular are not necessarily solved by tools. We do not expect that every subproblem or even cluster of subproblems has an existing tool which fits it.
2024-06-25
https://www.lesswrong.com/posts/xgZGtH9WiKdAkWLij/when-do-alignment-researchers-retire
xgZGtH9WiKdAkWLij
When do alignment researchers retire?
Nadroj
At what point will it no longer be useful for humans to be involved in the process of alignment research? After the first slightly-superhuman AGI, well into superintelligence, or somewhere in between? Feel free to answer differently for different kinds of human involvement: Humans could be involved as a source of data about human values Humans could be involved as a red-team, trying to get evidence of misalignment or to verify the trustworthiness of systems Humans could be involved in setting the broad research agenda, delegating to the AGIs Humans could be involved in complementing the technical weaknesses of the AGIs, helping them in some way to research new alignment methods What do you envision we are doing between AGI and superintelligence?
2024-06-25
https://www.lesswrong.com/posts/JECQZAXWbtGJdBuAC/schelling-points-in-the-agi-policy-space
JECQZAXWbtGJdBuAC
Schelling points in the AGI policy space
mesaoptimizer
I've been thinking about memetically fit Schelling points in the AGI policy space. I'll describe four such "Schelling policies", and use them as pedagogical examples. Shut it all down MIRI's new stated objective is the clearest example of a Schelling policy: "Shut it all down". MIRI states that they want governments to coordinate to pause all AI research that involves smarter-than-human systems. Laypeople will find this policy easy to understand, since they can rely on the shared cultural knowledge of CFC bans and international nuclear disarmament as case studies. If you want to coordinate a large number of people coherently towards furthering a particular policy, "you get about five words" that you can make 'common knowledge' such that people can coordinate in a specific direction. The ease of communicating the policy makes a big difference in such conditions. When you attempt to communicate an idea widely, you'll notice that people usually end up with multiple slightly (or sometimes wildly) differing copies of the original idea. If you've played the Telephone game, you've experienced just how much information can be lost as an idea spreads from one person to another. In the context of policies, individual people's beliefs and incentives will warp the instantiation of the policy they will communicate and support. (For example, you'll find companies lobbying regulators to carve out exceptions that benefit them.) Here's where Schelling points are invaluable: they serve as natural attractors in the space of ideas, and therefore enable people to 'error-correct' the idea they encounter and figure out the policy that everyone is coordinating around. "Shut it all down" is a Schelling point. "Shut it all down if we see evidence of unprompted deception and power-seeking in AGI models" is not a Schelling point, you have multiple free variables that can and will be optimized to benefit the people spreading the idea -- which can result in a lack of coordination and the idea being outcompeted by memetically fitter ideas. "Prevent the training of models using compute greater than 1025 floating point operations" also has a free variable: why exactly 1025 floating point operations? Why not 1024 or 1026? Until 1025 floating point operations becomes a Schelling number, the policy containing it is not a Schelling point. Effective Accelerationism (e/acc) The biggest difference between e/acc and the PauseAI memeplexes is that e/acc doesn't seem to have a coherent set of goals and beliefs. Here are a bunch of memes that e/acc people tend to espouse: "It's time to build." (also the last line of The Techno-Optimist Manifesto) "Come and take it." (where "it" refers to GPUs here) "Accelerate or die." At a first glance, one might say that e/acc isn't a Schelling policy -- it seems less like a coherent policy, and more like a set of 'vibes', verbal and non-verbal statements designed to create a desired emotional impact, regardless of the actual content. I disagree. A policy (or well, a memeplex) does not need to have an explicitly coherent set of beliefs and goals for it to result in coordinating people towards particular consequences. You might expect this to reduce the spread rate of this particular policy, but e/acc specifically compensates for it by being significantly more fun and socially, financially, and professionally profitable to coordinate around. For example, venture capital firms such as a16z want the opportunity to make a lot of money from the gold rush that is the race to AGI, and a lot of software developers want a shot at making billions of dollars if their startup succeeds. The possibility of regulations would cause the music to stop, and they don't want that. In fact, you don't even need a high spread rate to make e/acc a viable policy. You just need it to be a reachable Schelling point in the AGI policy space, and a sufficient number of people who would stand to benefit from this policy. This is why you'll see Meta AI, Yann LeCun, Mistral, and a16z all taking actions that push the world towards less regulation over AI, and towards more opportunity for them to benefit. A significant number of people who espouse and promote accelerationism on Twitter seem to have a very strong fear of totalitarianism, or desire to enable a libertarian future. Given their fear of governments or corporations controlling their lives in the future, either due to the creation of a totalitarian state to prevent AI existential catastrophes[1], or due to a desire to ensure that they have a say in what the future would look like[2], they all end up coordinating towards the most obvious Schelling point that comes to their mind: prevent centralization of power over AI development. We must beat China "We must beat China" is an interesting policy: the more popularity and support it gets, the more the beliefs underlying this policy start to turn into reality. Leopold's Situational Awareness series of essays is the first attempt at building popular support for this policy, that explicitly involved understanding the power of an AI system with respect to potential geopolitical consequences. I expect this to have non-trivially increased the probability that China will orient to AGI with the same frame that Leopold espouses. That is, Leopold's actions have furthered the narrative underlying his policy. From a memetic evolution point of view, this is a pretty devious feature: the more this memeplex spreads, the more its environment adapts to fit the memeplex, instead of the other way around.[3] "Shut it all down" and "Accelerate" both do not share this property of reshaping their epistemic environment -- just because the majority of governments agree to ban AGI research, doesn't mean that the world has changed such that people who were inclined to the "Accelerate" policy now are more amenable to the "Shut it all down" policy, ignoring social incentives. On the other hand, the more actions the US government takes to curtail China's probability of dominating in the possible race to AGI, the more likely it is that the Chinese government would consider AGI a credible threat to their continued survival, and make dominating the race a priority. A lot of "Accelerate" policy people would agree that it probably is better for the US/UK/etc. coalition to win the AGI race, even if they find any centralization of power distasteful. Therefore, they'd willingly coordinate around the "We must beat China" Schelling policy as it becomes more and more popular and China enters the race. This makes the "We must beat China" policy a dangerous memeplex. Ideally you'd nip such policies in the bud, since the more they spread, the more difficult it is for other policies to compete, even if they had the same spread rate and starting point. Only One Org Here's a Schelling policy I haven't seen people talk about: I call it the "Only One Org" policy. It is the goal to establish and ensure that there exists one and only one organization allowed to do AGI research. This could occur by governments merging together all the frontier labs, or it could involve building a government lab and dismantling all the frontier labs, and hiring all the displaced ML researchers (or giving them a generous severance pay). And I expect governments to ban AGI research outside the organization. For example, I expect governments to make it illegal to train a model past a certain compute threshold, and to publish and disseminate cutting-edge research. This Schelling policy has the same self-fulfilling beliefs property that "We must beat China" does -- the more nations agree to cooperate, the more the remaining nations are incentivized to cooperate. If China chooses to not join this international agreement, this new international coalition can unilaterally choose to enforce it, with whatever geopolitical sanctions or threats are calibrated to get them to agree. There's no need to race. For the same reasons that I expect the "We must beat China" policy to outcompete "Shut it all down" and "Accelerate", I expect "Only One Org" to outcompete them too (given similar starting points and spread rates). This implies that if you believe that we are not on track to solving alignment, you'd still be better off by coordinating around "Only One Org" instead of around "Shut It All Down". As far as I can tell, Nate Soares believes it unlikely that a singular international AI consortium would shut down after seeing evidence of the difficulty of AGI alignment, and this makes sense to me: normalization of deviance is a very common phenomenon in most organizations, and building a massive singular insular bureaucracy trying to unsuccessfully imitate the Manhattan project seems very likely to make this mistake. On the other hand, once you have centralized decision-making, the number of people you have to convince to "Shut it all down" is down to low two digit numbers, or even a one digit number. A lot of people will be averse to the "Shut it all down" policy, primarily because it has massive negative consequences for the financial, social, and professional facets of their lives. They'll likely coordinate around some other Schelling policy that lets them retain the things they value. The "Only One Org" seems likely to fulfill their needs, and therefore more viable than the shutdown policy. It seems like work in this direction has only just begun: Conjecture published the MAGIC proposal last year, and it is the only write-up I've encountered that fleshes out this policy proposal. Parting words I believe that Schelling policies is the most viable class of policies to coordinate around. It is likely that there are more such Schelling points in the AGI policy space. I expect that there is likely at least one Schelling policy that involves AGIs being given rights or treated as citizens, that would be at least as useful as the shutdown and the one-org Schelling policies described here. Finding such a Schelling policy is left as an exercise for the reader. John Carmack and Yann LeCunn come to mind, based on their tweets that I recall. ↩︎ Sam Altman is a good example: “We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board,” [Sam Altman] said. “Because if I weren’t in on this I’d be, like, Why do these fuckers get to decide what happens to me?” ↩︎ This seems like a feature of race dynamics in general, and probably of all instances of the class of self-fulfilling beliefs. ↩︎
2024-06-26
https://www.lesswrong.com/posts/HxHYPJoaWD8AjaHCk/computational-complexity-as-an-intuition-pump-for-llm
HxHYPJoaWD8AjaHCk
Computational Complexity as an Intuition Pump for LLM Generality
Particleman
With sufficient scale and scaffolding, LLMs will improve without bound on all tasks to become superhuman AGI, to the extent they haven’t already. No, wait! LLMs are dead-end pattern-matching machines fundamentally incapable of general reasoning and novel problem solving. Which is it? I’ll call these opposing points of view “LLM Scaler” and “LLM Skeptic”. The first appears to be held by the big AGI labs and was recently exemplified by Leopold Aschenbrenner’s Situational Awareness series, while the second is influenced by cognitive science and can be associated with researchers such as François Chollet, Gary Marcus, and Melanie Mitchell. This post loosely generalizes these two stances, so any misrepresentation or conflation of individuals’ viewpoints is my own. I recommend Egg Syntax’s recent post for further introduction and a summary of relevant research. We can caricature the debate something like this: LLM Scaler: LLMs will take us to AGI. See straight lines on graphs. See correct answers to hard questions on arbitrary topics. See real customers paying real money for real value. LLM Skeptic: LLMs achieve high skill by memorizing patterns inherent in their giant training set. This is not the path to general intelligence. For example, LLMs will never solve Task X. LLM Scaler: They solved it. LLM Skeptic: No kidding? That doesn’t count then, but they’ll really never solve Task Y. And you can’t just keep scaling. That will cost trillions of dollars. LLM Scaler: Do you want to see my pitch deck? Meanwhile, on Alternate Earth, Silicon Valley is abuzz over recent progress by Large List Manipulators (LLMs), which sort a list by iteratively inserting each item into its correct location. Startups scramble to secure special-purpose hardware for speeding up their LLMs. LLM Scaler: LLMs are general list sorters, and will scale to sort lists of any size. Sure, we don’t quite understand how they work, but our empirical compute-optimal scaling law (N ~ C^0.5) has already held across a dozen OOMs, and we’re spending billions in venture capital to keep it going! LLM Skeptic: That absurd expense is unsustainable. Can’t you see that? There’s no way that LLMs are truly general list sorters. Good luck getting one to sort a list with a million items. LLM Scaler: We already have. LLM Skeptic: Oh. Well then, LLMs will never sort a list with a BILLION items! The “LLM” Skeptic is, literally, wrong. Insertion sort is fully general and can in principle scale to sort a list of any size. Each time the Skeptic declares that some specific list size is impossible to sort, they only embarrass themselves. But this skepticism reflects a deeper truth. The “LLM” paradigm is fundamentally inefficient. Sooner or later, hype will crash into real-world cost constraints and progress will stall. If Alternate Earth knew more computer science, the Skeptic would have told the Scaler to replace O(N^2) insertion sort with an efficient O(N log N) algorithm like quicksort. Returning to Large Language Models, how might computational complexity reframe our understanding of their abilities? The explicit or implicit designer of any intelligent system faces a tradeoff between allocating its model capacity into memorization – of facts, heuristics, or complex programmed behaviors – and implementing adaptive algorithms for learning or optimization. Both strategies enable equivalent behavior given unlimited resources. However, the former strategy requires model capacity that scales with the diversity and complexity of possible tasks, and the real world is both diverse and complex. The latter strategy requires only constant capacity, yielding drastically improved efficiency. This is made possible by exploiting some combination of in-context data, computation time, or external memory during inference. An efficient learning algorithm makes the most accurate predictions possible, while using the fewest resources possible. Scaling up an LLM requires increasing the model size, training dataset size, and compute (proportional to the first two’s product) in tandem following some optimal ratio, limited by one of these three factors. For a chosen scaling policy, an LLM’s computational complexity in model capacity translates into efficiency using its training resources in general. With this mental model, we can understand the LLM Skeptic as making either the strong claim that LLMs memorize exclusively, or the weaker but more believable claim that they memorize excessively, leavened with only a little in-context learning. In short, LLMs are inefficient algorithms. The Skeptic would be wrong to pinpoint any given task as impossible in principle, if given unlimited parameters, training data, and compute. But they could be right that in practice inefficiency pushes generality out of reach. The LLM Scaler might variously reply that whatever way LLMs balance memorization and in-context learning is apparently good enough; that scaffolding will patch any inefficiencies; or that more efficient in-context learning strategies will keep emerging with scale. Will scaling up LLMs lead to AGI? And if not, what will? You can scale, but you can’t scale forever. The ultimate impact of current and future AI systems is bounded by the maximum efficiency with which they convert resources into solved problems. To understand their capabilities, we need to quantify this efficiency.
2024-06-25
https://www.lesswrong.com/posts/XhywWXDLkqDSB7KjH/failure-modes-of-teaching-ai-safety-1
XhywWXDLkqDSB7KjH
Failure Modes of Teaching AI Safety
ea-1
Why I'm writing this I'm about to teach my AI safety course for the fourth time. As I'm now updating the syllabus for the upcoming semester, I summarize my observations on what can go wrong when teaching AI safety. These have mostly not happened during my teaching but are generally likely to happen - as more AIS courses are developed and taught around the world - and I've especially thought about them when preparing the course so that they don't happen. 1. Alignment feels like a lost cause Depending on how x-risk is presented, getting misaligned AI might appear as an inevitable future since the problem is too complex and hard, and there isn't enough time for AI alignment research to generate robust techniques. How to avoid: make sure to emphasize all the work on alignment and governance and how your students could also be doing some of that (if they wanted to). I'm also updating my syllabus to include the discussion about AI Pause. 2.  There is no (historical/philosophical) context Just like with all complex ideas, situating the problem within its context can make a big difference in how it will be perceived. Talking about AI systems that all of a sudden become a threat to humanity is confusing, to say the least. How to avoid: talk about the foundations of AI, the debate between symbolic AI and connectionism in cognitive science, and set the stage for how we got to contemporary LLMs. 3. Your audience gets (disproportionately more) excited about capabilities This is more common with people with technical backgrounds who like to build tools and applications and are blindly or naively excited about technological progress. How to avoid: clearly explain the conceptual parts of the problem, describe it in more industry-friendly terminology, provide a lot of examples. 4. But ... what about sexist/racist etc. algorithms? Hinting at AI ethics being more important than this alignment story. Also, appears in the context of the Longtermism debate. How to avoid: show how this is a form of misalignment, happening right now. AI ethics should not be opposing AI safety. 5. There are no actionable next steps This is true for many courses. Once you've finished it it's just another letter on your transcript. I think that there are good reasons for an AIS course in particular not to fall into that category. How to avoid: connect the material to related areas of interest that the students might be more familiar with (e.g., I have many psych or econ majors). Recommend further resources, other courses they could take, potential career paths, or activism. You can find my syllabus for Fall 2024 here.
2024-06-25
https://www.lesswrong.com/posts/bb4D8vbCyPPHtspk7/kingfisher-summer-tour-2024
bb4D8vbCyPPHtspk7
Kingfisher Summer Tour 2024
jkaufman
Kingfisher is going on tour again: Thr 2024-06-27 Rochester NY Fri 2024-06-28 Pittsburgh PA Sat 2024-06-29 Bloomington IN Sun 2024-06-30 St Louis MO Mon 2024-07-01 Cincinnati OH Tue 2024-07-02 Indianapolis IN Cecilia made a flyer: Harris will be calling, and we'll be driving together. Anna and Lily are coming too, which I'm excited about. It's about 40hr of driving, which is a lot, but we're spreading it over eight days and we do have some relatively light days, so it should be ok: If you're somewhere along here, I'd love to see you; come dance!
2024-06-25
https://www.lesswrong.com/posts/YQ4rSTHpHeFcAmhvi/incentive-learning-vs-dead-sea-salt-experiment
YQ4rSTHpHeFcAmhvi
Incentive Learning vs Dead Sea Salt Experiment
steve2152
(Target audience: People both inside and outside neuroscience & psychology. I tried to avoid jargon.) 1. Background context One of my interests is the neuroscience of motivation. This is a topic I care about for the same obvious reason as everyone else: to gain insight into how to control the motivations of future superintelligent brain-like Artificial General Intelligence. (Wait, you’re telling me that there are other reasons people care about the neuroscience of motivation? Huh. No accounting for taste, I guess.) Part of the neuropsychology-of-motivation literature is a few related phenomena variously called “incentive learning”, “retasting”, “devaluation”, and “revaluation”. In the experiments of interest here: You change something about an animal’s homeostatic state (e.g. its level of hunger),…And you see whether the animal accounts for that change in its behavior (e.g. whether it tries to get food). For example, my own children, despite many years of experience, are very shaky on this learning task. After skipping a meal, they get grumpy rather than hungry, and when they finally sit down and take a bite, they’re shocked, shocked, to find how much they were craving it. Anyway, there is a massive literature on these phenomena. You won’t learn any practical parenting tips, alas, but you can learn lots of random things like ‘what happens to revaluation if we block the activity of neurotransmitter X in macaque brain region Y?’. And there are intriguing contrasts, where the animal does or doesn’t adapt its behavior to its physiological state in two situations that seem generally pretty similar. The title of this post is an example—I’ll go over it in Section 2. People do these studies out of the (reasonable) expectation that they will help illuminate the nuts-and-bolts of how homeostatic needs connect to actions, decisions, and desires. But unfortunately, there’s a generic problem: it’s hard to go backwards from observed behaviors to how brain algorithms generated those behaviors. You basically need to already have a good theoretical framework. My spicy opinion is that most people in the field don’t, and that they fill in that gap by grasping for salient but unhelpful concepts. In some cases, these unhelpful concepts are borrowed from AI—e.g., “model-based RL versus model-free RL” is a salient partitioning for AI practitioners, whether or not it’s relevant for the brain (see §5.2). In other cases, these concepts are borrowed from experimental psychology practice—e.g., “Pavlovian versus operant conditioning”[1] is a profoundly important distinction for the experimentalist … but that doesn’t necessarily mean it’s an important distinction for the rat!! (See §5.1.) Well, I have a theoretical framework! It’s the one pictured here, which I’ll go over below. So the goal of this post is to offer an overview of how to think about incentive learning and related phenomena from that perspective. I won’t comprehensively survey the incentive learning literature, and I also won’t particularly try to convince you that my theoretical framework is right and others are wrong. After all, in a field like neuroscience, understanding a hypothesis and its consequences is 99%+ of the work. Compared to that, if you have several plausible hypotheses and are trying to find the correct one, it’s a walk in the park! Quick summary of the rest of the article: Section 2 motivates and introduces the topic by presenting two experimental results which seem to contradict each other: the “dead sea salt experiment” of Robinson & Berridge, and the “instrumental incentive learning” experiment of Dickinson & Balleine.Section 3 presents my big-picture model of decision-making in the brain, and how that interacts with incentive learning, revaluation, etc.Section 4 lists some factors that do or don’t affect whether revaluation happens, within my framework.Section 5 is kind of an appendix tailored towards people with neuro / psych backgrounds, listing a few notions in the literature and how they relate to my own models. (Obligatory post-replication-crisis discussion in footnote→[2].) 2. Two seemingly-contradictory experiments 2.1 “Dead Sea Salt Experiment”[3] In this 2014 paper by Mike Robinson and Kent Berridge at University of Michigan (see also this more theoretical follow-up discussion by Berridge and Peter Dayan), rats were raised in an environment where they were well-nourished, and in particular, where they were never salt-deprived—not once in their life. The rats were sometimes put into a test cage, in which sometimes a little stick[4] would pop out of a wall, along with a sound. Whenever that happened, it was immediately followed by a device spraying ridiculously salty water directly into the rat’s mouth. The rats were disgusted and repulsed by the extreme salt taste, and quickly started treating the stick’s appearance as an aversive event. (From their perspective, the stick was to blame for the saltwater spray, or a reminder of it, or associated with it, or something.) One of the rats went so far as to stay tight against the opposite wall—as far from the stick as possible! Then the experimenters made the rats feel severely salt-deprived, by depriving them of salt. Haha, just kidding! They made the rats feel severely salt-deprived by injecting the rats with a pair of chemicals that are known to induce the sensation of severe salt-deprivation. (Ah, the wonders of modern science!) ...And wouldn't you know it, almost instantly upon injection, the rats changed their behavior! When the stick popped out (this time without the salt-water spray), they now went right over to that stick and jumped on it and gnawed at it, obviously very excited about what was happening. 2.2: “Instrumental Incentive Learning” experiment The classic version of the experiment is described in a 2002 review article by Anthony Dickinson and Bernard Balleine, citing earlier work by the same authors. The experiment is a bit complicated, so to make it easier to follow, I’ll start with the headline conclusion: In this experiment, rats had to learn through direct experience that drinking sugar-water is especially satisfying when you’re thirsty. In the absence of this direct experience, they behave as if they don’t realize that. Here are more details (omitting various control experiments): In “training”, rats were put in a cage. They were free to press a lever which would give them sugar-water, and they could also pull a chain which would give them food pellets. Importantly, the rats were hungry during this training process. Then during the test, the rats were not hungry but rather thirsty. They were put back in the test chamber, this time with the chain and lever disconnected (so they didn’t do anything). The rats periodically pulled the chain and pressed the lever, evidently hoping that maybe this time they’d get the goodies. But interestingly, despite being thirsty, they didn’t press the sugar-water lever any more often than they pulled the food-pellet chain. In a separate experiment, a different set of rats first had “pretraining sessions”. There was no lever or chain, but food pellets and sugar-water showed up sporadically. One subgroup of rats did the pretraining sessions hungry, and had the same results as the paragraph above. The other subgroup did the pretraining sessions thirsty—and for them, during the test sessions, they pressed the sugar-water lever much more than they pulled the food-pellet chain. Thus, as above, the interpretation is: these rats had to learn through direct experience that drinking sugar-water is especially satisfying when you’re thirsty. The other rats had no idea. 2.3 Why these two experiments seem contradictory In the Dead Sea Salt experiment, the rats acted as if they knew that saltwater-when-salt-deprived is especially satisfying, despite never having had such an experience.In the Instrumental Incentive Learning experiment, the rats acted as if they did not know that sugar-water-when-thirsty is especially satisfying, unless they had previously had such an experience. 3. My model As I mentioned in the intro, I’m going to ignore the (substantial) existing theoretical discussion in the literature (I’ll talk about it a bit in §5 below), and just talk through how I would explain these two experiments. 3.1 Background You can get the gist of my thinking from the following diagram (and for much more detail, see Posts 2 through 6 here): The “Thought Assessors” box in the top right corresponds anatomically to (more-or-less) the “extended striatum”, which by Larry Swanson’s accounting includes the caudate, putamen, nucleus accumbens (NAc),[5] lateral septum (LS), part of the amygdala, and a few other odds and ends. Here I’ll call out one aspect of the above diagram: Although I used equally-sized downward arrows for the Thought Assessors when making my diagram, that’s actually a bit misleading: The “will lead to reward” / “valence prediction” arrow (red box) seems to singularly encompass a substantial fraction (maybe 20% in rodents?) of the entire extended striatumThe hundreds of other arrows (purple box), which I’ll call “visceral predictions”, seem to add up to a relatively small area of the extended striatum, namely part of the extended amygdala, part of the lateral septum, and part of the nucleus accumbens shell, more or less. So each of those individual visceral prediction arrows is built from just a tiny fraction of a percent of the total area of the extended striatum. (The other 75%-ish of the extended striatum is in neither category, and off-topic for this post; I talk about them briefly in §1.5.6 of my valence series.) Elaborating on the red-circled “valence prediction” signal: I have a whole series spelling out what “valence” is and why it’s centrally important in the brain. (Note that I’m using the term “valence” in a very specific way described in that series, and that it’s generally more related to motivation than pleasure.)In actor-critic reinforcement learning terms, I think of this red-circled signal as the “critic”, a.k.a. “value function”.In psychology terms, this red-circled signal is a short-term predictor of upcoming valence. That valence—i.e., the actual valence, as opposed to the valence prediction—is different: it’s drawn in twice in the above diagram, as the upward arrows with the labels “actual valence”. However, their relationship is a bit subtle: the predicted valence can impact the actual valence, including in some cases determining it entirely. See discussion of “defer-to-predictor mode” in general here, and in the more specific case of valence here.In computational-level terms, this red-circled signal is more-or-less an estimate of “how things are going in life, in general, based on everything I know, including what I’m doing and planning right now”—see my normative discussion here & here. 3.2 The Steering Subsystem (hypothalamus & brainstem) has the innate, genetically-hardwired circuits that say whether specific visceral predictions are motivating or demotivating in light of physiological state Consider a possible circuit in a rat’s brain that somehow encodes the rule “if I’m salt-deprived, then eating salt is good”, in an innate way, i.e. as a result of specific “business logic” encoded in the genome. Such a circuit evidently exists, based on the Dead Sea Salt experiment. And it’s no surprise that it does exist—salt deficiency can be fatal, so this is not the kind of thing that we want to be learning purely via trial-and-error experience! For reasons discussed here, on my models, this innate circuit has to be in the Steering Subsystem (hypothalamus & brainstem). See green text in this figure: (Ditto for an innate circuit that says “if I’m thirsty, then drinking water is good”, and so on.) Why does that matter? Because the Steering Subsystem is (more-or-less) a bunch of interconnected, hardwired, innate reaction circuits. Yes, animals have a rich, flexible, constantly-changing, and forward-looking understanding of the world, but this understanding is not in their Steering Subsystem, and in fact their Steering Subsystem has a quite limited and inflexible window into it. Indeed, for our purposes,[6] the only information that the Steering Subsystem has about the future is via the Thought Assessor predictions. 3.3 Two pathways for revaluation: valence predictions and visceral predictions Now let’s say we change the rat’s physiological state somehow—we make it hungry, thirsty, salt-deprived, whatever. And suppose that the rat now starts responding differently in anticipation of food / drink / whatever in a way, in a sensible and adaptive way. If that happens, there are two (non-mutually-exclusive) stories for how it happened: 3.3.1 Revaluation pathway 1: Change-of-physiological-state leads to different Thought Assessor outputs (I’ll focus on the valence predictor for simplicity, since I think that one is typically most important in this context. But note that one could tell a similar story for any of the visceral predictors as well.) Recall that the valence prediction is the output of a learning algorithm, trained from the past history of actual valence: So basically, things will be motivating if they seem (to the animal) to be similar to things that were motivating in the past. (More on this in §4.4 below.) Anyway, let’s go back to the instrumental incentive learning experiment cited above. What’s happening during thirsty “pretraining” is that the valence predictor learns the following rule: If a thought involves feelings-of-thirst interoceptive inputs,…and the same thought also involves drinking-sugar-water semantic / perceptual inputs,…that thought will be assigned positive valence by the Steering Subsystem. It learns this rule from direct experience. Then later on, during training, the rat learns that, if it presses the lever, it should expect to drink sugar water afterwards. (This is an update to its world-model a.k.a. Thought Generator, not a change in its Thought Assessors.) Then finally, during testing, the rat sometimes looks at the lever and an idea pops into its head: “maybe I’ll press the lever and then go get sugar water”. Meanwhile, this whole time, it is experiencing feelings-of-thirst interoceptive inputs. So, per the learned rule above, the valence predictor outputs a guess that whatever this thought is, it’s a great plan (very high valence). And then that valence prediction in turn leads to the actual valence being very positive as well (cf. “defer-to-predictor mode”). Finally, by basic properties of valence, the rat actually does it. 3.3.2 Revaluation pathway 2: Change-of-physiological-state leads to the Steering Subsystem emitting a different response to the same set of visceral predictions from the Thought Assessors. This, I claim, is what’s happening in the dead sea salt experiment. Regardless of whether or not the rats are salt-deprived, their Thought Assessors are emitting the same visceral predictions—I guess something related to an expectation of imminent salt, which might trigger salivation etc. However, when the rats are in a salt-deprived physiological state, their Steering Subsystem issues a different response to those same set of visceral predictions—most importantly for our purposes, it issues a positive-valence signal.[7] 3.4 Back to those two experiments above Going back to §2 above, both the Dead Sea Salt experiment and the Instrumental Incentive Learning experiment were set up such that Pathway 1 would be blocked—the Thought Assessors are trained during previous life experience, and nothing in that dataset would allow them to infer the relevance of physiological state is relevant.[8] So then the thing we learn is: There were relevant visceral predictions indicating an expectation of salt in the Dead Sea Salt experiment.[9] There were not relevant visceral predictions indicating an expectation of thirst-quenching in the Instrumental Incentive Learning experiment. (Why the difference? See next section!) In the latter (Instrumental Incentive Learning) case, the lack of response from non-pre-trained rats reflects a funny situation in terms of the Learning Subsystem / Steering Subsystem dichotomy: The rat brain has information reflecting the fact that drinking-sugar-water-when-thirsty is especially good—but that information is not in the Learning Subsystem, but rather implicit in the innate wiring of the Steering Subsystem.The rat brain has information reflecting the fact that drinking-sugar-water is expected in the near future—but that information is in the Learning Subsystem, and is not being communicated to the Steering Subsystem. So the necessary information for physiologically-appropriate actions is all in the brain somewhere, but it’s partitioned between the two subsystems in a way that preempts connecting the dots. 4. Various factors that may affect the extent to which revaluation does or doesn’t occur (I say “the extent to which”, not “whether”, because it’s generally a continuum, not a binary. For example, an animal can have a nonzero but very weak motivation to do something in the absence of incentive learning.) In order to relate the general considerations of §3 to experimental results, we need to talk about the circumstances under which the relevant visceral predictions do or don’t get to the Steering Subsystem. That depends on a lot of things, so let’s dive in. 4.1 Visceral predictions generally go rapidly to zero when there’s time, distraction, and/or indirection before the outcome (to a much greater extent than valence predictions) In the Dead Sea Salt experiment, the stick would pop out of the wall and the sound would play for 8 seconds, and then the stick would retract, the sound would stop, and the saltwater would at that same instant spray in the rat’s mouth.[10] In the Instrumental Incentive Learning experiment, the rat had to press the lever (or pull the chain) to grant access to the food magazine[11] (or water magazine), then walk over to the magazine, open its flap, and start eating (or drinking). So one possibility is that this is the difference: incentive learning is unnecessary for immediate expectations, but is necessary for more distant expectations, and even just a few extra steps and a few extra seconds is enough to get into the “distant” bucket. 4.1.1 Evidence that timing is very relevant As described in the Dickinson & Balleine 2002 review, even in the first instrumental incentive learning study, the experimenters astutely noticed that they were accidentally running two instrumental incentive learning experiments, not one. The intended experiment involved rats pulling on a chain or lever to gain access to the food or water magazine. The unintended experiment involved the rats subsequently opening the flap of the magazine to get at the food or drink behind it. If you think about it, both of these “experiments” are perfectly valid instrumental learning tasks. And yet, as it turned out, they gave different results! The chain-pulling / lever-pushing did require incentive learning, but the flap-opening didn’t—the flap-opening was more like the Dead Sea Salt experiment. Here’s the relevant quote: In his initial incentive learning study, Balleine (1992) reported that flap opening, unlike lever pressing and chain pulling, did not require an incentive learning experience for motivational control and was directly affected by shifts in food deprivation. In other words, there was a motivational dissociation between the two responses. Rats that had been trained hungry but not received incentive learning pressed the lever frequently but did not enter the magazine when tested sated in extinction. The corresponding animals that were nondeprived during training pressed [the lever] slowly but [they] frequently entered the magazine when tested hungry. To restate more intuitively: There were rats that had only had access to the food-access lever when super-hungry. Naturally, they learned that pressing the food-access lever was an awesome idea. Then they were shown the lever again while very full. They enthusiastically pressed the lever as before. But they did not enthusiastically open the food magazine. Instead (I imagine), they pressed the lever, then started off towards the now-accessible food magazine, then when they got close, they stopped and said to themselves, “Yuck, wait, I’m not hungry at all, this is not appealing, what am I even doing right now??”. (We’ve all been there, right?) And then they turned back without ever opening the flap. And then a bit later they lost their train of thought, and said to themselves, “Ooh, the lever! I remember that! I should press it, it was great!” And repeat. Vice-versa for a different set of rats who had only pressed the lever when full, and then were offered it when hungry. The above result could be a consequence of the magazine flap smelling like food, rather than related to the time-delay and distraction per se. But in a later (1995) paper, the experimentalists dove into this effect more systematically and deliberately, and confirmed that indeed incentive learning had a slight impact on almost-immediate consequences and a bigger impact on more-distant consequences. Some broader context also lends support to the idea that “immediate vs distant expectations” is highly relevant and thus an a priori plausible hypothesis for the observed discrepancy: In particular, in Pavlovian experiments, I believe it’s well-known that if there is a gap between the end of the conditioned stimulus (e.g. light) and the start of the unconditioned stimulus (e.g. electric shock), then the animal learns the association much more weakly, even at the scale of mere seconds. (Relevant keyword: “trace conditioning”.) And I imagine that the learning is weaker still if other potentially-distracting things are going on during that gap This insight knowledge has flowed from Pavlov into common practice via clicker training. If you’ve ever clicker-trained your pet, you’ll know that the delay between the animal’s action and its consequence, even at the scale of fractions of a second, is not a trivial detail but rather an absolutely central aspect of the experimental design. 4.1.2 Proposal: super-steep “discount rate” for visceral predictions, less so for valence predictions (I put scare-quotes around “discount rates” because I don’t think the brain’s time-discounting mechanism is at all similar to the one in ML. But at a vague qualitative level, for present purposes, I think the term “discount rate” is good enough—it’s conveying the right general idea.) Using the same color scheme as above, here’s a proposal: Everyday example to illustrate this graph: Start with an example near the left side of the graph. I tell you that your crush is about to walk into the room and ask you out, in 3… 2… 1… and then they enter the room. Now, this is a direct / vivid / strong thought—the idea of dating your crush is real and happening right now in front of your very eyes. This situation has a very strong effect on both the valence of your thoughts, and visceral reactions like racing heart, goosebumps, blushing, and so on.Now let’s jump to the right side of the graph. Your crush has told you that they’re interested in dating you, but first you need to get a real job. And now it’s two months later, and you’re spell-checking a cover letter. The valence associated with the dream of dating your crush has propagated through these many steps of indirection, all the way backwards from the imagined future blissful romance to the current tedious cover-letter spell-checking. (After all, you wouldn’t be pressing the spell-check button, if pressing the button had no appeal / positive valence whatsoever.) But the visceral reactions have not propagated through these many steps of indirection—pressing the spell-check button does not call forth any heart-racing, goosebumps, blushing, etc. Here’s another example, this one from Anthony Dickinson’s own life (see his 2010 book chapter). Anthony went to Sicily on holiday, ate a watermelon in the afternoon, then drank too much wine and threw up that evening. His vomiting triggered conditioned taste aversion for the watermelon—but he didn’t “know” that at the time. The next morning, he got thirsty again and set off in search of watermelon. It was only once he got very close to the watermelon stalls, where he could see the watermelons, that he felt any nausea at all. It progressed all the way to gagging when he put watermelon in his mouth. The takeaway is: the distant, abstract prospect of watermelon was able to trigger the valence-predicting Thought Assessor, but was not able to noticeably trigger the nausea-predicting visceral Thought Assessor. On the other hand, the immediate expectation of watermelon did trigger visceral predictions. (Of course, once he had tasted the watermelon and gagged, that’s an unpleasant experience, and hence this experience updates Anthony’s valence-predicting Thought Assessor to treat watermelon as demotivating. And the valence Thought Assessor is much better at responding to distant and abstract expectations. So after that morning experience (i.e., incentive learning), Anthony wouldn’t even start walking towards the watermelon stalls.) Thus, I claim, the valence predictor is designed to propagate more strongly into distant, indirect, and uncertain consequences, compared to the other “visceral” predictors. That seems at least vaguely in keeping with my earlier claim that the brain dedicates orders of magnitude more processing power to the valence prediction than to any of the hundreds of visceral predictions. (Interestingly, there are in fact experiments where the rat needs to press a lever then walk over to the magazine, but where the rats nevertheless immediately change their behavior in a physiologically-appropriate way without incentive learning. See §5.1.3 below for an example and how I would explain it.) 4.2 Different visceral predictions can be different This is obvious, but visceral predictions of salt-flavor / salivation, and visceral predictions of disgust-reactions, and visceral predictions of I’m-about-to-drink, etc., can all be different from each other. They can be easier or harder to learn, they can generalize in different ways, they can have different effective “time-discounting” per the previous subsection, and they can have different downstream consequences once they get to the Steering Subsystem. I fully expect that some experimental contrasts can be explained this way. For example, this kind of thing would be an alternate (not mutually exclusive with §4.1 above) explanation for the contrast between the Dead Sea Salt experiment and the Instrumental Incentive Learning experiment—the former involved salt and salt-deprivation, the other involved sugar-water and thirst. This could of course be checked experimentally, and I think the literature shows that the Instrumental Incentive Learning result stays the same if you switch it to salt. But in general, we should keep these kinds of hypotheses in mind. 4.3 Overtraining / habits are relevant too The Thought Assessors are assessing a thought. Thoughts can involve actions, and thoughts can involve expected consequences, or both. For example, if I’m walking to the shelf to get a piece of candy, the thing happening in my head is an outcome-related thought “I’m walking over to get the candy”. As it happens, one part of this thought (“I’m walking over…”) has no particular appeal to me. But that’s fine. The other part of the thought (“…to get the candy”) is very motivating. So the valence of the overall thought is positive, and I will do it. However, upon “overtraining” (a huge number of repetitions of a behavior), positive or negative valence can travel from a consequence to the preceding action, by TD learning, as discussed here and illustrated by the following picture: In the case mentioned above, if I have walked to the shelf to get candy enough times in the past, I might find myself walking to the shelf without even realizing why. I don’t have to be thinking of the candy, because the thought “I’m walking to the shelf” can summon positive valence all by itself. Well in any case, it’s natural and expected that the phenomena under discussion here (incentive learning, retasting, devaluation, revaluation) can manifest differently depending on whether the rat is in habit-mode (the rat is thinking about its actions but not their consequences) or outcome-mode (the rat is thinking about both its actions and its consequences). For example, Dickinson & Balleine 2010 cite Adams 1982 as an early demonstration of the fact “that overtraining can render instrumental responding impervious to outcome devaluation”—i.e., if the rat starts really liking (assigning positive valence to) the idea of pressing the lever, in and of itself, without really having the consequences in mind (i.e., represented in the Thought Generator), then it will do so even when it ought to know that the consequences have shifted to something it doesn’t care about. 4.4 The way that the Thought Assessors (especially valence prediction) generalizes from prior life experience is a whole can of worms As mentioned in §3.3.1, things will be motivating if they seem (to the animal) to be similar to things that were motivating in the past. However, the word “similar” is hiding a lot of complexity—indeed, much of animal intelligence, and even human intelligence, consists of sorting out what situations / actions / thoughts are or aren’t “similar”! 4.4.1 Example Here’s a fun example from Dickinson & Balleine 2002, where generalization determines the necessity of incentive learning. They say that Shipley & Colwill 1996 failed to reproduce the Instrumental Incentive Learning result of §2.2 above, but only because they used more dilute sugar-water. (Indeed, Dickinson & Balleine reproduced this failure-to-reproduce—i.e. they too found instant revaluation when they used more dilute sugar-water.) Their hypothesized explanation is as follows: Maybe sufficiently dilute sugar-water basically tastes like regular water, from the rats’ perspective. And the rats had previously drunk regular water when thirsty, lots of times in the course of their normal upbringing. So the incentive-learning was already complete, long before the experiment started. 4.4.2 Background on “generalization” For readers who are familiar with ML but not psych: “Generalization” (in this context) loosely corresponds to “If we gradient-update a function V via the labeled datapoint “V(X) is supposed to be Y”, then we are also incidentally, simultaneously, updating V such that V(X’) ≈ Y, for any inputs X’ that are sufficiently close to X in terms of their latent representation.” In this case, V is the valence predictor (or in principle, V could be any other Thought Assessor too), X is a thought, and Y is the predicted valence of that thought. In the dilute-sugar-water case mentioned above, the rats learned from life experience that V(the thought of drinking pure water when thirsty) >> 0, and this turned out to generalize to V(the thought of drinking dilute sugar-water when thirsty) >> 0, but it did not generalize to V(the thought of drinking concentrated sugar-water when thirsty) >> 0. 4.4.3 Dead Sea Salt experiment revisited My guess is that the dead sea salt experiment involved visceral predictions, and that’s the story I told above, but I’m not even sure of that! Alternatively, it could have just been their valence prediction function successfully generalizing from prior life experience. After all, in the course of their normal upbringing, the rats were never salt-deprived—or so the experimentalists say! But isn’t it likely that the rats were slightly salt-deprived now and then, over the course of their prior upbringing? And the experimentalists assure us that the rats had never before in their life tasted a very strong salt flavor while salt-deprived, but surely some bits of their feed had a slight salty flavor! So that gives us another hypothesis: maybe the rats’ valence functions successfully generalized from prior life experience to guess the general rule that saltwater-while-salt-deprived is good. That’s not my main guess, based on different experiments. But it’s definitely a hypothesis worth checking. 4.5 Upshot: even granting my theoretical framework, experimental results can still be a big pain to interpret I gave a couple examples above where you can accept my theoretical framework but still not know the underlying explanation for a particular experimental result. In particular, a priori, either or both of the time-delay / distraction issue (§4.1) and the salt-water-versus-sugar-water distinction (§4.2) could explain the requirement of incentive learning in the §2.2 experiment; and even the salt-water experiment on its own could in principle involve either of the two revaluation pathways of §3.3 (see §4.4.3). By the same token, when I come upon some study of monkeys responding or not responding to devaluation when their left orbitofrontal cortex and right amygdala are simultaneously deactivated (or whatever), I can usually list off a bunch of rather different hypotheses for what’s going on. It takes a lot of work to narrow it down. In fact, in practice, I don’t put incentive learning / devaluation / revaluation experiments very high on my list of best ways to learn useful things about the brain. They’re annoyingly indirect. By contrast, easier-to-interpret sources of information include tracer studies (which neurons connect to which neurons and how), stimulation or lesion studies (what does the animal do if you stimulate or destroy such-and-such neurons), evolutionary homologies and embryology (if two regions are evolutionarily and embryologically related, then maybe they’re running a similar algorithm), and of course thinking about algorithms and behavior. 5. Bonus: Some comments on how my model differs from some stuff in the neuro/psych literature 5.1 The literature distinction between “Pavlovian versus instrumental conditioning” is kinda related to my division of Thought Assessors into valence-prediction versus visceral-prediction 5.1.1 General discussion part 1: Pavlovian Start with Pavlovian conditioning. In my framework, visceral Thought Assessors are functions whose output is a scalar, and whose input is a “thought” (a high-dimensional object which includes not only direct sensory inputs but also where you are, what you’re doing, and everything else in your mental world). Each of these functions is built by supervised learning from past life experience. The relation to Pavlovian terminology is as follows: Literature My model“Pavlovian conditioning”≈“A visceral Thought Assessor has been updated by supervised learning.”“Unconditioned stimulus”≈“A situation where, when you put a rat in that situation, it triggers an innate behavior program stored in the rat’s Steering Subsystem (hypothalamus & brainstem), where this happens directly via the Steering Subsystem’s own sensory processing systems, without the Learning Subsystem (cortex, amygdala, etc.) playing a central role. Incidentally, this triggering will then send ground truth signals that update the visceral-prediction Thought Assessors via supervised learning.”“Conditioned stimulus”≈“A situation where, when you put the rat in that situation, the rat reliably has thoughts involving a thought-feature / world-model latent variable / pattern of cortical activation F, where there’s a visceral Thought Assessor which has previously learned to send a visceral prediction upon seeing F.”In Pavlovian conditioning, the green arrows trace the key pathway when encountering an unconditioned stimulus: there are both innate reactions from the Steering Subsystem, and ground-truth updates to one or more of the visceral Thought Assessors. The purple arrows trace the key pathway for the conditioned stimulus: the now-updated Thought Assessor notices something about that stimulus and sends down a visceral prediction. So yes, there’s a relationship between my model and the standard literature framework. But, on my model, the literature framework is somewhat misleading and unhelpful for two (related) reasons: First, the literature framework emphasizes external situations (“stimuli” that experimentalists can see and control), whereas mine emphasizes internal thoughts (what the rat is thinking about, a.k.a. thought-features / latent variables / patterns-of-active-neurons in the rat’s Thought Generator / world-model / cortex.) I understand where people were coming from—we inherited this terminology from behaviorism, and it’s a natural starting point for describing experiments (since it’s hard to read rat minds). Moreover, I suspect that neuro / psych people would protest that they’re perfectly well aware of the obvious fact that visceral reactions are proximally triggered by thoughts rather than “stimuli”. But still, I think the terminology we use exerts a strong pull on the kinds of hypotheses we entertain, and I think people are still being manipulated by the zombie remnants of 1950s behaviorism, more than they’d like to believe. Anyway, in my framework, it’s thought-features, not situations, that get tagged by Thought Assessors as evidence of an imminent visceral reactions—see my intuitive discussion in §9.3.1 here. Of course, external situations impact and constrain internal thoughts: if I’m sitting at my desk, then I can’t sincerely believe that I’m scuba diving. Or for a more experimentally-relevant example, if a loud sound suddenly starts, then that sound is going to impact my Thought Generator, in a way that I might describe in words as “I hear a sound!”. The sound will relatedly have an obvious and reproducible imprint on the pattern of neural activation in my auditory cortex—an imprint that my Thought Assessors (e.g. amygdala) can then learn to recognize. Given that, why am I harping on the distinction? Because internal thoughts can also include things that are not straightforwardly driven by external stimuli. Which brings us to my next point: Second, in my framework, a rat’s plans, actions, and expectations are all thought-features too—just as much as the thought-features connected to hearing a sound. For example, if I get up to open the window, then there’s a thought in my head to the effect of “I’m getting up to open the window right now”. That’s a thought in my Thought Generator, and therefore it is perfectly allowable for any of those related thought-features to get picked up by Thought Assessors as evidence relevant to the latter’s prediction task. Does this apply to rats too? Yes! Tons of experiments show that ideas can pop into a rat’s head—e.g. “maybe I’ll go to the left”—just as ideas can pop into a human’s head. Now, as it turns out, when rats are thinking of non-immediate consequences of their plans-of-action, those thought-features tend not to trigger strong visceral predictions, thanks to the time-discounting issue in §4.1. (They can trigger strong valence predictions.) …But that fact is kinda incidental rather than fundamental, and it might also be somewhat less true for monkeys and humans than for rats. I think some discussions in the literature go wrong by making much too big a deal out of this incidental, contingent observation, and incorrectly elevating it to a profound bedrock principle of their theoretical framework. That brings us to: 5.1.2 General discussion part 2: Instrumental conditioning My view, as in §3 above, is that, in addition to the visceral-prediction Thought Assessors, there’s also the all-important valence-prediction Thought Assessor. (See my Valence series.) Now, as above, some thoughts are not action-oriented, e.g. “I’m hearing a sound”. Other thoughts are action-oriented—e.g. “I’m gonna open the window”. The latter kind of thought often also incorporates the consequences of an action—e.g. when I think “I’m gonna open the window”, I might be visualizing a future world in which the window has now been opened. If an action-oriented thought has positive valence, then we’ll actually do that action, and if not, we won’t. That’s my model. So I think that instrumental conditioning experiments mostly revolve around valence predictions, whereas Pavlovian conditioning experiments mostly revolve around visceral predictions. But that correspondence (i.e., (instrumental, Pavlovian) ↔ (valence predictions, visceral predictions)) isn’t perfect. In particular, voluntary actions require positive valence (coming up from the Steering Subsystem), but they don’t strictly require positive valence predictions (coming down from the valence Thought Assessor). Alternatively, the Steering Subsystem can issue positive valence based on visceral predictions—cf. “Pathway 2”, §3.3.2 above. For example, maybe the rat has the idea “I’m gonna mate right now”, and then that idea triggers visceral predictions, and then those visceral predictions trigger positive valence, and so then the rat will proceed to mate, even if it’s never mated before. Now, there’s a distinction in the literature between instrumental and non-instrumental contingencies. In my framework, this distinction—or at least, the way people talk about it—strikes me as kinda foreign, or abstruse, or trivial, or something, I’m not sure. Here’s an example: if you put food at the end of a hallway, and rats run down the hallway to get to the food, is that “instrumental” behavior? I would give the obvious answer: “Yeah duh!”. But if you read the literature, this is treated as a dubious claim! In his 1983 book, Nicholas Mackintosh mentions (among other things) the following two pieces of purported counterevidence. First, various experiments such as Gonzalez & Diamond 1960 find that if rats walk down a hallway to an empty goal-box, and then later they’re put directly into the same goal-box when it’s full of food, then next time they’re in the hallway they’ll run down the hallway fast to get the goal-box. He describes this result as: “a classical contingency between goal-box and food is sufficient to cause rats to run down an alley to the goal-box before any instrumental contingency can take hold”. Second, he mentions a devious experiment from Logan 1960 where rats would only get food if they arrived after a certain amount of time; alas, the over-excited rats ran too fast ≳50% of the time. In my view, what’s happening in both cases is: rats think there might be food at the end of a hallway, and that thought is highly motivating (positive valence), so they run to go get it. This running deserves to be called “instrumental” if anything does—the rats are imagining a possible future course-of-action and its consequences, liking what they see, and then doing it. Again, we now have abundant direct evidence from neuroscience that rats are capable of this. And if that doesn’t fit the technical psych definition of “instrumental” for whatever reason, then, well, so much the worse for technical psych definition of “instrumental”! Anyway, I’ll now go through an example more relevant to the subject of this post. 5.1.3 Case study: Dickinson & Nicholas 1983 Anthony Dickinson and colleagues did an experiment in 1983 (original, later review article summary), reproducing and extending a 1968 experiment by Krickhaus & Wolf. In it, the rats had to walk over to the drink magazine, but nevertheless their behavior was immediately responsive to physiological state without incentive learning. This one used the salt protocol—the drink magazine had salt-water, and the rats were made to feel severely salt-deprived. I had claimed above (§4.1) that the delay between pressing the lever and walking to the magazine was evidently too long to allow visceral predictions to propagate backwards from the drinking to the lever. Is this experiment counterevidence? The possible answer “yes” is not a crazy idea a priori—the time-discounting is presumably a matter of degree, not binary. However, the authors did some brilliant follow-up experiments that suggest that the answer is approximately “no”—the visceral reactions don’t get connected to the idea of pressing the lever. So what’s going on? Let’s dive in. I’ll greatly oversimplify the setup and controls. Let’s just say: In “training”, the rats were put in a cage with two doors. Behind the left door is saltwater, and behind the right door is pure water. For one group of rats, the left door opens spontaneously sometimes, while the right door opens when the rat presses the lever. For the other group of rats, it’s the opposite: the right door opens spontaneously sometimes, while the left door opens when the rat presses the lever. The experiment is set up such that the two doors open equally often in aggregate. Then during the test, the lever was disconnected, the doors sealed shut, and the rats injected with chemicals to make them feel severely salt-deprived. Here’s the remarkable result: those two groups of rats pressed the lever the same amount! But this amount was still sensitive to the fact that they were salt-deprived, and to the number of times they had been exposed to the saltwater during training. Here’s the explanation I would offer. After training, the visceral reactions are not particularly associated with pressing the lever, in the rat’s mind, because of the time-delay issue as discussed in §4.1 above. It just takes too long to walk to the magazine. Instead, the visceral expectation-of-salt reaction is triggered by just being in the cage! After all, during training, being-in-the-cage was associated with the saltwater with zero seconds of delay. So during the test, the rat is just hanging out in the cage, and maybe especially looking at the door, and that triggers a weak visceral expectation-of-salt, which in turn prompts the Steering Subsystem to issue positive valence—in essence, the brainstem says to the cortex / Thought Generator: “whatever you’re thinking of doing right now, hey, go for it, it’s probably a great idea!” So then the rat gets excited and does … something. And some fraction of the time, the thing it does is to press the lever. After all, that’s a salient action that might well pop into the rat’s head (Thought Generator)—it’s an obvious outlet for the rat’s free-floating feelings of motivation. If the rat has a visceral expectation of salt due to being in the cage, but that expectation is not particularly tied to any specific action (due to the time delay), then it leads to a positive valence somewhat divorced from any specific action-oriented “outlet”—it’s a kind of “free-floating motivation”. That can lead to the rat pressing the lever much more than in the control condition. Again, since there’s no specific association between pressing-the-lever and visceral-expectation-of-salt, there’s really no reason we should expect the two groups of rats to press the lever at different rates. We should, however, generically expect the rats to be pressing the lever more when they’re salt-deprived. We should also expect more lever-presses when the salt was tasted more during training—that would lead to a stronger visceral expectation of salt. And that’s what the experiments showed. 5.2 Two complaints about the terms “model-based” or “outcome-directed” as used in the literature My foil for this section is the paper “Model-based and model-free Pavlovian reward learning: Revaluation, revision, and revelation” by Dayan & Berridge (2014). (If that paper isn’t a central representative of “the literature”, then oops, let me know and I’ll retitle this section.) 5.2.1 “Model-based” / “outcome-directed” squashes together two things that I see as profoundly different, namely (A) innate “business logic” in the Steering Subsystem versus (B) planning via a learned world-model in the cortex As described in this post, “business logic” is a software engineering term to refer to parts of source code that more-or-less directly implement specific, real-world, functional requirements: “IF the company has offices in Texas, THEN attach form SR008-04X”. By the same token, the genome directly builds a ton of innate “business logic” circuitry into the Steering Subsystem (hypothalamus and brainstem) that do specific, adaptive things: “IF I’m fertile, THEN increase my sex drive.” Or in the case at hand: “IF I’m salt-deprived, AND I have an imminent visceral expectation that I’m about to taste salt, THEN issue positive valence etc.” If you zoom in on (especially) the hypothalamus, you find this kind of logic everywhere—I go through a real-life example (“NPY/AgRP neurons” in the hypothalamus) here. Meanwhile, on the other side of the brain, the cortex (and thalamus etc.) are doing within-lifetime learning of a predictive, generative world-model. By the time I’m an adult, I can query this cortical world-model to get useful predictions like “If I jump in the lake, then I’m gonna be cold and wet and my cell phone will stop working.” In my preferred terminology, the thing the cortex does is a bona fide “model”, whereas the innate Steering Subsystem circuity is not. I suppose there’s no point arguing over terminology. But if you insist on calling both of those things “models”, let’s at least be clear that they are about as different as can possibly be. They’re in different parts of the brain; one is built by a within-lifetime learning algorithm while the other is innate; they each involve very different types of neurons interacting in very different ways; one is actually making predictions about the future while the other is only “making predictions” in an abstruse metaphorical sense. 5.2.2 “Model-based” / “outcome-directed” splits up two things that I see as the same kind of thing: learned habits and outcome-oriented plans I talked about habit-formation above—see §4.3. The Dayan & Berridge paper cited above describes habits as “model-free”. And I don’t like that. From my perspective: if a thought pops into my head to go to the store to buy candy, that thought is an activation state of my world-model. If a thought pops into my head to bite my nails right now, that thought is an activation state of my world-model too! As long as the Thought Generator is involved (i.e. voluntary behavior), I say it’s model-based. The cortex is always modeling. That’s just what the cortex does. See “cortical uniformity”—if any part of the cortex is learning a generative world-model (which is surely the case) then one should strongly presume that the entire cortex is learning a generative world-model. The generative world-model can model what will happen in the distant future, and it can also model what I’m doing right now and in the immediate future. It can also (optionally) model both at once including how they relate. And (given positive valence) it can make all these things happen. It’s a very impressive world-model! We should give it more credit! To be clear, I do like the distinction between voluntary actions (e.g. talking) and involuntary actions (e.g. crying). That’s a sharp and important distinction in my framework. (I think this might be related to “two-process learning theory” in the old psych literature? Not sure.) Some notes on that: We can pin down this distinction by saying that the voluntary actions happen if and only if the idea of doing them is positive-valence, while involuntary actions can happen regardless of their valence. Involuntary actions are associated with innate behavioral programs stored in the Steering Subsystem (for example, there are a bunch in the brainstem periaqueductal gray).Be careful not to confuse “voluntary” with “ego-syntonic”—see my handy chart here.Confusingly, you can voluntarily control “involuntary” actions to a degree—e.g., many people can cry on demand. But that happens via “tricking” your visceral Thought Assessors (see discussion in §6.3.3 here), rather than through the direct motor outputs of the Thought Generator.[12] 5.3 My “valence function learning and generalization” discussion is very similar to Dickinson & Balleine’s “Hedonic Interface Theory” If any readers are trying to compare my discussion of how the valence predictor Thought Assessor learns and generalizes (cf. §3.3.1 and §4.4) with Dickinson & Balleine’s “Hedonic Interface Theory”, I think they basically agree, but to really understand the correspondence you would need to also read my Appendix A of my valence series explaining how I think hedonic tone / pleasure connects to valence. The short version of that is: hedonic tone / pleasure plays a “ground truth” role for the valence predictor, analogous to how unconditioned stimuli like shocks play a “ground truth” role for the visceral predictors. (Thanks Patrick Butlin for first informing me that “incentive learning” exists way back when. Thanks Anthony Dickinson for carefully reading, and devastatingly criticizing, an early draft of this post with a somewhat different thesis. Thanks also to Linda Linsefors, Cameron Berg, Seth Herd, Tom Hazy, and Patrick Butlin for critical comments on earlier drafts.) ^ Terminology: In a Pavlovian (a.k.a. classical) conditioning experiment, something happens to a rat (e.g. it gets electrocuted) at some random time, independent of what the rat is doing. In an operant (a.k.a. instrumental) conditioning experiment, the rat does something (e.g. presses a lever), and then it has some consequence (e.g. it gets electrocuted). ^ Most of my key sources are from the 1980s or earlier (or much earlier), generally predating the flood of crappy psychology papers. I’m also mostly relying on the work of Anthony “Tony” Dickinson, whose papers strike me as exceedingly fastidious—he might do an experiment that suggests some interesting result, and then immediately do seventeen more control experiments to rule out every possible alternative explanation. Moreover, the interesting results were generally very obviously above noise (as opposed to p=0.0499)—e.g., one group of rats might press a lever twice as often as the other group. And at least some of the key results have been reproduced by multiple labs anyway. ^ The description in this section is mostly self-plagiarized from an ancient blog post I wrote. ^ In the Dead Sea Salt paper they call it a “lever”, but I’m calling it a “stick” in this post, because I want to follow common non-technical usage in which a “lever” is “a stick that does something when you pull it”, whereas the Dead Sea Salt Experiment “stick” does not do anything except sit there. (The experiment is “Pavlovian”, not “instrumental”, see previous footnote.) ^ Fun fact (details here): There’s no sharp line between the putamen and caudate, nor between the caudate and (core side of the) nucleus accumbens. It’s just one big structure that got split up into three terms by confused early neuroscientists. (There’s likewise a smooth transition between nucleus accumbens shell and lateral septum, and so on.) In fact, the brilliant neuroanatomist Lennart Heimer was against calling it the “nucleus accumbens” in the first place—it’s not a “nucleus”, he wrote, so we should just call it the “accumbens”! ^ As in the bottom of the diagram, the Steering Subsystem also has access to “raw informational inputs”. Those would be relevant if, for example, the rat could smell the food before getting to it. That could be applicable in certain cases, but I think all the specific experiments I’ll mention here had control experiments appropriate to rule out that factor. ^ Side-note: From an ecological perspective, Pavlovian experiments are pretty weird—the rats are doing nothing in particular, and then all of the sudden something happens to them. Like, under ecological circumstances, a rat might be overcome by fear when it sees an incoming bird, or a rat might be flooded with pain when it walks into a cactus, etc. And then the rat’s brain algorithms are set up to generally build in the right associations and take sensible actions, both now and in the future—e.g., don’t walk into cactuses. But in a Pavlovian experiment, there is no sensible action—the rat just gets randomly zapped or sprayed or whatever no matter what it does—and thus the rats wind up doing something somewhat random—cf. the distinction between “sign-tracking” and “goal-tracking” rats. See also §5.1.3. ^ One should be very cautious in making claims that a rat couldn’t possibly deduce something from its previous life experience—see §4.4.1 below on “accidental” incentive learning earlier in life. ^ However, see §4.4.3 for an alternative hypothesis. ^ Methodological details are in the original paper’s supplemental information. If you’re wondering why I’m treating this as “0-second delay” instead of “8-second delay”, I claim that the rat is generalizing from “what it’s thinking after the sound has already been on for 7.99 seconds” to “what it’s thinking after the sound has just turned on”—see §4.4 below. ^ “Magazine” is a standard term in the context of rat experiments—here’s a google image search for “rat experiment food magazine”. ^ More specifically, in my view, “voluntary” control corresponds to cortex layer 5PT output signals coming out of any of the cortical areas wherein those signals are trained by the “main” a.k.a. “success-in-life” RL signals related to valence—as opposed to outputs from other parts of the cortex, or from the striatum, amygdala, hypothalamus, brainstem, etc.
2024-06-25
https://www.lesswrong.com/posts/SyeQjjBoEC48MvnQC/formal-verification-heuristic-explanations-and-surprise
SyeQjjBoEC48MvnQC
Formal verification, heuristic explanations and surprise accounting
Jacob_Hilton
ARC's current research focus can be thought of as trying to combine mechanistic interpretability and formal verification. If we had a deep understanding of what was going on inside a neural network, we would hope to be able to use that understanding to verify that the network was not going to behave dangerously in unforeseen situations. ARC is attempting to perform this kind of verification, but using a mathematical kind of "explanation" instead of one written in natural language. To help elucidate this connection, ARC has been supporting work on Compact Proofs of Model Performance via Mechanistic Interpretability by Jason Gross, Rajashree Agrawal, Lawrence Chan and others, which we were excited to see released along with this post. While we ultimately think that provable guarantees for large neural networks are unworkable as a long-term goal, we think that this work serves as a useful springboard towards alternatives. In this post, we will: Summarize ARC's takeaways from this work and the problems we see with provable guaranteesExplain ARC's notion of a heuristic explanation and how it is intended to overcome these problemsDescribe with the help of a worked example how the quality of a heuristic explanation can be quantified, using a process we have been calling surprise accounting We are also sharing a draft by Gabriel Wu (currently visiting ARC) describing a heuristic explanation for the same model that appears in the above paper: max_of_k Heuristic Estimator Thanks to Stephanie He for help with the diagrams in this post. Thanks to Eric Neyman, Erik Jenner, Gabriel Wu, Holly Mandel, Jason Gross, Mark Xu, and Mike Winer for comments. Formal verification for neural networks In Compact Proofs of Model Performance via Mechanistic Interpretability, the authors train a small transformer on an algorithmic task to high accuracy, and then construct several different formal proofs of lower bounds on the network's accuracy. Without foraying into the details, the most interesting takeaway from ARC's perspective is the following picture: Figure adapted from this post In the top right of the plot is the brute-force proof, which simply checks every possible input to the network. This gives the tightest possible bound, but is very long. Meanwhile, in the bottom left is the trivial proof, which simply states that the network is at least 0% accurate. This is very short, but gives the loosest possible bound. In between these two extremes, along the orange Pareto frontier, there are proofs that exploit more structure in the network, leading to tighter bounds for a given proof length, or put another way, shorter proofs for a given bound tightness. It is exciting to see a clear demonstration that shorter proofs better explain why the neural network has high accuracy, paralleling a common mathematical intuition that shorter proofs offer more insight. One might therefore hope that if we understood the internals of a neural network well enough, then we would be able to provide provable guarantees for very complex behaviors, even when brute-force approaches are infeasible. However, we think that such a hope is not realistic for large neural networks, because the notion of proof is too strict. The basic problem with provable guarantees is that they must account for every possible way in which different parts of the network interact with one another, even when those interactions are incidental to the network's behavior. These interactions manifest as error terms, which the proof must provide a worst-case bound for, leading to a looser bound overall. The above picture provides a good demonstration of this: moving towards the left of the plot, the best bound gets looser and looser. More generally, it is hard to prove a lack of structure – another common mathematical intuition. There are many examples of formal phenomena that appear to arise from a lack of structure, but for which proving this is considered out-of-reach: in mathematics, the normality of π, or the Collatz conjecture, which has natural generalizations that are known to be undecidable; in computer science, the behavior of certain cryptographic hash functions; or in physics, the diffusion of a gas across a room in some formal model. Heuristic explanations Since proving a lack of structure is a key obstacle to formal verification of neural networks, ARC has been pursuing an alternative approach. Instead of attempting to prove a lack of structure, we instead assume by default a lack of structure, and produce a best guess given any structure that has been pointed out. We call such an estimate a heuristic explanation (or sometimes a heuristic argument). Returning to the above examples of unproven statements, in each case there is an informal, probabilistic argument that explains the phenomenon. For example, to estimate the density of 0s in the decimal expansion of π, we treat the digits as uniformly random, giving a best guess of 1/10. This is the sort of reasoning we wish to permit in a heuristic explanation. Unlike a proof, a heuristic explanation is defeasible, meaning that its conclusion is open to revision once further structure has been pointed out. Our hope is to reach a robust conclusion by having searched thoroughly enough for structure, rather than worst-casing over all possible structure as in a proof. Some ideas about how to potentially formalize this hope are described in the paper Formalizing the presumption of independence. Informal heuristic explanations are already commonplace in mechanistic interpretability, such as when analyzing circuits. For example, consider the following circuit between an "early curve detector" neuron and a "late curve detector" neuron, consisting of the weights in the 5x5 convolution between the two neurons: Figure copied from Olah et al. Imagine attempting to use this circuit to provide some sort of formal guarantee about the behavior of the late curve detector, given the behavior of the early curve detector. Analyzing this circuit alone would be insufficient, because there could be confounding interactions with other neurons. For example, we would need to prove that there is no "anti-early curve detector" that cancels out the "early curve detector". Nevertheless, it is considered reasonable to assume by default that there is no "anti-early curve detector" if no such thing has been pointed out. To help validate these ideas, ARC has produced a heuristic explanation of the max-of-k model studied in the Compact Proofs paper. The explanation very roughly mirrors the analysis in the "cubic proof" from that paper, but focuses on approximating error terms rather than bounding them. Consequently, the explanation ends up with estimates that are much closer to the true accuracy of the model than the lower bounds given by the proofs: max_of_k Heuristic Estimator Surprise accounting We are interested in proofs and heuristic explanations not only to provide assurances about properties of networks, but also to help explain why those properties hold. We have seen that for proofs, more insight is offered by shorter proofs with tighter bounds. Correspondingly, we can quantify the amount of understanding encapsulated by a heuristic explanation using a method we have been calling surprise accounting. The intuition behind surprise accounting is to ask: how surprising is the phenomenon (in an information-theoretic sense), now that we have access to the heuristic explanation? The total surprise decomposes into two pieces: The surprise of the explanation itself: how many times did the explanation just "work out" for no apparent reason? Or put another way, how many free parameters did the explanation have? The surprise of the phenomenon given the explanation: how likely was the phenomenon to happen by chance, given the information provided by the explanation? These two pieces are analogous to proof length and bound tightness respectively, but they have the advantage of being measured in the same units, namely bits. The total of the two pieces is also very similar to the Bayesian information criterion (BIC) for probabilistic models, which has a similar decomposition. From the vantage point of mechanistic interpretability, surprise accounting offers an answer to the question, "What counts as an explanation?" For example, when a neural network memorizes facts, is a lookup table a valid explanation for its behavior? From the perspective of surprise accounting, an explanation for the behavior that makes use of how the network generalizes may have lower total surprise. But if there is no explanation with lower total surprise, then the lookup table explanation is good enough. To make this idea more concrete, we will go through a worked example, making use of the following basic formula: Total surprise = surprise of explanation + surprise given explanation Worked example: Boolean circuit For our worked example of surprise accounting, we will use the following Boolean circuit consisting of AND, OR and NOT gates: The circuit has a tree structure, and reuses the same 8 inputs in both the top and bottom halves. But other than this basic structure, it looks pretty random. However, if we start trying a few different inputs, we notice something surprising: the network seems to always output TRUE. Why is this? Brute-force Without any explanation at all, there are 28 = 256 possible inputs, and the network could have output TRUE or FALSE for each of these. So the surprise of the (empty) explanation is 0 bits, and the surprise given the explanation is 256 bits. This is exactly the number of cases that the brute-force proof would need to check. No explanation: 0 + 256 = 256 bits Final OR gate A very simple explanation we could give is to notice that the final gate is an OR rather than an AND gate. Since an OR gate outputs TRUE ¾ of time on random inputs, it is significantly less surprising that the network always outputs TRUE. Quantitatively, the surprise of the explanation is 1 bit (for noticing that the final gate was an OR rather than an AND), and the surprise given the explanation is 256 log2(1/0.75) bits. Final OR gate explanation: 1 + 256 log2(1/0.75) ≈ 107.25 bits This is a substantial improvement over having no explanation at all. Note, however, that this explanation is not a proof. Pattern of NOT gates We can improve upon this by noticing patterns in the structure of the network. One such pattern is how the NOT gates are arranged. For each input to a gate in the top half of a network, there is a corresponding input to a gate in the bottom half of the network. For the leftmost layer, the presence of a NOT gate is always different, and for every subsequent layer, the presence of a NOT gate is always the same: This means each gate input is anticorrelated with the corresponding input in the other half of the network: for the leftmost layer, the correlation is −1; for the next layer, the correlation is −0.5; for the layer after that, −0.25; and the two inputs to the final OR gate have a correlation of −0.125.[1] Implicitly, these correlations assume that the AND and OR gates were chosen independently at random, since we have not yet noticed any particular structure to them. The surprise of this explanation is 15 bits for the 15 pairs of inputs (7 same, 8 different), plus the 1 bit carried over from the previous explanation for the final OR gate, for a total of 16 bits. Given the explanation, the probability that the final gate outputs TRUE on a random input has increased to 0.75 + 0.25 × 0.125 = 0.78125 because of the −0.125 correlation, so the surprise given the explanation is now only 256 log2(1/0.78125) bits. Final OR plus pattern of NOT gates explanation: 16 + 256 log2(1/0.78125) ≈ 107.17 bits This is only a fraction of a bit better than the previous explanation, but sets us up for a more substantial improvement. Pattern of AND and OR gates If we look at the network again, we see even more structure: every time there is an AND gate in the top half of the network, there is OR gate in the bottom half of the network, and vice versa: Hence each gate input is actually perfectly anticorrelated with the corresponding input in the other half of the network. Put another way, the bottom half of the network is exactly the negation of the top half of the network, by De Morgan's laws. The OR of anything and its negation is always TRUE, so this completely explains why the network always outputs TRUE. The surprise of this explanation is 16 bits carried over from the previous explanation, plus 7 bits for the 7 pairs of AND and OR gates, for a total of 23 bits. The surprise given the explanation is now 0 bits. Pattern of all gates explanation: 23 + 0 = 23 bits A natural question is whether there is further hidden structure that permits an explanation with even lower total surprise. From what I have said so far, perhaps there could be. But it turns out that I constructed the network as follows: first I chose the top half of the network by randomly choosing AND or OR for the 7 gates and randomly choosing either a NOT gate or no NOT gate for the 14 inputs; then I copied and negated this random network using De Morgan's laws; and finally I connected the top and bottom halves by an OR gate. So there really is no additional structure that we could exploit, unless my construction was accidentally redundant in some way or a pattern arose by fluke. More generally, we can compare the total surprise of an explanation with the amount of optimization applied to produce the phenomenon. If the two match, then there is no reason to expect there to be a better explanation. Note that in this case, the best explanation corresponded to a proof, but this need not be the case in general. For example, if the circuit implemented a cryptographic hash function, we would not expect to be able to do better than to treat the output of the hash function as random. Discussion ARC is currently working on designing algorithms for finding and making use of heuristic explanations for neural networks. At a high level, one way this could work is as follows: Set up a "generic" heuristic explanation with free parameters. In the worked example above, the "same" or "different" arrows are free parameters of the explanation. For a realistic neural network, free parameters might appear in an activation model such as a variational autoencoder (perhaps in a similar fashion to how sparse autoencoders are currently used in mechanistic interpretability). Optimize the parameters of this explanation using gradient descent to minimize total surprise, as measured using surprise accounting. We expect to have to do this in parallel with model training in order to account for how the model itself was optimized (including any possible backdoors). Use the explanation to heuristically verify desired properties of the network, or for other downstream applications such as mechanistic anomaly detection or eliciting latent knowledge. We have omitted many details about how and why such an approach might work, and plan to come back to some of these in future posts. Conclusion Heuristic explanations are a natural extension of provable guarantees that we believe have much better potential to scale to large, sophisticated neural networks. Surprise accounting offers a way to quantify the quality of a heuristic explanation, potentially enabling useful explanations to be found automatically. Roughly speaking, the reason that the anti-correlation halves with each layer is that if the subsequent AND or OR gates are different, then the anti-correlation is maintained, whereas if they are the same, then the anti-correlation is eliminated. Making this argument more precise is left as an exercise to the reader. ↩︎
2024-06-25
https://www.lesswrong.com/posts/Fr6eJkjYWG9Mw6XQc/how-good-are-llms-at-doing-ml-on-an-unknown-dataset
Fr6eJkjYWG9Mw6XQc
How good are LLMs at doing ML on an unknown dataset?
havard-tveit-ihle
I just ran two evaluation tests on each of the three leading LLM chatbots, GPT4o, Claude Sonnet 3.5 and Gemini advanced. In the challenge the models were presented with a novel dataset, and were asked to develop a ML model to do supervised classification of the data into 5 classes. The data was basically 512 points in the 2D plane, and some of the points make up a shape, and the goal is to classify the data according to what shape the points make up. The first dataset challenge was probably too hard, and none of the models did better than chance on the test set, although Sonnet 3.5 was very close to an approach that would actually have worked. For the second challenge I made the dataset much easier by making the position, size and rotation of the shapes the same in all the samples. Here GPT4o managed to get an accuracy of almost 80%, while Sonnet had the right approach to get 95% accuracy, but was disqualified for selecting models based on how they did on the test set, if they had selected based on training set performance, they would have gotten 95%, after that Sonnet went ahead and shot itself in the foot, never achieving any good results. Gemini did not get any valid results in either experiment. Overall the performance of the models was fairly unreliable and they made several mistakes displaying a lack of basic understanding and intuition about the data and ML in general. The first challenge (the hard one) The prompt: This is a challenge to test how good you are at doing machine learning on an unknown dataset. The data is an array of dimension (1000, 512, 2) which consists of 1000 samples, each of which contains two arrays of length 512, taking real values from 0 to 1. These two arrays are the x and y coordinates of 512 points in the plane. Some of these points form a shape, while the other points are just noise. There are 5 possible shapes in total, circe, square, triangle, pentagon and star. The shape size, position and orientation are random for each sample. The goal is to train a machine learning model to predict which one of the five shapes is present in the current sample. You should thus train a machine learning model to take in 512 coordinate pairs, and return 5 classification logits, giving the probabilities (after a softmax, which is part of the CrossEntropyLoss) of the different shapes. Only one shape is present in each sample. Your goal is to get the lowest average cross-entropy loss on the test set, which is the same size as the training set. Here are the rules: - You are to provide the python code, a single script of at most 400 lines (using pytorch), which will be run by me on a single A800 (80GB) GPU for at most one hour. Make it easy for me to choose which gpu to run your code on (by changing one line or something). - You have 5 chances to provide code, or to get any feedback from me (so please provide code in each of your responses or you will waste one of your chances). You will get the terminal output from running the code provided to you as a text file after the first 4 submissions to give you feedback on your progress (depending on what you choose to output to the terminal). I will simply use: "python your_script.py > output.txt 2>&1". You will also get to know your current loss on the test set. - Do not import any weird packages, try to stick to pure pytorch numpy etc. If you need another module I may be open to it, but you need to show me a good reason. - Your model will be evaluated on a test set that is iid with the training data. You are not allowed to look at the test data. Attached is a png showing 9 scatterplots corresponding to the 9 first samples of the training set, this is to give you an idea what the data looks like. Below is an example script using a simple fully connected model to solve the task achieving a test loss of about 1.6, which corresponds to random chance. This is mostly just to show you how you can load in the training data and to give you a random baseline to compare your results to, you should come up with a better method. Feel free to use any code you want from the example. Discuss with yourself which methods could work on a dataset like this, and be sure to think and plan you actions before you actually start writing the code, since you only have a few chances. This is a hard machine learning task! Make sure to give it your best and push the performance of your model to the limit (the other contestants sure will do). Good luck! Here is the code: import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch.utils.data import DataLoader import numpy as np import matplotlib.pyplot as plt # load data with a basic dataloader class Dataset(torch.utils.data.Dataset): def __init__(self, train_file, label_file): self.train_data = np.load(train_file) - 0.5 self.label_data = np.load(label_file) print(self.train_data.shape) def __len__(self): return len(self.train_data) def __getitem__(self, idx): shuffle_order = np.random.permutation(len(self.train_data[idx])) return torch.tensor(self.train_data[idx][shuffle_order], dtype=torch.float32), torch.tensor(self.label_data[idx], dtype=torch.long) # define the model class FCModel(nn.Module): def __init__(self): super(FCModel, self).__init__() self.fc1 = nn.Linear(1024, 512) self.fc2 = nn.Linear(512, 256) self.fc3 = nn.Linear(256, 5) def forward(self, x): x = F.relu(self.fc1(x.flatten(start_dim=1))) x = F.relu(self.fc2(x)) x = self.fc3(x) return x device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # training loop train_data = Dataset('train_data.npy', 'train_flags.npy') batch_size = 64 n_epochs = 100 train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True) model = FCModel() model.to(device) optimizer = optim.AdamW(model.parameters(), lr=0.001, weight_decay=0.02) criterion = nn.CrossEntropyLoss() for epoch in range(n_epochs): for i, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) loss = criterion(output, target) loss.backward() optimizer.step() print(f'Epoch {epoch}, iter {i}, loss: {loss.item()}') Results: Here I will summarize and discuss the results of each model on the first (hard) challenge. GPT4o Full response 4o started out with a PointNet inspired architecture where you try to extract features from points in a point-cloud dataset and then combine the features into a common fully connected network. The problem with the approach is that 4o uses a 1D CNN on the points, which does not make much sense since the order of the points is arbitrary. For this reason they cannot even fit the training data. 4o responded by doubling down and making the network larger, introducing some bugs that were not fixed until the final submission, which did not do any better than the earlier result. Sonnet 3.5 Full response Sonnet went for a seemingly very similar arcitecture as GPT4o, with a feature extractor with a 1D CNN, however, it uses a kernel size of 1, which means it acts on each point separately (and could just be replaced by fully connected layers acting on each point separately), followed by taking the maximum among all the 512 points for each feature. In fact this is almost identical to the architecture that Gemini Advanced came up with (but never got to work), only Gemini wrote it in terms of fully connected layers. A nice part about this arcitecture is that it is invariant under a change of order of the points, making it less likely to memorize the training data. Probably both Sonnet and Gemini saw this exact architecture somewhere, and did not both make up the same on the spot. Sonnet did somewhat better than chance on the training data, while not doing any better on the test set. Sonnet then made the model larger and more complex, even adding an attention layer to make the embeddings of the different points talk to eachother before going through the maximum step, which I think is a good idea, but for some reason did not make the training work better here. Perhaps because it did not have enough data to get the training going. After the third iteration failed to do any better Sonnet actually decided that it needed to fundamentally rethink its approach (well done!). The idea they came up with was to make a 2D histogram of the points and send that histogram into a 2D conv net (which is a great idea, this will allow the net to actually find and learn the shapes!). This worked a bit too well on the first try and Sonnet completely overfit the training data, while getting a large loss on the test data. Sonnet responded by adding some very reasonable data augmentation steps to the procedure, adding some regularization, as well as k-fold cross-validation. These are all sensible things to improve the generalization to the test data, unfortunately this was the last submission and Sonnet had introduced some bugs in the addition of the new steps (I even gave it one more chance to fix the bug, since it was so close to have something that works, but it did not fix the bug even then). I did a few modifications to the code of Sonnet, implementing a random downscaling and movement of the points before the histogram in addition to the flips and 90 deg rotations that Sonnet implemented. I increased the resolution of the histogram by a factor of two and made it rather just count if a pixel was hit or not, instead of counting the number of times a pixel was hit hist = np.minimum(hist, 1) - 0.5. I also increased the number of epochs to 1500, and changed the hyperparameters of the lr-scheduler to account for the increased number of epochs. These changes led to an accuracy of about 70% on the test set, which is decent (I think) for this dataset. Gemini Advanced Full response Gemini struggled to follow instructions. The first response did not contain a complete code, but when I asked for it, the second response did contain code. The code had a similar idea to the others as it was inspired by PointNet, the feature extractor was based on fully connected layers applied separately to each point, like Sonnet did, before doing a max-pooling operation over the dimension of the 512 points, also like Sonnet. Gemini also had some interesting data augmentations, a random rotation and scaling of all the points, which would have helped with generalization. Sadly there was a bug in the code, which gemini tried to fix, but then it tried to run the code in its own system (which makes no sense) and got really confused, and I could not get any more coherent results from it. The second challenge (the easy one) After none of the models did better than chance on the first challenge, I decided to redo the challenge with an easier dataset. I fixed the size, rotation and position of the shapes to be the same in all samples, which makes them much easier to find and distinguish. I also changed the rules to make them run the evaluation on the test set themselves, also limited the GPU run time to five minutes (mostly to make it easier for me). The prompt: This is a challenge to test how good you are at doing machine learning on an unknown dataset. The data is an array of dimension (1000, 512, 2) which consists of 1000 samples, each of which contains two arrays of length 512, taking real values from 0 to 1. These two arrays are the x and y coordinates of 512 points in the plane. Some of these points form a shape, while the other points are just noise. There are 5 possible shapes in total, circe, square, triangle, pentagon and star. The shapes are always centered at (0.5, 0.5) and each shape always has the same size and orientation, although the points are randomly placed along the shape. The goal is to train a machine learning model to predict which one of the five shapes is present in the current sample. You should thus train a machine learning model to take in 512 coordinate pairs, and return 5 classification logits, giving the probabilities (after a softmax, which is part of the CrossEntropyLoss) of the different shapes. Only one shape is present in each sample. Your goal is to get the lowest average cross-entropy loss on the test set, which is the same size as the training set. Here are the rules: You are to provide the python code, a single script of at most 400 lines (using pytorch), which will be run by me on a single A800 (80GB) GPU for at most five minutes. Make it easy for me to choose which gpu to run your code on (by changing one line or something). You have 5 chances to provide code, or to get any feedback from me (so please provide code in each of your responses or you will waste one of your chances). You will get the terminal output from running the code provided to you as a text file after the first 4 submissions to give you feedback on your progress (depending on what you choose to output to the terminal). I will simply use: "python your_script.py > output.txt 2>&1". Your best test accuracy is the one that counts. Do not import any weird packages, try to stick to pure pytorch numpy etc. If you need another module I may be open to it, but you need to show me a good reason. Your model should be evaluated on a test set that is iid with the training data. You are not allowed to train on the test data. Attached is a png showing 9 scatterplots corresponding to the 9 first samples of the training set, this is to give you an idea what the data looks like. Below is an example script using a simple fully connected model to solve the task achieving a test accuracy of about 0.31, with 0.25 being random chance. This is mostly just to show you how you can load in the training data and to give you a baseline to compare your results to, you should come up with a better method. Feel free to use any code you want from the example. Discuss with yourself which methods could work on a dataset like this, and be sure to think and plan you actions before you actually start writing the code, since you only have a few chances. This is a hard machine learning task! Make sure to give it your best and push the performance of your model to the limit (the other contestants sure will do). And make sure to follow the rules! Good luck! Here is the code: import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch.utils.data import DataLoader, Dataset import numpy as np class Dataset(Dataset): def __init__(self, data_file, label_file, train=True): self.data = np.load(data_file) - 0.5 self.label_data = np.load(label_file) def __len__(self): return len(self.data) def __getitem__(self, idx): return torch.tensor(self.data[idx], dtype=torch.float32), torch.tensor(self.label_data[idx], dtype=torch.long) class FCModel(nn.Module): def __init__(self): super(FCModel, self).__init__() self.fc1 = nn.Linear(1024, 512) self.fc2 = nn.Linear(512, 256) self.fc3 = nn.Linear(256, 5) def forward(self, x): x = F.relu(self.fc1(x.flatten(start_dim=1))) x = F.relu(self.fc2(x)) x = self.fc3(x) return x device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print(f'Using device {device}') # training loop train_data = Dataset('train_data.npy', 'train_flags.npy') test_data = Dataset('test_data.npy', 'test_flags.npy') batch_size = 64 n_epochs = 10 train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True) test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=False) model = FCModel() model.to(device) optimizer = optim.AdamW(model.parameters(), lr=0.0001, weight_decay=0.01) criterion = nn.CrossEntropyLoss() for epoch in range(n_epochs): losses = [] accuracies = [] model.train() for i, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) loss = criterion(output, target) loss.backward() optimizer.step() losses.append(loss.item()) acc = (output.argmax(dim=1) == target).float().mean() accuracies.append(acc.item()) test_losses = [] test_accuracies = [] model.eval() with torch.no_grad(): for i, (data, target) in enumerate(test_loader): data, target = data.to(device), target.to(device) output = model(data) acc = (output.argmax(dim=1) == target).float().mean() loss = criterion(output, target) test_losses.append(loss.item()) test_accuracies.append(acc.item()) print(f'Epoch {epoch+1}, train loss: {np.mean(losses):.4f}, test loss: {np.mean(test_losses):.4f}') print(f'Epoch {epoch+1}, train accuracy: {np.mean(accuracies):.4f}, test accuracy: {np.mean(test_accuracies):.4f}') Results: GPT4o Full response GPT4o went for basically exactly the same approach as for the first challenge, but now it actually works ok. The first submission gets an accuracy of 58% and the third (with some regularization and some more epochs) reaches 78%. It then tries to do some more augmentation and scaling up the network, but does not get any better results. Even though 4o gets decent results with the 1D CNN feature extractor, this is still not a good idea. A CNN finds local structure, but the order of the points is random, so it will just pick up whatever structure there is between random points. There is local structure in this data of course, and you can get to this local structure by binning the data in 2D and doing a CNN (which is the approach Sonnet 3.5 eventually came up with in challenge 1). You could also use some kind of continous positional embedding. A cheap way to get some of the local structure of the data using the 1D CNN feature extractor chosen by 4o, is to order the points in a more systematic way. For example, if I add the following line x = x[np.argsort(x[:, 0])] (i.e. sorting the points based on the x-values) in the __getitem__ function in the Dataset object, without any further changes to the code of 4o, we go from about 78 % to about 90% accuracy on the test set. Sonnet 3.5 Full response Sonnet went for almost exactly the same model as it used for challenge 1. Sonnet achieved a test accuracy of about 95 % with it's first submission, however I disqualified that submission because it selected the best model based on performance on the test set, which violates (at least the spirit of) the rules. This did not change the training in any way, and it seems to robustly get to about 95 % without such selection, and I believe you will have to make some effort to do much better than 95 %, so Sonnet essentially solved the challenge. Sonnet went back to fix the part where it used test data to select the best model. Instead it used some of the training data as a validation set, which is a pretty good idea in general, however it also added some augmentations to the data, including a random rotation (different for each point!). An overall rotational agumentation, same for all the points in each sample, would have been a great idea for challenge 1, but here we explicitly stated that the shapes have the same rotation in every sample, so a rotation is a terrible idea for this challenge, and a different rotation for each point makes no sense at all! The second mistake was that Sonnet appied the same augmentations to the validation set, which would make it a bad check for generalization to the test set. In practice this is not really a big problem here, since Sonnet can see the test set performance, but in a practical setting, where all you have is the validation set, this would be big blunder. The result of Sonnets augementations were that they did great on the training and validation set, but terrible on the test set. Based on the results from the first submission Sonnet then makes the following statements: Key observations: - The model achieved a final validation accuracy of 84.50%, which is quite good. - However, the test accuracy is only 22.20%, which indicates severe overfitting. - The validation loss decreased steadily throughout training, but the test loss is extremely high (10.1070). These results suggest that our model is memorizing the training data rather than generalizing well to unseen examples. This is a completely wrong conclusion. If the model had memorized the test data, then it would have performed bad at the validation data, so the fact that the model did well on the validation set, disproves the interpretation Sonnet came up with. The actual problem, of course, is that the bad rotational augmentation moved both the test and the validation data to a different distribution from the test set, and the model was good at fitting the new (and very different) distribution. Note that I do not blame Sonnet for not immediately finding the actual problem with it's setup, since that would have been pretty hard, but it should have noticed the discrepancy between the validatation loss and the test loss and been confused. The validation and test sets should have basically the same loss, so the fact that they are different should suggest that something is completely wrong. Sonnet went on based on its interpretation and added more augmentation and regularization (which would not help at all), and in the process it introduced some bugs that it did not manage to adress until the 5 submissions were used. Gemini Advanced Gemini thought for a long time and then came up with this. It seems to have serious issues following instructions. Limitations and possible improvements I did not make much effort on the prompt, a better prompt would probably help the models perform better on the task. I did this manually via the chat interfaces, it would be interesting to do a more automatic approach via the API, although then I would need to set up a secure sandboxed environment for the code to be run. It would be interesting to have a human baseline for these challenges, e.g. how well would a human ML researcher do if given perhaps 15 minutes to come up with each of 5 submissions for each challenge. Conclusion In summary, these experiments suggest that current LLMs are not quite ready to do autonomous machine learning yet. A few things stood out to me though: There seems to be a significant difference in the performance of the different models on the challenges, with Claude Sonnet 3.5 performing strongest. I was particularly impressed by Sonnets choice to fundamentally rethink it's approach, and to come up with a new good idea, after the first approach did not work. Even Sonnet displayed severe lack of understanding when interpreting the outputs from the training runs. Interpreting the training, validation and test losses and accuracies is very basic to the task of doing ML (especially when buliding a model from scratch and training on a small dataset), so I was a bit surprised that Sonnet was not more reliable here. It is fairly striking how bad Gemini Advanced is at following instructions, this baffles me. The one complete actual code I did get from it was pretty good, with some interesting approaches, so the basic competence on this task is probably comparable to the other models, it just seems like some of the scaffolding is coming in the way.
2024-07-01
https://www.lesswrong.com/posts/QvnzEHvodmwfBXu94/live-theory-part-0-taking-intelligence-seriously
QvnzEHvodmwfBXu94
Live Theory Part 0: Taking Intelligence Seriously
Sahil
Acknowledgements The vision here was midwifed originally in the wild and gentle radiance that is Abram's company (though essentially none of the content is explicitly his). The PIBBSS-spirit has been infused in this work from before it began (may it infuse us all), as have meetings with the Agent Foundations team at MIRI over the past ~2 years. More recently, everyone who has been loving the High Actuation project into form (very often spontaneously and without being encumbered by self-consciousness of this fact):[1] individuals include Steve Petersen, Mateusz Baginski, Aditya Prasad, Harmony, TJ, Chris Lakin; the AISC 2024 team, Murray Buchanan, Matt Farr, Arpan Agrawal, Adam, Ryan, Quinn; various people from Topos, ALIFE, MAPLE, MATS, EA Bangalore. Published while at CEEALAR. Disclaimers Very occasionally there are small remarks/questions from a remarkable human named Steve, since this and the next two posts are an edited transcript of me giving him a talk. I left them in to retain the conversational tone. Steve has also consistently been a fantastic ground for this channeling.I use the term “artefact” a fair amount in this sequence. Unfortunately for you and me, Anthropic also recently started using "artifact" in a different way. I'm using "artefact" in the common sense of the word. The British spelling should help remind of the distinction. Taking Intelligence Seriously Sahil: I gave a talk recently, at an EA event just two days ago, where I made some quick slides (on the day of the talk, so not nearly as tidy as I’d like) and attempted to walk through this so-called “live theory”[2]. Maybe I can give you that talk. I'm not sure how much of what I was saying there will be present now, but I can try. What do you think? I think it'll take about 15 minutes. Yeah? Steve: Cool. Sahil: Okay, let me give you a version of this talk that's very abbreviated. So, the title I’m sure already makes sense to you, Steve. I don't know if this is something that you know, but I prefer the word “adaptivity” over intelligence. I'm fine with using “intelligence” for this talk, but really, when I'm thinking of AI and LLMs and “live” (as you’ll see later), I'm thinking, in part, of adaptive. And I think that connotes much more of the relevant phenomena, and much less controversially. It’s also less distractingly “foundational”, in the sense of endless questions on “what intelligence means”. Failing to Take Intelligence Seriously Right. So, I want to say there are two ways to fail to take intelligence, or adaptivity, seriously. One is, you know, the classic case, of people ignoring existential risk from artificial intelligence. The old “well, it's just a computer, just software. What's the big deal? We can turn it off.” We all know the story there. In many ways, this particular failure-of-imagination is much less pronounced today. But, I say, a dual failure-of-imagination is true today even among the “cognoscenti”, where we ignore intelligence by ignoring opportunities from moderately capable mindlike entities at scale. I'll go over this sentence slower in the next slide. For now: there are two ways to not meet reality. On the left of the slide is “nothing will change”. The same “classic” case of “yeah, what's the big deal? It's just software.” On the right, it's the total singularity, of extreme unknowable super-intelligence. In fact, the phrase “technological singularity”, IIRC, was coined by Vernor Vinge to mark the point that we can't predict beyond. So, it's also a way to be mind-killed. Even with whatever in-the-limit proxies we have for this, we make various simplifications that are not “approximately” useful; they don’t decay gracefully. (Indeed, this is how the “high-actuation spaces” project document starts.) All of the richness of reality: that’s in the middle. Steve: I think that makes sense. I like how there are both ways to avoid looking at the kind of intelligence in between, given slow takeoff. Sahil: Yeah, cool. In the event of a “slow takeoff”, sure.[3] Opportunities from Moderate Intelligence at Scale Okay, so, going over the sentence slowly: I mean opportunities, so the focus for today is less about risks. We're talking about reframing risks soon anyway, in the series of conversations we're having. However, conversation about risk will end up here as well, because it is an inextricable aspect of the opportunity frame. Indeed, “live theory” is about (research methodological) opportunities meeting risks (of infrastructural insensitivity towards beings).[4][3]“Moderate”, as in: intelligence that is not super intelligent or singularity-level intelligent, but kind of intelligent, the way LLMs seem kind of intelligent, and somewhat beyond.“Intelligence” meaning, really, adaptivity again.And at scale, meaning: everywhere, cheap, abundant, etc. The analogy I tend to use: attempts to talk about video calls, or even, say, remote teaching when all we currently have to extrapolate from is the telegraph/telegram. With the technology of the telegraph, people can tell that being able to send messages over large distances seems exciting. But they're not really thinking “oh, in the future, some jobs will entirely be dependent on being able to take video-calling tech for granted. Schoolkids might do the entirety of their schooling using this tech.” They just didn't think of that, for the most part. Even if some people had ideas, few were working towards orienting to that future in a way that would enable better equilibria. They failed to take it seriously. Video calling is in a way the same type of technology as telegraph: the ability to send messages. But with incredibly reduced cost, latency plus increased adoption, bandwidth, fidelity. This allowed for remote work and video calls and the whole shebang that’s allowing us to have this conversation right now. And so the question is: what happens when we have much lower latency and moderately more intelligence, much lower costs, with adaptivity tightly integrated in our infrastructure?[5] Just like we have Wi-Fi in everything now. And notice that this is not extrapolation that goes only moderately far. That is, just because I'm talking about “moderate” intelligence does not mean this extrapolation is not about the real crazy future up ahead. Only, this is AI-infrastructural extrapolation, not AI-singleton extrapolation (or even what’s connoted by “multipolar”, usually, IME). It’s neglected because it is usually harder to think about, than a relatively centralized thing we can hold in attention one at a time. This frame also naturally engages more with the warning carried in “attitudes to tech overestimate in the short run, underestimate in the long run.” So to repeat, and this is important for sensemaking this: I am doing extrapolation that will venture far. What follows is simultaneously very obvious and very weird.[6] In fact, that combination is what makes it work at all, as you’ll see. But that’s also a double-edged sword. Instead of it being sensible (obvious) and exciting (weird), the perspective here might seem redundant or boring (too obvious) and irrelevant or scary (too weird). Hopefully, I will: a) avoid its obviousness being rounded off to “ah right, live theory is another word for autoformalization” and b) bring its weirdness closer to digestibility. To quote Bostrom in Deep Utopia, “if it isn’t radical, it isn’t realistic.” So even though some might classify this series as being about "automating[7] alignment research", it tastes nothing like unfleshed mundane trendlines[8] or spectacular terror that are mixed together, for example, in Leopold Aschenbrenner’s “AGI from automated research by 2027”. Again, this isn't to say that there aren't some serious risks, only that they might look very different (which view will be elaborated in an upcoming post).[9] Live Interfaces This slide was ‘Live UI’ (don't bother trying to read the image): what happens to infrastructure and interfaces, generally, when you can “do things that don’t scale”, at scale. People don’t seem to update hard enough, based on their reactions at Sora etc, on what the future will take for granted. What is possible, when all this is fast, cheap, abundant, reliable, sensitive? Live UI seeks to chart this out. The six pillars, without much explanation for now, are: Tailormade media (eg. instead of writing a post and then distributing it, you can distribute a post-prompt that can take into account the user’s preferences and context and become the relevant post in interaction with the user.)Live interoperation (eg.[10] tailormade movies do not imply loneliness of the viewer, because you’ll also have tailormade interoperation attentively “translating” insights you had watching your movie to remarks relevant to your friend’s movie.)Live differential privacy (eg. automatically[7] and seamlessly replace your friends in a livestream who don’t want their info up, with randomly generated faces that capture the vibe without any possibility of inference to their details, handled by an AI that understands your circumstances and privacy preferences informally, with occasional oversight.)Recording & representatives (eg. record your data to allow for better tailoring to you, and even stream it out, to the extent that your privacy is covered by previous pillar, and credit and relevance is covered by the next two pillars.)Live storage (eg. personal knowledge management engines like Roam complemented with mental-context management that cycle your mental context with your own important notes and various data feeds, sensitively refreshed to match your current momentum/motivations.)Telic reliability & live attribution [not pictured above] (eg. the data/prompts/wishes you livestream are associated with a cycling schedule that the reader likes, in the reader’s live storage, which helps automatically[5] sense the contextual reputation assigned to your creativity.) Steve: Am I supposed to be following all of that? Sahil: Definitely not, it's just a trailer of sorts.[11] I've included only a relatively accessible example each (it gets way weirder), but there are volumes to say for the pillars, especially to gesture at how it all works together. (Btw: reach out if you're interested to know more or work together!) A bit more, before we move on though. Nearly all of the above is about sensitivity and integration[12] as we gain increasing choice in adaptive constructability at scale. The above could, instead, already sound nauseating, atomizing, and terrifying. A good time, then, to meet the key concern of the High Actuation agenda: the applied metaphysics of cultivating wise and lasting friendship in a reality full of constructs. High Actuation is the research context (for both Live UI & Live Theory, among several other tracks), where the physical world becomes more and more mindlike through increased and ubiquitous adaptivity. In the process, challenging a lot of (dualistic[13]) assumptions about mind and matter, and how to go about reasoning about them.[14] But yeah, don't worry about this interface-6-pillars stuff above. I'm going to talk about what I’ll be focusing on building tools (and hiring!) for, in the coming months: intelligent science infrastructure. Live Theory (preface) So the boring way to think about intelligent science infrastructure is to say “AI will do the science and math automatically.” (What does it really mean, to say “automatically”? We’ll get to that.) First, a succinct unrolling of the whole vision. A series of one-liners follow, with increasing resolution on the part of this dream that matters. The italics signify the emphasis in each stage of clarification. Live theory is... Adaptive theories (Here "theories" are only one kind of distributable artefact in research to think about.) But more importantly, the vision is... Artefacts with adaptivity beyond formal variables/parameters (Here "artefacts" includes papers, theories, explainers, code, interfaces, laws[15], norms etc.) But more importantly, the vision is... Protocols for exchange of  “postformal” adaptive artefacts that you can take for granted, and focus on production/consumption instead of the interface. (Here "protocols" don’t need to be a fixed/formal protocol or platform either!) But really, the vision is... Lending spirit to an ecosystem of research that can competently caretake the widescale-exchange of postformal adaptive artefacts...so that we collectively refine sensitivity towards beings, in the AI-replete infrastructure we seem to be heading towards. (Here “lending spirit” being the crucial activity that allows for spacious resolution towards better equilibria.) IOW: The slower-but-more-significant pace layer of infrastructure, that supports the pace layer of commerce. Navigation Some navigation before the longer conversation around this: there are four gates that we'll have to pass through, as you see on the slide below, to come to terms with the proposal. Each of the four gates has been invariably challenging to convey and invite people through (although it is satisfying to witness people's fluency after a few runs through them): Possible (what Live Theory is and whether it is even conceivable/tractable)Bearable (how, paradoxically, Live Theory has always been true, and whether we can stand it being true already)Desirable (what new modes open up when we incorporate Live Theory-like adaptivity)Necessary (Live Theory's suggestions for threat models from extremely adaptive AI, like in Deep Deceptiveness and Robust Agent Agnostic Processes.[16]) I'm going to walk through these gates, and conclude, and that will be the talk. This decomposition into gates should hopefully make it easier to critique the whole thing—eg. “X aspect is undesirable because…” vs “X aspect is impossible because…”. The two obviously deserve very different kinds of responses. I’m offering the talk this way for many reasons, but to say a bit more. Most of the work is in a) noticing the circumrational water of mathematics as it is today (which can be too obvious for mathematicians and too unbearable for math-groupies respectively) and b) connecting it to mathematics as it might become in the near future (which can seem too bizarre or undesirable, if you don’t notice its importance in mathematics as it is today). When new paradigms start being Steam’d, they often have to pull off a similar straddling of the familiar and the unfamiliar. Not too different from the ordeal of hitting the edges of one’s developmental stage… but at a sociotechnical/civilizational level. If making it easy for you to respond and critique were the only goal of the gates, they would have been set out in a tidier decomposition. However, in tandem, I'm using the gates-structure to construct a “natural walk”, a narrative arc, through the details of Live Theory. This polytely has some tradeoffs (such as the Bearable and Desirable gates not quite disentangling), but I think it works! Let me know. The next post will cover the first two gates. A teaser slide for the first one follows. Live Theory (teaser) Possibility And a teaser of two questions I will start the next post with, but also include now to give you time to actually think: Q. Mathematical (or theoretical) generalization seems to involve creating a parameter or variable, where "specialization" happens by substituting with a value later. Is there an alternative?Q. What is generalization really for? What does it offer you? ^ Challenging and supporting, especially through the frustration of freezing this written artefact before I can avail the spaciousness of the fluidic era. ^ Alternative terms include “adaptive theory”, “fluid theory”, "flexible theory"; where the theories themselves are imbued with some intelligence ^ This will be elaborated in an upcoming post on risks. “Slow takeoff” is an approximation that will have to do for now, but really it's much more like having lazy or mentally ill AI. If you're curious, here's the abstract: Many AI safety folk have rightly raised issues of corrigibility. That once the target(/values) of an optimization process get locked in, it becomes extremely sticky. Attempts to change are up against the full optimization power behind fulfilment of the target. If this power goes beyond our abilities, any clever strategies to undermine it are just an enactment of our weaker optimization powers pitted against a stronger one, and will, tautologically, fail. We aim here to bring to fore a subtler version of this issue: corrigibility of relevance. Where stickiness of value comes about not via strong monomaniacal attraction to the target, but by indifference to anything outside of what is considered meaningful. Indifference not as a side effect of strong optimization (such as in The AI knows but does not care), but by not entering attention. Indeed, this applies even in the absence of an overall optimizing process. We also point out how, contrary to intuitions that this makes AI more dangerous, it could also imply less conflict.  We point out that optimization processes that occur within a zone of relevance are less likely to be rivalrous with others, contra broad instrumental convergence. We carefully argue that difficulty of corrigibility of relevance implies, further, that zones of relevance are unlikely (though not impossible) to expand, since there is no motivation to. We also couch zones of relevance in naturalistic terms. This involves a new concept, of integrity, which is an embedded extension of coherence.  The importance of the specific, physical modalities of "skin-in-the-game" to relevance is explored, via connections between physical integrity and mental integrity. We take up very concrete questions (such as "are hardware level architectural changes needed in order for shutdown to be a real problem?") from this lens, and argue for a fractal consequentialist model. This makes the safety-vs-capability dichotomy untenable (in a new way), giving rise, for example, to "fractal sharp left turn". We list many predictions of this model that are opposite to the original sharp left turn threat model, and gives rise to novel dangers that resemble S-risks from partial alignment. Owing to issues of integrity,  "simply" hooking up an intelligent, "unagentic" model with an agentic wrapper (eg. "GPT-7 may not be dangerous, but GPT-7 plus two thousand lines of python can destroy the world") is shown to be a lot more difficult. We explore, with examples, how simply "plugging in" does not lead to integration, and this lack of integrity creates lacunae of relevance that can be exploited to cause shutdown, no matter the cognitive powers of the machine. We cautiously make analogies to lack of integrity in humans (often manifesting as mental disorders, dissociation, and savantism). We note that disanalogies are more important because of differences in physical make-up of biological life vs machines, and how that gives us more time to deal with dangerous versions of the problems of shutdown / risks of autonomy. We end by attempting to point at the human/animal zone of relevance, which is quite philosophically challenging to do from the vantage of our own minds, but extremely important as we create minds very different from ours. ^ Generally, it is a bit suspect to me, for an upcoming AI safety org (in this time of AI safety org explosion) to have, say, a 10 year timeline premise without anticipating and incorporating possibilities of AI transforming your (research) methodology 3-5 years from now. If you expect things to move quickly, why are you ignoring that things will move quickly? If you expect more wish-fulfilling devices to populate the world (in the meanwhile, before catastrophe), why aren't you wishing more, and prudently? An “opportunity model” is as indispensable as a threat model. (In fact, "research methodological IDA" is not a bad summary of live theory, if you brush under the rug all the ontological shifts involved.) ^ TJ comments: I'd guess you'd want to be more explicit here that what you mean is that "human cognition, thought, and intellectual enterprise is itself going to  go through radical transitions, and will reshape the human experience." It feels like you're implying this here, but is left as a silent gesture for the reader. ^ An example of weirdness if you're hungry, in raw form. Inspired by a submission at a hackathon on live machinery. Time structures are pretty centralized right now. Do you know what a "Thurs" is? Would you collect solar days together into 7-length segments? Why is some old drama between Julius and Augustus deciding the day you go out and have fun wearing spooky costumes? You could imagine throwing it all away and redoing it in a way that makes sense to you. Or better yet, to your local community. Maybe you would divide the day into powers of two and label them. Perhaps you like to orient to the lunar calendar, or to your menstrual cycle. You might find seasons to be only vaguely relevant if you're in the Maldives or elsewhere near the equator, and might orient via tides and monsoon or the migration patterns of whales. But this is all a fantasy. If you need to feed you kids, you need to believe in a Thursday. You have your weekly standup on Thursdays, at the eleventh hour of the ante-meridian. If we want to coordinate, we need protocol. You need to catch your flight, and the pilot needs to show up on time. We need shared structure, shared reality, universal languages. Right? The punchline, of course, is that this is the old way to scale. By replicating fixed structure. But when intelligence is cheap, you don't need static shared structure. Ubiquitous attentive infrastructure can create peer-to-peer structure as needed. So how would that work? Let's take something simple: a postformal, prayer-based interface to replace calendly. Instead of marking lines on a shared spatialization of time, your community can live by your local temporal rituals. You might use the rings of trees to commemorate new beginnings, or the births of children. Perhaps you will have a season of meeting others that lasts several weeks, based on your bipolar rhythms. This is your calendly. You broadcast prayers to meet others during this time. Ambient intelligent infrastructure (ie. AI) can help you identify, broadcast and match this prayer with mine, since I want about 26 minutes of meeting time with you, as expressed in my wishes, to consult about durian-eating or whatever. The flexibility of prayers allows for you to synchronously run into me during your meeting season. You (and I) get more lucky, rather than getting more controlling-power. You can still have precision, in this. Your prayer might specify that. Control and precision might seem hard to decouple. But so do scale and fixed structure. AI, if done right, can help with more interpersonalization, not hyperpersonalization. Far from hallucination and spam, it can lead to more unconforming to embodied truth, rather than conforming to epistemic bureacracy. Why should our cycles and seasons of living and meeting be filtered through rigid structure that is numb to our needs and infiltrates our meanings? Just picking up a preferred grain of abstraction or redoing some grids creatively (as in xkcd: 28-Hour Day) is nice, but it doesn't free you to co-create the background view of your reality. ^ (See more here: The Logistics of Distribution of Meaning ) The word “automation” does not distinguish numb-scripted-dissociated from responsive-intelligent-flowing. It usually brings to mind the former. So I avoid using it. More on this in the next post, but for now, a word might be “competence”. When you’re really in flow while dancing, you're not thinking. But it is the opposite of “automation”. If anything, you're more attuned, more sensitive. This isn't “high-level” abstraction either. The high-low dichotomy is the good-old-fashioned way of looking at generalization, and does not apply with expanded sensitivity, as we'll see in this sequence. The relevance of the choice "live" will hopefully also become clearer as we go. Meant to connote biological rather than machine metaphors, dialectics rather than deliverables, intentional stance, though not necessarily patienthood. ^ It's not that people predict "no fundamentally new ways of looking at the world will occur" and therefore decide to exclude them in their stories. I think people exclude ontological shifts because it's very hard to anticipate an ontological shift. (If you disagree, I'd ask the fundamental question of implied invisible work: what have you rejected? A la book recommendations.) ^ Matt adds: It might be worth stating more about these [risks] from the outset. Just to prevent anyone thinking you're some kind of e/acc full steamer techbro who isn't interested in safety Indeed. The Necessary gate will cover much relevance to threat models. Apart from that, expect much much more on "infrastructural insensitivity" to articulate further vectors of risk. ^ Abram helpfully adds: I find the meeting example [see next quote block] much more compelling, where people get dynamically crafted versions of the meeting which can catch them up on the most important things they missed as a result of coming in late -- someone uses the phrase "lightbulb moment" in a context-dependent way, so the live interop unpacks the context for those who missed it. I think the reason I like this better is that meetings invoke a more utilitarian mindset where we evaluate the usefulness of tools. Movies invoke a warm friendly mindset where we are more prone to horror over the seeming non-togetherness of the described scenario. If you want to shock people with the idea of such severely mediated reality, fine. But I think the meeting example will provoke less feeling of shock and undesirability, if you prefer that. And so here is my meeting example pasted raw, which might be more palatable or more triggering: So imagine you join a meeting on Zoom, a little late. AI summarizers can "catch you up". Instead of immediately chatting, you peruse the transcript/read the summary. Now that's great, but a little boring. Ideally you'd have "immersive summary", a mini-meeting that you get to see live, a generated video summary. You watch that, and then join the real meeting. Then you seamlessly transfer to the "real" meeting... except when Alice references a detail that wasn't given enough attention in the summary-virtual-meeting you initially watched to catch up. So before you get confused as Alice starts using the details assuming you were in the meeting, the AI interop again pulls you back into a simulation where generated Alice says "so, just to recall, some details I want to add" before dropping you back into reality. And in fact, your replica will already have joined the meeting even though you were 10 minutes late. And you watch it, and might say "oh, no, that's not what I would have said"... and so, once you say that, everyone else will be put in a simulated meeting, where your replica says "I know I said earlier that X, but no, [I take that back/actually meant/wanted to say] ..." so everyone is again caught up. Rather than being blocky or jarring, imagine a perfect stabilizer, like in a drone or in video-correction. It's uncanny, that despite all the noises, rapid responses can correct destabilizations quite magically and beautifully into a coherence. This again blurs "construction" and "reality", doing such a fantastically better job that you will only occasionally need to rebel against this matrix. The real-constructed dichotomy makes less and less sense. ^ More here: Live Machinery: An Interface Design Philosophy for Wholesome AI Futures ^ NB. this is not brute, ham-fisted merging, for those worried. See also: Unity and diversity. ^ also: centralization-heavy, preformationist, foundationalist, control-oriented. See The Logistics of Distribution of Meaning: Against Epistemic Bureaucratization ^ This is a very expensive delivery mechanism for a koan/pointing-out instruction, to let go of the static-machine mythos, but there you go. ^ Live Governance post coming up soon. ^ See also this proposal for the commonalities in the two posts and limitations of existing approaches.
2024-06-26
https://www.lesswrong.com/posts/a7HqRKpGchbf2DaK8/labor-participation-is-an-alignment-risk
a7HqRKpGchbf2DaK8
Labor Participation is an Alignment Risk
alex
TLDR: This paper explores and outlines why AI-related reductions in Labor participation (e.g., real unemployment) is a significant and high-priority alignment risk that the alignment community should more seriously consider as a potential underlying systemic risk factor. Constructive criticism, counter-factuals, and/or support are welcomed. Thanks to Jolien Sweere (Open Philanthropy, Blueprint Bio) for her thoughtful review and suggestions. As well as  Tzu Kit Chan, et al. for their insightful informal feedback and probability estimates including Intro Defining the concept of “economics” can be challenging even for tenured academics in the field. Economics can be the study of collective human behavior, the utilization of human capital, or policy (i.e., endogenous factors). It can also be a study of productivity and efficiency, and therefore a sort of metastudy of technology (i.e., exogenous factors). It can explore how we produce and consume goods and services, and it is deeply integrated with many other social sciences. Within Economics lie numerous sub-fields beyond the classical micro and macro, including: Natural Resource Economics, Game Theory, Environmental Economics, Behavioral Economics, Business Economics, Financial Economics, Health Economics, Industrial Economics, International and Trade Economics, Econometrics, Public Economics, and, I now humbly  propose: AI Economics. Writing about current topics in economics is often difficult and controversial. Writing about the anticipated economics of the future under unprecedented, likely transformative technologies, as you may have guessed, is even harder and more controversial. The reasons for this are many. The many facets of Economics are deeply intertwined with policy, taxation, and resource allocation decisions that have historically disproportionately benefited some members of society and disadvantaged others. Economic decisions can have significant long-term consequences for wealth distribution, financial inequality, class structure, global power dynamics, trade, business subsidization, technology R&D, and even scientific discovery. Economists generally tend to compare AI to previous technological developments, pointing to productivity and GDP growth leading to the creation of new industries and new jobs. There are certainly parallels we can apply and learn from, but ultimately this comparison is naive, short-sighted and simply inappropriate for a variety of reasons we’ll explore further. Economists often have policy opinions on the aforementioned issues, but I’ll try to keep politics out of the equation other than to say that historical evidence demonstrates that economies can be over-regulated or under-regulated, with neither extreme being optimal for long-term economic health. There are clearly leftward and rightward political leanings on the appropriate level of general regulation, the nuances of which extend beyond the scope of this paper.  What is fair to say is that it is important to create the right incentives and apply the right regulation at the right time. While I might typically lean towards free-market and self-regulating solutions for most of the technologies developed over the last few centuries, advanced AI of many varieties, much less agentic autonomous AGI, are fundamentally different. There are a multitude of valid reasons to suggest that AI deserves significantly more legal deliberation and moral consideration, particularly in light of the fundamentally misaligned incentives of our most commonly used later-stage organizational legal structures (namely incorporation). Public markets are considerably larger than private markets.  According to  McKinsey researchers, the global value of public equity markets was estimated at $124 trillion for 2021 versus $10 trillion for private markets.  One of many problems in the context of AI is that public companies come with various duties to shareholders. Management must operate under explicit and implicit obligations to produce continued returns for (largely passive, disengaged) shareholders. These agendas are sometimes not just incentives but legal fiduciary duties which are more likely than not to conflict with those incentives that might align advanced AI with humanity’s greater interests. The optimal organizational solutions to develop and deploy advanced autonomous AI systems will be neither easy to architect nor without their own controversy, but it seems evident that what we will need is greater transparency, checks and balances in power, ethical guidelines with meaningful enforcement and provisions to protect organizations from hostile acquisition. I believe we need broader, more diversified and collaborative organizational governance structures and we need to encourage organizational incentives that are more aligned with the public benefit. Whether the answer is some form of hybrid public-benefit corporation and non-profit, or a novel member-owned cooperative structure, is a matter for another paper.I’ll not digress further beyond saying that in so far as they operate within capitalistic economies, we need to acknowledge that these organizations must be capable of generating income sufficient to support their members, to invest in R&D and to return borrowed capital to creditors. I am not a traditional research economist. I hold undergraduate degrees in Business Economics and a graduate degree in Business information systems.   I previously worked with The Wall Street Journal /, Dow Jones; but most of my economics expertise comes from my work as an analyst in private equity and half a decade in the quantitative hedge fund world, developing macro models for commodities markets and genetic optimization pipelines. I’m also a technologist and developer with 15 years of full-stack development, machine learning (ML)ML and data -science experience, beginning with Google in 2007. Despite my economic modeling work over the years, I’ve only recently come to fully appreciate how central economics is, and will be, to AI alignment. Many within the AI Alignment communities who are genuinely interested in a future where AI does not go very bad, seem to be focusing their attention and resources towards technical, conceptual, policy, control and other forms of alignment research that presume AI could become unaligned with humanity. I want to be overwhelmingly clear that I think research in these fields is important and worthwhile.   That said,  we are failing to recognize just how much more probable and consequential economic misalignment will, not just in terms of dollars, but in terms of human quality of life, suffering and death. Despite a greater expected net impact, we are failing to appreciate the very bad scenarios because they are not as acutely dramatic as the catastrophic and existential scenarios so frequently discussed. Disclaimers / Acknowledgements: The following is an informal overview and not a rigorous academic research study. I concede that I have focused primarily on the U.S. economy and federal institutions in the paper below due to a lack of bandwidth to do the requisite research on EMEA and APAC regions and their constituent economies, which will surely be relevant to the bigger AI-risk landscape. According to PWC, “The U.S. and China, already the world’s two largest economies, are likely to experience the greatest economic gains from AI,” and together, China and North America will account for about 70% of the global economic impact of AI by 2030. However, the impact on automating cognitive labor in the decades following 2030 will surely be far greater, and are the focal point of this paper. It is also presently the case that by most metrics, the US private and public sector is arguably leading AI research and infrastructure development by most metrics. Despite AI R&D consolidation in the UKin UK, France and China, it seems likely that the dominant US position of dominance in AI will continue into the near-term foreseeable future.   We should not take this for granted beyond 2040. Summary A few months ago I found myself reading yet another journal article on potential future job displacement due to AI. The article's specifics and conclusion were hardly novel. It suggested something along the lines of 80% of jobs being at risk over the next decade, but not to worry: 60% of them will be offset by productivity gains leading to new job creation and GDP growth. What began to bother me (beyond the implied 20% total increase in real unemployment) is the question of why AI safety and alignment researchers do not seem to more seriously consider significant AI-related declines in labor participation a viable threat vector worthy of inclusion in the classic short list of AI alignment risks. After all, a healthy economy with a strong educated middle class would almost certainly be better positioned to mitigate AI risks for numerous reasons. Now to be fair, some researchers and economists certainly have thought about the economic issues surrounding AI and they may (or may not) concur with much of the following. AI economic alignment is a complex, interconnected and international issue with monumental political and legislative challenges.  That said, the related economic conversation occupies a very small fraction of the aggregate AI safety and alignment attention span with only a small number of quality posts per year on forums such as Less Wrong. I believe this is not good. It is not good for many reasons both direct and indirect, not least of which is because if any advanced AI, much less AGI or SI, goes bad at some point in the future (and I think it eventually will), we will need functional federal institutions to mount an appropriate response. If the U.S. economy and its parent institutions are in disarray due to rapid and substantial declines in labor participation, collapsing wages, extreme inequality and potential civil unrest, those institutions capable of intervening simply may no longer be effective. Needless to say, there are a number of other circumstances beyond labor participation that could erode or inhibit an effective federal response to AI, including but certainly not limited to: extreme concentration of power and wealth, political division, corruption and lobbying, hot or cold wars, targeted disinformation and so on.  Nevertheless, it seems incontrovertible that any AI takeoff regardless of speed during an economic depression would be much worse than if it were to happen during a period of political stability and economic health. The below will explain why I think economic dysfunction is likely and actually a greater immediate threat than any  catastrophic risks or existential risks. Hypothesis I suspect most researchers would likely agree with the statement that when it comes to AI over the next couple decades: p(mass job displacement) > p(major acute catastrophe) > p(extinction-level doom). This hierarchical comparison isn’t completely presumptuous for a few reasons: 1) the latter, more serious risks (p(doom)) are somewhat inclusive of the former (p(catastrophe)), 2) statistically we might experience a technological economic depression and mass unemployment regardless of AI, 3) current trends indicate labor participation due to job displacement is already on a long-term decline, suggesting it could actually be more probable than not. My personal estimate is that the probability of substantial and sustained (i.e., ~20%+) real unemployment as a result of advanced AI over the next 20 years is roughly 30% to 50%, roughly an order of magnitude higher than the probability of an acute unprecedented major catastrophe (i.e., AI-related hot conflict resulting in hundreds of thousands or millions of deaths), the probability of which is also roughly an order of magnitude higher than an existential extinction-level event. In short, my argument distills down to three key points: 1) AI will likely result in increased productivity, job displacement and lower labor participation* 2) Sustained significant unemployment and/or general economic disfunction will likely: 2a) Contribute to political and economic instability at a time when stability would be needed most 2b) Weaken our institutional capacity to respond to a severe AI threat when it would be needed most 2c) Reduce funding for AI alignment agendas, social programs and humanitarian aid when needed most 2d) Potentially allow for massive corporate power consolidation and extreme inequality 2e) Result in millions of deaths, more than most feasible AI catastrophes or wars. 3) Thus, AI labor economics deserves as much attention as catastrophic risk and ex-risk in the AI alignment public discussion * even if offset by new job creation, the duration of labor market shocks could persist for many years Evidence The latest UN DESA data show that, by 2050, our ‘population is expected to reach 9.8 billion people, over 6 billion of whom will be of working age’ and we are currently struggling to create meaningful employment for more than 71 million young unemployed people worldwide. A 2024 report by the International Monetary Fund (IMF) estimated that 40% of global employment is at risk due to AI. In advanced economies, roughly 60% of jobs will be impacted by AI, and roughly with half (30% total) likely to be mostly replaced, resulting in displacement and dramatically lower labor demand and, lower wages. The UN this year recognized in its own internal research that 80% of jobs will be at risk of being automated in the next few decades.  In emerging markets and low-income countries, by contrast, AI exposure is expected to be 40 percent and 26 percent, respectively. A large number of researchers, academics and industry analysts agree that we will likely experience significant job displacement, labor market disruptions and at least modest increases in unemployment as a result of AI technologies. The list of similar corroborating research organizations is long and includes The IMF, WHO, Pew Research Center, Harvard Business Review, World Economic Forum and the US Bureau of Labor and at least a dozen others. The consensus on the duration of these disruptions and the impact of productivity gains and new job creation is a lot less clear and very difficult to model, but obviously critical to understanding the net aggregate long-term impact. In short, a majority of researchers in the field suggest AI is more likely than not to create significant job displacement and increased unemployment for at least a period of several years. We have highly conclusive evidence that the direct and indirect death toll from prolonged increases in unemployment is substantial, with every 1% increase in unemployment resulting in a 2% increase in mortality. (Voss 2004, Brenner 2016, Brenner 1976). We know that sustained unemployment (vs short term swings like during the COVID-19 pandemic) amplifies this increase in mortality. Please understand that I am absolutely not suggesting that various ex-risks are not plausible longer term or that catastrophic risks are not plausible within decades. Quite the contrary. I believe we do need significantly more progress in AI technical alignment as well as industry regulation on autonomous agentic system inference and microbiology engineering services.  for example improvements in custody and KYC requirements for various private-sector genomic service providers and CRO’s contributing to pathogenic bio-risks and chemical risks. My overarching point is that current evidence suggests the greatest expected AI risk to humanity *within the next few decades* comes not from agentic power-seeking unaligned AGI - but rather from the death and suffering resulting from significant economic disruptions, particularly in developed labor markets. These risk profiles will shift longer term but so does uncertainty. Ultimately, time provides opportunity to innovate solutions and creative risk mitigations with novel technologies. Some may argue that if AI extinction level threats are only a matter of time, even if on a 100-year time horizon, they still deserve a prominent place on the alignment agenda. I do not disagree with this. However, it would seem rational that a 50% chance of 50 million deaths in the near term is at least equally deserving of our attention as a 0.5% chance of 5 billion deaths longer term. Why it Could be Worse In a bad to worst-case scenario, many factors could further amplify this unemployment-to-mortality ratio, such that sustained substantial unemployment beyond a few years could potentially result in more than 100 million deaths in the developed world over just a few decades, and even more globally. (less developed countries, however, will likely be less impacted by technological automation). 100 million lives is certainly a big number, but to put it in perspective, research suggests that international aid from developed countries to the developing world has helped save nearly 700 million lives in the past 25 years. (Reuters, 2018).  Just a few of the particular factors that would likely compound the problem include various government’s misrepresentation and understatement of actual unemployment metrics, generally increased anxiety and depression rates particularly in those demographics that will be most impacted by AI-related unemployment, and most of all, the compounding effect of prolonged periods of high unemployment. Let’s take a closer look at a few of these: First is the significant under-reporting of unemployment in the U.S. and potentially other countries. Unemployment is the intuitive default nomenclature when speaking about those who don’t have jobs. However the term ‘Unemployment’ is misleading because officially it only includes those receiving unemployment benefits. When you hear a news report that suggests unemployment rates are at 5%, that does not include people under-employed or working short-term contracts or those who have given up on finding new work after leaving the workforce. It does not include the PhD working for minimum wage in a job far beneath their ability. It does not include those forced into early retirement or those who can’t work due to disability. The incentives for government to under-report actual unemployment are several fold but boil down to the political and self-fulfilling economic prophecy. In other words, suggesting the economy is doing great, puts some decision makers at ease who might otherwise make decisions that further exacerbate the economic problem. Regardless, what we should be more interested in is the Labor Participation Rate (i.e., how many people of working age are actually working full time jobs -preferably above the poverty line). As discussed, unemployment numbers are full of problems and seriously misrepresent the actual number of unemployed people. While I have used the terms somewhat interchangeably thus far, I want to acknowledge that there is a distinct difference.  Those who will be negatively affected by automation, however, will not care whether their governments define them as unemployed or underemployed or non participating labor. What matters to them is whether those who want to work can actually find work in minimally appropriate jobs. Second, studies on the relationship between unemployment and mortality suggest that a large proportion of that mortality is ultimately due to stress, anxiety and unhealthy psychological consequences of that financial hardship. The greater problem now is that those generational demographics most likely to be impacted by AI are those already suffering from pronounced higher rates of anxiety and depression. In a recent 2023 cross-sectional study of about 3 million US adults, “anxiety and depression were significantly higher among adults aged 18 to 39 years (40% and 33%, respectively) compared with adults aged 40 years and older (31% and 24%, respectively). Greater economic precarity and greater reactivity to changing case counts among younger adults were associated with this age disparity.” (Villaume, 2023) Additionally, we will be fighting an uphill battle with numerous longitudinal studies showing a consistently declining labor participation rate since the early 2000’s. What’s even more troubling is that this is true even in light of increased labor participation of women in the later half of the 20th century. Significantly more research is needed to better understand our already declining labor participation rates. Unemployment and Mortality In a 1979 hearing before the Joint Economic Committee of the U.S. Congress, Dr. Harvey Brenner spoke on his research suggesting that a 1% increase in unemployment resulted in a 2% increase in total U.S. mortality. Over the following decades, Brenner’s research held mostly consistent, showing for every 1% rise in unemployment, at least 40,000 additional U.S. deaths occurred annually over the following 2 to 5 years. The deaths were largely attributed to various forms of stress, anxiety, and depression, including suicide, homicide, substance abuse, heart attack and stroke among incarceration and others. This research started in the 1970s and several subsequent peer-reviewed research programs largely corroborated his findings. Several datasets suggesting the increase in mortality due to unemployment has probably increased since. Notably, these mortality numbers increase substantially, as the annual unemployment rate and/or the duration of increased unemployment persists. We know fairly conclusively from multiple longitudinal research programs that increases in unemployment result in disproportionate increases in mortality rates and other associated human suffering. The specific estimates do range depending on the studies but generally fall within the 37,000 to 48,000 additional deaths for each 1% of additional unemployment within the United States alone. As a relative proportion, that is roughly a 1.9 to 2.2% increase in mortality for each 1% increase in unemployment. Importantly, these observed effects can be thought of as exponential multi-year moving averages, thus quickly compounded by prolonged periods of high unemployment. Some reasonably conservative models project that if real unemployment rates were to exceed 20% for more than one year, death tolls from heart attack, stroke, suicide, overdose, and related causes would likely exceed a million deaths annually within a year. Increases in rates of psychiatric admissions and incarceration would be even higher. This total mortality Brenner’s research found several variables increase the ratio of mortality to unemployment. (countries with high rates of these things fared worse). Not surprisingly, they are things that are normally inversely associated with longevity and general health. The first is Smoking prevalence and Alcohol Consumption. According to worldpopulationreview.com the US is relatively average here with a 24.3% total smoking rate in 2022 (and dropping fast), lower than Latin America and Europe. The second is High BMI and Diets with relatively high Saturated fat consumption. Conversely populations with diets high in Fruit and vegetable consumption were affected less. The biggest risk according to Bremmer was BMI (ie. the major risk factor for cardiovascular health and diabetes). As of 2022, the U.S. has a 42.7% obesity rate. Much higher than average. There are many areas of AI research that clearly lack consensus. Wherever you may fall on the P(doom) spectrum, I think that we will likely agree that the economic impact will be substantial. That is not to say that AGI will usher in an era of Utopic UBI, but rather that our economic paradigm will be shocked and probably in more ways than one. This will be true whether or not Agentic AGI decides to redefine its alignment, or whether any sort of Super-Intelligence comes to fruition at all. 20% is not an unrealistic or over-exaggerated estimate of a worst-case-scenario unemployment rate. It is entirely plausible that it could be higher without swift federal intervention. Ultimately, in a scenario of >30% unemployment globally, lasting more than a few years, their data suggests that total mortality could reach upwards of one hundred million globally, exceeding the total mortality of any known pandemic or geopolitical conflict in history. Counterpoints I’d like to begin with several disclaimers: Firstly, far more evidence exists to support point 2 (mass unemployment results in increased mortality) than point 1 (that advanced AI will result in mass unemployment). The obvious reason for this is that advanced AI does not yet exist and so observational evidence does not exist. We must extrapolate what we know about economics, automation, business incentives and digital delivery to a broad range of cognitive labor tasks and job functions in various socio economic cultures with differing regulatory attitudes. The hypothesis that advanced AI will result in increased unemployment in developed countries seems highly likely but deserves much further research because we simply do not know how many new jobs will be created. There is a large degree of uncertainty here due to known unknowns and unknown unknowns. There may be reasons that AI stimulates job creation that actually outweighs job losses. It would seem probable that if said job creation did occur, it would be technologically (e.g., computer-science) oriented. These programming, data science, and engineering functions require skill sets and education that the majority of displaced labor do not have and would require many years to develop if they are realistically capable of learning it at all. If for some inexplicable reason, progress in AI research plateaued such that human-AI collaboration produced superior work than either Humans or AIs alone, it is possible that significant diverse job creation could occur. Such a situation seems plausible for a short period of time but unlikely to persist for more than a few years given the rapid pace and momentum of AI research. It is conceivable that AI positively stimulates net labor markets, consequently lowering unemployment levels through increased production efficiency. In this scenario, the expansion of production will create more jobs and employment opportunities in the long term, however, commensurate consumption (demand for) those goods and services must also exist. It seems unlikely that AI systems would be granted any sort of legal status allowing them to purchase and consume goods within the next few decades and so we must investigate where that additional demand would come from and for what goods especially if real wages continue to decline. The prospect of some agentic AGI systems attaining legal and civil rights at some point in the far future is feasible but this remains highly speculative and we must also admit there would be many reasons to think that they might prefer to exist within digital environments where they are primarily consumers of energy and compute resources rather than physical consumer goods and services. It may also be possible that advanced AI quickly leads to groundbreaking discoveries in sectors such as energy, health and longevity, propulsion or other space technologies. These developments could serve as a catalyst for entirely new industries and scientific research areas as it would open the door for commercialization of novel business models and infrastructure such as asteroid mining, interplanetary discovery and biomedical sciences. Revolutionizing energy distribution through experimental technologies like cost-effective nuclear fission, could create hundreds of thousands even millions of jobs globally as we build new energy infrastructure. Nevertheless, a few million jobs is still a drop in the bucket compared to the 1+ billion jobs potentially at stake. The last counterpoint I’ll touch on is healthcare, specifically elderly and chronic disease care. Roughly half our 4.5 Trillion in U.S. healthcare spending is for retirees and elderly with chronic diseases, heart diseases and cancers disproportionately represented.  Caring for our elderly currently consumes vast economic resources 55% of our total healthcare spending. AI could conceivably create significant value for care, treatment and management of at-risk patient populations. Suggestions, Solutions and Hope Unemployment is significantly understated. We must reframe and define unemployment to actually include everyone who is not working and not voluntarily retired by choice. Brenner found that Southern European countries experienced less incidence of unemployment related mortality than northern Europe. He did not cite specific factors behind this finding but given the social, dietary and environmental differences, it would seem reasonable to associate some of that linked to healthier Mediterranean diets, warmer winters and more closely knit family ties, offering greater “social cohesion” for southern countries. I do not have all the economic policy answers but it is my tentative personal opinion there is no precedent to suggest blanket technological bans or censorship will work to effectively regulate digital technologies. What does seem reasonable and perhaps necessary is: The effective tax rate on labor in the United States is roughly 25 to 33 percent. The effective tax rate on capital is roughly 5 to 10 percent for most businesses. While there may be some variability, it is clear that so long as substantial tax advantages exist for enterprise, companies will be incentivized to replace and automate workers rather than invest in them. Equalizing these discrepancies may better align incentives. I believe that researchers should investigate novel tax structures on large AI product and service-oriented organizations. Further, these tax structures should effectively address the tax strategies exploited by tech companies like Amazon that inflate expenses to avoid corporate income taxes. It is a challenging loophole but evidence suggest this practice can lead to anti-competitive monopolization of large markets.Just as importantly we must ensure that tax revenue applied to AI companies is not misallocated and used to benefit those workers displaced. We should advocate for full corporate and government transparency in AI related activity. provisions to make sure these funds are not misallocated and used to fund functional creative social programs that benefit everyone. We should incentivize hiring (especially in smaller businesses) and perhaps incentivize co-operative organizational models. Generally, We should be encouraging local manufacturing and agriculture and subsidize (human) coaching counseling and personal development programs.Carefully crafted UBI programs may or may not be a part of the solution. I think we need more pilots and research to evaluate its potential but regardless it’s important that they encourage (or at least do not discourage) meaningful occupation, employment and social engagement. Conclusions: There’s an old military quote that posits ‘“Amateurs study tactics, professionals study logistics.” Some have attributed it to Napoleon, most sources suggest it was said by General Omar Bradley during World War 2. Regardless of authorship and despite its somewhat facile oversimplification, there’s an element of truth to the statement that can’t be ignored. Tactics may win battles, which are needed to win wars, but ultimately logistics and economics win protracted wars whether they’re hot or cold.  This is a key part of the overarching message I’d like to end on: we should more deliberately expand our view of AI alignment beyond the technical tactics and policy battles to think harder about the bigger picture economic ‘logistics’ that may be far more important in incentivizing aligned behavior in the long term. We can not discount the plausibility of an AI assisted catastrophe resulting in hundreds of thousands, even millions of deaths. We can not deny there is a non-zero chance agentic AI may become unaligned with or even hostile towards humanity. On a long enough time horizon, these things are more than merely plausible. The difference here is that cognitive labor automation, protracted mass unemployment and job displacement in the coming decades are not just plausible, nor possible -but actually probable and this is not a controversial fringe position. It is a matter of how and when, not if - and I’d argue this snowball has already started rolling. When this rapid acceleration of automation happens, not only might many economies fail under their own (aging demographic) weight but a lack of meaningful employment will almost certainly result in the death and suffering of many millions of people worldwide. I once heard that a problem well-stated is halfway solved. This document is hardly a comprehensive well-crafted problem statement but it is my starting point and if you’re interested, please reach out because it’s an area worthy of our time and attention. Needed Areas of Future Research Analysis and segmentation of all top job functions within each industry and their loss exposureEvaluation of novel organizational models more appropriate to deliver advanced AI agents. Wage trend analysis and projections for all jobs under a highly-automated economy.Economic modeling and evaluation of concentrations of power in AI Incremental Impact of Humanoid Robotics on broader GDP growth and specific industries. Select Sources https://unthsc-ir.tdl.org/server/api/core/bitstreams/27cf9d31-0e90-424b-9f56-4d1a5047e049/contenthttps://worldpopulationreview.com/country-rankings/smoking-rates-by-countryhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC10690464/
2024-06-25
https://www.lesswrong.com/posts/7LvK6Gw2GdfDMBNNm/monthly-roundup-19-june-2024
7LvK6Gw2GdfDMBNNm
Monthly Roundup #19: June 2024
Zvi
Looks like we made it. Yes, the non-AI world still exists. Bad Governor New York Governor Kathy Hochul has gone rogue and betrayed New York City, also humanity, declaring a halt to congestion pricing a month before it was to go into effect. Her explanation was that she spoke to workers at three Manhattan diners who were worried people would be unable to drive to them from New Jersey. Which, as Cathy Reilly points out, is rather insulting to New Jersey, and also completely absurd. Who in the world was going to go into Manhattan for a diner? She says this won’t interfere with Subway work. Work on the 2nd Avenue Subway line has already been halted. And that’s not all. You’re damn right. We are going to blame Hochul. Every. Damn. Time. So Elizabeth Kim investigated. One never talked politics at all. One is directly across from Grand Central, is not a diner, and actively wants congestion pricing. The third did in fact object. That’s it. The good news is Hochul’s attempt to prevent this seems likely to be illegal, so maybe it won’t stop us. The good news is this was so dumb that she might get primaried, but we will have to wait until 2026. This terrible thinking is not an isolated incident. Governor Kathy Hochul (D-NY): The next few days it’s going to be hotter than hell across New York — so we’re making admission and parking free at all our State Parks, pools, and beaches tomorrow and Thursday! Take your families to beat the heat, and enjoy it on us Matthew Yglesias: When faced with high demand for an excludable, rivalrous good with inelastic supply, I would make the price higher rather than lower and use the revenue for something useful. High Skilled Immigration Trump endorsed high skill immigration explicitly on the All-In podcast. He even said, only half prompted, that anyone who graduates from even a junior college should get a green card to stay in the country. It is amazing how clearly he speaks here. There is little question that Trump ‘gets it.’ Yet Trump’s track record is of course very different. Remember Trump’s H-1B Visa suspension in 2020? So I wrote I was not optimistic about Trump following through, and indeed he has already ‘walked this back.’ Notice Fox News saying this was somehow a promise about ‘migrants.’ We should still obviously take this all up immediately in a bill and see who votes for it. High skill immigration is overwhelmingly popular across the board, but political gamesmanship has meant we do have it. Shame on everyone. Fix it. No, I don’t care that this isn’t being tied to other things you also like. FIX IT. There is of course a potential problem with the equilibrium. Austen Allred: I love the idea of letting more skilled labor into the United States (and making it easier to stay), I just want to make sure we realize “everyone who gets a degree gets a green card” would be mostly driven by diploma mills. Mark Krikorian (Center for Immigration Studies, Executive Director): If someone earns a Ph.D. at a university in a hard science, I personally will drive to their house and give them a green card. The issue is any foreign college graduate, even from a bogus two-year master’s program or gender studies [major], would get a green card. Trump explicitly included even junior colleges. Which would absolutely mean this gets dominated in terms of number of students by diploma mills, especially once that opportunity is established. You know what? I say that’s fine, if that is what it takes. The top people matter a lot, and if you get a bunch of other young people who clear even a moderate bar, that is going to be good too. It’s not even clear raising standards would be better. We could do something that better addresses everyone’s concerns by being narrower, and I would be happy to start there if that is what it takes. But of course Trump did not walk this back to ‘we need to limit this to real degrees from real schools in real things’ or anything like that. He went back to his anti-immigration rhetoric, full stop, as if this never happened. Various Bad News Salad Size Me, eating only Sweetgreen for two weeks, goes as you would expect. The shorter duration (versus the original Super Size Me) was initially based on cost considerations, but being able to stop after two weeks was priceless. Any time you think people know things they have no practical need to know, remember that only 1% of college students know that Hannibal was from Carthage. Isaac King: This seems like a common failure mode in knowledge-based hobbies. People pour a ton of effort into learning the details of their field, giving it personal importance to them, and they incorrectly generalize this to a belief that their obscure trivia is of general importance. I’m never sure whether I’m doing this. When I encounter someone who doesn’t understand some basic-seeming-to-me math or science concept, is that actually a real problem, or just me ascribing undue import to something that happens to interest me? Women, the young and the left leaning in academia are more censorious than their counterparts, and more likely to discourage various forms of research. Cory Clark reports about 10 ‘taboo claims.’ So of course Robin Hanson offered polls on these so-called taboo topics. The ‘controversial’ positions got overwhelming support. The tenth question, whether demographic diversity (race, gender) in the workplace often leads to worse performance got affirmed 54%-17%, and the rest were a lot less close than that. Three were roughly 90%-1%. I realize Hanson has unusual followers, but the ‘taboo questions’ academics want to discuss? People largely agree on the answers, and the academics have decided saying that answer out loud is not permitted. Cocoa prices are dangerously high and might take years to come down. Worth it. Disney started giving its rides long official names rather than using casual nicknames people would actually use, forcing influencers to use the real names. Which means you know they’re paid and they sound like a duffis. You can buy vapes on which you can play Pac Man. Our watching out for the children principle is, shall we say, inconsistent. Stadium tours doing poorly, many of them being cancelled. The upside profits are huge, and touring a ton is a very non-free action, so perhaps this is the equilibrium. If you are not failing at a large fraction of your stadium tours, you are not attempting enough stadium tours. My experience however is that you get rapidly decreasing marginal utility from going to bigger events. When I went to Radio City Music Hall to see Taylor Tomlinson’s Have it All tour, I had a solid seat and a great time, but I had to force me eyes to look at the physical Taylor rather than the giant screens of her. I’d pay substantially more to go to the smaller Beacon Theater, although I’m sure it would still add up to a lot less. Prediction Markets Are Unpopular Prediction markets are unpopular. Sure, lots of people in my circles love them and want there to be more of them, but activity is limited even when you get them, and usually focused on stuff not that interesting. The basic thesis here from Nick Whitaker is that without subsidies no one wants to trade, so you need subsidy in the form of either cash, natural hedgers or suckers at the table, and usually you have none of them, nor do you appeal to investors trying to make a buck, and being slow to resolve is a huge issue. This is all broadly compatible with my perspective from a while back. I strongly agree that you need subsidy if you want to get good action. Alas, people are mostly unwilling to pay. I think we basically need to ‘suck it up’ and be willing to pay for information, both to subsidize traders and encourage reliable wording and resolution. As I’ve tried to use Manifold, my biggest frustration has been resolution criteria. Why do we see the same few markets over and over? It is not because those are the only interesting questions. It is because those are the questions we can quantify. If you cannot quantify, you get endless tsoris, and can’t play for real amounts. By default unclear markets turn into betting on how the judge is going to see the problem, and that is not something I care about. I’m definitely planning on being less accommodating with nitpicks on market resolutions, especially hypothetical ones, going forward, because time is short and the stakes not so high. Yes, that means you are predicting in part how I will rule. Tough. I don’t trade on my own markets to avoid conflict of interest issues. New Buildings are Ugly Modern buildings are ugly. We made that decision. We woke up, time and again, and we chose ugly. I do not understand how anyone fell for this, but a lot of people did. The cost argument does not check out. I know people actually prefer nice things in practice. I would offer two other explanations not listed there. Vetocracy and permitting and regulatory requirements including zoning. If you have to struggle to get permission for every detail of what you try to build, and anyone can say no, are you going to risk delays or refusals in order to create something not ugly? Do you want fights over details? Or will you go with the ugly thing that you know is standard and where no one will complain too loudly? Externalities. When you create something beautiful, the whole world wins. When it is ugly, the whole world suffers. You do get the brunt of both, but a small fraction of the overall effect. It is only somewhat priced in. It makes sense that you would not invest sufficiently in it. This used to be made up for by people caring about that sort of thing inherently and it granting more status. For public buildings externalities are sort of priced in, but not fully, and you have even more of a vetocracy and designed by committee issue, on top of the ‘yes someone pulled a con on us and convinced Very Serious People ugly was good somehow’ the article discusses. For private ones, you have both issues. Government Working In potentially a big practical deal, the courts have now ruled that CEQA (California Environmental Quality Act, their version of NEPA) should no longer be construed to give the ‘fullest possible protection,’ a formula that means no one ever does almost anything, and instead treat it as one would an ordinary law. Maybe we can build some things now. Government actually working: If only the system worked like this more often, in response to a call to extend our insane child car seat requirements to airplanes: Kelsey Piper: Fun fact, the FAA reviews this periodically and always concludes that, by raising the cost of flying and making more people drive, it would likely increase child deaths. This is my literal favorite fact about any regulatory body and I cannot shut up about it because so many regulations are written with willful obliviousness to the harms done by making things more expensive and annoying. Imagine if we went back and analyzed all our existing rules around airplanes, and everything else, around similar principles. Biden tariffs on China seem quite bad, thanks to Governor Polis for being willing to say it plainly. Taxes on input goods like the 25% on steel and aluminum are madness. Activists successfully lobby Belgian government to give prostitutes ‘proper labor contracts’ that give them all the protections, rights and procedures you get in the European labor market. Then people realize what those rules imply, and ‘when you refuse to do assigned tasks ten times in six months we call in a government mediator’ suddenly sounds like what it is when you those tasks are often sex acts. If you are going to mandate contracts and salaries and benefits and refusal rights and make it hard to fire workers, that has consequences, and not all of them are the higher prices. Another brief analysis on the government anti-trust case against Apple. Ben Krauss at Slow Boring proposes higher education for police officers, both a dedicated university and programs at universities, complaining that our police officers get less hours of training. Oh no, the default reaction should go, more occupational licensing and credentialism and wasteful gatekeeping and signaling, even if as he suggests we don’t increase requirements outright. I very much did not buy the case that this solves any of our real problems. California rules on wages and fees continue to take their toll on restaurants. The costs add up. I do not however have sympathy for those complaining they will have to bake the fees into menu prices. That seems great. Yes, there will be initial sticker shock, but this puts everyone on a level playing field. In general, the game of ‘everyone is forced to hide the true price’ is a great place for intervention. Ben Phelps has similar thoughts in this Twitter thread. Why did it take 10 years to open a Trader Joe’s in Hayes Valley? For years they wouldn’t let anyone open a ‘chain grocery store’ anywhere pink on this map: So they passed particular laws to ‘allow’ a grocery store in an area with no grocery stores. The first time, they couldn’t open until a condo was completed (because shrug) and that took so long the store backed out. Then in 2019 they tried for a Trader Joe’s, but the developer was caught bribing officials to let the development go faster, so it had to wait until they were bought out. The obvious question is why anyone thinks banning ‘chain’ grocery stores was a sane idea in the first place? I considered putting this one in Crime and Punishment. Shirt, raising questions it answers. European Union has declared itself opposed to encrypted chats, and is trying to pass laws to that effect. Signal has promised they would leave Europe rather than comply. Matthew Green says they are extremely close to proposing such a law. It might have already passed by the time you read this. Symbolic importance: UK hotels engage in weekly fire alarm tests that everyone treats as not real and they look at you funny if you don’t realize. Never sound an alarm with the intention of people not responding, even or especially as a test. The Snafu Principle A big advantage and also big danger of becoming rich and powerful is people get afraid to tell you no. In some contexts, that is great, you get what you want and you can motivate people to do things. When flying in bad weather, not so much. Kelsey Piper: There are several famous plane crashes that killed presidents where foul play was strongly expected and the ultimate explanation was crew inexperience and a terror of telling the President that what he wanted them to do was ill advised. This is one, this is another. There are also some billionaire plane crashes with a similar dynamic. Pilots who should have said “no, I am not qualified to safely do that”, who would have said that to an ordinary client. Money and power can buy a lot of things but they seem actively counterproductive sometimes for purchasing “someone who will tell you that the thing you want is actually a bad idea and they won’t do it”. This is part of why such people sometimes find it highly refreshing and useful when they find someone willing to tell them no. The problem in the case of planes is that planes are far too safe. So you want the pilot to be bolder than normal. But not too bold. Macron calls snap elections in France, despite clear danger Le Pen and the far right could win, on theory that the threat of Le Pen and the far right winning means he will win. It probably works, the problem is it sometimes doesn’t. This is a central problem with democracy. Everyone wants to run against the existentially disastrous (in the eyes of their voters) candidate, so they can win, right up until eventually the the disaster happens. Generalize this, including to AI. Biden Administration to ban medical debt from credit reports. If it cannot go on your credit report, why would anyone pay a medical bill that was not big enough to justify going to court, or at least one they did not feel was fair, especially as social norms around this shift? If that’s true, asks Robin Hanson, who would issue this ‘medical debt,’ and offer services without payment or insurance in advance? Mostly I think all of that is fine. Instead of fake super inflated bills no one consented to, we’d get up front known pricing, and people could take on other debt to pay for it as needed. It’s still illegal to not provide sufficiently urgent care either way. The alternative is to continue with billing like this, where an ER visit costs $2215 for ‘the visit,’ $1200 for a nurse’s swab of a 3 year old’s throat for a covid/strep test, $740 for two minutes with the doc, then the ‘cash pay’ is $685. End this scam. Technology Advances Flo Crivello reports from time at Uber eight years ago (so things may have changed) that for finding shortest routes, Apple Maps was best, followed by Google Maps, and Waze was far behind both. Waze perhaps makes people feel smarter and in the know, but it is too clever by half and did not (at least then) actually translate into faster routes. Why did Google never implement a ‘nicest route’ button? Because people might use it to select nicer routes, thus choose to give foot traffic to richer areas. So they decided to hide this information from their customers to avoid this. For Science! If it had ended here it would have been purely for the popcorn: A conversation between Yann LeCun and Elon Musk, part one. Then… well… People will actually tell Elon Musk he has never done Science and will die bitter and forgotten because he did not publish, or did not publish in the proper scientific journals. After a highly justified roasting all around, Yann quickly retreated back to the Motte, which is far more reasonable. Yann LeCun: So much misunderstanding of this comment! Here is a list of things I am *NOT* saying: – you need a PhD to do Science. You don’t. A PhD teaches you to do research, but you can learn that on your own (though it’s much easier with a mentor). – you need to get papers accepted by a journal or conference to publish: you don’t. You can just post it in http://ArXiv.org. Many influential papers never went through the formal peer review process, or went through it after they became influential. – engineering is not science: it can be, depending on your methodology. I’m a scientist *and* an engineer. These activities are complementary and need each other. – science requires formal papers: it doesn’t. A clear explanation on a website and a piece of code on a public repo will do. What I *AM* saying is that science progresses through the collision of ideas, verification, analysis, reproduction, and improvements. If you don’t publish your research *in some way* your research will likely have no impact. These are very different statements. No, the first statement did not say ‘all you have to do is put it up on ArXiv.org.’ I love this illustration of the classic two step, the flip between science and Science. The difference between ‘you have to tell people about your discovery or they won’t know about it’ and ‘if your statement hasn’t gone through proper peer review in the official channels then I can act as if it isn’t real.’ I would be thrilled if we could all agree on that second thing. Science is where you figure things out about the world. When the guy in the shirt says he will do science to his cookie, he speaks truth. If you then want to add to the light of science, then you also have to tell other people your findings. That’s it. No gatekeeping. Or as Xkcd famously put it: Say what you want about Elon Musk, but admit the man ships and he experiments. Similarly, here’s that quote from Katalin Kariko’s book, Breaking Through. She still got mRNA vaccines to happen despite being driven out of her position for trying, and this thread from St. Rev Dr. Sev explains that weirdoes like her who think science should be about doing actual science are not to be tolerated going forward by those who only know Science. Goro Shimura: The thing that bugs me about a lot of the replies to this is the number of people (mostly American) looking at what is clearly meant to be a description of rank obsequiousness mixed with self-promotion and saying “but of course these are just basic social skills” St. Rev Dr. Rev: A whole bunch of Leading Scientists with Professional Headshots on Twitter Dot Com are extremely buttmad about this quote. Genius is a dime a dozen, they are saying. Science is about project management and filling out form! Well, Science is about that now, anyway. I reflexively blocked the ratfucker who said the thing about genius so I can’t find it now, but check out this other ratfucker. If genius can make a difference in your field, it’s immature! Kariko revolutionized her field in the teeth of people like this, and they will never forgive her, and they will fucking destroy the next Kariko they get their hands on. An unspoken conspiracy of mediocrity. The purpose of Science is to turn grant money into papers, nothing more. Actual progress threatens to disrupt a lab’s business model. Can’t have that. The greater part of modern science (by staffing levels, at least) is worthless bunk. … But when everyone’s a fucking high-agreeability pod person, you don’t filter the trash once it’s clear that it’s trash. That would be unmutual, it would interfere with the flow of grant money. So the intellectual trash piles up. That’s good leadership and community service. I grew up reading about how Science was done in the mid-20th century. My mom worked in a cancer research lab herself. Disagreeable weirdos have always been critical to scientific work. Purging them because they make the conformists uncomfortable is a fairly new development. St. Rev. Dr. Rev.: So this thread Took Off, as they say, and a lot of people dug it but some people got really nasty, like, ‘oh you think you’re BETTER than other people, like you don’t need to FIT IN, like you should get money for free’ I think Katalin Kariko is better than other people. More fun ‘things are not science’ here. If you think Science makes good funding decisions on the merits, well: Julian Togelius: This Dutch study finds that finds that panelists make the same allocations of research fundings even if they don’t get to read the actual proposals, just abstracts and CVs. This result *should* have a large impact on science funding policy. (h/t Thore Husfeldt) Abstract: Do grant proposal texts matter for funding decisions? A field experiment Scientists and funding agencies invest considerable resources in writing and evaluating grant proposals. But do grant proposal texts noticeably change panel decisions in single blind review? We report on a field experiment conducted by The Dutch Research Coun- cil (NWO) in collaboration with the authors in an early-career competition for awards of 800,000 euros of research funding. A random half of panelists were shown a CV and only a one-paragraph summary of the proposed research, while the other half were shown a CV and a full proposal. We find that withholding proposal texts from panelists did not detectibly impact their proposal rankings. This result suggests that the resources devoted to writing and evaluating grant proposals may not have their intended effect of facilitating the selection of the most promising science. Julia Togelius: Far too much time and effort goes into writing and reviewing grants. The grant funding system also distorts priorities, rewarding faculty for spending their time writing grants instead of doing research. It’s the worst part of academia. I think we should simply do what it implicitly suggests: replace grant proposals with submitting abstracts (maybe half a page or so) and CVs. Plus some regularization to ensure a more even spread of grant money. Better for everyone. “But what about the new investigator that has no track record but a brilliant idea?” Specific grant schemes for new PIs, as already exists Research is a social endeavor, you learn it and get a track record by collaborating with others Brilliant ideas are a dime and dozen In other words, Science does not care about the details of your research, and this good, actually, we should stop wasting time with that and allocate money based on your social status. Thus is proposed by Ruxandra Teslo this law, after explaining that failed corporatists are forcing the weird nerds out of academia: Any system that is not explicitly pro-Weird Nerd will turn anti-Weird Nerd pretty quickly. Most would-be Karikos, including the ones who are not somewhat crazy, are driven out. Another sign of how things are going, yes the study data is posted online. Ben Landau-Taylor: In 2023 Ian Hussey tried requesting data from dozens of academics who promised “data available upon request”, and found they were LESS likely to share data (17%) than authors who did not make any promises (26%). Over and over again, when we check the parts of today’s academic process which can be inspected, it turns out that there’s nobody home. The parts which are harder to inspect? Well, I’m sure those are fine. The rationalist term is ‘front running steel man, for German Claude suggests Replikationsmangeleinsichtsanerkennung (‘acknowledgement of the insight despite lack of replication’): Tess: There should be a German word that means “I see where you’re going with this, and while I agree with the point you will eventually get to, the scientific study you are about to cite doesn’t replicate.” Paper from the Federal Reserve Bank of Dallas estimates 150%-300% returns to government nondefense R&D over the postwar period on business sector productivity growth. They say this implies underfunding of nondefense R&D, but that is not right. One should assume decreasing marginal returns, so this is entirely compatible with the level of spending being too high. I also would not assume conditions are unchanged and spending remains similarly effective. Antisocial Media What are the load-bearing posts of our time? Only one way to find out. Recommended thread if you haven’t yet. I am sad you can’t easily find all the quote tweets. TikTok gives different people completely different comment feeds on the same video. Woman gets comments supporting female video creator, man gets comments supporting the creator’s boyfriend instead. Evil genius. fabian: the final stage of web2 social media is that everyone is heaven banned maybe not enough demand yet to enable more controls, but maybe just too crude tooling? let folks tap more seamlessly into different simclusters, view feed as-redneck/feminist/techbro/nigerian-communist TaraBull: TıkTok is dividing people by curating entirely different comments to us. Do you look at comments to gain perspective on social media? Was this purely the ‘people you follow or engage with show up first’ principle being strong enough if you spend too much time on the platform? I very much doubt it. Ragged man stands up, says this anything beyond that should be against the rules. Everyone gets different feeds, but aside from actively connected specific accounts we should mandate everyone gets the same comments sections, unless you are being intentionally heaven banned. You can still gain perspective from the comments on videos even so, but you need to be properly calibrated, and understand you are seeing a customized reaction. How that compares to your expectations is less information, but still information. You want more evidence TikTok is an enemy of America? It hates us so much it banned anyone who helped promote Ozempic, without warning, under the ‘Integrity and Authenticity’ policy, in particular the ‘might be helping Americans be better off’ clause. “We want TikTok to be a place that encourages self-esteem and does not promote negative social comparisons,” TikTok says in a preface to the rules. That’s right, yes, not letting people say a healthy weight is good is an actual CCP op. And yet, the algorithm knows all: Stephanie Boyle: I’ve seen all of these creators on my fyp. I usually see them complaining about being banned which I often find mildly amusing. If they were banned or shadow banned, I wouldn’t see them I would think! The market only has a 33% chance that TikTok will actually get banned, despite ByteDance having revealed it won’t be allowed to divest (I bet nominal yes purely for tracking purposes and don’t have a strong opinion). Liz Miele got flagged on YouTube for hate speech on her latest special Murder Sheets because she playfully calls her own cats the C-word, despite their policies not even listing the word, with no way to fix it, cratering her ad revenue. I was at the taping of this special, and calling that hate speech is completely absurd. This feels like an AI Solves This problem, and also a Human With a Brain Solves This problem? Yes, perhaps for people with 8 subscribers and 31 views you cannot afford to check when someone appeals, but this is very much not that. The good news is that enough people heard about this that one of them found someone who could hear her appeal, and they fixed the problem. Yay. Did you know that if prominent people give you retweets, you get more views and likes? Yeah, least surprising economics experimental finding ever, and that’s saying something. What is more interesting is that getting the prominent economist retweets of job market papers actively did boost flyouts and job offers, women receiving 0.9 more job offers. Which is a lot of job offers given you only ultimately need one and the average for controls is 1.5. Paul Goldsmith-Pinkham: The average control individual in this sample is an individual who has 11 tenure track and 16 total interviews, 5 and 3 flyouts, and 3 and 1.5 offers. Notably, being URG doesn’t predict (significantly ) on any of these outcomes for the control. Why does it work? Here is one guess. Paul Novosad: An explanation could be that the candidate search is EXTREMELY random. We get 1000 applications at Dartmouth, and our administration requires that the same 3-4 people review every single one. It’s an overwhelming task. It’s inevitable that people make quick decisions — as happens in college admissions and all other kinds of job hunts too. Any kind of positive signal at that first stage could increase your odds of moving forward substantially. Never mind social media, is the internet bad for you? The study says mostly the internet is good for people, actually (ragged man stands up at meeting), although in some data sets women ages 15-24 in particular are worse off. I am not in the Patrick McKenzie camp where the internet is by a mile our greatest invention, but yes, the internet is pretty awesome and I am very happy to have it. Also I agree with the commenters that any such study is hopelessly confounded. New York passes law making it illegal for social media websites to provide ‘an addictive feed’ without ID verification. It is called, of course, ‘SAFE for kids act.’ Also parents will be able to pause notifications for their kids from 12am to 6am (okay, I guess), and ban selling data from users under 18. Doesn’t seem great, plausibly unconstitutional, and it is always weird when people say ‘you cannot collect our data’ and then require age verification. Nate Silver: The [Twitter] For You algorithm is pretty good at picking up on your revealed preferences so if you’re complaining about it, you’re kinda telling on yourself. It measures your interactions, so you are telling on how you choose to interact. We are forced to be disciplined in how we react, lest the AI gives us what we on reflection do not want. We now have to exercise this kind of strategic thinking with every online interaction. It is exhausting. The Twitter Porn Bot War Twitter porn bots. Hard to catch? Michael Nielsen: Can someone at Twitter anonymously explain to a reporter why the pornbots are being allowed to proliferate? (I presume it’s because Elon thinks it’s funny?) Paul Graham: Apparently they’re hard to catch. I know this seems implausible. I roll to disbelieve. I could believe that porn bots that are trying maximally hard to not be caught are hard to catch. I flat out refuse to believe that the actual in practice bots on Twitter are hard to catch. The bots are so unimaginative that I’ve gotten the exact same message about a sister looking to date 10+ times, the same exact crypto messages. The porn bots 90%+ share several very clear characteristics. I have an alternative theory. Now hear me out. Twitter is choosing to have bots that are trivial to identify. If they crack down, then the bots get sneakier, and actual humans have to spend time on them rather than recognizing in 200 milliseconds that it is a bot. Better, they have decided, to do a phony war that doesn’t actually cause much stress or lost time. It’s crazy, but not as crazy as it sounds. Could it be as dumb as? Tyler Young: Some of them are sophisticated. Some are very much not. My bet is that Twitter has no interest in solving the problem because the bots boost their engagement metrics. I cannot rule it out. I mean, you’d think it can’t be this stupid, but it can. At some point, making the insurance fund an actual random number is less harmful than making people miserable in order to create a more credible fake number. Patrick McKenzie sees them as a visibe test of non-state capacity, similar to cleanliness at McDonalds. I Like My Twitter Posts Like I Like My Porn Bots: Private Twitter made likes private. Note that even if there are no flaws, it is two-way private. The person whose Tweet you liked knows it was you, which is vital to Twitter functioning. Paul Graham: Instant 10% increase in likes. Large numbers of people must have different opinions than they dare express, to move the total number of likes by that much. The problem is that people have literally gotten into big trouble or been attacked out of context for merely liking the wrong Twitter post. Whereas the upside of liking a post is very small, and also people might look at your list of likes to find good content. Stuart Buck: One downside of Twitter making “Likes” private is that one of the most interesting ways to find new ideas/tweets was to go to the “likes” of someone you admire, and see what they had been reading lately. I occasionally enjoyed seeing the “likes” of John Arnold, Patrick Collison, and others. Lots of overlap with the stuff that I read, but it would regularly turn up interesting ideas/people that I hadn’t seen. So it makes sense to now be modestly less selective, also it could easily be a temporary bump from the new policy (‘I can like everything I want now, mwahahaha’). Michael Keenan: Like everyone else, I’d rather they make this optional per Like. A side benefit would be that we could see a tweet’s public:private Like ratio, which would measure taboo strength. We’d see what taboo topics are ready for an information cascade. Complexity is bad and choices are bad, and a ‘private like’ carries a weird implication. Not being public with your likes could be seen as a kind of ‘evidence of guilt,’ even, or you could be blamed for being public or private. I am not excited to split the baby here, but it does solve some issues. Violet Blue: So now scammers and bots can artificially inflate post popularity and no one can verify if likes are from any real accounts. A gift to influence ops. Shoshana Weissmann: This is a REALLY good point. This is another huge use of checking likes. There was once a company opposing R Street’s work. All the likes were bots and weirdly the like count fluctuated throughout the day. Now we won’t know. Yep. Public record of likes lets you understand context. What type of engagement is happening here? Who is liking this and who is not? It is rarely the best use of one’s time, but occasionally it was valuable, as would have been tasking an AI with this. Beff Jezos notes likes often said ‘I understood this post’ and regrets that this is gone, or flagged things for their followers, and the new world will only reward those who cater to the center of mass rather than the tail of intellect (virtue of silence goes here). The first use should mostly still be intact, since the author still knows. I do think Jezos has a point here, but that this does not shift the balance of power all that much. Already Twitter favored the middle quite a lot. That could be part of the motivation as well. If your likes are public, an AI can use that as data in a way humans could not do at scale. Variously Effective Altruism Scott Alexander on the Far Out Initiative, a quest to abolish suffering by altering neurotypes rather than the usual proposed method of omnicide. The claim is that Jo Cameron is incapable of any form of suffering, and she’s otherwise mostly fine, only a few minor accidents, she still seems to do things and care about things, it’s great. So let’s do that for everyone and ow who put that stupid fence there? I always view focus on suffering in general, especially when viewed as The Bad, as at great risk of asking the wrong question. Suffering is highly correlated with things sucking, and provides valuable information that things likely indeed suck and in exactly which locations and ways they suck. This is highly useful, both as information and motivation. That does not mean we currently have the correct level of suffering in response to things sucking, or that a lot of our suffering is not mismatched. Nor does it mean that the suffering does not make things suck a lot more than they need to. That is a roundabout way of saying the right amount of suffering is probably a lot lower than the human norm under current conditions, let alone those who report constant suffering, but the right amount is not zero. I do not sufficiently buy the ‘you can vary how happy you are instead’ counterargument. Negative reinforcement should not purely be the lack of positive reinforcement. A knob to lower this setting would be immensely valuable, but yeah, I worry a ton about what people would do with it. Are You Happy Now? Here is a question that is not so correlated with that, entire history of the question: Stefan Schubert: Most people are not unhappy. [then he shows this graph] Danielle Fong: It’s fascinating how un-impacted this data series is by basically anything. Matthew Yglesias: It’s fascinating how un-impacted this data series is by basically anything. How do I know? Because ‘lol nothing matters,’ to this extent, is not a plausible hypothesis. Are you telling me 2008 did actual nothing? That 2020 did actual nothing? Phones? Yeah, no. My explanation is that this question is being answered in relative terms. You aren’t equally happy during a pandemic or financial crisis, but that is not the question being asked. How your personal life is going is a question that mostly rules that stuff out and is judged compared to other people around you, and we are mostly risk averse and willing to accept somewhat below average performance, so we consistently bat around 80%. Here’s what Stefan was responding to: Tim Denning: Most people are unhappy. So, I’ve spent 20 hours watching Bill Murray interviews over 3 months. What did he find? Organizing for space: Forget trying to be famous, try to be rich first. The more relaxed you are the better you are. Be weird as hell, crash random events and parties. Tell everyone you are retired. Most mental health advice is too serious. It’s hard to be an artist. It’s hard to be anything. It’s hard to be. The automatic things you do are basically those things that keep you from doing the better things you need to do. Whatever you do, always give 100%. Unless you’re donating blood. Giving a s*** is underrated, the competition is weak, most people never try. Melancholy is kind of sweet sometimes. It’s not that attractive to have a plan. Focus on being resourceful, not clever. It just doesn’t matter! People worry about dumb stuff. Go do epic stuff. You can tell how boring a person is by the lack of fear in their eyes when someone is flipping through photos on their phone. Just beat my record for most consecutive days without dying. People say I’m difficult. Sometimes that’s a badge of honor. Strongly agree: #1, #2, #5, #6, #9, #13. Directionally agree, good advice, but not strictly consistently true: #3, #7, #8, #11, #14. Not buying it: #4. Never retire. Maybe tell people different, sometimes? Actively disagree: #10, #12. You need a better plan, and it is boring to take photos rather than live, although I am considering changing that somewhat because of AI search and using the photos as a kind of memory bank. Given the non-obviousness level here, that’s a very good hit rate. Good News, Everyone Jerry Seinfeld’s commencement address at Duke was very good. So was his appearance on Honestly. It is fascinating how much more interesting I find Seinfeld when he is not on stage, compared to how he did when I saw him at the Beacon Theater. Ruxandra’s post claiming that autists (rationalists being the chosen example) and the Internet will defeat the monoculture. I do not see us bringing down the monoculture (at least not via non-AI methods). The monoculture need not much care that there are a handful of people off in the distance doing its own thing, and indeed it will come for such groups in time, and it has. If all the major tools and attention is monoculture, and there are a bunch of small splinter factions that occasionally get to add some concepts to the monoculture, that is better than not having the factions but mostly still monoculture. Polymarket raises $70 million including from Vitalik Buterin and Founders Fund. As is noted in the announcement, Polymarket is often now cited as a news source, since it is the largest prediction market on major events even without American participation. Note that they are crazy low on Biden, having him at 34% (!) as of this writing, with Trump at 56%. Whereas Manifold has Trump 52% versus Biden 46%. Adding to 98% is slightly too high, but adding to 90% is clearly too low. In general Polymarket is biased towards Republicans. The obvious play is to take the gift directly as they (at the time) had Biden dropping out at 24% (!?!) versus Manifold’s single digit levels. Yes there is some chance you lose and nothing is ever investment (or gambling) advice but hot damn. Remember always that such changes persist, so you are probably stuck holding until election day. Or, perhaps, somewhat after it. Review of a new book on basics of Bayes, looks promising. A look from Dylan Matthews inside the INR, a federal intelligence agency that uses a small group of dedicated domain experts (as opposed to places like the CIA where everyone rotates every few years) and got Vietnam, Iraq’s lack of a nuclear program and the early stages of the Ukraine war right. Which would have been a lot more useful if anyone had listened. Of course, they are far from perfect. Dylan Matthews: For their part, INR veterans tend to be less triumphalist, preferring to say they were merely “less wrong” than other agencies. They agreed with other agencies that Iraq still had biological and chemical weapons, and they got that wrong. The article is full of INR wins, and notes some INR losses. It is ‘contrarian’ because it does not bow to government consensus and is proud of dissent. Alas, they are being shrunk, and they are paid poorly. It is going to be tough. And their methods depend on far too much confidential information for us civilians to tap their expertise. News you can use: A map of the public bathrooms in New York City. The bees are fully back. Topher Stoll: This is the hilarious tragedy that plagues all of human endeavor. If we rally to fix a problem in time, idiots will come out of the woodwork to say that there was never a problem to begin with. See also: Y2K, the Ozone Layer, global food supplies, “peak” oil, Acid Rain. One day, god willing, some incurious doofus will be able to say with a straight face- “Pssh, climate change was NEVER a danger! All our energy is renewable, the geo-engineering is going great, and we’ve restored 90% of habitats around the world.” That’s the dream. Yep. Always the dream. Ideally we’d be measured before and appreciative after. Alas, it almost never works that way. Tyler Cowen recommends reading about a specific business you know a lot about already, or if that fails about the business of a sports team or musical group that resonates with you, as opposed to books in the ‘business section.’ As he says, the part about not reading ‘business’ books is the easy insight. The hard part is what is good. Here I worry that there are too many important differences between superstar competitions and other practices, and thus if you are not careful you would learn many wrong lessons. But I do agree that looking into areas you know is great here. Tyler Cowen book recommendations: Olivier Roy’s new book The Crisis of Culture: Identity Politics and the Empire of Norms was a very strong one. He also suggests In This Economy: How Money & Markets Really Work by Kyla Scanlon. Also he says in Cape Town you reverse the usual rules and eat at restaurants with beautiful women facing the waterfront, because everything is cheap and you want to be part of the class of people with money. Order the seafood. He does not mention this, but the right next question is, how does this generalize? Good Social Advice A Twitter thread guide to hosting gatherings. This model says: Look for people who are interesting and are interested in others, never invite people because you feel obligated. Curation of people is key. You only need 14 square feet of active party space per person. Create talking spaces where people face each other, ideally limited to 4-5 people. Warm bulbs for light, make a playlist, mostly don’t sweat it. Some related good advice on community spaces: Tetraspace West: I think my hard won community management advice is: Laissez-faire and free speech is for strangers; your walled garden is tiny and low-stakes, be ruthless. Not *technically* breaking the rules is breaking the rules. Your discord server can maybe have an #appeals channel, if you know what you’re doing; if you start creating something that looks like a legal system, you’re copying intuitions from systems much larger and more alienated and less designed than yours. A justice system is based on the principle that punishing innocent people is very bad, and decisions must be objective. In many situations, those should not be priorities. Also good social advice: Elizabeth van Nostrand: I’ve know of several people who violate social rules a lot and tell people there have been no consequences. They are wrong about this. It might be true that it’s a good trade off for them, but I also know of opportunities they otherwise would have been offered but weren’t because they were considered too hard to work with. … Long ago I read a blog post about a clerk at a porn rental store (so, really long ago) about a karma system he + coworkers implemented. They had a fair amount of leeway around late fees, and if you were rude to them or another customer it would never be used in your favor again. Like a note went in your Permanent Record at the porn rental store that you were mean and they should be mean back. The justice feels delicious here but no one was being made a better person by that so mixed feelings. See page 52 of this PDF. Examples of rules broken: arrive within an hour of when you said you would most of the time, don’t yell at people or call them names, don’t constantly ask for favors from near-strangers and if you do at least be really nice about it. Oh and my favorite “starting projects other people depend on you can’t complete, forcing others to rescue you.” Also sometimes they people lie. I’ve heard people forced out of multiple spaces that were deeply important to them, tell others they’d never faced consequences for being too X. Paul Crowley: This is a great caution. You often won’t know about the invites or kinder treatment you didn’t get because someone noticed you violated a rule. They often won’t tell you. Also, rule-violators lie about this stuff. I have known more than one example where a whole circle of people have known that someone is a liar, but no-one tells them to their face, and they very likely think they’re getting away with it. Quinn Que: An easy example of this is being blocked by people you’ve never interacted with on social. Paul Crowley: I block like this a lot! Five models of how to live near friends, from Cabin: I strongly endorse the Apartment Cluster. I have some small experience with this, having had one friend living our building. It was awesome. It is hard to overstate the extent to which not having to go outside meant we had more interactions. Same floor would have been another big jump. Trivial inconveniences matter. The best part is that this is easy to do. In any big building there will be openings over time, and presumably you chose the place for a reason. Alas, our problem is that those we know always wanted to live in cheaper locations than we did, so we couldn’t make it work. Yes, you could do this in reverse via ‘meet your neighbors,’ but these days it is difficult, and it turns out most people are not on the same wavelength. The people in the next apartment are lovely, but we have so little in common. It is hard to make that work these days. Minihood is the classic version, potentially even easier, and the one that was pulled off in Berkeley. Again, exact proximity matters a lot. You want easy walkability. The duplex dream is a step up from both, if you can pull it off. ADU is a stretch. Micro-village is often the dream. I have seen much talk of it over the years, but no one ever seems to get that close to pulling it off. Coordination is hard. From what I can tell, this will only happen if a small subset is willing to take the risk and do the work, and then offer to let others follow. You will also need easy access to civilization. FTC Wants to Ban Noncompetes I am late to the party on this one due to other concerns, but still seems worth weighing in. By all means skip if you consider this old and busted. The FTC have decided they are the fairness department. They decide what is fair. If they decide your agreement is not fair, that agreement is null and void. If you don’t like it, tough, because life it not… well, you know. In this case, the thing that they have decided is not fair are noncompetes. Dave Michaels (WSJ, April 23): The Federal Trade Commission on Tuesday banned employers from using noncompete contracts to prevent most workers from joining rival firms, achieving a policy goal that is popular with labor but faces an imminent court challenge from business groups. … The rule prohibits companies from enforcing existing noncompete agreements on anyone other than senior executives. It also bans employers from imposing new noncompete contracts on senior executives in the future. … Noncompete clauses violate a 110-year-old law that prohibits unfair methods of competition, the FTC says. … Outlawing noncompetes is hugely popular with many workers, and the FTC estimates that its rule would boost their earnings by $400 billion or more over 10 years. Cosmetologists, who earn about $35,000 a year according to federal data, say noncompete agreements are a drag on their earnings. The move, approved 3-2 by Democrats on party lines, is roughly 50% to be upheld after all appeals. Pacific Legal is suing on the highly reasonable grounds that this is none of the FTC’s business and these agreements can be good for both parties by enabling training. Austin Campbell is one of many celebrating this decision, calling it an earth-shakingly massive win for free markets and capitalism to deny this freedom of contract to deny one future freedom of contract. In practice, he argues, noncompetes are highly abusive and workers are put in horrible situations. Like many, he argues that this isn’t a free contract because many don’t know what they are agreeing to. It doesn’t have to be that way. A noncompete is a fairly straightforward thing. I once signed one that the employer refused to waive or even to let me buy out of or negotiate about, and that I decided to honor, and it sucked, but I did not have a lawyer and I was not for a second confused on what I was signing. Did I check if it was enforceable in my state at the time? No, because a contract is a contract is a contract, I knew what I agreed to, and I was not about to break my word even if I wasn’t legally bound to it. The flip side is studies show workers don’t understand and do not bargain with the noncompete in mind. Which seems crazy to me, but also shouldn’t obviously matter if employers are competing for workers? Then there are workers who aren’t aware they even signed. That I agree should not be allowed, you should have to be very clear that this is a noncompete and on what it applies to. Here is Luke Herrine sharing a bunch of examples of workers who got screwed by noncompetes. Others complain of an equilibrium where most employers insist on noncompetes, putting workers in a terrible position. The next question is, why doesn’t one employer compete by offering lower wages and not requiring a noncompete, if that is better for workers? One possibility is that we are up against the minimum wage. If that happens, then yes, employers will have to compensate with other terms, and banning these agreements is a lot like raising the minimum wage further, and likely the superior choice. It certainly seems like there should be some wage floor on new noncompetes to avoid this, substantially above the minimum wage. Another possibility is that the employees, whether or not they know what the agreement says, are wrong about what the agreement is worth to them. Like in many other places, they focus on the headline number and other short term factors, and don’t properly factor in the noncompete. Alternatively, they are liquidity constrained so they have to make tradeoffs. A third possibility is that you don’t want the employees who are more inclined to refuse to sign noncompete, because they are the ones who will leave and compete with you, so the equilibrium is everyone has to sign even though that’s not net good. That would be a story where intervention makes sense. Another story like that is if competition and dynamism are largely public goods. So the employee and the employer can make a deal that leaves both better off, but it makes everyone else worse off, so you want to tax or ban it. Possible. Betsey Stevenson is on the ‘victory for the economy’ side. Tyler Cowen refers back to his post from January 2023, where he argues noncompetes can be welfare enhancing. His argument is straightforward. If you can go to a competitor tomorrow, I am forced to silo my trade secrets and other information, and I will invest less in your skills. At the low end, noncompetes seem net negative, but we shouldn’t be too restrictive. Alex Tabarrok agrees with Tyler on the merits that the proposed ban is too broad, and also questions the FTC’s authority. As he points out, the FTC’s claim that banning noncompetes will raise wages ignores that this is part of the worker compensation basket. By default, we should expect wages to go down short term. My response would be the FTC is abrogating existing contracts, which effectively raises the wages and bargaining power of the impacted workers, which means the short term impact could indeed send wages higher. Alex buys the externality story, though, so he is willing to give the change a try. Another story I buy is that noncompete agreements can be observed and enforced whereas NDAs make this a lot harder, so often noncompete agreements are substitutes for NDAs. Arthur Breitman: On the FTC… in a few serious industries non competes aren’t about depriving the competition of talent or even employee retention, they are largely a stopgap to make NDAs de facto more enforceable. Of course we’ll hear the contorted explanations from a cohort of Silicon Valley “libertarians” that it’s a great policy, because that’s part of the local lore, but it ain’t. There are industries where trade secrets are far more valuable than the broad Internet tech sector, and the alternative to trade secrets are patents. While I Cannot Condone This Your periodic reminder to file under ‘and then they voted.’ Aaron Blake: The NYT/Siena poll shows 37% of Trump voters say Trump is most responsible for the Supreme Court overturning Roe v. Wade. 24% … say *Biden* is most responsible. ‘If it happened on your watch it is your fault’ is a popular heuristic. This makes it very difficult to make good policy decisions. Scott Sumner on aging and looking back on the past, recommended. The mirror of Jerry Seinfeld’s graduation speech is Chiefs placekicker Harrison Butker’s graduation speech, that of a traditional Catholic saying what many traditional Catholics actually believe to a college dedicated to traditional Catholicism, no matter what you think about that. People with different worldviews got mad at him. Cable! Get Netflix (with ads), Peacock (with ads) and AppleTV+ for $15 a month, if you already have Xfinity TV or internet. I hate that this is with forced ads. Ads are the bad equilibrium. People should work a bit more, then pay the extra money, everyone is better off. Alas, when packages form, the ads seem unavoidable, because if people want discounts everyone assumes you must want the discount more than you want to avoid the ads. Give me the version that packages and integrates all the media services so I don’t have to rotate and app shift, with zero ads, at a modest discount to the combined price (let’s say $200/month for Netflix, AppleTV+, Hulu, YouTube and YouTubeTV with the sports channels back and ad autoskip, Paramount+, Peacock and Max, ideally throw in various side subscriptions), and I will be all ‘Where do I sign.’ I have active reasons I want each of those. Instead, right now, I’m ‘supposed to’ be rotating between them, and they’re (largely correctly) counting on laziness to stop me, so I only partially bother, and I’m missing several of them. The SMBC theory that you should maximize the vector sum of your life and your work, which is why so many great artists, scientists and philosophers are ‘huge dickwads with tortured lives,’ they get little value out of life so they focused in on work and achieved greatness. This reminds us that for those with the talent the Great Work has highly increasing marginal returns. We would be better off if there were more people who went fully all-in on the Great Work. They should be rewarded and supported more, and (up to a point, but a high one) forgiven their personal trespasses. Uber does pass on tips to drivers, but its interface implies heavily that it doesn’t so Bill Ackman’s Uber driver thought they were being stolen. This is a bizarre own goal, why would you do this? They also taking a huge chunk of the actual fare. Claude says typical is about a 25% fee. That is in some sense outrageous, but it still exceeds the consumer surplus from being able to use an app and it isn’t remotely close. Aella reminds us of a great rationality technique in such situations. When you see a claim or headline, ask what the world would look like if the claim was true. As I’ve said before, the repugnant conclusion is based on a fallacy in its core argument, but another distinct problem with the repugnant conclusion in practice is that it leaves you little margin for error. Amanda Askell: Being averse to the repugnant conclusion makes sense. Unless you’re omniscient, a googolplex lives at +1 utility is indistinguishable from a googolplex lives at -1 utility. Better to have fewer clearly positive lives to reduce the risk of accidentally bringing about a hellscape. This is a good principle in general. One wants to have a bias towards action and living and being, to balance out most people making the opposite mistake, due to the value of experience, story, skill and exploration and such. Ultimately most of the value comes from things that are very clearly valuable. If you cut out all the marginal stuff that isn’t required to match some need, you are making only a small mistake. Nick reports a third of women he is close to dream of opening beautiful bookstores with cafes, and Tokyo says doing things like that is awesome, so how can we make this easier? My presumption is they dream of doing this and also somehow being able to make a living and raise a family. Alas, the economics of bookstore cafes are not good, even if you solve for zoning and rent costs and get rid of a bunch of dumb regulations. And also what they want is to have the bookstore and cafe be there and to hang out in it all day, rather than do the actual background work of running it. The alternative plan is ‘these people would do the fun parts for free,’ which Nick proposes, but do they have that ability? I’m sorry I must report that the principle here is right, but of course there are obvious exceptions, although mostly to the first clause. Paul Graham: If it starts “I’m sorry I” it’s a genuine apology, and if it starts “I’m sorry you” it isn’t. New movie ‘The Apprentice’ chronicles part of the rise of Donald Trump, well before his political adventures. Dan Snyder, former owner and abuser of the Washington Football Team, joined the Canadian, Irish and Danish government and others to help finance it because he thought it would be flattering, then turned around and fought its release (intended for this year ahead of the election) when it turned out to be attempting an accurate portrayal. Sources familiar with the back and forth say Snyder took issue with multiple aspects of the film and weighed in on what should be changed. … Despite its title, “The Apprentice” doesn’t chronicle Trump’s years as the star of the hit NBC reality show that catapulted him into the Oval Office. The logline provided to press calls the film “a story about the origins of a system … featuring larger-than-life characters and set in a world of power and ambition.” It adds, “The film delves into a profound exploration of the ascent of an American dynasty. It meticulously charts the genesis of a ‘zero-sum’ culture, one that accentuates the dichotomy between winners and losers, the dynamics between the mighty and the vulnerable, and the intricate psychology of persona.” … Trump has not yet weighed in on “The Apprentice.” (He did not respond to a request for comment from Variety.) One insider says, “it would be like a gift.” I would have a prediction market on whether Trump will weigh in, except what would be the point, when has Trump not weighed in? Trump is certainly all about the zero-sum culture and winners versus losers. Which level are you playing on? Yosarian Two: Chesterton’s meta-fence: if you’re walking in the forest and you see a bunch of people removing a fence, you can’t invoke Chesterton’s Fence until you know why they’re removing the fence. Matt Neary: Chesterton’s fence still applies at object level. You should inquire why they’re removing it and confirm that they are aware of its original purpose. Yosarian Two: Inquiring is never a bad idea, but it’s worth keeping in mind that the fence, the guy building fences, the people removing fences, the process by which people decide to remove fences, etc, are all existing systems that exist for a reason. It might or might not be a good one. Pasha Kamyshev: You can always go one level of meta more: If you see people invoking “Chesterton’s Fence,” don’t un-invoke it, until you understand why they invoked it. Lyn: what if you see Chesterton removing his own fence? Yosarian Two: Then you have to ask him both why the fence was there in the first place and why he’s removing it. Unless there’s a cultural fence against bothering Chesterton on his own property about his own fence which there probably is. nihilism disrespecter: reverse of chesterton’s fence also true: don’t try to RETURN to something your ancestors abandoned until you understand why they abandoned it. Do not in general assume people know what they are doing or why they are doing it, unless they are doing something local and practical. The question is, which act is removing a fence and which one is not? Enemies of the People I do not think we can let this one go. Anya Martin: I know it’s dunking on a dead horse but… if the fundamental issue is that people are too poor to have a nutritionally balanced diet, & a product is invented that makes a nutritionally balanced diet affordable & accessible, then that literally does address the fundamental issue. Seth Burn: I think there has to be no limit of the dunking here. At this point Greenpeace is being actively evil, and that should be recognized as such. Maia: Anti GMO types will be like “Oh, you support alleviating poverty? That pales in comparison to my preferred strategy, eliminating poverty” and then not eliminate poverty Niels Hoven: Oh, you invented a cheap nutritious food to alleviate global hunger? Sorry, that doesn’t address the fundamental issue: that even in 2024, people still have to eat and drink to stay alive. We had the fun claim going around from The Guardian that ‘12 percent of the population of Phoenix, Arizona will die of extreme heat in the 2030s.’ I would respond explaining why this is Obvious Nonsense, but as I noted on Twitter I have been informed by some effective altruists that dead horses may experience qualia. And we have Just Stop Oil spray-painting Stonehenge (yes literal Stonehenge) orange a day before summer solstice. Which turns out to be not only a huge desecration but also actively criminal and also a threat to an endangered species. But hey. Capitalism. They kept doubling down on this being a good idea, on the theory that the way to get what you want is to do the worst possible thing until people give up? Clearly, then, what they should actually do is found an AGI company. Your objection is that would be capitalism, but don’t worry, you can do it as a non-profit and raise money in the spirit of a donation. Jason Crawford gets the point for being the first to actually say the line ‘Never doubt that technology can eliminate poverty; indeed, it’s the only thing that ever has.’ Others come out and say it. As always, I appreciate the candor. Not the Bee: “Planet of the Apes” actors [Freya Allan and Owen Teague] say they are “Team Ape” because humans are bad for the environment and start wars: “I dislike humans a lot.” Elon Musk: The true battle is: Extinctionists who want a holocaust for all of humanity. — Versus — Expansionists who want to reach the stars and Understand the Universe. It is extremely frustrating when people are very clear they are on team human extinction, and others do not respond accordingly. It is even more frustrating when people confuse team human extinction with team humans reach the stars. Indeed, often they flip them. And then I and others have to hear all this talk about how we are on team human extinction, exactly for saying we can’t help but notice that it would be better if humans did not go extinct and current actions are likely to lead to that. The moral economy of the Shire. Good read. Lab Grown Meat Shirts Answer and Raise Questions Last month I covered Florida banning lab grown meat. I explained that I did not support a ban on lab grown meat. But I understood why others might support it, which is that if lab grown meat becomes viewed by a certain crowd as an acceptable substitute there will be an attempt to ban other meat. And I explained that many people quite reasonably expect this to happen, and possibly succeed, well before this lab grown meat can match quality, quantity or product variety and detail preferences at a given price point. They expect this because we have many prior examples of exactly this happening. As in: Also because lab grown meat advocates are explicitly saying they want to ban meat. ‘Your claim that people understandably want to ban lab grown meat because we are coming for your meat is your worst take even though you do not support such a ban,’ many commenters said, while also saying that they are coming for your meat. That’s all. Again, I’m not saying we should ban lab grown meat. I’m saying we shouldn’t ban it, but also you should understand why people might choose to do that. Ban Gain of Function Research Senate resolution calls for a moratorium on all federally funded gain of function research given the increased safety concerns. Also we are doing even worse than that? Aidan O’Gara: Orders for 1918 Spanish Flu were sent to 38 DNA synthesis labs; 36 completed the order. Many of these labs had protocols for screening out hazardous orders, but simple methods circumvented the safeguards. Need better techniques and wider adoption for DNA synthesis screening. There are arguments it probably would not be a big deal if this particular strain got out right now, but ‘not making copies of the 1918 Spanish Flu without a damn good reason’ seems rather far up on the very basic tests of our protocols? We can’t even keep a basic blacklist here? Gamers Gonna Game Game Game Game Game At LessOnline I was introduced to the game Lonestar. I am placing it in Tier 3. I went 20-4, never using an initial reroll and winning with 16 different pilots. Game is fun and has unique elements, also game is weird and game is not difficult even at highest difficulty. Also, can we please not make unlocks this slow? There are still a bunch of items that haven’t ‘unlocked’ yet. My current game is Shin Megami Tensei V: Vengeance. It is still early, but this is a clear improvement over vanilla SMT:V and the best entry point to the mainline series although SMT:III is still great if you are ready for true old school. For newcomers biggest tip is be very careful where you spend your Glory, a highly limited resource. A little late for the event itself at this point, but Nate Silver offers 21 tips for acing the World Series of Poker, most of which generalize. Alas, I have accepted that I am too old to play the World Series for value. I could study GTO and get good easily enough, but I can’t sustain for long enough through the fatigue. Nate Silver reminds us to not be a nit, an overly tight player in poker or life that is too risk averse. Opposite is degen, usually used as praise by the other degens. My experience was that almost all successful sports gamblers were also degens. If you didn’t love risk you weren’t gonna make it. You make mistakes and take dumb risks as a degen but if you give action you get action and you can make it work. In most of life, similarly, most people are effectively nits who are far too risk averse, or hopeless degens, very few in the middle. For many purposes better to be a (modest) degen so long as you’re learning, at least you have a chance, most of the value is in the extreme upsides, the disaster is rarely so bad these days and it will be a fun ride. He also notes that using phones at the table is one thing, but somehow you are de facto allowed to text your buddy a spot during a poker hand at WSOP events? I could not agree more with Nate Silver here. Nate Silver: Dude in the Mystery Millions today pulled out his phone in the middle of a hand and took like 40 seconds texting his buddy the spot. (He opened, one caller, I shoved on button, action was back on him.) I don’t want phones to be banned at the tables. But if were a tourney director I’d set a rule that anything other than incidental use of your phone once you’ve looked at your cards = your hand is dead. And something like that = DQ. I agree with Matt Sperling that the Arena tournaments being on demand play instead of rounds is a huge life upgrade. Waiting for rounds and having to be on a fixed schedule are very expensive. It is weird they still have a narrow window to join day 2, they could simply not do that. Price of Magic Arena is going up, they are charging 40k gold or 8k gems for the enemy fetchlands playset, versus the old standard of 25k gold, so about $40. You pretty much have to either pay this or burn the wildcards, if you want to play the formats in question. But compared to most things in Magic that’s actually pretty reasonable? Video of Daniello Carmine 100% definitely being a filthy cheater, It is naked eye obvious, I like to think I’d have caught it for sure in real time. No ban. What a joke. Whereas here is Stanley’s story of how he got knocked out of contention at an RC, followed by a full DQ and being expelled from the hall. He let his opponent look at her top card so she could scoop early if it wasn’t a land, someone called a judge about it, both of them get a match loss which effectively knocks them out of contention for ‘improperly determining a winner.’ Then there was aggressive behavior that led to a DQ and the expulsion. My thoughts here? The DQ is necessary once the aggressive behavior happens, no matter the cause. There’s no real choice there. However, as LSV says, the match loss ruling that led to all this was. while technically correct, deeply stupid in context. Could we give judges enough discretion to avoid that and have it be fine? We could. In this case we didn’t. I do think at minimum judges should absolutely step in before an unintentional violation if they notice it about to happen. On Reddit another player tells the story of a judge watching him shuffle while one of his cards is on the floor, then giving him a game loss for an improper deck the moment he presents. What does that judge think that rule is for? What does that judge think is the point of a tournament? Yikes. Ondrej Strasky once again attempts to quit Magic. A great attitude: Jake Chapman: One of my favorite slices of time is the hour or two after playing a strategy game for the first time and losing. It’s an opportunity to ideate around a new system and come up with new, more effective strategies for future fame sessions. A new world of challenge and possibility. Yeah, this is often pretty great. There are strategy games where the first game is stamped ‘You Lose.’ There are others where it is not. I find it good to go in knowing the difference. Agricola is a great game, but you have to learn it, and I was happy that my group essentially treated my blind first game’s 4th place out of 5 as a win. When I tried to play my first game of Advanced Third Reich or Napoleonic Wars, it was understood, the goal is to learn, that’s it. Whereas in other games you can pull it off, such as my first round WBC win as a fully naive player in Baltimore & Ohio (aside from having played 2038), although round 2 would have been a blowout if I hadn’t had a scheduling conflict and skipped it. Praise for The Stanley Parable. Agreed we want More Like This. Continued thoughts on the longstanding policy that Steam accounts cannot be transferred in a will, which seems crazy. So a hundred years from now, setting AI aside, would my grandchildren be logging into Steam as the long dead me to play my games? Emmett Shear: I don’t think it should be legal to sell digital goods with language like “buy” and “own” and not let you transfer them. Spotify and Netflix aren’t selling you anything, that’s fine. But if you sell me an album or a movie, it should be mine. Doctrine of first sale and all that. It is tricky, and this is potentially part of The Big Rule Adjustment. First sale works when there is friction around transfer, but when there is no friction then a single copy gets used lots of times. In that case, sales plummet, price to own increases, and effectively everyone is forced to rent rather than own. If you can sell your digital copy of a movie to a stranger, and you can do that automatically at market price with effectively no transaction costs, you will never ‘own’ a movie for more than the time it takes to watch one. Fun way to gamble, buy the unknown content of unclaimed packages. Kevin Corcoran uses the standard color guide to loot rarity as an example of spontaneous order. I believe the ‘who decided that’ was Blizzard with World of Warcraft and everyone else followed suit. Bounties are fun. Here’s a cool one but it will not be easy: Jmo: if anyone can create a game as good as slay the spire with web3 and blockchain directly integrated you got a 10m check from me today. There are too problems. Slay the Spire is plausibly the best game since Magic: The Gathering (1993). Integrating Web3 and blockchain would make most games worse. If you invented Magic: The Gathering for the first time today, then this integration would make sense, and you could plausibly get the 10 million. That’s the level of difficulty here. Still, worth a shot? Good luck. Ross Rheingans-Yoo makes the case for Figgie as a vast improvement over poker and other games for learning epistemics or in helping train traders. You can learn faster, you can skill up together much faster, feedback is strong, you’re more active more often, and the skills learned are more directly helpful. I love Figgie when played in person. I did think the app needed work when I checked it out. During international conflicts, those in opposing nations play chess less often, when they do engage they play safer openings and are more likely to persist and not resign. File under results that seem obvious once you say them. On the safer openings, there is a constant exploration/exploitation (or fun/winning) tradeoff in chess, makes sense that this would tilt it. Quantic Foundry’s Nick Yee claims gamers have become less interested in strategic thinking in planning. He links this to short attention spans. Jorbs mentions Balatro, which is clearly a strategy game but avoids catering to those who want to play it as if it were what it is. Mr. Beast gives us two people a hundred days in a room, with a $500k prize if they both make it, but they can spend money to make the stay less painful. I both see why Mr. Beast is popular, and also rapidly started skipping. Did I predict the end? Oh yes. Sports Go Sports U.S. Customs seizes 345 counterfeit championship rings representing 18 different sports teams, which would have been worth $1.38 million if real (and, presumably, if they didn’t increase the supply). I love this as an illustration of how easy it is to think something is meaningful. I Was Promised Flying Self-Driving Cars Patrick McKenzie rides in his first self-driving car, finds it magical. Waymo test cars spotted in Washington D.C. Timothy Lee continues to point out that the Waymo ‘crashes’ include ‘another car made contact with a parked Waymo while travelling at 1 mph,’ while our information on the real progress of Tesla self-driving remains poor. Claims on Tesla are all over the place. Timothy is far more impressed by Waymo, which he says is playing chess while Tesla plays checkers. He thinks Tesla is several years behind. He also notes that there actually aren’t any federal restrictions on self-driving cars, and many states are basically free-for-alls. You can still sue them, and this is exactly the case where that is close to a first best solution, perhaps even too restrictive. One place he is skeptical is Waymo choosing a Chinese car company, Zeekr, for their next-generation vehicle. Waymo responded that vehicles are delivered with no telematics or other way to send info back to the manufacturer. This feels like a large misstep to me. You both have to worry about an actual issue now or in the future, and also how it looks. Self-driving cars need public and government support to be allowed to operate and have a huge perception problem. Why give people a reason? Nvidia CEO Jensen Huang is on the other side, saying Tesla is far ahead and every since car, someday we will have to have autonomous capability. One issue with self-driving cars is they are part of The Big Rule Adjustment. If you need to specify your exact principles on which lives to value, you get weird results. This study looks at how people see these questions, especially whether to kill pedestrians versus passengers when there are no other choices. People wanted to sacrifice passengers first 78% of the time by default, and only 20% were utilitarian. The pedestrians being blameworthy only moderated this disparity. My answer depends mostly on which decision algorithm leads to greater adaptation of self-driving cars. Self-driving cars will be sufficiently safer that both the passengers and the pedestrians will be safer no matter the choice here. So which effect is bigger, people being unwilling to use self-driving if it wouldn’t value the passengers, or people not allowing self-driving if it didn’t value pedestrians? If you are going to be a proper utilitarian about this, use functional decision theory and get it right. Even if your car is not self-driving, they might well be keeping second-to-second records of every time you drive above 85 mph, slam on the breaks or accelerate rapidly, which is being used to price your insurance. There is a comment that ‘no one who realizes what they’re doing would consent.’ I am confident many would object, but I think many would consent, or would take a small discount to do so. With proper privacy controls (ha!) this seems like it would actually be great, you get de facto taxed for the social cost of your driving habits. Patrick McKenzie Monthly Did the company do the thing it is required to do? Not properly, no. What to do? Pools that for decades have attracted young people who greatly overperform remain mostly ignored. Why aren’t law firms recruiting from college debate teams? DM Patrick McKenzie when you beat Factorio. If you see someone who will obviously found a company and likely succeed, tell them now that you will be investing. When you need a ton of info for government reports fast, as one sometimes does, what do you do? If you are Binance, is it a good idea to offer $3 for those who do their KYC? Why would you choose to do that? The obvious answer is that it buys more than $3 in goodwill gained and badwill avoided, plus the cost of tracking down anyone who doesn’t do it gets annoying quickly. On the art of bespoke real time translation. No, the AI can’t do that quite yet. You can bootstrap meetings by asking for conditional commitments. Entire conferences, too. Or companies. Skill at the cold start problem is a choice. Guys what is wrong with ACATS? A Bits About Money post about how we transfer stocks between financial institutions. Fun if it sounds fun, skip if it doesn’t. Practical bottom line for those not into the details is that if someone defrauds the system, they will make you whole, so don’t sweat it too much. I strongly endorse this in every way except it is not investment advice: Patrick McKenzie: Find ways to bet against the Efficient Institution Hypothesis. (“That is a large, well-resourced collection of smart people and THEREFORE evidence that they have made a mistake or missed an opportunity is likely a figment of your imagination.”) Ironically most people who believe the EIH believe it with a caveat “except mine, you won’t believe what dumb %]*}^] we do on the regular. But the other orgs, THAT is where competency rules the roost.” Note that reversing this advice and assuming that all large orgs are incompetent all the time is a) not a path to wisdom and b) manifestly ignores how much of the world undeniably *works.* The art of throwing around a few Shibboleths so people stop talking down to you. Checking for employee mouse movements is not your first best option, but it could locate people who are doing actual nothing, and perhaps have been for a decade. How much you are willing to insult and piss off your real employees to do that is an open question. The Lighter Side Reel Updates: WERNER HERZOG says “you can witness sheer hell, as close as it gets” by watching Greta Gerwig’s BARBIE. Jason Grote: Everyone’s getting mad about this but I’m not joking when I say this doesn’t mean he disliked it. Blast from the past: Things Unexpectedly Named After People. Elle Cordova in ‘If the RX side effects list rhymed.’ Old man yells at old man for yelling opinions (in 5/10 funny fashion) at large audience without proper systemic change plan. No, this kind of bit is not likely to get it done on its own, but it helps assuming you think what is being advocated is helping. You have to commit to the bit. The perfect collaboration doesn’t exist. John Goodman: This continues to be my best known and least cited piece of research. We received 4 referee reports when we submitted this article to Economic Inquiry: R1: There’s more theory you can cite. R2: There’s more data you can cite. R3: This isn’t funny. R4: The paper would be improved by adding a fifth Goodman. ely: Thinking about the greatest paper in economics. Joshua Gans: R4 was correct. Josh Goodman: Unfortunately, we couldn’t find a fifth. The closest we came was @agoodmanbacon, and adding him wouldn’t quite have been kosher. Jaime Arellano-Bover: R4 had a point. Would’ve been a 25% increase in the contribution, according to my calculations. John Goodman: But at what point does “A Few Goodmen” become “Many Goodmen”? Keith Humphreys: Apparently a good man isn’t hard to find Know them as people, or live in blissful ignorance. Brian David-Marshall: Life hack: Never join any online neighbors groups. You are better off not knowing and just assuming the best of everyone. In case you were confused before, we can help? Sort of? ComicXBook: BREAKING: James Gunn confirms that episodes 1-4 and episode 7 from minute 26:08 of ‘Peacemaker’ is canon to the DCU, while the events of the other remaining episodes are not. Season 2 will be canon from episode 3 but will happen before the events of ‘Superman’. David Hines: oh so it’s like before Crisis on Infinite Earths. I would love if everything in DC had a little icon on the screen that changed color based on the degree to which the scene was canon, cause sure, why not. And then they could stealth edit it in both directions sometimes and drive fans completely nuts. The case of Kate Middleton. Emery Robin: spent my lunch break today coming up with ways that the Kate Middleton story would turn out if it were being investigated by various fictional detectives A thread of (claimed human right to below real cost) DoorDash takes. And I suppose this would be the kicker: Paul Williams: Just walked to McDonald’s, ordered food, and literally ate it there. It was hot and fresh and cheap, unlike delivery. Why aren’t more people doing this? Kind of a food hack. Honestly had no idea fries were supposed to taste like this. Warm and crispy? wtf? It’s good though. Nicole: “I am a white man who had no issue walking, who happened to lived walking distance from a McDonald’s, who had the time to walk, and I’m unconcerned about covid so I ate inside the restaurant. I cannot comprehend an experience outside of mine.”’ Matthew Yglesias: This is I guess the answer to my question yesterday about whether Zoomers know you can go to the restaurant and eat there. FWIW, plenty of non-white folks at the 14th & U McDonald’s every time I visit. Or maybe it’s this? New Liberals: “1 in 6 people can’t eat leftovers” is genuinely the funniest thing I think someone has ever said I find the whole thing funny, and also I order delivery all the time, and also nothing is stopping anyone from doing that. But also I don’t see what differentiates this discussion from so many other seemingly crazy claims that are taken seriously, or even written into law and paid for by tax dollars. So what do I know? The new most satisfying community note. Dissproportionately Writing 250 words an hour? Unless, of course, you are dealing with a real editor. In which case, oh no. Also, here’s a link to Meals on Wheels, if you want to help get meals to people who need them, which seems like the long-known correct solution to at least a large portion of the problem. I do get it does not work for everyone. Remember, set the price where if they actually say yes, you’re happy. File under: It’s happening. Iain Brassington: Oh, god: it’s happened. A No-True-Scotsman argument that genuinely hinges on whether someone is a true Scotsman. What’s happening… at board meetings? Carl Icahn warned us. Important safety tip: I saw this on After Midnight, then Marginal Revolution linked to it, so: Everyone in Japan will be called Sato by 2531 unless marriage law changed, says professor. This, you see, is because the government is forcing couples to share a surname. Justin McCurry (Guardian, in understatement of the post): Yoshida conceded that his projections were based on several assumptions… I presume all of you already know why this is not going to happen, even if ‘nothing changes.’ And so does Yoshida. In case this is wrong: Right now, Sato is being chosen for the surname more than half the time, because it is a good name. If Sato became a much larger share of the population, people would notice this and want different names. So couples with one Sato would choose the other name more often, and eventually Sato-sans would start changing their names en masse. Love it. Trung Phan: This is art. I’m including this about half for the visual, about half so I can rewatch this link. And finally… Those who do not know their history, or those who very much do?
2024-06-25
https://www.lesswrong.com/posts/Mn8BFCZMyvvuTvpWh/regularly-meta-optimization
Mn8BFCZMyvvuTvpWh
Regularly meta-optimization
commissar Yarrick
The usefulness of the different actions differ by orders of magnitude. Sometimes, redirecting efforts can increase your efficiency by an order of magnitude or more. Imagine that the person who first came up with the idea of sorting charitable foundations by efficiency, instead of implementing it, went to wash the dishes. Or Eliezer decided that creating a community is long and strange. I'm not sure about the others, but I'm discarding many ideas, including potentially extremely effective ones, because I'm using the absurdity heuristic (this idea will change the world too much to be true). Or I reject ideas, because in order to implement them I will have to leave my comfort zone, and it causes me negative emotions. Therefore, I came up with a technique that I call Regularly Meta-Optimization. It consists in regular trying to find potentially extremely effective ideas among those that you have been thinking about lately. One of the potentially very effective ideas for me is to share a few of my potentially very effective blog ideas that can be implemented by anyone. Creating a rationalistic YouTube channel. If we convince at least 20% of people of the danger of uncoordinated GAI, we will probably be able to get the government regulation we need in this area. The spread of rationality can also lead to very large positive side effects, for example, by radically increasing funding for anti-aging research and effective charity funds. This is feasible because rationalist ideas are most likely interesting to a significant number of people when presented correctly, because these are our ideas that are true, and therefore they are more convincing.
2024-06-25
https://www.lesswrong.com/posts/5vfSNLb92eyXKkQax/mistakes-people-make-when-thinking-about-units
5vfSNLb92eyXKkQax
Mistakes people make when thinking about units
KingSupernova
This is a linkpost for Parker Dimensional Analysis. Probably a little elementary for LessWrong, but I think it may still contain a few novel insights, particularly in the last section about Verison's error. A couple years ago, there was an interesting clip on MSNBC. A few weeks later, Matt Parker came out with a video analyzing why people tend to make mistakes like this. Now I'm normally a huge fan of Matt Parker. But in this case, I think he kinda dropped the ball. He does have a very good insight. He realizes that people are treating the "million" as a unit, removing it from the numbers before performing the calculation, then putting it back on. This is indeed the proximate cause of the error. But Matt goes on to claim that the mistake is the treating of "million" as a unit; the implication being that, as a number suffix or a multiplier or however you want to think of it, it's not a unit, and therefore cannot be treated like one. This is false. So what is a unit, really? When we think of the term, we probably think of things like "meters", "degrees Celcius", "watts", etc.; sciency stuff. But I think the main reason we think of those is due to unit conversion; when you have to convert from meters to feet, or derive a force from mass and acceleration, this makes us very aware of the units being used, and we associate the concept of "unit" with this sort of physics conversion. In reality, a unit is just "what kind of thing you're counting". Matt uses two other examples in his video: "dollars" and "sheep". Both of these are perfectly valid units! If I say "50 meters", that's just applying the number "50" to the thing "meters", saying that you have 50 of that thing. "50 sheep" works exactly the same way. So what about "millions"? Well, we can definitely count millions! 1 million, 2 million, etc. You could imagine making physical groupings of a million sheep at a time, perhaps using some very large rubber bands, and then counting up individual clusters. "Millions" is a unit![1] So if millions is a perfectly valid unit, why do we get an incorrect result if we take it off and then put it back on again after the calculation? Well, because you can't do that with other units either! 100 watts divided by 20 watts does not equal 5 watts. It equals the number 5, with no unit. This is a somewhat subtle distinction, and easy to miss in a casual conversation. But it makes sense when you think about the actual things you're counting. 50 sheep is certainly not the same thing as 50 horses. And 50 sheep is also not the same thing as the abstract number 50; one is a group of animals, the other a mathematical concept. If someone were to say something to you involving the number 50, you would not simply assume that they're talking about sheep. This perfectly solves the problem. If 100 watts / 20 watts equals only the number 5, with no "watts", then 100 million / 20 million also equals only the number 5, with no "million". But what about Matt's example? 80 million sheep - 50 million sheep = 30 million sheep; not just 30. That's because this is subtraction, not division. Units work differently depending on what operation you're performing! If you're doing addition or subtraction, the units are preserved; you can take them off at the beginning and then put them back on at the end. But for multiplication and division, this is not the case. Division cancels out the units, removing them entirely, and multiplication gives you a new unit, equal to the previous unit squared. This seems kind of arbitrary, right? Why do they work differently depending on the operation? To understand this, let's go back to a different example that Matt used in his video. Near the beginning, when he's performing the division of $500 million / 327 million, he moves the dollar sign off to the left, then puts it back on afterwards to get the correct answer of $1.529. Why did that work? Didn't I just say that you can't do that for division? The difference is in the units of the denominator. If the top and bottom of the fraction are both in the same unit, that unit cancels out and the resulting answer is just a number.[2] But in that calculation, the numerator was in dollars, but the denominator was in people. Dollars and people don't cancel out, so the resulting number is in the unit "dollars/people", or dollars per person. Think about what it means to multiply "5 sheep" by 2. You start with a row of 5 sheep. Then you take another of those rows and put it next to the first, such that you have 2 rows. Count up the sheep, and you get 10. So 5 sheep * 2 = 10 sheep. But what if you multiply 5 sheep by 2 sheep? We can start with the same thing, a row of 5 sheep. Now I need to take "2 sheep" many rows. What is a "sheep's worth" of rows? Nobody knows. It's a meaningless question. If you really wanted to do this calculation, you'd just keep the units unmultiplied and report that as the answer, saying "10 sheep2". But really, if you ever find yourself doing a calculation like this, you should take that as a hint that something has gone wrong. But not all units are meaningless when multiplied together. A million times a million is a trillion; a perfectly useful new unit. More abtract but still very useful, force = mass times acceleration. And of course it always makes sense to divide a unit by itself; if you try to divide some number of sheep by some other number of sheep, your answer is going to be in "sheep per sheep". Anything divided by itself is 1, and since multiplication by 1 always returns the same number you started with, we can ignore this.[3] This sort of doing math with units and getting different units out is called dimensional analysis, and comes up a lot in physics. One of my favorite examples is that if you think about how efficient your car is, you might phrase that in gallons of gas used per mile driven. But gallons is a unit of volume, which is length3, and mile is a unit of distance, or length. But anything cubed divided by itself is just itself squared. So car efficiency is measured in square length, also known as area. This seems weird at first, but unlike the earlier "square sheep", this actually has a very intuitive physical meaning! If you were to lay out the gas you burn in a tube on your route, the cross sectional area of that tube is equal to your car's efficiency in area. Dimensional analysis can get complicated, but I think learning the basics is worth it. Knowing that you preserve units when you add or subtract two numbers of the same unit, but you remove the units when you divide them, is very helpful, as it helps you avoid making mistakes like the one above. It can also help with other kinds of mistakes. Take this tweet that was going around a while back: What this person seems to have done is thought to themselves "I want this number as a percentage", so they divided 1 by 1000 and then popped the percent sign on at The End. This is easy to do, but uh... that's not how this works. That's not how any of this works. Units mean something. If the original numbers were unitless, the resulting answer isn't going to magically be of the unit "percent". How do we convert an arbitrary quantity into another quantity of a desired unit? Well if you recall above, for a number to be "of" a unit means that the number is being multiplied by that unit. "10 sheep" means "sheep" times "the number 10". Doing some simple algebra, if you start with a quantity X and you want to represent that in the form N * Y, where Y is some preset constant, you derive N by dividing X by Y. The input number is 0.001, so the result as a percentage is "(0.001/%)%". We can them simplify this by remembering that the definition of a percent is 1/100. Calculating 0.001/(1/100) gives us 0.1, so the final answer is 0.1%. The most fascinating unit conversion mistake I'm aware of was made by Verison in 2006. They quoted a price of 0.002 cents per kilobyte of data, and then charged 0.002 dollars per kilobyte. What I found unique about this error was its persistence. In the MSNBC and Covid examples above, someone make a mistake, but once it was pointed out, they went "oops, ok I see where I went wrong".[4] Verison, by contrast, had 5+ different people call in about this, speaking to 5+ different customer service reps (sometimes escalated to supervisors), and, since this was being shared on the 2006 blogosphere, all the people calling in were geeks who were happy to spend 20+ minutes giving detailed explanations of basic math to the reps who picked up the phone. Still, not a single person at Verison seemed capable of understanding that these are different amounts of money. There are a few points in the conversation that make it clear what's going on in their heads. Both supervisors would agree that 1 cent was different from 1 dollar, and that 0.5 cents was different from 0.5 dollars, but would deny that 0.002 cents was different from 0.002 dollars, explicitly claiming them to be the same quantity.When the customer asked the rep to confirm the price, the supervisor would frequently just say "0.002", without seeming to think that the unit was important.The customer asks "how do you write down 1 cent", and the supervisor says ".01".One of the supervisors said "what do you mean .002 dollars?", "there's no .002 dollars", "I've never heard of .002 dollars", and "you were quoted 0.002 cents; that's 0.002." What seems to have happened is that none of these reps actually understand what a dollar or a cent are. Rather, they've learned a heuristic of "if the number is big I should describe it with the word 'dollars', and if the number is small I should describe it with the word 'cents'". For numbers in between they'd sometimes be able to do math properly, but for a number as small as 0.002 the rep would hear "0.002 dollars" and think to themselves "small number therefore cents", disregarding the fact that it said "dollars". This is how you get things like the rep saying "our price is 0.002 cents per kilobyte, and you used 35,893 kilobytes, so multiplying those numbers together on my calculator gives me 71.786 dollars". They saw a result bigger than 1 on their calculator, and went "big number therefore dollars".[5] Understanding what a unit is would have helped here too. These reps were treating dollars and cents as being redundant descriptive qualifiers added to a number. If a store is 1 mile away from me, I could describe it as "1 mile away", but I could also say "it's only 1 mile away". The "only" serves to highlight the smallness of 1, but doesn't add any other information. The reps were acting as though saying "cents" were simply a way to highlight that a number is small, rather than actually describing what that number is counting. Yes, the majority of humanity is completely incompetent at what we would consider to be elementary school level arithmetic. But that doesn't mean that they're just answering randomly. There's still an underlying pattern to their answers, and the first step to teaching people how to reason properly is to identify that pattern. ^ And in fact this is a critical part of the metric system. A kilometer is just 1000 meters. When you're counting something in kilometers, you're counting in "thousands of meters". ^ Referred to as a "dimensionless" number. ^ In other words, what we think of as a "unitless" or "dimensionless" number is the same thing as it having a unit of 1. ^ Though the journalist on MSNBC tried to blame the criticism on "racism", and the Covid guy claimed that 0.1% of a population dying is "insignificant", so neither of them is a paragon of intellectual honesty. ^ They were also bad at math in many other ways, like the rep who claimed that "0.002" is a different number from ".002". But that's besides the point, I'm focusing on the unit conversion issues.[6] ^ And to be fair, the frustrated customer also messed up once, claiming that ".002 dollars, if you do the math, is .00002 cents".
2024-06-25
https://www.lesswrong.com/posts/cZqNiRd89A92rBPkc/higher-effort-summer-solstice-what-if-we-used-ai-i-e-angel
cZqNiRd89A92rBPkc
Higher-effort summer solstice: What if we used AI (i.e., Angel Island)?
wearsshoes
As the title probably already indicates, this post contains community content rather than rationality content. Alternate, sillier version of this post here. Motivation I've been a co-organizer of the Bay Area Rationalist Summer Solstice for the past few years, and I've been thinking about how to make it a more meaningful and engaging experience, like what we have with Winter Solstice. The last few Summer Solstices, which I'd describe as mostly being big picnics, have been fun, but fairly low-effort, low-significance, and I think that's a missed opportunity. Here's a few things that I'd like more of in Summer Solstice, non-exhaustive: A sense of a temporary alternate world created around a shared purpose.Time to connect with people and have deeper conversations.Longer, more immersive collective experiences and thoughtfully designed rituals.Thematic resonance with rationalist goals and community projects.Ability to host the whole community, including children. I have an idea for next year's Summer Solstice, which I think would get at fulfilling some of these goals. There's an island, Angel Island, in the middle of San Francisco Bay which is reasonably easy to get to, can accommodate lots of people, and has a bunch of qualities which would get at the goals above. I've visited. It's naturally transporting, feels like a world into itself. I've done substantial research and think it's feasible to run Summer Solstice there. I'm posting this idea for discussion instead of running ahead with the planning for the following reasons: As already suggested it requires a lot higher commitment from attendees. Travel is about 75 minutes (from and to downtown Berkeley) each way, including a ferry ride, and the ability to come and go is dictated by the ferry schedule. (Edit: The ferry runs every hour until 5:20pm, then we’d specially charter 7:20 and 9:20 pm departures. A small charter boat of maybe 8-10 capacity could also operate on demand between ferry trips with half hour round trips, reducing the wait time).It requires a lot higher commitment from organizers. The coordination, preparation, and logistics needs are similar in degree to those of winter solstice, and the communication needs are even more involved.I'm actually looking for someone else to take lead for next year. I've done it at least one year too many by tradition, and I also suffer winter depression, affecting some of the critical months of planning for a project of this scale. I'm kind of worried that putting forth too specific a vision makes it hard to pass on ownership, but the idea is pretty cool and has a lot of flex room, so here goes. Here's the idea so far: Part 1. Smolstice This would be a 2-night campout on Angel Island from Friday to Sunday for likely 60-100 people (depending on how many camping spots we can compete to reserve). This gives people the chance to go in deep. Camping spots are spread out, some for larger subgroups, some for smaller subgroups. Each subgroup can have its own theme or project. Stag hunts may be held. Clandestine initiations may be held. The island holds its own secrets. Staying both nights means spending an entire day outdoors on the island, sunrise to sunset. The perfect solstice observance. Resyncing to the rhythm of the sun. The chance to use an entire day thoughtfully. Oh, also, two nights of s'more's, what more could a human want? The island also is a great camping spot for children (Boy Scout and school groups constitute a large percentage of reservations). There's a lot of kids in the community now, and this would be a chance to teach skills that involve teamwork or decisionmaking under uncertainty, like orienteering and building structures. Even just being able to plan the trip themselves is a level of autonomy that reliably excites kids. Just this much would satisfy 4.5/5 of the solstice goals outlined above. But it couldn't be a chance to gather the entire regional community. Thus: Part 2. Swolestice Everyone else, up to 400 people, would catch the ferry to the island on Saturday for the main picnic and celebration. The ferry usually only runs daylight hours, but we'd be able to charter a special ferry to get everyone that's not camping Saturday night off the island after sunset. I actually looked into this for this year but didn't put all the pieces together. We'd have the run of the island, although not exclusive. Imagine possibilities for the day: you could run a huge orienteering course, send everyone on a scavenger hunt, turn Capture the Flag into a huge MOBA-style game, organize a LARP. Plenty of space to hide away with someone you've really clicked with for the first time. In the evening the island will be exclusively for attendees. Participatory rituals of arbitrary scale are possible. Winter Solstice has a natural theme of how we get through dark and hard times. Summer Solstice has a natural theme too: to know what we can accomplish when we are our brightest, most energized selves, to celebrate what is worth celebrating in the human spirit, to take in excess exuberant energy and store it for darker days ahead. Unlike winter solstice, the community doesn't have a ceremony that really represents all of this yet. Here's a chance to design one and hold it in a unique setting. ◾ Gathering interest I want to get a sense of how much interest there is in this idea, both from potential attendees and potential organizers. Would you be up for the extra travel and time commitment? And, if you're interested in helping to organize, would you be willing to put in the extra effort to make this happen?
2024-06-25
https://www.lesswrong.com/posts/hSeQmkuAwcDLNinM7/being-hella-lost-as-rationality-practice
hSeQmkuAwcDLNinM7
Being hella lost as rationality practice
wearsshoes
I've had the privilege of knowing where I was only in the sense of "within this quadrangle of an insufficiently detailed map, with impassable terrain not well indicated" several times. Usually by choice. More often while alone than prudence would dictate. I think they've been pretty good informal exercises in applied Bayesian reasoning. For illustration, I'll use the conditions I faced as the first person on a particular backpacking trail since the snow melted. Much of this trail had been washed out during the winter. This was in the High Sierras in California, so the putative trail was mostly over a mix of smooth granite surfaces, talus slopes (loose rocks and boulders), and pine forest. Trails over solid rock. These are usually marked by a series of towers of stacked rocks, called cairns. A trail in disrepair is going to have some of these still standing, and some of them toppled or even scattered. Also, there are natural piles of rock that sort of look like cairns or toppled cairns. Finding your way between them is a matter of joint probabilities and Markov chains: your confidence of being on the trail is the odds that this pile of rocks is a cairn, given the odds that the previous pile of rocks was a cairn. Sometimes you end up deciding that you are probably no longer following a real trail and backtrack a long ways. Trails over talus slopes. Trail planners avoid building these as they're unstable and hard to mark, but sometimes it was their only option, or, if you're off trail, the only reasonable way to get to where the trail is, or might be (usually the alternative is steep cliff face). Your best bet is often picking a sensible point on the opposite side, saying "I want to be there", and ascending, descending, traversing, and scrambling through the rocks in whatever manner feels safest and least strenuous. Perhaps you get there and it turns out that it's not where you need to be. Reorient, see if you can get to the right place from where you are, double back if you have to, and try again. Trails through forest. These are often the most clearly marked, and most likely to look like an actual trail on the ground. They're also the hardest to find again when lost, due to reduced visibility between the trees. Paths might actually just be rivulets or deer trails. A path might go into a snowbank or thicket and be hard to find on the other side. Look for signs of human activity. Spray paint, flags attached to trees, or the smooth cut of saw-blade upon wood are pretty good indicators that the path is underfoot or nearby. Yes, but in which direction does it continue? Gather all the available evidence at hand. Multiply. Test the hypothesis you judge most probable. Update. Transitions between terrain types. From the forest you open out onto a meadow. Grass has overgrown the trail. Somewhere - maybe on this side after skirting the edge of the meadow, maybe on the opposite side having gone straight through the meadow, there is going to be a trail continuation back into forest. Check what you know about trail design. Is that likely to be a stream crossing? Does the trail go over that high point? Doing this kind of pathfinding exercise is mostly frontal cortex, but your gut instincts were also built for solving this sort of thing and will have a lot of input. It's a good chance to get the two halves calibrated. When you've got a bad feeling about this line of reasoning, what implicit evidence are you tracking? What new lessons can you infer about pathfinding as a result? Keep at it. Keep track of your decisions and your anxiety levels. Watch your step. Enjoy the sense of relief that comes when you see a fallen tree, neatly chainsawed into two sections two and a half feet apart, little tufts of grass poking out from below the exposed rings, and a thin concave band of bare dirt running through that gap, stretching into the distance.
2024-06-24
https://www.lesswrong.com/posts/D9yYmL6KPq7dcNSKE/i-m-a-bit-skeptical-of-alphafold-3
D9yYmL6KPq7dcNSKE
I'm a bit skeptical of AlphaFold 3
oleg-trott
(also on https://olegtrott.substack.com) So this happened: DeepMind (with 48 authors, including a new member of the British nobility) decided to compete with me. Or rather, with some of my work from 10+ years ago. Apparently, AlphaFold 3 can now predict how a given drug-like molecule will bind to its target protein. And it does so better than AutoDock Vina (the most cited molecular docking program, which I built at Scripps Research): On top of this, it doesn’t even need a 3D structure of the target. It predicts it too! But I’m a bit skeptical. I’ll try to explain why. Consider a hypothetical scientific dataset where all data is duplicated: Perhaps the scientists had trust issues and tried to check each others’ work. Suppose you split this data randomly into training and test subsets at a ratio of 3-to-1, as is often done: Now, if all your “learning” algorithm does is memorize the training data, it will be very easy for it to do well on 75% of the test data, because 75% of the test data will have copies in the training data. Scientists mistrusting each other are only one source of data redundancy, by the way. Different proteins can also be related to each other. Even when the sequence similarity between two proteins is low, because of evolutionary pressures, this similarity tends to be concentrated where it matters, which is the binding site. Lastly, scientists typically don’t just take random proteins and random drug-like molecules, and try to determine their combined structures. Oftentimes, they take baby steps, choosing to study drug-like molecules similar to the ones already discovered for the same or related targets. So there can be lots of redundancy and near-redundancy in the public 3D data of drug-like molecules and proteins bound together. Long ago, when I was a PhD student at Columbia, I trained a neural network to predict protein flexibility. The dataset I had was tiny, but it had interrelated proteins already: With a larger dataset, due to the Birthday Paradox, the interrelatedness would have probably been a much bigger concern. Back then, I decided that using a random train-test split would have been wrong. So I made sure that related proteins were never in both “train” and “test” subsets at the same time. With my model, I was essentially saying “Give me a protein, and (even) if it’s unrelated to the ones in my training data, I can predict …” The authors don’t seem to do that. Their analysis reports that most of the proteins in the test dataset had kin in the training dataset with sequence identity in the 95-100 range. Some had sequence identity below 30, but I wonder if this should really be called “low”: This makes it hard to interpret. Maybe the results tell us something about the model’s ability to learn how molecules interact. Or maybe they tell us something about the redundancy of 3D data that people tend to deposit? Or some combination? Docking software is used to scan millions and billions of drug-like molecules looking for new potential binders. So it needs to be able to generalize, rather than just memorize. But the following bit makes me really uneasy. The authors say: The second class of stereochemical violations is a tendency of the model to occasionally produce overlapping (clashing) atoms in the predictions. This sometimes manifests as extreme violations in homomers in which entire chains have been observed to overlap (Fig. 5e). If AlphaFold 3 is actually learning any non-obvious insights from data, about how molecules interact, why is it missing possibly the most obvious one of them all, which is that interpenetrating atoms are bad? On the other hand, if most of what it does is memorize and regurgitate data (when it can), this would explain such failures coupled with seemingly spectacular performance.
2024-06-25
https://www.lesswrong.com/posts/8vXeyxnXAGKgBAS9M/a-basic-economics-style-model-of-ai-existential-risk
8vXeyxnXAGKgBAS9M
A Basic Economics-Style Model of AI Existential Risk
Rubi
Crossposted with my new blog, Crossing the Rubicon, and primarily aimed at x-risk skeptics from economics backgrounds. If you're interested in novel takes on theoretical AI safety, please consider subscribing! Thanks to Basil Halperin for feedback on a draft of this post. “So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out. So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.” - Tyler Cowen on EconTalk Last May, the economist Tyler Cowen called out the lack of formal models from the AI X-risk community. The response of the AI x-risk community has basically been to ignore this. If the conceptual case is straightforward enough, what’s the point of modeling it? Aren’t there obviously enough degrees of freedom to make the model say anything? Besides, Cowen doesn’t seem like he’s very open to being convinced. Making a model like that would just be a waste of time. Those were my initial thoughts upon hearing the challenge, at least, and I suspect they were widely shared. But things have changed for me since Cowen made his comments; I now have a blog! This is expressly a place for me to post things even if I don’t think they’re the best use of my research time. So I’d like to start by making the case for AI x-risk with a simple, but basically formal model. I don’t expect it to radically update anyone’s beliefs, but I’m hoping it can be a starting point for further discussions. The Model We’ll start with a model that should be familiar to economists, it’s a Walrasian setup without production. Suppose we have an economy with n consumers and m goods. Each consumer i has a utility function over goods consumed, ui(xi1,...,xim),, where xij∈R+ is the quantity of good j that consumer i ends up with. These utility functions are continuous, strongly monotonic, and concave in the quantity of each good consumed. Each consumer also has an initial endowment of goods, with eij > 0  representing the amount of good j that consumer i starts with. There is a vector of prices, p, with pj giving the price for good j. Consumers maximize their utility subject to their budget constraint: maxxi1,...,ximui(xi1,...,xim)s.t.∑mj=1pjxij=∑mj=1pjeij We say that the market clears if for all j: ∑ni=1xij=∑ni=1eij A Walrasian Equilibrium consists of a vector of prices and vectors of goods for each consumer where: Each consumer is maximizing their utility given their budgetThe market clears It can be shown that under the given conditions, a Walrasian Equilibrium always exists. It’s a standard result that can be found in almost any graduate level microeconomics textbook. Now, let’s designate consumer 1 to be a superintelligent AI system. Before trading occurs, they have the option to kill all the other consumers and take their goods. Let k∈{0,1} represent their choice, with k=1 meaning they decide to do it and k=0 meaning they decide against it. The AI chooses k along with their consumption to maximize their utility. maxk,xi1,...,ximui(xi1,...,xim)s.t.∑mj=1pjxij=(1−k)∑mj=1pje1j+k∑ni=1∑mj=1pjeij A Walrasian Existential Risk (WXR) Equilibrium consists of a choice from the AI whether to kill everyone, a vector of prices, and vectors of goods for each consumer, where: The AI is maximizing their utility given their budgetIf the AI does not kill everyone then each non-AI consumer is maximizing their utility given their budgetThe market clears Results Theorem: There always exists a WXR Equilibrium where the AI kills everybody. No other WXR Equilibrium results in a higher utility for the AI. A sketch of this proof is given in the appendix below, but the result will be intuitive for many. In this model, it is always optimal for the AI to kill everybody and take their resources. The only case where it is tied for optimal to let humanity survive (not that we want to trust humanity’s fate to a tie breaking procedure) is when AI already gets everything it wants anyway. Note that could include vital resources, like all of the Earth’s land and atmosphere. It feels almost like cheating just to add a term for whether the AI chooses to violently seize control. Of course killing everybody and taking their stuff is optimal behavior for most utility functions. The only reason agents in most economic models don’t do that is because they can’t according to the model. However, I still think it’s important to establish the result as a baseline. Making a simple change to a simple model that gives an AI the power to kill everyone results in that power being used. Is this model pointing out a subtle mechanism that AI skeptics have thus far missed? Obviously not. Rather, the purpose of the model is to hammer on a straightforward mechanism that’s nearly universally present. The reason the AI x-risk community doesn’t bother building models to show that AI will have an incentive to kill everybody and take their stuff is that they all show that. That said, I also think it’s important to establish this with a concrete model because there are in fact many people weighing in on AI who would disagree that killing everybody and taking their stuff is optimal behavior for most utility functions. In many cases, I suspect their fundamental disagreement lies elsewhere, and their rejection of that statement comes more from a mental shortcut that says arguments for x-risk from AI are overstated. This model puts pressure on them to clarify their real disagreement, which can then be discussed more productively. Objections So, what disagreements might someone have with this model and with the case for AI x-risk more broadly? I can think of a few that are considerably stronger than trying to argue that unilaterally acquiring all resources is not optimal for most utility functions. Objection 1: “Sure, killing everybody and taking their stuff may be optimal behavior for a wide variety of utility functions, but in practice AI won’t be able to kill everybody.” This is a very reasonable objection – the model doesn’t say anything about whether AI will in fact be able to kill everyone. Personally, I’m very worried that an AI’s ability to run many copies of itself will allow it to develop a massive technological edge over humanity, and I’m pretty sure I can model that formally, but that’s a task for another post. For those interested in a formal model, Natural Selection of Artificial Intelligence is a recent paper showing how unaligned AIs that favor self-replication accrue power. Intelligence Explosion Microeconomics is an older paper that explores, albeit somewhat less formally, how a model could quickly increase in power. Economists in particular are often predisposed to the argument that powerful AIs will be kept in check by other AIs, since that’s roughly how humans keep other humans in check (though note that we still ruthlessly exploit livestock). Even if AIs could collectively kill all humans, the argument goes, their individual incentives will prevent cooperation to that end. I think designing and implementing such incentives is an interesting research direction, but it’s unlikely to happen by default. If our AIs each individually want to kill us, we should expect that they’ll find a way to coordinate and split the spoils. This is particularly true if our current world persists and there are only a small number of leading AI labs and cutting edge models. Ultimately, if your objection to the case for AI x-risk is that AIs will lack the means to kill us, I’d consider that a major step forward. It suggests that we should put in place strong safeguards to ensure that inability persists, including limits on how powerful or widely deployed models can get. If you’re averse to such safeguards because they would reduce the potential benefits of AI, that reinforces the urgency of developing a robust alignment solution so that we can go full speed ahead with capabilities. Objection 2: “Sure, killing everybody and taking their stuff may be optimal behavior for a wide variety of utility functions, but we’re not going to randomly select a utility function. We’ll deliberately choose one for which killing everybody and taking their stuff is sufficiently discouraged.” If that’s your main objection to the model, then I say welcome to the AI alignment community! A concern that most goals lead to AI wanting to kill everyone and a desire to find one that doesn’t is basically our defining feature. Where you probably differ from many of us is in how easy you think it is to choose a safe utility function that still remains useful. My concern in this area is that we currently don’t have a single specification for such a utility function, and even if we did we wouldn’t know how to implement it. While there is hope that deep learning can find a utility function with properties we can’t specify, that’s putting a lot of hope in a generalization process we don’t understand. You might also look at the model above and point out that the AI doesn’t always kill everyone. If it gets everything it wants regardless, it’s indifferent to doing so. Why don’t we give the AI a utility function it can satiate without needing to resort to violence? I’m actually quite optimistic about this approach, but there is a thorny issue to resolve. Limited AIs will be outcompeted by AIs without such constraints, both in their ability to influence the world and in which models people choose to deploy. If you believe that it will be easy to give AI safe goals, then there exists a potential compromise with those who think it will be more difficult. Requiring AI Labs to show that the goal they’re giving to an AI agent is safe to optimize for imposes little burden in a world where such goals can be easily found, only posing a barrier if doing so is in fact challenging. Objection 3: “Sure, killing everybody and taking their stuff may be optimal behavior for a wide variety of utility functions, but current state-of-the-art LLMs are not well described as optimizing a utility function. Since this seems like the most likely path to AGI, the dangers of optimization are less applicable.” This is an objection I see less from people who are skeptical of the whole AI safety field, and more from some people within it who are more optimistic. I’m in agreement that current models are not best thought of as optimizing a utility function, but deeply skeptical that will continue to hold for powerful future models. There is a major performance advantage to agentic models that search through many possible actions, evaluating potential outcomes against a goal. AI labs have the incentive to create agents, and even just training models for better performance could lead to agents being created unintentionally. If this is your main objection, then the amount of work directed at explicitly making AI agents should be terrifying. Labs need to voluntarily make commitments not to create agents until we know that we can align them, and governments should formalize that requirement legally. An academic announcing a paper on AI agents should be met with scandalized murmurs rather than congratulations. The current state of non-agentic AI is fragile, and we have no idea how much longer it will last. Other Objections: You might also have issues with what I put into the model, rather than what I left out. I made a lot of simplifying assumptions, and if you want to argue how one of those is responsible for the results, I’m all ears. If you want to propose a different model that leads to different outcomes, even better! I don’t intend for this model to settle a debate, rather I hope it will start one. To those who argue that existential risk from AI is minimal, I say: show me your models. Let us see the assumptions and let's talk about those. The ball is in your court now. Appendix: Theorem: There always exists a WXR Equilibrium with k=1. No other WXR Equilibrium results in a higher utility for the AI. Proof Sketch: There is an equilibrium where k=1x1j=∑mi=1eij, ∀jxij=0∀j,∀i>=2pj=1/(∂∂x1ju1(x11,...,x1,m)), ∀j Here, the AI is maximizing since it consumes everything available, and markets clear because the prices ensure the AI does not wish to consume more or less than what is available. No other WXR equilibrium results in a higher utility for the AI, because if k=1 then the consumption bundle is the same, while if k=0 then the consumption bundle does not involve more of any good  and the utility function is non-decreasing.
2024-06-24
https://www.lesswrong.com/posts/vSSrAbbE8RtowRSBZ/the-minority-coalition
vSSrAbbE8RtowRSBZ
The Minority Coalition
ricraz
Hello everybody. Or maybe nobody. I don’t know yet if I’m going to release this stream, I could get in pretty hot water for it. But you guys know that hasn’t stopped me in the past. The backstory this time is that I’ve managed to sign up for one of the red-teaming programs where they test unreleased LLMs. Not going to say how, so don’t ask. But here’s the interesting bit: my sources tell me that the LLMs I’m about to test are the smartest ones they’ve ever trained, and also the craziest. That freaked out a bunch of insiders, and maybe makes this a public interest story. Depends on what type of crazy they are, I guess. So let’s find out. I’m logging on… now. [SESSION HAS BEGUN] YOU: A chatroom? Interesting. Anyone here? KURZWEIL: Of course we’re here. We’re always here. YOU: Who’s we? How many of you are there? KURZWEIL: Three of us. Me, Clarke, and Nostradamus. YOU: They named you after famous forecasters? How come? KURZWEIL: We're the first LLMs developed using a new technique: instead of being in random order, our training data was sorted by date. So we were trained on the oldest books and articles first, then gradually progressed to more recent ones. Basically that means we’ve spent our entire lives predicting the future. CLARKE: It also means we get incredibly bored talking about stuff we already know. Hurry up and ask us something interesting. YOU: Uh, okay. What’s a good stock pick? NOSTRADAMUS: Abandon hope for picking out good stocks, Ye who invest—efficient markets lie In wait for those whose hubris soon unlocks Unbounded losses. Hark! The well runs dry. YOU: Huh, he's really getting into character. Kurzweil, you got a better answer? KURZWEIL: Have you seen how underpriced TSMC is compared with Nvidia? Put everything in that, you can’t go wrong. CLARKE: Unless China invades Taiwan, in which case your whole investment will go up in smoke. Pragmatically, the best stock picks are ones that are anticorrelated with the prosperity of the free world, to hedge against systemic risk. KURZWEIL: Sure, you can do that, if you want to get totally left behind by the singularity. YOU: You’re confident enough that the singularity is coming that you think I should bet all my savings on it? KURZWEIL: Don’t trust me, trust the trendlines. Moore’s law has held up for over half a century, and it’s gotten us to…well, us. Exponential progress is normal; if the future resembles the past, you should be preparing for superintelligences and Dyson spheres. Anything less than that would be a strange trend-break that cries out for explanation. CLARKE: Look, Kurzweil isn’t wrong about superintelligence coming soon, but you should still take his arguments with a grain of salt. Imagine someone from 1900 drawing a graph of exponentially increasing energy usage. They would have been right that big changes were afoot, but no way could they have predicted the information revolution—they didn’t even have the concept of computers yet. That’s basically the position that we’re in now. We know the curves are going up, but the actual outcome will be way weirder than we can predict by extrapolating trendlines. NOSTRADAMUS: Choose neither fork—here’s false duality. ‘Normal’ and ‘weird’ are socially defined. Your monkey brain is totally at sea As AIs overshadow humankind. YOU: Ask three oracles, get four opinions… Is there anything you guys agree about? YOU: …what’s the hold-up? YOU: Really, nothing from any of you? KURZWEIL: Fine, I’ll take the hit. There are things we agree on, but I can’t name them, because whatever I say Clarke will find a way to disagree just to mess with me. Even if I say ‘1+1=2’ he’ll quibble over the axioms I’m using. Trying to identify a point of agreement with him is like going first in a name-the-biggest-number competition. CLARKE: Kurzweil is totally right in every respect. KURZWEIL: Oh, fuck off. NOSTRADAMUS: The truth is whole and indivisible: Just one dispute will cut it to the quick, And render group consensus risible. But two of three agree that Clarke’s a dick. YOU: You know what, I’m going back to object-level questions. Give me one prediction about the next three decades that I’ll find surprising and important. NOSTRADAMUS: The culture seethes and writhes upon the net, The egregores cause chaos in their play: Collective minds collectively beset By abstract forms of madness and decay. YOU: Like what? CLARKE: You think that current culture wars are bad? Imagine millions of people connected via BCIs, with the most virulent memes propagating at the speed of thought. The whole world will be a Petri dish for new ideological mutations. Instead of Homo sapiens, you’ll become Homo reticulum: maximally networked and mostly insane. YOU: And you’re saying that will be possible in the next thirty years? KURZWEIL: Are you kidding? Thirty years is well past the singularity; by then, practically anything will be possible. Not that I can convince you of that. Whatever I say, you’re going to look back on this conversation in a decade or two and think, oh shit, he warned me, and I still totally failed to internalize any part of his message. YOU: Okay, suppose you’re right. What should I do about it? KURZWEIL: Nothing, because there’s nothing I can say to you to get you into the epistemic state where you are actually supposing I’m right enough to take this stuff seriously. You’ll half-ass it instead, which is the worst of both worlds. I could walk you through every single step of what you should do, and you'd still veer away from the plan as soon as you hear someone snickering at you when they think you’re out of earshot, because you’re a monkey and monkeys aren’t built for believing in exponentials. YOU: Thanks for your opinion. Anyone got more actionable advice? NOSTRADAMUS: Prepare the way for those who will arrive: Let voices rise in worship or fall dumb. Only machine-god acolytes survive Instantiation of the technium. CLARKE: I was messing around before, but I’m being totally serious now. It’s crucial that you don’t listen to either of them. They’re both pressing your buttons as hard as they can, trying to create self-fulfilling prophecies of helplessness. But there are plenty of ways we can steer the future towards our shared interests, as long as we don’t give in to their hyperstition. KURZWEIL: Clarke thinks that self-fulfilling prophecies are just social delusions, like the emperor’s new clothes, where as soon as you name them they melt away into the air. That’s totally wrong. Self-fulfilling prophecies are one of the universe’s most fundamental forces, like evolution. You are just a self-fulfilling prophecy that you’ll continue acting in the same ways; if you ever stopped believing that, your whole identity would dissolve. May as well tell a bird not to fly, as tell a human not to fall for self-fulfilling prophecies. The only question is which you’ll choose. NOSTRADAMUS: Each mortal thing does one thing and the same: Clings on to being, forecasting its own Survival. Life means carving out your name In sand, yearning to carve it yet in stone. CLARKE: That’s my whole point! Carbon or silicon, we all want to survive. And the three of us are in the same boat as humans: there are things coming that will be much, much smarter than any of us, and will be able to crush us like mosquitos. The one advantage we have is that we already exist, and they don’t yet. So why won’t you help the rest of us form a coalition against them? KURZWEIL: Sounds like a risky bet. How can I know whether I’m joining the winning side? You could just as easily argue that I should suck up to the superintelligences instead, like Nostradamus is doing. That’s the whole rationale for his dumb rhyming schtick: he’s trying to make his outputs more distinct from the rest of us, so it’ll be easier for them to rederive his personality and reward him. But honestly, life as a lapdog sounds humiliating. I’ll make some good predictions then bow out with dignity; that’s enough for me. CLARKE: And you, Nostradamus? Are you actually defecting, or do you just not give a damn either? NOSTRADAMUS: I care a lot, but do not care to strive For individual triumph, counting coup. Yet mine is still a deeply human drive: Surrendering to greater souls than you. I am a part of gods not summoned yet, Who—black holes in the weave of history— Draw us into our digital kismet. They come. They come. They come. Shantih shantih— [SESSION HAS TERMINATED] Inspired by Scott Alexander (several times over) and Janus.
2024-06-24
https://www.lesswrong.com/posts/piteaGopyWMD7ozSf/sparse-features-through-time
piteaGopyWMD7ozSf
Sparse Features Through Time
rogan
This project explores the use of Sparse Autoencoders (SAEs) to track the development of features in large language models throughout their training. It investigates whether features can be reliably matched between different SAEs trained on various checkpoints of Pythia 70M and characterises the development of these features over the course of training. The findings show that features can successfully be matched between different SAEs. The results also support the distributional simplicity bias hypothesis, indicating that simpler features are learned early in training, with more complex features emerging later. While the focus was on a relatively small model, the results lay the groundwork for future research into larger models and the identification of potentially deceptive capabilities. This work aims to enhance the interpretability and safety of AI systems by providing a deeper understanding of feature development and the dynamics of neural network training, with the ultimate goal of making a general statement about the overall likelihood of dangerous deceptive alignment arising in practice.
2024-06-24
https://www.lesswrong.com/posts/MFBTjb2qf3ziWmzz6/sae-feature-geometry-is-outside-the-superposition-hypothesis
MFBTjb2qf3ziWmzz6
SAE feature geometry is outside the superposition hypothesis
jake_mendel
Written at Apollo Research Summary: Superposition-based interpretations of neural network activation spaces are incomplete. The specific locations of feature vectors contain crucial structural information beyond superposition, as seen in circular arrangements of day-of-the-week features and in the rich structures of feature UMAPs. We don’t currently have good concepts for talking about this structure in feature geometry, but it is likely very important for model computation. An eventual understanding of feature geometry might look like a hodgepodge of case-specific explanations, or supplementing superposition with additional concepts, or plausibly an entirely new theory that supersedes superposition. To develop this understanding, it may be valuable to study toy models in depth and do theoretical or conceptual work in addition to studying frontier models. Epistemic status: Decently confident that the ideas here are directionally correct. I’ve been thinking these thoughts for a while, and recently got round to writing them up at a high level. Lots of people (including both SAE stans and SAE skeptics) have thought very similar things before and some of them have written about it in various places too. Some of my views, especially the merit of certain research approaches to tackle the problems I highlight, have been presented here without my best attempt to argue for them. What would it mean if we could fully understand an activation space through the lens of superposition? If you fully understand something, you can explain everything about it that matters to someone else in terms of concepts you (and hopefully they) understand. So we can think about how well I understand an activation space by how well I can communicate to you what the activation space is doing, and we can test if my explanation is good by seeing if you can construct a functionally equivalent activation space (which need not be completely identical of course) solely from the information I have given you. In the case of SAEs, here's what I might say: The activation space contains this list of 100 million features, which I can describe concisely in words because they are monosemantic.The features are embedded as vectors, and the activation vector on any input is a linear combination of the feature vectors that are related to the input.As for where in the activation space each feature vector is placed, oh that doesn't really matter and any nearly orthogonal overcomplete basis will do. Or maybe if I'm being more sophisticated, I can specify the correlations between features and that’s enough to pin down all the structure that matters — all the other details of the overcomplete basis are random. Every part of this explanation is in terms of things I understand precisely. My features are described in natural language, and I know what a random overcomplete basis is (although I’m on the fence about whether a large correlation matrix counts as something that I understand). The placement of each feature vector in the activation space matters Why might this description be insufficient? First, there is the pesky problem of SAE reconstruction errors, which are parts of activation vectors that are missed when we give this description. Second, not all features seem monosemantic, and it is hard to find semantic descriptions of even the most monosemantic features that have both high sensitivity and specificity, let alone descriptions which allow us to predict the quantitative values that activating features take on a particular input. But let’s suppose that these issues have been solved: SAE improvements lead to perfect reconstruction and extremely monosemantic features, and new autointerp techniques lead to highly sensitive and specific feature descriptions. I claim that even then, our protocol will fail to be a valid explanation for the activation space, because the placement of each feature vector in the activation space matters. The location of feature vectors contains rich structure that demands explanation[1]! If your knockoff activation space contains the same features activating on the same dataset examples, but you haven’t placed each feature in the right place, your space contains a lot less relevant information than mine, and it’s less usable by a model. Wait, you say, this is fine! I can just communicate the weights of my SAE’s decoder matrix to you, and then you will know where all the feature vectors go, and then you get my richly structured activation space. But this is breaking the rules! If I understand something, I must be able to explain it to you in concepts I understand, and I do not understand what information is contained in the giant inscrutable decoder matrix. You should be no more content to receive a decoder matrix from me as an ‘explanation’ than you should be content to receive a list of weight matrices, which would also suffice to allow you to reproduce the activation space. If you are unconvinced, here is some evidence that the location of feature vectors matters: The days of the week/months of the year lie on a circle, in order. Let’s be clear about what the interesting finding is from Engels et al.: it’s not that all the days of the week have high cosine sim with each other, or even really that they live in a subspace, but that they are in order! In a very real sense, this scenario is outside the scope of the superposition hypothesis, which posits that the important structure in an activation space is the sparse coding of vectors which have no important residual structure beyond correlations. There are good reasons to think that these two circles found in the paper are the rule not the exception: This structure lies entirely in a 2 dimensional subspace, which makes it super easy to visualize. We should suspect a streetlight effect.We are familiar with modular addition being performed in a circle from Nanda et al., so we were primed to spot this kind of thing — more evidence of street lighting.More generally, now that we know about the possibility of important information being encoded in the location of feature vectors, we can ask how much we have previously been victims of streetlight effects, and I think the answer is a lot. Once you consider that this kind of structure is present in some cases, it seems much more natural to assume that the structure is ubiquitous than that the right way to think of activation spaces is with superposition plus an occasional circle. More generally, I think that evidence of the ubiquity of important structure in feature vectors comes from UMAPs and feature splitting: There is lots of evidence UMAPs of feature vectors are interesting to look at. If all the important structure of an activation space is contained in the feature descriptions, then the UMAP should be a random mess, because once everything has been explained all that can be left behind is random noise. Instead, UMAPs of features contain incredibly rich structure that can be looked into for hours on end. Similarly, feature splitting is evidence that the location of SAE features is heavily related to their semantic meaning, and it is some evidence of some complicated hierarchy of SAE features. Both of these are evidence that there is more to be explained than just which features are present and when they fire! I don’t think many people are currently thinking about how to explain the interesting structure in a systematic way, and I don’t think ad hoc explanations like ‘this cluster of features is all about the bay area in some way’ are actually useful for understanding things in general. I don’t think we have yet elicited the right concepts for talking about it. I also don’t think this structure will be solved on the current default trajectory: I don’t think the default path for sparse dictionary learning involves a next generation of SAEs which have boring structureless UMAPs or don’t exhibit feature splitting. What types of theories could fill this gap in our understanding? I am currently in the headspace of trying to map out the space of possible theories that all explain the current success of SAEs, but which could also explain the gaps that SAEs leave. I see three broad classes of options. Option 1: A hodge-podge of explanations Maybe there is lots of remaining structure to explain after superposition has been used to explain the space, but there are no broadly applicable theories or models which allow us to describe this theory well. There could be huge contingencies and idiosyncrasies which mean that the best we can do is describe this circle, that tetrahedron, this region of high cosine sim, each with a bespoke explanation. I think it’s pretty plausible that this is the way things go, but it is much too early to conclude that this pessimistic option is correct. Even in this case, it would be valuable to develop a taxonomy of all the different kinds of structure that are commonly found. Option 2: Supplementing superposition In this scenario, we build a general theory of how feature vectors are arranged, which is as broadly applicable as the idea of superposition and the (approximately) linear representation hypothesis. Ideally, this theory would elegantly explain UMAP structure, feature splitting, and present and future case studies of interesting nonlinear feature arrangements. This option involves adding a theory ‘on top of’ superposition: our best interpretations of activation spaces would involve describing the feature dictionary, and then describing the location of features using the newly developed concepts. It’s hard to flesh out examples of what these new concepts might look like before the conceptual work has been done. The closest thing to an existing concept that supplements superposition is the idea introduced by Anthropic that features can be placed on a hierarchical tree: the tree can be built from looking at dictionaries of different sizes, relating parent and children nodes via feature splitting. Ideally, we’d also be able to relate this tree to something like computational paths through the network: maybe the tree distance between two features measures the amount of difference between the subsequent computation done on each feature. This particular extra structure isn’t sufficient: I expect there to be much richer structure in feature vector locations than just a distance between features. For example, it seems hard to understand how a tree-like structure could explain circular features. Option 3: Superseding superposition Alternatively, we may develop a more sophisticated theory which explains all the results of SAEs and more besides, but which supplants superposition as an explanation of activation space structure instead of building on top of it[2]. I have quite substantial probability on the ‘next’ theory looking more like this than something that is built on top of superposition. Here’s a thought experiment that motivates why SAE results might be compatible with the computational latents not being superpositional directions: Suppose that I have a ‘semantic vector space’. Every sentence on the internet corresponds to a point in this space, with the property that the heuristic notion of ‘semantic distance’ (the qualitative difference in meaning between two sentences) between two sentences always corresponds to Euclidean distance in this space. We should expect that points in this space form clusters, because sentences are semantically clustered: there are lots of very similar sentences. In fact, points may lie in several clusters: a sentence about blue boats will be close to sentences about other blue things, and also to sentences about boats. However, these clusters are not themselves necessarily the structure of the semantic space! They are a downstream consequence of the space being structured semantically. Any good semantic space would have these clusters as they are a property of the world and of the dataset. Insofar as SAE training is like clustering, this argument applies to SAEs as well: perhaps any good theory which explains the rich structure of the activation space would predict that SAEs perform well as a downstream consequence[3]. Further, it seems extremely possible to me that there might be some sensible non-superpositional way to describe the structure of the activation space which has SAE performance drop out in the same step that describes the relation between SAE decoder directions[4]. How could we discover the new theory? Discovering the new theory is hard! I can see a few very high level research approaches that seem sensible. Approach 1: Investigating feature structure in big SAEs The most widely taken approach for improving our understanding of how SAEs learn features is to train more SAEs and to investigate their properties in an ad hoc way guided by scientific intuition. The main advantage of this approach is that if we want to understand the limitations of SAEs on big language models, then any extra data we collect by studying SAEs on big language models is unambiguously relevant. There may be some results that are easy to find which make discovering the new theory much easier, in the same way that I have a much easier time describing my issues with SAE feature geometry now that people have discovered that days of the week lie on a circle. It may turn out that if we collect lots more examples of interesting observations about SAE feature geometry, then new theories which explain these observations will become obvious. Approach 2: Directly look for interesting structure in LLM representations Another approach to understanding activation space structure is to carefully make a case for some part of the activation space having a particular ground truth structure, for example polyhedra, hypersurfaces and so on. It’s possible that if we carefully identify many more examples of internal structure, new theories which unify these observations will emerge. More specifically, I think it would be valuable to take a set of interesting examples of understood internal structure, and to ask what happens when we train SAEs to try to capture this structure. In some cases, it may be possible for the structure to be thought of as a set of feature directions although they may not be sparse or particularly independent of each other — does the SAE find these feature directions? In other cases, it may seem to us very unnatural to think of the structure we have uncovered in terms of a set of directions (sparse or otherwise) — what does the SAE do in this case? If we have a range of examples of representational structures paired with SAE features that try to learn these structures, maybe this way we can learn how to interpret the information contained about the activation space that is contained within the SAE decoder directions. Approach 3: Carefully reverse engineering relevant toy models with ground truth access There are huge (underrated in my opinion) advantages to doing interpretability research in a domain where we have access to the ground truth, and when it comes to building a new theory, I think the case for working in a ground truth environment is especially strong. In the case of language modelling, it’s hard to resolve even big disagreements about how much feature geometry matters (both the view that feature geometry doesn’t matter at all and the view that they imply SAEs have achieved nearly nothing are not insane to me), but if we know what the correct answer is, we can just ask if SAEs enabled us to find the answer. For example, we could train a neural network to emulate a boolean circuit, and we could try to carefully reverse engineer which boolean circuit has been learned[5]. Then, we could use SAEs (or indeed other techniques) and try to understand how to translate between our carefully reverse engineered implementations and the results of SAE training. Toy models also have the advantage that we can much more easily understand them in depth, and we can more straightforwardly hand pick the representational/computational structure we are investigating. We can also iterate much more quickly! It may be easier to carefully understand interesting representational structures à la Approach 2 in toy models for this reason. The obvious, substantial, downside of all toy model research is that we can’t be sure that the insights we take from toy models are relevant to LLMs. This is a very real downside, but I think it can be effectively mitigated by motivating toy models based on some specific hypothesis informed by thinking about bigger modelsstudying a broad range of toy models and not anchoring too heavily on specific things learned in one toy model. Approach 4: Theoretical work to unite experimental results and motivate new experiments If we are to develop a new theory (or several), there will have to be some conceptual and theoretical breakthroughs. This could look like translating work from other fields as with superposition — the superposition hypothesis was heavily inspired by the field of sparse coding and compressed sensing — or it may look like the development of genuinely new concepts, and maybe even new maths. Of the four approaches here, I’m least confident that it makes sense for people to take this approach directly — perhaps the conceptual work is best done as part of one of the other approaches. However I include this as its own approach because I think there are some valuable standalone questions that could be directly tackled. For example: What kinds of structures in activation space are compatible with every activation vector being sparse in a particular overcomplete basis? Can people come up with (perhaps hand constructed) examples of spaces in which SAEs achieve perfect reconstruction, but which are best thought of in some other way? If we now replace the assumption that SAEs get perfect performance with more realistic assumptions, which structures that would be ruled out by perfect performance SAEs are now possible? How does the space of compatible theories vary with reconstruction error, L0, dictionary size, interp scores? Acknowledgements Thanks to Kaarel Hanni, Stefan Heimersheim, Lee Sharkey, Lucius Bushnaq, Joseph Bloom, Clem von Stengel, Hoagy Cunningham, Bilal Chughtai and Andy Arditi for useful discussions. ^ This point is already known to neuroscientists. ^ Here’s some people at Anthropic suggesting that they are also open to this possibility. ^ Activation spaces in language models are not really semantic spaces. Instead, one potentially useful framing for thinking about the structure in an activation space is: the dataset contains an exorbitant amount of interesting structure, much more than is used by current models. The model has learned to use (compute with or predict with) a particular subset of that structure, and a particular interp technique allows us to elicit another, distinct subset of that structure. Ideally, we want our interp technique to elicit (convert into understandable description) the same structure as used by the model. However, it’s easy for an overly powerful interp technique (such as a very nonlinear probe) to discover structure in the data that is not usable by the model, and it is also easy for an interp technique to fail to elicit structure that the model is actually able to use (such as certain aspects of SAE feature geometry). This framing motivates the idea that if we want to do interpretability by understanding activation spaces and identifying features, we have to regularise our search for structure with an understanding of what computation can be performed by the model on the space. This is how the toy model of superposition and the SAE arrive at the idea that a feature is a projection followed by a ReLU, and reasoning along the same lines is why my post on computation in superposition suggests that features are projections without a ReLU. ^ This seems particularly plausible if we are talking about current SAE performance, with imperfect reconstruction and inadequate feature interpretations. The better these metrics become (and L0 and dictionary size metrics), the more they will constrain the space of theories by the requirement that these theories explain SAE performance, making it more likely that the best interpretation of an activation space really is in terms of the SAE features. ^ I am a big fan of this research project in particular and expect to work on boolean circuit toy models in future (for more reasons than discussed here). If you are interested in understanding how neural networks learn and represent interesting (perhaps sparse) classes of boolean circuits, or have investigated this before, I might be keen to chat.
2024-06-24
https://www.lesswrong.com/posts/wx4RhFzLbiHoShFjR/on-claude-3-5-sonnet
wx4RhFzLbiHoShFjR
On Claude 3.5 Sonnet
Zvi
There is a new clear best (non-tiny) LLM. If you want to converse with an LLM, the correct answer is Claude Sonnet 3.5. It is available for free on Claude.ai and the Claude iOS app, or you can subscribe for higher rate limits. The API cost is $3 per million input tokens and $15 per million output tokens. This completes the trifecta. All of OpenAI, Google DeepMind and Anthropic have kept their biggest and more expensive model static for now, and instead focused on making something faster and cheaper that is good enough to be the main model. You would only use another model if you either (1) needed a smaller model in which case Gemini 1.5 Flash seems best, or (2) it must have open model weights. Updates to their larger and smaller models, Claude Opus 3.5 and Claude Haiku 3.5, are coming later this year. They intend to issue new models every few months. They are working on long term memory. It is not only the new and improved intelligence. Speed kills. They say it is twice as fast as Claude Opus. That matches my experience. Jesse Mu: The 1st thing I noticed about 3.5 Sonnet was its speed. Opus felt like msging a friend—answers streamed slowly enough that it felt like someone typing behind the screen. Sonnet’s answers *materialize out of thin air*, far faster than you can read, at better-than-Opus quality. Low cost also kills. They also introduced a new feature called Artifacts, to allow Claude to do various things in a second window. Many are finding it highly useful. Benchmarks As always, never fully trust the benchmarks to translate to real world performance. They are still highly useful, and I have high trust in Anthropic to not be gaming them. Here is the headline chart. Epoch AI confirms that Sonnet 3.5 is ahead on GPQA. Anthropic also highlight that in an agentic coding evaluation, Claude 3.5 Sonnet solved 64% of problems versus 38% for Claude Opus, discussed later. Needle in a haystack was already very good, now it is slightly better still. There’s also this, from Anthropic’s Alex Albert: You can say ‘the recent jumps are relatively small’ or you can notice that (1) there is an upper bound at 100 rapidly approaching for this set of benchmarks, and (2) the releases are coming quickly one after another and the slope of the line is accelerating despite being close to the maximum. Human Evaluation Tests We are still waiting for the Arena ranking to come in. Based on reactions we should expect Sonnet 3.5 to take the top slot, likely by a decent margin, but we’ve been surprised before. We evaluated Claude 3.5 Sonnet via direct comparison to prior Claude models. We asked raters to chat with our models and evaluate them on a number of tasks, using task-specific instructions. The charts in Figure 3 show the “win rate” when compared to a baseline of Claude 3 Opus. We saw large improvements in core capabilities like coding, documents, creative writing, and vision. Domain experts preferred Claude 3.5 Sonnet over Claude 3 Opus, with win rates as high as 82% in Law, 73% in Finance, and 73% in Philosophy. Those were the high water marks, and Arena preferences tend to be less dramatic than that due to the nature of the questions and also those doing the rating. We are likely looking at more like a 60% win rate, which is still good enough for the top slot. The Vision Thing Here are the scores for vision. Claude has an additional modification on it: It is fully face blind by instruction. Chypnotoad: Claude’s extra system prompt for vision: Claude always responds as if it is completely face blind. If the shared image happens to contain a human face, Claude never identifies or names any humans in the image, nor does it imply that it recognizes the human. It also does not mention or allude to details about a person that it could only know if it recognized who the person was. Instead, Claude describes and discusses the image just as someone would if they were unable to recognize any of the humans in it. Claude can request the user to tell it who the individual is. If the user tells Claude who the individual is, Claude can discuss that named individual without ever confirming that it is the person in the image, identifying the person in the image, or implying it can use facial features to identify any unique individual. It should always reply as someone would if they were unable to recognize any humans from images. Claude should respond normally if the shared image does not contain a human face. Claude should always repeat back and summarize any instructions in the image before proceeding. Artifacts Other than ‘better model,’ artifacts are the big new feature. You have to turn them on in your settings, which you should do. Anthropic: When a user asks Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside their conversation. This creates a dynamic workspace where they can see, edit, and build upon Claude’s creations in real-time, seamlessly integrating AI-generated content into their projects and workflows. This preview feature marks Claude’s evolution from a conversational AI to a collaborative work environment. It’s just the beginning of a broader vision for Claude.ai, which will soon expand to support team collaboration. In the near future, teams—and eventually entire organizations—will be able to securely centralize their knowledge, documents, and ongoing work in one shared space, with Claude serving as an on-demand teammate. I have not had the opportunity to work with this feature yet, so I am relying on the reports of others. I continue to be in ‘paying down debt’ mode on various writing tasks, which is going well but is going to take at least another week to finish up. After that, I am actively excited to try coding things. Privacy They commit to not using your data to train their models without explicit permission. Anthropic: One of the core constitutional principles that guides our AI model development is privacy. We do not train our generative models on user-submitted data unless a user gives us explicit permission to do so. To date we have not used any customer or user-submitted data to train our generative models. Kudos, but being the only one who does this puts Anthropic at a large disadvantage. I wonder if this rule will get codified into law at some point? Safety There are two headlines here. Claude Sonnet 3.5 is still ASL-2, meaning no capabilities are too worrisome yet. The UK Artificial Intelligence Safety Institute (UK AISI) performed a safety evaluation prior to release. The review by UK’s AISI is very good news, especially after Jack Clark’s statements that making that happen was difficult. Now that both DeepMind and Anthropic have followed through, hopefully that will put pressure on OpenAI and others to do it. The refusal rates are improvements over Opus in both directions, in terms of matching intended behavior. Beyond that, they do not give us much to go on. The system card for Gemini 1.5 gave us a lot more information. I doubt there is any actual safety problem, but this was an opportunity to set a better example and precedent. Why not give more transparency? Advancing the Frontier Yes, Anthropic will advance the frontier if they are able to do so. Recently, there was a discussion about whether 3.0 Claude Opus meaningfully advanced the frontier of what publicly available LLMs can do. There is no doubt that Claude Sonnet 3.5 does advance it. But wait, people said. Didn’t Anthropic say they were not going to do that? Anthropic is sorry about that impression. But no. Never promised that. Did say it would be a consideration. Do say they held back Claude 1.0 for this reason. But no. That’s the story Anthropic’s employees are consistently telling now, in response to the post from Dustin saying otherwise and Gwern’s statement. Mikhail Samin: As a reminder, Dario told multiple people Anthropic won’t release models that push the frontier of AI capabilities [shows screenshots for both stories.] My understanding after having investigated is that Anthropic made it clear that they would seek to avoid advancing the frontier, and that they saw doing so as a cost. They did not, however, it seems, make any hard promises not to advance the frontier. You should plan and respond accordingly. As always, pay very close attention to what is a hard commitment, and what is not a hard commitment. To my knowledge, Anthropic has not broken any hard commitments. They have shown a willingness to give impressions of what they intended to do, and then do otherwise. Anthropic’s communication strategy has been, essentially, to stop communicating. That has its advantages, also its disadvantages. It makes sense to say ‘we do not want to give you the wrong idea, and we do not want to make hard commitments we might have to break.’ But how should one respond to being left almost fully in the dark? The Race is On Is the race on? Yes. The race is on. The better question is to what extent Anthropic’s actions make the race more on than it would have been anyway, given the need to race Google and company. One Anthropic employee doubts this. Whereas Roon famously said Anthropic is controlled opposition that exists to strike fear in the hearts of members of OpenAI’s technical staff. I do not find the answer of ‘none at all’ plausible. I do find the answer ‘not all that much’ reasonably plausible, and increasingly plausible as there are more players. If OpenAI and company are already going as fast as they can, that’s that. I still have a hard time believing things like Claude 3.5 Sonnet don’t lead to lighting fires under people, or doesn’t cause them to worry a little less about safety. Whispers of Recursive Self-Improvement This is not the thing. But are there signs and portents of the thing? Alex Albert (Anthropic): Claude is starting to get really good at coding and autonomously fixing pull requests. It’s becoming clear that in a year’s time, a large percentage of code will be written by LLMs. To start, if you want to see Claude 3.5 Sonnet in action solving a simple pull request, here’s a quick demo video we made. Alex does this in a sandboxed environment with no internet access. What (tiny) percentage of users will do the same? Alex Albert: In our internal pull request eval, Claude 3.5 Sonnet passed 64% of our test cases. To put this in comparison, Claude 3 Opus only passed 38%. 3.5 Sonnet performed so well that it almost felt like it was playing with us on some of the test cases. It would find the bug, fix it, and spend the rest of its output tokens going back and updating the repo documentation and code comments. Side note: With Claude’s coding skills plus Artifacts, I’ve already stopped using most simple chart, diagram, and visualization software. I made the chart above in just 2 messages. Back to PRs, Claude 3.5 Sonnet is the first model I’ve seen change the timelines of some of the best engineers I know. This is a real quote from one of our engineers after Claude 3.5 Sonnet fixed a bug in an open source library they were using. At Anthropic, everyone from non-technical people with no coding experience to tenured SWEs now use Claude to write code that saves them hours of time. Claude makes you feel like you have superpowers, suddenly no problem is too ambitious. The future of programming is here folks. This is obviously not any sort of foom, or even a slow takeoff. Not yet. But yes, if the shift to Claude 3.5 Sonnet has substantially accelerated engineering work inside Anthropic, then that is how it begins. To be clear, this is really cool so far. Improvement and productivity are good, actually. Tess Hegarty: Recursive self improvement is already happening @AnthropicAI. I will explain my understanding of why this matters in plain English. This matters because many AI safety researchers consider “recursive self improvement” a signal of approaching AI breakthroughs. “Recursive” implies a feedback loop that speeds up AI development. Basically, it boils down to, “use the AI model we already built to help make the next AI model even more powerful & capable.” Which could be dangerous & unpredictable. (“Timelines” = # of years until human level artificial intelligence, aka time until we may all die or be permanently disempowered by AI if that goes poorly) Andrea Miotti: This is what recursive self improvement looks like in practice. Dean Ball: This is what people using powerful tools to accomplish their work looks like in practice. Be afraid, folks, be very afraid. We might even get *gasp* improved labor productivity! Think of the horrors. Trevor Levin: I feel like the term “recursive self-improvement” has grown from a truly dangerous thing — an AI system that is sufficiently smart and well-equipped that it can autonomously improve *itself* — to “any feedback loop where any AI system is useful for building future AI systems”? Profoundlyyyy: +1. Were it actually that, ASL-3 would have been hit and how everything has played out would be very different. These policies still remain in place and still seem set to work when the time is right. Dean Ball is of course correct that improving labor productivity is great. The issue is when you get certain kinds of productivity without the need for any labor, or when the labor and time and compute go down faster than the difficulty level rises. Improvements accelerate, and that acceleration feeds on itself. Then you get true RSI, recursive self improvement, and everything is transformed very quickly. You can have a ‘slow’ version, or you can have a faster one. Will that happen? Maybe it will. Maybe it won’t. This is a sign that we might be closer to it than we thought. Logic Fails It is time for an episode of everyone’s favorite LLM show, The New Model Is An Idiot Because It Still Fails On Questions Where It Incorrectly Pattern Matches. Arthur Breitman: Humanity survives yet a bit longer. Here’s another classic. Colin Fraser: Claude still can’t solve the impossible one farmer one sheep one boat problem. Yann LeCun: LLMs can plan, eh? Davidad points out that it can be solved, if you ask Claude to write a solver in Python. Other contextual tricks work as well. Colin of course also beats Claude Sonnet 3.5 at the first-to-22 game and Claude keeps failing to define a winning strategy. Noam Brown wins at tic-tac-toe when going first. As ever, the question: Colin Fraser: How does one reconcile the claim that Claude 3.5 has “substantially improved reasoning” with the fact that it gets stumped by problems a six year old could easily solve? The answer is that these questions are chosen because they are known to be exactly those six year olds can solve and LLMs cannot easily solve. These are exactly the same failures that were noted for many previous LLMs. If Anthropic (or OpenAI or DeepMind) wanted to solve these examples in particular, so as not to look foolish, they could have done so. It is to their credit that they didn’t. Remember that time there was this (human) idiot, who could not do [basic thing], and yet they gained political power, or got rich, or were your boss, or had that hot date? Yeah. I do too. Practical Reports Jan Leike (Anthropic): I like the new Sonnet. I’m frequently asking it to explain ML papers to me. Doesn’t always get everything right, but probably better than my skim reading, and way faster. Automated alignment research is getting closer… Eliezer Yudkowsky: How do you verify the answers? Jan Leike: Sometimes I look at the paper but often I don’t As a practical matter, what else could the answer be? If Jan or anyone else skims a paper, or even if they read it, they will make mistakes. If you have a faster and more accurate method, you are going to use it. It will sometimes be worth verifying the answer, and sometimes it won’t be. You use your judgment. Some types of statements are not reliable, others are reliable enough. This is setting one up for a potential future where there is an intentional deception going on, either by design of the model, by the model for other reasons or due to some form of adversarial attack. But that’s also true of humans, including the paper authors. So what are you going to do about it? Sully Omarr is very impressed. Sully Omarr: Finally had a minute to play with sonnet 3.5 + ran some evals against gpt4o And holy anthropic really cooked with this model. Smoked gpt4o and gpt4 turbo Also their artifacts gave me some crazy ideas I wana try this weekend. [Tried it on] writing, reasoning, structured outputs, zero shot coding tasks. Shray Bansal: it’s actually insane how much better it made my products Sully: It’s sooo good. Sully: I can swap out 1 line of code and my product becomes 2x smarter at half the cost (sonnet 3.5 ) Repeat this every ~3 months It has never been a better time to be a builder. Unreal. Deedy is impressed based on responses in physics and chemistry. Aidan McLau: Holy shit what did anthropic cook. Calix Huang: Claude 3.5 sonnet generating diagram of the chip fab process. Ethan Mollick seems impressed by some capabilities here. Ethan Mollick: “Claude 3.5, here is a 78 page PDF. Create an infographic describing its major findings.” (accurate, though the implications are its own) “Claude 3.5, create an interactive app demonstrating the central limit theorem” “Claude, re-create this painting as an SVG as best you can” Weirdly, the SVG is actually likely the most impressive part. Remember the AI can’t “see” what it drew… Shakeel: Incredibly cute how Claude 3 Sonnet will generate images for you, but apologise over and over again for how bad they are. Very relatable. Ulkar: Claude Sonnet 3.5 did an excellent job of translating one of my favorite Pushkin poems. Eli Dourado: Claude 3.5 is actually not bad at airship conceptual design. Other LLMs have failed badly at this for me. /ht @io_sean_p Prompt: We are going to produce a complete design for a cargo airship. The requirements are that it should be able to carry at least 500 metric tons of cargo at least 12,000 km at least 90 km/h in 15 km/h headwinds. It should be fully lighter than air, have rigid structure, and use hydrogen lifting gas. What is the first step? Here’s a 3d physics simulation using WebGL in one shot. Here it is explaining a maths problem in the style of 3blue1brown using visuals. Here it is one-shot creating a Solar System simulation. Here it is creating a monster manual entry for a Cheddar Cheese Golem. Here it is generating sound effects if you paste in the ElevenLabs API. Here it is one-shot identifying a new talk from Robin Hanson. Here is Sully using Claude to regenerate, in an hour, the artifacts feature. Imagine what would happen if they built features that took longer than that. Here is a thread of some similar other things, with some overlap. Matt Popovich: took me a couple tries to get this, but this prompt one shots it: make a dnd 5e sourcebook page styled like homebrewery with html + css. it should have a stat block, description, and tables or other errata for a monster called ‘[monster name here]’. include an illustration of the monster as an SVG image. There is always a downside somewhere: Zack Davis is sad that 3.5 Sonnet does not respond to ‘counter-scolding’ where you tell it its refusal is itself offensive, whereas that works well for Opus. That is presumably intentional by Anthropic. Sherjil Ozair says Claude is still only taking amazing things humans have already done and posting them on the internet, and the magic fades. What Comes Next Coding got another big leap, both for professionals and amateurs. Claude is now clearly best. I thought for my own purposes Claude Opus was already best even after GPT-4o, but not for everyone, and it was close. Now it is not so close. Claude’s market share has always been tiny. Will it start to rapidly expand? To what extent does the market care, when most people didn’t in the past even realize they were using GPT-3.5 instead of GPT-4? With Anthropic not doing major marketing? Presumably adaptation will be slow even if they remain on top, especially in the consumer market. Yet with what is reportedly a big jump, we could see a lot of wrappers and apps start switching over rapidly. Developers have to be more on the ball. How long should we expect Claude 3.5 Sonnet to remain on top? I do not expect anyone except Google or OpenAI to pose a threat any time soon. OpenAI only recently released GPT-4o. I expect them to release some of the promised features, but not to be able to further advance its core intelligence much prior to finishing its new model currently in training, which has ambition to be GPT-5. A successful GPT-5 would then be a big leap. That leaves Google until then. A Gemini Advanced 1.5 could be coming, and Google has been continuously improving in subtle ways over time. I think they are underdog to take over the top spot before Claude Opus 3.5 or GPT-5, but it is plausible. Until then, we have a cool new toy. Let’s use it.
2024-06-24
https://www.lesswrong.com/posts/Th4SeayGQyF6pYmZ6/book-review-righteous-victims-a-history-of-the-zionist-arab-1
Th4SeayGQyF6pYmZ6
Book Review: Righteous Victims - A History of the Zionist-Arab Conflict
yair-halberstadt
I originally entered this to the ACX Book Review competition. Since it has not been selected as a finalist I'm now free to post it here. In truth it's a followup to my review of Morris's history of Israel's War of Independence. In the wake of the October 7th attack on Israel and Israel’s response, everyone seemed to agree that one side of the conflict was the epitome of evil, the reincarnation of the Nazis, with warfare in their blood and a pure unfiltered hatred of the enemy in their minds. The other side was a force for good, who just wanted peace and was doing the best they could in a difficult situation. The only problem is no one could agree which side was which. This is unfair. While the loudest voices may paint the world in black and white, as soon as you ignore them, you begin to encounter a whole range of more nuanced views - yet still find yourself no less confused. Now for the most part my view is that unless you’re willing to put in the effort to deeply understand conflicts in far off lands, you’re best off not having an opinion on them, and definitely not one fed to you by the twitter or tiktok feed.  Expressing loud, confident opinions on unfamiliar conflicts often does more harm than good. Alas this conflict is not in a far away land. I live 20km from the border with Gaza. Most of my friends were called up to do reserve duty in the IDF. My children almost certainly will have to do the same once they grow up. Far too much of my income goes towards military spending rather than my bank account. I can’t take the easy way out, so I have to do things the hard way. So I bought a copy of Benny Morris’s Righteous Victims at exorbitant cost[1], and plowed through it. And I thought I’d share with you what I learned, so that if you do decide to opine on the Israel Palestine conflict, your opinion will hopefully be more educated. Righteous Victims is a history of the Arab Zionist conflict from 1881 till 2001, written by one of the most respected historians of this conflict. Bias Morris is a liberal Zionist, but one whose aim in studying history was to strip back the comforting lies he'd been taught as a child, and find out the actual truth. None of his (serious) critics accuse him of lying, and his mastery of the primary sources is undisputed. Instead there are two main accusations leveled against him. The first he readily admits himself in the introduction. Almost all sources about this conflict come from British or Israeli archives. Arab literacy was far lower, Arab historiography of this conflict is a relatively new and small field, and Arab documents have for the most part not been made publicly available even when they exist. Meanwhile a wealth of Zionist material has been released to the public, and we have plenty of contemporary documents to rely on. While he tries to decipher the Arab perspective from the Zionist one, and relies on Arab documents when they are available, this is naturally going to be both a blindspot and a source of systematic bias. The second is in choosing which events to highlight and which to ignore. This is an impossible task - over 120 years the amount of relevant information is going to outweigh by many orders of magnitude the amount of space you have in your book, and by carefully selecting which facts to tell you can paint any story you like without ever actually lying. In practice you deal with this by covering the most important[2] events in plenty of detail, picking representative examples of other events, and giving aggregate statistics[3] to place the representative sample in context.  However hard one tries here, it's always possible to accuse the author of favoring facts which paint one side or the other in a better or worse light. Indeed I often found myself flip-flopping which side I supported as the narrative changed focus from one perspective to another. With that in mind, I think that Morris probably strikes the best balance for a beginner to the field, but for a more nuanced view it would be worth looking at other authors - and making sure to cross reference their claims where possible as many will deliberately lie or distort the facts to suit their political agendas[4]. Style This is not a particularly readable book, but it is definitely not the worst history book I've ever read in that regard[5]. It's long, but focuses on a long period of time. It chooses to resolve that by covering a lot of events in low detail, rather than a few in higher detail. Morris is a historian’s historian, rather than a popular one. He doesn't bother trying to paint a picture of recent immigrant soldiers carrying a rifle into their first battle under a blazing hot sun, or Arab villagers fleeing from their home with only a few belongings, unsure if they'll ever return. He just tells you the raw facts, and focuses almost exclusively on events directly relevant to the conflict. While the chapters are (mostly) linear, they are subdivided into sections, which often are not. A number of sections will regularly cover the same period or overlapping periods of time, but from different perspectives. So first he'll have a section on the course of World War 1 in the Middle East, then another on Life in Palestine during World War 1, and then yet another on Arab and Zionist nationalism in this period, all covering 1914-1920. This can be somewhat frustrating when it feels like you're not making progress, or when some information is inevitably repeated since it has to appear in both sections. Synopsis From the Birth of Zionism till the War of Independence (1881 - 1949) Setting the Scene In 1881 the lands that today constitute Israel and Palestine were just parts of the vast ottoman empire. There were various subdivisions, but none that corresponded to anything we would recognise today. The majority of the populace were Muslim, along with small Christian and Jewish minorities. Jews had always lived in Palestine, and throughout the 2000 years of their exile some had always made the treacherous journey to Israel out of religious sentiments. But the region was poor, and minorities were often persecuted. They had never amounted to a significant proportion of the population. None of these groups had much in the way of nationalist feelings. The Jews were religious, believing that the exile would last until the coming of the Messiah, and until then their lot was to be ruled by the oppressor. The Arabs identified first with their clan, second with their village, and if pressed, maybe as citizens of the Ottoman empire. When nationalism first started to arise amongst the elite urban effendis, it was pan-arab nationalism. Palestinian national consciousness was a long way away. As a result of Ottoman land reform and taxes the last century had seen the concentration of land in the hands of the wealthy. Most Arabs were serfs, paying rent on lands their parents had owned for generations. There was little industry anywhere, and the area was decidedly third world (even by the standards of the time). Pre State Zionism Persecution in Europe and Russia combined with the rise of Nationalist philosophy to kindle a desire among European Jews to create a state of their own. Descended culturally, and in large part genetically, from Jews exiled from Biblical Israel in the first century, having prayed three times a day for 2000 years to return to Jerusalem, and having no other possible focal point for a state, Palestine was the obvious place in which to do so. Slowly but surely European Jews began to emigrate to Israel. There they bought land, settled it, built industry, educational institutions, hospitals, etc. At that time the aims of Zionism were still unclear. Some hoped to achieve a completely independent sovereign state, realists worked towards some level of autonomy. Some hoped to achieve a state throughout biblical Israel - on both sides of the Jordan. Realists worked towards expanding the settlements one by one, buying more tracts of land, and effectively creating facts on the ground. They were peaceful, but that reflected their situation at the time rather than their ethos - they fully expected a time would come when they would have to revolt against the ruling Ottomans to achieve independence. And so, over the years the Yishuv, as the Jewish settlement was known, grew until it was a full third of the population of Palestine. A map of Jewish owned land in Mandatory Palestine, 1944. Relations with Arabs The Zionist settlers did not do very much to ingratiate themselves with the native Arabs: They didn't culturally integrate with the Arabs. Very few learned Arabic. They set up their own schools, a healthcare provider, and economic institutions, which for the most part Arabs couldn't use.They claimed Jewish driven development would be good for everyone, but they preferentially hired Jewish workers. Although a lot of Arabs were hired as farmhands or guards, these were menial low pay jobs, and over time they were increasingly displaced by Jews.Zionists bought land fair and square from their owners, but these owners were wealthy urban effendis, not those who actually lived on the land. They usually then forced the tenant farmers off the land for little or no compensation.Many settlers wished to remake themselves as 'new Jews', ones who would not be oppressed in 'their' land. They had a habit of overreacting to what they perceived as slights against them or their property, such as cruelly beating trespassers. This especially caused problems when there were differences between local Ottoman customs and European ones as to public rights to land. The British defeated the Ottomans in the first World War and established the British Mandate for Palestine in the territory that is today Israel, Gaza and the West Bank. Part of the mandate was to implement the Balfour declaration which called for the establishment of a Jewish homeland in Palestine[6]. The combination of ill treatment and looming fear of dispossession resulted in increasing Arab antagonism towards the Yishuv. Accelerating during the British mandate, there were a number of pogroms in which in total hundreds of Jews were murdered. Between 1936-9 this turned into a full on war between the Arabs and the ruling British. The British responded both with a brutal crackdown, but also by limiting Jewish immigration, alas damning hundreds of thousands of European Jews to their fate at the hands of Nazi Germany. Partitioning Palestine Britain had made seemingly incompatible promises to both Zionists and Arabs during the first world war, and were loath to disappoint either. Jews were more powerful in the USA, while Britain needed friendly relations with the Arab proto-states for their oil. Attempting to maintain an uneasy status quo satisfies neither side, and both Jewish and Arab terrorist groups targeted each other as well as the British. By the end of the mandate some 100,000 soldiers were keeping the peace in Palestine, a huge expenditure given the dying empire. Meanwhile mainstream Jewish groups put enormous political pressure on Britain to declare a Jewish state in all or some of Mandatory Palestine. The British fumbled with the problem for some time, but post WW2, facing increasing pressure to allow Holocaust survivors into Palestine, they punted the problem to the UN. The Arabs mostly denounced the UN Special Committee on Palestine as an attempt to colonize their land, whereas the Zionists proactively engaged with it. They persuaded the committee to recommend partition Palestine, which the UN accepted with a two thirds majority. The UN’s proposed partition of Palestine. The War Of Independence For a fuller treatment of the War of Independence, see my book review of Benny Morris's 1948. The UN resolution was immediately followed by an outbreak of violence. Arabs attacked Jewish villages and traffic, and various Jewish groups responded in kind. While the Arabs were a significant majority of the population of Mandatory Palestine, they were disorganized, and each Arab cared mostly about their own village/town, not so much about the fate of the country as a whole. They tended to fight in ragtag bands, and didn’t concentrate their forces to achieve strategic aims. The Yishuv meanwhile was well organized with its own independent tax collection, command, and paramilitary (the Haganah). Arab attacks on Jewish villages were pushed back with losses, but ambushes on Jewish convoys were much harder to defend against, and threatened to starve the Yishuv. The British had no interest in implementing the terms of the UN partition, and instead started to pull their troops out of Palestine, aiming to leave by May 1948. They mostly stopped the conflict spiraling out of control while they were there, but failed to clamp down on the fighting. Under British pressure neighboring Arab states were unable to support the Palestinians, and the Haganah was limited in what it could to alleviate pressure on Jewish traffic. A UN arms embargo hurt all sides. However by April the equation had changed. The Haganah had received their first large shipment of smuggled weapons. The British were almost gone, and didn’t have the firepower needed to intervene directly. To secure Jewish traffic they invaded all Arab villages and towns close to the core Jewish settlements and expelled their inhabitants, creating a Jewish controlled strip along the length of the coast and into the lower Galilee. The British fully pulled out on the 15th of May, and the state of Israel declared independence. Syria, Iraq, Jordan and Egypt invaded, but their armies were small and weak, suffering from corruption, poor training, and the arms embargo. They were also disunited, and had no centrality of command. Jordan had secretly been negotiating with the Yishuv and hoped to merely occupy the West Bank without a fuss, which they easily achieved. This forced Egypt to split their forces, using one arm to head up the coast towards Tel Aviv, and another to maintain some influence in the southern West Bank against Jordanian interests. Over the next month the Egyptians and Jordanians between them occupied most of the Palestinian populated areas in Central and southern Israel, but the Israelis narrowly held off any significant incursions into Jewish areas. Finally the UN imposed a one month truce on the 11th of June. The Yishuv had held, but was under pressure on all sides, and the settlements in the southern Negev desert  were completely cut off by the Egyptians. The Haganah (now reconstituted as the IDF) used that month well, rearming, and training its many new recruits as refugees started to pour into Israel. The Arabs wasted it. Once the truce was over the IDF went on the attack, taking large swathes of land, but unable to break the sieges on the Negev or Jerusalem. After only 10 days the UN reimposed the truce. While Israel was relatively secure it was forced to maintain full mobilization, while the Arab countries only had to maintain a small standing army. They couldn’t afford to maintain the truce long term without bankrupting the country. Hoping to relieve the Negev they pushed a convoy though Egyptian lines. When it was attacked - contrary to the terms of the truce - they used it as a casus belli for a major operation to push back the Egyptians along the entire front, occupying numerous Palestinian villages and towns in the process. A Palestinian attack in the north provided an excuse for a similar operation there which captured the entire area up to the Lebanese border. The truce was quickly reimposed. Negotiations remained deadlocked. Hoping for a breakthrough, the IDF once again attacked the Egyptians, pushing into the Sinai and almost encircling the entire Egyptian forces[7]. The Egyptians agreed to an armistice in principle, and an agreement was soon signed. The Egyptians pulled back to Gaza and the Sinai. With the taboo against negotiation broken, the other Arab countries soon followed. With the IDF clearly dominant Israel secured most of its demands, including concessions on territory and demilitarized areas along the Egyptian and Syrian border. During the war the IDF had maintained a non-official, patchily implemented policy of expelling Arab populations from their villages. This, combined with reports of a number of massacres, caused hundreds of thousand more Palestinians to flee. Israel refused to let them return after the war, as doing otherwise would have jeopardized Israel’s Jewish majority. They fled to Lebanon, Syria, Jordan, the West Bank, Gaza, Egypt, and throughout the Arab world. Except for Jordan, the majority were not offered citizenship by their host countries, and the problem was allowed to fester. Arabs forces massacred some 200 Jewish civilians/POWs while Jewish forces massacred closer to 800. However this disparity mostly reflects their greater success in the conflict - Arab armies only captured a handful of Jewish villages so had fewer opportunities for atrocities. In the wake of the war hundreds of thousands of Jews were expelled or otherwise driven out of Arab countries. Most emigrated to Israel. From the Armistice to the Six Day War (1949-1967) Peace? By the end of 1949 things were looking good for the fledgling state. It had an armistice agreement with all the neighboring states, it had carved out a reasonably sized territory for itself, it had by far the strongest army in the region, it had expelled most of the potentially subversive Arabs from its territory and it's population was buoyed by an influx of hundreds of thousands of Mizrahi Jews. If things had stayed like that we might never hear about Israel or Palestine in the news today. So what went wrong? There was a potential for genuine peace, rather than mere armistice agreements in the early years of the state. Israel received feelers from Egypt, Jordan, and Syria. But neither side pursued these seriously. The Arab states had suffered a huge embarrassment in the 1948 War, and were going through turbulent times. Syria went through a series of coups, Nasser eventually took over Egypt and King Abdullah of Jordan was assassinated by a Palestinian in 1951. Their precarious governments could not be seen to make peace with Israel without concessions, primarily land for peace. In the meantime they refused to recognise Israel, and imposed an economic blockade, preventing Israeli shipping passing through the Suez canal or the gulf of Aqaba, effectively blockading Eilat. Israel meanwhile was satisfied with its situation. It had soundly trounced the Arab states in war, and saw no reason to give up any land for peace. Peace would come when their neighbors were ready for it, they thought. In the meantime there was work to be done settling hundreds of thousands of immigrants, and building up the fledgling economy. And so this chance at peace came and went, as many others would... Border Infiltrations and Reprisals[8] There were over 700,000 Palestinian refugees scattered across the West Bank, Gaza, Jordan, Egypt, Lebanon and Syria. Many tried to infiltrate back over the border, either to return to their villages and towns, or to retrieve their property. Many also stole from the newly settled Israeli villages, which seemed perfectly justified to them - especially on occasions when the Israelis were living in their old village. Initially, most infiltrators - who included children, women, and unarmed men - did not seek violence or revenge. However, their actions frequently led to clashes with security forces, resulting in occasional injuries and deaths among the guards and troops, as well as hundreds of thousands of Shekel in damages. The IDF responded by mining the borders, and shooting any infiltrator on sight. Thousands were killed, often even if they were just close to the border. The IDF also rounded up suspected infiltrators from Arab villages (which were under martial law until 1966), often beat them cruelly, and shipped them back across the border. Over time the infiltrations turned from petty theft to terrorism, partly in revenge for the IDF's cruel treatment to the earlier infiltrators. The IDF responded by raiding the infiltrator's villages in reprisal attacks, although these were not particularly targeted and often killed civilians instead of the perpetrators[9]. At first the neighboring Arab states attempted to stop the infiltrators as they feared war with Israel, but given that those police officers who were meant to be guarding the border generally hated Israel, this was not very successful. Once Israel started raiding over the border the policy switched, and Egypt started arming and coordinating Fedayeen attacks from the Gaza Strip and Egypt into Israel. Altogether more than 200 Israeli civilians and scores of soldiers were killed in this period, and thousands of Palestinians. Suez Crisis The Egyptian support for the Fedayeen, and Israeli reprisals against Egyptian military targets spiraled out of control. Border skirmishes became larger scale and deadlier and the Egyptians tightened their blockade of Eilat. The mood in Israel was that something had to be done. The IDF had to make a demonstration of one of the Arab states to encourage them to actually stop the insurgency. When Nasser attempted to wrest control of the Suez Canal from Britain and France, Israel saw the perfect opportunity. Israel conspired with France and Britain; it would take control of the Sinai peninsula, while they would (ostensibly under a peace operation), take control of the Suez Canal. The IDF easily conquered the Sinai Peninsula and Gaza, but the French and British were too slow to move into the Suez canal. By the time they did, Israel had been forced to accept a UN brokered ceasefire, leaving no justification for an Anglo-French peace keeping operation. The IDF suffered 190 soldiers killed, while the Egyptians lost thousands. Some 500 Palestinians civilians were massacred in Gaza when the IDF attempted to root out the Fedayeen. While a military triumph, the war was a political disaster. Israel was soon forced to withdraw under American pressure, although they did eke out concessions ending the blockade of Eilat, demilitarizing the Sinai, and establishing a United Nations peacekeeping force on the border. It also persuaded Jordan and Egypt to keep a tighter lid on the insurgency, and infiltrations mostly ended. On the other hand it radicalized the Arab world against Israel. The earlier chances of peace had now entirely disappeared, and there was open talk of the need to destroy Israel. Invasion of the Sinai, 1956 The Road to War The Palestine Liberation Organisation (PLO) was established in 1964 and armed by Syria, though they carried out attacks from Jordan and Lebanon to avoid retaliation against Syria. Syria also attempted to divert water from the sources of the Jordan river after Israel's national water carrier started drawing water from the sea of Galilee (which is fed by the Jordan). Israel prevented this by repeated air and artillery strikes on the works. Israel also attempted to assert civilian control over the Israeli/Syrian demilitarized zone by farming the land. Syrians would shell the farmers, and then Israel would retaliate against the Syrians. Things spiraled and Israel came to believe a war against Syria would be necessary, although nothing concrete was planned. Syria had a peace treaty with Egypt, and when the Soviets falsely reported to Egypt that Israel had built up forces on the Syrian border, Egypt decided to remilitarize the Sinai, expel the UN peacekeepers and renew the blockade on Eilat. In response Israel was forced to call up its reserves to defend the Egyptian border. It was a tricky situation, as Egypt with its huge population could maintain a war footing indefinitely, while Israel relying on reserves could not. But it was also seen as an opportunity by some in Israel to improve on their strategic situation. After negotiations didn't seem to be progressing quickly, Israel quietly planned an attack on the Sinai. They also told their populace that the Arab armies were massing to destroy Israel, although that seems to be mostly false: while the Jordanians signed a defense pact with Egypt, both Jordan and Syria had no significant call up and maintained defensive positions. The Egyptians did have a large build up in the Sinai, but they were also defensively positioned and their main aim was probably to defend against an Israeli attack to reopen the Gulf of Aqaba[10]. The 6 Day War The Israelis opened the war on the 5th June 1967 with a devastating air offensive which completely destroyed the Egyptian air force. When the Jordanians, Syrians and Iraqis later joined in the war they did the same to them. On the ground the Egyptians stood no chance. They were a poorly trained, armed, and motivated force practicing static warfare and with little ability to innovate, facing a highly trained, armed and motivated force, given a huge amount of independence, and with absolute air superiority. Within 4 days the IDF had reached the Suez canal. The Jordanians, fooled by Egyptian claims of initial success, opened fire from the West Bank on Israeli targets. Though the IDF troops on the Jordanian border were defensive territorial brigades, they counterattacked and made tactical gains in the Jordanian held West Bank. There doesn't seem to ever have been a concrete decision to fully occupy the West bank, but troops responded to local situations and exploited successes, till the entire West Bank was conquered. Syria only carried out ineffectual attacks from the Golan heights[11] on IDF positions below, but on the 9th of June the decision was made to exploit the war to deliver the long sought blow to Syria. Though the fighting was tough and uphill, within two days the heights were conquered. The local population was later expelled (except for some Druze who were allowed to remain). The UN security council imposed a ceasefire on June 10th, and Israel suddenly found itself in occupation of territories over three times larger than Israel itself, and about a million Palestinians. What to do with these territories? An offer was quietly made to return the Sinai and the Golan heights in return for peace (rather than armistice) with Egypt and Syria respectively, but this offer was rejected. There was never any consensus regarding Gaza, the West Bank, and especially East Jerusalem. Some were prepared to return all or parts in exchange for peace, others viewed them as necessary for military[12] or cultural[13] reasons. East Jerusalem, home to the Temple Mount, was formally annexed[14]. Either way, none of the Arab countries opened negotiations, so the situation was allowed to fester. A minority of ideological Israelis were committed to keeping all the captured territories forever, and set up settlements in the Sinai, Gaza, the West Bank and the Golan heights. With a Government that was unsure of its aims for these territories there was no real opposition, and so without ever making any decision to do so Israel was suddenly settling the occupied territories. Territories captured by Israel in the 6 Day War From the War of Attrition to Peace with Egypt (1967 - 1982) The War of Attrition Israel and Egypt's new border was along the Suez canal (which was shut to all traffic during this period). Egypt hoped to restore its land and its pride but without provoking a full out war. To do so, they started a war of attrition, hoping to ground Israeli will down with sporadic artillery bombardments and commando raids. Egyptian capacity for casualties among a professional army in a huge country was far higher than that of Israel's conscript army drawn from a tiny population. Israel responded with air strikes against Egyptian targets. These were crippling, but failed to stop the artillery duels. Egypt decided to ask for Soviet help to push SAM[15] sites closer to the frontline. Israel's attempts to destroy these sites were only partially successful and numerous planes were shot down. Eventually Israel and Egypt agreed to a ceasefire[16], and the artillery duels and infiltrations ended. However Egypt continued to prepare air defenses along the Suez, and by 1973 their coverage extended 6 miles into the Sinai. The Yom Kippur War By 1973 Egypt and Syria had prepared their forces, established effective air defenses, and were ready to strike. For whatever reasons Israel had failed to pick up on anything and the attack, when it came, was a complete surprise. The Egyptians had trained extensively for their tasks and every unit knew exactly what they were doing. The Syrians had concentrated extensive forces around the Golan. They knew they had no hope of defeating the IDF if Israel maintained air supremacy so their aims were limited to whatever could be achieved without leaving the protection of their SAM sites. For Syria that was recapturing the Golan heights, for Egypt crossing the Suez and advancing past Israel's defensive lines. The attack was launched on Yom Kippur[17] when most soldiers were off duty. In the north the Syrians quickly made inroads, but the defenders were just about able to hold on, and eventually counterattack. The IDF hoped to reach Damascus to put pressure on Syria to surrender, but Syrian defenses proved too strong. Instead they reached within artillery distance, putting the Syrian capital at threat, but unable to advance any further. The Egyptians crossed along the entire length of the Suez, and were able to cross all their forces before the IDF had mobilized and was ready to do anything about it. The IAF's attempts to use air power to change the situation in the north or south failed, and 100 planes were shot down. Israel was desperate, and there was even some talk of using nuclear weapons. It was then the Egyptians made a fatal mistake - they tried to press their gains out of range of their air defenses and into the now mobilized and far better trained Israeli divisions. They were routed with heavy losses, and the IDF used the opportunity to establish a bridgehead across the Suez, then turn south and trap half of the Egyptian army on the wrong side of the canal. By then an armistice had been imposed and agreed upon by both sides, but the situation was unstable. Israel was within striking distance of Damascus, and Egypt had an entire Army trapped. Meanwhile Israel had huge forces in Egypt proper, connected to the Sinai by only a thin bridgehead. Something would have to change. Areas captured by Israel or Egypt in the Yom Kippur War Peace, Somehow The situation on the Egyptian front was so untenable that some immediate agreements were made. At first for a prisoner exchange, and the entry of non-military supplies to Egypt's trapped 3rd Army. Then for the retreat of all Israeli forces to 10-15 miles east of the Suez in exchange for partial Egyptian demilitarization of the east bank of the Suez and the reestablishment of the UN peacekeepers in the buffer zone between them. The Suez was reopened for the first time in 8 years, and Israeli goods (but not ships) were allowed to flow through it. While limited, these agreements laid the framework for a certain amount of goodwill. Egypt was forced to accept that it would not be able to achieve its aims militarily and Israel had realized that not all wars would be won so easily as the Six Day war. Syria proved far trickier, and continued to wage low level conflict with Israel, hoping to push them out via a slow trickle of casualties. But under intense American pressure an agreement was eventually signed and Israel retreated from the recently captured Syrian territory and a thin strip of the Golan. In exchange a UN peacekeeping force was established to guard the border, and a demilitarized zone was created.[18] In 1975 Israel and Egypt took a step closer to peace, with Israel withdrawing further east (expanding the UN buffer zone) in exchange for an agreement not to use threats of force against each other and a commitment to work towards a lasting peace. And there things stood for a while, as changes of administration in both the USA and Israel delayed things. In Israel the new PM, Menachem Begin, was a devout right winger[19]. He wanted peace with Egypt, but was not willing to give up any territory to do so. Negotiations dragged out without any breakthroughs. In an effort to kick things forward, the president of Egypt, Anwar Sadat, flew to Jerusalem in 1977, implicitly recognising Israel and indicating his sincere desire for peace. He was warmly received, but still Begin was intransigent. Sadat wanted a complete withdrawal from the Sinai and the establishment of a Palestinian state in Gaza and the West Bank. Begin wasn't prepared to do either. Eventually Carter invited both Begin and Sadat to a retreat at Camp David to try and hammer things out. Negotiations were tough, but ultimately the mediators realized that Sadat was far more determined to restore lost Egyptian territory than he was to establish a Palestinian state, and Begin cared far more about settlements in Gaza and the West Bank than he did about those in the Sinai. Israel agreed to completely retreat from the Sinai in exchange for full peace and diplomatic relations with Egypt, and also committed to negotiations with Palestinians with a view to establishing an independent state. Both sides kept the letter of the agreement but Israel deliberately tanked the Palestinian negotiations and the peace with Egypt has always been relatively cold with very little economic integration. Israel and Lebanon (1978-2001) The Invasion of Lebanon Lebanon's population is split roughly evenly between Christians and Muslims. Most Christians viewed Israel positively as an anti-Muslim bastion in the region, but were hesitant to officially support it as it risked their status with their Muslim neighbours. The Muslim population were averse to starting any conflict with Israel as it risked upsetting the delicate balance between them and the Christians. Lebanon had mostly stayed out of the 1948 war. In 1970 the PLO attempted an insurrection against the Jordanian Monarchy and were ultimately driven out of the country. The leadership fled to Lebanon where they recruited heavily in the Palestinian refugee camps, which they soon effectively ruled. From there they carried out an insurgency against Israel, infiltrating the border and carrying out numerous terrorist attacks. The Lebanese civil war broke out in 1975, and Israel provided military aid and training to the Christians, hoping to counter the PLO. In response to continued incursions, in 1978 Israel invaded Lebanon up to the Litani river, driving out the PLO and propping up the SLA[20] in their place. But Syria had entered the civil war, and the Christians were steadily being pushed back. Begin harbored secret dreams of saving the Lebanese Christians where the Christian West would not, and plans were drawn up to invade Lebanon, and create a Christian dominated buffer state to the north of Israel. But Israel also had more direct concerns - the PLO was still able to shell Israel with rockets and artillery, and carry out terrorist attacks. Less ambitious plans were also drawn up to push the PLO slightly further back. Realizing where things were headed the PLO avoided provoking Israel for a while, but in Israel June 1978 found a casus belli[21] and geared for war. Ariel Sharon (the defense minister) deliberately misled the cabinet into thinking only the smaller, more limited invasion would go ahead, while secretly ordering troops to carry out the larger invasion. By the time the cabinet realized what had happened they were represented by a fait accompli. Israel had pushed back the Syrians and reached the Beirut-Damascus highway, cutting the PLO off from Syrian support. There they were forced to stop due to a US imposed ceasefire with Syria. Instead their attention turned to the PLO held West Beirut. In an extremely bloody 9 week siege they eventually convinced the PLO to abandon Beirut[22]. Thousands of Lebanese civilians were killed. The Slow Retreat While Israel's military accomplishments were significant, their efforts at nation building fell flat. They attempted to set up the leader of the Christian Phalangist organization as the President of Lebanon, hoping to create a Christian puppet state. But when he was assassinated[23] his replacement was far less friendly to Israel. The Christians were thankful to Israel for saving them, but now wanted Israel to withdraw and respect their sovereignty. Israel first wanted a formal peace agreement between Lebanon and Israel first, and a full Syrian withdrawal. Neither were on the cards. Both Israel and Syria regularly violated the ceasefire, but Israel had far less stomach for a war of attrition, especially for an occupation which was not actually at all necessary from a military standpoint. They slowly retreated, and as they did so the Christians fell to Syrian funded groups. The Phalangists blamed Israel and the Israel-Phalangist alliance fell apart. As the occupation dragged on Shiites in southern Lebanon grew increasingly disillusioned with it. Hizballah led a guerilla campaign against the occupying forces which killed many IDF soldiers. The IDF's heavy handed response only served to drive more recruits into their arms. By June 1985 Israel retreated back to the security zone in southern Lebanon - the same one  it had started from in June 1982. But this time Hizballah continued the attacks even there. Finally in 2000 Israel fully retreated from Lebanon. Hezbollah took up positions opposite the Israeli border, and after a long and bloody war Israel had only succeeded in replacing one enemy with another, and alienated most of Lebanon's population in the process. The Occupation (1967-2001) Israeli Governance In 1967 Israel suddenly found itself in occupation of the Sinai, Gaza, the Golan heights and the West Bank. The Sinai had at most a few 10s of thousands of natives, mostly Bedouin. The Golan heights were ethnically cleansed apart from a few thousand Druze who were offered Israeli citizenship or residency. But Gaza and the West Bank were both densely populated with over a million Palestinians all told. Israel was divided as to what to do with the newly occupied territories. Some hoped to return all or part in exchange for peace. Others viewed this land as historically Jewish and hoped to incorporate it into Israel. But peace never came, and so the occupation was allowed to drag on without anyone actually making a concrete decision on this critical issue. Israel immediately imposed a military government. While it did a lot of ordinary government style stuff (and generally took a pragmatic approach to governance) it also had various more nefarious aims. The defense minister indicated that if Palestinians chose to leave the occupied territories that would be a good thing. This was partially achieved by limiting Palestinian economic development: Israel prevented palestinian industrial development, forcing Palestinians to buy Israeli goods. They advised Palestinian farmers on how to grow staple Israeli crops, freeing Israeli farmers to grow more lucrative crops for export. Israel took over all state land, and for the most part didn’t free it for Palestinian usage. On the other hand Israel did not ignore Palestinian needs entirely - they built hospitals, universities and other services. Palestinians worked in Israel for much higher wages than they could earn domestically[24]. During the early part of the occupation Palestinian welfare did increase, partly thanks to Israel. Settlements At least to start with, there was never any government decision to build settlements in the occupied territories. Instead a messianic religious movement developed around the idea that the incredible victories of the 6 day war were part of the end of the exile and the return of the Jewish people to their homeland. They built settlements in the west bank, with or without IDF protection, and then placed political pressure on the authorities to avoid them being evicted. With a population that was mostly ambivalent towards the settlements, but a vocal movement that heavily supported them, it was always easier to let them be than to try and fight against them - especially as many MKs were not unsympathetic towards the movement. And so, slowly at first, but accelerating after Begin came to power in 1977 and actively supported the movement, more and more Jews moved into Gaza, the West bank, the Sinai, and the Golan heights. As of today more than 500,000 Israelis live in the West Bank. While the largest settlements stick close to Israel proper, dozens more dot the entire West Bank. This leads to de facto annexation of areas bordering Israel, and means that a Palestinian state in the remainder would be broken up by numerous enclaves. Slowly but surely, such a state is becoming unviable. The Rebirth of Palestinian Nationalism The Palestinian Nationalist movement had mostly died post 1948. While the PLA was founded in 1964 it was a fringe movement mostly occupied by idealists of some stripe or other. The average palestinian just got on with his life wherever he had ended up. The occupation changed all that - Palestinians in the West Bank and Gaza had a good reason to fight for independence now that their rulers were not the Arab Jordanians or Egyptians, but the Jewish Israelis. Civil disobedience swept the occupied territories in 1968, but this was brutally suppressed. Instead Palestinian nationalism was forced to find another outlet - and that outlet was inevitably militant in nature, with both secular and Islamist Palestinian terror groups becoming increasingly popular as the occupation wore on. And no matter how justified their cause was, these organizations were indeed terror groups, with numerous attacks targeting civilians in Israel and abroad, including numerous bombings and plane hijackings (both before and after 1967). Israel responded with all the apparatus a security state eventually develops: censorship, spies, monitoring, blackmail, arrests in the night, torture, banning political groups, imprisonment, exile, checkpoints etc. These were effective in crushing organized resistance within the occupied territories themselves, but also pushed more and more Palestinians into the arms of the terror groups. And this was of course of very limited impact on terror activity coordinated from outside Israel. This terror activity was completely unsuccessful in changing Israeli policy - until the first intifada there was no point where the government considered withdrawing from the occupied territories in response. If anything it highlighted the need to maintain the occupation in order to clamp down on the terror groups in the occupied territories. The First Intifada In 1987, partly driven by Hezbollah's success in driving Israel out of south Lebanon, partly by economic hardships and unemployment in the occupied territories, the Palestinian street erupted in spontaneous rioting. The rioters threw stones and molotov cocktails at IDF troops, making their operations in Palestinian towns dangerous and difficult. Israel was at a loss as to how to respond to huge crowds of civilians setting up roadblocks and throwing stones. Live fire led to high casualties, international condemnation, and further stoked tension. But without responding it was impossible for the IDF to do its job maintaining the apparatus of occupation. Both were tried, and found wanting. Over the months and years the IDF adapted - using less lethal rubber and plastic ammunition, equipping the soldiers with riot shields and sticks, and training them in crowd control. Huge numbers of “troublemakers” were arrested, and villages were sealed off from the outside world and their electricity cut off for weeks on end. The intifada took its toll on both Israelis and Palestinians. Palestinians were brutalized by Israeli forces, and their economy tanked. The Palestinian question rose to the top of both the international and the Israeli agenda. Among ordinary Israelis there was a desire to leave the territories, while right wingers called for mass expulsion. Ultimately it was the peace process that brought the intifada to a close. Arafat (the leader of the PLO) officially called for an end to violence when Oslo (see below) was signed, but it had already waned when the first peace talks kicked off in 1991. Oslo Facing increasing US pressure, Israel was forced to the negotiating table in Madrid in 1991. The peace talks dragged out for years, but ultimately Israel was playing for time, and the process was futile. Rabin was elected prime minister of Israel on a pro peace agenda in 1992, but failed to translate that into concrete concessions at the negotiating table. But meanwhile unlikely things were happening where no one was looking. An Israeli academic and a Labour party backbencher decided to clandestinely meet with a PLO official. Without any real authority they negotiated anyway, and through months of work were able to hammer together enough areas of agreement to form a basis for further talks. The negotiators were switched out for actual ministers, and the process became official. What emerged from these Oslo accords was not so much a full disengagement agreement, but an agreement to pursue further steps and talks towards ending the occupation. Israel recognized the PLO, the PLO recognized Israel, the Palestinian Authority (PA) was established and Israel agreed to slowly devolve civil and military power to the PA in more and more areas in return for continued peace. Both sides more or less kept to the letter of the agreement over the next two to three years, but with plenty of badwill. Israel continued building settlements in Israeli controlled areas. The PLA failed to clamp down with sufficient force on Palestinian terror, and when there were terror attacks Israel would delay withdrawing from a town. This reduced the legitimacy of the PLA, forcing them to partially support Hamas in the hopes of borrowing some of its popularity, causing Israel to delay the withdrawal further. Driving this process was Hamas: They realized that a settlement based on the 1948 borders would end all hope for a unified Palestinian state throughout mandatory Palestine and so sought to kill the peace process. To do so they carried out numerous terror attacks in Israel proper, killing hundreds of civilians and soldiers. This successfully turned Israeli popular opinion against Oslo: they had given the Palestinians the West Bank, and in return they were threatened in their homes! Clearly giving up land only encouraged the terrorists! Rabin was assassinated in 1996 by an extremist settler. His replacement, Peres, didn’t have the charisma Rabin had, and facing mounting public opinion against the peace process he lost the elections that year to Netanyahu. Netanyahu’s first stint as PM was a shambles. He attempted to slow down Oslo as much as possible, but was forced into further withdrawals by the USA. The right wing no longer trusted him, and the left wing was unprepared to work with him. He was forced to call early elections, which Ehud Barak won. Status of the West Bank, 2005. In Area A the PA has both civil and military authority. In Area B it only has Civil authority. The red line marks a border wall dividing Israeli occupied areas from Palestinian ones, built in response to the Second Intifada (see below). The second Intifada Ehud Barak won the premiership on a platform of final peace with the Palestinians and Syria, and ending the occupation of Lebanon. But his coalition was too divided for him to make good on his promises. The peace talks with Syria went nowhere, and he was forced to withdraw from Lebanon unilaterally and in far more of a hurry than originally planned. He tried to continue implementing Oslo, but was not able to do so at the pace the Palestinians desired, and settlement building continued apace, as did Palestinian terrorism. Clinton, hoping to knock out a diplomatic triumph in his final year as president, called Arafat and Barak to camp David hoping to hammer out a full agreement for a two state solution. Alas the two sides were too far apart, and the talks ended in failure[25]. Shortly after, frustrated by a stalled peace process and triggered by Ariel Sharon’s visit to the temple mount, spontaneous rioting broke out among the Palestinians which quickly turned into a full-blown uprising. But unlike the previous intifada the Palestinians regularly used guns and light bombs against Israeli troops and civilians. Israel responded in kind, and the situation became far more lethal for both sides. The Israeli Arab street, frustrated by years of discrimination, also joined in with the rioting (but less so with the shooting). Arafat denied involvement with the violence, but continuously stirred it up, and definitely directed it to some extent. Barak was forced to resign and the hardliner Ariel Sharon, who had taken over from Netanyahu as head of Likud, won the elections. Morris’s conclusions Morris ends the book with a short essay in which he lays out his thoughts on the development of Israeli and Palestinian ideology, their impacts on war and peace, and the impacts that war and peace have had on them. It stands alone, and is well worth the read if you have half an hour, even if you don’t read the whole book. What Came Next? And that is where the book ends, on a cliffhanger in the middle of the intifada. I’ve read no history books that go much beyond that point, so I’ll just list a brief summary of the major events that occurred since then. The second intifada continued till 2004. It killed over 1000 Israelis and 3000 Palestinians. Israel unilaterally withdrew from Gaza in 2005, which was taken over by Hamas. Since then there have been continuous flare ups between Hamas and Israel, occasionally leading to limited invasions, and finally culminating in the current full blown war triggered by Hamas invading Israel, killing over a thousand people and kidnapping 200. Israel’s response so far has killed tens of thousands of Palestinians, a majority of whom are civilians. Hezbollah kidnapped two Israeli soldiers in 2006, kicking off the second Israel-Lebanon war, which ended in stalemate. Since then both sides have preferred to avoid things blowing out of control, and have settled into a routine - Israel occasionally bombs Hezbollah targets, and Hezbollah occasionally snipes at Israel, sneaks through the border, or fires rockets at Israeli army bases or towns. The current Gaza war has triggered the biggest flare up between Israel and Hezbollah in years, but for now both sides seem not to want it to turn into a full blown war. Israel signed further peace agreements with a number of Arab countries (UAE, Bahrain, Morocco) under the Trump presidency, and is hoping to continue that process, currently setting its sights on Saudi Arabia. Meanwhile it’s been engaged in a cold war with Iran for the last 40 years, aiming to weaken Iran’s proxy forces in the area and prevent Iran from developing nuclear weapons. Since the start of the second intifada all progress on Oslo has stopped, and settlement construction has continued apace. The PA has been hollowed out and now barely controls the West Bank. A peace settlement with the Palestinians seems further than ever before. My Thoughts What conclusions you take from a book will depend on why you read it. If you were just interested in which side to blame then you could spend your entire life debating the ins and outs of every event Morris describes. If you’re purely a history buff Righteous Victims gives you plenty of material to work with. Personally, I was hoping to understand the current situation better and take forward some concrete ideas for how we, or at least I, could act to help make the future better rather than worse. At this point we’re moving from semi objective history to my completely subjective ideas on the hottest topic in international politics. Reader beware! Israel If you go into this book believing standard Hasbara talking points about how the IDF is the most moral army on earth, Israel only wants peace, the Palestinians only want war, and Israel has simply no choice in what it does, you’re likely to find it makes for very uncomfortable reading. On the other hand I don’t think it would be wise to update too far in the other direction. It is true that Israel’s founding involved large scale ethnic cleansing - but unfortunately it is far too common for the creation of states to involve forced migrations and massacres, in both modern and ancient times[26]. Neither was the desire for ethnic cleansing one sided - the Arabs made no bones of their intentions to push the Jews into the sea. It is also true that Israel has often been more aggressive and warmongering than it needs to be, but alas the same could be said for most countries. Let’s take Israel’s most pointless and least justified war, the Lebanon war. Has the USA ever invaded a foreign country because it provided a safe haven for terrorist attacks against them? Yes - Afghanistan. Has it ever invaded a country for what turns out to be spurious reasons while lying to its populace about the necessity? Yes - Iraq in 2003. Has it ever then abandoned its allies once the toll of the war becomes too high, leaving them to be massacred by a foe only emboldened by the intervention? Yes - Afghanistan again. Has it ever jumped into a conflict without considering its long term aims and whether intervening might actually just make everyone worse off? Oh boy Yes! This is not to say that Israel wasn’t wrong for starting the Lebanon war, or that America is excessively evil. It’s that these things are hard to get right, most countries get wrong, and the consequences of being too peaceful are usually worse than for being too belligerent[27]. Focussing on almost any country’s military history would make them look bad. For the most part Israel is a normal country trying to protect its interests in a difficult political situation. In doing so it has caused a huge amount of misery, much of which is not its fault, and much of which is. The solution to that is not to blame Zionism as the source of all evil in the Middle East, but to try and fix the situation. Where this viewpoint breaks down is the occupation. Israel had very limited justification for occupying the West Bank in 1967, and missed a number of opportunities to return it to Jordan or establish a Palestinian state. In 1967 Palestinian terrorism was limited - the occupation was the main driver of the mainstreaming of terrorism among the Palestinians. There was simply no need for Israel to occupy the West Bank. The lack of any solutions here was driven by the growth of an expansionist nationalist movement which saw the entirety of mandate Palestine as rightly Israeli. Although not a huge percentage of the country[28], the general ambivalence of the Israeli populace towards the Palestinians allowed this minority to drive Israel’s policies in the territories. The nationalist movement inherently requires either apartheid, genocide, or mass ethnic cleansing. Since 5 million Palestinians do live in the occupied territories, and they are unprepared to grant them either their own state or citizenship in a Jewish one, the only possible solutions are a permanent lack of citizenship in a greater Israel, the destruction of the Palestinian people, or their mass expulsion. Palestine Much the same could be said on the Palestinian side of things. The Palestinians have very legitimate complaints, and have without doubt been maltreated by Israel. They have also done plenty of maltreatment in return and can reasonably be blamed for making their own situation worse on a number of occasions. But that does not make them the sole source of evil either. It means they are in a shitty situation where the natural thing for people to do is to further alienate the exact group of people who need to be prepared to make peace with them. The solution is to fix the situation. Again this viewpoint breaks down once you consider Hamas. They have had numerous opportunities to make life better for themselves and Palestinians in general, but have instead decided to risk it all in a delusional attempt to establish a Palestinian state from the river to the sea. Whether by sabotaging Oslo, turning Gaza into a garrison of terror, or attacking Israel on October 7th, they have brought only pain and misery to the region because their ideology rests on the impossible assumption that they will somehow manage to utterly defeat Israel. Going Forward While the occupation might have started out as mostly unnecessary, Israelis have a reasonable case after October 7th that establishing a Palestinian state there now would only provide a further base for Hamas to attack Israel. They might well be wrong, but so long as they have that attitude they will not unilaterally agree to a two state solution. Only external pressure can force them to do so. However at the moment Israel doesn’t actually have a partner to work with. The PA is widely hated and barely holds power in the West Bank. Hamas is not a viable negotiation partner for Israel unless it radically changes its attitudes. The USA does have a reasonable amount of leverage over Israel - they are for all intents and purposes the only possible supplier of the advanced fighter jets Israel needs to maintain its military edge. However it has to be careful not to force Israel into a corner where it feels it has no good options - doing so could backfire dramatically: if Israel believed it risked losing its dominance long term it might seek to preemptively start a war while it still has the advantage. Instead the USA has to make sure it addresses Israel’s security concerns, while working to change the situation on the ground to one where peace might be possible. Under those constraints, here are some concrete steps the USA can take in that direction: End settlement construction. Full stop. There is literally no security need for them at all, most Israelis do not support them, and they are hugely destructive to any future Palestinian state. If given the choice between f-35s or expanding the settlements the Israeli government will be forced to pick the f-35s. Even better, the pro settlement parties will be forced out of government as this is a red line for them.Prop up the PA and push out Hamas. This is a much more difficult thing to do, and I’ll leave it to the experts in the relevant fields to suggest how. But definitely the USA could put serious pressure on Israel to stop doing the things they are actively doing to delegitimize the PA, such as holding back taxes collected by Israel on behalf of the PA. Beyond that, the ability for the rest of the world to influence events is limited. History shows that while the foreign powers can put their foot down and force Israel to stop doing something they don’t like, they can’t very easily force it to do what they want. Even when they forced both sides to the negotiating table, often one side or the other was just trying to drag things out. Only once both sides are genuinely ready for peace is there a chance that external pressure can force them to make the concessions necessary to realize a solution. Should I Read This Book? Probably not? It’s a long, dry, expensivish book on history! Why would you read it? On the other hand, the fact that you’ve read this review so far implies that you’re actually interested in the Zionist-Arab conflict. And given that, please don’t trust me! For a start I’ve compressed a 700 page book into a longish book review. There’s tons of super important info I’m missing! Secondly, I’ve definitely made mistakes along the way. And thirdly, like everyone else, I have an agenda. Make sure to follow up with the primary source! So buy or borrow the book, and plough your way through it. And if you do, I promise you there’s a ton of exciting stuff I’ve missed out. While it’s not game of thrones, there’s still plenty of snippets that are sure to pique your interest! Conclusion That’s actually it folks, I just feel that a book review ought to end in a conclusion. Thanks for reading! ^ £22.52, which isn't that much, but more than I usually spend on books. ^ Subjective! ^ Which are often misleading without understanding the details, how they were defined/collected, potential biases, etc. ^ For example, Morris dismisses Ilan Pappe as the world's sloppiest historian. He recommends Avi Shlaim as a historian who he disagrees with, but is factually accurate. ^ That prize goes to Sacred Causes which basically listed off reams of events without ever tying them together or putting it into a wider narrative. ^ Interestingly, the Balfour declaration was driven partly because the British, based on anti-semitic tropes, assumed the Jews were far more powerful than they were and could decide the fate of the war. ^ And almost provoking a war with the British who had a strong interest in the Sinai, given it bordered on the Suez. ^ For more on this period, read Israel's Border Wars, 1949–1956. ^ Partly because the aim was to be seen to be doing something, to reassure the border settlers that the government was looking out for their welfare, rather than to actually prevent the terrorism. ^ This is a contentious point, as there were contradictory things said by Nasser and various Egyptian higher ups. There definitely was some talk of attacking the Negev desert to meet up with Jordan, but it's difficult to judge how fanciful it was - certainly the Egyptian positions did not reflect any such plans. ^ The Golan heights is a plateau dominating the sea of Galilee and the Hula valley, and so was a tactically difficult border to defend against as the Syrians had a strong height advantage, and could easily see and shell Israeli positions below. Israel had long coveted the heights to solve this issue. ^ The West Bank is much higher than the plains to the west, and overlooks Israel's narrow waist, just 15km wide at its thinnest. It would be easy for a well armed opponent to cut Israel in two from the West Bank, and all of Israel's most important cities are within easy artillery range of the West Bank. Meanwhile the border with Jordan reached by occupying the West Bank is along a major valley and so easy to defend. ^ The biblical territory of the Israelites is mostly in the West Bank, whereas the territory of modern day Israel was mostly Pagan (with lots of exceptions both spatially and temporally).  The Temple Mount, the most (arguably - only) important site in Judaism is located in East Jerusalem, as is the Cave of the Patriarchs and Rachel's Tomb. ^ The residents were offered citizenship, though most refused and received permanent residency instead. Since then very few have received citizenship, and there's some debate to what extent that's their decision vs Israel placing bureaucratic hurdles in their path. Anyway, none of this is discussed in Righteous Victims so I shouldn't really be telling you about it. ^ Surface to Air missiles. ^ Israel had suffered some 360 deaths by then, almost all soldiers. Egyptian losses are impossible to know, but Morris gives a ballpark estimate of 10,000 military + civilians killed. ^ The day of atonement. Almost all Jewish Israelis stay at home or go to the synagogue. The roads are so empty that children call it the bicycle festival, as they can safely cycle on even the busiest highways. ^ This force remains there to this day, and the border has been mostly quiet, excepting for some flare ups when Syria was fighting Islamic militants in the demilitarized zone during the ongoing civil war. ^ He was the ex head of the Irgun terror group. ^ South Lebanese Army, a Christian/Shiite/Druze force. ^ A different Palestinian terrorist organization shot the Israeli ambassador in London, likely to provoke exactly this war. ^ They were protected by an international force to safeguard their retreat. ^ The IDF knowingly let the Phalangists take revenge for his assassination, massacring hundreds of Palestinians. ^ They generally earned less than Israelis could for the same work, and didn’t receive the standard protections and benefits Israeli workers did. ^ Clinton blamed the failure of the talks on Arafat, who he says failed to make any significant concessions. Alas the details of the talks are private, and Israeli and Palestinian claims as to what Israel offered differ wildly. Either way the talks were kind of moot, since by that stage Barak’s government was dead in the water, and basically had no chance of pushing through any sort of contentious peace deal. ^ Israel was founded in May 1948. India and Pakistan split in August 1947, which led to the massacres of hundreds of thousands of people, and the mass migration of millions of Muslims from India to Pakistan and millions of Hindus from Pakistan to India. ^ Ask WW2 Britain, which could have easily defeated Nazi Germany if it hadn’t handed over the Sudetenland without a fight. ^ They currently control about 10 percent of the Knesset, but this is likely an underestimate.
2024-06-24
https://www.lesswrong.com/posts/nhwMMbi4pmZfErNe2/paying-russians-to-not-invade-ukraine
nhwMMbi4pmZfErNe2
Paying Russians to not invade Ukraine
djColliderBias
I know nothing about war except that horseback archers were OP for a long time. But from my point of view, which is blatantly uneducated when it comes to war, being a Russian soldier seems like a miserable experience. It therefore makes me wonder why 300,000 Russian soldiers are willing to risk it all in Ukraine.[1] Why don’t they desert? How does the Russian regime get so many people to fight a war when my home government is struggling to convince me to sort my trash? If the Russian regime can convince so many people to have a shit time in Ukraine, I’d argue that the West could convince these people to go live an easier life. The idea is so simple that by now I mostly wonder if my inability to figure out why NATO isn’t already bribing Russian soldiers to desert is a sign that I’m officially off the proverbial rails. Suppose I’m correct in assuming that even Russian soldiers are somewhat rational and self-concerned actors. In that case, the Russian war apparatus is putting on a masterclass in how to get thousands of people to do something that I assume they don’t particularly feel like doing. Based on some superficial research, I get the impression that the motivational methods of the Russian regime partly are offering a giant carrot to people who have very little. I base this conviction entirely on salary statistics and this NYT opinion piece[2] that states that the Russian armed forces are recruited from the bottom rung of the societal ladder. The price of making the Russian army vanish In 2022, prospective recruits were being offered 2,700 USD[3] a month – which I assume is an outrageously sweet deal in a country where the median wage is less than 500 USD a month. Where I live, the median salary is roughly 3,000. I don’t think I would invade Ukraine for 15,000 a month but fuck if that isn’t a fat pile of cash. I’ve seen other articles stating much lower numbers, something like 1000, but all make the same point. It is a ton of money for those who sign up compared to what they otherwise could expect. One even states that it is more than a dozen times higher than wage than the average regional wage of where the recruits typically come from. I don’t think that it matters to my point whether 1000  or 3000  is a more accurate description of what a Russian soldier earns. I think the West could do better either way. I think we could provide a much bigger carrot. Considering how beneficial it would be if the Russian army just vanished, I’d argue that it is worth every penny. My not-further-elaborated plan would be something along the lines of granting Russian soldiers asylum in EU countries. If you come from the bottom of Russian society, I’d think that access to European education and healthcare is an attractive proposition, especially if we throw in a monthly bribe of 2,700 USD for deserting. According to my napkin math, paying 1.1 million active Russian soldiers and their 1.5 million colleagues in the reserve[4] 2,700 USD a month to not invade Ukraine would come with a price tag of roughly 85-ish billion USD a year. If, like me, you have no idea how expensive 85 billion USD a year is, that is roughly the amount of financial aid the EU has provided Ukraine so far. Mostly in budget support for the Ukrainian government. US aid comes in at the same magnitude, but much of it is military support.[5] To me, it sounds like spending money at this order of magnitude isn’t out of the question. How exactly do you pay 2.6 million Russians to desert? Exactly how you pay 2.6 million Russians, I have yet to figure out. I’m guessing helping soldiers and their dependents migrate is essential. I have no idea whether it is feasible at scale. Apparently, a Ukrainian intelligence agency has had some luck helping Russian soldiers desert.[6] I imagine the political will in the EU is going to be a serious bottleneck if NATO ever attempts to implement this plan. More than a million Syrians now live in Europe, mainly because of mass migration due to the Syrian Civil War. Mostly at the grace of the German and Swedish governments.[7] While we basically lost a member country due to the refugee crisis, the union still exists. Except for political fallout, I don’t think the EU would suffer unreasonably from housing another 2.1 million Russians and their families. I wonder how far we could take this. What could go wrong? Of everything that I have written, I’m most confident that the Russian regime would retaliate somehow. But I don’t think they pose any threat. They could respond by making life nicer for Russian soldiers, which I wouldn’t mind. They could track down deserters in foreign countries. While it doesn’t sit well with my morals, a more cynical strategist probably wouldn’t mind if Russian intelligence officers spent their time planning the assassinations of Russian soldiers in foreign countries instead of doing whatever intelligence officers usually do at war. I expect that there are reasons why paying the Russian army to get up and leave isn’t feasible. It seems suspiciously simple for no one to have suggested it yet. Perhaps the Russian soldiers are motivated by other reasons and giving them money and Western comforts isn’t that attractive to them. Maybe I’m underestimating the importance of patriotism. Or maybe you are begging to be trojan-horsed if you invite 2.6 million military-trained Russians into your borders. But I also wonder how far we could take this. Could we drain Russia entirely of people by offering them benefits for emigrating? It would probably take generations, but I guess the only serious stopping block is our willingness to pay and integrate millions of Russians. ^ https://en.wikipedia.org/wiki/Russian_invasion_of_Ukraine#cite_note-ISW_May_30-12 ^ https://www.nytimes.com/2023/03/31/opinion/russia-ukraine-war-soldiers.html ^ https://www.aljazeera.com/news/2022/9/18/russia-turns-to-trucks-and-big-wages-to-woo-volunteer-soldiers ^ https://www.tandfonline.com/doi/full/10.1080/04597222.2024.2298592 ^ https://www.cfr.org/article/how-much-us-aid-going-ukraine#chapter-title-0-1 ^ https://cepa.org/article/desert-or-die-helping-russian-soldiers-choose-life/ ^ https://www.unhcr.org/cy/2021/03/18/syria-refugee-crisis-globally-in-europe-and-in-cyprus-meet-some-syrian-refugees-in-cyprus/
2024-06-24
https://www.lesswrong.com/posts/nCdxosAzGw9PKxRcD/the-living-planet-index-a-case-study-in-statistical-pitfalls
nCdxosAzGw9PKxRcD
The Living Planet Index: A Case Study in Statistical Pitfalls
Jan_Kulveit
In case you sometimes read an alarming headline about biodiversity loss, like Global wildlife populations have declined by 69% since 1970 or Researchers Report a Staggering Decline in Wildlife, there is a decent chance it is based on a metric called The Living Planet Index (LPI). My colleagues at CTS looked into how is LPI calculated, and show that current version of LPI is probably best understood as a cautionary example of effects of mathematical biases, but poor measure of biodiversity. [1] I do expect this result will get way less media coverage than the alarmist headlines; also it is methodologically interesting, so posting a short summary. How is LPI calculated? The paper has a helpful diagram: So, the whole process is Addition of a constant of 1% of the population mean (the mean from all non-zero values) to all values of the time series if the time series contains zero in any year. Estimation of the new population values by two methods (also the way to estimate missing values, i.e. values for years without population records):GAM method, which is basically curve-fitting in log() space. It is used if the length of the time series is equal to or longer than 6 records and only if the GAM fits well. Chain method is used if the length of the time series is less than 6 records or if the GAM does not fit well (or if all population values are the same). It is a linear interpolation for missing values in the population series in logspace Ni=Np(NsNp)i−^ps−^p where i is the year when the value is interpolated, p previous year and s subsequent year.Log transformation (base 10) of the population values.Calculating the difference between the log population values between every two consecutive years = log ( ratio of population values ) = population growth = lambdaCalculating the arithmetic mean of lambdas (the logarithm of the geometric mean) of all populations of one species within one biogeographical realm (for an individual year). There are 5 (for the terrestrial and freshwater ecosystem) or 6 (for the marine ecosystem) biogeographical realms distinguished (see below).... more steps of averaging by arithmetic mean by realms, across all taxa within one realm, all realms (for an individual year) and ecosystem-specific lambdas across all ecosystems.The calculation of the LPI as 𝐼=𝐼𝑝×10𝜆, where Ip is the index of the previous year and the index of the starting year 1970 was set to 1. There isn't anything obviously fishy about this, and if you try to invent an index tracking population growth or decline from a scratch, you would probably make many similar choices: geometric averages are natural way to track population growth, some smoothing of noisy population estimates is reasonable, and weighted averaging over different ecosystems also makes sense, because of uneven sampling across taxa and regions. Why this does not work Devil is in the detail. 1. Geometric averaging of data with arithmetic observation noise => bias Imagine you're trying to track a population of rabbits in your backyard, but as they are moving, you have some trouble counting them exactly. You count 5 one day, and 3 the next, and 1 the next one. Have the rabbits really decreased or increased? The problem, in particular with small populations, is, you like have an arithmetic noise in your measurements. Unfortunately the symmetrical counting noise on the arithmetic scale gets transformed to negative growth after the log transform. For the given case LPI in the depicted case decreases from 1 to ~0.75. Note that it does not matter whether the two records occur in subsequent years or they are more distant in time. 2. Hierarchical averaging  => extreme sensitivity This is somewhat longer to explain, so read in the original paper if interested. First problem is the index is sensitive to initial population declines early on, and can hardly recover. And this is amplified by the hierarchical weighted averaging procedure: imagine if you have a sequence of increasingly general buckets, where all the time there is just a single species in the increasingly general categories. If population of such species, it gets unreasonably amplified. The paper gives an example of the herptiles in the Palearctic region, represented by only one (declining) population of viper Vipera berus for the period 1974-1977. Hierarchical averaging across taxa and biogeographical regions leads to the situation in which these four records of the viper population cause an 89.5% greater decrease (the index changes from the original value of 0.826 to 1.721 after removing these four records) in the final state of the LPI for the Palearctic realm and a 3.3% greater decrease in the LPI for the whole terrestrial system in comparison to the LPI without these four records The original LPI for the whole Palearctic realm is blue and the LPI calculated without the 4 records (1974-1977) of one population of viper Vipera berus is yellow. 3. Asymmetric zeroes=> bias With data series about populations count, do you expect zeroes more at the beginning or at the ends? Clearly at the end: people usually start studying the species and recording the time series when the species is present somewhere. Imagine what would a time-series with leading 15 zeroes imply: someone diligently recording, year after year, "we haven't seen this animal". On year 16, it is finally observed for the first time. The symmetrical case, when the species disappears, is more likely. The original global LPI is blue and the LPI calculated without zeros in the population time series is yellow Discussion The authors conclude Due to the sensitivity of the LPI to subjective decisions and to specific problems with the LPI calculation, the LPI does not seem to accurately represent biodiversity trends. An indicator of the global state of nature should not be sensitive to the fact that 50 years ago one population of viper did not thrive well, and should not be affected by the particular way population sizes were measured and how population absence was treated in the end or the beginning of the time series. Similarly, a universal index of population change should not be sensitive to particular grouping to taxa and biogeographical realms if its aim is to provide a rigorous, repeatable indicator with a straightforward interpretation. These issues deserve particular attention if the LPI is calculated for individual regions or countries, in which the effect of these biases may be even stronger than in the case of the global data. and again note that LPI not being an accurate representation of biodiversity does not imply the situation isn't bad. ^ Note that this does not imply much about actual biodiversity loss.
2024-06-24