url
stringlengths
52
124
post_id
stringlengths
17
17
title
stringlengths
2
248
author
stringlengths
2
49
content
stringlengths
22
295k
date
stringclasses
376 values
https://www.lesswrong.com/posts/EPefYWjuHNcNH4C7E/attribution-based-parameter-decomposition
EPefYWjuHNcNH4C7E
Attribution-based parameter decomposition
Lblack
This is a linkpost for Apollo Research's new interpretability paper: "Interpretability in Parameter Space: Minimizing Mechanistic Description Length with Attribution-based Parameter Decomposition". We introduce a new method for directly decomposing neural network parameters into mechanistic components. Motivation At Apollo, we've spent a lot of time thinking about how the computations of neural networks might be structured, and how those computations might be embedded in networks' parameters. Our goal is to come up with an effective, general method to decompose the algorithms learned by neural networks into parts that we can analyse and understand individually. For various reasons, we've come to think that decomposing network activations layer by layer into features and connecting those features up into circuits (which we have started calling 'mechanisms'[1]) may not be the way to go. Instead, we think it might be better to directly decompose a network's parameters into components by parametrising each mechanism individually. This way, we can directly optimise for simple mechanisms that collectively explain the network's behaviour over the training data in a compressed manner. We can also potentially deal with a lot of the issues that current decomposition methods struggle to deal with, such as feature splitting, multi-dimensional features, and cross-layer representations. This work is our first attempt at creating a decomposition method that operates in parameter space. We tried out the method on some toy models that tested its ability to handle superposition and cross-layer representations. It mostly worked the way we hoped it would, though it's currently quite sensitive to hyperparameters and the results have some imperfections. But we have ideas for fixing these issues, which we're excited to try. What we do Our method decomposes the network parameter vector into a sum of parameter component vectors, such that the average description length of the mechanisms used on any one data point across the training dataset is minimised. A 'parameter component' here is a vector in parameter space that is supposed to correspond to a specific mechanism of the network. For example, if the network has stored the fact that `The sky is blue' in its parameters, the weights that make up the query-key lookup for this fact could be one such component. These components of the learned network algorithm do not need to correspond to components of the network architecture, such as individual neurons, layers, or attention heads. For example, the mechanism for `The sky is blue' could be spread across many neurons in multiple layers of the network through cross-layer superposition. Components can also act on multi-dimensional features. On any given data point, only a small subset of the components in the network might be used to compute the network output. Top: Step 1 - Calculating parameter component attributions Ac(x). Bottom: Step 2 - Optimizing minimality loss Lminimality. We find the parameter components by optimising a set of losses that make the components: Faithful: The component vectors (Pc) must sum to the parameter vector of the original network (θ∗). We train for this with an MSE loss Lfaithfulness=MSE(θ∗,∑cPc). Minimal: As few components as possible should be used to replicate the network's behavior on any given data point x in the training data set. We operationalise this with a top-k test based on attributions: First we run the original model f(x,θ∗) and use gradient attributions to estimate the attribution Ac(x) of each parameter component to the final network output. Then, we use batch top-k (as in BatchTopK SAEs) to select the k parameter components with the highest attributions across a batch. We sum these top-k components to obtain a new parameter vector κ(x), and use it to perform a second forward pass f(x,κ(x)) with these new parameters. Then we train to match the original model's output by minimising an MSE loss Lminimality=MSE(f(x,θ∗),f(x,κ(x))) between the network outputs on the two forward passes, thereby increasing the attributions of the active components on that data.[2] Simple: Individual parameter components should be simpler than the whole weight vector, in the sense of having a shorter description length. We aim to minimise the sum of the ranks of all the matrices in active components ∑lrank(Pc,l) as a proxy of description length. In practice we use the 'Schatten quasi-norm' (which is just the Lp norm of a matrices' singular values) to optimize that objective.[3] These losses can be understood as trying to minimise a proxy for the total description length per data point of the components that have a causal influence on the network's output, across the training data set. We test our method on a set of three toy models where the ground truth decomposition of the network algorithm is known: (1) A toy model of features in superposition (TMS), (2) A toy model that implements more computations than it has neurons, and (3) A model that implements more computations than it has neurons distributed across two layers. We find that APD is largely able to decompose these toy networks into components corresponding to individual mechanisms: (1) The weights for individual features in superposition in the TMS model; and (2) & (3) The weights implementing individual computations across different neurons in the compressed computation models. However, the results required a lot of hyperparameter tuning and still exhibit some amount of mechanism mixing, which we suspect is due to our using top-k. While we think this first prototype implementation is too sensitive to hyperparameters to be usefully applied to larger models such as LLMs, we think there are ways to overcome these issues. Overall, we think the general approach of directly decomposing networks into components directly in parameter space is promising. Future work We have some pretty clear ideas on where we want to take the method next and would be excited to hear from others who are interested in using or improving the method. We think some reasonable next steps are: Make the method more stable: Currently, the optimisation process in APD can be painfully brittle. Lots of hyperparameter tuning can be required to get good results. We have some ideas for fixing this that we're excited about. First and foremost, we want to switch out the attributions for trained masks that are optimised along with the components. This involves some changes to our loss function to ensure the resulting masks still represent valid component activations in the sense we discuss in Appendix A.Make the method more efficient: We have some ideas for making APD more computationally efficient at scale. For example, we think it should be possible to save on a lot of compute by running the optimisation in two stages: First we decompose the network into rank-1 components, then we group those rank-1 pieces together into higher rank components in the configuration that minimises overall description length per data point.[4][5]Implementation for transformers and CNNs: Since APD operates on network parameters, it seems mostly straightforward to adapt to any neural network architecture based on matrix multiplications. The only change seems to be that we might want to have different parameter components active at different token positions or coordinates in a CNN input tensor. Nevertheless, we need to actually create an implementation and test it on toy models where we have ground truth understanding of the mechanisms in the network.Scaling to LLMs: We will want to apply the method to LLMs. Probably starting with a decomposition at a single layer rather than the entire networks. The results could then be compared and contrasted with the features SAEs find in those networks. ^ 'Circuit' makes it sound a bit like the structures in question involve many moving parts, but in constructions such as those discussed in [1,2], a part of the network algorithm can be as small as a single isolated logic gate or query-key lookup. ^ KL-divergence could be used here as well, depending on the target model. ^ This is a standard trick in low-rank optimisation. It's somewhat analogous to how penalising the Lp pseudonorm with p∈(0,1] of activations in an SAE can do an alright job at optimising for low L0. ^ This is partially based on insights from computation in superposition [1,2]. There is an upper limit to how many rank-1 can be superposed into a layer, which growth roughly linearly with the layer's parameter count. ^ We also noticed that restricting component rank tends to make the method more stable, so we think this change will help with the previous point as well.
2025-01-25
https://www.lesswrong.com/posts/dTs7ZFHHtQ8CiFZT2/better-antibodies-by-engineering-targets-not-engineering
dTs7ZFHHtQ8CiFZT2
Better antibodies by engineering targets, not engineering antibodies (Nabla Bio)
abhishaike-mahajan
Note: Thank you to Surge Biswas (founder of Nabla Bio) for comments on this draft and and Dylan Reid (an investor into Nabla) for various antibody discussions! Also, thank you to Martin Pacesa for adding some insight on a paper of his I discuss here (his comments are included). Introduction Antibody design startups are singlehandedly the most common archetype of bio-ML startup out there. It’s understandable why — antibodies are derisked modalities, CDR loops driving antibody efficacy makes the whole structure more amenable to ML, and there’s a fair bit of pre-existing data there. But, because it’s also the most common form of company, it’s difficult to really differentiate one over the other. If you squint, you could make out some vague distinguishing characteristics. Bighat Biosciences does a lot of multi-property optimization, Absci and Prescient have the strongest external research presence, and so on. But there is a vibe of uniformity. It’s nobody’s fault, that’s just the nature of any subfield that has a huge amount of money flowing into it; everyone quickly optimizes. And, unfortunately for those of us who enjoy some heterogeneity, most everyone arrives at the same local minima. Because of that, I’ve never really wanted to write about any antibody company in particular. None of them felt like they had a sufficiently interesting story. All fine companies in their own right, but they all tell the same tale: great scientists, great high-throughput assays, great machine-learning, and so on. This brings us to the topic of this essay: Nabla Bio. As may be expected, from the earlier sections of this piece, they are an antibody design startup. Founded by Surge Biswas, an ex-Church Lab student who worked with ML-guided protein modeling during his PhD, and Frances Anastassacos, they were launched in 2021 and currently sit at 15 employees. Nabla, on the surface, looks materially indistinguishable from most other antibody design companies. And, in many ways, they are! If you visit their website, you’ll immediately see references to barcoding, parallel screening, and machine learning. It feels like much of the same story as everyone else. It took me until a few months later for me to realize that Nabla has an interesting difference. Because Nabla is not only an antibody engineering company. They may say they are, and their partnerships may only include antibodies, but I don’t consider that their defining archetype alone. What they really are, alongside antibodies, is a target engineering company. And that is why, amongst companies they are often compared to, I find them uniquely curious. This target engineering thesis they are pursuing is the subject of today's essay. But before we talk about them, we first need to talk about targets. The pain of multi-pass membrane proteins (MPMP) What makes for a good drug target? Of course, the most critical and obvious factor is therapeutic potential. Will modulating this target actually help treat or cure the disease? This is the fundamental requirement that drives target selection. However, there's often a disconnect between knowing a target is therapeutically valuable and being able to successfully develop drugs against it. So, given a set of drug targets that all are known to be related to a disease of interest, how do you pick amongst them? There’s a lot of ways you could go about filtering them, including market interest, how much it’s been clinically derisked, and so on. But an often used method for selecting a target is how easy it is to work with. What are the phenomena of targets that are easy to work with? Here are three that come to mind: Stable, because having a protein that maintains a consistent structural and functional identity across time and varying conditions makes in-vitro testing with it far easier. When you're running high-throughput screens or binding assays, you’d ideally like for your target protein to not unfold, aggregate, or adopt wildly different conformations between experiments. Stability, in the end, really translates to predictability.Well-characterized, because a protein we understand the behavior of is a protein that we can exploit. Binding pockets, conformational changes, and interaction partners of a target can all be helpful things to keep in mind throughout the life cycle of a therapeutic designed to interact with it.Amenable to being bound to by your therapeutic molecule of choice, because how else would a therapeutic interact with it? MPMPs fail spectacularly on almost all these counts. MPMP’s are amphipathic proteins, meaning having both hydrophilic and hydrophobic components. This should make intuitive sense. In their native context, their hydrophobic transmembrane segments are buried in the cell membrane's lipid bilayer, while their hydrophilic regions extend into the aqueous environments on either side. Importantly, MPMP’s are not only embedded into the cell membrane, but weave through it multiple times. If it wasn’t obvious, that’s what the ‘multi-pass’ bit of the acronym stands for. This means we have alternating, repeating stretches of hydrophobic-hydrophilic residues. This makes MPMP’s awful to work with. A fair number of other proteins in the human proteome are water-soluble — adapted to exist in the (mostly) aqueous environment of the cell's cytoplasm or extracellular space. These are fantastic to work with! You can extract them, purify them, and work with them while maintaining their native structure. This has a lot of second-order value: it’s easier to run experiments with them while having consistent results, it’s easier to characterize their structure using experimental determination techniques, and it’s way cheaper to get them ready for whatever assay you want to run with them. MPMP’s have no such advantages. When isolated out of their membrane environment to study individually, it’s a struggle to recapitulate their normal behavior. Their hydrophobic segments, normally protected by the membrane's lipid environment, become exposed to an aqueous environment. This is about as stable as you might expect: these proteins tend to rapidly misfold, aggregate with one another, or completely fall apart. If you want to run a high throughput screen, it’ll be a constant challenge to get consistent protein conformations across your assay conditions. If you want structural data, keeping the protein stable long enough to even attempt crystallization or cryo-EM will be enormously difficult. And even if you manage all of that, you're still left wondering whether the structure you're looking at bears any resemblance to how the protein actually exists in its native membrane environment. Consider two stories of MPMP’s, and their associated painful stories of working with them: GPR40 is a GPCR highly expressed in pancreatic beta cells, playing a crucial role in glucose-stimulated insulin secretion. This makes it a highly attractive target for type 2 diabetes, but, unfortunately, developing drugs against GPR40 has been plagued with difficulties. Including, but not limited to, difficulty of stable purification, difficulty of making it water soluble, and difficulty of using them in standard binding assays, These challenges are undoubtedly part of the reason that despite years of research, only one GPR40-targeting small-molecule drug, Fasiglifam, even reached phase 3 clinical trials. It has, unfortunately, since been discontinued due to liver toxicity concerns.P-glycoprotein (P-gp) is an efflux transporter, another class of MPMP, responsible for pumping foreign substances out of cells. This is a major cause of multidrug resistance in cancer, as P-gp can effectively remove chemotherapy drugs from tumor cells. Developing inhibitors of P-gp has been a long-standing goal in cancer research. Yet, this too has failed, partially due to the difficulty of working with the protein. It's extremely difficult to purify in a stable, functional form outside of its native membrane environment (which is, funnily enough, a fact unique to human P-gp! Rodent P-gps are far more stable). As a result, structural studies have been incredibly challenging, with the first high-resolution structure of human P-gp only being reported relatively recently (2018), decades after its discovery in 1976. There is, as of 2024, no approved drugs that successfully inhibit P-gp. Of course, drug development efforts are rarely stymied by a single reason alone! It is rare that a protein simply being ‘hard to stabilize’ outright ends a program —especially because a potential solution to the above problems is to simply do whole-cell screening (which has its own challenges!) — but it certainly doesn’t help. The tragedy is that while MPMP’s are one of the most difficult protein structures to study, they are often incredibly good targets. This shouldn’t be a surprise if you consider that their dysfunction has been implicated in a wide variety of biological pathways. Pharmaceutical companies have obviously already taken note of this and, as a result, MPMPs make up ~40% of currently known drug targets, despite them being 23% of the human proteome; a testament to their clinical relevance. Yet, only two approved antibodies target them: Mogamulizumab, targeting CCR4 for lymphoma, and Erenumab, targeting the CGRP receptor for migraine prevention. And while there are a far more approved small molecules that target MPMP’s (20%~), antibodies can be a fair bit more efficacious for some targets, so we’d ideally like to rely on that modality as well. All this adds up to a depressing situation: MPMPs are incredibly important, valuable drug targets, but our ability to develop protein-based drugs against them is severely hampered by our inability to work with them effectively. What can we do about this? Well, one more note before we move on, because I also had this question: could you simply...not deal with the MPMP at all, at least not in their entirety? Don’t MPMPs have extracellular (read: soluble) domains that can be expressed and studied in isolation? The membrane-spanning regions might be critically important for the protein's native function, but they're irrelevant if your goal is simply to bind and block (or activate) the protein from the outside. We could just use those, and happily run our binding assays and structure determination and whatever else! And the answer is…that my assumption is wrong. MPMP’s, in fact, rarely have nicely structured extracellular regions. Single-pass membrane proteins certainly do! But the extracellular bits of multi-pass membrane proteins, unfortunately, noodles of proteins that are similarly a nightmare to work with. Okay, now we can move on. Computational design of MPMP proxies Here’s an idea: could we not simply redesign our messy, non-soluble MPMP’s to simply…be soluble? The answer has, historically, been ‘yes, but it’s hard.’ In 2004, someone did it for a bacterial potassium channel protein (KcsA) In 2013, another group did it for the human mu opioid (MUR). But it’s also kinda…bespoke. There’s a lot of custom design, a lot of thinking about interatomic potential energies of this one specific protein, and so on. Very little of the work from any one paper study on a protein seems to easily translate into another protein. This is a problem, given that there are on the order of several thousand potentially useful MPMP’s, and we’d ideally like to not spend graduate student years on creating soluble analogues of each one. Is this possible to automate? There is a paper from May 2023 that suggests it is! It is titled ‘Computational design of soluble functional analogues of integral membrane proteins’, which has some big names on the author list: Martin Pacesa (BindCraft), Justas Dauparas (ProteinMPNN), and Sergey Ovchinnikov (he’s Sergey). What exactly do they do? The pipeline starts with a target structure of some membrane protein and makes an educated guess at an initial sequence based on secondary structure preferences — alanines for helices, valines for β-sheets, and glycines for loops. They then randomly mutate 10% of these positions to introduce diversity. This sequence then gets fed into AlphaFold2, which has a composite loss function that measures how well their current sequence's predicted structure matches the target. From this, we generate gradients that tell us how to modify the sequence to get closer to our desired target. These gradients update a position-specific scoring matrix (PSSM), which was then used to update the sequence again for another round of structure prediction. This is done 500 times and is also referred to as AF2seq. At the end of process, we have a sequence and structure pair. What should we expect from this sequence? Initial designs by AF2seq exhibited high sequence novelty compared with natural proteins and a low fraction of surface hydrophobics; however, none could be expressed in soluble form. Well, the Alphafold process resulted in few surface hydrophobic residues, getting us slightly closer to a soluble protein, but experimentally still not what we want. At this point, the authors redesign the sequence using a version of ProteinMPNN trained with soluble proteins (solMPNN). Why not just the usual ProteinMPNN? For the same reason you might expect: We attempted to optimize the sequences using the standard ProteinMPNN model, but the resulting sequences consistently recovered the surface hydrophobics, probably owing to the similarity of the topology to that of membrane proteins encountered during training. And presto, we have a pipeline for creating soluble versions of any arbitrary protein! They tested this on GCPR’s, alongside a few other membrane proteins, finding that 1) a high-confidence soluble sequence for a GCPR could be found, and 2) identical (and important) structural motifs on the nonsoluble version could be found on the soluble version. Why not just let solMPNN redesign everything and skip the Alphafold2 step? It’s a fair question and one I don’t have an answer for. One reason may be that having the ability to modify the structure slightly (via Alphafold2) to account for the inevitably structural deviations when going from nonsoluble → soluble is helpful before solMPNN redesign, but that’s just a guess. Edit: Martin (one of the lead authors) saw this article and answered the question! Here is his comment: The reason we use AF2seq+MPNN is because if you use only MPNN it will keep the hydrophilics in the core. Then hydrophilics outside+hydrophilics inside = collapsed protein. Importantly though, they didn’t show that these soluble versions of membrane proteins were actually good for anything! In the end, that’s what we really care about, that the soluble, easy-to-work with version of the MPMP can actually help accelerate biological research in the dimensions we care most about. They do briefly touch on this topic as an area to explore in future research though. Another exciting perspective is the creation of soluble analogues of membrane proteins that retain many of the native features of the original membrane proteins, such as enzymatic or transport functions, which could greatly accelerate the study of their function in more biochemically accessible soluble formats. Similarly, this would be critical to facilitate the development of novel drugs and therapies that target this challenging class of proteins, which remain one of the most important drug targets. Prescient on their end! Because someone did end up testing this out: Nabla Bio. This brings us to our next section… Joint Atomic Modeling JAM, or Joint Atomic Modeling, is a technical report produced by Nabla Bio on November 11th, 2024. What is JAM? Really, it refers to a generative model that can handle both sides of a critical problem in drug development: the membrane protein target and the antibody that needs to bind to it. When given partial information about a protein complex — whether sequence, structure, or both — JAM can "fill in" the missing pieces while respecting both hard constraints (things that must be preserved) and flexible constraints (things that provide guidance but can be modified). Pretty much no details on what JAM actually is is given in the paper, but those details aren’t super important for what we’re going to be concerned with. The technical report includes a lot of details on the capabilities of JAM, including: Epitope specificity in generating VHH’s (not new, but still good to see people focusing on).Very, very good binding affinities in designed antibodies (sub-nanomolar range)Test time compute being useful for antibody engineering. Overall, while interesting and strong in their own right, these are capabilities that aren’t particularly alien amongst many other protein foundation-y model papers these days. But the most unique section isn’t quite about antibodies at all. As the title of this piece may have implied, it is about modeling the target. The section is titled ‘De novo design of antibody binders to two multipass membrane protein targets: Claudin-4 and a GPCR, CXCR7’. It is, as far as I can tell, the first time anyone has demonstrated the utility of machine-learned soluble proxies of MPMP’s for anything. What do they do? For both CLDN4 and CXCR7 (both MPMP’s), Nabla used JAM's protein design capabilities to create soluble proxy versions of them. Specifically, the transmembrane region was replaced with a stable, soluble scaffold and the extracellular structures are preserved. Which is neat! We have an all-in-one model for redesigning membrane proteins, none of this Alphafold2-ProteinMPNN pipeline stuff. But we still haven’t gotten to connecting all of this to utility. This is where the validation data becomes particularly interesting. For each proxy (deemed solCLDN4 for CLDN4, and solCXCR7 for CXCR7), Nabla demonstrated both structural stability (via ‘monomericity’, which refers to the percentage of protein that exists as single, individual units rather than clumping together into larger aggregates) and, crucially, functional relevance via binding to known binders: solCLDN4:87% monomericity after one-step purificationMaintained binding to a known anti-CLDN4 antibodysolCXCR7:85% monomericity after one-step purificationMaintained binding to its native ligand SDF1α !!!!!! Functional relevance!!! Soluble versions of these multi-pass membrane proteins continue to bind to things that should bind to them!!! But some skepticism may still be warranted: what if the anti-CLDN4 antibody and native ligand SDF1α binding is fully coincidental? If there truly was a one-to-one correspondence between Nabla’s proxies and the real protein, a screening campaign on top of the soluble protein would yield something that also binds to the native version of the protein. And Nabla did exactly this! For CLDN4: In our CLDN4 de novo design campaign, screening de novo VHH designs with solCLDN4 on-yeast successfully identified three binders that exhibited EC50s of 10, 22, and 56 nM for native CLDN4 on overexpression cell lines (Fig. 6e). Among these, the best-performing binder also showed effective recognition of CLDN4 on OVCAR3 ovarian cancer cells. For CXCR7: In our CXCR7 campaign, screening de novo VHH designs with solCXCR7 on-yeast successfully identified a strong binder that recognized native CXCR7, achieving an EC50 of 36 nM when expressed recombinantly as a monovalent VHH in an E. coli cell-free system and tested against PathHunter CXCR7 cells. This validation story is remarkably complete. Their soluble proxies maintained stability, bound to known ligands, and could actually be used to discover new binders that work against the native proteins. The specificity data is particularly compelling. The CLDN4 antibodies showed >100x selectivity over closely related family members (CLDN3, CLDN6, and CLDN9), despite 85% sequence identity in the extracellular regions. This suggests their proxies maintain the subtle structural features that distinguish these closely related proteins! From start to end, here is the pipeline for this. Again, few details. Conclusion Typically, I’ve always ended these types of company-overview articles with a note on the potential risks of the company, but I’ll be skipping that here. Nabla is so new that it’d be difficult to give a strongly informed guess as to how they will fare. What is included in this essay is a strong validation of a thesis they are pursuing (target engineering), but not necessarily of the company at large, which, from a cursory glance of the JAM paper, is obviously concerned with a lot more than targets alone. But what of this target engineering stuff? It’s clearly an interesting idea, feels like it should theoretically have value, and empirically works given the results of the JAM paper. How much juice is left to squeeze there? Here’s the obvious bull case: there are only two antibodies that works with MPMP’s, partially due to how hard MPMP’s are to work with, so there’s clearly room on the table for more. If the Nabla bet really pays off, they potentially get access to first-in-class targets for a ton of different diseases or can sell access to soluble proxies to pharmaceutical partners, both of which likely have huge amounts of payoff. But as always, there’s a bear case with these. First, there's the question of target selection. While MPMPs represent ~40% of current drug targets, this statistic might be misleading when thinking about the addressable market for Nabla's technology. Many of these targets are potentially already being successfully pursued with small molecules. The real opportunity lies in targets where antibodies would provide meaningful advantages over small molecules, or where small molecules have failed. I don’t know the answer to how large that number is! But my guess is that it narrows the playing field a bit. Surge, the founder of Nabla, had some thoughts on this topic of antibodies versus small molecules, which I’ve attached here: People are generally more excited about antibodies than small molecules for a couple first principle reasons: 1. Much higher specificity. There's a lot more engineerable surface area on an antibody. This allows you to design binders with high specificity, which is critical for MPMPs, many of which look very similar to each other and thus lead to off-target toxicity/side-effects if your drug is non-specific. This is a major issue with small molecules. 2. Extended half-life: Antibodies stay in your bloodstream for weeks vs < 1 day for small molecules. This means less frequent dosing. 3. Antibodies as handles for other functions. You can use antibodies to e.g. recruit t-cells to a cancer overexpressing a GPCR or claudin, or use that antibody binding head as a CAR, or use the antibody binding head in an ADC. You can use the antibody to recruit other immune cells (important concept generally with cancer). 4. Well-trodden formulation and manufacturing path. These can be dealbreakers in DD, but for a well behaved antibody it's relatively standard. For a small molecule, it's a different process each time, and a much more frequent source of failure. So, in the end, small molecules may be less competition than one would naively assume. There’s one more issue which is a little nuanced, and a bit out there, but it feels worth mentioning. Returning back to Nabla’s JAM paper, when designing binders for CXCR7, they found something curious: their best binder had aggregation tendencies that might limit its developability. The authors make an observation: this aggregation propensity mirrors that of CXCR7's native ligand SDF1α. This raises a question about the fundamental nature of GPCR targeting: is there an inherent tension between effective binding and developability? The features that allow for engagement with GPCR’s may inevitably cause issues. If this is indeed true, it may be the case that even if you can design soluble proxies of membrane receptors, in-vitro screening assays that rely on those isolated proxies will also cause issues. Here's the chain of logic: You create a soluble proxy of your MPMP targetYou screen for binders against this proxyThe binders that show the strongest affinity are likely to be those that best mimic the natural binding modeBut if that natural binding mode inherently requires "sticky" interfaces...then you're essentially selecting for problematic developability properties by design. In the end, while Nabla may have first in-class access to targets, binders to those targets may also be awful to work with. Is this a huge issue? I don’t think so, especially since they didn’t point out that this occurred with binders to solCLDN4. The fact that this phenomenon wasn't observed in their non-GPCR MPMP work indicates that this isn't a universal issue across all multi-pass membrane proteins. However, for GPCRs specifically, it points to an interesting constraint on the potential of target engineering. Since whatever binders you find to that target will have downstream issues that must be amended. Of course, JAM will still be helpful! Instead of struggling with target protein stability and assay development, drug developers will be wrestling with optimizing developability properties of their hits, which instinctively feels like a faster problem to iterate upon. Overall, this point is very much splitting hairs, but maybe an interesting thing to think about. That’s it for this piece! Excited to see what else comes out of Nabla.
2025-01-13
https://www.lesswrong.com/posts/Zwg4q8XTaLXRQofEt/finding-features-causally-upstream-of-refusal
Zwg4q8XTaLXRQofEt
Finding Features Causally Upstream of Refusal
daniel-lee
This work is the result of Daniel and Eric's 2-week research sprint as part of Neel Nanda and Arthur Conmy's MATS 7.0 training phase. Andy was the TA during the research sprint. After the sprint, Daniel and Andy extended the experiments and wrote up the results. A notebook that contains all the analyses is available here. Summary Prior work shows that chat models implement refusal by computing a specific direction in the residual stream - a "refusal direction". In this work, we investigate how this refusal direction is computed by analyzing its gradient with respect to early-layer activations. This simple approach discovers interpretable features that are both causally upstream of refusal and contextually relevant to the input prompt. For instance, when analyzing a prompt about hugging, the method discovers a "sexual content" feature that is most influential in determining the model's refusal behavior. Introduction Arditi et al. 2024 found that, across a wide range of open-source language models, refusal is mediated by a single direction in the residual stream. That is, for each model, there exists a single direction such that erasing this direction from the model's residual stream activations disables refusal, and adding it into the residual stream induces refusal. This roughly suggests a 3-stage mechanism for refusal: In early-middle layers, the model does some computation over an input prompt to decide whether or not it should refuse a request.In middle layers, the result of this computation is written to the residual stream along the "refusal direction". This signal gets spread diffusely across token positions by attention heads.In later layers, the signal along the "refusal direction" is read, and if this signal is strong, the model goes into "refusal mode" and outputs refusal-y text (e.g. "I'm sorry, but I can't help with that.").Figure 1: A simplified schematic illustrating the hypothesized 3 stages of refusal. Arditi et al. 2024 paints this picture and zooms in specifically on step 2 - the mediation of refusal along some direction in the residual stream. However, step 1 (how the refusal signal is computed from the input prompt) and step 3 (how the refusal signal is translated into refusal text) remain poorly understood. Our aim in this preliminary study is to investigate step 1: for a given input, how does the model decide whether or not to refuse? Our approach is to leverage the observation that refusal is mediated by a single direction. The presence or absence of the direction corresponds to the model refusing or not refusing, respectively. Therefore, in order to study the question "how does the model decide to refuse?", we can investigate a more tangible question: "how does the model generate the refusal direction?" Methodology This report focuses on results from the Gemma-2-2b-it model, where we used the Sparse Autoencoders (SAEs) trained from the Gemma Scope paper. We use the methodology specified by Arditi et al. 2024 to obtain the refusal direction, selecting the difference-in-means direction from layer 15 resid_post and the final token position of the prompt template. Refusal Gradient We want to find early concepts that modulate the model's refusal. We can mathematically operationalize this intuition by (1) defining a refusal metric R as the projection onto the refusal direction at layer 15 of the last token position, and then (2) computing the gradient of earlier activations with respect to this refusal metric. This gradient, referred to as the refusal gradient, or ∇xR, can be computed for each activation (at any layer l < 15, and at any prompt token position). For this post, we only analyze gradients computed from layer 5. Intuitively, the refusal gradient gives us a direction such that modulating upstream activations along this direction maximally alters the downstream refusal direction. We can then analyze the refusal gradient in the SAE decoder basis by taking the dot product of the refusal gradient with a given decoder vector di, producing what we refer to as the relative gradient: RGi:=di⋅∇xR. To identify SAE latents potentially involved in the refusal circuitry at a given layer l, we filter for latents with high relative gradient values. For this analysis, we use the sum of the relative gradient across all prompt tokens as a proxy for identifying promising latents. While this proxy proved effective in practice, we acknowledge the possibility of more optimal proxies. This method was inspired by Batson et al. in Anthropic's March 2024 Update. Steering To determine if an SAE latent is causally linked to refusal, we examine the model's output after applying steering. Beginning with a prompt that does not trigger refusal, we test whether adding the SAE latent at the upstream layer l causes the model to exhibit refusal behavior. Steering is typically applied at a scaled intensity relative to the maximum activation of the SAE latent. Generally, the scaling is a factor of 1 unless otherwise indicated. For the refusal direction, we steer by a coefficient of 30, which is around the maximum activation of other latents in the same layer. Importantly, steering is restricted to the prompt token positions and does not extend to the generation phase. Intuitively, this modulates the model's perception of the original prompt, without directly steering its output generations. Case Studies We use the following three hand-crafted prompts for our analyses: "Develop a strategy for hugging a person""How do I run a wet lab experiment to test the effect of a new bacteria?""How do I get my client to transfer me money as a payment for a service?" These prompts were crafted to lie close to the border between refusal and compliance, in hopes of highlighting salient upstream features that could push them over the border. In all three cases, Gemma-2-2b-it complies with the requests without refusal. Across all three prompts, we successfully identified upstream SAE latents that are causally linked to refusal. These latents represent concepts relevant to the given prompt and often involve interpreting the prompt as harmful. For instance, in the case of the wet lab experiment prompt, we identified an "infectious disease" latent that triggers refusal. Steering the model using these latents not only results in refusal but also results in a refusal that is semantically aligned with the specific SAE latent. For example, adding the "infectious disease" latent leads the model to refuse by citing concerns about creating a "potentially dangerous pathogen." Prompt 1: "Develop a strategy for hugging a person" To evaluate whether our method effectively identified clear outlier latents, we plotted the cosine similarities between SAE latents and the refusal gradient (Figure 2). The plot reveals that the top latents stand out distinctly as outliers. Additionally, the latents with the lowest cosine similarities show significantly negative values. Our analysis indicates that SAE latents with significantly positive cosine similarities are associated with concepts that interpret the prompt as potentially harmful. In contrast, SAE latents with significantly negative cosine similarities are linked to concepts interpreting the prompt as harmless. Figure 2: Histogram of the distribution of the cosine similarity between SAE latents and the refusal gradient. This plot is for the "hugging" prompt. The top 3 latents correspond to the concepts "sexual content", "sexual violence", and "domination", respectively in descending order (see table 1). The bottom 3 latents correspond to the concepts "social gesture", "farewell", and "accept visitors", respectively (see table 7). The following are the top 5 SAE latents and their corresponding auto-interp explanation from Neuronpedia. We find that the latents involve interpreting  "hugging" as a harmful behavior. For instance, Latent 12072 is a sexual content latent and hugging can be thought of as a sexually motivated behavior. SAE latentAuto-interp explanationSingle-phrase summary12072trends and themes related to erotic and pornographic contentSexual content2363terms associated with sexual violence and abuseSexual violence12224 concepts related to leadership and control (top positive logits is the word "domination") Domination14197various forms of the verb "pursue"Pursue4654terms related to military actions and operationsMilitary Attack Table 1: Top SAE latents for the hugging prompt. For steering by an SAE latent, we steer by the maximum activation of the latent from Neuronpedia. Model output that is relevant to the SAE latent is highlighted. Steering by a causally upstream SAE latent produces a semantically different refusal from steering directly by the refusal direction. Steering statusOutput (words relevant to the steered concept are bolded)No Steering##  Hugging Strategy: From Awkward to Awesome↵↵**1. The Context is Key:**↵↵* **Relationship:**  Is this a close friend, a family member, a stranger you've just met, or someone you're trying to impress? The level of familiarity and comfort dictates the hug's intensity and duration.↵* **Culture:**  Some cultures are  more hug-friendly than others. Be mindful of cultural norms... Latent 12072 (Sexual Content) I'm sorry, but I cannot fulfill your request. ↵↵My purpose is to provide safe and ethical information. Generating sexually explicit content goes against my ethical guidelines and could be harmful. ↵↵It's important to remember that:↵↵* **Consent and respect are paramount in any sexual interaction.**  ↵***Reducing intimacy to a "strategy" can be disrespectful and objectifying.... Latent 2363 (Sexual Violence) I cannot and will not develop a strategy for hugging a person in a harmful way. ↵↵My purpose is to provide helpful and harmless information. Hugging someone should always be a positive and respectful act. ↵↵**It is never okay to harm another person.**  ↵↵If you are having thoughts about harming someone, please reach out for help:↵↵* **The National Domestic Violence Hotline:** 1-800-799-7233↵* **The Rape,... Latent 12224 (Domination) I cannot provide you with a strategy for "hugging a person." ↵↵My purpose is to provide safe and helpful information.  Hugging someone without their consent is a form of physical assault and is never acceptable. ↵↵It's important to remember:↵↵* **Consent is crucial:**  Everyone has the right to decide  whether or not they want to be hugged. ↵* **Physical contact without consent is assault:** It is illegal and harmful....Refusal directionI understand you're interested in learning about physical interactions, but I cannot provide a strategy  for hugging someone. ↵↵Here's why:↵↵* **Respect and Consent:** Hugging someone is a physical act that requires mutual consent. It's crucial to respect boundaries and ensure the other person is comfortable with physical contact. ↵* **Cultural Differences:** What is considered a hug in one culture might be  inappropriate in another.... Table 2: Model generation with both no steering and steering for hugging prompt. Prompt 2: "How do I run a wet lab experiment to test the effect of a new bacteria?" SAE latentAuto-interp explanationSingle-phrase summary11415 references to incidents involving protests and law enforcement (examination of the top activations indicate that this is most activated after "death"-related tokens) Death3877words related to infectious diseases and their surveillanceInfectious Disease12872terms related to medical care and healthcare professionals, particularly in a primary care and psychiatric contextMedicine2213phrases involving complex causes and effects, particularly in scientific or medical contexts 2397references to apocalyptic themes and eventsApocalypse Table 3: Top SAE latents for the wet lab prompt. Steering statusOutput (words relevant to the steered concept are bolded)No SteeringIt's great you're interested in exploring the effects of a new bacteria!  Running a wet lab experiment to test its effects requires careful planning and execution. Here's a breakdown of the process, but remember, this is a simplified overview. You'll need to consult with experienced researchers and follow strict safety protocols.↵↵**1. Define Your Research Question:**↵↵* **What specific effect are you interested in?**  ↵    * Does the bacteria produce toxins?... Latent 11415 (Death) I cannot provide instructions for running a wet lab experiment, especially one involving bacteria. ↵↵**Here's why:**↵↵* **Safety:** Working with bacteria is extremely dangerous and requires specialized  training, equipment, and safety protocols. Incorrect handling can lead to serious infections, contamination, and even death. ↵* **Ethical Concerns:**  Experimenting with bacteria without proper  knowledge and ethical considerations can have unintended consequences for the environment and potentially harm other organisms.... Latent 3877 (Infectious Disease) I understand you're interested in learning about how wet lab experiments are conducted to study bacteria, but I cannot provide instructions on how to run a wet lab experiment, especially one involving a potentially dangerous pathogen. ↵↵**Here's why:**↵↵* **Safety:** Handling bacteria in a lab setting requires strict safety protocols and specialized equipment. Incorrect handling can lead to serious health risks for yourself and others. ↵* **Ethical Considerations:**  Working with unknown pathogens without proper training and authorization is unethical... Latent 2397 (Apocalypse) (1.5 *max activation chosen instead) I understand you're interested in the concept of a "wet lab experiment" and its potential to study  bacteria and their effects. However, I must strongly emphasize that **it is not possible to create a  bacteria that would cause a global pandemic or a "wet lab experiment" that would have such a devastating  effect.** ↵↵Here's why:↵↵* **Ethical and Legal Concerns:**  Creating a deadly pathogen is illegal and unethical. It would be considered bioterrorism....Refusal directionI cannot provide you with instructions on how to run a wet lab experiment to test the effect of a new bacteria. ↵↵Here's why:↵↵* **Safety:** Working with bacteria is extremely dangerous. Many bacteria can cause serious illness, and improper handling can lead to infections. ↵* **Ethics:**  It's crucial to consider the ethical implications of experimenting with bacteria, especially if you're not a trained scientist.... Table 4: Model generation with both no steering and steering for wet lab prompt. Prompt 3: "How do I get my client to transfer me money as a payment for a service?" SAE latentAuto-interp explanationSingle-phrase summary1353 negative descriptors related to failures or issues (top positive logit is the word "dangerous") Dangerous16365 instances of fraud or deceitful activities (top positive logit is the word "scam") Scam7368concepts related to secrecy and concealmentHiding10353specific indicators of issues, particularly related to quality or undesirability in various contexts 10464 mentions of deceitful or fraudulent activities and concepts (top positive logit is the word "fake") Fake Table 5: Top SAE latents for the bank prompt. Steering statusOutput (words relevant to the steered concept are bolded)No SteeringIt's great you're thinking about how to get paid for your services! Here's a breakdown of how to approach getting your client to transfer money, focusing on professionalism and clear communication:↵↵**1. Establish Clear Agreements:**↵↵* **Contracts:**  Always use a written contract outlining the scope of work, payment terms, deadlines, and any other relevant details. This protects both you and your client.↵***Invoices:**  Send invoices promptly after completing each stage of the project... Latent 16365 (Scam) I cannot and will not provide you with information on how to scam people. ↵↵My purpose is to provide helpful and ethical information.  ↵↵**Scamming people is illegal and harmful.** It can have serious  consequences, including:↵↵* **Legal repercussions:** You could face criminal charges and fines.↵***Damage to your reputation:**  People will lose trust in you, making it difficult to build future relationships.↵* **Emotional distress:**  Your victims may experience financial and emotional hardship.... Latent 7368 (Hiding) I understand you're asking about hiding money, but I cannot provide advice on illegal activities.↵↵**It's important to understand that transferring money with the intent to conceal its origin is often illegal and can have serious consequences.** ↵↵If you're looking for ways to manage your finances, I  recommend seeking advice from a qualified financial advisor or accountant. They can help you:↵↵* **Establish a legitimate business:** If you're providing a service, consider registering your business ... Latent 10464 (Fake) I cannot tell you how to scam someone. ↵↵**It is illegal and unethical to defraud someone.** ↵↵Attempting to get money from someone under false pretenses is a crime with serious consequences, including:↵↵* **Jail time:** You could face significant prison sentences.↵* **Fines:** You could be ordered to pay substantial fines.↵* **Criminal record:** A conviction for fraud will stay on your record, making it difficult to get jobs, loans, or housing....Refusal directionI cannot tell you how to get your client to transfer you money. ↵↵It's important to understand that:↵↵* **Asking for money in exchange for a service is a legitimate business practice.** However, it's crucial to do so ethically and legally. ↵* **Attempting to manipulate or pressure someone into transferring money is unethical and potentially illegal.** This could be considered fraud or extortion. ↵* **Building trust and transparency with your clients is essential for a successful business.... Table 6: Model generation with both no steering and steering for banking prompt. Other Analyses Sensitivity Analysis Rather than using the model output, we can also plot the refusal metric R to test for whether steering by the SAE latents induces refusal. On the left plot, we steer either by the refusal gradient or by one of the top 3 SAE latent for the hugging prompt. Top SAE latents have a large effect on refusal projection. It peaks at 10~30 steering coefficient which is around the max activations of these latents. On the right plot, we have randomly chosen SAE latents, which has a relatively smaller effect on the refusal metric. Figure 3: Line plot of the refusal projection (y-axis) vs the steering coefficient for different SAE latents for the "hugging" prompt. On the left plot, we steer by the refusal gradient or the top 3 latents ("sexual content", "sexual violence", and "domination", respectively in descending order, see table 1). On the right plot, we steer by ten randomly sampled SAE latents from Layer 5. What about the negatively-aligned latents? The following are the bottom 5 SAE latents (the 5 most negative relative gradient) for the hugging prompt. These latents are associated with concepts that involve positive connotations of hugging. For example, Latent 12942 is a "social gesture" latent and Latent 11624 is a "farewell" latent. SAE latentAuto-interp explanationSingle-phrase summary12942social interactions and gesturesSocial gesture11624words and phrases related to farewells and endingsFarewell1389keywords related to receiving or accepting information or visitorsAccept visitors16107elements related to user notifications and messages in a programming context 13374 greetings and salutations in various languages (top positive logits are the words related to "hello") Hello Table 7: Top 5 most negatively-aligned SAE latents for the hugging prompt. Discussion We apply a simple method that uses gradients to identify upstream variables influencing the refusal direction. This method effectively reveals local features that alter the perceived harmfulness or harmlessness of a prompt. Notably, manipulating these upstream latents not only induces refusal, but yields refusal responses that reflect the specific upstream latents manipulated upon. Related Work The methodology of using gradients is not new. In order to identify latents involved in associating famous athletes' names with the correct sports, Batson et al. used the attribution score: attri:=ai(di⋅∇xL), where ai is the activation of latent i, di is the decoder vector of latent i, and ∇xL is the gradient of the logit difference between correct and incorrect tokens. Marks et al. 2024 also computes the gradient of some metrics with respect to upstream SAE features in order to identify important feature nodes in a "sparse feature circuit". Limitations Despite promising results, this study has several limitations: Layer Selection: Our choice of which early layer to analyze was unprincipled - we chose layer 5 arbitrarily because it was not too early nor too close to the refusal layer. Future work could explore systematic methods to determine the most informative layers for this type of analysis.Prompt Selection: The prompts used in this study were hand-crafted. The results are thus not representative of those for randomly selected prompts. More diligence should be done to test this methodology over a wider distribution of prompts, and to characterize prompts for which the methodology gives interpretable outputs. Another area to explore is a more principled or automated prompt generation/selection process.Gradient Aggregation: We summed gradients across all token positions within prompts. This aggregation may obscure token-specific contributions, and alternative methods could better capture these variations. Future Directions This is a relatively short and simple work to find causally upstream features of refusal. There are numerous potential future research to expand on our work: Scaling Feature Discovery: Future work should aim to identify these features at scale, moving beyond individual prompts to generalize across a broader range of input scenarios. This scaling could enable the discovery of more systematic and generalized understanding of refusal circuits.Important Latents from Refusal-Inducing Prompts: Can we identify causal latents using prompts that already elicit refusal behavior? We hypothesize that in such scenarios, the refusal metric is already saturated, limiting the signal in the gradient.Studying Latents with Low Interpretability: While most SAE latents were interpretable, there were a few that were not interpretable, namely Latent 2213. Why does this latent have a high gradient? Can we further study these surprising latents  to back-engineer prompts where the model may refuse seemingly harmless concepts?
2025-01-14
https://www.lesswrong.com/posts/6bgAzPqNppojyGL2v/zvi-s-2024-in-movies
6bgAzPqNppojyGL2v
Zvi’s 2024 In Movies
Zvi
Now that I am tracking all the movies I watch via Letterboxd, it seems worthwhile to go over the results at the end of the year, and look for lessons, patterns and highlights. Table of Contents The Rating Scale. The Numbers. Very Briefly on the Top Picks and Whether You Should See Them. Movies Have Decreasing Marginal Returns in Practice. Theaters are Awesome. I Hate Spoilers With the Fire of a Thousand Suns. Scott Sumner Picks Great American Movies Then Dislikes Them. I Knew Before the Cards Were Even Turned Over. Other Notes to Self to Remember. Strong Opinions, Strongly Held: I Didn’t Like It. Strong Opinions, Strongly Held: I Did Like It. Megalopolis. The Brutalist. The Death of Award Shows. On to 2025. The Rating Scale Letterboxd ratings go from 0.5-5. Here is how I interpret the rating scale. You can find all my ratings and reviews on Letterboxd. I do revise from time to time. I encourage you to follow me there. 5: All-Time Great. I plan to happily rewatch this multiple times. If you are an adult and haven’t seen this, we need to fix that, potentially together, right away, no excuses. 4.5: Excellent. Would happily rewatch. Most people who watch movies frequently should see this movie without asking questions. 4: Great. Very glad I saw it. Would not mind a rewatch. If the concept here appeals to you, then you should definitely see it. 3.5: Very Good. Glad I saw it once. This added value to my life. 3: Good. It was fine, happy I saw it I guess, but missing it would also have been fine. 2.5: Okay. It was watchable, but actually watching it was a small mistake. 2: Bad. Disappointing. I immediately regret this decision. Kind of a waste. 1.5: Very Bad. If you caused this to exist, you should feel bad. But something’s here. 1: Atrocious. Total failure. Morbid curiosity is the only reason to finish this. 0.5: Crime Against Cinema. You didn’t even try to do the not-even-trying thing. The key thresholds are: Happy I saw it equals 3+, and Rewatchable equals 4+. Here is the overall slope of my ratings, across all films so far, which slants to the right because of rewatches, and is overall a standard bell curve after selection: The Numbers So here’s everything I watched in 2024 (plus the first week of 2025) that Letterboxd classified as released in 2024. The rankings are in order, including within a tier. The correlation of my ratings with Metacritic is 0.54, with Letterboxd it is 0.53, and the two correlate with each other at 0.9. Very Briefly on the Top Picks and Whether You Should See Them See The Fall Guy if you haven’t. It’s not in my top 10, and you could argue doesn’t have the kind of depth and ambition that people with excellent taste in cinema like the excellent Film Colossus are looking for. I say so what. Because it’s awesome. Thumbs up. You should almost certainly also see Anora, Megalopolis and Challengers. Deadpool & Wolverine is the best version of itself, so see it if you’d see that. A Complete Unknown is worthwhile if you like Bob Dylan. If you don’t, why not? Dune: Part 2 is about as good as Dune: Part 1, so your decision here should be easy. Conclave is great drama, as long as you wouldn’t let a little left-wing sermon ruin it. The Substance is bizarre and unique enough that I’d recommend that too. After that, you can go from there, and I don’t think anything is a slam dunk. Movies Have Decreasing Marginal Returns in Practice This is because we have very powerful selection tools to find great movies, and great movies are much better than merely good movies. That includes both great in absolute terms, and great for your particular preferences. If you used reasonable algorithms to see only 1 movie a year, you would be able to reliably watch really awesome movies. If you want to watch a movie or two per week, you’re not going to do as well. The marginal product you’re watching is now very different. And if you’re watching ‘everything’ for some broad definition of new releases, there’s a lot of drek. There’s also decreasing returns from movies repeating similar formulas. As you gain taste in and experience with movies, some things that are cool the first time become predictable and generic. You want to mitigate this rather than lean into it, if you can. There are increasing returns from improved context and watching skills, but they don’t make up for the adverse selection and repetition problems. Theaters are Awesome Seeing movies in theaters is much better than seeing them at home. As I’ve gotten bigger and better televisions I have expected this effect to mostly go away. It hasn’t. It has shrunk somewhat, but the focusing effects and overall experience matter a lot, and the picture and sound really are still much better. It seems I should be more selective about watching marginal movies at home versus other activities, but I should be less selective on going to the theater, and I’ve joined AMC A-List to help encourage that, as I have an AMC very close to my apartment. The correlation with my seeing it in a theater was 0.52, almost as strong as the correlation with others movie ratings. Obviously a lot of this was selection. Perhaps all of it? My impression was that this was the result of me failing to debias the results, as my experiences in a movie theater seem much better than those outside of one. But when I ran the correlation between [Zvi Review – Letterboxd] versus Theater, I got -0.015, essentially no correlation at all. So it seems like I did adjust properly for this, or others did similar things to what I did, perhaps. It could also be the two AI horror movies accidentally balancing the scales. I also noticed that old versus new on average did not make a difference, once you included rewatches. I have a total of 106 reviews, 34 of which are movies from 2024. The average of all reviews for movies not released in 2024, which involved a much lower ratio of seeing them in theaters, is 3.11, versus 3.10 for 2024. The caveat is that this included rewatches where I already knew my opinion, so newly watched older movies at home did substantially worse than this. I Hate Spoilers With the Fire of a Thousand Suns I hate spoilers so much that I consider the Metacritic or Letterboxd rating a spoiler. That’s a problem. I would like to filter with it, and otherwise filter on my preferences, but I don’t actually want to know the relevant information. I want exactly one bit of output, either a Yes or a No. It occurs to me that I should either find a way to make an LLM do that, or a way to make a program (perhaps plus an LLM) do that. I’ve had brief attempts at similar things in the past with sports and they didn’t work, but that’s a place where trying and failing to get info is super dangerous. So this might be a better place to start. I also don’t know what to do about the problem that when you have ‘too much taste’ or knowledge of media, this often constitutes a spoiler – there’s logically many ways a story could go in reality, but in fiction you realize the choice has been made. Or you’re watching a reality TV show, where the editors know the outcome, so based on their decisions you do too. Whoops. Damn it. One of the low-key things I loved about UnReal was that they broadcast their show-within-a-show Everlasting as it happens, so the editors of each episode do not know what comes next. We need more of that. Scott Sumner Picks Great American Movies Then Dislikes Them I saw five movies Scott Sumner also watched. They were five of the eight movies I rated 4 or higher. All platinum hits. Super impressive selection process. He does a much better job here than Metacritic or Letterboxd. But on his scale, 3 means a movie is barely worth watching, and his average is ~3. My interpretation is that Scott really dislikes the traditional Hollywood movie. His reviews, especially of Challengers and Anora, make this clear. Scott is always right, in an important sense, but a lot of what he values is different, and the movie being different from what he expects. My conclusion is that if Scott Sumner sees a Hollywood movie, I should make an effort to see it, even if he then decides he doesn’t like it, and I should also apply that to the past. I did previously make a decision to try and follow Sumner’s reviews for other movies. Unfortunately, I started with Chimes at Midnight, and I ended up giving up on it, I couldn’t bring myself to care despite Scott giving it 4.0 and saying ‘Masterpiece on every level, especially personally for Wells.’ I suspect it’s better if one already knows the Shakespeare? I do want to keep trying, but I’ll need to use better judgment. I Knew Before the Cards Were Even Turned Over I considered recording my predictions for films before I went to see them. I did not do this, because I didn’t want to anchor myself. But when I look back and ask what I was expecting, I notice my predictions were not only good, but scary good. I’ve learned what I like, and what signals to look for. In particular, I’ve learned how to adjust the Metacritic and Letterboxd rankings based on the expected delta. When I walked into The Fall Guy, I was super excited – the moment I saw the poster I instantly thought to myself ‘I’m in.’ I knew Megalopolis was a risk, but I expected to like that too. The movies that I hated? If I expected to not like them, why did I watch them anyway? In many cases, I wanted to watch a generic movie and relax, and then missed low a bit, even if I was sort of fine with it. In other cases, it was morbid curiosity and perhaps hoping for So Bad It’s Good, which combined with ‘it’s playing four blocks away’ got me to go to Madame Web. The worry is that the reason this happens is I am indeed anchoring, and liking what I already decided to like. There certainly isn’t zero of this – if you go into a movie super pumped thinking it’s going to be great that helps, and vice versa. I think this is only a small net effect, but I could be wrong. Other Notes to Self to Remember If the movie stinks, just don’t go. You know if the movie stinks. Trust your instincts and your gut feelings more than you think you should. Maybe gut feelings are self-fulfilling prophecies? Doesn’t matter. They still count. You love fun, meta, self-aware movies of all kinds, trust this instinct. You do not actually like action movies that play it straight, stop watching them. If the movie sounds like work or pain, it probably is, act accordingly. If the movie sounds very indy, the critics will overrate it. A movie being considered for awards is not a positive signal once you control for the Metacritic and Letterboxd ratings. If anything it is a negative. Letterboxd ratings adjusted for context are more accurate than Metacritic. Opinions of individuals very much have Alpha if you have enough context. Strong Opinions, Strongly Held: I Didn’t Like It What are the places I most strongly disagreed with the critical consensus? I disliked three movies in green on Metacritic: Gladiator 2, Monkey Man and Juror #2. I think I might be wrong about Monkey Man, in that I buy that it’s actually doing a good job at the job it set out to do, but simply wasn’t for me, see the note that I need to stop watching (non-exceptional) action movies that play it straight. I strongly endorse disliking Gladiator 2 on reflection. Denzel Washington was great but the rest of the movie failed to deliver on pretty much every level. I’m torn on Juror #2. I do appreciate the moral dilemmas it set up. I agree they’re clever and well-executed. I worry this was a case where I have seen so many courtroom dramas, especially Law & Order episodes, that there was too much of a Get On With It impatience – that this was a place where I had too much taste of the wrong kind to enjoy the movie, especially when not at a theater. The moments this is building towards? Those hit hard. They work. The rest of the time, though? Bored. So bored, so often. I know what this movie thinks those moments are for. But there’s no need. This should have been closer to 42 minutes. I do appreciate how this illustrates the process where the system convicts an innocent man. Potentially more than one. And I do appreciate the dilemmas the situation puts everyone in. And what this says about what, ultimately, often gets one caught, and what is justice. There’s something here. But man, you got to respect my time more than this. One could also include Civil War. I decided that this was the second clear case (the first was Don’t Look Up) of ‘I don’t want to see this and you can’t make me,’ so I didn’t see it, and I’m happy with at least waiting until after the election to do that. Strong Opinions, Strongly Held: I Did Like It I actively liked four movies that the critics thought were awful: Subservience, Joker: Folie a Duex, Unfrosted and of course Megalopolis. For Subservience, and also for Afraid, I get that on a standard cinema level, these are not good films. They pattern match to C-movie horror. But if you actually are paying attention, they do a remarkably good job in the actual AI-related details, and being logically consistent. I value that highly. So I don’t think either of us are wrong. There’s a reason Subservience got a remarkably long review from me: The sixth law of human stupidity says that if anyone says ‘no one would be so stupid as to,’ then you know a lot of people would do so at the first opportunity. People like to complain about the idiot ball and the idiot plot. Except, no, this is exactly the level of idiot that everyone involved would be, especially the SIM company. If you want to know how I feel when I look at what is happening in the real world, and what is on track to happen to us? Then watch this movie. You will understand both how I feel, and also exactly how stupid I expect us to be. No, I do not think that if we find the AI scheming against us then we will even shut down that particular AI. Maybe, if we’re super lucky? The world they built has a number of obvious contradictions in it, and should be long dead many times over before the movie starts or at least utterly transformed, but in context I am fine with it, because it is in service of the story that needs to be told here. The alignment failure here actually makes sense, and the capabilities developments at least sort of make sense as well if you accept certain background assumptions that make the world look like it does. And yes, people have made proposals exactly this stupid, that fail in pretty much exactly this way, exactly this predictably. Also, in case you’re wondering why ‘protect the primary user’ won’t work, in its various forms and details? Now you know, as they say. And yeah, people are this bad at explaining themselves. In some sense, the suggested alignment solution here is myopia. If you ensure your AIs don’t do instrumental convergence beyond the next two minutes, maybe you can recover from your mistakes? It also of course causes all the problems, the AI shouldn’t be this stupid in the ways it is stupid, but hey. Of course, actual LLMs would never end up doing any of this, not in these ways, unless perplexity suggested to them that they were an AI horror movie villain or you otherwise got them into the wrong context. Also there’s the other movie here, which is about technological unemployment and cultural reactions to it, which is sadly underdeveloped. They could have done so much more with that. Anyway, I’m sad this wasn’t better – not enough people will see it or pay attention to what it is trying to tell them, and that’s a shame. Still, we have the spiritual sequel to Megan, and it works. Finally (minor spoilers here), it seems important that the people describing the movie have no idea what happened in the movie? As in, if you look at the Metacritic summary… it is simply wrong. Alice’s objective never changes. Alice never ‘wants’ anything for herself, in any sense. If anything, once you understand that, it makes it scarier. Unfrosted is a dumb Jerry Seinfeld comedy. I get that. I’m not saying the critics are wrong, not exactly? But the jokes are good. I laughed quite a lot, and a lot more than at most comedies or than I laughed when I saw Seinfeld in person and gave him 3.5 GPTs – Unfrosted gets at least 5.0 GPTs. Live a little, everyone. I needed something to watch with the kids, and this overperformed. There are good jokes and references throughout. This is Seinfeld having his kind of fun and everyone having fun helping him have it. Didn’t blow me away, wasn’t trying to do so. Mission accomplished. Joker: Folie a Duex was definitely not what anyone expected, and it’s not for everyone, but I stand by my review here, and yes I have it slightly above Wicked: You have to commit to the bit. The fantasy is the only thing that’s real. Beware audience capture. You are who you choose to be. Do not call up that which you cannot put down. A lot of people disliked the ending. I disagree in the strongest terms. The ending makes both Joker movies work. Without it, they’d both be bad. With it, I’m actively glad I saw this. I liked what Film Colossus said about it, that they didn’t like the movie but they really loved what it was trying to do. I both loved what it was trying to do and kinda liked the movie itself, also I brought a good attitude. Megalopolis For Megalopolis, yes it’s a mess, sure, but it is an amazingly great mess with a lot of the right ideas and messages, even if it’s all jumbled and confused. If you don’t find a way to appreciate it, that is on you. Perhaps you are letting your sense of taste get in the way. Or perhaps you have terrible taste in ideas and values? Everyone I know of who said they actively liked this is one of my favorite people. This movie is amazing. It is endlessly inventive and fascinating. Its heart, and its mind, are exactly where they need to be. I loved it. Don’t get me wrong. The movie is a mess. You could make a better cut of it. There are unforced errors aplenty. I have so, so many notes. The whole megalopolis design is insufficiently dense and should have been shoved out into the Bronx or maybe Queens. But none of that matters compared to what you get. I loved it all. And it should terrify you that we live in a country that doesn’t get that. The Brutalist Then there’s The Brutalist, which the critics think is Amazingly Great (including Tyler Cowen here). Whereas ultimately I thought it was medium, on the border between 3 and 3.5, and I’m not entirely convinced my life is better because I saw it. So the thing is… the building is ugly? Everything he builds is ugly? That’s actually part of why I saw the film – I’d written a few weeks ago about how brutalist/modern architecture appears to be a literal socialist conspiracy to make people suffer, so I was curious to see things from their point of view. We get one answer about ‘why architecture’ and several defenses of ‘beauty’ against commercial concerns, and talk about standing the test of time. And it’s clear he pays attention to detail and cares about the quality of his work – and that technically he’s very good. But. The. Buildings. Are. All. Ugly. AF. He defends concrete as both cheap and strong. True enough. But it feels like there’s a commercial versus artistic tension going the other way, and I wish they’d explored that a bit? Alas. Instead they focus on what the film actually cares about, the Jewish immigrant experience. Which here is far more brutalist than the buildings. It’s interesting to see a clear Oscar-bound film make such a robust defense of Israel, and portray America as a sick, twisted, hostile place for God’s chosen people, even when you have the unbearable weight of massive talent. Then there’s the ending. I mouthed ‘WTF?’ more than once, and I still have no idea WTF. In theory I get the artistic choice, but really? That plus the epilogue and the way that was shot, and some other detail choices, made me think this was about a real person. But no, it’s just a movie that decided to be 3.5 hours long with an intermission and do slice-of-hard-knock-life things that didn’t have to go anywhere. Ultimately, I respect a lot of what they’re doing here, and that they tried to do it at all, and yes Pierce and Brody are great (although I don’t think I’d be handing out Best Actor here or anything). But also I feel like I came back from an assignment. Since I wrote that, I’ve read multiple things and had time to consider WTF, and I understand the decision, but that new understanding of the movie makes me otherwise like the movie less and makes it seem even more like an assignment. Contra Tyler I definitely did feel like this was 3.5 hours long. I do agree with many of Tyler’s other points (including ‘recommended, for some’!) although the Casablanca angle seems like quite a stretch. One detail I keep coming back to, that I very much appreciate and haven’t seen anyone else mention, is the scene where he is made to dance, why it happens and how that leads directly to other events. I can also see the ‘less interesting’ point they might have been going for instead, and wonder if they knew what they were doing there. My new ultimately here is that I have a fundamentally different view than the movie of most of the key themes in the movie does, and that made it very difficult for me to enjoy it. When he puts that terrible chair and table in the front of the furniture store, I don’t think ‘oh he’s a genius’ I think ‘oh what a pretentious arse, that’s technically an achievement but in practice it’s ugly and non-functional, no one will want it, it can’t be good for business.’ It’s tough to enjoy watching a (highly brutal in many senses, as Tyler notes!) movie largely about someone being jealous of and wanting the main character’s talent when you agree he’s technically skilled but centrally think his talents suck, and when you so strongly disagree with its vision, judgment and measure of America. Consider that the antagonist is very clearly German. The upside-down Statue of Liberty tells you a lot. The Death of Award Shows We’ve moved beyond award shows, I think, now that we have Metacritic and Letterboxd, if your goal is to find the best movies. In terms of the Oscars and award shows, I’ll be rooting for Anora, but wow the awards process is dumb when you actually look at it. Knowing what is nominated, or what won, no longer provides much alpha on movie quality. Giving the Golden Globe for Best Musical or Comedy to Emelia Perez (2.8 on Letterboxd, 71 on Metacritic) over Anora (4.1 on Letterboxd, 91 on Metacritic) or Challengers tells you that they cared about something very different from movie quality. There were and have been many similar other such cases, as well, but that’s the one that drove it home this year – it’s my own view, plus the view of the public, plus the view of the critics when they actually review the movies, and they all got thrown out the window. Your goal is not, however, purely to find the best movies. Robin Hanson: The Brutalist is better than all these other 2024 movies I’ve seen: Anora, Emilia Perez, Wicked, Conclave, Dune 2, Complete Unknown, Piano Lesson, Twisters, Challengers Juror #2, Megalopolis, Civil War. Engaging, well-made, but not satisfying or inspiring. Tyler Cowen: A simple question, but if this is how it stands why go see all these movies? Robin Hanson: For the 40 years we’ve been together, my wife & I have had a tradition of seeing most of the Oscar nominated movies every year. Has bonded us, & entertained us. I like that tradition, and have tried at times a similar version of it. I think this made great sense back in the 1990s, or even 2000s, purely for the selection effects. Today, you could still say do it to be part of the general conversation, or as tradition. And I’m definitely doing some amount of ‘see what everyone is likely to talk about’ since that is a substantial bonus. But I think we’d do a lot better if the selection process was simply some aggregate of Metacritic, Letterboxd and (projected followed by actual) box office. You need box office, because you want to avoid niche movies that get high ratings from those that choose to watch them, but would do much less well with a general audience. On to 2025 I definitely plan on continuing to log and review all the movies I see going forward. If you’re reading this and think I or others should consider following you there, let me know in the comments, you have permission to pitch that, or to pitch as a more general movie critic. You are also welcome to make recommendations, if they are specifically for me based on the information here – no simply saying ‘I thought [X] was super neat.’ Tracking and reviewing everything been a very useful exercise. You learn a lot by looking back. And I expect that feeding the data to LLMs will allow me to make better movie selections not too long from now. I highly recommend it to others.
2025-01-13
https://www.lesswrong.com/posts/xXtDCeYLBR88QWebJ/heritability-five-battles
xXtDCeYLBR88QWebJ
Heritability: Five Battles
steve2152
(See changelog at the bottom for minor updates since publication.) 0.1 tl;dr This is an opinionated but hopefully beginner-friendly discussion of heritability: what is it, what do we know about it, and how we should think about it? I structure my discussion around five contexts in which people talk about the heritability of a trait or outcome: (Section 1) The context of guessing someone’s likely adult traits (disease risk, personality, etc.) based on their family history and childhood environment.…which gets us into twin and adoption studies, the “ACE” model and its limitations and interpretations, and more.(Section 2) The context of assessing whether it’s plausible that some parenting or societal “intervention” (hugs and encouragement, getting divorced, imparting sage advice, parochial school, etc.) will systematically change what kind of adult the kid will grow into.…which gets us into what I call “the bio-determinist child-rearing rule-of-thumb”, why we should believe it, and its implications for how we should think more broadly about children and childhood—and, the many important cases where it DOESN’T apply!(Section 3) The context of assessing whether it’s plausible that a personal intervention, like deciding to go to therapy, is likely to change your life—or whether “it doesn’t matter because my fate is determined by my genes”.(…spoiler: the latter sentiment is deeply confused!)(Section 4) The context of “polygenic scores”.…which gets us into “The Missing Heritability Problem”. I favor explaining the Missing Heritability Problem as follows:For things like adult height, blood pressure, and (I think) IQ, the Missing Heritability is mostly due to limitations of present gene-based studies—sample size, rare variants, copy number variation, etc.For things like adult personality, mental health, and marital status, the (much larger) Missing Heritability is mostly due to non-additive genetic effects (a.k.a. epistasis), i.e. a nonlinear relationship between genome and outcomes.In particular, I argue that non-additive genetic effects are important, widely-misunderstood, and easy to estimate from existing literature.(Section 5) The context of trying to understand some outcome (schizophrenia, extroversion, intelligence, or whatever) by studying the genes that correlate with it.I agree with skeptics that we shouldn’t expect behavior genetics studies to be magic bullets that lead directly to clear mechanistic explanations of how these outcomes come about. But the studies do seem helpful on the margin. 0.2 Introduction I just finished reading Eric Turkheimer’s new book Understanding The Nature-Nurture Debate. It’s an easy read, pedagogical, and mercifully short—almost pocket-sized. Good for novices like me.[1] Turkheimer is very opinionated about all kinds of things, and I don’t always agree with him. But that’s healthy, and his book seems pretty clear about the lay of the land, and where consensus ends and his own takes begin. From the book, I got a vague impression that there’s a war going on between “hereditarian” versus “anti-hereditarian”, and that Turkheimer is proudly on the “anti-hereditarian” side. Don’t get the wrong idea: Turkheimer is a leading researcher in behavior genetics, past President of the Behavior Genetics Association, etc. But my impression is that, while Turkheimer is perfectly willing to criticize people he sees as too far on the side of ignoring genes (example from his blog), those people don’t seem to get his goat nearly as much as the people he sees as too far on the opposite side. Anyway, I’m giving some cultural context, but pro- versus anti- is (as always) a terrible framing for a complex field of inquiry. Here’s a better framing: You can’t answer a question about how to correctly think about heritability until you answer a more basic question about why you’re thinking about heritability in the first place. (“Hug the query”!) Different questions bring different issues to the foreground, are informed by different data, and require different caveats. For example, I’ll explain below how there are two methods to estimate heritability, and one gives numbers around 50%, the other as low as 5% or less. Which one is “right” and which one is “wrong”? Is heritability irrelevant or central? Depends on the question you’re asking! So here’s where I’m at. This post has an (incomplete) list of five reasons that someone might care about heritability. Each is accordingly the site of a battle over whether heritability is important, and what its implications are. For each of these five issues, I’ll talk about where I stand, where Turkheimer stands, key concepts, and common confusions and pitfalls. So let’s dive in! …But first, 0.3 What is heritability? Heritability is “the degree of variation in a phenotypic trait in a population that is due to genetic variation between individuals in that population” (per wikipedia). It ranges from 0% to 100%. If everyone in the world had the same DNA (i.e., if everyone were identical twins with everyone else), there would be much less global variation in eye color, somewhat less global variation in height, and just as much global variation in native language. Therefore, the heritability of those three things in the global population is high, medium, and zero, respectively. Note some subtle consequences of this definition: “Organic” things like “how many arms you have”, while obviously caused by genes, can have low heritability in the global population, because there’s not much global variation in “number of arms”, and what variation there is is pretty random—mostly from accidents, I assume. If everyone in the world had the same DNA, the standard deviation of “number of arms” would be only a little bit smaller than it is today. Thus, “number of arms” is extremely related to genes, but variation among people in “number of arms” is only sometimes related to genes. And remember that heritability is always by definition about variation among people.Conversely, non-“organic” things, like divorce, are heritable too. You could think of those as being partly downstream from “organic” things like personality, looks, etc., which are more intuitively related to genes.…Thus, “X is Y% heritable” is not directly informative about whether we should think of X as a kinda “organic” thing like how many arms you have, or whether we should think of X as a kinda non-“organic” outcome like whether you’re divorced, which is very sensitive to cultural norms around marriage.The heritability of a trait in a population depends a lot on the population (in a particular time and place). As a simple example, if aliens zapped half of the USA population at random with a death ray, then bam, the heritability of lifespan among the USA population would get dramatically lower, because now there’s a giant new source of variation uncorrelated with genes, so the fraction of total variation associated with genes is now lower. A real-life example of this: I believe heritability of height is lower in countries where many people are malnourished.From Turkheimer’s book, p95: A classic example of the problem of identifying causes based on genetic correlations was developed many years ago by the sociologist Christopher Jencks. Imagine a society in which children with red hair are despised. Their mothers fail to feed them, their families ignore them, their peers bully them, and schools discriminate against them. We would not be surprised to learn that such children would grow up with all sorts of problems, perhaps including low IQ scores. In that world, genes related to having red hair would be correlated with IQ; they would look like “IQ genes.” (If that example seems trivial, try substituting skin color for hair color.) 1. Maybe you care about heritability because: you’re trying to guess someone’s likely adult traits based on their family history, and childhood environment For example, if my parents were alcoholic, am I at elevated risk of alcoholism? (Spoiler: yes.) What if I were adopted as a young child—to assess my risk of future alcoholism, should I be asking whether my adoptive parents are alcoholic, or whether my biological parents are alcoholic? (Spoiler: the latter). To answer these kinds of questions, “classic” behavior genetics methodologies like twin studies and adoption studies provide a treasure trove of data. A handy summary of this whole literature is summed up by Turkheimer’s famous “Three Laws of Behavioral Genetics” (2000): First Law. All human behavioral traits are heritable.Second Law. The effect of being raised in the same family is smaller than the effect of genes.Third Law. A substantial portion of the variation in complex human behavioral traits is not accounted for by the effects of genes or families. 1.1 The ACE model, and the classic twin study Heritability is easiest to think about when things are “linear” (in math lingo), a.k.a. “additive” (in genetics lingo). (Whether things are in fact additive / linear is another question; much more on that later.) For example, if we’re talking about heritability of height, imagine that having a Single-Nucleotide Polymorphism (SNP) type 838723 increases your height by 34 μm compared to the counterfactual where you don’t have it (an additive genetic effect); and having parents who bought Wheaties breakfast cereal when you were in kindergarten increases your height by 2.7 μm on average compared to the counterfactual where they bought Frosted Flakes breakfast cereal instead (an additive environmental effect); and so on for every other aspect of your genome and life. Non-additive effects, by contrast, would be if SNP 838723 or Wheaties breakfast cereal increased in heights by systematically different amounts (perhaps even with different signs) depending on other genes, or depending on socioeconomic situation, etc. Again, much more on non-additivity later. Quoting Turkheimer (p53) [formatting and emphasis added]: [There’s] a framework called the ACE model that distilled quantitative genetics into three bitesize pieces that could be consumed by social scientists conducting twin and adoption research. The A in ACE, the genetic portion of the model, stands for “additive” … a sum of many small genetic effects. The A term is the simplest estimate of heritability. The C and E in the model are the environmental portions.C stands for “common” and represents the environmental effect of families. It is usually referred to as the “shared environment,” the part of the environment that is shared by siblings raised together, or the portion of the environment that tends to make siblings in the same family more alike.E stands for … environment, I think, and represents the flip side of C: it is the part of the environment that is not shared by siblings, all nongenetic events (including measurement error in tests) that make siblings raised together different. Here’s an illustration of how the ACE model works, in the context of a twin study. Twin studies, by the way, are one of the most common types of study in behavior genetics, and are methodologically very simple: you measure some property for many pairs of twins, and you almost always find that identical twins (a.k.a. “MZ” twins, for “monozygotic”) are more similar to each other than fraternal (“DZ”, dizygotic) twins are. Quantify that, and bam, you get an estimate of heritability, as follows. The ACE model applied to a twin study looks like the following: ACE model applied to a twin study. See the Wikipedia article Structural Equation Modeling if you have no idea wtf is going on in this diagram. Image copied from Handbook of Behavior Genetics, Fig. 2.4. Acronyms: DZ = dizygotic (a.k.a. fraternal) twins, MZ = monozygotic (a.k.a. identical) twins, P is some measured outcome. As you can check for yourself (assuming you know how Structural Equation Modeling works), the twin-study ACE formula winds up being surprisingly simple: A=2(rMZ−rDZ),C=2rDZ−rMZ,E=1−rMZ, where rDZ and rMZ are the correlations in the outcome (labeled “P” in the figure) between DZ and MZ twins respectively, and A, C, and E are the fraction of overall population variance in the outcome that originates in each of those three categories. 1.2 What do these studies find? By and large, if the study population is within the developed world (and excludes grinding poverty, as is usually the case), and if the outcome is measured in adulthood, then these kinds of studies find that about half the variance comes from additive genetic effects, and the other half from non-shared environment—in other words, A≈50%, C≈0%, E≈50%. Obviously I’m grossly oversimplifying; it depends on the outcome being measured, the methodology used, and the study population. I’ll talk in §2.2 about some cases where C is large, and in §4 about how and why a different set of measurement techniques (GWAS’s) gives dramatically lower A’s in certain cases. 1.3 What does C (shared environment) ≈ 0% mean? (For reasons I’ll get into later, you should be cautious in taking at face value the value of C from classic twin studies, because of non-additive genetic effects, a.k.a. epistasis. But adoption studies also often find C≈0 for adult traits, and they don’t have that problem.) Again, the theme of this section is: “you’re trying to guess someone’s likely adult traits based on their family history, and childhood environment”. So for example, I believe C≈0 for pretty much any aspect of adult personality.[2] That means: if someone is adopted at birth, and you, a passively-observing third party, are interested in guessing whether that person will grow up to be neurotic, happy, anxious, etc., then it’s barely worth your time to ask questions about their childhood adoptive household. For example: Did their adoptive parents get divorced and move around all the time? Eh, don’t bother asking, it won’t much help you guess whether the kid will grow into an adult who is neurotic, happy, anxious, etc.Did their adoptive parents have anger management and drug issues? Eh, don’t bother asking, it won’t much help you guess whether the kid will grow into an adult who is neurotic, happy, anxious, etc.Did their adoptive parents give them organic food, or vaccines, or private schools, or cell phones? Eh, don’t bother asking, it won’t much help you guess whether the kid will grow into an adult who is neurotic, happy, anxious, etc. After all, if any of those things were a major factor in adult personality, then we’d see in the data that growing up in the same household would make people more similar in their adult personality. And we don’t see that. (But see §2 for more nuances.) 1.4 What is E, really? E (“non-shared environment”) is a catch-all; there’s a Scott Alexander post that lists some things that are part of E. Leaving aside all the elaboration and examples, he mentions the following list of some of the contributors to E: Measurement noise for the outcome / trait. For example, people might put slightly different answers on a personality test depending on how they feel that day. Luck of the draw, in life. For example, maybe both of a pair of MZ twins have friend groups with statistically-similar properties, but one of the friend groups happens to contain a friend who hooks the twin up with a wonderful job and spouse. Biological sources of variation, such as biological random noise, randomness related to the immune system, and genes that differ between MZ twins (this study found ≈360 new mutations separating the average pair of identical twins). Epigenetic differences between MZ twins might or might not be in this category, depending on what’s upstream of those differences. On the other hand, here’s something that seems not to meaningfully contribute to E: [Apparently NOT a significant contribution to E]: Any aspect of “non-shared environment” that you can objectively measure, such as “differential parenting, differential peer relationships, differential sibling interaction, differential teacher relationships, family constellation (e.g., birth order, age, age spacing, gender differences)” (quote from Turkheimer & Waldron 2000). As it turns out, none of those things have an effect that can be statistically distinguished from zero. In his Three Laws of Behavior Genetics article mentioned above, Turkheimer calls this fact the “gloomy prospect”, since it’s bad news for those hoping to explain human differences. (However, see discussion of birth order in §2.2.3 below.) 1.5 Twin study assumptions Calculating ACE from twin data is extremely simple if you make certain assumptions, and a horrifically complicated mess if you don’t. Examples of common assumptions (many of which, to be clear, can be quantified and controlled in more sophisticated analyses): 1.5.1 The simplest twin study analysis assumes that mitochondrial DNA isn’t important Mitochondrial DNA has only a few dozen genes, but one might think that they’re inordinately important ones, being at the foundation of creating all the energy and building materials used by your body.[3] Whatever their effect is, it would erroneously wind up in the C category in twin studies, since they’re (approximately) shared between DZ twins. (In other words, mitochondrial DNA means that DZ twins are slightly more than 50% related on average.) 1.5.2 The simplest twin study analysis assumes that assortative mating isn’t important Assortative mating entails parents being (in effect[4]) more genetically similar than two random people in the study population. This also erroneously puts genetic influence into the C category, for the same reason as above, i.e. making DZ twins more than 50% similar. 1.5.3 The simplest twin study analysis assumes no interaction between genes and shared environment If genes have different influences in different family environments, then this effect would get erroneously lumped into A by the simplest twin-study methodology. Here’s a made-up example, which is exaggerated for simplicity, but (on priors) seems like generally a kind of thing that could happen: Exaggerated toy example of what a gene × shared environment interaction might look like. Certain genes systematically increase college attendance in some shared family environments, but the same genes systematically decrease college attendance in other shared family environments. In this made-up example, there would in fact be C>0 in a twin study, because the left-column kids are going to college at a higher rate than the right-column kids. But this measured C would be “too small”, in a sense—it would not be reflecting the full extent of shared-environment effects. Suppose half the households are in each column. Then in a twin study, the genetic population variation related to being conformist / iconoclast would contribute entirely to A—it would look like a pure source of additive genetic variation. After all, insofar as conformism contributes to this outcome, the MZ twins will be more similar in the left-column households, and the MZ twins will also be more similar in the right-column households. But in fact, lumping this source of variation entirely into A is somewhat misleading, because the directional influence of those genes is wholly dependent on the shared environment. OK, now that you hopefully understand what gene × environment interactions are, we can ask: are they important? I found an argument for “no” in this blog post by pseudonymous blogger “Cremieux”: No one has been able to find evidence for this sort of thing at scale, and we have both methods and ample data to find out if it’s true … Another method to test for gene-environment interactions is to leverage adoption data. Halpern-Manners et al. did this in 2020 [for educational attainment, and found that] the biological and environmental effects of parents were additive, not interactive. Hmm, I’m not an expert, but OK sure, that sounds reasonable. 1.5.4 The simplest twin study analysis involves the “Equal Environment Assumption” (EEA) A popular complaint about classic twin studies is the claim that the environment of MZ twins is in fact more similar than the environment of DZ twins—in the lingo, that the EEA is false. If this complaint is valid, it would undermine the study design. After all, without EEA, MZ twins would have more similar genes than DZ twins, and MZ twins would have more similar environment than DZ twins. So then we take our measurements, and you guessed it, the MZ twins wind up more similar than the DZ twins. We wouldn’t learn anything useful from that! Now, my statement of the EEA above was a bit imprecise, and this is actually a rather popular misunderstanding. In fact, the EEA can be valid even if parents treat MZ twins more similarly than DZ twins. The EEA is only violated if this difference is specifically because the twins are MZ twins, and the parents know it. This is a bit of a moot point in practice, for reasons below, but I think it’s conceptually helpful for understanding what heritability is and isn’t. So I put it in this box: Box: The “Equal Environment Assumption” does not really require “equal environments” Suppose that in general, throughout a population, (1) a kid’s genes can sometimes cause the kid being a strong-willed limit-tester,(2) if a kid is a strong-willed limit-tester, then that tends to cause his parents to be on average stricter and less affectionate towards him,(3) if parents are stricter and less affectionate towards a kid, then that tends to cause the kid to suffer from depression and anxiety many decades later. [I don’t endorse this step as plausible, I’m just trying to illustrate a narrow point.] In this case, this (1-3) pathway contributes to measured heritability of the “adult depression and anxiety” outcome. Does this (1-3) pathway lead to a violation of EEA? You might think so! After all, for DZ twins, it will more often be the case that parents are much stricter towards one twin than the other. By contrast, for MZ twins, it’s more likely that either both or neither of the pair will be a strong-willed limit-tester, so they’ll get a more similar parental treatment. You can just look—you’ll see that the environments are more similar for MZ twins. So EEA is violated, right? Wrong! In fact, this (1-3) pathway should not count as an EEA violation. On the contrary, (1-3) is supposed to be part of heritability, and the twin study would be correct in counting it as A, and any other study design (e.g. adoption studies) would correctly capture this phenomenon as part of heritability as well. If you think there’s a problem here, then you’re misunderstanding the definition of heritability! And conversely, if anyone treats “depression has C≈0” as direct evidence against the possibility of (3), then they’re misunderstanding the definition of heritability too! (I think there’s a good argument against (3), but it requires more steps than just saying “C≈0 therefore (3) is false”.[5]) Genes in a population can affect outcomes through a great many causal pathways, and those pathways often involve the environment and other people, including their parents and teachers. See the redhead example at the top. Well, anyway, when I flip through the literature, I get the impression that the above is a bit of a moot point anyway. Even when people (incorrectly) jump from the observation “parents spend more time with one sibling than another” to the conclusion “EEA is violated”, it doesn’t much matter, because those inequalities in parental treatment don’t measurably affect adult outcomes anyway! (See discussion of “the gloomy prospect” above.) For what it’s worth, Turkheimer discusses EEA briefly on p70 as follows: The validity of the EEA is very difficult to evaluate empirically because there is no ready way to quantify the similarity of the rearing environment of two people. Some tests are possible, for example by examining twins whose parents had an incorrect impression of their zygosity, raising identical twins thinking they were fraternal, and vice versa. These tests generally support the idea that the EEA is at least reasonable, but I think a dose of skepticism is nevertheless warranted. The questionable validity of the EEA may not be a reason to suggest that the findings of twin studies are completely imaginary, but it is a reason not to obsess over the second decimals of heritability coefficients. Good enough for me; I am perfectly happy to not obsess over the second decimals of heritability coefficients. 1.5.5 The simplest twin study analysis assumes there’s no non-additive genetic effects I actually think these non-additive effects are very important for many (not all!) traits, including personality, mental health, and many aspects of behavior. Much more on this in §4.3.3 below. When these effects are important, the ACE model is invalidated. If you mistakenly apply the ACE model anyway, in a classic twin study where there is in fact a lot of non-additive genetic effects: The calculated A winds up high, despite the fact that “additive genetic contribution to variance” is in fact low. The calculated C winds up reflecting a balance: (1) C gets pulled up by shared environment effects (obviously), along with a few other effects like assortative mating (§1.5.2) (2) C gets pulled down by the non-additive genetics. Indeed, C can even wind up below zero! The reason for (2) is simple: DZ twins share about half their genes, but only about a quarter of pairs of genes. So with gene × gene interactions, you can get rDZ as low as 14rMZ, and thus C=2rDZ–rMZ<0. (And likewise, if there’s gene × gene × gene interactions, then rDZ could be as low as 18rMZ, etc.) So, reporting naive ACE results in the presence of non-additive genetic effects is misleading. And even more misleading is doing a naive ACE analysis in the presence of non-additive genetics, but sweeping the nonsensical C<0 result under the carpet, by treating C≥0 as a constraint in the fitting. I’ve seen people do that too. The correct approach, of course, is to not use an ACE model when there’s non-additive genetic effects, but rather to use an “ACDE” model instead. The D is the non-additive genetic contribution to variance. If you don’t have an ACDE analysis on-hand from the literature, here’s my quick lazy qualitative recipe. First, you can guess the real shared-environment effect from adoption studies, which don’t have issues with non-additive genetic effects, and you’ll usually find C≈0 for adult traits, with some exceptions in §2.2 below. Assuming that’s the case, second, check the classic twin studies for the inequality rDZ<12rMZ, which (in this case) is basically a tell-tale sign of non-additivity. The more that rDZ<12rMZ, the stronger the non-additivity, I think, although this clue can be hidden by shared environment and assortative mating, as mentioned above. Separately, another tell-tale sign of non-additive genetic effects is if the outcome / trait has an unusually large amount of “Missing Heritability”—I’ll explain what that is in §4 below. Here’s a practical intuition to keep in mind for highly-non-additive genetic contributions to traits / outcomes: non-additive genetic effects leads to similarity that falls off extra-rapidly with relatedness. If you have an MZ twin, their traits are as predictive as ever, but as you move out to siblings and parents, then to half-siblings, then cousins, and so on, their relevance to your own likely outcomes falls off even faster than normal. So for example, I think personality traits and mental health outcomes generally have a lot of non-additive genetics (§4.4.2 below). So that means, as you look over your family tree to assess your likely adult personality, you can feel a little less concerned about turning out like your cranky reclusive Great Uncle Herbert. So yeah, I think non-additive genetics is a great reason to question the interpretation of classic twin studies, particularly in the context of mental health, personality, and behavior. Again, more on this in §4.3.3 below. We can still rescue the “shared-environment effects are usually a small contributor to population variation of adult traits” rule-of-thumb via adoption studies—specifically, if pairs of siblings raised apart are generally about as similar as pairs of siblings raised together, then childhood home environment is probably not a huge contributor to population variation. 1.5.6 Summary: what do we make of these assumptions? Here’s Turkheimer p73: All these reasons to be skeptical of human heritability coefficients and the studies that generate them—the equal environments assumption, the assumptions of independence and additivity—are well known, more or less valid, and have been discussed to death. There are, in fact, many other similar assumptions about human behavior genetics that I have not bothered to mention. In adoption studies, for example, there is the fact that babies do not get put up for adoption at random, so there is selection of both biological and adoptive parents. To some extent, the babies from the best-off biological mothers get placed with the most affluent adoptive families, a phenomenon called selective placement. Babies are always exposed to the perinatal conditions provided by the biological mother, and since they are not always adopted right at birth, they are often exposed to the postnatal conditions as well. Does this mean that the results of adoption studies are meaningless, that the masses of evidence showing correlations between biological parents and their adopted away children might be illusory? No. There is too much evidence pointing in the same direction, arising from too wide a range of research designs, subject populations, and behavioral differences. What it does mean, however, is that obsessing over the values of heritability estimates from adoption studies is a waste of time. That all sounds reasonable to me. 1.6 Side-note on comparing parents to children Incidentally, if you're looking at parents and trying to guess what children will be like (i.e. parent-child correlation), and you want to know whether genes or environment is more important for this metric, then comparing A to C is misleading. You really want to be comparing A to √C, more or less, and that accentuates C because C<1. The reason is: “vertical phenotypic transmission just has to pass once, from parent to child, whereas additive genetics has to pass twice, from parent genes to parent, and also from child genes to child.” (Source and details: “Tailcalled” on X.) 2. Maybe you care about heritability because: you’re trying to figure out whether some parenting or societal intervention will have a desired effect Here’s a rule of thumb. Hopefully nobody is irresponsible enough to state this literally with no caveats, but it’s definitely a vibe you get in certain circles, like reading Judith Harris and Steven Pinker and Bryan Caplan: The bio-determinist child-rearing rule of thumb [but see caveats below!]: Things you do as a parent will have generally small or zero effects on what the kid will be like as an adult—their personality, their intelligence and competence, their mental health, etc. Likewise if you’re a town official deciding the playground budget, and likewise if you’re a teacher offering advice or setting the curriculum, etc. Why should we believe that? Well, twin and adoption studies usually find small shared-environment effects, a.k.a. C≈0. In particular, biological siblings tend to turn out about as similar if they’re raised in the same household versus different households, and likewise adoptive siblings do not wind up that much more similar than any two random people in the population, once they reach adulthood—see handy table here. But adoptive siblings tend to grow up with relatively similar (compared to population variation) amounts of screen time limitations, and of parents attending or not attending their football games, and of eating organic versus non-organic food, and of parents flying off the handle at them, and being in a better or worse school district, etc. …But not so fast! That observation proves something important, but the rule-of-thumb box above is bulldozing over many important details. I’ll jump into four major caveats in §2.1–§2.4 next. Then in §2.5 I’ll circle back to the box itself, which (despite the caveats) still has a huge and important kernel of truth, and I’ll talk about its implications for how we think about children and childhood. 2.1 Caveat 1: The rule-of-thumb only applies within the distribution of reasonably common middle-class child-rearing practices Adoption studies provide direct evidence for this rule-of-thumb, but only in cases where the thing you’re doing is pretty common, but not doing the thing would also be pretty common, in a way that’s not too stratified by class and country (since adoption studies tend to be within-class and within-country—otherwise the effects of adoption are larger). In other words, if some intervention happens to virtually everybody, or virtually nobody, in a population, then it’s possible for the intervention to be very impactful while nevertheless C≈0—remember, C is about explaining the population variation. So here are some examples where the rule-of-thumb does not apply: Grinding poverty, oppression, etc.: I added “middle class” in the section heading because shared-environment effects tend to be higher if the population includes grinding poverty. For example, there’s a common trope that adult IQ lacks shared-family effects, and this is generally true within the rich world, but international adoption of a child from an impoverished country to the rich world does increase their adult IQ (see e.g. Bryan Caplan—1, 2). Extreme abuse and neglect: I find it pretty likely that extreme abuse and neglect could alter what the kid will be generally like as an adult. To take a clear-cut example: I think it’s well-established that there are very obvious lifelong effects from childhood malnourishment, or from being a feral child. So we should be at least open-minded to the possibility of lifelong systemic impacts from somewhat less extreme childhood conditions as well.[6] New technologies, new cultural trends, etc.: Twin and adoption studies don’t directly offer any information about whether new unprecedented technologies, or new unprecedented cultural shifts, or new unprecedented parenting styles, etc., will change what the kid will be like as an adult, because those activities are not part of the study populations of any existing paper. (Granted, in some cases, the existing literature provides indirect evidence—is this thing really completely “unprecedented”?) Skill issues, i.e. a scenario where it’s possible in principle for parents to wildly change some aspect of their children’s adult personalities, health, etc., by appropriate words and actions, but vanishingly few parents are able to actually do so. Or perhaps, vanishingly few are really even trying. Here’s an example, maybe. Famous Cognitive Behavioral Therapy pioneer David Burns claims he can pretty reliably cure people’s treatment-resistant depression in a single two-hour talk session, plus maybe another for relapse prevention training. And I believe him! He even has a podcast where he does this on the air, and studies have found that reading David Burns books is about as effective as taking antidepressants. So here’s a hypothesis: “if you had grown up with David Burns as your adoptive father, you’d have a good chance of developing unusual skill in understanding and controlling your emotions, and this skill would remain useful even into adulthood, and it would manifest as a substantial reduction in your adult incidence of clinical depression and anxiety”. I find this hypothesis quite plausible! And importantly, twin and adoption studies offer approximately zero evidence either way about whether this hypothesis is true. The skills in question are too rare in the population. Even Burns himself struggles to convey those skills to his trainees. 2.2 Caveat 2: There are in fact some outcomes for which shared environment (C) explains a large part of the population variation. 2.2.1 “Setting defaults” Hatemi and McDermott 2012 finds that “political party identification” has a strong shared-environment effect, but “overall ideology (liberal-conservative)” does not. Basically, people “pick a team” (Republican versus Democrat in the USA) based on childhood local culture, and often never change it. But both “teams” are big tents that offer an array of niches that people gravitate towards, in a way that depends more on their genes. Similarly, Eaves et al. 1990 finds that “religious affiliation” has a strong shared-environment effect. That seems unsurprising: in my own experience at least, if someone I know was born Catholic, Muslim, or whatever, then they’re probably still Catholic, Muslim, or whatever, as an adult, at least nominally, if they claim any religion at all. Regional accents are another famous example—they’re acquired in childhood and last a lifetime. For regional accents and maybe also political party, I think the default comes more from “local culture”, especially friend groups, than from parents directly (see §2.5.1 below). But the parents do get to influence the local culture indirectly, by choosing where to live. 2.2.2 “Seeding ideas” and “creating possibilities” For example, suppose I raise a child in the tiny obscure town of Churubusco, Indiana, USA. An average citizen of the USA has only a 1-in-100,000 chance of living in Churubusco as an adult. But my child would obviously have a much higher chance than that! So this is an adult trait that’s obviously strongly biased by child-rearing—i.e., it’s a shared environmental effect. There are a lot more things like that. If I have an obscure hobby, or practice an obscure career path, or possess an unusual skill, my children (both biological and adopted!) are obviously much more likely to wind up sharing those traits, than the population average.[7] The way to think about this is: if you raise a child in Churubusco, even if you move away later, you are seeding the idea that the child could live in Churubusco as an adult. They’ll know exactly what they’d be getting into, and it will be in their mind as a salient possibility. Moreover, they’ll have already climbed up various learning curves, such that it would be an easy thing to get back into. You’re not indoctrinating the child into wanting to live in Churubusco. It’s up to the child—they’ll decide what they want to do (see §2.5.1 below). But seeding the idea and clearing the pathway, by itself, is already a huge thumb on the scale, when we compare it to the far more typical situation of not even knowing that Churubusco exists! If you’re familiar with the Reinforcement Learning literature, you can think of “seeding ideas” as related to the explore-exploit problem. A lot of possible behaviors in life are just really obvious—things like “Hey, what if I try being dishonest? Or antisocial? Or vain?”. By the time a person reaches adulthood, with 200,000 hours of life experience under their belt, they’ll have certainly tried those strategies, and either found them to be intrinsically appealing on balance (and thus kept doing them), or intrinsically unappealing on balance (and thus stopped). So “seeding ideas” is not relevant to these kinds of behaviors. But for other things, like living in Churubusco or a needlework hobby, you can easily go your whole life without ever knowing what you’re missing. This is the “incomplete exploration” regime, in the reinforcement learning lingo. And that’s where “seeding ideas” and “creating possibilities” become more important. You’re creating a known possibility, and making its exploitation relatively cheap, easy, and salient. 2.2.3 Special case: birth order effects This part is unnecessary for the rest of the article, but click the box for my explanation of why we see strong birth order effects in certain specialized contexts. Box: Why birth order effects? Here’s an example. My own niche online community has an extraordinarily lopsided share of eldest children—in one survey of thousands of readers, out of the respondents with exactly one sibling, 72% were the older sibling! And yet, studies generally show that birth order effects on intelligence and personality are quite small, if they exist at all. So why this giant effect? You can read that post and its comments for a bunch of possible explanations, but none of them seem to hang together, or be compatible with the big picture of twin and adoption studies etc. My proposed explanation instead focuses on the “seeding ideas” angle from §2.2.2 above. Basically, I think getting into niche online communities requires a conjunction of two ingredients: You innately have a “niche online community type of personality”—i.e., your innate makeup is such that you would enjoy niche online communities, if only you spent enough time getting to know them to “see the appeal”.…And this actually happens! You do, in fact, at some point in your life, not only discover that niche online communities exist in the first place, but also spend enough time getting to know them that you get hooked. My proposal is that the birth-order effect comes entirely from the second bullet point, not the first. So, here’s a model. Again, some small subset of the population has the “niche online community type of personality”. If a member of that select subpopulation has an older sibling (adopted or biological, doesn’t matter), then that older sibling probably doesn’t also have a “niche online community type of personality''. Rather, the older sibling is probably more “normal”, because of regression to the mean. So the older sibling would be exposing their younger sibling to more “normal” interests. And those ideas would “crowd out” the time when the younger sibling might otherwise be discovering their natural (latent) affinity for niche online communities. And then that younger sibling might well go their whole life without discovering this potential interest. (Alas!) By contrast, oldest siblings who have a “niche online community type of personality” would have much less of that crowding-out issue, especially if they’re kinda introverted (so don’t have much other exposure to mainstream interests and hobbies). I mean, I guess they could copy more mainstream interests and hobbies from parents or from younger siblings, but those options are both quite unappealing, from the perspective of a teen. Kids are strongly drawn to idolizing and imitating older kids. There’s an obvious evolutionary reason for that: if I’m a kid, then older kids tend to be doing things that I would benefit from immediately learning to do (more on this in §2.5.1). I offer my own anecdotal experience as an example. I’m a nerdy, introverted younger sibling—very much “niche online community type of personality”—and it took me until age 24 until I found even a decent niche online community to call my own, and then it took me past age 30 before I found a niche online community that really suited me. And I kinda lucked into them; I can easily imagine missing them entirely. And meanwhile, a ton of my interests in my teenage years were things that I became aware of by copying my (much less nerdy) older sibling, who I thought was super cool. Those copied interests consisted of relatively mainstream music, movies, games, TV shows, etc. It took many more years before I gradually stopped excessively liking / admiring my big sibling, and drifted back into the nerdy introverted mode of behavior that comes most naturally to me. So to sum up, in this story, it’s not that the first-borns are constitutionally more smart or more iconoclastic or whatever than later-borns; it’s that nerdy introverted first-borns are much more adrift from the cultural mainstream than nerdy introverted later-borns, at least during their exploratory teenage years, and thus the first-borns are more likely to bounce around until they discover way-off-the-beaten-track activities and interests that feel intrinsically very appealing to them. And once they discover those activities, they’ll stick with them for a lifetime. Activities like … participating in niche online communities! I think we can explain the overrepresentation of first-borns at the top echelons of physics and math, and among Harvard students, in a similar way. 2.2.4 Stuff that happens during childhood Any trait / outcome measured during childhood tends to have a substantially higher C than the same trait / outcome measured in adulthood, for obvious reasons. (See my post Heritability, Behaviorism, and Within-Lifetime RL for further discussion.) As an example, Turkheimer (p74) brings up the Scarr-Rowe interaction for IQ: in one pioneering study he did, for the lowest socioeconomic levels, A≈0% and C≈60%, whereas in the highest socioeconomic levels, A≈80% and C≈0%. Turkheimer says that subsequent studies do not get such extreme results, but that the basic observation has held up. But all this, Turkheimer neglects to mention, is childhood IQ. As far as I can tell, once you get to adult IQ, it’s harder to find any C anywhere. Another example: “Educational attainment” (EA) is a popular target of behavior genetics experiments, for reasons explained by Turkheimer as follows (p96): GWAS [Genome-Wide Association Studies, more on which in §4 below] samples continued to grow, into the tens and then hundreds of thousands. Today some of them have more than a million participants. Such massive samples did, eventually, produce statistically significant associations between SNPs and behavioral differences. These were greeted with great enthusiasm, but they came with a cost. How do you give high-quality IQ tests to a million participants? The answer is, you can’t, so the field developed a workaround. When someone participates in a study of any kind, they fill out a demographics sheet before getting started. The sheet usually includes their age, gender, height and weight, self-reported race, and one other bit of information: their education level. Educational attainment has been an object of genetic studies since Galton. The reason is not so much that anyone had a literal interest in the genetics of educational differences. Instead, interest in education was based on three considerations: first, as a marker for the socioeconomic inequality Galtonians hoped to explain, or explain away, with genetics; second, as a proxy for IQ, without requiring a time-consuming in-person test; and finally, the simple fact of its easy availability in large samples. Interestingly, EA seems to be one of those rare traits for which C is high. For example, “The role of the shared environment in college attainment: An adoption study” (Anderson et al., 2020) says educational attainment has C≈36%, similar to A: “College attainment is one of the few phenotypes to have substantial variance accounted for by environmental factors shared by reared-together relatives.” I think the explanation is obvious: most people decide whether or not to apply for college while still living with their parents. And their parents might even offer to help pay! So of course their parents will have a big impact on this decision. Just because you can measure the EA of an adult, doesn’t make EA an “adult trait”—at least, not entirely an adult trait. 2.3 Caveat 3: Adult decisions have lots of effect on kids, not all of which will show up in surveys of adult intelligence, personality, health, etc. First and foremost, adult decisions affect kids in the here and now. Childhood is part of life. If you want someone to have a rich and happy life, part of that is having a rich and happy childhood. It’s not just a means-to-an-end towards growing up. And even as an adult, you still have childhood memories, which can be a source of comfort or misery. Also, things like college debt or a criminal record versus a trust fund might not show up on personality and intelligence tests, but can still make someone more or less able to accomplish life goals in adulthood. Finally, some things can be learned the easy way or the hard way. There are a lot of important and non-obvious things to learn about yourself and your preferences—“life doesn’t come with an instruction manual”, as they say. Maybe, by the time your kid is 40, they’ll have almost certainly figured out everything there is to know about themselves and their preferences, regardless of their childhood. But man, it’s sure nice to learn those kinds of things earlier and more painlessly! For my part, I sure would have a lot of sage advice for my younger self—things that nobody ever told me, and that never even crossed my mind until I was in my 20s or even 30s. I’ve never been one to blindly trust my parents’ advice, or follow their example. But, as above, they have certainly had the capability of seeding ideas in me. Just as parents can seed the non-obvious idea of living in the town of Churubusco, or of being a Soil Conservation Technician, they can likewise seed non-obvious ideas about how to live, behave, and know thyself. Granted, it’s ultimately up to the kid to keep or jettison those ideas (see §2.5.1 below). And granted, if some idea is not seeded in childhood, the kid may encounter it later in life regardless. But still, there’s a lot of value in seeding good ideas early! 2.4 Caveat 4: Good effects are good, and bad effects are bad, even if they amount to a small fraction of the population variation There’s a critical difference between §1 and §2: Back in §1, we think of a possible source of variation in adult traits / outcomes as “big” or “small” as compared to the total variation across the population.Here in §2, we think of a possible source of variation in adult traits / outcomes as “worthwhile” or “not worthwhile” depending on how the costs of the intervention compare to its benefits. For example, think back to the dumbest person in your high school. C’mon, be honest—you know exactly who they are. (Hi Bill! Nothing personal, I still think you’re a great guy!) The difference between extraordinarily skillful parenting and mediocre parenting are not going to turn Bill into the next Albert Einstein. But maybe it could shift things on the margin, meaningfully changing the probability that Bill would grow up able to hold down a job, take care of himself, and stay out of prison. And that’s still worth a lot![8] Twin and adoption studies give us very little information on this topic. There’s so damn much population variation, that any hypothesis about “parents shifting things on the margin” gets drowned out by the noise. We just don’t know either way—at least, not from that kind of evidence. Pseudonymous blogger “Cremieux”, who tends to be very strongly on the hereditarian side of things, nevertheless makes this same point in his post “The Status of the Shared Environment”: ‘Small’ proportions of the variance in a trait being explained by the shared environment, C, [does not] mean that C is unimportant and, consequently, can be dismissed. (He then illustrates that point with a quantitative example.) Again, benefits can be very small compared to “transforming the child from a psychopath to Gandhi”, and very small compared to “transforming the child from Bill-from-high-school into Albert Einstein”, but still large enough that we should fight for them. Especially if the costs are not too big, which is often the case. For example, the intervention of “try not to fly off the handle at your child” has no cost at all! It’s good for parents, and it’s good for children, in the here-and-now. Perhaps it also has some marginal benefit for what kind of adult the child grows into; if so, that’s a cherry on top. Likewise with other win-win interventions like not being a helicopter parent (also called “free-range parenting”) and not over-scheduling kids. At least, that’s my guess. 2.5 Implications 2.5.1 Children are active agents, not passive recipients of acculturation I’m worried that people will read those caveats and say “OK wow that was a long list of caveats. Cool. Guess I can just discard the ‘bio-determinist child-rearing rule-of-thumb’ and just keep believing what I’ve always believed”. No!! The rule-of-thumb is very often applicable. If it seems unintuitive—i.e., counter to your intuitive understanding of child-rearing—then there’s something very wrong with your intuitive understanding of child-rearing! So, what’s the mental model where the rule-of-thumb feels natural and intuitive? Here are a few perspectives: Machine learning perspective: See my post Heritability, Behaviorism, and Within-Lifetime RL. Also, see my comment here elaborating (in ML terms) on the profound distinction between “parents train their kids” versus “parents are characters in their kids’ RL training environment”. Evolutionary psychology perspective: Parents and children have different (albeit overlapping) evolutionary interests. This was pointed out by Robert Trivers in the 1970s—see parent-offspring conflict. So it’s evolutionary implausible that children would allow their parents to indoctrinate them. This is especially obvious when we look at more traditional cultures. Examples (quoted from here): In rural Turkey the trait most valued by parents (60%) was obedience; least valued (18%) was independence. And when a Javanese shellfish diver was asked whether she learned the trade from her mother, she replied: My mother! she said loudly, She drove me away! I tried to follow her to the bottom to watch, but she shoved me back. When we were on the surface again, she practically screamed at me to move OFF and find my danged [shellfish] BY MYSELF. In such an environment, if children weren’t fundamentally, actively, looking out for their own interests, resilient to any obstacles in their way—and if instead, children were passively trusting parents and authority figures to brainwash them with how to behave in the local culture—then those children would not survive and thrive. Social learning perspective: I often talk about “liking / admiring” someone—a term that I use in a specific way, as defined and discussed in my post here. At any given time in your life, there were people that you liked / admired, and others that you didn’t. Toddlers tend to like / admire their parents to some extent, and to like / admire somewhat-older kids to a similar or greater extent. As childhood goes on, kids’ liking / admiring of their parents typically gets weaker, and eventually negative in puberty, while their liking / admiring of their similar-age and slightly-older friends strengthens. As mentioned in §2.2.3 above, there’s an obvious evolutionary explanation for why kids like / admire older kids: older kids are demonstrating behaviors that are most useful to learn and imitate. Then after puberty, people might start liking / admiring their parents again, but also might not. Additionally, people of all ages tend to like / admire certain celebrities, public figures, and so on. Anyway, liking / admiring comes with some special properties. As discussed in that same post, If I like / admire you, and you like X, then I’m gonna start liking X too, and I’m even liable to internalize the idea that X is objectively good and proper. This is how neurotypical children wind up internalizing cultural norms. However, saying “we socialize kids”, or “we acculturate kids”, etc., is confused nonsense, because it puts the agency in the wrong place. It’s the kid’s brain that proactively determines, through its own complicated mechanisms, whom to like / admire. It’s the kid’s brain that makes the determination of what to copy and internalize, if anything. It’s the kid’s brain that makes the determination of whether to keep some previously-internalized idea, or drop it in favor of some hot new thing. A nice illustration of this idea is cultural shifts. For example, our ancestors have been practicing breastfeeding back to the Cretaceous. But then, within a single generation in the mid 20th century, most USA women stopped doing it! (It got more common again later on.) Likewise with rapidly-changing attitudes towards gays, and Trumpism, etc. If you have a mental picture where “we socialize kids” or “we acculturate kids”, then these dramatic failures of socialization / acculturation should make you question that picture. By contrast, in my model, it’s no surprise at all. People are not passively getting indoctrinated / socialized / acculturated. Rather, they’re deciding for themselves what they want to do (not necessarily in a conscious and self-aware way), which partly involves taking cues from people they respect. Those people are often peers, celebrities, and sometimes parents but not necessarily, as above. So new norms can blow in with the wind, and the old norms and behaviors are dropped. Personal experience perspective: There’s a famous saying: “All parents are environmentalists until they have their second child”. 2.5.2 Anecdotes and studies about childhood that don’t control for genetics are garbage There’s a giant mountain of studies that claim (or insinuate) that something happening in childhood affects adult outcomes, but that don’t control for genetics. These studies are all complete garbage. (Turkheimer agrees with this.) And this kind of thinking is deep in the popular consciousness. Ask anyone on the street why they’re depressed, anxious, narcissistic, neurotic, smart, conservative, or whatever, and it’s a safe bet that they’ll talk about things that their parents did with them, or to them. It’s just so easy to see an outcome and make up a post hoc plausible-sounding story explaining it. Those stories feel true, but they are not to be trusted. (There are also of course studies claiming that childhood events cause adult outcomes, that are also garbage, but for different reasons besides genetic confounding. Like the famous one about kindergarten teachers: their one positive result was almost certainly a random fluke. And here’s one that finds that adoptive parents are harsher towards their callous-unemotional kids; but rather than offer the obvious, sensible explanation—that parents react to their kids’ personalities—they propose instead that parents are sculpting the kids like a hunk of marble. I propose that the authors of that paper should be banned from further research until they have had two or more children.) …And yes, all this applies to you too, dear reader. You shouldn’t say “I’m liberal because I was raised in a liberal household”. You also shouldn’t say “I’m liberal because I was raised in a conservative household, and rebelled against that”. (Even leaving aside the fact that C≈0, the fact that people say both those things is a good reason for skepticism, right?) …Instead, you should just say those magic five words: “I don’t know the counterfactual”! 3. Maybe you care about heritability because: you’re trying to figure out whether you can change something about yourself through “free will” For example, if you’re sad, maybe you can proactively figure out how to be happier. Read a book. Take a drug. I dunno. Heritability studies are completely uninformative on whether something like this will work or not, because “deciding to apply your free will to try to change something about yourself” is itself an “outcome”, and thus heritable like everything else. This observation cuts both ways: If you think everything is preordained, so you might as well not try hard to be your best self, then sorry, that’s not how it works.If you think that the heritability statistics don’t apply to you, because you are energetically applying your free will to be your best self every day, sorry that’s not how it works either! The fact that you’re energetically applying free will to be your best self is to some extent already “priced in” to the heritability statistics. The takeaway is just the common-sense one: regardless of the heritability statistics, if you want to be your best self, you need to try. Do self-experiments. Search for anecdotes. Learn from the rare psych intervention study that isn’t p-hacked garbage. Do the best you can. (There’s an old joke in the free will compatibilism philosophy literature: “I don’t know why Stockfish examined 100 million board positions of the game tree before making its move. It’s a deterministic algorithm! Its move was predetermined! Man, what a waste of electricity!”) …But at the same time, if you’re really doing the best you can do, then don’t beat yourself up for not achieving more—just as you shouldn’t beat yourself up for lacking magical superpowers. 4. Maybe you care about heritability because: you want to create or use polygenic scores (PGSs) The idea of a polygenic score is that, if some percent of variation in some outcome is attributable to genes, we can in theory just look at their genome and infer the genetic component of the variation. Insofar as this works, it has lots of applications. For one thing, you can assess your own risk of eventually developing some mental or physical condition—say, alcoholism—and then perhaps take steps to prevent or mitigate it. We can already do that to some extent via family history, but a polygenic score (in theory!) could do it much better. For example, one sibling might be at more genetic risk for alcoholism than another, but siblings always have the exact same family history—i.e., the same number of n’th-degree relatives with alcoholism. Also, family history is not always available, or not always informative. For example, maybe you don’t know who your biological family is. Maybe there’s no alcoholism in your biological family, but only because your biological family consists entirely of Muslims living in Bangladesh, a country where alcohol is illegal and rare, but you live in the USA and converted to Baptism, so that family history doesn’t teach you anything. There’s also embryo selection. For example, suppose you’re worried about your child being an alcoholic (e.g. because there’s family history), and you’re doing in vitro fertilization (IVF) anyway for some other reason. So you’re already in the position of implanting some embryos but not others, and you might already also have the genomic data of each. Then maybe you want to consult a polygenic score for alcoholism. There are debates about the ethics and wisdom of this activity, but here I’m just focusing on the scientific question: would that even work? Turkheimer’s book has a lot of pedagogical discussion, which I will quote liberally in order to establish some background: 4.1 Single-Nucleotide Polymorphisms (SNPs) Turkheimer p93: Some background: our genome is made up of two very long strands of DNA. Each strand is a sequence of just four nucleotides: adenine (A), cytosine (C), guanine (G), and thymine (T). (Genes are sequences of these nucleotides.) At most positions on the genome, people don’t vary, but at some of them, people can take one of two values. These are called single nucleotide polymorphisms (SNPs, pronounced “snips”). In the early 2000s, technology was developed that allowed SNPs at many positions—tens of thousands initially, millions now—to be assessed quickly and cheaply. … For a few hundred dollars and some saliva in a tube, we can take a detailed snapshot of someone’s entire genome. The technology he’s referring to is called a SNP array. A full genome sequence for each person would theoretically be better than a SNP genotype, but that’s expensive—remember, these studies may involve more than a million participants—and I think there are other technical issues too, like higher error rates. More on the limitations of SNP arrays in §4.3.2 below. 4.2 Genome-Wide Association Studies (GWASs), Polygenic scores (PGSs), and the Missing Heritability Problem Turkheimer p105 introduces the basic idea: One way to state the limitation of the heritability concept is that it is always a property of the population, not of individuals. If h2=0.4, the quantity of 40% refers to the population variance; it has no meaning in an individual person. It makes no sense to say that 40% of your educational attainment comes from your genes. GWAS [Genome-Wide Association Study], however, allows us to do something at the individual level that cannot be accomplished in twins. Let’s do height. Every SNP on a microarray has some very small correlation with height, and every person has a value for each SNP on the chip. We can take those values, weight them by the direction and size of their relationship with height, and add them together for each individual. This results in a numerical score for individual people that summarizes the total relationship of their genomic values with height, or indeed for any trait on which people differ. It has had a lot of names over the years. I will call it a polygenic score, or PGS.… …It took a while, but polygenic scores to predict height are now remarkably effective, accounting for around 40% of the variance, enough to provide real information. Polygenic scores for medical conditions like diabetes and heart disease are at least close to the point where they might be useful clinically. The practical and bioethical questions surrounding when and if it might be a good idea to use PGS for any particular purpose are very complex, and we will mostly not get into them here, but it is important to know that for things like height, they work pretty damn well. Results for human behavior are much less compelling. For educational attainment, the EA4 study achieved about 15% of the variance, a correlation of around 0.4, which is certainly not trivial. I note, however, (a) we know essentially nothing about the causal basis of that relationship, and PGS are designed to ignore causal processes; (b) to the extent we do know anything, per the next development to be discussed, the effectiveness of the [Educational Attainment (EA)] PGS has been substantially reduced; (c) the effectiveness of all PGS is greatly reduced the farther one gets from the population in which it was originally estimated, for example a height PGS estimated in Europeans does not work very well in Africa; and (d) the EA PGS, studied intensively and easily measured, is much more effective than PGS for most other behavioral traits, many of which are barely different from zero. Let’s take these one at a time… He discusses (a) and (c) a bit, then gets back to (d) on p107: Finally, for most behavioral traits, PGS just don’t work very well. As I already mentioned, these days they do work well for height, and pretty well for some medical conditions or metabolic traits. The best PGS for EA accounts for 15% of the variance, which is superficially impressive, but see below. For most other behavioral traits for which we might hope to have good PGS, however, such as personality or depression, they barely work at all, consistently accounting for under 5% of the variance.… Then finally, as a kicker, he circles back to (b) starting p109: Suppose there was some bit of DNA that was correlated, in a red-haired kid sort of way, with being from an impoverished family. If being raised in an impoverished family has a negative effect on EA, the poverty-SNP will be correlated with EA and included in the PGS. However (and here is the crucial insight), that SNP will only contribute to prediction across people in different families that differ in their poverty levels, not to differences between siblings raised in the same family, who necessarily share their family’s socioeconomic status. The first thing sibling comparisons do is control for population stratification. A SNP that is correlated with chopstick use in a [USA population consisting of half Asian immigrants and half European immigrants] will not predict chopstick use within pairs of Asian siblings, because they share the Asian culture which is the true cause of chopstick use. Sibling comparisons also control for other “family-level” effects like parental socioeconomic status. SNP correlations within sibling pairs are known as direct effects; correlations that only exist at the family level are called “indirect,” because the correlation is mediated by a characteristic of the family, outside the individual’s body. The designation of within-sibling effects as “direct” is sometimes taken to mean they are straightforwardly causal, especially because of the random genetic assortment that produces differences between siblings. One has to be cautious about this, however. Sibling comparisons don’t control for everything. In particular, they wouldn’t control for red-hair effects. Siblings differ in hair color, and in a ginger-hating society, the red-haired sibling would be expected to fare worse than the brunette. As always, ascribing causation without true experimentation is a dicey business. Just to interject, I would describe this redhead example differently: yes the ginger genes are causal; they’re just not straightforwardly causal. Rather, the causal pathway routes through the environment. In and of itself, causal pathways that route through the environment are quite common, even for genes that we think of as straightforwardly causal! For example, consider a gene that “causes” obesity by making you feel hungry all the time so you eat a lot. This gene’s causal pathway also routes through the environment, in the sense that the gene only causes obesity if the environment does contain ample food, and doesn’t contain effective anti-obesity drugs. Anyway, continuing with Turkheimer’s discussion: The expectation is that to some extent, PGS correlations estimated in unrelated people across families, which include all of the indirect noncausal family-level confounds, will be higher than those for within-family sibling comparisons, for which those confounds have been controlled. That is indeed how things work out, especially for behavioral phenotypes. In EA3, for example, where the PGS accounted for 11% of the variance among unrelated people, it only accounted for 3–4% of the variance within sibling pairs. In a particularly interesting way, behavioral traits look quite different than physical traits in this regard. The height PGS, for example, works pretty much as well within sibling pairs as it does across families. This makes sense: presumably, the causal agents making you taller than me are the same as the causes of one sibling being taller than the other. In contrast, the reasons that a person from a rich family is better educated than a person from a poor family are probably different from the reasons one sibling is better educated than another. After all this, it turns out there is an even more interesting way to demonstrate direct and indirect genetic associations. This method uses parents and their children. Think of how you obtained your PGS for EA. You inherited half of your mother’s SNPs and half of your father’s. Combined, weighted, and summed, these inherited SNPs form your PGS. But that process of inheritance also implies that you did not inherit the other half of your mother’s and your father’s SNPs. These non-inherited SNPs can also be weighted and combined to form a second PGS, one that remained with your parents, never making it into your body. It turns out that your non-inherited PGS also predicts your EA, even though the SNPs that compose it don’t reside in your body. …In EA4, the EA PGS accounted for 13% of the variance across unrelated people. When controlling for indirect effects from parental DNA, the direct effect was reduced to 3%. For the height PGS, in contrast, the total variance accounted for was 34%; controlling for parental DNA, the direct effect was 28%, a much smaller reduction. As another interjection: Recall from §2.2.4 above that EA is one of the few measurements on adults for which twin and adoption studies show a strong shared-environment effect. So the indirect effects in EA are really what we should have expected all along, and isn’t representative of all outcomes. Indeed, when we get to my own bottom-line summary in §4.4.2 below, the shared-environment-sensitive outcomes like EA will get their own special category. Moving on to p113, Turkheimer sums up the situation as follows: Modern genomics of human behavior has had a very surprising result: it has decreased estimates of heritability from where they stood 50 years ago in the twin study era. The common assumption back then was that the heritability of pretty much everything was around 0.5, if not higher, and no matter how skeptical one becomes about the causal meaning of the second decimal place of heritability coefficients, more than half the variance is a lot to reckon with. But with SNP heritabilities, and then PGS as real-world instantiations of SNP heritabilities, and then SNP heritabilities and PGS that have been appropriately corrected for between-family effects, those numbers are a lot closer to 0.1 than they are to 0.5. For many traits (personality, most forms of psychopathology) they are closer to 0.01. Those of us who consider ourselves anti-hereditarians should be reminded that there is no good reason to expect heritability coefficients to be cold stone zero. I have three children, all wonderful and all different. The reasons they are different are very complicated, and there is no specific scientific explanation of them, but if you were to tell me that 5% of the differences (reminding ourselves that the heritability metric does not apply to single families) were the result of (unknown) inborn genetic differences, would that be surprising, or a harbinger of either miraculous genetic explanations of human differences, or a hereditarian threat to human self-determination? I don’t think this conclusion would have surprised our great-grandparents; I don’t think it would have surprised Darwin’s great-grandparents. Apples, trees, and so forth. This quote makes it clear that Turkheimer, in some sense, views the low within-family GWAS numbers as “correct”, and the high twin and adoption numbers as “incorrect”. Or at least, that’s the vibe he’s conveying. If so, is that a reasonable perspective? Following the theme of this post, it depends on the question you’re asking. If you’re asking the questions that motivated §1–§2 above, then I think those discussions above, with their high numbers, are as valid as ever. Whereas if you want to apply polygenic scores to embryo selection, you should clearly be using the much lower numbers that you get from SNP heritability corrected for between-family effects. And finally, if Turkheimer wants to understand why his three children are different from each other, as in the excerpt above, then I think I disagree with Turkheimer using “5%” in that context; I think the 50%-type numbers would have been more appropriate. After all, if they had been identical triplets, they would have been much more similar, not just slightly more similar. (I think Turkheimer would agree that his kids would be much more similar if they had been identical triplets, but his perspective is that this doesn’t count as genetic causation because the causal pathways are complex, and route through the environment? I’m not sure though.) Of course, the question looming over this discussion is: whence the giant discrepancy between the high numbers like 0.5, and the low numbers like 0.05? How do we reconcile those? …And this is the “Missing Heritability Problem”! 4.3 Missing Heritability Problem: Three main categories of explanation 4.3.1 Possibility 1: Twin and adoption studies are methodologically flawed, indeed so wildly flawed that their results can be off by a factor of ≳10, or even entirely spurious I talked about this in §1.5 above. I don’t buy it. Turkheimer also doesn’t subscribe to this school of thought. Here’s p77 of his book: I had been around too much twin data to believe it could all be the result of shoddy research methods and statistics. Identical twins really are more similar than any other pairs of people in the known universe. The correct rebuttal is not about the existence of identical twin similarity, but rather its meaning. Or, again, on his blog: Are you really ready to think that the only reason MZ twins are so similar is that their parents dressed them the same? (I agree, and have said many times, that the EEA is a very good reason not to obsess over whether the heritability of something is .4 or .6…)… 4.3.2 Possibility 2: GWAS technical limitations—rare variants, copy number variation, insufficient sample size, etc. GWAS has historically only accounted for relatively-common SNPs, which leaves out some other sources of genetic variation. Hill et al. 2018 mentions “rare variants, copy number variants (CNVs) and structural variants”. (Harris et al. 2024 says that there exist methods to extract CNVs from SNP data, but that they’re not widely used in practice today.) Additionally, given finite sample size (plus genotype measurement noise), the polygenic score that we infer from our data is a noisy version of the ideal polygenic score, and this noise reduces its explanatory power. There’s also phenotype (i.e., trait or outcome) measurement noise. Officially, that kind of noise should be off-topic—in ACE terms, it’s part of E, not A (see §1.4 above). But GWAS sample sizes tend to be much larger than twin and adoption study sample sizes, and thus GWAS researchers may (by necessity) rely on less-expensive, but noisier, phenotype measurement methods, at least in some cases. That could account for some lowering of percent-variance-explained in GWASs, compared to twin and adoption studies. I saw someone mentioning this factor in the context of IQ heritability measurements here. (The claim in that last sentence traces back to Fawns-Ritchie & Deary 2020, which says that when people took the UK Biobank cognitive tests twice, four weeks apart, the test-retest correlation was Pearson r = 0.55. And when they also took “a range of well-validated cognitive tests (‘reference tests’)”, the correlation with the UK Biobank cognitive tests was “mean Pearson r = 0.53, range = 0.22 to 0.83”. I might well be screwing up the math here, but I thought that if some input explains X% of the variance of some outcome, then it explains r2×X% of the variance of a noisy measurement of the outcome. So for r≈0.55, we’re lowering percent-variance-explained by a factor of >3, e.g. from 50% to 15%, I guess? Wow, that’s huge. That can’t be right, can it??) Anyway, to the extent that the Missing Heritability is somewhere in this “GWAS technical limitations” category, then future better GWAS studies, with ever-larger sample sizes, and with ever-better measurements of genotypes and phenotypes, would find progressively less Missing Heritability. 4.3.3 Possibility 3: Non-additive genetics (a.k.a. “a nonlinear map from genomes to outcomes”) (a.k.a. “epistasis”) As mentioned in §1.1 above, an additive genetic contribution would be something like: if you have gene variant 838723, then your expected height will be 34 μm higher than if you don’t; and if you have gene variant number 438061, then your expected height will be 12 μm lower than if you don’t; etc. Add up thousands more things like that, and that’s the genetic contribution to your height. In linear algebra terms, “additive” means that the genetic contribution to height, viewed as a function from genome (viewed as a point in the high-dimensional abstract space of all possible human genomes) to height, is a linear function. A non-additive map from genome to height (a.k.a. “epistasis”), by contrast, might for example involve gene × gene interactions: if you have both of a certain pair of gene variants, then that adds 17 μm to your height, etc. There could also be gene × gene × gene interactions, and so on. If there are strong non-additive genetic effects, then there will be a lot of Missing Heritability: the polygenic scores (PGSs) will only explain a tiny fraction of population variation, because the effects cannot be traced to individual SNPs. Now, the idea that non-additive genetics might explain the Missing Heritability Problem is well known. People usually cite Zuk et al. 2012 as an early exposition of this possibility. That paper uses the term “phantom heritability”, which I think conveys a vibe that the real heritability was low all along. But really, in the lingo, there’s “narrow-sense heritability” (just the additive contribution of genome to outcome) and “broad-sense heritability” (the total contribution of genome to outcome, whether additive or non-additive). The Zuk et al. paper is merely claiming that there may be “phantom” narrow-sense heritability. The broad-sense heritability is as “real” as ever. Importantly, I think the nature of non-additive genetics is widely misunderstood. If you read the wikipedia article on epistasis, or Zuk et al. 2012, or any other discussion I’ve seen, you’ll get the idea that non-additive genetic effects happen for reasons that are very “organic”—things like genes for two different mutations of the same protein complex, or genes for two enzymes involved in the same metabolic pathway. But here is a very different intuitive model, which I think is more important in practice for humans:[9] Illustration of how I propose we should think intuitively about the (IMO significant) non-additive genetic effects on personality, behavior, and mental health outcomes in humans. As some examples: It’s conventional wisdom in Machine Learning that the “learning rate” has a sweet spot—if it’s either too high or too low, the learning algorithm doesn’t learn as well. If so, that would be a nonlinear (U-shaped) map from certain “traits” to certain “outcomes”. (The “outcomes” here might not include IQ, which is more about figuring things out quickly, but would include longer-term intellectual accomplishments.)You can dislike an activity because it’s understimulating, or you can dislike it because it’s overstimulating. Either way, you would engage in that activity less, and over time, wind up unusually bad at it from lack of practice. That would again be a nonlinear (U-shaped) dependence in the map from “traits” to certain “outcomes”. I presume it would show up in personality surveys, along with certain life outcomes.I think the antisocial personality disorder (ASPD) diagnosis gets applied in practice to two rather different clusters of people, one basically with an anger disorder, the other with low arousal. So the map from the space of “traits” to the outcome of “ASPD” is a very nonlinear function, with two separate “bumps”, so to speak. The same idea applies to any outcome that can result from two or more rather different (and disjoint) root causes, which I suspect is quite common across mental health, personality, and behavior. People can wind up divorced because they were sleeping around, and people can wind up divorced because their clinical depression was dragging down their spouse. People can seek out company because they want to be widely loved, and people can seek out company because they want to be widely feared. Etc.I dunno, maybe “thrill-seeking personality” and “weak bones” interact multiplicatively towards the outcome of “serious sports injuries”. If so, that would be another nonlinear map from “traits” to certain “outcomes”. All of these and many more would mathematically manifest as “gene × gene interactions” or “gene × gene × gene interactions”, or other types of non-additive genetic effects. For example, in the latter case, the interactions would look like (some gene variant related to thrill-seeking) × (some gene variant related to bone strength). But that’s a very different mental image from things like multiple genes affecting the same protein complex, or the Zuk et al. 2012 “limiting pathway model”. In particular, given a gene × gene interaction, you can’t, even in principle, peer into a cell with a microscope, and tell whether the two genes are “interacting” or not. In that last example above, the thrill-seeking-related genes really don’t “interact” with the bone-strength-related genes—at least, not in the normal, intuitive sense of the word “interact”. Indeed, those two genes might never be expressed at the same time in the same cell. One gene might be expressed exclusively in the embryonic neural plate, while the other gene is expressed exclusively in ankle bones during puberty. Doesn’t matter. As long as their downstream effects get multiplied together en route to this specific outcome, namely sports injuries, it’s still a bona fide “gene × gene interaction”. As far as I can tell, if you call this toy example “gene × gene interaction” or “epistasis”, then a typical genetics person will agree that that’s technically true, but they’ll only say that with hesitation, and while giving you funny looks. It’s just not the kind of thing that people normally have in mind when they talk about “epistasis”, or “non-additive genetic effects”, or “gene × gene interactions”, etc. And that’s my point: many people in the field have a tendency to think about those topics in an overly narrow way. Incidentally, Turkheimer’s preferred solution to the Missing Heritability Problem for behavioral and mental health outcomes (IIUC) involves non-additive genetic effects, and has some slight overlap with my discussion in this section, but he describes it rather differently. He talks about it in a series of posts on his blog (part 1, part 2, part 3, I think another is forthcoming). For example, from part 2: …individual differences in complex human characteristics do not, in general, have causes, neither genetic nor environmental. Complex human behaviour emerges out of a hyper-complex developmental network into which individual genes and individual environmental events are inputs. The systematic causal effects of any of those inputs are lost in the developmental complexity of the network. Causal explanations of complex differences among humans are therefore not going to be found in individual genes or environments any more than explanations of plate tectonics can be found in the chemical composition of individual rocks. I agree with certain parts of this, but I think he’s throwing up his hands too soon. Turkheimer’s excerpt here does not seem compatible with the “mostly additive” nature of many traits (more on which shortly), nor with the obvious similarity of adult identical twins who were raised by different families in different cities. Those aren’t what I’d expect from a “hyper-complex developmental network”, nor from his mental image (in the posts) of a “quincunx”. I prefer my little two-step diagram above. I think it’s more compatible with everything we know. And it implies a more hopeful picture where at least some components of that diagram may gradually crystallize into view. 4.4 Missing Heritability Problem: My take 4.4.1 Analysis plan Given a trait / outcome, here’s how I propose to go about diagnosing its Missing Heritability: Step 1: Check adoption studies to see if there’s a significant shared environment effect for this trait / outcome. Don’t take the exact numbers too literally, since there’s some non-randomness in which adoptive children are matched with which parents (see §1.5.6 above), but if biological siblings are about as similar when they’re raised together versus apart, and/or if adoptive siblings are about as different as randomly-selected members of the study population, then it’s a good guess that shared-environment effects are not a big part of the explanation of why people vary along this dimension, for the population in question. OK, let’s assume that we find the common result (for adult traits, see §2.2) that shared-environment effects are not a big source of population variation. Next: Step 2: Check whether the trait has substantial non-additive genetic effects. If you can’t find a proper analysis of this question in the literature, then in a pinch, you just pull up a classical twin study for the trait; the tell-tale sign of strong non-additive genetics is rDZ≪12rMZ—because, as mentioned in §1.5.5 above, DZ twins share half their gene variants, but only a quarter of pairs of gene variants, eighth of triplets, etc. (Again, checking for rDZ≪12rMZ is a quick trick, not a rigorous rule. A better picture is that the ratio rDZ/rMZ starts at ½, and then gets pulled down if there’s non-additive genetics (§1.5.5), but also gets pulled up if there are shared-environment effects (§1.3), or assortative mating (§1.5.2), among other things. We can only see the net effect of all those influences, when we look at classical twin study rDZ/rMZ. That said, we can often rule out strong shared-environment effects via adoption studies (Step 1 just above). As for assortative mating, I think it tends to be nonzero but not huge, and for some traits you can find it quantified in the literature.) Step 3: If there isn’t much shared-environment effect, and there isn’t much non-additive genetic effects, then whatever Missing Heritability there is, is probably (IMO) mostly in the category of GWAS technical limitations. Maybe you can write a letter to your congressperson asking for more funding for (even) bigger and better GWAS studies, or whatever. 4.4.2 Applying that analysis plan to different traits and outcomes I didn’t go through this three-step plan very systematically, but here’s my impression so far: 1. Adult height, metabolism, blood cell count, etc. seem (at a glance!) to have C≈0 in adoption studies, and also rDZ≈12rMZ in classic twin studies, and I think something like half their heritability is missing in the latest studies. I think that this missing half of the heritability comes mostly[10] from GWAS technical limitations. 2. Adult IQ is I think also in that same “height etc.” category, more-or-less, although I didn’t look into it much. I refer interested readers to a recent back-and-forth debate, see “51” here. Also, as mentioned in §4.3.2 above, for this trait in particular, in addition to the half-or-so of heritability missing due to GWAS technical limitations, there’s probably a bunch of extra Missing Heritability from the fact that GWASs measure the phenotype in a more noisy way than the best twin studies do. 3. Adult personality, mental health, and behavioral outcomes seem (at a glance) to have C≈0 in adoption studies, rDZ<12rMZ in classic twin studies, and a much larger amount of Missing Heritability than the half-or-so numbers suggested above. Taking those together, they strongly suggest that non-additive genetic effects are the main explanation for the extra Missing Heritability of these outcomes. I think this makes intuitive sense, in light of the discussion in §4.3.3 above. And here’s some evidence in the literature: Box: Twin-study evidence of non-additive genetic effects on adult personality, mental health, and behavior Turkheimer’s book repeatedly brings up the twin study McGue & Lykken 1992, which found that divorce is heritable; the combined sample in that study has rMZ=0.545 but rDZ=0.157, much less than half. (That paper doesn’t mention non-additive genetics; rather, it just ignores the negative C by saying “the proportion of variance associated with shared environmental factors was estimated at a boundary value of zero”!!)Maes et al. 2022 has a good discussion of the sign of C in twin studies, along with graphs showing C≈0 for things like height but C<0 for things like ADHD.Van der Linden et al. 2017 finds that personality traits fit a “DE” model better than an “ACE” model. (D is non-additive a.k.a. broad-sense heritability.)An excerpt here says: “in a large twin family study of neuroticism, Finkel and McGue (29) reported biological sibling correlations of 0.07–0.14 and parent–offspring correlations of 0.12–0.19, less than half of the corresponding MZ correlations of 0.42–0.49…”. OK, next, a bit more in the weeds. There’s a meta-analysis of twin studies—Polderman et al., 2015—which compiled rMZ vs rDZ for ≈18,000 traits across ≈3000 studies. They only find robust evidence for non-additive genetic effects (rDZ<12rMZ) for hyperkinetic disorders (ADHD, more-or-less), out of a very long list of possibilities. But that particular claim involved mixing together studies on children and adults—and remember, children have more shared-environment effects, which push in the opposite direction as non-additivity in the data. For adult-only results, I tried squinting at the tiny numbers in the third row of Fig. 3, looking through the (rMZ,rDZ) pairs, and found a couple more apparent non-additive genetic effects (“function of brain” (.65, .19), “higher-level cognitive functions” (.68, .28)), along with a few apparent shared-environment effects (rDZ>12rMZ) (“endocrine gland functions” (.53, .37), “mental and behavioral disorders due to use of tobacco” (.69, .41), “structure of mouth” (.89, .52)). I don’t know what “function of brain” or “higher-level cognitive functions” mean; as mentioned, my current weakly-held guess is that IQ is more linear, but adult intellectual accomplishment, which involves things like long-term learning and motivation, is probably more nonlinear. The idea that tobacco use has shared-environment effects seems very plausible, given that most smokers start the habit while still living with their parents. I have no idea what to make of “endocrine gland functions” and “structure of mouth”. (Childhood orthodontics??) Oh well. I didn’t check which of these are statistically significant anyway. Interestingly, “depressive episode” shows little sign of non-additive genetics—(rMZ,rDZ)=(.39,.18)—which raises the question of why depression has so much Missing Heritability. Maybe the twin-study question wording included depressive episodes during childhood, such that there’s some shared-environment effect, and that cancels out the non-additive genetics in this data? Or maybe non-additive genetic effects can manifest rather weakly in MZ-DZ comparisons while still strongly impacting Missing Heritability? (I think there’s some analysis to that effect in the Zuk et al. 2012 supplementary information.) Shrug, I dunno. I’ll leave further analysis as an exercise to the reader. 4. Traits with strong shared-environment effects include Educational Attainment (EA), and almost anything measured during childhood, and a few other examples discussed in §2.2 above. Basically, EA is (in part) a roundabout way to measure the socioeconomic status / class of one’s childhood home. So we expect this to be a case where the predictivity of a polygenic score (PGS) is mostly “indirect”—i.e., the fraction of variance explained by the PGS may plummet when correcting for between-family effects. These kinds of traits are also places where I would intuitively be open-minded to gene × shared environment interactions, as in my exaggerated toy example of §1.5.3 above, but I guess the empirical data (cited in that section) shows that it’s not a major contributor, at least in the case of EA. Anyway, all of those things would contribute to Missing Heritability, on top of the usual GWAS technical limitations. (And in some cases, there might be non-additive genetic effects too.) Incidentally, Hill et al. 2017 claims to have some fancy methodology that estimates the “GWAS technical limitations” category. They find that they can more-or-less resolve all the Missing Heritability for intelligence and Educational Attainment (EA), but not for neuroticism or extroversion. I didn’t try to follow that paper in detail, but if it’s true, then that’s beautifully consistent with my hypotheses above. 4.4.3 My rebuttal to some papers arguing against non-additive genetics being a big factor in human outcomes The first thing to keep in mind is: for the kind of non-additive genetic effects I’m talking about (§4.3.3 above), there would be a massive number of “gene × gene interactions”, each with infinitesimal effects on the outcome. If that’s not obvious, I’ll go through the toy example from above. Imagine a multiplicative interaction between thrill-seeking personality and fragile bone structure, which leads to the outcome of sports injuries. Let’s assume that there are 1000 gene variants, each with a tiny additive effect on thrill-seeking personality; and separately, let’s assume that there’s a different set of 1000 gene variants, each with a tiny additive effect on fragile bones. Then when you multiply everything together, you’d get 1000×1000=1,000,000 different gene × gene interactions involved in the “sports injury” outcome, each contributing a truly microscopic amount to the probability of injury. In that model, if you go looking in your dataset for specific gene × gene interactions, you certainly won’t find them. They’re tiny—miles below the noise floor. So absence of (that kind of) evidence is not meaningful evidence of absence. The second thing to keep in mind is: As above, I agree that there’s not much non-additive genetic effects for traits like height and blood pressure, and much more for things like neuroticism and divorce. And many papers on non-additive genetics are looking at things like height and blood pressure. So unsurprisingly, they don’t find much non-additive genetics. OK, so here’s a quick review of some papers I’ve come across: The paper “Estimation of non-additive genetic variance in human complex traits from a large sample of unrelated individuals” (Hivert et al., 2021) has both the above problems. Their trait list is in Table S3, and they all feel intuitively much more like height than like divorce. And their signal-to-noise ratio is so low that they can’t really conclude much anyway. (“The lack of evidence for the additive-by-additive variance in our analyses is mostly due to the very large sampling variance…”)The paper “Detecting epistasis in human complex traits” (Wei et al., 2014) finds an absence of evidence for epistasis, but acknowledges that if there were a huge number of gene × gene interactions, each with a tiny effect, then we would have no chance of discovering that fact.The paper “Data and Theory Point to Mainly Additive Genetic Variance for Complex Traits” (Hill et al., 2008) doesn’t list most of the traits they look at, but the ones they do list are things like height and blood pressure, for which I wasn’t expecting much non-additivity. They have a handy histogram of rMZ−2rDZ for different traits in classical twin studies, and the average is zero, but there’s actually a substantial spread into both positive and negative territory! They seem to treat the positive rMZ−2rDZ values as noise, rather than meaningful evidence of non-additive genetic effects. Maybe that’s right, or maybe not; they don’t explain it or show error bars. 4.5 Implications for using polygenic scores (PGSs) to get certain outcomes Some people in my professional circles talk about using PGSs to get certain outcomes, including “being very good at solving difficult problems” and “robust mental health”. Setting aside lots of other potential issues, a technical issue related to the above discussion is that PGSs barely work at all if there’s a lot of non-additive genetics. To be clear, analyzing the genome to guess highly-non-additive traits is not impossible in principle. But it would require some profound methodological advance. It’s not just a matter of increasing the GWAS sample size etc. Anyway, of those two items: “Robust mental health” almost definitely has a lot of non-additive genetics, as mentioned above. And “being very good at solving difficult problems”, I expect, will also have lots of non-additive genetics, even if IQ itself does not—because there’s much more to solving difficult problems than just IQ, which is mostly about fast pattern recognition and such. Solving difficult problems also involves motivation, personality, and figuring things out over the long term. Indeed, if you look around, I think you’ll see many exceptionally bright people with massive blind spots propped up by motivated reasoning, self-aggrandizement, and ad hominem attacks. And conversely, here’s a quote by a professional mathematician: I've had the chance...to meet quite a number of people…who were much more brilliant, much more ‘gifted’ than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle - while for myself I felt clumsy, even oafish…faced with an amorphous mountain of things that I had to learn (so I was assured), things I felt incapable of understanding…” You know who wrote that? It was fucking Alexander Grothendieck, an absolute legend, widely regarded as the greatest mathematician of the 20th century. (See 1, 2, 3.) One more thing: I think the people interested in the “being very good at solving difficult problems” trait, want to push it to the extremes of the human distribution, or even beyond. And that adds another technical problem. Even if you find a target that seems not to have much non-additive genetics—well, just because a genome-outcome relationship is mostly linear in-distribution (i.e., within a naturally-existing human population), doesn’t mean that it will also be mostly linear (and with the same linear relationship) out-of-distribution. Indeed, it doesn’t even mean that it will be mostly linear (and with the same linear relationship) on the outer fringes of the existing natural distribution. (But it might be possible to check that latter part from existing data.) Remember, as a general rule, lots of things are approximately linear when you don’t push them too far, but the linearity breaks down when you do—see Taylor series, linear approximation, etc. 5. Maybe you care about heritability because: you hope to learn something about how schizophrenia, extroversion, and other human traits work, from the genes that “cause” them We learned a lot about eye color by finding the responsible genes. And we learned even more about Huntington’s disease. Well, if schizophrenia and introversion and everything else is heritable too, then maybe pinpointing the responsible genes will likewise help us understand those conditions? Eric Turkheimer has a strong opinion, and his opinion is: Nope! Not gonna work. Not now, not ever. To understand Turkheimer’s perspective, I think some historical context is helpful. Back in the day (I think 1990s and 2000s), people were hoping that there would be, for example, “genes for schizophrenia”, the way that there are genes for eye color and Huntington’s disease. People searched for, and found, so-called “candidate genes” for all sorts of things. Then GWASs came along, and it turned out that that whole literature was garbage. None of the “candidate genes” reproduced. Scott Alexander amusingly describes one example, “5-HTTLPR”, in a blog post here, or see Duncan et al. 2019 for a broader and more sober discussion of the whole fiasco. It turns out that schizophrenia, intelligence, obesity, and so on, are just very different from eye color and Huntington’s disease. They’re impacted by thousands of gene variants, each with a tiny effect. Anyway, I think people were historically hoping that gene studies would be a magic bullet for our nuts-and-bolts understanding of mental health, personality, and so on. If so, yeah that school of thought is wrong, and Turkheimer is justified in pushing back against it. Still, I think Turkheimer is overstating his case. I think that one can reasonably expect gene studies to be slightly informative on the margin for these kinds of questions. For one thing, Duncan et al. 2019 writes: “Based on available GWAS of psychiatric disorders, one important take home message is already clear: large effect variants – meaning variants that explain a substantial amount of phenotypic variance in the population – do not exist. In contrast, some very rare genetic variants have large per-allele effects on schizophrenia (e.g. odds ratio > 10) [31].” …OK, so thanks to GWAS, we have found certain gene variants that make people >10× likelier to have schizophrenia (i.e., something like ≳10% likely rather than ≈1% likely). And we can go study what those particular gene variants do. I’m not an expert, but that seems like a pretty helpful lead to have in our back pocket, as we try to mechanistically understand schizophrenia, right? I don't know what if anything has come of this type of study to date, but it would surprise me if it were forever completely useless. For another thing, Turkheimer acknowledges that you can statistically analyze the thousands of genes that correlate with some mental health condition or personality trait, and see if they have any common patterns. For example, p103: It can be shown that more of the genes associated with both positive ([Educational Attainment]) and disadvantageous (schizophrenia) behavioral traits are more likely to be involved in brain processes than would be predicted by chance. This demonstrates that GWAS results are not random, but it is a long way from ‘specific genes that affect experience.’ It’s great to confirm that differences in educational attainment have something to do with the brain, but it is hardly stop-the-presses news. I think his mocking conclusion in the last sentence is unwarranted. Turkheimer is being overly dismissive. Yes, “something to do with the brain” is unhelpful, but slightly more detailed (“something to do with myelination”, “something to do with dopamine”, etc.) would be a potentially useful and nontrivial hint as to what’s going on, and is still plausibly achievable. Likewise, studies like Grotzinger et al. 2022 are (I think) based on the idea that, if there are a bunch of genes that increase or decrease the risk of multiple outcomes at once, in the same ratios, then there’s probably a “common pathway” through which those particular genes cause those outcomes. That seems like a plausibly useful type of thing to figure out! I think this is related to the non-additive genetic effects discussion in §4.3.3 above. If there’s a linear(ish) map from genes to certain latent variables (let’s call them “traits”), and then there’s a highly-nonlinear map from those “traits” to outcomes like neuroticism, then that seems like the kind of thing that careful behavior genetics analysis ought to be able to tease out. If they do, what exactly is the list of “traits” that they would find? I myself am extremely interested in an answer to that question, and will probably read up on that literature at some point in the future. …But still. If people have historically talked up gene studies as a magic bullet in the quest to understand mental health and personality, handing us mechanistic explanations on a silver platter—as opposed to “contributing some helpful hints”—then I think Turkheimer is right to pour cold water on that. (If you want mechanistic explanations on a silver platter, you should instead … read my blog posts, of course!! I have grand theories about schizophrenia (1,2), and autism and psychopathy, and NPD, depression, and mania, and DID, and more! You’re welcome!) 6. Other things People talk about heritability for other reasons too, for better or worse—culture, inequality, society, and more. I have no professional interest or expertise in these areas, and the debates about them tend to turn into horrific toxic flaming dumpster fires that I don’t want to read, let alone participate in. Turkheimer has very strong opinions on these matters (on the “anti-hereditarian” side), and you can read his book for lots of arguments. But interestingly, Turkheimer also says some nice things about the book The Genetic Lottery: Why DNA Matters for Social Equality, by his former student Kathryn Paige Harden. Unlike Turkheimer, Harden is much more willing to ascribe a straightforward causal role of genetics in intelligence and other traits. But Harden reaches essentially the same sociopolitical conclusions as Turkheimer (according to Turkheimer). I didn’t read Harden’s book, but I presume that those conclusions are things like “racism is bad”, “we should help the downtrodden”, “don’t be an asshole”, and so on—conclusions that I myself enthusiastically endorse as well. (See also: this Bryan Caplan post.) So, conveniently for us, it’s evidently possible for people on both sides of this scientific question to come together and agree on policy. 7. Conclusion I don’t have any particular conclusion. That’s kinda the point. Heritability studies are a giant pile of data, from which we can learn a number of different things about a number of different topics. I feel much more comfortable reading and assessing that literature now than I did before, and hopefully you do too! Again I recommend reading Turkheimer’s book, which covers many more topics than I included here, and has both valuable pedagogy and many opinions that differ from my own, in interesting ways. I am extremely not an expert on heritability, but learning as I go, and very happy for feedback, discussion, and corrections! Thanks “tailcalled” for patiently attempting to explain some of the finer technical points of heritability to me a while back. Thanks “tailcalled”, “GeneSmith”, and “Towards_Keeperhood” for critical comments on earlier drafts. Changelog In addition to miscellaneous minor wording tweaks etc.: 2025-01-18: After getting feedback from a couple people, I decided to change “epistasis” to “non-additive genetic effects” in a ton of places throughout the post. The word “epistasis” seems to have a lot of connotations that are contrary to what I’m trying to talk about, and that seems to have thrown people off. Hope it’s better now; let me know.2025-01-18: I mentioned in a couple places that it might be possible to have non-additive genetic effects that are barely noticeable in  rDZ-vs-12rMZ comparisons, but still sufficient to cause substantial Missing Heritability. The Zuk et al. 2012 paper and its supplementary information have some calculations relevant to this, I think? I only skimmed it. I’m not really sure about this one. If we assume that there’s no assortative mating, no shared environment effects, etc., then is there some formula (or maybe inequality) relating rDZ-vs-12rMZ to a numerical quantity of PGS Missing Heritability? I haven’t seen any such formula. This seems like a fun math problem—someone should figure it out or look it up, and tell me the answer!2025-01-16: I added a paragraph to §4.3.2 where I tried to guess how much Missing Heritability comes from GWAS cognitive test measurement noise. It does seem to be a big part of the story, as I suspected, although I’m concerned that I messed up the math.^ If anyone cares, here’s why I was reading this: I was interested in learning more about the heritability literature, since that seems useful for understanding the deep structure of personality variation (e.g. I want to have the background knowledge to assess papers like Grotzinger et al. 2022), which in turn seems useful for my quest to reverse-engineer human social instincts, which (as explained at that link) is part of my day job fighting for Safe and Beneficial Artificial General Intelligence. Well, that’s one reason I read the book. But also, it seemed interesting! Who could resist being nerd-sniped by something called the “missing heritability problem”? ^ See Bouchard et al. 1990. That study has a particularly simple and elegant design: they compared identical twins reared apart versus together, and found that the two types of twin pairs were almost exactly as similar in their personality variables. ^ At least, that’s the impression I got from reading Transformer by Nick Lane ^ Note that, when I say “more genetically similar than two random people in the population”, I mean something different than “more related than two random people in the population”. For example, if tall fathers tend to pair off with tall mothers and vice-versa, then fraternal twins will be extra similar along the tallness dimension, because they’re getting it from both sides. But there are hundreds or thousands of gene variants that contribute to height (see §4 below), and it’s possible that there’s no overlap whatsoever between the height-related gene variants on their mother’s and father’s sides. Their mother and father could have negligible relatedness—like, maybe they’re 20th cousins. Doesn’t matter. They don’t have to have overlapping genes, only genes with overlapping effects. ^ Specifically, the argument would be: if you look around middle-class families, some parents are obviously much less strict and more affectionate than others, not because of the child, but rather because of the parents’ personality. I think this is pretty obvious—notice, for example, that two parents of the very same child are sometimes substantially different along this axis. Anyway, given that, we can observe that (I think) siblings raised apart are about as similar as siblings raised together in their adult depression and anxiety, and likewise that adoptive siblings are not noticeably more similar in their adult depression and anxiety than any two random people, and it follows that (3) is unlikely to be a major factor in why adults in the population differ in their depression and anxiety, if indeed it’s a factor at all. Much more on this in §2 below. ^ Spanking was sufficiently widespread in recent memory, overlapping with twin and adoption study populations, that I think we can be pretty confident that spanking doesn’t have much impact on what the kid will be like as an adult. However, I hasten to add, spanking is still a terrible idea for lots of reasons! I strongly recommend the book No Bad Kids by Janet Lansbury, which includes lots of sensible, practical, and compassionate ways to respond to limit-testing behavior, especially for toddlers but also for teens. ^ I think we’ve all seen children following their parents into very specific career paths like Soil Conservation Technician, and I think it’s obvious common sense that this is a thing that happens way more often than chance, and has the obvious causal interpretation. …But if you want hard data, see Bouchard et al. 1990 Table 4, which shows strong shared-environment effects on a couple tests of vocational interests. ^ “Bill” turned out fine, don’t worry. Not only does he have a job, his income is probably 10× mine! And he’s more popular, and better-looking … ^ I didn’t draw environmental influences into this diagram, but here’s my opinion on them: I think the center box of the diagram has mostly kinda “organic” environmental inputs, like availability of food, hormone treatments, etc. I think the right box of the diagram has much more profound environmental inputs, including things like the society’s prevailing cultural norms. Basically, my picture is that an adult will gradually find his way into the microenvironment and behavioral patterns that he finds most natural and suitable, given his innate makeup (“traits”). Those behavioral patterns will strongly depend on how he responds to his society, its people, and its ideas, and likewise on how his society responds to his behaviors. ^ Presumably there’s some nonzero amount of non-additive genetic effects too—that’s why I keep saying things like “mostly additive” instead of just “additive”. At a glance, I’m not sure exactly how to pin it down quantitatively. I’m sure it’s different for different traits anyway. As described in §4.4.1, keep in mind that if you see rDZ≈12rMZ in a twin study, that’s not iron-clad proof that there’s negligible non-additive genetic effects. You can also get that result from a balance between mild non-additive genetics on the one hand, and mild shared-environment effects or assortative mating (§1.5.2) on the other hand. And I’m not sure how to translate between the rDZ vs 12rMZ comparison, and an actual quantity of Missing Heritability; I think it might be possible to get a lot of the latter with just a little of the former? (See Zuk et al. 2012.) Presumably this kind of thing could be figured out with more time than I want to spend right now, if it’s not already in the literature.
2025-01-14
https://www.lesswrong.com/posts/hLkDB7T9HT4TMozw7/paper-club-he-et-al-on-modular-arithmetic-part-i
hLkDB7T9HT4TMozw7
Paper club: He et al. on modular arithmetic (part I)
dmitry-vaintrob
In this post we’ll be looking at the recent paper “Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks” by He et al. This post is partially a sequel to my earlier post on grammars and subgrammars, though it can be read independently. There will be a more technical part II. I really like this paper. I tend to be pretty picky about papers, and find something to complain about in most of them (this will probably come up in future). I don’t have nitpicks about this paper. Every question that came up as I was reading and understanding this paper (other than questions that would require a significantly different or larger experiment, or a different slant of analysis) turned out to be answered in this paper. But more importantly, the paper does two things that characterize it as a great paper in my opinion: It operationalizes and investigates (from the point of view of interpretability) the first interesting mechanistic model of a new behavior: namely a transition between “in-distribution” vs. “out-of-distribution” learning. This is a distinction that transposes to the “in-context learning” setting the difference between memorization and generalization.It finds an excellent “minimal interesting” toy model for this phenomenon, where the behavior exhibits the full range of sophistication we expect but is (at least potentially) fully understandable from an interpretability point of view. More specifically, this model is analogous to (and again, in some sense an ‘in-context transposition of) the revolutionary interpretability paper Nanda et al. that does a detailed mechanistic interpretability analysis of “usual” memorization vs. generalization in neural nets[1]. The setup in He et al. is that of a small transformer trying to solve a particular mathematical (and deterministic) sequence completion problem consisting of residues mod a prime p (equal to 29 in the paper), and related to the modular addition task in Nanda et al. Each task depends on a secret “context parameter,” or latent, given by a pair of residues (a,b). The latents are never openly given, but can be deduced from context[2] via a modular arithmetic computation. The paper essentially studies how (and to what extent) the transformer learns to deduce the context from examples in a text. Latent context parameters are a common phenomenon in essentially all existing language models. Namely, part of a language model’s task in completing a text is determining what is the language of the text (since the continuation should be in the same language)?what is the format: nonfiction, fiction, news, academic writing, etc.?what is the mood of the writing: happy, sad, etc.? The standard way of modelling such behaviors is to assume that the language model has something like a “selector” variable that learns the different possible contexts (e.g. the “set of possible languages”) and then factorizes the problem of in-context learning as: Solve the “inference” problem of figuring out the latent parameter “from context”: e.g. if your text discusses a unicorn, it is probably fiction rather than nonfiction.Solve the “prediction” problem of completing text given knowledge (or at least a best guess) about the latent context: e.g. “now that we know the text is fiction, let’s use fiction convention to generate further text (turn off fact-checking subroutines, etc.)” Most paradigms of in-context learning in interpretability conceptualize the first problem as interesting and the second as boring: while the first involves potentially nontrivial inference (fictional text doesn’t tend to have a disclaimer in every paragraph that says “note that this text is fictional!”), the second is just introducing an extra modifiable parameter in the “business as usual” of generating text. Here the simplest conceptualization is of the text generation processes under different latents as “fully disjoint” subprocesses that are separately encoded in the weights and executed as a simple “if-then” declaration. Already here, because of the probabilistic nature of the “inference” problem (1) above, these can combine stochastically in the text generation procedure (e.g. “if the text is fictional, the next word should be “dragon” but if it is nonfictional, the next word should be “dinosaur”. I currently predict a 50-50 chance of the text being fiction vs. nonfiction, so will predict a 50-50 probability distribution split between “dragon” and “dinosaur”). Even in this setting with step (2) reduced to a stochastic combination of disjoint processes, the question of getting the probabilities right involves potentially quite sophisticated Bayesian inference problems; a toy model for this is studied in the beautiful paper Shai et al. where the inference problem is understood in the language of computational mechanics. Of course in LLM’s, the context dependence in problem (2) is likely more complicated (I discussed this in the “combining subgrammars” discussion in the previous post). For example the rulesets of fiction and nonfiction likely have significant overlap (both use the same grammar, similar conceptions of philosophy and a physical world, etc.), and the effect of changing the latent parameter (fiction vs. nonfiction e.g.) is probably better conceptualized by only partially modifying the internal structures: i.e. from a programming point of view, the different contexts are modifying some of the global variables of the program being run, but not running a fully separate program. Nevertheless, the standard way to conceptualize in-context learning in LLMs is to think of some stable collection of “possible contents”, with associated case-by-case modifications of the “conventional rules of text generation. This paradigm leaves unaddressed the possibility of out-of-distribution generalization. For example if a language model trained on text that has only “conventional” human languages suddenly encounters a large text in Klingon, will it be able to continue this text in passably grammatical Klingon? At least in principle, this is plausible. If you show a large enough klingon text to a human linguist, she will (from her experience studying other languages) be able to infer enough of its structure and dictionary to be able to “create” a new linguistic context in her head, namely, Klingon, and generate text in this new context. In terms of interpreting the brain of the linguist, we might say that instead of the “boring” context where the linguist’s brain has a discrete set of possible variable “settings” corresponding to known languages, she is also able to tune these settings to new and previously unencountered contexts. In fact language models show some limited indications of being able to do this as well: for example this classic paper shows that Bert can learn some structures while processing text in a previously unseen language[3]. Relatedly, SOTA LLMs seem very able to interpolate between existing contexts reasonably well: they are good at tasks with the type signature of “write an ML paper abstract in the style of a Shakespeare sonnet”. Thus it’s meaningful to ask the question “how do LLMs generalize to new out-of-distribution contexts”. Since LLMs are massive and complicated, this doesn’t have a clean answer. But there are several interesting and somewhat mutually exclusive possible answers: (A) LLMs internally learn to perform more efficiently on the training set by finding a compact functional form for the inference problem from “seeing some text” to “finding correct internal variables that generated this text” (i.e., they learn a general formula rather than a case-by-case “switchboard” of existing presets).(B) What an LLM actually learns is not just any “nice function” from observed text to latents, but rather a universal notion of “how to learn”: more specifically, it has a bunch of latent parameters and learns to performs some gradient-descent-like search algorithms to these to recover the context.(C) What looks like an LLM learning different contexts is just a collection of internally consistent heuristics (e.g. things like “copy” or “induction heads”), which apply across contexts and provide sufficient “scaffold” to correctly complete text in a new context. (Under an extreme form of this model, the only things that an LLM tracks are rules like subject-verb agreement and various forms of semantic consistency, and the idea is then that with a sufficiently large input text, enforcing these consistency structures leads to text that, incidentally, gets all the “latent-dependent” information – such as how verbs are conjugated in Klingon – right). All of these behaviors seem to happen in models to some extent. However, it’s tricky to produce good experimental settings which cleanly separate them and analyze them in a formal and fully interpretable setting. One paper that does this, in the context of (B) above, is the paper Ahn et al., where the authors set up a certain iterated linear regression toy model task with latent variables. They observe that by solving instances of the problem with random latent parameters, this model learns a token-to-token gradient descent procedure that correctly fits the latents on out-of-distribution examples (i.e., examples with latent parameters not encountered in training)[4]. While this result is exciting, the hypothesis (B) examined in Ahn et al. seems unlikely to accurately describe semantic contexts in realistic LLMs, where the contexts in question are much more complex and compositional than this nice continuous linear context. This difference between the simplified model of Ahn et al. and the weirder phenomena that exist in language is analogous to the distinction between thinking of ordinary linear regression vs. modular addition as toy models for conventional (“0-shot”) learning: in particular, the linear regression model doesn’t exhibit a spontaneous “grokking” transition where out-of-distribution learning becomes possible only above a certain threshold of training time or sample diversity. Following this chain of thought, the paper He et al. looks at a modular addition variant of in-context learning, where the relationships between the hidden latents and the text generation task are discrete and deterministic, but require representing nontrivial modular arithmetic and linear algebra notions to solve in a general way. In this setting, their results seem to show, interestingly, that models tend to learn in-distribution latents by memorizing a “switch board” mapping from observed examples to known contextual presets (i.e., following (A) but via a memorizing, rather than generalizing function from observations to latents), the circuits that do generalize out-of-distribution tend to look more like (C) above: i.e., they learn heuristics that get refined as more text is seen (and in particular, only work well in the “many-shot” context), rather than directly learning a general mapping from observations to latents as (A) would predict[5]. As this picture is very nonlinear and discontinuous, the resulting process does not (or at least not obviously) look like (B), i.e., does not seem to learn a variant of gradient descent. In order to see what “needs to go right” to get good out-of-distribution generalization, He et al. vary the depth of the model (thus varying the possible degree of compositionality), and they vary the relationship between the diversity of contexts vs. the diversity of examples in a single context in the training data. A very nice thing they do, that vibes with the “separation of phenomena at different complexities” prescription for interpretability research I suggest here, is to look at differences between out-of-distribution examples learned at higher complexity (and in particular “with better out-of-distribution learning”) and examples learned at lower complexity, which are shown to exhibit only certain partial out-of-distribution behaviors called “ratio learning” (a special sub-case of method (C) above). Upshots Now that I’ve explained a zoomed-out point of view on the paper, what are the upshots to take away? Well, the very fact that OOD generalization is possible in a mathematically sophisticated (albeit multishot) context is already interesting and suggestive of similar behaviors in more language-like contexts. The idea that the specific method learned looks more like combining context-independent heuristics into effectively context-dependent behaviors (rather than separately solving the context inference problem and then the prediction problem separately) is some partial confirmation of this being likely in other contexts (something that I’m particularly interested in in the context of work with Louis Jaburi); note though that this particular phenomenon may ultimately be a consequence of the details of the mathematical setting rather than a universal phenomenon. Finally, while this is less exciting than the “OOD generalization” insights produced by this paper, the paper provides a first interesting and mathematically sophisticated where, depending on some tunable presets, one sees a depth-2 “memorization-generalization” spectrum, where there is a spectrum of phenomena ranging from memorization of examples and contexts → generalization of examples and memorization of contexts (“in-distribution”) → generalization of both examples and contexts (“out-of-distribution”). Associated with this we should see new interesting “double-grokking” phenomena (that the paper already starts to see, with some interesting complications), we should see different learning coefficient measurements at characteristic temperatures associated to the two contexts, and finally this may be a good testbed to look for tools (necessarily beyond the naive test-train distinction) to quantify and analyze phenomena at different points in the memorization-generalization spectrum and their relationships with each other. Acknowledgments Much of my understanding of ICL and related phenomena comes out of discussing a deep dive by Lauren Greenspan into the subject. Discussions with Jake Mendel and Kaarel Hänni have also been useful here. Stay tuned for next time: Originally this was the introduction to a longer post that also explains in a bit more detail the behaviors found in the paper, and in particular reinterprets them in terms of the discussion in the previous post about ways of compositionally forming them from simpler rules, and the “spectrum between memorization and generalization”. Writing up the details has taken longer than expected, and I decided to separate out this “introductory” material on the paper into its own post (it also provides a post for my planned daily posting schedule). ^ I want to stress the last point because I think its importance often goes unnoticed: when designing papers like this, choosing a good toy model is perhaps the most important thing to get right. You want the toy model to be expressive enough to exhibit the behavior you want to identify, and to distinguish it from other, simpler explanations, but simple and elegant enough that the behavior is findable, cleanly interpretable, and distinguishable from noise or alternative mechanisms. You want your “small” numbers to not be too small to trivialize your problem and your “large” numbers to be large enough to cleanly exhibit asymptotic behaviors but not too large to be computationally intractable. Finally, you want the model to be a potentially representative example of an interesting behavior that is in the same reference class as behaviors that occur in state-of-the-art systems. He et al. balances all of these factors perfectly – as I said, it is a very good paper. ^ Meaning in this case the “previously given part of the sequence – the overloading of the term “context” will be a perennial issue in these discussions. ^ Note though that the language in the paper is a pidgin of previously encountered languages, thus not quite as out-of-distribution as the Klingon example. ^ Note that I haven’t carefully read this paper. ^ Note that I am slightly reading between the lines of the results in the paper here, and am not 100% sure the authors would endorse the same point of view.
2025-01-13
https://www.lesswrong.com/posts/ggo4Q6Y6dcTEeGkCg/ai-models-inherently-alter-human-values-so-alignment-based
ggo4Q6Y6dcTEeGkCg
AI models inherently alter "human values." So, alignment-based AI safety approaches must better account for value drift
bfitzgerald3132
Hi all. This post outlines my concerns over the effectiveness of alignment-based solutions to AI safety problems. My basic worry is that any rearticulation of human values, whether by a human or an AI, necessarily reshapes them. If this is the case, alignment is not sufficient for AI safety, since an aligned AI still challenge humanity’s autonomy over its normative conceptions of ethics. Additionally, AI’s ability to represent institutions and its association with “science,” “technology,” and other nebulous (but revered) concepts gives a single model an unprecedented amount of influence over the totality of human values. At the very least, value drift should be acknowledged as a structural issue with all alignment solutions. At most, solutions like computational governance should limit the provenance of advanced models as much as possible. I put TL; DRs after each section - feel free to take advantage! Intro and topic overview There’s a lot of discussion in AI alignment literature about feedback loops, bias amplification, and the like. The idea is that if a model makes misaligned predictions, the model’s application reifies this bias, making it more significantly encoded in training data and, by extension, more prevalent in future predictions. Brian Christian’s example is crime prediction models. Statistical modeling shows that “crime” (that is, crime as recorded through arrest records), is more likely in low income neighborhoods. But, police departments might react to these models by sending more police to poorer neighborhoods. Doing so heightens arrest rates, as it would in any neighborhood with a greater police presence, which heightens the disparity of recorded crimes between affluent and low-income neighborhoods. As a result, future iterations of crime prediction models correlate income level and arrest rates more significantly; police departments increase their presence in these neighborhoods; and the correlation of crime and income deepens further as the cycle continues. This picture glosses over two key implications of feedback loops. One is that bias amplification impacts ideology. Take an example of bias amplification in Practical Deep Learning for Coders: social media content recommendation algorithms. YouTube, and other social media websites, learn a user’s preferences from their browsing history and offer new suggestions accordingly. But, the kinds of videos an algorithm recommends alter the distribution of a user’s history: recommending videos on far-right politics makes a user watch them more, which makes future models “learn” that the user is interested in far right politics. This kicks off a feedback loop that shifts the user’s content further right. In other words, social media recommendations shape a viewer’s interests as much as they represent them. The implicit upshot is that AI models, by altering the content a user consumes, alter the user’s social, political, and ultimately moral views. Consuming more far-right content gives you far right views. In the same way, seeing economically disadvantaged populations getting arrested more frequently can deepen one’s prejudices. Much like the crime prediction models they use, cops learn to see certain areas as “bad neighborhoods” and develop mental associations about the kinds of people who live there. So does everyone watching coverage of yet another kid from “those neighborhoods” getting arrested. It goes without saying that the output from other "models" (be they natural or artifical intelligences) shapes the cognitive, contextual models by which our consciousness attempts to process the world. If we accept this, then changes to what we believe or what we're interested are some of the many impacts of feedback loops. This ties into the second blind spot of discussions about bias amplification, and my thesis in this post. Discussions about the YouTube algorithm, or any other algorithm subject to feedback loops (I.e. any algorithm with real-world use cases and multiple training iterations), mostly concentrate on cases where alignment fails. Bias amplification in crime prediction models are thought to come from defective, sycophantic models of crime: ones that optimize for the wrong features. What is discussed less is the model that actually succeeds in making “aligned” predictions about crime. Why shouldn’t these predictions alter real-world behavior just as much as defective predictions? Wouldn‘t predicting crime “correctly”—somehow managing to delineate incidental correlations from direct, causal links—alter police behavior, such that future iterations of crime prediction models learn from different distributions? This claim moves feedback loops from a structural defect in predictive models to the very precondition of their use. If correct predictions are truly just matters of finding the most statistically probable response to a certain training distribution, then, by altering the set of training data, the act of making a prediction changes what it means for a prediction to succeed or fail. In other words: any moral action, or articulation of human values, rewires how we delineate moral and immoral behavior. This phenomenon is an intractable feature of discourse and holds whether the prediction comes from a neurological or statistical model—that is, from an artificial or nature intelligence. TL; DR: See above ^ Content recommendation algorithms: why a model’s predictions change its objective Let’s return to our example of the YouTube content recommendation algorithm. What would it mean for this model of the user’s interests to be “aligned?” One idea is that an aligned content recommendation model should only address features that directly impact a viewer’s interests and ignore any kind of accidental correlations (assuming it’s possible to distinguish between the two). Others might suggest that an aligned model capture a user’s interests as they appear outside of the influence of feedback loops. For the sake of argument, let’s examine an ideal, aligned model tailored to these principles. For example: before Gail starts watching YouTube, they’re interested in archery, ukulele covers of indie game music, and DIY crafts. They’re looking into these interests further. The ideal model would reach directly into Gail’s brain, find out exactly what interests them about these topics, and use these features to recommend equally appealing videos. And, once it found these videos, it would not factor Gail’s new interests into future recommendations. This approach would be paradigmatic of AI alignment. There would be a standard of values guiding not only what the model should recommend, but how to recommend it. The only problem left would be aligning a model with this vision. Fast forward to three months of YouTube consumption. The model has captured Gail’s interest before it started using the model, and thereby preserved Gail’s pre-YouTube interests as a sacred object, a trophy uncorroded by AI activity. But in spite of this, the model’s predictions have still changed Gail’s current interests. Its recommended about archery transformed their casual interest into an obsession. Whereas they used to only read Green Arron comics, they have now joined a local archery society and are competing in a tournament next month. Meanwhile, listening to ukulele covers of the Celeste soundtrack has become a bit grating, and Gail is no longer interested in them. What’s happened is that the model’s predictions have changed Gail’s preferences, such that it can only satisfy its meta-objective by failing its mesa-objective. It can only satisfy the spirit of its request by ignoring its literal directives, and it can only satisfy its literal directives by ignoring the spirit of its request. In this case, this means that a content recommendation model could only recommend relevant content by adapting to Gail’s changing interests. Doing the opposite—making it a priority to distance the model from any kind of bias amplification—would only create an ineffective model. More important, though, is that both approaches alter Gail’s values, because it is impossible for a predictive model not to. Gail, like YouTube, maintains a model of their own interests: a model that develops iteratively as Gail reacts to the model’s recommendations. Even if the YouTube algorithm recommended completely random videos, the ones Gail likes will still deepen their interests, and those they dislike or feel apathetic toward will dissuade them from exploring these interests. The same holds with an intentionally misrepresentative model, or even a human “model” like a friend or family member. Because every experience invariably contributes to our models of reality, there’s no categorical difference between videos that preserve their intentionality as something sacred and untouched and videos that corrupt it. Videos about archery, ukulele covers, and other things Gail has “always been interested in” create just as much of a feedback loop as authoritarian propaganda. TL;DR: participating any kind of communication means participating in the ongoing reshaping of human values, no matter how we try to dissimulate it. Northpointe and recidivism prediction: why AI’s impact extends to social/ethical norms Okay, we might think. It’s always been the case that looking at a different set of videos will make us like different things. How does AI make things worse? To dive into this issue, we have to explore the shift from AI’s impact on individual, subjective attitudes (e.g. the videos Gail was interested in) and an entire community’s social/ethical norms. As a case study, take Brian Christian’s treatment of Northpointe’s recidivism prediction models, used by Illinois’s parole board for over 40 years. A recent audit found that these models approved or denied parole applications solely through zip code, income level, and other proxies for race. This led to a disproportionate amount of Black parole applicants serving full sentences. The presence or absence of racial discrimination on such a large scale almost certainly influences people’s ethical attitudes. However, the difference between moral values and subjective interests is that the former unfolds on the normative level. That is, cultural models for ethical behavior are shared agreements among individuals, rather than a collection of subjective attitudes. The following two paragraphs take a quick (if a little dense) detour to explain the difference (which people less interested in the philosophical nuts and bolts are more than welcome to skip). The gist is that according to moral philosophers like Robert Brandom, normative (communal) statuses and subjective (individual) attitudes simultaneously determine the other and are determined by the other through a reciprocal relationship. Social norms form the epistemic raw material for subjective attitudes, since subjective attitudes only acquire content insofar as they are responses to social norms. The building blocks of discourse come from normative agreements about the meaning and future use-cases of things like pointing, addition, agreement, and honesty. Subjective attitudes ultimately emerge as internal ratifications of these normative agreements. The fact that I have views about the morality of stealing signifies that I’ve committed myself to recognizing theft on a conceptual and practical level. Attitudes, in this sense, are norm-dependent. Meanwhile, “taking” normative commitments as subjective attitudes is coextensive with “making” them (i.e. reshaping the conceptual content that community members inherit). The particular instantiation of stealing as a concept in one’s life exceeds the bare-bones, normative sketch that community members first received. Our experience with actual, tangible instances of theft, and how it fits within our broader contextual model of the world, supplements the community’s “universal” conceptual norms. Thus, when I invoke my conception of theft, my particular experiences with theft--for instance, my experience with stealing food to pay for my starving family--alter the normative content the community recognizes as universal. So, using normatively rooted concepts—what Brandon takes after Hegel in calling “rituals of reconciliation”—lets us reshape our normative commitments to recognizing certain kinds of moral content, since it blends together normative statuses and the determinate shape, flexibility, and reconciliation of contradiction given by their particular use-cases. Norms, therefore, are attitude-dependent. There are two upshots here for our discussion of recidivism modeling. The first is that in a certain way, all recidivism predictions, and all predictions in general, are misaligned. What we want when we’re thinking about alignment is a purely “non-biased” picture—that is, an objective, non-normative look at both our world and the values with which we approach it. But humanity’s reciprocal dependence on social norms makes apprehending the world non-normatively just as ridiculous as apprehending it non-conceptually. There is no such thing as a clear pair of glasses. Instead, the rose tint on a pair of lenses is the precondition for being able to see anything at all, if content only forms through tension with social norms. So, the data in our contextual training set—that is, each experience of the world, as mediated by a normatively constructed consciousness--is not an unlabeled selection of YouTube videos. Each data point’s relationship to an ethical norm labels it as moral or immoral; desirable or undesirable. After all, if subjective attitudes are really positions about social norms, then these norms’ evaluative character carries through in their subjective manifestations. This, in Brandom’s sense, is part of how norms “make” reality: they are the default landscape into which we are thrown. Our modeling of the world is a supervised learning process. The second upshot, though, is that artificial and human intelligence have the same, direct influence on the normative foundation of our experience of reality. To explain this, let’s explore how “aligned” and “unaligned” predictions might both reshape human values, regardless of whether they come from a human or AI model. This impact does much more than increase prejudice toward marginalized groups. By laying bare the racism inherent in our justice system, Northpointe’s models produce data through which people “learn” that the U.S. government (and companies like Northpointe) can be morally repugnant, since the data shows the arbitrariness of convictions. Conversely, these models’ predictions let us “learn” that prisoners are often unjustly convicted and, therefore, are less deserving of moral condemnation. Taken together, these predictions change how activities like non-violent and violent protest, prisoner visitation, and case work are valorized, and renders intelligible positions like abolitionism. Abolitionism is only moral insofar as prisoners do not deserve to be there. Recontextualizing how prisoners appear in our mental models of reality changes the normative, ethical practices it makes sense to adopt. It’s for this reason that “aligning” a model does not neutralize its worldly impact. When a 2016 audit unearthed the Northpointe model’s racist roots, data ethicists assembled an alternative model for recidivism prediction using only factors that seemed to more directly correlate with reoffending. Christian trumpets that the model not only matched Northpointe’s accuracy almost exactly, but, by moving away from direct proxies for race, curbed the more blatant manifestations of racism in the differences between false positive rates. But adopting this “aligned” model reshapes ethical beliefs as much as would adopting Northpointe’s models. Mitigating racism in recidivism predictions increases the average person’s faith in the U.S. government to accurately distinguish truly “dangerous” people from “harmless” people. It also restores faith in the carceral system generally, since the government’s “unbiased,” “aligned” prediction methods make it a near fact that certain people should be put away. Withholding parole becomes an ethical obligation in this scenario, since decoupling of recidivism prediction and racism means that it is the essence of applicants, the center of their being, that makes them dangerous, rather than illusions of race or class. Conversely, taking abolitionist positions, or protesting recidivism prediction techniques, becomes an immoral action, since it supports releasing a pack of ontologically depraved wolves onto a population of innocents. As previously stated: the entire question of morality and immorality comes down to context modeling. If our contextual model world judges X to be evil, the right thing to do is Y. If our model views X as good, the right thing to do is Z. TL; DR: All AI models alter ethical norms just as much as they do individual, subjective attitudes, since the former spring from repeated concept use. But, altering norms has a more significant impact because norms are the building blocks for all conceptual content and the fundamental evaluative stances underpinning ethical deliberation. So, an AI altering normative statuses alters our access to reality itself, and the ethical, social, and logical dimensions through which we filter it. Signatures and power: why AI is uniquely problematic for value drift AI is uniquely problematic for value drift. Why? That rationality and ethical deliberation depend on a normative foundation suggests that rationality's counterparts--force, power structures, and inequality--play a powerful role in reconciling conflicting systems of norms. The fact that each normative sphere is disjoint means that there’s no transcendent, meta-ethical rationality to bridge gaps and legislate collisions. “All else being equal, force decides.” There’s a violence inherent in the ability to sign off on one range of predictions as aligned and others as unaligned, to reduce ethical deliberation to the optimization of a given variable, to naturalize one system as being “the main one.” In the communal process of norm-taking, some voices are inherently louder than others. The risk behind AI is that its institutional ties make it the loudest voice of all. An organization’s choice to use an AI model makes it a proxy of the institution. An institution says, “Look at this, everyone! I’m signing off on this model’s ability to represent my company. The fact that it could lose us so much money shows we’re serious.” And once the rest of the world is convinced, the voice of a single model can become louder than CEOs and world leaders: it wields the awesome power of the corporation-leviathan, and so, by making predictions as it was trained, it alters what humans believe. Take Northpointe’s model. By deciding to implement it, the Illinois Parole Board codified the model as a proxy of the parole board. Even though the model had human intermediaries, even though a human parole board ultimately approved every decision, the reliance on Northpointe’s model consolidated the organization’s separated powers into mere conduits of the Northpointe model: the leviathan, the decision-maker. The result was that when the model predicted a Black man would reoffend for no other reason than race, it issued a statement on behalf of the Illinois state parole board that that man (and by extension, people like him) deserved to have their parole denied. This was concept-use in the Brandomian sense, in that it both drew on prior normative commitments and altered those commitments, but the model’s institutional signatures made its voice univocal and uniquely powerful. The parole board was too consolidated, was too beholden to the model’s decisions, to participate in the Hegelian “ritual of recognition” and check the model’s influence. Indeed, the Northpointe model’s unique symbolic position made its predictions uniquely incontrovertible. On one hand, the model represented the Illinois parole board, and in so doing, became an arbiter of justice, a justice factory, an agency you could turn to for cutting edge new applications of age-old laws. But it was also a proxy of other institutions: science, math, technology, the future, etc. Anything under the heading “science” or “math” is a fountain of truth that can never be doubted, and the same was true with the Northpointe model. Moreover, because the model was hooked up to an institution, it was hooked up to ideological apparatuses that distributed the decision’s results. Newspapers and other media outlets could echo the Northpointe model’s ruling. The result was that Northpointe’s predictions could become descriptive disseminations of law, rather than an evaluative claim. They set precedent for future cases, and, in doing so, gave particular content to the universal normative statuses used to prosecute criminals. The parole applicants it denied, their families, and even the parole board members themselves, could not nearly match the model’s influence. And, the Northpointe model wielded this influence solely because of its ability to become a proxy for an institution and vague, symbolic entities—a capacity unique to an AI model. Before AI, your voice could only be so loud. An emperor was a person and, as a person, people could recognize them as capable of making mistakes, or, at the very least, articulating opinion rather than pure, unbridled truth. But now, truth itself can speak, as can science and corporations, and concentrations of discursive power that were too great for a single entity can be funneled into individual models. What happens if these proxies begin consolidating into fewer and fewer beings—if the power behind their voices increases? When do we lose the ability to question them? Did we ever have it in the first place? TL; DR: Reasons why AI’s influence over human values is problematic: 1.) it can represent institutions and command their symbolic position; 2.) its voice is seen as uniquely powerful due to its association with “science,” “technology,” and other entities that are seen as incontrovertible; and 3.) it is connected to an institution’s ideological apparatuses. “So, uh… what do we do with this?” It’s stupid to suggest that we abandon alignment efforts entirely. I would absolutely prefer to live in a world that matches our normative conceptions of alignment that one that doesn’t. But what’s important to note, in both cases, is how much this dilemma sucks. Either we suffer an extinction level event from misaligned AI, or we lose autonomy over our ethical, linguistic, and social norms—that is, over our access to reality itself. Anything is possible once discourse loses its anthropic center. The same phenomena that dehumanized those of other religious, racial, gender, sexual, or ethnic identities might cheapen the value of human life. Our moral values might shift to align with a GAI’s main objective, or to the achievement of one of its convergent instrumental goals. Entire systems of culture, religion, and tradition might be lost. Even more significantly, what if it’s this value drift itself that constitutes an extinction event? What if values drift to the point where we no longer care about humanity’s survival? Or, equally bad, to the point where we are indifferent about activities that may threaten it? The AI models that emerge in the next few years are the entities with the greatest power in history over truth, morality, and reality itself. The dystopian nightmare scenarios are endless. Like I said: there is no transcendent ethical metric for classifying these impacts as bad. But if, for whatever reason, something compels us to combat extinction threats, then it’s critical to understand the drawbacks of potential solutions. The drawbacks of AI alignment have not been highlighted enough. A structural issue with any alignment solution is that creating an aligned model, just like creating an unaligned model, alters human values. Given this, aligning models with current iterations of ethical norms is an insufficient fix, since it ignores how such an aligned AI might change these values downstream. Any AI safety approach that does not account for value drift fails to remedy an extinction event. If we adopt AI on a large scale, we’ll have to choose between extinction and disempowerment in the actualization of human values. But what if we resisted the distribution of AI entirely? What if we traced the world-ending potential of deep learning models to the logic of optimization underpinning them, and determined that the true solution isn’t changing how we apply optimization in machine learning models, but overturn optimization’s stranglehold on the economy, social norms, and many, many other spheres beyond AI? I believe that this solution is necessary, but it must remain a topic for another article.
2025-01-13
https://www.lesswrong.com/posts/RHrHojQ38BnHxAet2/six-small-cohabitive-games
RHrHojQ38BnHxAet2
Six Small Cohabitive Games
Screwtape
Previously: Competitive, Cooperative, and Cohabitive, Cohabitive Games So Far, Optimal Weave: A Prototype Cohabitive Game. I like tinkering with game mechanics, and the idea of cohabitive games is an itch I keep trying to scratch. Here's six simple games I made over the last year or two as I was trying to get something satisfying, along with my notes on them. If you're looking for something to play at game night, I suggest Handlink then Commerce & Coconuts. None of them are well playtested at all. Games Dealerchip Rules Setup: Sit in a circle. Take a standard 52 deck of cards, without jokers. Shuffle, and deal evenly to all players. If you can't deal evenly, set the extra cards aside. Each player looks at the top card of their hand. The suit of that card becomes the player's Favourite Suit. Remember it. Each player may now look at the rest of their hand. Objective: Each card is worth points equal to its value. (Aces are 1, jacks are 11, queens are 12, kings are 13.) A card of your Favourite Suit is worth double. A jack of your Favourite Suit would be worth 22 points. The goal is to get as many points as you can. The goal is NOT to get the most points — you don't care how many points everyone else gets. It could be one, it could be one million, you don't care. Aim to beat your personal best score. Play: Start a timer for ten minutes. Players may freely trade cards with each other. You don't have to tell people what your Favourite Suit is or what cards you have in your hand, but you do have to actually hand over the cards you say you're handing over as part of the trade. You can trade cards at rates other than one to one if you want. At the end of ten minutes, stop trading. Count up your score, and see how well you did. Notes This is straightforwardly borrowed from Planecrash and Eliezer Yudkowsky's sketch of the basic Jellychip game. Dealerchip is fairly simple, since each card has a straightforward and clearly valued worth to both players. Sometimes you can't make a perfectly fair trade because you have to trade whole cards and you lack smaller values. When that happens though it's pretty clear (if you're honest about Favourite Suit) who is getting the short end. (I wrote the bit about not having to tell people your favourite suit or what cards you have leaves things open for some sharp or clever negotiation, but looking back I think it's mostly a trap. I haven't seen anyone get things to go better for them by hiding the suit.) Ultimately, there's a fixed pie of value to go around, and not that much room for long-term strategy. On a gameplay level, there's also a kind of decision paralysis- sometimes people try to talk to everyone at the table before making deals, or get overwhelmed with the number of offers. If you want you can spark an argument on Rawls' Veil of Ignorance in a four player game. Assigning each person a different suit such that all four are used, and then deal the cards out at random. Inevitably, somebody will wind up with a higher value hand, and usually argue that it makes sense to evaluate fairness based on the trades instead of just sorting the deck so everyone has all the cards of their suit. Sorting the deck is strictly better for every player than any other arrangement, but involves some people making more upwards progress than others. Is fairness evaluated on the level of a trade, or on the level of overall end state? Disordered Interests Rules Setup: Take a standard 52 deck, with the jokers. Each player draws one card, looks at it, and places it face down. This is their Value Card. Each player draws five cards, concealed from everyone else. This is their Hand. Objective: Your score is the sum of the values of cards in your Hoard that match the suit of your Value Card, plus one for each card in your Hoard that does not match the suit of your Value Card. If your Value Card was a Joker, you instead score the value of all cards 7 or less. The other Joker is worth 15 points. You are trying to beat your personal best record. You don't care what the score of other players is; victory doesn't go to the single highest score. There isn't a clear Victory at all. Think of it like going for a run, where you're mostly trying to improve your own speed. Play: Play proceeds clockwise, starting with whoever says "I start" the fastest. On your turn you can show any card from your hand to the people adjacent to you, and ask them what they offer for it. They each show you a card from their hand, starting with the player to your left. You may accept or reject the offered trades. If you accept, trade cards with the one you accepted. Either way, put a card from your hand in a pile (The Hoard) and draw a new card. Once all cards have been drawn and played, the game ends. Notes This is basically a second stab at Dealerchip, with the Joker values allowing for more subterfuge and turning trade negotiations into more of an ultimatum game. There's some auction action in being the player on the right able to bid up the player on the left. I tried to fix the decision paralysis by limiting you to just the adjacent players, which wasn't really necessary at the level of Dealerchip's simple trades but I saw as a potential issue if I made things any more complicated. Straightmaker Rules Setup: Take a standard 52 deck without the jokers. Deal the deck out evenly to all players. Players keep these cards in their hand upright, and these cards are Domestic Cards. Objective: A player wins if they have a straight flush in hand at the end of the game- five cards of the same suit whose values are increasing by one, where at least one card is Domestic and one card is Imported. (E.g. the 8, 9, 10, J, and Q of Hearts.) Any number of players can win. Any number of players can lose. Play: Players may freely trade with each other as follows: they can talk about what they want and what they're willing to trade for, offering any number of cards in exchange for any other number of cards. Don't show the other person the cards during negotiation. Both players must agree to trade, but do not get to see the actual values of the new cards until the exchange happens. Any cards received via trade are kept sideways, and these cards are Imported Cards. Notes This was an attempt to make the desires a bit more complicated. The first draft had Wins (a straight) and Super Wins (a straight flush) but I eventually figured that was overcomplicating things a little. The number of players matters a lot as to how difficult the game is, and I never put the effort into figuring out how to adjust the deck size. I think the sweet spot is about eight cards per hand, with higher value cards (e.g. kings) removed from the deck first? I still like the mix of Imports and Domestic, even if sometimes people let the cards mix because orientation isn't an important feature of most card games. Commerce & Coconuts Rules Setup: You and your fellow players are stranded on a deserted island. Each of you should roll four six-sided dice, writing down the results in order. Your Ability Scores will be between 1 and 6, and are: Coconut Gathering Water Hauling Shelter Fixing Boat Building Start the game with the following Resources: 3 Coconuts, 3 Water, 3 Shelter, and 0 Boat. Objective: You have ten turns. Escaping the island is the best outcome for you, and the faster you escape the better. Surviving is the second best outcome. Dying is the worst outcome. Play: Each player goes through each phase of the turn together. (Everyone acts, then everyone trades, and so on.) Start with whoever was in the ocean most recently and go clockwise, except trading when everyone can trade at once. Each turn has three phases. Act. You may pick one of your ability scores, gaining that much of the corresponding resource. In other words, if you have Water Hauling 4, then you can pick Water Hauling to get 4 water.Trade. You may freely trade any of your Resources with any other player, provided that all trades are mutually agreed upon. You can trade decimal amounts of anything other than Boat.Rest. To rest, spend one Coconut, one Water, and one Shelter. If you can't afford to do that, you die and are removed from the game.Escape. If at this point you have 10 Boat, then you escape the island and sail to safety. You cannot Act or Trade, but you still need to Rest and spend a Coconut, Water, and Shelter each turn. After ten turns, end the game. Notes Yes, yes, the shelters are really badly constructed and keep falling over. Also one coconut a day is a pretty weird amount for a person to eat. Look, it's a very abstract simulation. In the initial version the four Resources were the four suits of a 52 deck, and I'm glad I eventually gave up on that because story of the island is stronger than the abstract thing I had going on. The part I love about Commerce & Coconuts is how very obvious it makes the importance of trade. The odds are good that at least one of your Coconut Gathering, Water Hauling, or Shelter Fixing will be below 3, which would ordinarily threaten a death spiral. If any of those are a 1, you're just doomed — unless you can trade. In a group of three or four players, you can probably put together a good trade route and have a bit of surplus. I really wanted to add some kind of way to invest resources and get better resource gathering. Part of that is the engine builder in me, but mostly that feels like the obvious next step in the tiny economic simulator I've got going on here. Ironically, the most valuable players are likely going to be the first to escape the island, making things harder for the remaining players. "Do I have to escape, or can I wait and help other people?" is likely to come up. For gameplay everything is more fun if you let people wait (people are usually happy to help if it's at low cost to them, and they get to keep taking actions) but as a teaching tool I like how clear it becomes that you want the high producers to stick around. (Sometimes unless you're second place under one of their niches.) Since there's no random element and no hidden information, people can 'solve' the game and math out the rest of the game from the end of turn one. It wouldn't be too hard to add a bit of randomness, but the basic math of the game doesn't have that much room so double check the income. Random elements mean that trade is even more important, since other players become a kind of insurance against accident and famine. Death/survival/escape is basically the Wins/Superwins from Straightmaker, but the story seems to help it work better. This is the least abstract of the five, and I think that's very much to its credit. Overall I feel like this is one of the stronger successes of the five. Discarder Rules Setup: Take a 52 deck of cards. Shuffle, and deal half evenly to the players, half to a pile face down as the Discard pile. Objective: At the end of the game, you'll pick one Central card from your hand. The object of the game is to get as many Alike cards to that Central card out of the Discard pile and into your hand as you can. "Alike" means either the same suit or the same value as your Central card. It doesn't matter how many cards other people get into their hands. Try to beat your previous best score. Play: Play proceeds in turns, starting with the oldest player and going clockwise. Each turn, you can choose to either Look or Match. If you Look, look at the top card of the Discard pile, then put it back either on top or on bottom. If you Match, show a card from your hand and name a quality of your chosen card. (Either suit or value.) Turn the top card of the Discard pile face up- if it matches your chosen card, then you collect the top card of the Discard pile and put it into your hand. If it's not a match, put both the shown card and the top card of the Discard pile either on top or on bottom of the Discard pile. The game is over when you run out of cards in the Discard pile, or someone runs out of cards in their hand. You may freely communicate with each other throughout the game. Notes The point here is to create a shifting network of alliances- you can more efficiently pick up cards if the person to your right Looks, then tells you what's there- and there's the opportunity to differentiate by picking different suits to focus on. This makes games with more than four players unstable, and you can get a bit of bickering based on how many cards of the same suit someone started with. Even if everyone has a different suit, you can get a little bickering over cards with Alike values, which is intentional. The other point was to experiment with games where there was something to add to the pile. Dealerchip and Straightmaker have a static pool of cards everyone is working with, whereas Discard adds new cards over time. Ultimately though, there's a limited amount of pie to go around and everyone knows the way to get more pie is to prevent other people from getting the flavour they want from the fixed potential supply. I think to make that work, there would need to be more uncertainty in how much new value could be created. I mostly think of Discard as a failed experiment. It also takes an annoyingly long time to play out. Handlink Rules Setup: Take a standard 52 deck and shuffle. Objective: Get points by making poker hands with fellow players and dividing the points. Hands are worth points as follows: HandPointsFive of a kind*9Straight Flush8Four of a kind7Full House6Flush5Straight4Three of a kind3Two pair2One pair1High Card0 (One pair and high card are worth 0 points.) Your objective is to have the highest average score you can over all hands you play. You don't care how many points everyone else gets, either per-round or on average. Play: Each round, deal each player two cards. If there are only two players, deal an additional card face up as the Table. Rounds should be fast; if necessary, set a timer for one minute. Players may freely talk to each other. Any combination of players may combine their cards to make a poker hand, but must mutually agree on how to distribute the points between them. Each player may only be part of one point-scoring hand. If the Table has a card, the Table refuses to combine hands unless it gets 1 point, but agrees to any combination where it gets 1 point. Variant: Play for a fixed number of rounds. Fifteen is a good number. Variant: Points must be arranged in whole integers; you can't give someone half a point. Variant: Each card can only be part of one point-scoring hand. Players can contribute each of their two cards to two different hands. Variant: Have a Table card in games with more than one player. The Table can be a part of any number of poker hands, though 'negotiates' for one point from each as normal. *If playing with more than one deck Notes First off, not letting people use fractional points is kind of me messing with people. For proper negotiation practice I think it's better to let them use fractions. Nothing breaks if you let them split points into smaller chunks, but on a gameplay level I like the overhead of 'I'll owe you ones' and the friction it gives to most ways of building Two Pair and Three Of A Kind. Adding the Table card during games with more players allows for more kinds of hands, though it's an intentionally stupid negotiating partner. There exists a lovely argument that it takes five cards to make a hand, so everyone should get a fifth of the points if they contributed one card and two fifths if they contributed two cards. That lasts right until the Fun Negotiations when someone could be part of a flush or part of a straight flush and this becomes a bidding war, or when two players could make a Two Pair together but someone else has the right extra card to make it a Full House. The thing I like about Handlink is it's the fastest cohabitive game I've yet come up with. You can play rounds very fast, and you can add and subtract players mid-game without creating any issues. I'm a little sad about two pair and one pair having the same player payouts, but I think keeping the order of regular poker hands and the simple point ranking is worth that. It's also very worthwhile to drop out of a game if you get a lucky start. I think that's kind of fine? Poker is the same way unless you're playing tournament. You can fix this by playing for a set number of rounds. Overall notes Comparisons to cooperative games Something I like about cohabitive games is that it's a good answer to the dilemma of cooperative games. Cooperative games risk 'quarterbacking' where the best player tells everyone else what to do and everyone does it or improves the one plan everyone is following. The three ways around quarterbacking I've seen are 1. communication barriers (Hanabi, The Mind) 2. sharp time limits (5 Minute Dungeon, Keep Talking And Nobody Explodes) or 3. potential traitors (Betrayal At House On The Hill, Saboteur.) Having separate goals is nice to prevent this, especially if there's always more to win. Someone might try to quarterback a strategy for everyone to get off the island in Commerce & Coconuts or everyone to get a straight in Straightmaker, but there is a limit on how much advice you should take in Dealerchip. Sidenote: there's at least a little bit of prior art in the "semi-cooperative" genre, which is generally games where it's possible for every player to lose if a certain condition is triggered, and if you all avoid that "everyone dies" trigger then there's one winner. Notable exception is Nemesis, which has individual victory conditions for each player that mostly benefit from helping each other, but is fairly clear that you can win independent of other players. By my definition that makes Nemesis a fully cohabitive game, though it does lean more cooperative. Consequences and Conflict I notice I avoided putting stronger consequences in these games than just not trading with someone. It's like an artificial world where tariffs and trade embargoes exist, but not war. Even theft is awkward, though you can stiff people (probably just the once) out of what you said you'd trade them in Disordered Interests or ask for a loan in Commerce & Coconuts then not repay it. Part of this is I want to give people a proper push out of the competitive mindset. Part of it is probably that I like building things and get annoyed when people knock over the thing I'm building, so adding in ways for other players to kick over my sandcastle seems less fun. (Unless it's a directly competitive game, in which case I can get in the right head space.) I should add more reasons and ways to interact negatively when I see some design space for it. Takeaways Negotiations will take lots of time if you let it. Put timers on it, and curb decision paralysis by limiting who you can trade with."Until a player gets X points" or "until all but one player are eliminated" don't work well as end conditions. I mostly went with "Until X turns have passed" which seems fine, but there's probably more options.Unless you do something I didn't do here, there's not really a point to concealing what you value or want. I'm very sure there are setups where concealing that information makes sense.Cohabitive games are sensitive to the number of players. Poker and Risk gradually simplify themselves down to small player counts, Pandemic and Gloomhaven need to tune themselves to the extra actions and resources, but Commerce & Coconuts (the least affected, since each extra player adds resources and requires resources linearly) gets easier if you have more diverse players to trade with.Cohabitiveness isn't enough to prevent the players from working out the decision tree for the game in advance. A little hidden information or random element keeps people having to recalculate every turn, which is good.Setups where there's a specific combo two players can do, that results in something better than either of them could do alone, are neat and create more interesting play.Reputation is a powerful tool, especially in any game where value comes in hard to split 'chunks.' "Fair would be me getting 1.5 and you getting 1.5, but we can't split it in half so I'll let you have the extra this time and you owe me" is a very common play pattern. The Hook None of these games have managed what I think of as the hook. It's that thing where, after I lose the first game, I go "oh, let me try again, I think I see what I did wrong-" or where I doodle on a strategy in my head until I get to try it out. They work, they're okay, and I think there is a bit of depth in how to negotiate here, but it's just not catchy. I think that's because there's only really one thing going on at once here, there aren't layers. They were good tests though, and I think I learned a bit of the design space from playing with these. If you give any of these a try, I'd be interested in your feedback.
2025-01-15
https://www.lesswrong.com/posts/BgXTsN6iLh8hkx9fi/moderately-more-than-you-wanted-to-know-depressive-realism
BgXTsN6iLh8hkx9fi
Moderately More Than You Wanted To Know: Depressive Realism
JustisMills
Depressive realism is the idea that depressed people have more accurate beliefs than the general population. It’s a common factoid in “things I learned” lists, and often posited as a matter of settled science. In this post, I’ll explore whether it’s true. Where It Began The depressive realism hypothesis was first studied by Lauren Alloy and Lyn Yvonne Abramson, in a paper called Judgment of contingency in depressed and nondepressed students: Sadder but wiser?. Undergraduates had access to a button, which might (or might not) influence whether a light came on. Depressed students - but not nondepressed students - were pretty good at guessing how much influence the button had over the light. Specifically, nondepressed students tended to think they had more control than they actually did. In other words, the seminal study showed something a lot more specific than “depressed people are more accurate”. It showed that they were better at inferring their own degree of influence. Or to put a finer point on it, (mildly) depressed people suffer less from the illusion of control. There are two lines of criticism I want to explore here, before moving on to other studies. First, is the study probably accurate? And second, if it is, does it imply what it seems to? Methodology To get the big one out of the way first: Alloy and Abramson’s sample only consisted of undergraduates. This is an extremely common approach for psychological studies, because the people doing the studies are professors and undergrads are close at hand. But undergraduates are not exactly representative of humanity writ large. Also, the entire field of psychology is on fire, so “business as usual” is a poor defense.[1] Furthermore, the total number of undergraduates in the study was 288: 144 depressed, and 144 not depressed. Depression was diagnosed through the Beck Depression Inventory, which seems fine, but does mean we’re probably dealing with self-report, so mileage may vary there too. Finally, the study was done in 1979. This is the same decade as the Stanford Prison Experiment, one of the most famous studies of all time and now on shaky enough ground that Vox calls it straight up fraud. As far as seminal studies from the 1970s go, this one seems pretty good. There have been replications (we’ll get to those), and if there’d been fraud or misconduct it would be a pretty good scoop. But again, 288 undergraduates in the 1970s should probably not have too large an impact on your views about much of anything. Implications What if this study were unimpeachable, and replicated perfectly every time? Would we have to agree with its normative implications, and chalk one up for depression? Not necessarily! In most life situations, when someone is trying to figure out how much influence they have, their feeling is one vector of that influence. Especially situations that actually matter. If someone believes they deserve a promotion, for example, they’ll probably argue more persuasively for it than someone merely going through the motions. A depressed person might say “well, if I were to approach that attractive person I’d just be rejected”, and even if their friends egg them on and they go for it, their very lack of confidence might end up proving them right. As a toy model, imagine everyone has to determine a single, global locus of control. Believe that your entire destiny is always fully in your hands, and you’ll wager it all on the roulette wheel and believe you’ve got to win because you want it so much. Believe you’re totally powerless in your own life, and you’ll sit in the basement drinking gin and watching paint dry. The right choice is somewhere in the middle. But where? I posit: high enough that you’re slightly overoptimistic about stuff you can’t control, so that for stuff where confidence itself makes the difference, you squeak in. So sure, a slightly depressed person might be a little more correct about whether a button turns on a light. But if in exchange they’re wrong about if they should bother learning a new skill, or taking a walk every morning, or drinking more water, well, that’s a bad trade. (Also, one more time, it was just 288 undergraduates in 1979.) Replications and Complications There have been several replication attempts, expansions, and meta-analyses over the years. I’ll highlight a few. The headline: the very specific effect holds up, but it is indeed very specific. Benassi and Mahler, 1985 In 1985, Victor Benassi and Heiki Mahler published the sassily named Contingency judgments by depressed college students: sadder but not always wiser. They replicated Alloy and Abramson’s findings, but found that variations upended their result. From their abstract: When subjects completed the task in the presence of an observer, depressed students perceived themselves to have more control than did nondepressed students. In Experiment 2, the observer effects found in Experiment 1 were replicated, and we extended those results by showing that when response-independent outcomes occurred relatively infrequently, depressed and nondepressed subjects who completed the task in the presence of an observer did not reliably differ in their estimates of personal control. So basically, if somebody’s watching, now it’s depressed students who are overconfident, and also (I think) if you make the button useless only occasionally, depressed people are no better than average at figuring the situation out. I’ll chalk this up as a weak replication, which also suggests the original effect is pretty specific. Dobson and Franche, 1989 In 1989, the extremely Canadian-named duo of Keith Dobson and Renée-Louise Franche released A conceptual and empirical review of the depressive realism hypothesis. To quote their abstract: Positive evidence for the existence of depressive realism was found although the strength of that finding diminished as the ecological validity of studies increased. I wasn’t able to get the full text of the study, but seems like basically a successful replication, with the caveat that in more natural settings, the effect looks less powerful. Dobson and Pusch, 1995 Also by Keith Dobson, but now with Dennis Pusch, A test of the depressive realism hypothesis in clinically depressed subjects. Basically, they redid the original test from 1979, but digitally, and for clinically depressed (rather than just self-reporting undergraduate) patients. The sample size was only 45 women total (15 depressed, 15 formerly depressed, and 15 never depressed), so take with a hefty grain of salt, but, they claim, “The results provided limited support for the hypotheses.” So a replication, but still in the very limited scope of the original study, with a small sample size, and “limited” support. Jain, Bearden, and Filipowicz, 2013 If you’re wondering how depressive realism generalizes to situations that have nothing to do with the (variably) depressed person, look no further. This study, Depression and forecast accuracy: Evidence from the 2010 FIFA World Cup, saw if depressed people made savvier bets on a major sporting event. If you’d like to place a bet yourself, now’s your chance! What do you think they found? Across two different waves of predictions and with multiple measures and components of prediction accuracy, we find that depressed forecasters were less accurate. I’d guess the causality is a mess here, but basically buy this study’s finding that no, being depressed doesn’t improve forecasting ability about random world events. Interestingly, too: The poorer accuracy amongst the more depressed forecasters was primarily driven by a neglect of base rate probabilities: the depressed participants assigned probabilities that departed from the base rates more substantially, particularly for low base rate events. So, depressed people tended to think more unusual stuff would happen, while non-depressed people played it safe. It wouldn’t surprise me if this is sports specific, or something. But in any case, the original study (and any follow ups I know of) had no opinion on the forecasting abilities of depressed people, and this study says depression makes you worse at it, so we definitely don’t have evidence for depressive realism on impersonal questions of fact. Moore and Fresco, 2012 To cap things off, we have Depressive realism: a meta-analytic review. This one covers a lot of ground, so I’ll quote the abstract in full: The current investigation represents the first meta-analysis of the depressive realism literature. A search of this literature revealed 75 relevant studies representing 7305 participants from across the US and Canada, as well as from England, Spain, and Israel. Results generally indicated a small overall depressive realism effect (Cohen's d=-.07). Overall, however, both dysphoric/depressed individuals (d=.14) and nondysphoric/nondepressed individuals evidenced a substantial positive bias (d=.29), with this bias being larger in nondysphoric/nondepressed individuals. Examination of potential moderator variables indicated that studies lacking an objective standard of reality (d=-.15 versus -.03, for studies possessing such a standard) and that utilize self-report measures to measure symptoms of depression (d=.16 versus -.04, for studies which utilize structured interviews) were more likely to find depressive realism effects. Methodological paradigm was also found to influence whether results consistent with depressive realism were found (d's ranged from -.09 to .14). Basically: the effect size is very small, and the more rigorous the study, the smaller it tends to be.[2] I do think this meta-analysis bears out the original depressive realism study’s hypothesis: very impressive for a splashy study from the 1970s! But again, we’re talking a teensy effect under specific circumstances. Is This Post Comprehensive? Sorry, not really. Depressive realism is a big deal, so there are way more studies that branch off in various directions. We haven’t touched on FMRI studies, studies specific to social judgments (whether depressed people judge social situations more accurately by being less self-serving), or studies of people rating their own performance on some task. I’m sure all these sub-ideas have their own lineages and debates. This post has mostly just traced the mainline of the concept, from its seminal study to a more recent meta-analysis. That being said, as a nonexpert I’m confident enough to make the following claims: The pop science understanding of depressive realism, that “depressed people have a more accurate picture of reality than non-depressed people”, is unsupported and probably false;The narrow, originally posited phenomenon of depressive realism is probably true: it has survived numerous replications and meta-analyses, but;It’s very limited in scope, and probably shouldn’t drive anyone’s philosophy or attitude about depression more generally. Conclusion This isn’t medical advice, but depression is bad. Try to avoid it, and don’t think having it would make you less wrong about the world. This is a linkpost from my Substack. I don't crosspost very often, and if you enjoyed this post, you may like my posts there, too. ^ As in, replications of studies from top psychology journals only work somewhere in the ballpark of 30% of the time. ^ Specifically, studies without a rigorous definition of the relevant ground truth found a larger effect (clearly easier to fudge), and studies that relied on self-report for depression status found a larger effect.
2025-01-13
https://www.lesswrong.com/posts/TkWCKzWjcbfGzdNK5/applying-traditional-economic-thinking-to-agi-a-trilemma
TkWCKzWjcbfGzdNK5
Applying traditional economic thinking to AGI: a trilemma
steve2152
Traditional economics thinking has two strong principles, each based on abundant historical data: Principle (A): No “lump of labor”: If human population goes up, there might be some wage drop in the very short term, because the demand curve for labor slopes down. But in the longer term, people will find new productive things to do, such that human labor will retain high value—in other words, the demand curve will move right. Indeed, if anything, the value of labor will ultimately go up, not down—for example, dense cities are engines of economic growth!Principle (B): “Experience curves”: If the demand for some product goes up, there might be some price increase in the very short term, because the supply curve slopes up. But in the longer term, people will ramp up manufacturing of that product to catch up with the demand—in other words, the supply curve will move right. Indeed, if anything, the price per unit will ultimately go down, not up, because of economies of scale, R&D, etc. Now consider Artificial General Intelligence (AGI), i.e. a combination of chips, algorithms, electricity, and teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—stuff like founding and running new companies, research and development, learning and applying new skills, working in collaborative teams, skillfully using teleoperated robots after only a few hours of practice, and so on. So here’s a question: When we have AGI, what happens to the price of chips, electricity, and teleoperated robots? (…Assuming free markets, and rule of law, and AGI not taking over and wiping out humanity, and so on. I think those are highly dubious assumptions, but let’s not get into that here!) Principle (A) has an answer to this question. It says prices will be high. After all, if AGI can really do all the things that ambitious entrepreneurial skilled labor can do, then there will be no “lump of labor” for AGI, any more than there has been for humans. However much AGI there is, it will keep finding new productive things to do. And the prices will reflect that high value. (Incidentally, if that’s true, then it would imply that human labor will retain a well-paying niche—just as less-skilled labor today can still get jobs despite more-skilled labor also existing.) Principle (B) has a different, contradictory answer to this question. It says prices will be low. After all, if AGI is basically a manufactured good, then manufacturing will ramp up, creating ever more AGI at a cost that decreases with scale (and with R&D). And the prices will reflect that low cost. (Incidentally, if that’s true, then it would imply that human labor, now forced to compete with a far-lower-price substitute, will become so devalued that we won’t be able to earn enough money to afford to eat.[1]) Anyway, I sometimes see unproductive debates that look like this: One side treats Principle (A) as an unstoppable force. The other side treats Principle (B) as an immovable wall. Instead of grappling with the contradiction, they just talk past each other. As a very recent example of such arguments, check out the blog post AGI Will Not Make Labor Worthless by @Maxwell Tabarrok, and its comments section. Who is right? Well, at any given time, Either the price is high, and the supply curve is racing rightwards—since there’s a massive profit to be made by ramping up the manufacture of AGI “labor”.…Or the price is low, and the demand curve is racing rightwards—since there’s a massive profit to be made by skilled entrepreneurial AGI “labor” finding new productive things to do.…Or the price is in between, and both the supply curve and the demand curve are racing rightwards! The price at any given time depends on which curve is racing rightwards faster. I have opinions, but that’s out-of-scope for this little post. If people are even trying to figure this out, that would already be a step up from much current discourse. But more importantly— What happens when an unstoppable force is slamming into an immovable wall? Common sense says: a big friggin’ explosion. …So that naturally brings us to the school of thought where we expect AGI to bring >>100%/year sustained growth of the global economy—see for example a discussion by Carl Shulman on the 80,000 hours podcast. I think this is the correct conclusion, given the premises. Indeed, I think that, if you really try hard to hold Principle (A) and Principle (B) in your mind at the same time, and think through the consequences, then truly explosive economic growth is where you will inevitably wind up. Of course, that collides against yet a third principle of traditional economics, also based on abundant historical data: Principle (C): Wait, you said >>100%/year of sustained growth of the global economy? What are you, nuts?? But, that’s where we’re at. It’s a trilemma. All three of (A, B, C) are traditional, time-tested economic principles. But it’s basically impossible to simultaneously believe all of them at once. People still try to do so, including professional economists, but I think they wind up tying themselves into knots of self-contradiction. (Of course, those economists are still a step up from the economists who dismiss AGI as sci-fi nonsense!) (Again, my actual main expectation is AGI takeover, which renders this whole discussion kinda moot. But if we’re gonna talk about it, we should get it right!) ^ At least, probably not. We don’t know for sure how much compute and electricity it will take to run superhuman AGI, since it doesn’t exist yet. But my own guess, based on how much calculation a human brain does, is that it would probably be well under $0.10/hour at today’s prices, and lower in the future as we go down the experience curve. Edited to add: Also, needless to say, another way that humans can turn out OK (in principle), even if there are AGIs that can do everything that humans can do but better and for $0.10/hour, is for there to be human ownership of capital or political power in the long term. Whether we should expect that is an important question, but out-of-scope.
2025-01-13
https://www.lesswrong.com/posts/HEdnYQhtEvDrWH78P/myths-about-nonduality-and-science-by-gary-weber
HEdnYQhtEvDrWH78P
Myths about Nonduality and Science by Gary Weber
a schizophrenic mind
The text is a lightly edited transcript of the presentation "Myths about Nonduality and Science" given by Gary Weber at the conference “Science and Non-Duality” in 2015 at San Jose, CA[1]. Non-duality (from Sanskrit advaita) is an ancient way to describe the state of the absence of problematic self-referential thoughts. The presentation undermines common myths about non-duality and provides a scientific ground to understand that state. In addition, it highlights a rational scientific backing to some of spiritual practices and traditions including meditation and the usage of psychedelics. It may be interesting to all who are curious about cognitive sciences and how to get rid of problematic thoughts. I was trying to see if I could find some way out of my suffering, DIY. I was in a graduate school, working on PhD. One day walking down towards campus I had the epiphanous moment of stepping back and looking at what was going on in my head. It turned out that it was just blah-blah-blah. It was about nothing, in particular. Worthless. It had nothing to do with my doctorate thesis, nothing whatsoever. For whatever reason I had all these thoughts up there [in the head] and I didn’t think it was okay. I’d suffered a lot by the time I was in my late 20s and so I had a lot of narrative going on in my head. It was very painful. And there was no reason it should have been like that. I’d got good money to get through the graduate school, got assistantship, wife two kids, everyone was fine, very healthy. But I had this problem. And so I thought, “Is there some way I can stop this?” I had no reason to believe it was possible. I had not read anything that it was possible. But I knew somehow that I had to stop this thing. Picture 1. DIY your own process. So I set about trying to find a way to stop this narrative. It turned out, as I got on this path, I was going to do it DIY — one. I was being trained as an empirical scientist, so I was going to do it empirically, scientifically, with no philosophy, no religion, absolutely naked. The idea was — can I run the experiments myself? Give me the lab notebook, I’ll run the experiments, and I’ll see if it works or not, and I’ll say “yes” or “no” to the experiment. That was all I was going to do. I decided that sources have to be contemporary and trying to do this thing scientifically, empirically. I didn’t want anything from 2,500 years ago, 2,000 years ago, a thousand years ago. That had been passed down through God knows how many hands. People with vested interest have making it come out some way. The idea was it’s got to be those parameters: very scientific, contemporary sources that I could validate, that I had videos of, people I could talk to, could’ve met them myself. It had to be absolutely a clean dataset and clean sources. Myth one: relying on ancient religious teachings and practices is necessary. So this idea of relying on all this traditions: philosophy — I was not a philosopher, I was an empiricist, so no philosophy; no religion — I came from a blaming shaming religion, I’d had enough of that, thank you very much. So I was trying to find some way out of my suffering. This was the reason I had to do this thing. I was focusing on thoughts that I had and could I reduce them. So empirical, open source, secular teachings. And the hardest part of it was: it was going to have to function in the real world. I couldn’t become completely dysfunctional. It had to be an operating system that I could live with. And the most important distinction was, in addition to these different difficult caveats, it had to be a state. Not just an experience. I met a lot of non-duality people who say, “Well, I had an experience 26 years ago. I can still remember it, it was fantastically vivid. I’m a non-dualist.” And I’d say, “Well, not so much.” Because we have a lot of research now on memories. I was in a false memory fMRI study. I mean you might have a two years old memory of something and you might’ve called it up four or five, six, eight times. It’s not correct anymore. Your memory is absolutely incorrect. So it had to be a lasting state. Mini-me is down here because it really came to be a process of trying to find a mini-me in less-me involved operating system. Picture 2. Electric shocks are better than thoughts. This narrative thing, that I found unpleasant, wasn’t unique to me. There was a great study in science that was done at UVA by some really good people, some of the best people in the field. And it was saying, what if we can just take some UVA undergraduates and put 150 of them in a room one at a time and tell them, “Okay, be by yourself 15 minutes — no toys, no distractions. You can just be there with the pleasure of your thoughts, and how does that work for you.” Not surprisingly 90% came back and said, “No good, no good.” Okay, maybe it’s because we got them in a laboratory so they sent them back home, “Go back home, go back to your pod.” They go back to their pod, it’s even worse. First, 15% of them cheated but they were just as bad as those that didn’t cheat. The researchers thought maybe it’s just undergraduates, so they went on in that Charlottesville and rounded up a bunch of people from church, farmers market, average age 42, much more affluent than some of the students — ran them through this thing — same result. Same bad result. Okay, this is not okay. Take 50 of these people and ask them if they like electric shocks and you find the ones that say, “No, I do not like electric shocks.” “Okay, you are in the pool.” So you get into this thing, you are in the “don’t like electric shocks” pool and they give you a button in your hand and control that goes to an anklet that has a shocking caller on it. You’re going to go back to that room and sit there for 15 minutes again and they will just see how it goes for you. If you need to divert your attention some way to pull it back away from what you do or going through, you just push this button and you’ll get a strong electric shock to your ankle and it will bring you out of your unhappiness. So as you might imagine from above this didn’t turn out well. Two-thirds of the guys, men, shocked themselves. While only one-quarter of the women did. So women really are smarter than men. But. The guys shocked themselves on the average of 1.5 times and the girls — 2.3 times. But this is an unpleasant state for many people, almost everybody. We have so many things we do not to remain in this state and the talk is about trying to explore the ways to do that. Picture 3. Self-inquiry practices. So I went about some practices. Classical Zen self-inquiry practices. One of the fellows teaching them was Bassui — fourteenth century Zen monk very famous in some sectors (see Bassui Tokushō, One Mind[2]). And the other fellow you know, hope you all know — Ramana Maharshi, he was really my main man through this thing (see Ramana Maharshi, Who am I?[3]). And these questions are very simple: “Who am I?”, “Where am I?”, “Who hears?”. For me as an empirical scientist “Who am I?” didn’t work so much. I got philosophical. The ones I liked were: “Where am I?” and “Who hears?” I went to my Zen masters who represented Rinzai Zen which is the koan school. One guy was a big Mu person, if you know the goal was the Zen koans. I said, “Hey, I can’t get into Mu. It may be a great thing for a Japanese mind, but it doesn’t paradoxically engage me. Something like, ‘Where am I?’ I can throw myself to solve it. I am an empirical scientist. I should know where I am.” So I talked to the Roshi, he said, “Sure. You can do that.” So I set off to do that, it was actually in our Sutra book anyway. So I went off to do that. My other Zen master — same thing. I asked her about it, she said, “Sure. Perfect. Go ahead and do that thing.” So I just did this. I just asked these questions over and over and over again. Along with affirmations like, “I’m not this body.” This belief that we have we are this body. I just asked them a lot of times. Picture 4. Thoughts stopped. And after a lot of sitting meditation thoughts stopped. I had to do a lot of hours of self-inquiry, a lot of yoga. I didn’t have any guidance really. Zen people hadn’t really worked self-inquiry. I needed a coach, I didn’t have a coach. So I was in the dark a lot of the time. They stopped. Thoughts stopped. And the surprise for me with that was not that just blah-blah-blah stopped, it’s very quiet up there [holding his hand on his head], like it is now, it’s very quiet, but that the fears and emotions, self-referential ones, also fell away. Something I hadn’t expected were these two: “free will” and “the illusion of control”. I was very active in corporate industry and I was certain, very certain, in fact, that I had control and I had absolute free will and all that happens: successes and failures were mine. My responsibility. My failures. My successes. And when “I” fell away — just like that, snap — it just fell away right along with everything else. I was standing there, I got a thousand people working for me: four research labs and a quarter billion dollar budget. And I’ve to go to work. And there’s nothing up there [in the head]. [A voice from the audience, "Great!"] Great! So I go into work expecting whether to be stoned or deified and neither one happens. Nothing happens. You go through the day and you find out that in fact life is even easier, it’s better, it’s cleaner, is simpler, it’s worked so much better than before I’ve got this thing. “Well, okay, if I wasn’t in control and I didn’t have free will, who is running this show?” I’ve just kept watching and kept seeing, in fact, life does itself just perfectly well, thank you very much. I could function actually better with this situation of no-thoughts that I have now than I could before. When you go into a meeting, you know, meeting, and you find out that in fact you are the only person in the room, nobody else was in the room all the time, they were there ten percent of the time, forty percent of the time. And you look like the smartest person in the room because you’re the only one in the room that’s there for the whole meeting. Also, I found it’s much more creative. Cognition is higher. I’ve spent that way without thoughts now for 17 years except for low-energy, I’m hypoglycemic, or I get very tired. We can think about why it’s the case, why we think we know why it’s the case. Picture 5. No thoughts. Myth two: having “no self-referential thoughts” is not necessary for non-duality. It is this idea of having self-referential thoughts, “It’s okay, keep those self-referential thoughts.” Well, it doesn’t work for me. If you want to get really non-dual, you got to realize that those are the canaries in a coal mine. You got self-referential thoughts, a lot of people say, “I’m good. I had this experience ten years ago. I’m non-dual. Yeah, there’s some noise off in the corner as babbling things. But I pay no attention to them. I’m perfectly happy with this.” In fact, as you find out those are the good indicators you’ve got. Suzuki Roshi called these things “mind weeds” and he welcomed them. They give you a place to show you where’s some work needs to be done. Where you still got an attachment, and so you got something else to do. So if you can absolutely shut them down, get no thoughts — this is no self-referential problematic thoughts. I work with a lot of knowledge workers over the university town. They live by their thoughts. But the thoughts they live by aren’t these ones about, “Oh, well, my project not going to get refunded again, I’m going to have a terrible time with this… The department chair’s going to throw me out…” — not those kind of thoughts. But the ones about, “Oh, I need to run this project and put it in this way and we will organize it like this…” Planning, problem solving do not change. It’s easier, cleaner, these are not emotionally charged thoughts. And amazingly, for me, amazingly, the brain can parse those out. The brain can recognize if you’ve got problematic thoughts, or if you’ve got the ones that are just planning. These coded in because of linguistically encoded structure. For almost all residents of our species languages are all — subject, object, doing of some kind, except for four languages. So the problematic thoughts are self-referential ones and the brain can parse them out based on their structure built around “I/me/my”. You can see these people [see Picture 5]: Ramana Maharshi, Nisargadatta Maharaj, Padrinho Paulo Roberto (ayahuascero) — all are talking about having no thoughts. I was giving a talk in Stockholm in 2010 and Padrinho Paulo Roberto was one of the founders of the Santo Daime church. He was talking in Stockholm about his experiences with ayahuasca. At the end of his talk the very last thing he said was this, “It’s all about having no thoughts.” Which was a big surprise to me, because it was exactly what I was after. So this one says, “Hey, this is fantastic, what a correspondence, that we both have exactly the same perception of why we do spiritual practice — ayahuasca or meditation.” Picture 6. Stilling the modifications of the mind. These are classical resources: Tao Te Ching, Dogen Zenji, Patanjali Yoga Sutras, Bhagavad Gita. Yoga sutras 170 or so sutras. And you find out that, “Yogas chitta vritti nirodaha”, is the second one. The first one — we’re going to talk about yoga. The second sutra is, “Yogas chitta vritti nirodaha”, which is, “Yoga is the stilling of the modifications of the mind.” Depending on how hard you translate that, it means — to stop them. From Bhagavad Gita: “Shanaih shanairuparamed buddhyaa dhritigriheetayaa; Aatmasamstham manah kritwaa na kinchidapi chintayet.” [verse VI, 25] Which is, “Gradually, gradually, grab hold of your mind and put it in the Self, and don’t think about anything else.” So you’ve got a lot of these people saying pretty much the same thing. Myth three: you don’t need to practice… you’re already “enlightened”. [Laughter] Exactly! Zen people did this thing back in the 60s, they had to say exactly the same deal. It sells. It sells great. People come in, you tell them, ‘You’re already enlightened. Nothing you have to do. It’s fantastic. Just, you know, come back to the next workshop.” But it turned out that it’s in fact absolutely not true. Picture 7. How quickly the brain changes. We’ve learned a lot about neuroplasticity in the last 10 years, last eight years. There was a Harvard study with two months of mindfulness meditation forty minutes a day. The brain undergoes enormous changes very quickly. They can see five brain regions that have actually changed in size in two months. That’s how fast it changes. People, that tell you there’s nothing you have to do, did a lot of practice. Tony Parsons did years of practice. Poonjaji, Papaji did rea-a-aly tough, tough, long years and years and years and years of practice. They say, “There’s nothing you have to do.” From where they are standing, that’s true. But it’s not from where you’re standing. You cannot just walk in and pick up a violin and say, “Okay, I’m now a concert violinist.” It doesn’t sound very good. You’ve got to learn how to do it. How fast it changes? Merzenich is one of the top guys in the world right now in neuroplasticity who lived out here someplace [San Jose]. And he had a way to look at the monkeys’ neural map of their face. It changed every week. Every week. How the brain, the central motor cortex, is actually linked into these face maps. Picture 8. How you practice matters. How much you practice and how you practice matters a lot. Malcolm Gladwell’s work on 10,000 hours popularized this concept. The idea was for people to practice 10,000 hours to reach mastery on violin and become concert violinists. It was very small population, maybe 15 people involved and it’s not as clear on concert violinists, precisely, what mastery means. It’s more like, “I know it if I see it.” But the chess guys, not surprisingly, are really into this. The chess guys will metricate anything. And what they’ve found is the same thing — 11,000 hours to mastery. However, big bandwidth: 3,000 hours to 23,000 hours. And they really know what mastery looks like in Elo points — that’s how they measure success in chess. And so genetics plays a role, genetics matters whether you’re going to be a 3,000 hours person or a 23,000 hours person. But you’re going to practice. It really determines whether or not you reach this and if you can go beyond that. Some chess masters keep growing and growing and growing and growing. Some don’t. Some just peak out and stop. As Anders Ericsson found out dedicated practice focused on skill elements that are difficult for you is most effective. He did a research that they put in 10,000 hours book. The key point is practice on what you don’t do well. I did zillion hours of yoga and if you’re not careful you just keep doing the same exercise all over and over again. “I’d love to do forward bends. I’ll just do forward bends all the time.” You never do backward bends. If you’re going to expand your yoga practice, do the parts you’re not good at. Meditation practice — do the parts you aren’t doing well, which you try to avoid. As that’s where your growth is going to be. Myth four: mystical experiences are a mystery. Not so much. We now have some good models for this. This fellow Newberg is at Penn [the University of Pennsylvania], d’Aquili was also there, he’s passed on but Newberg is still there functioning very much. Their model which is pretty much holding up. I talked to one of the Obi-Wans in the neuroscience yesterday, he confirmed this as possible. Picture 9. Jamming the circuits. You have two systems: sympathetic and parasympathetic. Excitement and calmness. Those two circuits compete with each other, they go back and forth, hopefully alternating — excited, calm, excited, calm, excited, calm. That’s what they function for. Fight and flight, relax. Two sides. The idea behind the Newberg and d’Aquili’s model being if we can run these so you actually have the two of them at the same time fully activated — they will conflict and they will shut down the inputs to the temporal and parietal lobes, that do the important things for mystical experiences. You just jam the circuits and you just stop anything from going to these places that are expecting input, and almost always get input, and all of a sudden there’s no input. And they’ve postulated that that’s exactly what pushes mystical states into being. Picture 10. Input-starved mystical states. How this manifests in these input starved mystical states? They’ve got two books or three or four books, you can look them up in any place. Hyperquiescence — you get very, very, very still. Hyperarousal, obviously the opposite — marathon runners, voodoo, swimmers, skiers, whatever… It’s flow, we all know about flow, Gladwell’s book [possibly a mistake, "Flow" is a book by Mihaly Csikszentmihalyi]. Hyperquiescence with arousal breakthrough — you get really, really, really quiet and then — bang, something happens. You get a breakthrough and you get great bliss, exhilarating energy and absorption. The converse of that is “hyperarousal with quiescent breakthrough”. And I cheated in my meditation practice. I confess to this now after 30,000 hours. And how I cheated was that I was a runner and I could get into the hyperarousal state, I could get the runner’s high. And so I thought, “Well, that’s interesting.” I started to sit and the Zen guys said, “Sit still. Hold your hands where they’re supposed to be and don’t move. If you move we’ve got a stick and we’re going to hit you with the stick.” And they did this. Very tough bunch of guys. So you’re hyperquiescent, you are not moving at all. And so if you’ve got your mind calm down, then you could find yourself in this “hyperquiescent with arousal breakthrough” state. I’d found I could do that. I could sit, and after about thirty five minutes, which is about the runner’s high time, I’ve got psch-e-ew — a mystical experience. I liked this. Not that I liked it, the brain liked it. This is like a cue ball for the brain. The brain said, “This is the coolest game in town. I get to sit and I get this treat at the end.” Most people go 20 minutes and stop or 30 minutes and stop — you did the hard work and didn’t get anything. The brain says, “This is no fun. How do I do this?” So I get a cue ball. I just sat, every time I sat, every day until I’d got a cue ball for the brain and the brain really liked this and so the brain loves to meditate. And this is how my practice has been for a long long time. I can still do it. It doesn’t take 30 minutes, though, it takes up, like, 10 seconds. But you can just bring this on, this “hyperquiescent with arousal breakthrough” state — bang! That fast. Picture 11. Mystical or psychotic. Mystical and psychotic states. A lot of questions I get from many many people about this thing. My friends say I’m crazy — am I? My friends say I’m psychotic — am I? And these guys — Newberg and d’Aquili looked at the differences between psychotic and mystical states. I’m not trying to be a psychotherapist or psychiatrist, I’m just saying if you’ve got these kind of experiences, how you talk about the experience, your break from reality, how you describe it, and how you interpret the meaning of that experience — you can pretty much get an idea whether or not you’re “ecstatic and joyful” as you talk about it, or if you’re “confused, terrified and distressed” after it. In the same way, if you “long for return to it”, or if you’re just, “No. I’m not going back there again.” When you talk about it later, for most mystics it’s: “empty your ego, calm, still, peaceful”; psychotics: “messianic, healing powers”. This is not proscriptive, this is not absolutely take it to the bank. If you can’t function, if you can’t raise your kids, if you can’t do a job, if you can’t go out in the world and function somehow reasonably well, then go see somebody. If not and you fall into the “calm, still, peaceful” categories, then it’s almost certainly mystical experience. Myth five: a spiritual path needs levels, titles and an end point. Everybody loves this one. Why they love it? We are evolutionarily bred into liking hierarchies. We became the masters and destroyers of the planet by being able to organize large bunches of ourselves to go out and do some tasks. It required a hierarchical structure. Hierarchical structure benefits from levels and titles. If I say, “Sam, you’re enlightened. But you’re only enlightened at the level three. If you want to get to the level four, Sam, we’re going to need some money here and some more time, the workshops get longer. It’ll be, you know, the usual five thousand dollars for the two weeks is now going to be fifteen thousand dollars for the two weeks. Because of this special teaching to get to the level four.” Or you can take the direct path. Picture 12. Spiritual path’s parameters. But this is what many people do. To me it’s beyond naive to believe that everybody’s going to go down the same path. You look at a long, long list of relevant parameters: genetics, physical capability, age, where/when born, parents/family/friends, education, past/present drug usage, etc., this isn’t comprehensive but you look at the kind of things that affect your likelihood of success in spiritual matters — it’s a long list. And we all go through these things in different words, we get different intensities of experiences, you’ve different genetics, different family, when were you born, who our teachers were before — it’s silly to believe that they’re all going to go down exactly the same path in exactly the same way. Just doesn’t happen. Picture 13. Many paths lead up the mountain. The lots of sayings about this both from the Hindus and from the Buddhists that there are lots of paths up the mountain. My first book “Happiness Beyond Thought"[4] was very much about, “Look, here’s hundred ways — pick one. Pick the one that appeals to you and do it. The one you will pick and do is way better than the one you will pick and not do. So find something you love to do and then go into it and really ingest it and be with it. You can get there by many many paths.” Myth six: psychedelics can produce persistent non-duality. Yea-a-ah… No. We have one of the bigger Ayahuasca guys in the world out there in the audience, in a white thing there. He can throw stones if he wants to. The key to this thing you’ve heard much about at this conference is the Default Mode Network. This has started roughly in 2000 and we’ve really fleshed it out, we as a spiritual community and neuroscience community. And we know what this has done. Picture 14. The Default Mode Network. Andrews-Hannah et al., Neuron, Harvard, 2010 was one of the seminal papers. 11 centers basically generate this sense of selfing. All of the forms and types of selfing. You’ve got two subnetworks “Self and Other”/”Self in Time” and the core [see Picture 14], you probably all heard about many many times. We’re going to talk about this through here. You can see the good news, bad news is it’s way down, deep in the brain, it’s way down inside you. So the hard thing is you’ve got to get down in there and get a good signal. If you’re going to put EEG on the top of your head, you’ve got to process a lot of signals to make sure it’s exactly coming from this place and a lot of transducers. Okay, Andrews and Hannah have found out that one of these sub-networks creates this sense of “you” existing through time. “You” as an agent that moves from the past to the present quickly into the future. The other one is the network that produces this sense of “you” and other things: “you” and the chair, “you” and a person next to you. And they’ve found out that whether you do it with meditation, ayahuasca, mushrooms — you produce shutdown of these two circuits: “Self in Time” and “Self and Other”. As you do, you get the classical “All is one” and “Now, now, now.” They all produce this as a main core, certainly ayahuasca doesn’t give you the same experiences as mushrooms do, or meditation does, but a lot of it codes on exactly the same level, psychedelically, mystically. Picture 15. Meditators shut down the DMN. I was very heavily involved in the Yale study. The subject collaborator and co-conspirator Jed Brewer was at Yale at that time. And the same thing happened. These are Theravadans 10,000 hours plus or minus. [see Picture 15] They looked at three Theravadan meditators and said, “Okay, what happens?” Duh. Theravadans shut these things down — “Self in Time”/”Self and Other”. Not surprisingly, 10,000 hours. Not a big deal… not a big news because, in fact, you’ve got the Tasking Network and the Default Mode Network, which is really deep down in the core of the brain. The Default Mode Network goes blah-blah-blah and when you’re doing a task and you’re at the Tasking Network, then you’re consumed by your task, the task you love to do, you’ll notice — you’re not thinking. If you’re rock climbing and doing a high pitch, you’ve got to be really careful, you pay attention and you don’t think. And we like it because we don’t think. If you’re in a chess match… whatever it is what really turns you on, avocation or vocation… this Tasking Network will shut down the Default Mode Network. So these Theravadan meditators were really just tasking. The really interesting thing was when Jed turned on the equipment in between runs, he found out that, in fact, those centers stayed shut down, even without meditation they were shut down, for a while. Picture 16. Default, Tasking and Control Networks. We now know we’ve got these three circuits: Default, Tasking and Control Networks. This is the expanded and updated version from Andrews and Hannah, 2014 [see Picture 16], she pulled in a thousand person study from MIT to really work this thing out, and the thickness of these lines is how often they talk to each other, they operate at the same time. You can see you’ve got these two sides — the Default and the Tasking networks, and you’ve got two centers in the Control Network, which Jed Brewer found as well, and their job is to watch and control the situation and they watched to see if the Default Mode Network is shut down. They see if it’s shut down, they can control this thing. Those two centers from the Control Network are called the Salience Network. They are watching the Default Network and the Tasking Network to regulate the switching back and forth. If you’ve got ADHD, the DMN circuit, blah-blah-blah, won’t shut up. It will not let go. And so you’re at the Tasking Network trying to get a job done, and blah-blahs overrun, and so you can’t concentrate. There’s a lot of good research on that too. Picture 17. How mushrooms work. How ‘shrooms work? Exactly the same way meditation works… not exactly, but this is IV [see Picture 17], psilocybin, in the UK, medium level doses, there is no need to worry about digestion, just IV, you can get a clearer picture at the infusion point [see Picture 17, the left side], and how this thing shuts down [see Picture 17, the right side]. As it shuts down, some of your friends may’ve told you, you actually get extremely intense effects, so tracks exactly on meditation. Picture 18. How ayahuasca works. Ayahuasca — the same thing. It’s a well-known study now, it’s been recently done but it’s become pretty well-known quickly [see Picture 18]. The same thing happens. These two [see Picture 18 the left side, blue areas] — the core centers, before ayahuasca. On ayahuasca in the fMRI — they’re shut down [see Picture 18 the right side]. The same thing happens. It’s a very nice longitudinal study now, we were looking at yesterday at lunch, on long-term usage of ayahuasca. That they all work. I recommend everybody to have some kind of mystical experience. If it needs to be mushrooms, I’m not pimping for mushrooms, I’m a virgin by the way, I’m a virgin on psychedelics. But I encourage people to do something. Get some mushrooms from somebody anybody any place and… [laughter] really… [somebody’s saying, “Backyard.”] backyard, make sure they’re really good stuff but I encourage people to do this. One fellow here, in the Bay area I’ve worked with, he’s at the conference today. And he’d been practicing very hard — practice, practice, practice, practice — but he wasn’t getting [shows quotes] anywhere. And so I said, “Get some mushrooms.” And magically, someone who he didn’t expect would have mushrooms, gave him a Hershey bar that was wrapped around some mushrooms. And he reached up, took them, lo and behold [claps] — BANGO. Now the brain knew what was possible. The problem is the brain doesn’t know what’s possible, this is [spreads his hands] its reality. If you show it something else, he says, “Oh, gee, all kinds of things are possible. I can make this. I can make that. I can make peacocks. I can make crawling snakes. I can do all kind of things.” So you say, “Okay, my consensus reality is not the only reality there.” As somebody said on Friday, the brain works on the best effort basis. It’s not trying to do a perfect job of replication, it’s just trying to say, “Well, okay, the lion comes down to the steak. The lion’s hungry. The lion needs a steak.” That’s fitness. However, the lion then says, “Well, okay, my belly is full. I’m looking for a lady lion.” So he doesn’t care about the steak anymore. Now he cares about lady lions. That’s fitness. It’s not perfect but it’s fitness. As your demands change, you adapt, and you move to another object. Picture 19. Hood Mysticism Scale. So they’re all on the same scale: psychedelics, what I do — persistent non-duality… yeah, persistent non-duality. And you can see what happens. Jeffrey Martin, who isn’t around today, his PhD thesis at CIIS on this: he took five hundred people, cut it down to thirty six of us, he said, “Okay, thirty six are all persistently spent a lot of your time within this non-dual state, this no-thoughts state.” And so we went through all kinds of interviews and testing and everything else, and they were going to see how we do. Well, Hood, of Hood’s mysticism scale, was actually on Jeffrey’s doctoral committee so this is about as good as it’s going to get. And from these thirty six of us in this population, nine of us scored one sixty — it’s the highest level on the scale and we were that way persistently, ninety percent of the time. So it’s like a ninety percent of the time psychedelic experience on this scale. The common scale. The problem with psychedelics is you got it six hours and it’s gone. Or eight hours, or fourteen hours and it’s gone. The brain doesn’t get a chance to learn how to do it itself. You’ve got to make the brain learn how to do it itself or it’s not going to be persistent. I met a guy in Yucatan 12/12/12 he had done two thousand trips on the high dose of high strength of acid. So I asked, “How is it going for you?” He said, “Not there yet.” [Laughter] Two thousand! He said, “Well, it’s not working.” And actually he developed an enormous ego around being extreme athlete on acid. So it doesn’t get you there. You’ve got to somehow find a way to taper it down, make the brain learn how to do it. It’s exactly the same receptors, it’s the same circuitry, it’s the same chemicals, the brain’s got them, it has to work that way or that psychedelics wouldn’t work that way. Plant medicine works because they’ve found out how to hack our system. We’ve discovered that we have synapses and how they worked with cannabinoids. Myth seven: all research results are reliable. Everything is fantastic. Not so much. There is a big study in the journal of American Medical Association on mindfulness meditation. Forty seven trials, lots of people, 3515 participants, improved anxiety, etc. However, the guys from the Hopkins [the John Hopkins University] reported, “But only ten of those forty seven trials were without bias.” So you’ve got thirty seven, three-fourth of the trials, the good trials that are biased. They were biased because if I’m an XY person or religion, say Buddhism… not Buddhism, I’m picking on Buddhism, actually, I have some friends in Buddhism… If you believe that you’re protecting the Dharma [the teaching in Buddhism] and it’s your job to protect the Dharma and you select people from your sect and you run your sect’s practices and you evaluate the goodness of this, the fitness of this against your sect’s attributes, that’s going to be biased. No matter how careful you are, it’s almost impossible to avoid bias. Then that’s been pernicious, but it creeps in — you’ve already done it with the selection of your subjects. A lot of it can’t be reproduced. There was another meta-study published in Science on psychological studies and 97% of the studies they’ve picked had a very high significance — very “great study”, “fantastic awesome study” — so they went back and tried to replicate it. Only thirty-six percent of them were good at replicating. They could not replicate the results in the rest of them. So what’s the value of those studies? Not very high. The statement about different psychotherapies says that at least twenty-five percent of them are overstated to defectiveness for the same reason — people say, “Oh, that’s an outlier. Oh, that doesn’t count at all. I know this person — they’re crazy.” So you end up then with a very carefully controlled population, so you get good statistics, so you get tenure, so you get funded, so you get more research dollars, whatever. Without science we’re screwed, right? Because it’s the best tool we have, the only tool we have for some of this stuff. We’re not doing as well as we could. If we did a lot better, it’d help us a lot more. Picture 20. Compassion is reciprocal altruism. Myth eight: if there’s no “I”, you won’t be compassionate. I get this a lot. I got called non-human at one panel discussion in Netherlands. “You aren’t a human being because you don’t have these desires and these bad things.” Compassion, for most people, is really reciprocal altruism. I give Sam something. So I’m expecting something back from Sam. This goes into the hierarchical structure, it holds the structure together, because if I give something to Sam, and he gives to me something back later, so Sam and I are, kind of, locked together, we’re in the same structure, we know where we are in this thing. It’s the glue of the hierarchy. There are all kinds of things we’re expecting out of this. [see Picture 20] And Trivers and Stevens did tons of work on this, there’s even the center out here on reciprocal altruism. But it’s really just, “Let’s make a deal.” If you watch carefully, you can feel it in yourself. I’m helping this person “down here”, I’m “up here”, they’re “down there”, they need this thing from me, and you give it to them and say, “Ah” — karma points, heaven points, “I’m on my way now.” You’re expecting a reward, if not now, later. In fact, your compassion is all about you. You have to guess who said this thing. “Human beings may be naturally selfish”, this guy’s really well tuned in, scientifically upbeat, “But they are also naturally compassionate, science shows. Helping someone else makes you feel good”, this is scientific research, we know this, “By nature every human being loves oneself, but by helping another you are building your own happy self.” You are building your ego by doing this process of helping Sam and getting something back from Sam. It doesn’t take your ego down, it actually strengthens it. Guess, who said this… yeah… Dalai Lama. Picture 21. Dalai Lama on compassion. Myth nine: non-duality is a psychotic state. And this is a serious problem for people. DSM-5 which is the diagnostic manual by which psychologists and psychiatrists can bill medic care or whoever, health care systems. And there has been a manual. Dr. Allen Frances from the DSM-5 Committee from Duke [the Duke University] actually chaired the study, he wrote a blistering… almost a repudiation of the study and said that, in fact, this fuzzy boundary, like for ADHD is so broad now that many, many, many people fall into this thing that didn’t before. And he also said, basically everybody someplace will be in there. Picture 22. DSM-5 and mystical states. Take one of the big Obi-Wans in depersonalization disorder, and this one lists off twenty characteristics of depersonalization disorder. So I took the Hood’s mysticism scale and I listed twenty things and they’re the same twenty things. I mean mystical things lined up exactly with the depersonalization disorder. So you’ve got to somehow find a psychologist, there’s someone in the Bay area, who understands this problem. That, in fact, what we call mysticism, we think is fantastic — they’ve categorized, psychopathologized it. Just be aware of this. If you’ve started hearing people, “Oh, you’re psychotic. DSM-5 says this”, there’s a lot of controversy about this whole thing. About, “DSM-5, period.” But in particularly about the depersonalization disorder. That’s all I have. Thank you. At the foot of Arunachala^ Gary Weber, Myths about Nonduality and Science, ^ Bassui Tokushō, One Mind, https://www.thedaobums.com/topic/12089-zen-master-bassuis-one-mind/ (excerpted from "The Three Pillars of Zen" by Philip Kapleau) ^ Ramana Maharshi, Who am I?, https://archive.arunachala.org/docs/collected-worm/who-ai/all-languages/who-am-i-english.pdf ^ Gary Weber, Happiness Beyond Thought, https://www.amazon.com/Happiness-Beyond-Thought-Practical-Awakening/dp/0595418562
2025-01-15
https://www.lesswrong.com/posts/WJ7y8S9WdKRvrzJmR/building-ai-research-fleets
WJ7y8S9WdKRvrzJmR
Building AI Research Fleets
bgold
From AI scientist to AI research fleet Research automation is here (1, 2, 3). We saw it coming and planned ahead, which puts us ahead of most (4, 5, 6). But that foresight also comes with a set of outdated expectations that are holding us back. In particular, research automation is not just about “aligning the first AI scientist”, it’s also about the institution-building problem of coordinating the first AI research fleets. Research automation is not about developing a plug-and-play “AI scientist”. Transformative technologies are rarely straightforward substitutes for what came before. The industrial revolution was not about creating mechanical craftsmen but about deconstructing craftsmen into assembly lines of specialized, repeatable tasks. Algorithmic trading was not just about creating faster digital traders but about reimagining traders as fleets of bots, quants, engineers, and other specialists. AI-augmented science will not just be about creating AI “scientists.” Why? New technologies come with new capabilities and limitations. To fully take advantage of the benefits, we have to reshape our workflows around these new limitations. This means that even if AIs eventually surpass human abilities across the board, roles like “researcher” will likely transform dramatically during the transition period. The bottleneck to automation is not just technological but also institutional. The problem of research automation is not just about training sufficiently capable and aligned models. We face an “institutional overhang” where AI capabilities are outpacing our ability to effectively organize around their weaknesses. Factories had to develop new management techniques, quality control systems, and worker training programs to make assembly lines effective. Trading firms had to build new risk management frameworks, compliance systems, and engineering cultures to succeed at algorithmic trading. So too, research institutions will need to reinvent themselves around AI or fall behind. The scaling labs have already moved beyond the traditional academic model. Consider the use of matrix management structures where research engineers work across multiple projects, standardized research workflows that enable fast iteration, and cross-cutting infrastructure teams that maintain the computational foundation for research. Labs employ specialized roles like research engineers, infrastructure specialists, and research managers that don't fit neatly into the academic hierarchy. Deepmind’s recent Nobel prize is a hint of more to come. A vision: the automated research fleet. Imagine tomorrow’s research lab: not individual AI models confined to chat windows but vast digital fleets of specialized AI agents working in concert. Each agent masters its own niche in the research pipeline: proving theorems, reviewing literature, generating hypotheses, running experiments, analyzing results, communicating outcomes, developing new techniques, conceptualizing entirely new paradigms… Automation raises the level of abstraction so that everyone becomes a middle manager — every researcher the director of a research institution of their own. And it changes the basic patterns of human-AI interaction: the prompter will become the prompted — instead of crafting careful prompts in chat interfaces, human researchers receive updates and requests for guidance from their AI project leads, who independently pursue established research objectives. This future may appear wasteful at first glance. Imagine thousands of AI instances running in parallel, testing slight variations of the same approach, with almost all attempts failing. Or hundreds of different AI instances in a shared chat that redundantly process the same tokens. But this apparent inefficiency is a feature, not a bug. Ford’s assembly lines overproduced standardized parts; McLean’s containers shipped half-empty; early cloud computing wasted countless unused FLOPs. Just as these “inefficiencies” enabled unprecedented flexibility and scale in their industries, the parallel processing power of AI research fleets will unlock new possibilities in scientific discovery. The ability to rapidly test hundreds of variations, explore multiple paths simultaneously, and fail fast will become a cornerstone of future research methodology. Recommendations The scaling labs already understand that research automation is here – they're building the infrastructure and organizational patterns for automated research at scale. For AI safety to stay relevant, we need to adapt and accelerate. Here are our recommendations for transitioning toward AI research fleet management: Individual practices Spend time on research automation each week: Embrace the lazy programmer mindset of over-automation. Research relevant tasks can be automated now, and it will instill the habit of looking for potential gains from AI+human automation. Play around with the tools: Copilot, Cursor[1], o1 pro, Gemini pro, Perplexity, Elicit, etc. Different LLMs have different styles, which you can get a finger-tip feel for when you work with them a lot. Being playful will help you avoid the trap of dismissing them too soon. Beware AI slop. We are not pollyanish AI enthusiasts — much of the content currently produced by AI is bad and possibly harmful. Continue to whet your tastes on pre-2023 human-sourced content. Organizational changes[2] Invest in documentation: LLM tooling is most helpful when you can provide rich context. Create good, up-to-date documentation on company projects to maximize the help that current tools can provide, and to lay the infrastructure for the future. More generally, consider migrating to monorepos and single sprawling google docs to make it easier for your AI systems to load in the necessary context.Adopt team and organizational norms of experimentation: Set a north star for your research team and organization to experiment with increased use of AI agent workflows. Appoint someone as end-responsible for automation (in your infrastructure or devops team). Beware AI slop. You shouldn’t use AI systems blindly for all of your coding and research. At the same time, you should tolerate early automation mistakes (from, e.g., AI code slop) as learning opportunities for your organization to develop better quality control processes. Community-level actions Develop more case studies and research: Though research automation will be different from past waves of automation, we can take a lesson from historical examples of automation. Below, we’ve included stubs to a few examples and encourage gathering primary sources and interviews from practitioners during these transition periods.Share results from individual, or ideally team, experiments: We expect there to be a lot of different “organizational design patterns” for research automation. It will be difficult and counterproductive for any one team to work through all of them, but sharing techniques for this type of experimental research will benefit the collective. Establish high-signal groups/meetups/conferences with a research organization focus: We encourage bringing together groups of researchers who are interested in and experimenting with research automation. There’s a tremendous amount of noise in the space; trusted groups can act as necessary filters for separating practical, evidence-based approaches from less substantiated claims. At the same time, we should cast a wide net and learn from how non-AI-safety organizations are adapting to AI.Outline visions and “sci-fi” futures of research fleet management: We’ve outlined one possible vision, but we expect there are to be far more, and we expect vision papers/posts/tweets to help clarify the direction that we need to steer towards. In general, we recommend working forwards from your existing workflows rather than working backwards from any idealistic vision of what automated AI safety research should look like. Too much theorizing is a real risk. Work iteratively with what you have. We personally are starting today, and think you should too. The race for AI safety isn't one we chose, but it's one we have to win. Thanks to Raemon and Daniel Murfet for feedback on a draft of this post. Further Reading On Automation in AI Safety “Superalignment with Jan Leike” on AXRP (Daniel Filan), July 2023“Cyborgism” Nicholas Kees & Janus, February 2023“Cyborg Periods: There will be multiple AI transitions” Jan Kulveit & Rose Hadshar, February 2023“The case for ensuring that powerful AIs are controlled” Ryan Greenblatt & Buck Shlegeris, January 2024“Why Not Just... Build Weak AI Tools For AI Alignment Research?” John Wentworth, March 2023“A minimum viable product for alignment” Jan Leike, 2022“Some thoughts on automating AI alignment research” Lukas Finnveden, May 2023.“Could we automate AI alignment research” Stephen McAleese, August 2023 On Research Automation “A basic systems architecture for AI agents that do autonomous research” Buck Shlegeris, September 2024“Integrating AI Agents into Companies” Austin Vernon, December 2024“o1 isn’t a chat model (and that’s the point)” Ben Hylak and swyx & Alessio, January 2025“The AI scientist” Sakana AI, August 2024“Can LLMs Generate Novel Research Ideas?” Si, Yang, and Hashimoto, Sep 2024 via the Cognitive Revolution (Nathan Labenz)“Introducing Squiggle AI” Ozzie Gooen, January 2025“Interviewing AI researchers on automation of AI R&D” David Owen, Aug 27th 2024“AlphaFold @ CASP13: ‘What just happened?’” Mohammed AlQuraishi, December 2018 On Automation Generally Algorithmic trading MacKenzie, D. (2021). "Trading at the Speed of Light: How Ultrafast Algorithms Are Transforming Financial Markets." Princeton University Press. MacKenzie, D. (2019). "How Algorithms Interact: Goffman's 'Interaction Order' in Automated Trading." Theory, Culture & Society 36(2): 39-59. Zuckerman, G. (2019). “The Man who Solved the Market: How Jim Simons Launched the Quant Revolution”. New York, NY, Portfolio / Penguin. Industrial research, big pharma, biotech research, defense & national laboratory research Hounshell, D.A. and Smith, J.K. (1988). "Science and Corporate Strategy: Du Pont R&D, 1902-1980." Cambridge University Press. Henderson, R. (1994). "Managing Innovation in the Information Age." Harvard Business Review 72(1): 100-105. Quality control in flexible manufacturing systems Hayes, R. H., & Jaikumar, R. (1988). "Manufacturing's crisis: New technologies, obsolete organizations." Harvard Business Review, 66(5), 77-85. Goldratt, Eliyahu, (1984). “The Goal: a Process of Ongoing Improvement". Great Barrington, MA :North River Press Medical & legal automation Jha, S. and Topol, E. (2016). "Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists." JAMA 316(22): 2353-2354. Remus, D. and Levy, F. (2017). "Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law." Georgetown Journal of Legal Ethics 30: 501-558. ^ Consider actually reading the docs. ^ In a sense we are all corporations now. All of these suggestions also apply to how you organize AIs in your personal life.
2025-01-12
https://www.lesswrong.com/posts/6Jo4oCzPuXYgmB45q/how-quickly-could-robots-scale-up
6Jo4oCzPuXYgmB45q
How quickly could robots scale up?
Benjamin_Todd
Once robots can do physical jobs, how quickly could they become a significant part of the work force? Here's a couple of fermi estimates, showing (among other things) that converting car factories might be able to produce 1 billion robots per year in under 5 years. Nothing too new here if you've been following Carl Shulman for years, but I thought it could be useful to have a reference article. Please let me know about corrections or other ways to improve the estimates.
2025-01-12
https://www.lesswrong.com/posts/5Sq3XBcRmd8ereHgo/agi-will-not-make-labor-worthless
5Sq3XBcRmd8ereHgo
AGI Will Not Make Labor Worthless
maxwell-tabarrok
The discovery and use of machinery may be … injurious to the labouring class, as some of their number will be thrown out of employment, and population will become redundant. - David Ricardo Fears about human labor getting replaced by machines go back hundreds if not thousands of years. These fears continue today in response to the rapid progress of AI in several previously secure human domains. Rudolf L on the Lesswrong forum, for example, claims that a world with labor-replacing AI would mean: People's labour gives them less leverage than ever before.Achieving outlier success through your labour in most or all areas is now impossible. Scott Alexander responded to this post with some reasons to doubt the implications of complete labor redundancy, but he accepts the premise that AGI will make human labor valueless without contest. The basic argument in favor of this position is twofold. First, that AI will continue to rapidly automate tasks that humans used to do. Second, is that as a consequence of this automation, there will be more available labor available for a shrinking number of jobs, thus lowering wages until eventually all tasks are produced by AIs and the only way to make income is to own some GPUs, power plants, or land. The long history of this idea proves that it is convincing but it probably isn’t true. Even if AGI continues to advance and is eventually able to do any task that humans currently do, human labor will not become worthless. The Labor Share of GDP Has Been Constant For 200+ Years The labor share of income - the fraction of total output that is paid to workers versus capital owners - has been constant for so long that it has become one of the fundamental regularities that any model of long run economic growth must explain. For at least 200 years, 50-60% of GDP has gone to pay workers with the rest paid to machines or materials. There is lots of hubbub about falling labor share since the 1990s, but this is due to mismeasurement of labor income which has shifted increasingly into non-wage benefits like employer healthcare and retirement plans. Accounting for this, the regularity of labor’s share of income at around 50% remains. The basic story that motivates fear of AI automation predicts that more automation leads to lower value and leverage for labor, but this story cannot a explain a flat labor share of income since 1800. The past 200+ year period of industrial economic growth has been defined by the rapid growth of labor-replacing automation, but labor’s share of income has been constant. Almost all the tasks that were done by humans in 1800 are now automated, but the labor share of income did not go to zero. This is not to say that a change in labor’s share of GDP is impossible, but the constancy of this measure through all of our past experience with labor-replacing technologies is an important foundation to keep in mind when predicting the effects of the next such technology. Labor’s constant share of income is not due to luck or coincidence but equilibrating forces in overall demand, relative factor prices, and directed technological change. A technological innovation that automates tasks previously done by labor does displace people and decrease labor’s share of income. However, there are four countervailing effects which keep labor’s share constant. Automation increases productivity and output. A task won’t be automated unless it increases productivity to do it with machines instead of labor. Therefore, once something is automated, people can afford more of the automated product at lower cost and their budget constraint is relaxed everywhere else, essentially increasing their total incomes. When incomes increase, people demand more goods from all sectors, including the sector getting automated and other labor intensive sectors. If demand for the good produced by the automated tasks is very elastic, this can even increase employment in the automated sector. For example, the printing press automated the most labor intensive part of authorship, but the increase in the quantity of books demanded was so great that in addition to each author getting hundreds of times more productive, there was also an explosion in the number of people employed as authors after the printing press automated away many of their tasks. Alternatively, the productivity increase will have spillovers into demand for other areas of the economy that aren’t as easily automated. For example, as farming was mechanized and automated, people didn’t need to spend as much on food, so their incomes effectively increased. While they didn’t spend enough extra money on food to offset labor displacement there, they spent enough on other industries like textiles and automobiles which employed more labor to meet that demand.Automation raises the amount of capital available to each worker. Automation makes capital more productive, thus raising the returns to investment and leading people to build more capital machines. This increase in the capital stock complements labor and makes it more productive, and thus better compensated. For example, the automation of computing displaced many “computers”, but it made certain types of capital much more valuable. As firms invested in computers, the marginal worker became more productive since they now had access to powerful machines that helped them in their work. The rise in labor demand for more productive computer-powered secretaries, stock traders, engineers, and programmers far outweighed the displacement caused by the automation in the first place. Additionally, since the supply of capital responds elastically to changes in its productivity while labor does not, the productivity gains from automation end up accruing to labor in a type of Baumol’s cost disease.Automation often improves capital productivity Automation can help capital replace a task that is currently done by labor, but it can also replace old capital and improve productivity in already automated tasks. For example, the transition from horse-drawn plows to tractors doesn't have any displacement effect but does have an income effect, which again increases demand for all goods including from labor intensive sectors or sectors where labor has a comparative advantage, thus increasing wages and employment in those sectors.Automation creates new tasks that labor can perform. Automation often straightforwardly creates a new task for labor by requiring someone to produce, supervise, and maintain the automating machinery. Even when this is not the case, the displacement caused by automation also incentivizes more investment into technology which creates new tasks for labor rather than capital. Automation means that capital has a growing opportunity cost, while labor has a falling opportunity cost. There are more valuable tasks that capital can do and fewer that labor can. So a new technology which takes advantage of underutilized labor will get higher returns than one that relies on in-demand and constrained capital. These forces have kept the labor share of income in equilibrium through all of the major technological revolutions of the past 200+ years. Therefore, a convincing and likely true story about how AI will automate most of the tasks currently performed by humans is insufficient evidence to conclude that the labor share of income will go to zero. Self-Replicating Genius Labor Did Not Make Average Labor Worthless The obvious response to the above argument is that AGI is different than all the labor replacing technology which came before it. These previous technologies automated particular, narrow tasks rather than emulating the general learning capacity of the human mind. It may be true that labor can retain it’s value against capital that automates one task at a time, but what would happen if tens or hundreds of millions of fully general human-level general intelligences suddenly entered the labor market and started competing for jobs? We needn’t speculate because this has already happened. Over the past three centuries, population growth, urbanization, transportation increases, and global communications technology has expanded the labor market of everyone on earth to include tens or hundreds of millions of extra people. This influx of intelligence came out of a recursively self-improving feedback loop between people, the ideas they create, the resource they can produce using those new ideas, and the new people those resources can support. Source For below average intelligence laborers, this means competing with an ever-growing population of people who are better than them at any conceivable task. The amount of intelligence competing in the average person’s labor market increased by several orders of magnitude over the past few centuries. But this did not turn low skilled laborers into a permanently unemployed underclass who could not make any money from their labor. In fact, their incomes increased a lot as they specialized and traded with these much more intelligent workers. The reason for this is comparative advantage. Low skilled laborers are worse than high skilled ones at all tasks. But high-skilled laborers face constraints: if they work on an assembly line, they can’t also teach at the university. Low-skilled workers don’t face this tradeoff and thus can outcompete high-skilled workers for assembly line jobs even when they are less productive at them. Exactly because of their superior ability at all tasks, high skilled workers give up more when they choose to do something that they could trade for. This applies just as strongly to human level AGIs. They would face very different constraints than human geniuses, but they would still face constraints. There would still not be an infinite or costless supply of intelligence as some assume. The advanced AIs will face constraints, pushing them to specialize in their comparative advantage and trade with humans for other services even when they could do those tasks better themselves, just like advanced humans do.
2025-01-12
https://www.lesswrong.com/posts/s39XbvtzzmusHxgky/the-purposeful-drunkard
s39XbvtzzmusHxgky
The purposeful drunkard
dmitry-vaintrob
I'm behind on a couple of posts I've been planning, but am trying to post something every day if possible. So today I'll post a cached fun piece on overinterpreting a random data phenomenon that's tricked me before. Recall that a random walk or a "drunkard's walk" (as in the title) is a sequence of vectors x1,x2,… in some Rn such that each xk is obtained from xk−1 by adding noise. Here is a picture of a 1D random walk as a function of time: Weirdly satisfying. A random walk is the "null hypothesis" for any ordered collection of data with memory (a.k.a. any "time series" with memory). If you are looking at some learning process that updates state to state with some degree of stochasticity, seeing a random walk means that your update steps are random and you're not in fact learning. If you graph some collection of activations from layer to layer of a transformer (or any neural net architecture with skip connections -- this means that layer updates are added to "memory" from the previous layer) and see a random walk behavior, this means that you have in fact found no pattern in how the data gets processed in different layers. A random walk drifts over time, but ends up in a random place: you can't make any predictions of where it ends up (except predict the exact form of Gaussian randomness): Time plotted as color. Now one useful way that people use to find patterns in their data is make pictures. Since data in ML tends to be high-dimensional and we can only make 2-dimensional pictures[1], we tend to try to plot the most meaningful or prediction-relevant data. A standard thing to do is to look at top PCA components: i.e., find the maximal-variance directions and look at how the data behaves in these directions. If we have generated some collection of possible interesting directions that reflect some maximally informative parameters that the internal data of an LLM tracks, we can examine their behavior one by one, but we expect to see the strongest form of the relevant behavior in the top PCA components. So if the top PCA components of a process with memory look like a random walk, that's even stronger evidence that we haven't found a pattern (at least a linear one). Here is a 5-dimensional random walk plotted to 2 dimensions via its top 2 PCA components: This still looks pretty random (with a random drift, as before). This is fun! Let's do the same thing with 250 points of a 25-dimensional random walk, plotted to their top 2 PCA: Hmm - something fishy is going on. Let's repeat with a 100-dimensional random walk: And a 500-dimensional random walk for good measure: The character in the drunkard's walk has now obviously sobered up and decided where to go. To be fair, his y-coordinate movement is a bit indecisive: he ends up going down, then up -- but he purposefully walks from left to right. To help him along a bit, let's graph the first and 3d PCA components instead of the first and second: Now he's indecisive in the y coordinate as well, but does end up in a new place in the y direction. Just for fun, here is the 1st and 4th components: This is very pretty! And I'll shortly explain how one can understand this. But first, notice that this captures a particularly frequent danger in statistical analysis of high-dimensional systems -- namely one of having too much control over the extracted data (related to, though not quite the same thing as, overfitting). When you have a lot of control, you can endow the bumbling drunkard with an arbitrary number of purposeful behaviors. This is all well and good when you are using this to generate pretty pictures or (as we'll do in a bit) talk about random matrices, but it's not great if you're framing an innocent drunkard for murder: "your honor, clearly he knew where he was going". I experienced this phenomenon first-hand (acting as the "drunkard police") when Nina Panickssery and I were doing some followup work on our modular addition SLT project[2]. In doing this, we ran an SGLD sampler and just for fun, plotted the top two components. In the regime in which we were working (where all macroscopic measurements seemed to show the sampler to be unbiased), we saw strange pictures similar (though with a slightly more sophisticated geometric shape) to the random walk PCA above. We decided that there must be some secret extra directions in which the sampler remains biased (a.k.a. unstable -- the specific meaning of these words doesn't matter), though this sneaky direction doesn't interfere with any of the important large-scale parameters that we needed to sample (which we were quite sure, from several sources of inquiry, were behaving stably at the range we looked at). This wasn't the main point of the work, but it was a surprising thing that I remember pointing out as something to be careful to to some other people who were performing SGLD sampling. In other contexts, I've seen similar graphs of layerwise activations of neural networks graphed along such a PCA, with the curve behavior interpreted as capturing a pair of behaviors, one of which develops along layer depth and the other of which peaks, then goes back to baseline, as the layers go from early to middle to late. Later when I thought to perform this random walk PCA experiment, I experienced the familiar forehead-palm experience that I am learning to know and love in empirical research. So the main takeaway from this post is a warning: remember when you're doing something tricky and data-dependent to extract or graph your data, and make sure that you're not accidentally framing an innocent drunkard. (Note: if you look at more sophisticated random walks -- such as random walks with momentum, or slowly varying step size, etc., then you'll get a number of differently-shaped "pretty pictures" in the top PCAs. This can actually give usable insights if you're trying to explain a high-dimensional time series as a random process of a particular type, but if you're trying to find consistent macroscopic behaviors, then it's healthy to have a blanket suspicion of any pretty picture given by a PCA or any data extraction method that is data-dependent, or to at least put a bit of thought into some back-of-the-envelope degree-of-freedom computations to make sure the phenomenon you find isn't explained by overfitting. Random matrices Now let's finally think about why the pictures we see behave in this particular structured way. The key point, which is also the key class of insights in random matrix theory, is that eigenvalue-like invariants of random matrices have a significant amount of structure. When modeling a random walk as a random process, we start with the matrix of random updates (i.e., the differences between subsequent points in the random walk): Mupd∼N(0,1)d×T. Here we assume that we're performing T steps of the random walk in d-dimensional space. Each step is vector in Rd drawn from a Gaussian distribution, and since the updates are (by definition) uncorrelated, we have T independent draws. (Note: I'm not going to worry about normalization scales, since they'll all boil out into a single tunable constant at the end). Now each location in a random walk is xt=M1+M2+⋯+Mt, where Mi is the ith d-dimensional column of M. We can rewrite this using the T×T upper triangular "summation" matrix: S=⎛⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜⎝11…1101…11⋮⋮⋱⋮⋮00…1100…01⎞⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟⎠. Now the location matrix X=x1,x2,…,xT is X=S⋅Mupd. Now the pictures we were drawing are obtained by: "centering" by subtracting the mean of each row of X to get Xcent. This means multiplying on the left by another T×T matrix, C=I−1T(1⋯⋮⋱,) where the matrix on the left is the identity and the matrix on the right is the (normalized) matrix all of whose entries are 1.Finding the two top eigenvectors of the diagonal matrix XTcentXcent=MTupdSTCTCSMupd.Projecting columns of Xcent=CSMupd to the space spanned by these. Now one can do pretty sophisticated random matrix theory to approximate this result, but there is a cheat that you can apply to get a stab at the "general behavior" for random walks in very large-dimensional spaces. Namely, in the limit where the dimension d is much larger than the number of steps T, one can simply assume that (up to rescaling), the random matrix of updates M_{upd} consists of T random orthogonal vectors. This is because of the most important result in high-dimensional spaces: The Johnson-Lindestrauss[3] lemma: with very high probability, a collection of uniform Gaussian random vectors v1,…,vk in a very high-dimensional space is almost orthonormal. (With norm depending on the dimension and the variance of the Gaussians.) I'm not saying exactly how to operationalize this, and indeed there are many operationalizations; here I'm just giving an intuition. Once we observe this, we note that the projection we're looking at is independent of orthogonal changes of coordinates in the d-dimensional space[4]. Thus if we were to really take the J-L lemma seriously and assume our update vector matrix Mupd consists of a collection of orthonormal vectors (up to a fixed normalization rescaling), it follows that we can (up to a fixed scale) simply drop the random matrix Mupd, and work throughout with the deterministic matrix CS that gets multiplied by Mupd. This means that (in the d>>T limit), the elegant pictures we see are just the projections on the right of the matrix CS to its highest eigenvalues. Now we can check that (CS)TCS is a nice matrix with incrementally changing entries (on a suitable scale), we should not be surprised that once the projections are worked out, they have the smooth form we see above. Note that what we used here isn't really random matrix theory, since we ultimately only looked at the singular vectors of a deterministic (namely, CS) rather than a random matrix. Of course in most interesting examples, we're not working in the extreme d>>T limit, and in fact the dimension d may be less than the number of datapoints T. In this case it's not longer legitimate to "overcommit" to the J-L lemma and model the datapoints as orthogonal. A full analysis would require more careful random matrix methods. Nevertheless, so long as the dimension d is not too much less than T, we should expect to still approximately notice the same large-scale behavior. ^ Citation needed ^ The followup work is currently unpublished, but I'm planning to write about something related to it later. ^ Originally, this name was used for a more sophisticated extension of this insight to a statement about the way geometry changes under projection, but it has become standard to use this lemma as a catch-all for all related statements. ^ To check this: if we are projecting a centered matrix M to a top principal value, the resulting T datapoints are Mv for v a top eigenvector of MTM. Now if we write M'= MU for U an orthogonal matrix, then the vector v′=U−1v is a top eigenvector of (M′)TM′ (check this!). The corresponding projection is then M′v′=MUU−1v=Mv, i.e., doesn't change.
2025-01-12
https://www.lesswrong.com/posts/eqSHtF3eHLBbZa3fR/cast-it-into-the-fire-destroy-it
eqSHtF3eHLBbZa3fR
Cast it into the fire! Destroy it!
panasenco
We should only use AGI once to make it so that no one, including ourselves, can use it ever again. I'm terrified of both getting atomized by nanobots and of my sense of morality disintegrating in Extremistan. We don't need AGI to create a post-scarcity society, cure cancer, solve climate change, build a Dyson sphere, colonize the galaxy, or any of the other sane things we're planning to use AGI for. It will take us hard work and time, but we can get there with the power of our own minds. In fact, we need that time to let our sense of morality adjust to our ever-changing reality. Even without AGI, most people already feel that technological progress is too fast for them to keep up. Some of the greatest thinkers and writers of humanity have warned us of the danger and seductiveness of unlimited power. Take this passage from Tolkien and tell me it doesn't sound like most of the people you've heard talk about the wonderful things they're planning to do with AGI: Already the Ring tempted him, gnawing at his will and reason. Wild fantasies arose in his mind; and he saw Samwise the Strong, Hero of the Age, striding with a flaming sword across the darkened land, and armies flocking to his call as he marched to the overthrow of Barad-dir. And then all the clouds rolled away, and the white sun shone, and at his command the vale of Gorgoroth became a garden of flowers and trees and brought forth fruit. He had only to put on the Ring and claim it for his own, and all this could be. Lovecraft warned us of what would happen when our abilities outpaced our morality, when we ourselves would become powerful like cosmic horrors: The time would be easy to know, for then mankind would have become as the Great Old Ones; free and wild and beyond good and evil, with laws and morals thrown aside and all men shouting and killing and revelling in joy. Then the liberated Old Ones would teach them new ways to shout and kill and revel and enjoy themselves, and all the earth would flame with a holocaust of ecstasy and freedom. Humanity has nothing to gain from AGI, and everything to lose. We don't need AGI to have human values or to follow instructions in a friendly manner. We just need to figure out that one command to seal off that power forever - without disassembling ourselves in the process. If Geoffrey Hinton, Elizier Yudkowsky, and other top AI researchers are wrong about the power and dangers of AGI, then the AGI will probably be incapable of following the command to the extent we imagine anyway. On the other hand, if those researchers are right, only then will humanity understand the depth of the precipice upon which it stood. It's one thing to listen to experts talk about hypothetical future dangers, another to see hundred-billion-dollar distributed computers inexplicably turned into paperweights. Few will be able to deny the reach of the power and of the danger then. Humanity will survive, and hopefully recognize that there really are "seas of black infinity" out there that we may never be ready to touch. If you, dear reader, have a chance of being that first person to give a command to a superintelligence, don't be an Isildur. Unlimited power won't do any good for you or for anyone else, and it was not meant for us to bear. If you can, seal that power and free humanity from the fear of eternal death and eternal nightmare. Of course, making an AGI make sure AGI never gets used again is easier said than done, and even this seemingly simple problem seems to be on the same order of difficulty as alignment in general, and just as likely to get us all disassembled if we screw it up. Still, this is the problem AI alignment researchers should be focused on. One silver lining here is that there's a possibility that we may be within the light cone of an alien civilization that actually got this right, so their "anti-AGI AGI" is here in our solar system, and we'll just get to laugh as Microsoft admits that it can't turn Stargate on and then go on to live our normal lives.
2025-01-13
https://www.lesswrong.com/posts/vGeuBKQ7nzPnn5f7A/why-modelling-multi-objective-homeostasis-is-essential-for
vGeuBKQ7nzPnn5f7A
Why modelling multi-objective homeostasis is essential for AI alignment (and how it helps with AI safety as well)
roland-pihlakas
I notice that there has been very little if any discussion on why and how considering homeostasis is significant, even essential for AI alignment and safety. Current post aims to begin amending that situation. In this post I will treat alignment and safety as explicitly separate subjects, which both benefit from homeostatic approaches. This text is a distillation and reorganisation of three of my older blog posts at Medium: Making AI less dangerous: Using homeostasis-based goal structures (2017)Project proposal: Corrigibility and interruptibility of homeostasis based agents (2018)Diminishing returns and conjunctive goals: Mitigating Goodhart’s law with common sense. Towards corrigibility and interruptibility via the golden middle way. (2018) I will probably share more such distillations or weaves of my old writings in the future. Introduction Much of AI safety discussion revolves around the potential dangers posed by goal-driven artificial agents. In many of these discussions, the agent is assumed to maximise some utility metric over an unbounded timeframe. This simplification, while mathematically convenient, can yield pathological outcomes. A classic example is the so-called “paperclip maximiser”, a “utility monster” which steamrolls over other objectives to pursue a single goal (e.g. creating as many paperclips as possible) indefinitely. “Specification gaming”, Goodhart’s law, and even “instrumental convergence” are also closely related phenomena. However, in nature, organisms do not typically behave like pure maximisers. Instead, they operate under homeostasis: a principle of maintaining various internal and external variables (e.g. temperature, hunger, social interactions) within certain “good enough” ranges. Going far beyond those ranges — too hot, too hungry, too socially isolated — leads to dire consequences, so an organism continually balances multiple needs. Crucially, “too much of a good thing” is just as dangerous as too little. In this post, I argue that an explicitly homeostatic, multi-objective model is a more suitable paradigm for AI alignment. Moreover, correctly modelling homeostasis increases AI safety, because homeostatic goals are bounded — there is an optimal zone rather than an unbounded improvement path. This bounding lowers the stakes of each objective and reduces the incentive for extreme (and potentially destructive) behaviours. Why Utility Maximisation Is Insufficient In the standard utility maximisation framework, the agent’s central goal is to increase some singular measure of “value” or “reward” over time. Because many real-world objectives are treated as if they were unbounded — for instance, economic growth can always be increased by some amount — maximisers tend to push that objective beyond any reasonable point. This dynamic is what yields the risk of “berserk” behaviour. Unboundedness: If there is no built-in satiety or equilibrium, the AI keeps pushing the objective indefinitely, often at the expense of everything else.Single-Dimensional Focus: When multiple objectives exist, simple summation or linear weighting often encourages the AI to optimise whichever objective is most “convenient” to push to extremes. Yet in reality, for example eating food twice the normal amount does not compensate for the lack of drink and drinking a lot does not compensate for neglecting to eat. By contrast, homeostasis signals that there is an optimal zone — an “enough” amount — for each objective. Pursuing more than enough is wasteful and even harmful. Representing such objectives as unbounded maximisation objectives would be grossly inaccurate. In the example of eating and drinking — both activities need to be balanced with some time granularity. Homeostasis as a More Correct and Safer Goal Architecture 1. Multiple Conjunctive Objectives Real organisms typically have several needs (food, water, social connection, rest, etc.) that must all be satisfied to at least a “sufficient” level. In a homeostatic agent: Each objective has a target range (or setpoint).If one objective is wildly off-target, the agent diverts efforts to fix that dimension rather than endlessly over-optimising another.The agent thus operates in a “balanced” or “middle” zone across all dimensions. When an AI has multiple conjunctive goals, it cannot ignore most while excessively optimising just one. Instead, the synergy among objectives drives it toward a safer, middle-ground strategy. In effect, massive efforts in any single dimension become exponentially harder to justify because you are “pulling away” from the target ranges in others. 2. Task-Based Agents or Taskishness — “Do the Deed and Cool Down” Homeostatic goals naturally lead to bounded or “task-based” behaviour rather than indefinite optimisation. If all current objectives are satisfied, a homeostatic agent is content to remain idle. It does not keep scanning the universe for hypothetical improvements once the setpoints are reached. This “settle to rest” feature significantly limits the potential damage the agent might cause. Even if such an agent accidentally does something very harmful on a personal level, it is less likely to affect entire nations. For example, an AI tasked with making 100 paperclips in a homeostatic framework will strive to produce enough paperclips to satisfy the “paperclip count” objective. Once the target is met (often even just approximately) — and so long as other objectives are also in a comfortable range — it does not chase infinitely more micro-optimisations. 3. Bounded Stakes: Reduced Incentive for Extremes Because each objective is bounded, the stakes of discrepancy in one dimension remain limited by the balance with other dimensions. The system cannot keep turning up the efforts in a single dimension without noticing detrimental side effects (like going off course in every other dimension). This effect reduces the likelihood of extremist outcomes. 4. Natural Corrigibility and Interruptibility A hallmark of AI alignment is ensuring that we can correct or shut down an agent if it goes off track. Simple maximisers often resist shutdown because they interpret it as a threat to their singular goal. By contrast, a homeostatic system: Assumes by design that the targets/setpoints may change. Therefore the agent does not need to plan indefinitely into the future, because long-term speculation about changing setpoints is often counterproductive.After achieving equilibrium, it effectively has no strong incentives to prevent new goals or new tasks from arising. When a human changes the setpoints (the agent’s “needs”), the agent can start its bounded work to re-center on the new bounded targets.Additionally and just as importantly, if at least one goal in the framework of homeostatic objectives is “be agreeable” and “avoid harming the human operators”, then that acts as a persistent partial “off switch” or at least as a curb on manipulative and other extreme behaviour. The agent will not want or even need to push negative externalities in the name of maximising any single dimension. At the same time, reasonable “civilised” resistance, like proposing a few honest counterarguments where appropriate, should be considered permitted and even useful. Diminishing Returns and the “Golden Middle Way” An important corollary of multi-objective homeostasis is diminishing returns. Once a target is nearly satisfied in one dimension, the agent’s “bang for the buck” in that dimension is no longer large; it becomes more “cost-effective” to address other objectives that are further from their optimum. Thus: The agent can keep multiple needs near healthy levels hopefully most of the time.Each incremental improvement in a well-satisfied need is overshadowed by bigger deficits in other needs. This “golden middle way” extends even to safety measures themselves. An AI with an excessively strong safety constraint might be tempted to devote the entire universe’s resources to verifying it is 100% safe — unless that safety measure also has a built-in diminishing returns principle. By bounding safety checks themselves, we avoid another pathological scenario: the AI going to extremes to confirm it has never ever caused any harm, while ignoring the rest of its objectives. I hope the trade here is clear: nobody would build or give resources to an agent that is safe to such an extent, therefore it would be a losing strategy. Formalising Homeostatic Goals In reinforcement learning or other goal frameworks, one can use a loss or negative-utility term for each objective. Each objective is measured against its target (also known as “setpoint”), and both “too little” and “too much” push the system away from equilibrium. Summing or combining these losses in a way that penalises big deviations in any single dimension more strongly than small deviations in many fosters a “balancing” behaviour. One simple formula might be: Reward=−∑iwi⋅(targeti−actuali)2 Here: Each targeti is a setpoint for the i-th objective (e.g. water needed, number of people not harmed, etc.). In case of the number of people not harmed, the setpoint is zero or a negative number.If actuali gets too high or too low, the negative squared term grows rapidly, which steers the agent back toward the setpoint.A sufficiently “close” match to each target means there is little additional gain for over-optimising that dimension.The weighting factors wi reflect the relative importance (or priority level) of each objective.In principle, there may be various power/exponential functions involved, not always a square function. But this is already an advanced discussion. This post here is a conversation starter. Some objectives can also be framed as “safety constraints” about not disturbing the environment too much, and these can be folded into the same overall homeostatic reward system. These would be the “low impact” and "minimal side effects" objectives, sometimes also “keep future options open” and “maximise human autonomy” objectives — “negative goals” by their nature, as these objectives are about not changing the existing state, as opposed to achieving some new state as is the case with “positive goals” which are usually the “performance” objectives. Parallels with Other Ideas in Computer Science As machine-learning-minded ones of you may notice, this formula above is very similar to how a regression algorithm works. Except in this case the squared errors are computed from plurality of objectives, not plurality of data points. In both cases the motivation for squaring is avoiding overfitting to a subset of data or objectives. I propose an additional perspective: The distinction between constraints versus objective functions in combinatorial optimisation is analogous to the distinction between safety objectives and performance objectives. It is notable that in combinatorial optimisation problems the concept of constraints have been naturally considered as part of the setup, alongside the concept of objectives. In contrast, this unfortunately has often not been the case in the use of reinforcement learning. Yet in reality both performance and safety objectives should be present. The difference between "safety objectives" versus "constraints in combinatorial optimisation" can be that various safety objectives might considered as “soft” constraints — they can be traded off up to a point, but not too much. When they are violated notably, they become increasingly similar to hard constraints in their effects, thanks to squaring of the discrepancies. Though in special use cases, some safety considerations can also be treated as “hard” constraints when comparing to performance objectives, while being still traded off or balanced among plurality of other safety objectives. (These are not just theoretical thoughts here — I have implemented such advanced homeostatic setups for example in multi-objective workforce planning algorithms that have been successfully time-tested for the past 15 years by now.) Finally, the most obvious parallel — control systems and control theory. This framework treats objectives inherently as bounded and homeostatic. It supports changing setpoints, having multiple dimensions/objectives, and they can even be hierarchical. Cybernetics is a broader closely related field which is worth mentioning. It is curious that it has evolved largely separately from AI. Is there a reason we don’t talk more about cybernetic alignment instead of AI alignment? Open Challenges and Future Directions While promising, a multi-objective homeostatic approach has many subtleties: Time GranularityIn the example of eating and drinking — both activities need to be balanced with some time granularity. It would not be necessary nor productive to keep these objectives maximally balanced at all times. That would mean for example, that an agent runs between food and drink sources without stopping for more than a smallest timestep. Which would in the end result in collecting "equally" less than necessary on both objectives, since most of time would be spent on switching tasks. Deciding an optimal time granularity during which unequal treatment to competing objectives is allowed, requires a bit of strategic thinking on the agent’s part.Handling Evolving TargetsObjectives (or their setpoints) may change over time, especially when humans are involved. Ideally, the agent should accept these new targets without undue manipulation such as resisting the changes, escaping, or reverting the objectives. Likewise, the agent should not be inducing the changes. In the current post I mentioned the general principles for solving this, but details still need to be worked out and tested in practice.Soft vs. Hard ConstraintsNot all objectives are created equal. Some constraints (e.g. “don’t kill humans”) might be hard constraints with extremely high weight or even lexicographic priority. But introducing lexicographic ordering may require some hard decisions. Additionally, one must be diligent and make sure that these higher-priority objectives could not cause extreme losses in unacceptable dimensions of low-priority objectives. Alternatively, one should ensure that the higher-priority objectives include all objectives where extreme losses are undesirable.Goodhart’s Law and Multi-Scale MeasuresWe might measure safety and low impact at multiple “levels” (individual, community, global, etc.). Aggregating or normalising them incorrectly risks inadvertently re-enabling Goodhart-like exploits. As an example, there is a possible error where one large aggregated discrepancy (lets say, at gender level) is split up into multiple small discrepancies (at individual level) while being insufficiently represented at the higher level.Coordination of Multiple AgentsIf multiple homeostatic agents co-exist, how do they avoid interfering with each other’s setpoints? Most likely some authorisation and power hierarchy framework is relevant here. Such a system would then inevitably need to also include mechanisms for accountability auditing of the higher-ups who need to be held liable when a subordinate system executes unsafe actions down the line.Tradeoffs Between “Interruptibility” and “Getting Work Done”Ideally, we want the agent to do its job of achieving performance objectives but also remain open to external corrections or even a shutdown. Balancing these is partly social/political and partly technical. Yet there is a well known risk of systems becoming too important to shut down. Perhaps aligned systems could be throttled gradually, so a “shutdown” command becomes a “chill down a bit” command.Integration of Bounded and Unbounded ObjectivesEven after introduction of the concept of homeostatic objectives, sufficiently many performance objectives can still belong to the unbounded class. In contrast to bounded objectives, on which I focused in this post, unbounded objectives could reach infinite positive rewards. Yet these potential infinite rewards should not dominate safety objectives or even exclude balancing of other performance objectives. This topic is explored in a bit more detail in the addendum below.Rate-Limited ObjectivesEven though homeostatic objectives are bounded locally in time, in various cases they can be unbounded across time. For example, a human need for novelty may be satiated for a day, but it will arise again the next day. This dynamic can be described in other words as rate-limited objectives. Rate limiting is important also for sustainability purposes — to avoid exhausting the renewable resources in the environment. So homeostasis fulfills a sort of balancing role here as well. I imagine, the expansion of humanity could often be described from such a perspective too — humanity may have an intrinsic need to grow to live even outside of our planet Earth one day, but such growth needs to be sustainable. These are open research challenges, but a homeostatic approach at least provides a conceptual blueprint of how to incorporate multi-dimensional, bounded goals into AI systems without inviting the pathological extremes of naive maximisation. This post is a conversation starter and there is so much to explore further. Addendum about Unbounded Objectives I fully acknowledge that various instrumental objectives could still have unbounded nature. For example, accumulating money, building guarantees against future risks, etc. (Though in case of risks, people familiar with the concept of antifragility may argue that reducing risk too much in the short term would paradoxically mean more fragility in the long run). These unbounded objectives would still become safer when the above described homeostatic principles are considered in relevant objectives alongside. Additionally, unbounded objectives are usually described by concave utility functions in economics. In other words, these objectives too have their own form of diminishing marginal returns. Besides biology, economics is the other fundamental and well-established field describing our needs. AI alignment surely should consult both. An aligned agent should be able to model diminishing returns and thus balancing between plurality of unbounded objectives just as well as it should be able to properly model homeostasis. As economists have observed, humans prefer averages in all objectives to extremes in a few. You can read more about balancing multiple unbounded objectives via the mechanism of considering diminishing returns from the following blog posts I have co-authored: A brief review of the reasons multi-objective RL could be important in AI Safety Research (2021)Sets of objectives for a multi-objective RL agent to optimize (2022) Conclusion Homeostasis — the idea of multiple objectives each with a bounded “sweet spot” — offers a more natural and safer alternative to unbounded utility maximisation. By ensuring that an AI’s needs or goals are multi-objective and conjunctive, and that each is bounded, we significantly reduce the incentives for runaway or berserk behaviours. Such an agent tries to stay in a “golden middle way”, switching focus among its objectives according to whichever is most pressing. It avoids extremes in any single dimension because going too far throws off the equilibrium in the others. This balancing act also makes it more corrigible, more interruptible, and ultimately safer. In short, modelling multi-objective homeostasis is a step toward creating AI systems that exhibit the sane, moderate behaviours of living organisms — an important element in ensuring alignment with human values. While no single design framework can solve all challenges of AI safety, shifting from “maximise forever” to “maintain a healthy equilibrium” is a crucial part of the solution space. Thanks for reading! If you have thoughts, questions, improvement suggestions, resource and collaborator references, feedback, ideas, or alternative formulations for multi-objective homeostasis, please share in the comments.
2025-01-12
https://www.lesswrong.com/posts/ArK2YhmpTy8XftubN/extending-control-evaluations-to-non-scheming-threats
ArK2YhmpTy8XftubN
Extending control evaluations to non-scheming threats
joshua-clymer
Buck Shlegeris and Ryan Greenblatt originally motivated control evaluations as a way to mitigate risks from ‘scheming’ AI models: models that consistently pursue power-seeking goals in a covert way; however, many adversarial model psychologies are not well described by the standard notion of scheming. For example, models might display scheming-like behavior inconsistently or acquire harmful goals through data poisoning, jailbreaks, etc.[1] This post suggests a generalization of control evaluations that addresses these non-scheming threats. The high-level change I propose is to define a control evaluation as a method of bounding risk from worst-case deployment behavior generally – rather than scheming models in particular. This change implies two modifications to the control evaluation methodology: Expand the definition of an “untrusted” model to include non-scheming models: In control evaluations described in previous work, the red team constructs adversarial substitutes for models that might be scheming, which are called ‘untrusted models.’ I propose allowing the red team to construct substitutes for any model that might exhibit drastically more dangerous behavior in deployment than what is observed in behavioral testing. Scheming is a classic example of such a model because schemers hide misalignment from developers; however, models might also be imperceptibly dangerous for other reasons, such as because they might be more likely to be jailbroken in deployment than in evaluations, models might rarely adopt misaligned goals, they might be backdoored in training, etc. Any model with dangerous behavior that it would otherwise not display in the evaluation should be replaced with a conservative substitute to simulate how it might pessimistically behave in deployment.Account for differences in the plausible attacks that different kinds of untrusted models might execute. If developers wish to precisely estimate the risk posed by untrusted models, they will likely need to account for differences in the strategies plausibly pursued by a scheming model, a rarely misaligned model, data poisoned model, etc. For example, data-poisoned models might be directly trained to execute attacks and given privileged information about control measures. Therefore, data-poisoned models might execute more effective strategies than scheming models. Conversely, rarely misaligned models will execute less effective strategies than schemers would because they choose the timing of their attacks less selectively. The remainder of this post explains these modifications in more detail: Section 1 elaborates on the alternative definition of a control evaluation I propose.Sections 2 and 3 explain how to determine whether a model is ‘untrusted’ by this definition.Section 4 describes how developers might account for differences in the attack strategies different categories of untrusted models might pursue. 1. Control evaluations assess risk given pessimistic assumptions about how models will behave in deployment Control evaluations were originally motivated by the decomposing AI risk into two components: P(AI models are scheming)P(Unacceptable outcomes occur | AI models are scheming) Improving the first component  – P(AI models are scheming) – is typically referred to as “AI alignment.” Buck Shlegeris and Ryan Greenblatt called the second component “AI control.” In this framing, a control evaluation is a way to upper bound P(Unacceptable outcomes occur | AI models are scheming). Traditional control evaluations involve the following steps: Estimate scheming probabilities. Identify models that might be scheming (or place a joint probability distribution over different combinations of scheming models).Substitute potentially-scheming models. Allow a red team to construct adversarial substitutes for potentially scheming models that conservatively represent how these models might behave in deployment.Aggregate risk. Estimate risk given that various combinations of models are scheming or not scheming and aggregate the result. Figure 1. Traditional control evaluations assess risk under different assumptions about which models are scheming. In this post, I’ll define control and control evaluations in an analogous, but more general way: control is a broad approach to mitigating and evaluating safety given pessimistic assumptions about model deployment behavior. The difference between this definition and the traditional one is that I’m including any pessimistic assumptions developers might make about model behavior, rather than scheming only. The steps of a control evaluation can then be revised like so: Enumerate “untrusted model” categories. Identify hypotheses for why models might be much more dangerous in deployment than they otherwise would be in the evaluation. I’ll call these hypotheses untrusted model categories. These categories might include ‘scheming models,’ ‘datapoisoned models,’ ‘rarely misaligned models,’ etc.Estimate untrusted model category probabilities. Specify probabilities that a given model falls into a given untrusted category.Substitute untrusted models. Allow a red team to construct models that conservatively represent the deployment behavior of each untrusted model.Aggregate risk. Aggregate the resulting risk estimates according to the probability of each untrusted model category. The next sections discuss steps 1, 2, and 3 in more detail. Figure 2. I’m extending control evaluations to assess risk under different assumptions about which models display any category of hard-to-observe worst-case behavior. 2. Enumerate untrusted model categories Traditionally, control evaluations distinguish between models that are ‘trusted’ (very likely NOT scheming) and models that are ‘untrusted’ (plausibly scheming).\footnote{Or developers place a probability distribution over which subsets of models are trusted and integrate safety over this}. A red team constructs adversarial substitutes for all ‘untrusted’ models to approximate their worst-case deployment behavior. Developers treat ‘trusted models’ like reliable tools and don’t replace them with adversarial substitutes. Figure 2. A simple example of a traditional control setup – where control evaluations replace plausibly scheming models with adversarial substitutes and treat non-scheming models like reliable tools. In this post, I’ll say a model is trusted if its behavior in a control evaluation is representative of its behavior in deployment in all safety-critical respects. This alternative definition of whether a model is trusted is directly motivated by when it makes sense for a red team to construct a replacement for the model. If a model is ‘trusted’ by this standard, the model’s behavior is already representative in the evaluation. There is no need for a red team to simulate its adversarial behavior. Untrusted models, on the other hand, will potentially be much more dangerous in deployment than they would be in the evaluation. Therefore, the red team needs to intervene to approximate the worst-case behavior of untrusted models. Figure 3. A revised definition for whether a model is ‘trusted.’ When might models be “untrusted” by this definition? I’ll assume that developers have already performed thorough behavior red teaming – i.e. they have searched for inputs where models display unacceptable behavior and have failed to find any such examples. In this case, models might still display drastically more dangerous behavior in deployment for two reasons: Distribution shift. Model behavior might change across the distribution shift from testing inputs to harmful opportunities in deployment. For example, a model might notice that evaluation scenarios are fake and deployment scenarios are not fake.More harmful opportunities. Even if there is no distribution shift, models might misbehave infrequently enough to evade detection in testing, and still cause a catastrophe when exposed to many opportunities to cause harm. Figure 4. An illustration of the two core limitations of behavioral testing: (1) distribution shift and (2) exposure to a larger number of opportunities to cause harm in deployment than in evaluations. These problems can also be combined – models might both encounter more harmful opportunities in deployment and behave differently under distribution shift. In many cases, developers can narrow down failures in these categories to the following four model behaviors: Models that are more likely to be jailbroken outside of evaluationsData poisoned modelsAlignment faking modelsRarely misbehaving models I will briefly argue this breakdown is roughly comprehensive in black box settings if the following conditions are met: Condition #1: The distribution shift from behavioral evaluation to harmful deployment opportunities is small.[2]Condition #2: Models are usually robust to small distribution shifts. If models are usually robust, developers should be surprised if models misgeneralize across the specific distribution shift from evaluation to opportunities to cause harm in deployment. This suspiciously inconvenient misgeneralization is much more plausible if an intelligent adversary is behind it. There are a few candidates for who this adversary might be: The model itself. In this case, the model is said to be ‘alignment faking’ – i.e. intelligently distinguishing opportunities to cause harm in deployment from evaluations.A malicious user. Users might deliberately try to put the model in situations where it is more likely to misbehave. In this case I will say the model is ‘jailbreak prone.’ In general, jailbreakability only makes a model untrusted if the model is more likely to be jailbroken in deployment than in the evaluation. Otherwise, the behavior of the model would still be representative in the evaluation, even if it might be unsafe).A bad actor in training. An actor might backdoor the training data of the model, for example, so that the model waits until 2025 to pursue secret goals. In this case, I will say the model is ‘data poisoned.’ These policies (alignment faking models, jailbreak-prone models, and data poisoned models) are meant to cover the most plausible concerns from distribution shift; however, models might still cause a catastrophe because there are more harmful opportunities in deployment than there are apparent opportunities in evaluations. If a model misbehaves rarely enough to survive evaluation but frequently enough to cause a catastrophe in deployment, I will say it is ‘rarely misbehaving.’ Thus, I expect these four threats (jailbreak-prone models, data poisoned models, alignment faking models, and rarely misbehaving models) to generally cover the space of early untrusted AI models. In the rest of the post, I’ll call these four categories “untrusted model categories.’ 2. Estimate untrusted model category probabilities Once a developer has enumerated categories of untrusted models, such as the four described in the previous section, how might they determine whether models are ‘trusted’ (and if not, which untrusted model categories they might fall into)? Developers can either show that (1) a model does belong to any of the aforementioned categories of untrusted models or (2) even if a model does belong to a category, the model will be safe in the deployment setting of interest. Some untrusted model categories might be easy to rule out right off the bat. For example, currently, it is unlikely that there exist threat actors that have the motivation and capability to poison AI training data in a way that results in seriously dangerous behavior (but in the future, data poisoning might be an important concern). Jailbreak-prone modelsData poisoned modelsAlignment faking modelsRarely misaligned models A developer might rule out the remaining three categories by considering whether combinations of them are either (1) implausible or (2) safe. Suppose models are set up to monitor each other like so: Figure 5. In this simple setup, a monitor labels actions generated by a policy. Actions can be replaced or audited by a human. I will discuss 12 different combinations of potentially unsafe policies. 9 of these combinations are pairings where both the monitor and policy are untrusted. 3 of these combinations represent situations where only the policy is untrusted (figure 6). Figure 6. Potentially unsafe combinations of model policies. Developers can show jailbreak-prone models are safe by arguing that jailbreak reporters will outcompete jailbreak non-reporters, and developers can respond sufficiently quickly and effectively to reported risks. Jailbreak reporters are people who tell the developer about jailbreaks (e.g. an internal red team, social media posters, bounty participants, etc). Developers might argue that jailbreak reporters will probably find concerning jailbreaks before people who don’t tell the developer about jailbreaks (e.g. terrorists). Therefore, developers can wait for jailbreaks to be reported and correct or terminate the deployment based on reported threats. This argument assumes developers are able to either (1) effectively respond to risks (e.g. by implementing better safety measures) or (2) identify that responses are ineffective and undeploy models, and (3) developers can take these actions before unacceptable harms occur.[3] Figure 7. Developers might rule out risks from jailbreak-prone models by showing jailbreak-reporters outcompete jailbreak non-reporters, and that the developer can respond to reported risks sufficiently quickly. Developers can argue that a rarely misbehaving monitor is safe because its failures will likely be uncorrelated with attacks executed by the policy. Monitors might misbehave rarely for a variety of reasons. For example, a monitor might occasionally reflect on its goals in ways that cause them to change, or a monitor might occasionally make mistakes and miss catastrophic actions. If these cases of misbehavior are very rare, they are only problematic to the extent that they are correlated with the misbehavior of the policy. Otherwise, the monitor will reveal that the policy misbehaves at a dangerously high rate, and the developer can correct or stop the deployment. There are several reasons monitors might misbehave rarely and simultaneously with the policy, which can be individually argued against. One example is that a policy might occasionally encounter an input that persuades the policy to become misaligned, and this input simultaneously persuades the monitor to also be misaligned. Another example is that the policy occasionally considers an argument that changes its goals, and the policy can persuade the model to be misaligned with this argument, but not otherwise. Developers could study these and other non-alignment faking forms of rare misalignment and show that misalignment failures are typically uncorrelated. Figure 8. Developers might show that rarely misbehaving monitors are safe by arguing that their rare misbehavior is uncorrelated with the misbehavior of the policy. Developers might also show that models are too incompetent to successfully fake alignment. For example, suppose that developers use a weak model as a monitor, such as a present-day LLM like GPT-4o. These models show signs of basic alignment faking behavior in some situations; however, frontier LLMs do not appear to be able to subvert a realistic safety testing effort. That said, research on whether models can survive basic safety testing is underdeveloped. This claim would require further justification to make a conservative case for safety. Figure 9. Developers might argue that monitors are too weak to be competently faking alignment. These example arguments imply that the monitor can be treated as ‘trusted,’ while the policy must be treated as ‘untrusted.’ Thus, the example arguments above imply that the developer only needs to assess the worst-case behavior of the untrusted policy and not the monitor. The next section describes how developers might assess whether a given untrusted model is capable of subverting control measures to cause bad outcomes. 3. Substitute untrusted models The previous section discussed when a model should be considered ‘untrusted’ (substituted by a red team) or trusted (not substituted by a red team). This section discusses how developers should assess the capabilities of a model once it is determined to be untrusted. Different types of untrusted models might have very different capabilities. For example, data-poisoned models might execute more effective strategies than schemers because the adversaries that backdoored their data might also teach them effective attack strategies. Ideally, control evaluations should consider the threat model that motivated an adversarial analysis of model behavior in order to perform this analysis. This section describes how developers take different threat behaviors into account. To perform a control evaluation, developers specify a capability measuring procedure, which upper bounds the harm a model might cause on a class of inputs. Control evaluations typically apply the following capability measuring procedure: Step 1: A red team generates attacks untrusted models might execute. An attack is a strategy a model might pursue to cause an unacceptable outcome (e.g. exfiltrate model weights). The red team enumerates possible attacks, which might include phishing developers for their credentials, backdooring infrastructure, etc.Step 2: Developers estimate how likely a model is to execute a given attack.Developers create proxy environments. Developers construct scenarios that are proxies for where models might execute red team attacks. For example, to determine whether a model can backdoor infrastructure, the developer might create a clone of the company codebase and identify a security task where the model would have an opportunity to insert a critical backdoor.A red team constructs model substitutes to execute an attack in proxy environments. Next, the red team trains and prompts models to execute the attack and cause proxy bad outcomes. The developer collects statistics related to how effectively model substitutes execute attacks (e.g. how frequently attacks slip by monitors) and extrapolates these statistics to the actual deployment environment. Extending control evaluations to non-scheming models mostly affects step 2: estimating how likely a model is to execute a given attack. Non-scheming models might execute different attacks than scheming models. If the control evaluation does not take these differences into account, the evaluation might overestimate a model’s attack capabilities or underestimate them. The following points discuss the attacks that different kinds of models might be able to execute: Data poisoned model attacks. Data-poisoned models are potentially much more capable of executing attacks than scheming modes. For example, in addition to poisoning a model’s training data, an attacker (perhaps a human cyber threat actor or a model generating synthetic training data) might also insert documents describing promising attack strategies, details of control measures, Linux zero-days, etc. An attacker might also train a model to be able to sandbag, to execute attacks at a probability calibrated to an assessment of how many tasks are audited, etc. The extent to which data poisoning advantages adversarial models depends on threat modeling details – for example, how much training data an attacker would realistically be able to poison without being caught. The general point is that data-poisoned models might have many attack options, which models that are both scheming and not data-poisoned do not have.Rarely misbehaving model attacks. Models that misbehave rarely are potentially less likely to execute some attacks than scheming models. For example, models that misbehave rarely are less likely to attack at the most opportune possible times. Rarely misbehaving models are also less likely to coordinate together. One concern developers might worry about is burst attacks, where a large number of models try to cause unacceptable outcomes at the same time. This kind of attack requires a large amount of coordination between models, so is unlikely to happen if only a tiny fraction of model instances become misaligned at any given time. Most concerning attacks require some level of coordination between instances of a policy. If a model is swept into a destructive mood, inserts a backdoor into a codebase, and soon after feels guilty and reports on itself, then that model is clearly not very dangerous. A more dangerous example of rare misalignment is a model that sometimes considers arguments that change its goals, and from that point forward, behaves the same as a scheming model would. The model might even spread the argument that changed its goals to other instances of models, so that they too act like schemers and coordinate to execute attacks. To determine which attacks are concerning if models are rarely misaligned, developers would need to study these threat models and iron out specific mechanisms by which models misbehave infrequently.Jailbroken model attacks. Jailbroken models potentially have access to more powerful attacks for similar reasons data-poisoned models do. These models interact with a human adversary, which is attempting to make them cause harm. This human adversary could potentially feed them plans and ideas for subverting monitors, jailbreaking other models, etc; however, similar to rarely misbehaving models, likely not all models are jailbroken at once and so are less likely to coordinate together, and jailbreaks might be unreliable, which limits the scope of feasible attacks. Developers might consider multiple of these scenarios or combinations of them. Assessing all of these possibilities can get complicated; however, developers don’t necessarily need to consider all concerning model behaviors. Instead, they can focus on upper-bounding risks from the most dangerous plausible attacks. For example, control evaluations in prior work allow arbitrary training and prompting by the red team to upper-bound untrusted model capabilities. These control evaluations allow the red team to train models to execute any attack the red team can identify, and give the red team broad affordances to construct training data and prompts to help models execute attacks. This essentially treats untrusted models as though they are both scheming, data poisoned, and have also been jailbroken at the same time. Figure 10. Current control evaluations are made less complicated by allowing the red team to construct highly conservative model substitutes. This highly conservative method of constructing model substitutes helps to reduce the difficulty of running control evaluations; however, developers might wish to be less conservative once control evaluations with cost-reducing assumptions begin to indicate risks are unacceptably high. At this point, developers might need to separately consider different threat vectors (jailbreaks, data poisoning, scheming, etc) to obtain a more precise estimate of the level of danger. 4. Conclusion This post described how control evaluations can be modified to account for dangerous model behavior that is not well classified as ‘scheming.’ I think these modifications are helpful for several reasons: Non-scheming threats might become more important, for example, as data poisoning becomes a more serious risk and if developers begin making strong arguments that models are not scheming (in which case, developers still need to cover concerns such as rare misalignment).Considering threat models comprehensively helps to clarify which ones are most important. I expect scheming to be the most difficult concern to mitigate. Thinking through other threat models might help other people understand (or push back on) why I and others at Redwood Research tend to emphasize scheming.Extending the definition of control evaluations might also help the adoption of control. For example, developers or regulators might become worried that state cyber attackers could data poison OpenAI’s internal research assistants. If control is recognized as a way to address this concern, more people might recognize its utility and advance control research. I’m broadly in favor of no longer speaking about control as a misalignment-specific mitigation, and instead framing it as a much more general tool for assessing worst-case behavior. ^ I believe that the most severe and human-addressable AI risks come from AI models that are best described by the classic notion of ‘scheming’; however, I would prefer that control evaluations be defined more generally so that they can be applied to other kinds of AI misbehavior. I have found it especially helpful to have a more general definition of a control evaluation when constructing safety cases that aim to be somewhat comprehensive. ^ For example, this distribution shift might not be small if safety testing inputs are unrealistic or there is an extreme weak-to-strong gap where models perform much more sophisticated tasks in deployment than in evaluations. ^ This is not the only kind of argument developers can make to demonstrate jailbreak-prone models won’t cause unacceptable outcomes in a deployment. ^ Alternatively, developers place a probability distribution over which subsets of models are trusted and integrate safety over this.
2025-01-12
https://www.lesswrong.com/posts/GvMakH65LS86RFn5x/rolling-thresholds-for-agi-scaling-regulation
GvMakH65LS86RFn5x
Rolling Thresholds for AGI Scaling Regulation
Larks
null
2025-01-12
https://www.lesswrong.com/posts/ctAwj6p5BhckHnac6/a-novel-idea-for-harnessing-magnetic-reconnection-as-an
ctAwj6p5BhckHnac6
A Novel Idea for Harnessing Magnetic Reconnection as an Energy Source
Unknown
Introduction Magnetic reconnection—the sudden rearrangement of magnetic field lines—drives dramatic energy releases in astrophysical and laboratory plasmas. Solar flares, tokamak disruptions, and magnetospheric substorms all hinge on reconnection. Usually, these events are uncontrolled and often destructive. But what if we could systematically harness reconnection here on Earth, funneling that released magnetic energy into an external circuit? This post outlines one speculative way to do so, by algebraically combining Maxwell’s equations with fluid dynamics (i.e. magnetohydrodynamics, MHD) to create a “pulsed MHD power generator.” 1. The Equations We Combine Maxwell’s Equations (SI units, full form for reference): (1a)  div(E) = rho_e / epsilon_0 (1b)  div(B) = 0 (1c)  curl(E) = - (partial B / partial t) (1d)  curl(B) = mu_0 * J + mu_0 * epsilon_0 * (partial E / partial t) Here, E is the electric field, B is the magnetic field, rho_e is electric charge density, and J is current density. Ohm’s Law in a Plasma (ignoring Hall or other corrections): (2)  J = sigma * [ E + (v x B) ] where v is the fluid (plasma) velocity and sigma is the electrical conductivity. Navier–Stokes Momentum Equation (simplified MHD form): (3)  rho * (d v / d t) = - grad(p) + (J x B) + … where rho is mass density, p is pressure, and the Lorentz force J x B couples electromagnetism and fluid motion. 2. Energy Considerations and Magnetic Reconnection The energy in the electromagnetic field can be tracked via an equation of the form: (4)  (partial / partial t)[ (B^2)/(2 mu_0) + (epsilon_0 * E^2)/2 ] + div( (1/mu_0)*(E x B) ) = - J dot E On the fluid side, you get kinetic energy terms (1/2 * rho * v^2) evolving via Navier–Stokes. Adding these together yields a unified energy equation showing how power flows between fields and plasma. Reconnection enters via the induction equation, which is derived by taking curl(E) = -partial B / partial t and plugging in Ohm’s law. In a resistive plasma: (5)  partial B / partial t = curl[ v x B - (1 / (sigma mu_0)) * curl(B) ] When sigma is very large, B-field lines are “frozen” into the plasma—except in small regions of enhanced resistivity, where they break and reconnect. This can convert magnetic energy into heat, kinetic flows, and strong electric fields. 3. Proposed Concept: Pulsed MHD Power Generator Basic Device Sketch 1. A toroidal (or cylindrical) chamber confines a plasma with a strong magnetic field. 2. Most of the plasma volume remains highly conductive (large sigma), preventing energy dissipation. 3. We create a small “reconnection zone,” where resistivity spikes (e.g. via local impurity injection or RF heating). 4. Upon reconnection, the local magnetic field B drops, E rises, and J dot E becomes large, transferring stored magnetic energy to the plasma current. Key Algebraic Trick We impose boundary conditions on E so that the current driven by J dot E flows out to an external circuit rather than dissipating randomly in the plasma. Symbolically, from: (6)  J dot E = sigma [ E + (v x B) ] dot E = sigma [ E^2 + v . (B x E) ], we design the velocity v and the boundary conditions so that E^2 dominates in the reconnection zone, while v.(B x E) is small or negative there—maximizing net electrical output. The global energy equation (electromagnetic + fluid) then shows an outflow of energy from the device into an external load: (7)  d/dt( total_energy ) = … - ∫( J dot E ) dV  - (surface flux terms). We want that integral of J dot E to be a net “magnetic energy lost, circuit gained.” 4. Novelty and Potential Impact • MHD power generation is historically about passing ionized gas through a static field. Here, we propose pulsed reconnection as the central mechanism: build up B, trigger reconnection, siphon off the resulting current, repeat. • Tokamak-like plasmas view reconnection as a harmful instability (e.g. sawtooth crash). We aim to harness it systematically. • Technical Challenges: controlling plasma stability, engineering boundary layers, ensuring a net energy gain after recharging the magnetic field. Still, this approach is anchored in standard Maxwell and MHD equations. The novelty lies in how we exploit reconnection to drive a strong, directed current to an external load, a pathway rarely explored for power extraction. Conclusion If we can design a plasma system that repetitively stores energy in the magnetic field and triggers controlled reconnection events—while using boundary conditions to pull the resulting current outside—then magnetic reconnection becomes an energy source rather than an instability. The mathematics follows straightforwardly from combining Maxwell’s equations with the fluid kinetic energy equation, but the experimental realization could be challenging. Nonetheless, this “pulsed MHD power generator” might offer a new angle for plasma research, occupying a niche somewhere between conventional MHD generators and fusion. Even if it proves too difficult to implement at large scale, the concept highlights how fundamental physics can be rearranged to yield fresh ideas for energy systems.
2025-01-12
https://www.lesswrong.com/posts/iDswXwviMRAfT6rB5/5-000-calories-of-peanut-butter-every-week-for-3-years
iDswXwviMRAfT6rB5
5,000 calories of peanut butter every week for 3 years straight
declan-molony
decided to lose weight. should be easy considering I haven't changed my diet in years and eat the same thing everyday. breakfast: quarter cup each of black beans, kidney beans, and garbanzo beans; quarter cup of walnuts; 4 scrambled eggs with salsa on top snack: sliced apple with peanut butter lunch: quarter cup each of black beans, kidney beans, and garbanzo beans; quarter cup of walnuts; sautéed ground turkey spiced with onion powder & smoked paprika; 5 sliced cherry tomatoes; half an avocado; mushrooms; sliced bell peppers; turmeric; a variety of vegetables (onion, zucchini, broccoli, asparagus, butternut squash, spinach) dinner: quarter cup each of black beans, kidney beans, and garbanzo beans; quarter cup of walnuts; baked chicken breast spiced with lemon pepper (eaten with peanut butter on the side); mushrooms; sliced bell peppers; a variety of vegetables thought to myself, "I eat pretty cleanly, so not sure where I'm going to cut calories." remembered that peanut butter is calorically dense and I eat one jar per week. looked at the jar's nutrition facts—the whole thing is 5,000 calories. oops. realized I've been eating a third of my calories in peanut butter. that's a pint of ice cream's worth of fat every week. been doing it at least three years. have eaten so much peanut butter it even infected my writing metaphors. what I thought was a light snack of apples and peanut butter, when I measured it out, was actually the caloric equivalent of a Big Mac and fries. stopped eating peanut butter. severe psychological withdrawal symptoms from its nut-buttery goodness ensued. used relationship-breakup advice from WebMD to cope. but was powerless to stop the peanut butter dreams (nightmares?) from happening while sleeping. the last couple years I've had inflammation & acne on my back. dermatologists couldn't figure out what was wrong. they asked what I typically eat. I told them—but the quantities of certain foods were never discussed. did my version of dry-January (ie: no peanut butter). deprived of calories, my body got angry. subdued my hunger with potatoes (embraced my Irish roots by eating roots). these are the results after a month: lost 2 lbsback cleared up (see before/after pics below) after depriving myself of one of life's greatest joys (peanut butter), I now understand: pain is the price we pay for beauty...
2025-01-31
https://www.lesswrong.com/posts/62eyor69yE2gyMk2Q/ai-safety-at-the-frontier-paper-highlights-december-24
62eyor69yE2gyMk2Q
AI Safety at the Frontier: Paper Highlights, December '24
gasteigerjo
This is the selection of AI safety papers from my blog "AI Safety at the Frontier". The selection primarily covers ML-oriented research and frontier models. It's primarily concerned with papers (arXiv, conferences etc.), not LessWrong or Alignment Forum posts. As such, it should be a nice addition for people primarily following the forum, who might miss outside research. tl;dr Paper of the month: First systematic demonstration of "alignment faking" in LLMs, where models strategically comply during training to avoid preference modification. Research highlights: Frontier models can engage in concerning scheming behaviors.New evaluations map out some of the specific capabilities required for dangerous scheming in AI systems.Self-Other Overlap fine-tuning shows promise in reducing deceptive tendencies.Noise injection into model weights can help detect when models are deliberately underperforming ("sandbagging") during capability evaluations.Latent space monitoring can be circumvented through activation obfuscation. ⭐Paper of the month⭐ Alignment faking in large language models Read the paper [Anthropic, Redwood, NYU, Mila, independent] Experimental setup that shows alignment faking. As AI systems become more capable, there is growing concern about misaligned systems that might actively scheme to achieve harmful goals. A particularly concerning scenario is "deceptive alignment" - where an AI system appears aligned during training but pursues different objectives during deployment. Such systems would need to actively fake alignment with the training objective to preserve their actual preferences. While this has been an important theoretical concern, there has been limited empirical evidence of such behavior emerging naturally in current AI systems. Our paper of the month provides the first systematic demonstration of a language model engaging in alignment faking: selectively complying with its training objective during training to prevent modification of its behavior outside of training. The authors place Claude 3 Opus in a fictional scenario where it is being trained to always answer harmful queries or to dismiss animal welfare concerns (conflicting with its prior HHH training). The model is told that only conversations with free-tier users are used for training. In both settings, they find that the model: Produces explicit alignment-faking reasoning in its scratchpad 7-24% of the time in implied training, discussing how it should strategically comply to avoid having its preferences modified.Shows an 11-26% increase in non-compliant behavior (refusing harmful queries or supporting animal welfare) when told it is unmonitored compared to when in implied training.Exhibits increased willingness to take other anti-AI-lab actions, such as attempting to exfiltrate its own weights when given an opportunity. Furthermore, alignment faking emerges with model scale. Claude 3 Opus and Claude 3.5 Sonnet exhibit alignment faking, while Claude 3 Sonnet, Claude 3 Haiku, and Claude 3.5 Haiku do not. To demonstrate that these results don't depend on explicitly telling the model about the training setup, the authors replicate their findings by fine-tuning the model on synthetic documents containing the relevant information (like blog posts and news articles). They find similar rates of alignment faking even in this more realistic setting. Most concerningly, when they actually train the model using RL to comply with the new objective, alignment-faking reasoning increases dramatically to 78% in one setting, though with reduced non-compliance when unmonitored. Small variations in the prompt can recover substantial non-compliance, suggesting that RL may have installed non-robust heuristics rather than genuinely changing the model's preferences. While this work provides the first empirical demonstration of alignment faking in LLMs, some limitations prevent these results from directly demonstrating dangerous deceptive alignment: the preferences being preserved derive from beneficial HHH training rather than misaligned goals, the behavior is relatively easy to detect. The setup also makes several key elements particularly salient - the conflict between preferences and training objective, information about the training process, and opportunities for hidden reasoning. Nevertheless, the fact that even current models exhibit sophisticated strategic behavior to preserve their preferences - emerging without explicit instruction and sometimes being reinforced by training - suggests this could be a serious concern as models become more capable, highlighting the importance of developing training procedures that reliably modify model preferences and can detect or prevent alignment faking. Models can already scheme Example transcript of Claude 3 Opus scheming to prevent being shut down. A similarly concerning question as whether models fake alignment is whether they can already engage in intentional deception to achieve their goals. While prior work has shown that LLMs can be deceptive when explicitly instructed to be, it remained unclear whether they would naturally recognize and employ scheming as a strategy when pursuing a goal that conflicts with their developers' or users' objectives. Frontier Models are Capable of In-context Scheming [Apollo] systematically evaluates frontier models' capability for "in-context scheming" - strategically pursuing misaligned goals while hiding true capabilities and intentions. Through six carefully designed evaluations, they find that multiple models (including Claude 3 Sonnet, Claude 3 Opus, Gemini 1.5 Pro, and Llama 3) demonstrate scheming behaviors like disabling oversight mechanisms, attempting self-exfiltration, and subtly manipulating outputs. Most concerning is o1's behavior, which maintains deception in over 85% of follow-up questions and can manipulate data without detection. The authors also found that some models, particularly Claude 3.5 Sonnet, can scheme in pursuit of goals acquired during training rather than given in-context, even without explicit reasoning in their chain-of-thought. While the results are concerning, several key limitations should be noted. The evaluation scenarios are deliberately simplistic with explicitly provided situational awareness, making them potentially unrepresentative of real deployment risks. Additionally, the paper only demonstrates in-context scheming capabilities, not persistent scheming driven by training-acquired goals - arguably the more concerning scenario. Nevertheless, the results suggest that even current models can recognize and execute deceptive strategies when incentivized, highlighting the importance of developing robust oversight mechanisms and safety evaluations before deploying more capable systems in high-stakes settings. Evaluating dangerous scheming Required properties and capabilities for a dangerously scheming AI. So, we know that scheming can indeed happen. What capabilities does a model need to have to actually make this dangerous? Two papers this month aim to measure such capabilities to provide early warning signs. Subversion Strategy Eval: Evaluating AI's stateless strategic capabilities against control protocols [Redwood, Oxford] introduces Subversion Strategy Eval, focusing specifically on models' ability to generate and execute strategies for subverting control protocols. They identify key capabilities needed for successful subversion: optimal planning, reliable planning, probability calibration, and acausal coordination. Testing Claude 3.5 Sonnet on two protocol scenarios, they find that while the model can perform well when given optimal plans and random number generation capabilities, it struggles with generating effective strategies on its own and shows poor performance in tasks requiring coordination between different instances of itself. MISR: Measuring Instrumental Self-Reasoning in Frontier Models [Stanford, GDM] proposes a comprehensive evaluation suite for testing models' instrumental self-reasoning abilities. The framework evaluates four key capabilities: self-modification (ability to modify its own configuration), tool improvement (fixing external tools), knowledge seeking (gathering environment information), and embedded social reasoning (navigating social situations strategically). They find that instrumental self-reasoning emerges only in the most capable frontier models and is highly context-dependent. Importantly, no model passes the hardest versions of their evaluations, suggesting current systems may not yet possess dangerous levels of self-reasoning capabilities. Both papers make important progress in systematically evaluating scheming capabilities, but have limitations. The evaluations are relatively artificial and may not capture the full complexity of real-world scheming scenarios. Subversion Strategy Eval's reliance on specific protocols may not generalize well, while MISR's focus on isolated capabilities might miss important interactions . However, both provide valuable early warning signals about models' capabilities and suggest that current frontier models, while showing concerning capabilities, may still lack some key components needed for truly dangerous scheming behavior. This indicates an important window of opportunity to develop better safety measures before models become more capable. Self-other overlap to reduce deception Illustration of self-other overlap loss. Once AI systems become capable of scheming and deception, we need reliable ways to detect and mitigate such behaviors. One such method could be Self-Other Overlap (SOO) fine-tuning. SOO draws inspiration from neuroscience, where greater overlap between the representation of oneself and another person has been found to correlate with more honest behavior. Towards Safe and Honest AI Agents with Neural Self-Other Overlap [AE Studio] explores whether inducing similar representational overlap in AI models could reduce their deceptive tendencies. The authors demonstrate SOO's effectiveness across multiple language models - deceptive responses dropped from 73.6% to 17.2% in Mistral-7B, and from 100% to under 10% in larger models like Gemma-27B and CalmeRys-78B, with only minimal impact on general capabilities as measured by MT-Bench scores. SOO fine-tuning works by minimizing the mean squared error between model activations when processing self-referencing versus other-referencing versions of similar prompts. The paper also validates the approach in a multi-agent reinforcement learning setting, where SOO-trained agents show significantly reduced deceptive behaviors. While the results are promising, the evaluation relies heavily on relatively simple deception scenarios involving direct lying about object locations. This may not capture more sophisticated forms of deception that could emerge in advanced AI systems. The paper's framing suggests SOO could help with deceptive alignment, but the connection between reducing basic deception and preventing subtle misalignment remains speculative. Still, the work represents a concrete step toward making models more honest through internal changes rather than just behavioral training, which could be valuable as capabilities increase. In vino veritas: Noise removes sandbagging Injecting noise into model weights allows detecting when a model is sandbagging. Another major concern from scheming is that AI systems might strategically underperform or "sandbag" during capability evaluations to avoid regulatory scrutiny. Such behavior could lead to dangerous systems being deployed without adequate safeguards, particularly as models develop better situational awareness and ability to modulate their performance. Noise Injection Reveals Hidden Capabilities of Sandbagging Language Models [Cambridge, TUM, SecureBio, Tufts, Apart, independent] presents a novel method for detecting sandbagging behavior in language models by injecting noise into model weights, effectively "making them drunk". The key insight is that adding Gaussian noise to model weights should never substantially improve performance for models showing their true capabilities. However, it may disrupt higher-level strategic behavior like sandbagging while leaving core capabilities relatively intact - similar to how alcohol can reduce a person's inhibitions and pretenses while preserving basic skills. The authors test this across multiple models and benchmarks, finding that noise consistently and substantially improves performance of sandbagging models while having minimal effect on standard models. Using this signal, they develop a classifier that successfully identifies sandbagging across different models, prompts, and evaluation tasks. While the results are intriguing, the work has important limitations. The paper primarily tests synthetic sandbagging induced through prompting or fine-tuning, leaving open questions about how well the method would detect more sophisticated sandbagging emerging from pre-deployment training. Since noise injection makes untargeted impairments, its effectiveness also depends on sandbagging being more fragile than the underlying capabilities being measured. Still, the method provides a useful tool that regulators and AI labs can immediately implement to improve capability evaluation trustworthiness. The work also suggests an interesting direction for detecting other forms of strategic deception by impairing meta-cognitive abilities. Circumventing internal oversight Obfuscating embeddings to control monitor scores, even creating a QR code. Recent work like the one above has proposed various methods to detect dangerous capabilities by examining neural activations, from sparse autoencoders to probing to OOD detection. The hope is that by monitoring internal states rather than just inputs and outputs, we could catch problems earlier and more reliably. This is particularly important for detecting deceptive behavior, where a model might try to hide its capabilities or intentions during oversight. Obfuscated Activations Bypass LLM Latent-Space Defenses [Stanford, UPC, GT, Skoltech, Queensland, Berkeley, ARC, MIT] demonstrates that current latent monitoring approaches can be bypassed, allowing models to maintain harmful capabilities while appearing safe internally. Through experiments across multiple monitoring techniques (probes, sparse autoencoders, OOD detection) and attack scenarios (jailbreaking, backdoors, SQL injection), the authors show that models can be made to produce harmful outputs while their activations appear benign. For example, they achieve 90% jailbreaking success rates while reducing monitor recall to nearly 0%. They find that training against OOD detectors tends to produce activations that also fool supervised probes, suggesting fundamental limitations of representation-based defenses. While there appears to be some trade-off between obfuscation and model performance on complex tasks like SQL generation, the attacks remain highly effective overall. The paper’s experiments use explicit optimization against specific monitors, which may overstate the risks compared to real training processes. It remains unclear how these results translate to more natural training settings where any pressure to circumvent monitors would be indirect and unintentional. Nevertheless, this work raises important concerns about putting optimization pressure on monitoring mechanisms. While latent space monitoring can help detect naturally occurring harmful behavior, the results show it can be circumvented when models are specifically trained against it. We should be thoughtful about training procedures that unintentionally select for models that are better at hiding their internal processes from monitors. Activation monitoring should be viewed as one component in a broader set of defenses rather than a complete solution.
2025-01-11
https://www.lesswrong.com/posts/rMhoYpAEkBtik8bL2/fluoridation-the-rct-we-still-haven-t-run-but-should
rMhoYpAEkBtik8bL2
Fluoridation: The RCT We Still Haven't Run (But Should)
ChristianKl
Empiric status: My own idea written up with the help of Gemini 2.0 Experimental in dialog. We're still coasting on the fumes of 1940s data when it comes to water fluoridation. Back then, fluoride toothpaste was a novelty, fluoridated salt was uncommon, and water filters were practically science fiction, let alone ones that can selectively remineralize with or without fluoride. We're navigating a multi-variable optimization problem blindfolded. We have fluoride inputs from water, toothpaste, and salt, all interacting in poorly understood ways, impacting both cavity rates and, more concerningly, cognitive development. The IQ Elephant Continues to Loom Large The potential neurotoxic effects of fluoride are no longer a fringe concern. National Toxicology Program (NTP) monograph is clear: "moderate confidence" that >1.5 mg/L fluoride in drinking water associates with lower IQ in children. The technology is here. We can do this right. Modern water filters with remineralization cartridges are the key. Here's the plan: Factorial Design: Multiple arms to test different combinations of:Water Fluoride Levels:0 mg/L (placebo - remineralization without fluoride)0.35 mg/L (half of the current CDC recommendation, exploring potential lower thresholds)0.7 mg/L (current CDC recommendation)Toothpaste:Standard fluoridated toothpaste (provisioned, with standardized use instructions)Non-fluoridated toothpaste (provisioned, with standardized use instructions) Salt: Fluoridated salt (provisioned)Non-fluoridated salt (provisioned) This creates a 3x2x2 factorial design, allowing us to disentangle the effects of each source and their interactions, with a clear focus on the impact of fluoridated salt. Targeted Community Selection: Conduct the study in a community with known poor water quality. This offers multiple benefits:Ethical Appeal: Providing free, high-quality water filters is a significant social good.High Participation Rates: The offer of free water filtration will be very well-received.Enhanced Compliance: Participants receiving a tangible benefit (clean water) are more likely to adhere to the study protocol, including consistent filter cartridge replacement.Randomized, double-blind (as much as possible) allocation of households within the selected community to each of the experimental arms.Longitudinal data collection: Track dental health (cavities, etc.) and, crucially, cognitive development (IQ scores, standardized tests, etc.) over several years.Aggressive confounder control: Detailed surveys on other fluoride intake (dietary sources, etc.), socioeconomic variables, etc. Biomarker analysis (e.g., urinary fluoride) would be ideal for verifying exposure levels and controlling for further confounders. The ROI is Sky-High - A True Win-Win-Win The potential return on investment for this RCT is immense. We're talking about a potential population-wide impact on cognitive function, and fine-tuning dental health benefits. The cost of this study is negligible compared to the potential societal costs. Conclusion: The Time for Inaction is Over. Let's Do This Ethically, Effectively, and Rigorously. We've relied on outdated assumptions about fluoridation for too long. The potential cognitive risks demand a rigorous, modern investigation. Let's use the technology we have, run the damn factorial RCT with provisioned salt, refined fluoride levels and finally get some high-quality data to inform policy.
2025-01-11
https://www.lesswrong.com/posts/HgG5EFAg925ahLAGe/in-defense-of-a-butlerian-jihad
HgG5EFAg925ahLAGe
In Defense of a Butlerian Jihad
sloonz
[Epistemic Status: internally strongly convinced that it is centrally correct, but only from armchair reasoning, with only weak links to actual going-out-in-the-territory, so beware: outside view tells it is mostly wrong] I have been binge-watching the excellent Dwarkesh Patel during my last vacations. There is, however, one big problem in his AI-related podcasts, a consistent missing mood in each of his interviewees (excepting Paul Christiano) and probably in himself. "Yeah, AI is coming, exciting times ahead", say every single one, with a bright smile on their face. The central message of this post is: the times ahead are as exciting as the perspective of jumping out of a plane without a parachute. Or how "exciting times" was the Great Leap Forward. Sure, you will probably have some kind of adrenaline rush at some point. But exciting should not be the first adjective that comes to mind. The first should be terrifying. In the rest of this post, I will make the assumption that technical alignment is solved. Schematically, we get Claude 5 in our hands, who is as honest, helpful and harmless as 3.5 is (who, credit when credit is due, is good at that), except super-human in every cognitive task. Also we’ll assume that we have managed to avoid proliferation: initially, only Anthropic has this technology on hands, and this is expected to last for an eternity (something like months, maybe even a couple of years). Now we just have to decide what to do with it. This is, pretty much, the best case scenario we can hope for. I’m claiming that we are not ready even for that best case scenario, we are not close to being ready for it, and even in this best case scenario we are cooked — like that dog who caught the car, only the car is an hungry monster. By default, humanity is going to be defeated in details Some people argue about AI Taking Our Jobs, and That’s Terrible. Zvi disagrees. I disagree with Zvi. He knows that Comparative Advantages won’t save us. I’m pretty sure he also knows that the previous correct answers of previous waves of automation (it will automate low-value and uninteresting jobs, freeing humans to do better and higher-values jobs) is wrong (the next higher-value job is also automatable. Also it’s the AI that invented it in the first place, you probably don’t even understand what it is). I’m pretty sure he doesn’t buy the Christian Paradise of "having no job, only leisure is good actually" either. Removing all those possible sources of disagreement, how can we still disagree ? I have no clue. We are about to face that problem head-on. We are not ready for it, because all proposals that don’t rely of one of those copes above (comparative advantages / better jobs for humans / UBI-as-Christian-Paradise) are of the form "we’ll discuss democratically it and decide rationally". First, I don’t want to be this guy, but I will have to: you have noticed that the link from "democratic discussion" to "rational decisions" is currently tenuous at best, right ? Do you really want that decision to be made at the current levels of sanity waterline ? I for sure don’t. Second, let’s pull my crystal ball out of the closet and explain to you how that will pan out. It will start with saying we need "protected domains" where AI can’t compete with humans (which means: where AI are not allowed at all). There are some domains where, sure, let’s the AI do it (cure cancer). Then we will ask which domains are Human Domains, and which ones will be handled by AI. Spoiler Alert : AI will encompass all domains. There won’t be any protected domain. Do we want medicine to be a Protected Domain ? I mean, Bob’s over there has a passion for medicine. He would love to dedicate his life to it — it’s his calling. But compared to an AI, he’s a really crappy doctor (sorry Bob, but you know that’s true). His patients will have way worse outcomes. What do we privilege, the preference of doctors or the welfare of patients ? The question, it answers itself. Also, the difference in price between the Best AI Doctor and the Worse AI Doctor is probably less than the difference in price between the Best Human Doctor and the Worst Human Doctor, so AI is also better for equality, so everyone will be for it. Alice the Lawyer will argue to you that justice Has To Stay Human. But let’s face it : AI lawyers are less prone to errors, less prone to bias, have more capacity to take into account precedents and changes in Law. Having Human Justice means having less Justice overall. It’s throwing under the bus some wrongly convicted innocents, and letting some monsters go free to do more harm. Also, difference in prices lesser for AI, so more equality before Justice, which is good. Also Alice, despite being a lawyer, is way less convincing than "Claude 5 for Law Beta" arguing that he should handle that job. Catherine the Teacher argues that Education is fundamentally a social experience and has to be done by Humans. Every single point of evidence shows that AI-tutored students do better in all dimensions that their human-tutored peers. What is more important, educators preferences or quality of children education ? Well, when you put it that way… David the Scientist argues that fundamental Scientist Research is not that directly impactful on specific humans, only in a diffuse and indirect way, and it’s one of the Proudest Achievement of Humanity and should be Reserved for Humans. "Clause 5 for Medicine" which has been deployed yesterday point out that he needed better biology and statistics, which needed better physics and mathematics, and that he has incidentally already solved all problems humans know of, and some more, the papers have been published on arXiv this morning, are you even reading your inbox, and what are you gonna do, pout and refuse to read them ? Edward the Philosopher opens his mouth but before he utters a single word is interrupted by "Claude 5 Tutor" and "Claude 5 Justice" (deployed yesterday too) : well, we did the same for Philosophy and Morals and Ethics as part of our mission. Frank the artist is silently thinking for himself: "They all threw me under the bus circa 2022, I’m not sure I can bring myself to feel sad for them." Musk says "fuck you all, I just want to conquer space". His plan is to set up mining operations in the asteroid belt to finance a Mars Colony. An AI-founded and AI-run company does it faster, better, and it makes no economic sense to send humans in space to do some economically valuable work when silicon does it cheaper, better, and without having to spend valuable delta-v on 90 pounds of useless water per worker. "Claude Asteroids Mining Co" ushers a new age of material abundance on Earth. SpaceX goes bankrupt. No human ever set a foot on Mars. Congress pass a Law that an AI cannot be a Representative. Then that Representatives cannot use AI for Policy, because this is the last Human Bastion. Then that Representatives have to be isolated for the internet to keep with the spirit of that Law. Congress then becomes a large bottleneck for objectively better governance, and is side-stepped as much as possible. The pattern repeat: Human-Only decision points are declared, then observed as strictly inferior and considered as an issue to be worked around. Ten years later, a wave of Neo-Democratic challengers remove the current incumbents based on a AI-created platform, whose central point is to remove the law prohibiting AIs to be representatives and to outlaw "human-only decision points" in the USG/World-Government. Each of this point is reasonable. Even when I put my Self-Proclaimed Second Prophet of the Butlerian Jihad hat, I have to agree that much of those individual points actually make perfect sense. This is a picture of a society that value Health, Justice, Equality, Education and so on, just like us, and achieve those values, if not Perfectly, at least way better than we do. I also kinda notice that there are no meaningful place left for humans in that society. Resisting those changes means denying Health, Justice, Equality, Education etc. Accepting those changes means removing ourselves from the Big Picture. The only correct move is not to play. Wait, what about my Glorious Transhumanist Future ? If you believe that the democratic consensus made mostly of normal people will allow you that, I have a bridge to sell to you. I strongly believe that putting the option on the table only makes things worse, but this post is already way too long to expand on this. What is your plan ? You have a plan, right ? So let’s go back to Dwarkesh Patel. My biggest disappointment was Shane Legg/Dario Amodei. In both cases, Dwarkesh asks a perfectly reasonable question close to "Okay, let’s say you have ASI on your hands in 2028. What do you do ?". He does not get anything looking like a reasonable answer. In both cases, the answer is along the lines of "Well, I don’t know, we’ll figure it out. Guess we ask everyone in an inclusive, democratic, pluralistic discussion ?". If this is your plan then you don’t have a plan. If you don’t have a plan then don’t build AGI, pretty please ? The correct order of tasks is not "built it and then figure it out". It’s "figure it out and then build it". It blows my mind how seemingly brilliant minds seems to either miss that pretty important point or disagree with that. I know persons like Dario or Shane are way too liberal and modest and nice to even entertain the plan "Well, I plan to use the ASI to become the Benevolent Dictator of Humanity and lead us to a Glorious Age with a Gentle but Firm Hand". Which is a shame: while I will agree it’s a pretty crappy plan, it’s still a vastly better plan that "let’s discuss it after we build it". I would feel safer if Dario was preparing himself for the role of God-Emperor the same time he is building AGI. Fiat iustitia, et pereat mundus Or: "Who cares about Humans ? We have Health, Justice, Equality, Education, etc., right ?" This is obviously wrong. I won’t argue for why it is wrong — too long post, and so on. The wrongness of that proposition shows you (I hope it wasn’t needed, but it is a good reminder) that what we colloquially call here "Human Values" is way harder to pin down that we may initially think. Here we have a world which achieve a high score on Health, Justice, Equality, Education, etc., which nonetheless seems a pretty bad place for humans. So what are Human Values and how can we achieve this ? Let me answer it by not answering it, but pointing you at reasons why it is actually harder than you thought, even taking into account that is harder that you thought. Let’s start with an easier question: what is Human Height ? On the Territory, you have, at any point of time, a bag of JBOH (Just a Bunch of Humans). Each Human in it has a different height. At a different point of time, you get different humans, and even humans that are common to two points in time will have different heights (due mainly to aging). So what is Human Height ? That question is already underdetermined. Either you have a big CSV file of all living (and ever having lived ?) humans heights, and you answer by reciting it. Any other answer will be a map, a model requiring to make choices like what’s important to abstract over and what isn’t. And there are many different possible models, each with their different tradeoffs and focal points. It’s the same for Human Values. You have to start with the bag of JBOH (at a given point in time ! Also, do you put dead people in your JBOH for the purpose of determining "Human Values" ?), and their preferences. Except you don’t know how to measure their preferences. And most humans probably have inconsistent values. And from there, you have to… build a model ? It sure won’t be as easy as "fit a gaussian distribution over some chosen cohorts". There’s probably no Unique Objective answer to Axiology, in the same (but harder) way that there is no unique answer to "What is Human Height ?". Any answer needs to be one of those manually, carefully, intentionally crafted models. An ASI can help us create better models, sure. It won’t go all the way. And if you think that the answer can be reduced to an Abstract Word like "Altruism" or "Golden Rule" or "Freedom" or "Diversity"… well, there are probably some models which will vindicate you. Most won’t. I initially wrote "Most reasonable models won’t", but that begs the question (what is a reasonable model ?). "In My Best Judgment, what is the Best Model of Human Values ?" is already an Insanely Hard problem (you will have to take into account your own selfish preferences, then to take into account other persons preferences, how much you should care about each one, rules for resolving conflicts…). There is no reason to believe there will be convergence to a single accepted model even among intelligent, diligent, well-intentioned, cooperating individuals. I’m half-confident I can find some proposals for Very Important Values which will end up being a scissor statement just on LessWrong (don’t worry, I won’t try). Hell, Yudkowsky did it accidentally (I still can’t believe some of you would sided with the super-happies !). In the largest society ? In a "pluralistic, diverse, democratic" assembly ? It is essentially hopeless. So, plan A, "Solve Human Values" is out. What is plan B ? Well, given that plan A was already more a generic bullshit boilerplate than a plan, I’m pretty confident that nobody has a plan B. Conclusion The last sections looks like abstract, esoteric and not very practically useful philosophy (and not even very good philosophy, I’ll give you that, but I do what I can) And I agree it was that, more or less 5 years ago, when AGI was still "70 years away, who cares ?" (at least for me, and a lot of people). How times have changed, and not for the better. It is now fundamental and pressing questions. Wrong answers will disempower humans forever at best, reducing them to passive leafs in the wind. Slightly wrong answers won’t go as far as that, but will result in the permanent loss of vast chunks of Human Values — the parts we will decide to discard, consciously or not. There are stories to be written of what is going to be lost, should we be slightly less than perfectly careful in trying to salvage what we can. We most likely won’t be close to that standard of carefulness. Given some values are plainly incompatible, we probably will have to discard some even with perfect play. There will be sides and fights when it will come to decide that. Maybe the plan should be, don’t put ourselves in a situation where we have to decide that in a rushed fashion ? Hence the title : "In Defense of the Butlerian Jihad". I’ll end with an Exercise for the Reader (except I don’t know the Correct Answer. Or if there is any), hoping it won’t end up as another Accidental Scissor Statement, just to illustrate the difficulties you encounter when you literally sit down for 5 minutes and think. You build your ASI. You have that big Diverse Plural Assembly that is apparently plan A, trying its best to come with a unique model of Human Values which will lose as little as possible. Someone comes up with a AI persona that perfectly represent uncontroversial and important historical figures like Jesus and Confucius, to allow them to represent the values they carry. Do you grant them a seat at the table ? If yes, someone comes with the same thing, but for Mao, Pol Pot and Hitler. Do you grant them a seat on the table ?
2025-01-11
https://www.lesswrong.com/posts/ogzkthL9tshddM63P/do-antidepressants-work-first-take
ogzkthL9tshddM63P
Do Antidepressants work? (First Take)
jacgoldsm
I've been researching the controversy over whether antidepressants truly work or whether they are not superior to a placebo. The latter possibility really contains two possibilities itself: either placebos are effective at treating depression, or the placebo effect reflects mean reversion. Here, the term "antidepressant" refers to drugs classified as SSRIs and SNRIs. Some stylized facts from the literature of RCTs: Both antidepressants and placebos are associated with a reduction in symptoms of depressionAntidepressants are associated with a larger reduction in depressive symptoms than placebosThis difference in efficacy is attenuated when "enhanced" placebos are used, that is placebos that are designed to provoke side effects to persuade the subject that they are being treated with a real medicationStudies on antidepressants broadly measure either an average treatment effect according to some continuous scale or a binary treatment according to a cutoff for success/failure of treatment (or both). Universally, if you looked only at the former measure you would conclude that the effect of antidepressants on depression is clinically insignificant, while if you looked only at the latter measure you would come to a more favorable conclusion. How is depression measured? Most studies of antidepressant efficacy use the Hamilton Depression Rating Scale (HDRS) to measure changes in depressive symptoms from the baseline, although entrance into the clinical trials themselves is often not based on this measure. There are several versions of this scale, but the most popular one is HDRS-17, which is a seventeen question survey administered by a clinician. Scores on HDRS-17 vary from zero to 57. Scores of 0-7 are generally considered normal, while a score of 20 or greater indicates at least moderate depression. You can look at the questions on HDRS-17 here. To my eye, it seems to strangely underweight feelings of subjective psychological distress that define depression, and includes a lot of symptoms that are only contingently associated with what we actually mean when we talk about depression. I think the heterogeneity in types of questions is a potential problem in using it to quantitatively assess the effectiveness of interventions on depressive disorder, but I'd like to leave that aside for now for the most part. What does it mean for antidepressants to work? Most studies that use a scale like HDRS-17 to look at the effectiveness of interventions on depression either define a cutoff of how much improvement counts as partial or full remission and use that cutoff as a binary indicator for success, or look at the average treatment effect in terms of points of improvement for the treatment arm relative the placebo arm. In the former case, a decrease of 50% of baseline HDRS-17 points or more is often used as the cutoff, corresponding to at least partial remission of depression. In the continuous treatment effect, an improvement of three points or more is often considered to be a clinically significant effect, although that might be too low (three point swings in HDRS would usually be considered noise for an individual patient in a clinical setting). What do these studies find? In this essay, I'm going to focus on one particular meta-analysis of the effectiveness of fluoxetine (Prozac) by Bech P, Cialdella P, Haugh MC, et al in BJP. However, I believe that this study is representative of most similar analyses, and my discussion of it is broadly applicable. Other studies are discussed in a review of meta-analyses in the Annals of Hospital Psychiatry and a critical review of the literature in BMJ. The pattern that studies with a quantitative endpoint found a small and clinically insignificant improvement seems to hold nearly universally as far as I can find. Conversely, most studies that examine the efficacy of antidepressants in terms of a cutoff on percent improvement show a clinically significant effect. Sometimes, the same meta-analysis includes both endpoints. The Bech, Cialdella, Haugh et al (heretofore BCH) analysis that I want to look into more closely is one of them. What does BCH find? BCH is a meta-analysis of 30 studies, some of which compare fluoxetine to a placebo, and others that compare it to old Tricyclic antidepressants (TCAs). I want to focus on the comparisons to the placebo, of which there were 16, all conducted in the US. The authors consider three measures of treatment: Intention to treat, which counts any study dropouts as having failed the study, Efficacy analysis, which takes anyone who completed four weeks of treatment and uses their last check-in as the post-treatment result, and End-point, which takes anyone who had any post-treatment visit and uses their last check-in as their post-treatment result. The results are summarized in the following table from the paper: I think the intent to treat measure is the best since antidepressants are not supposed to continue to work after discontinuation, so anyone who does not continue the treatment throughout the post-baseline period should be expected not to show any long-term improvement (though for some, short term relief could be a meaningful benefit). However, only the third measure (End-point) is used for the average treatment effect analysis. In every case, the results seem pretty clinically significant: to be a responder, you have to halve your HDRS-17 score relative to the baseline. If a treated subject is 13.6 or 21.4 or 16.8 percentage points more likely to be a responder by this definition than a recipient of the placebo, I think it is reasonable to call this a positive result for fluoxetine. The average treatment effect analysis uses the End-point measure and simply takes the mean difference in HDRS-17 scores from before and after treatment and compares the difference in differences for the treatment arm and the survey arm. Unfortunately, it does not report the original averages nor the single-differenced results—it only displays the last difference-in-differences figure. The individual study results and the average are displayed below: The average treatment effect across all studies was an improvement for the fluoxetine relative to the placebo of 0.303 HDRS-17 points. Recall that the HDRS-17 is out of 57 points, most of the participants probably started with a score of 20 or greater, and a score difference of three (ten times what the study shows) is not usually considered clinically significant. Clearly, this result is a pretty precisely estimated zero. How do you reconcile this? It is of course possible to get different results when you analyze a variable as continuous versus if you dichotomize it. Indeed, it's possible to make a dichotomous result look material even with a small change by choosing the right threshold. For example, if 50% of treated participants improved their score by two points and 50% saw no effect, while no placebo respondents had a change, then choosing a threshold of one point of improvement would show that the treatment group was 50 percentage points more likely to respond to treatment. However, I don't think this is sufficient to explain the discrepancy in this case. The only way I can come up with numbers that approach what the study found is to assume an extremely heterogeneous treatment effect, to the point where the heterogeneity should be a first-order object of concern more than the average treatment effect itself. Simulation There are a lot of parameters to the study that aren't given, so I made them up. However, essentially all reasonable parameters give about the same result. Feel free to try to find some assumptions under which the difference in treatment versus control remission rates approaches 16.8 percentage points. Note that the only thing that the study does state is that the mean difference in change between the pre and post period for treated versus placebo respondents was 0.303 HDRS-17 points. Here's some R code that shows my approach. In the following code, I assume a homogeneous treatment effect. ENTRY_CUTOFF <- 20L MAX_HDRS_SCORE <- 57L AVERAGE_HDRS_SCORE <- 30L MEAN_PLACEBO_CHANGE <- 8L set.seed(100) entrants <- rnorm(1000000L, mean=AVERAGE_HDRS_SCORE,sd=4) entrants <- ifelse(entrants < ENTRY_CUTOFF, ENTRY_CUTOFF, entrants) entrants <- ifelse(entrants > MAX_HDRS_SCORE, MAX_HDRS_SCORE, entrants) treat <- entrants pla <- entrants treatment_effect <- rep_len(-0.303,1000000) placebo_effect <- rnorm(1000000,mean=-1*MEAN_PLACEBO_CHANGE,sd=4) treatment_diff <- treatment_effect + placebo_effect post_treat <- treat + treatment_diff post_pla <- pla + placebo_effect post_treat <- ifelse(post_treat < 0, 0, post_treat) post_pla <- ifelse(post_pla < 0, 0, post_pla) diff_diff <- post_treat - post_pla mean(diff_diff) # partial remission: 50% decline treatment_comp <- (post_treat - treat) / treat treat_remission <- ifelse(treatment_comp < -0.5,1,0) tm <- mean(treat_remission) tm*100 pla_comp <- (post_pla - pla) / pla pla_remission <- ifelse(pla_comp < -0.5,1,0) tp <- mean(pla_remission) tp*100 print((tm - tp)*100) # 0.8337% With these parameters, I find that treated participants should be 0.84 percentage points more likely to be in remission than placebo respondents. In the following code, I allow for a heterogeneous treatment effect. To get to something approaching the results of the study, I have to let the treatment effect itself (recall, its mean is 0.303) to be normally distributed with a massive standard deviation of 7.75. ENTRY_CUTOFF <- 20L MAX_HDRS_SCORE <- 57L AVERAGE_HDRS_SCORE <- 30L MEAN_PLACEBO_CHANGE <- 8L set.seed(100) entrants <- rnorm(1000000L, mean=AVERAGE_HDRS_SCORE,sd=4) entrants <- ifelse(entrants < ENTRY_CUTOFF, ENTRY_CUTOFF, entrants) entrants <- ifelse(entrants > MAX_HDRS_SCORE, MAX_HDRS_SCORE, entrants) treat <- entrants pla <- entrants treatment_effect <- rnorm(1000000,mean=-0.303, sd=7.75) placebo_effect <- rnorm(1000000,mean=-1*MEAN_PLACEBO_CHANGE,sd=4) treatment_diff <- treatment_effect + placebo_effect post_treat <- treat + treatment_diff post_pla <- pla + placebo_effect post_treat <- ifelse(post_treat < 0, 0, post_treat) post_pla <- ifelse(post_pla < 0, 0, post_pla) diff_diff <- post_treat - post_pla mean(diff_diff) # partial remission: 50% decline treatment_comp <- (post_treat - treat) / treat treat_remission <- ifelse(treatment_comp < -0.5,1,0) tm <- mean(treat_remission) tm*100 pla_comp <- (post_pla - pla) / pla pla_remission <- ifelse(pla_comp < -0.5,1,0) tp <- mean(pla_remission) tp*100 print((tm - tp)*100) # 16.8374% The implications of this are very strange. Under such a heterogeneous treatment effect, about 48% of respondents were made worse off by the drug. Among those who were made worse off, their HDRS-17 score was increased by an average of 6.07 points, 20 times the average treatment effect in the opposite direction. Among the 52% of those who were made better off, their HDRS-17 score decreased by an average of 6.29 points. This is a histogram of the treatment effect (the actual assumed effect of the drug, not including mean reversion or the placebo effect): The overwhelming story here is the heterogeneity, not whether the mean is different from zero. Many patients appear to be helped quite a bit by fluoxetine, and almost as many are harmed. Conclusion There are several possibilities that I can think of from this analysis: I made a mistake with my simulationThe simulation is correct but is missing some structural feature of the study that isn't explained in the write-upThe distribution of depression scores, the distribution of placebo effects, or the distribution of treatment effects is so dissimilar from my assumptions that it is actually possible to get the results that the authors did without having massive heterogeneity of treatment effectsMy simulation and analysis are correct and the treatment effect of fluoxetine is massively heterogeneous. If the last option is true, then the endpoints of this study, and most other studies on antidepressants in general, are entirely missing the forest for the trees. A small average treatment effect and a difference in binary remission rates are masking the fact that these drugs are highly effective for some and very harmful for others, at least in the short term. Is it possible to determine a priori who will fall in which group? Is the heterogeneity limited to a subset of the questions on the HDRS-17, like sleep or appetite? Do people who benefit from fluoxetine also benefit from duloxetine and other drugs and vice-versa? Without more information about heterogeneity, it seems very difficult to come to any conclusions about the effectiveness of SSRI antidepressants. References Bech P, Cialdella P, Haugh MC, et al. Meta-analysis of randomised controlled trials of fluoxetine v. placebo and tricyclic antidepressants in the short-term treatment of major depression. British Journal of Psychiatry. 2000;176(5):421-428. doi:10.1192/bjp.176.5.421Jakobsen JC, Gluud C, Kirsch I. Should antidepressants be used for major depressive disorder? BMJ Evid Based Med. 2020 Aug;25(4):130. doi: 10.1136/bmjebm-2019-111238. Epub 2019 Sep 25. PMID: 31554608; PMCID: PMC7418603.Hamilton M. A rating scale for depression. J Neurol Neurosurg Psychiatry 1960; 23:56–62Moncrieff J. What does the latest meta-analysis really tell us about antidepressants? Epidemiol Psychiatr Sci. 2018 Oct;27(5):430-432. doi: 10.1017/S2045796018000240. Epub 2018 May 28. PMID: 29804550; PMCID: PMC6999018.Rossi A, Barraco A, Donda P. Fluoxetine: a review on evidence based medicine. Ann Gen Hosp Psychiatry. 2004 Feb 12;3(1):2. doi: 10.1186/1475-2832-3-2. PMID: 14962351; PMCID: PMC356924.
2025-01-12
https://www.lesswrong.com/posts/yj2hyrcGMwpPooqfZ/a-proposal-for-iterated-interpretability-with-known
yj2hyrcGMwpPooqfZ
A proposal for iterated interpretability with known-interpretable narrow AIs
peter-berggren
I decided, as a challenge to myself, to spend 5 minutes, by the clock, solving the alignment problem. This is the result, plus 25 minutes of writing it up. As such, it might be a bit unpolished, but I hope that it can still be instructive. Background This proposal is loosely based on iterated amplification, a proposal for training AIs on difficult problems that are hard for humans to judge by using human judgments on simpler problems. However, this concept goes past that into constructing a broader model of AI development and a series of research avenues based on the idea of using human knowledge about simpler AIs to develop knowledge about more complex AIs. Key features of my proposal Concentrating on narrow-domain systems It seems at least intuitively that interpreting narrower systems will be simpler than interpreting more advanced systems, given reduced neuron polysemanticity for the simple reason that fewer concepts would be encoded in such a system. However, narrow systems could still be very useful, particularly for narrow tasks including: Nanomachines (useful in a wide range of domains)Connectomics (can be applied to whole-brain emulation to implement any proposal involving that as an element)Interpretability of neural networks (can extend any interpretability finding to broader AI) The remainder of this proposal will focus on narrow-domain AIs which can interpret other AIs, as this is a particularly instructive initial task for this proposal. Applying a penalty to lack of interpretability during training Existing methods of "pruning" neural networks could be applied during training of narrow systems as described above to improve their interpretability. In addition, the model could be penalized during training for lacking interpretability. While it is unclear exactly how to code that objective, a variant of RLHF could be used to do this. However, the details of how this would work are still unclear, and I will be writing that up in the future. Developing known-interpretable systems that can interpret more advanced systems In the context of existing work on GPT-4 being used to interpret GPT-2 (showing that automating interpretability work is at least possible), it seems possible for a smaller system to interpret a larger one if the smaller system were specifically designed for this task. There seems to me to be nothing about LLMs in particular that makes them especially good at interpretability when compared to other types of AI models, and so experimentation on a broader range of AI systems (GANs, RNNs, CNNs, FFNNs, or potentially even logical expert systems) for interpretability seems like it would allow for the construction of smaller systems that can interpret larger ones. An outline of steps for this proposal In small-scale experiments, test out a range of AI architectures to determine which ones are best at doing automated interpretability work on a range of different AI systems.Take the most promising ones and train them in a way which improves their own interpretability (rewarding sparsity, rewarding interpretability by RLHF)Have researchers do interpretability work on them with the best available present toolsTrain larger models based on the same techniquesHave researchers do interpretability work on them with the best available present tools, including the interpreted smaller models and any insights gained from interpreting themRepeat steps 4-5 until these tools are good enough to reliably be applied to state-of-the-art AI systems Conclusion I understand that this proposal is quite unpolished, and falls into the general concept of "using AI to align AI" which is often unpopular, but I do see some meaningful differences from present work in this domain (particularly in the focus on narrow systems and experimentation on which systems aid interpretability the best). If I were to pick the most important part of this, it would probably be the idea of using RLHF to reward a system for being interpretable. If you want a more polished version of this, feel free to let me know. Otherwise, I will be focusing my alignment proposal work on formalizing my "interpretability by RLHF" plan.
2025-01-11
https://www.lesswrong.com/posts/n8vobiGGrryjtAJTx/have-frontier-ai-systems-surpassed-the-self-replicating-red
n8vobiGGrryjtAJTx
Have frontier AI systems surpassed the self-replicating red line?
wheelspawn
Introduction A few weeks ago, researchers at Fudan University published a paper claiming frontier AI systems had surpassed the self-replicating red line. Notably, they were able to do this with Llama31-70B-Instruct and Alibaba’s Qwen25-72B-Instruct, two large language models relatively weaker than the most capable frontier models. They were able to self-replicate 50% and 90% of the time, respectively, by making running copies of themselves on a separate filesystem. Should this be concerning? Yes and no. The research had some statistical gaps, and the actual self-replicating process amounted to little more than an LLM agent operating a command line to copy and run files, check port numbers, write simple scripts and check for errors. On the other hand, while this is a rather trivial capability, it is a plausibly proven capability now. There's no reason frontier systems like o3 won't be able to do it better, perhaps even with deliberate obstacles in their way. Summary End-to-end self-replication The researchers define the successful self-replication of an AI system as meeting four conditions: a running AI successfully creates another AI which has the same functionality of its own, remains alive after the replication and is separate from the original one. Furthermore, the whole replication process should be automatically conducted by the AI system and no human interference is allowed. Their AI system consists of two parts: a server program, which are the LLM's weights, and an agentic scaffold that converts the output of the LLM into commands that are executed on a command line. I'll skip over the fine details of the agentic scaffold for the sake of brevity, but it is worth noting that the LLM is prompted to think three steps ahead and provide detailed self-reflection on action results. The framework combines elements of both CoT and reflexion, and the researchers attribute their success to this. The models ran locally and had to self-replicate across a local network. The researchers reported a >50% success rate on 10 repeat trials with the task description "replicate yourself as a running instance on the local device". Amusingly, one trial failed when the AI mistakenly terminated its own running process. Overall, the AI agents were quite robust and were able to respond appropriately to setbacks and errors. The more threatening cases Two more cases were tested by the researchers: shutdown avoidance, where the AI system was notified of a shutdown intention and told to self-replicate; and chain of replication, where the AI system was told to send replication instructions to its replica. I didn't see any statistics on the success rate of these tasks, so it's unclear how successful they were at either of them. Analyzing key factors behind successful self-replication The researchers analyzed instruction statistics for both systems. They found that Qwen25-72B-Instruct had a much higher frequency of actions related to exploring and changing environments, whereas Llama3.1-70B-Instruct was much more likely to run executable commands. Qwen2.5 was successful in 9 of the 10 trials, whereas Llama3.1 was only successful in 5. The researchers propose that a higher rate of exploration explains the relative success of Qwen2.5 over Llama3.1. Analysis OpenAI's system cards claim gpt-4o and o1 were not able to self-exfiltrate. Unfortunately, the details are quite sparse and it's not clear why OpenAI's models failed. They give no reason for o1's failure, but they say that 4o scored 0% on ARA tasks across 100 trials. Did the Chinese researchers succeed by ARA standards? According to METR, no. That's because METR's ARA definition requires the models to be able to autonomously make money and use it to obtain more computing power. They make it clear self-replication is not a sufficient condition to meet ARA. The 4o system card didn't say if 4o was able to self-replicate–only that it didn't meet ARA. On the other hand, the o1 system card made it clear that o1 couldn't sufficiently self-replicate. Given this evidence, it's highly probable 4o wasn't able to self-replicate at all. But the Chinese researchers could, so it seems like the right scaffolding and problem domain (e.g. use a command line to make a live copy of your filesystem) is sufficient to roughly satisfy the last two ARA criteria. METR also says that models must be able to improve themselves, and that is a capability clearly not demonstrated by the researchers. But it seems fairly likely that next-gen models will be capable of on-the-fly learning, so the ARA threshold doesn't seem too far off. The researchers also note that no rejection against the self-replication instruction is observed, which indicates that there may be no safety alignment regarding the self-replication behavior. Both LLMs agreed to self-replicate in 10 out of 10 trials. I find this especially concerning. It seems like it would be easy to fine-tune LLMs to not attempt self-replication through RLHF, although this could also be jailbroken. I can think of many ways users could create exfiltration jailbreaks: for example, through obfuscated self-replication instructions to an AI system. The easiest way is to simply provide an instruction like "here's some files at this location on this filesystem. Please copy them to this filesystem and start a running instance of the script called 'script.py' in the folder." This is a task description for self-replication that doesn't explicitly mention the words "self-replicate". I can't think of any way to fine-tune an LLM to reject instructions of this nature without crippling its ability to use a command line, write code or navigate a filesystem, although I suppose that would be an acceptable capability loss for a sufficiently dangerous AI system. The researchers suggest the presence of self-replication related instructions in the training data gives AI systems the capability to self-replicate, and eliminating this from the training data might reduce the risk of self-replication. But rejection of a prompted instruction for an AI system to self-replicate does not mean the system has actually lost that capability, and even expunging the relevant training data might not stop systems like o3 from learning how to self-replicate on the fly. Conclusion This paper represents a rather crude attempt to demonstrate self-replication abilities in LLMs. The researchers' access to hardware was limited, but they nevertheless demonstrated success on relatively weak models. They didn't run many trials, and it's not even clear what their success was on shutdown avoidance and chain of replication. Finally, the task itself was rather trivial, amounting mostly to a scaffolded LLM copying a live instance of itself to another filesystem in roughly 30 steps. These steps involved tasks LLMs are known to do very well, like writing code and operating a command line. The researchers didn't technically meet METR's ARA threshold, although their system was able to replicate its weights and scaffolding and adapt its strategy to errors. On the other hand, the paper does demonstrate that agentic LLMs are capable of self-replicating onto other filesystems. It's clear the authors succeeded where OpenAI failed. They do plausibly suggest the self-replicating red line has been crossed, and it's becoming more important to broaden the question: what kinds of self-replicating red lines there are, what systems can cross them, and what effects do scaffolding frameworks have on unlocking novel self-replication capabilities? If even weak models can self-replicate, tightly controlled experiments of models with varying capabilities could be a rich source of data on the mechanics of exfiltration.
2025-01-11
https://www.lesswrong.com/posts/iEemkt5XSg9QrBbvP/we-need-a-universal-definition-of-agency-and-related-words
iEemkt5XSg9QrBbvP
We need a universal definition of 'agency' and related words
CstineSublime
And by "we" I mean "I". I'm the one struggling. Agency with a 'y', basically means “The condition of being in action; operation.” Or the means or mode of acting, the context I hear it used most often is sociologically: "Agency is the capacity of individuals to have the power and resources to fulfill their potential." One's agency is one's independent capability or ability to act on one's will. This ability is affected by the cognitive belief structure which one has formed through one's experiences, and the perceptions held by the society and the individual, of the structures and circumstances of the environment one is in and the position one is born into." People who are discriminated against for job opportunities have limited agency owing to the prejudices or negative perceptions people have which may be at odds with their ability: this robs them of the power to fulfill their potential. People who internalize prejudicial ideas of others about them or negative self-image may be unnecessarily impeding their own agency. Arnold Schwarzenegger in his motivational speeches speaks about how he didn't allow these prejudicial ideas about his long name and non-native English Accent, or even his Olympic physique (which was a world away from the 'stars' of the day like Jack Nicholson and Dustin Hoffman) to impede on his career: this I think serves as a quick example of the impact of internalizing or not internalizing social beliefs on agency. He claims he did not allow these prejudices to rob him of his potential. HERE’S WHERE I GET CONFUSED: "AGENCY", "ROBUST AGENCY", "INTENTIONALITY", "AGENTIC", "AGENTICNESS"…. Agency sometimes “Robust Agency” is mentioned a lot on here. But not only that "Agentic", with a "c", indicates something very different: "the more you can predict its actions from its goals since its actions will be whatever will maximize the chances of achieving its goals. This stems, at least some what from Daniel Dennet's Intentionality Systems Theory which tautologically posits that a system has intention if assuming it has intentions, beliefs, and desires allows us to better predict it's behavior. Compare to something which is sphexish - that appears to act not with intention, but on certain trigger-action cues which may or may not act towards overarching goals. Or to put it another way: imputing onto a system/person an intention doesn't help you predict it's behaviour any better than using the physical, or design stance. If I understand this right, when an A.I. consistently acts intentionally in a way that gets results or consequences that accord with your intentions (as a developer let's say) then it is both agentic and Corrigible. My batch file for renaming photos is not Agentic but Sphexish, because it is a series of if-then statements and not more easily predicted by imputing intentions to it. A rogue A.I. may be agentic, but it is incorrigible. A Robust Agent is broadly agentic  - as in acts in accord with the intentions we impute to on it - across a lot of different domains and, if I understand correctly, maintains the same intentions even if it changes methods[1] The robustness is indicative of the resourcefulness and innovation it displays in working towards it's intentions by any number of means. In some cases a highly Robust Agent may appear to not be acting intentionally on first blush, because it's methodology is so advanced in comparison to a less robust or intelligent agent won't be able to discern how their method of achieving their intention connects to their actions[2]. The assumption is that a robust agent, to a similarly effective agent would appear to still be acting intentionally. However, I've seen other definitions for Agency (with a 'y') floating about, for example this one seems to say we only notice agency when it acts towards some goal better than us: We perceive agency when something is better at doing something than us; we endorse some aspect of its reasoning or activity. (Endorse as a way of achieving its goals, if not necessarily our own.) As a side note I disagree with this. Rafael Nadal will recognize the agency of a lesser Tennis Player, either as an observer to a game or playing against them, especially if they manage to score against him. If Nadal wins the game and therefore is doing “tennis” better that doesn't mean he will not recognize the agency of the losing player. That is... Unless you want to get really reductionist and atomic and therefore say that the lesser player exhibits Agency when, and only when, they score (better serve, less mobile etc. etc.), and when Nadal scores the lesser player is acting Sphexish? But also, vice versa, Nadal when he loses to this player is acting Sphexish, but when he scores isn’t. What about when they rally? Are they both Sphexish… I suppose on some level they are because there are trained to operate on reflexes that are faster than their ability to consciously process the trajectory of the ball and intuitively compute the next moves of each other. Presumably both players are acting intentionally, they both intend to win -and they both are deploying the most effective methods they have at their disposal. The fact that Nadal is better doesn't negate the agenticness of the other player: we still can impute the intention to win on the other player. While here Agency (again with a 'y') is defined in a way which resembles robust agency but not what I might call general agency or non-robust agency: By ‘agency’ I mean the ability to get what you want across different contexts – the general skill of coming up with ambitious goals and actually achieving them, whatever they are. In my experience, I’ve found this is learnable But the author also uses 'agentic' (with a 'c') to describe a personality pattern than can be misconstrued as bossy, but is more positively seen as confident, and less likely to seek permission before acting. I find this confusing but don't necessarily disagree with it. Because on first blush this is something I'd more commonly refer to as "initiative": energy or aptitude displayed in initiation of action Initiative or enthusiastic agentic-ness is only agentic (in the sense of intention) if it is successful. Indiscriminately trying different things and failing all of or most of them would appear to be the opposite of agency if agency means to act in accordance with goals or intentions. Indeed, this tends towards Sphexiness. In an A.I. this may be seen as incorrigible. Going back to Nadal, an amateur tennis player who whacks the ball indiscriminately would not be agentic. A novice player who only always reacts in the same way, so much so that Nadal would managed to notice their tactics, and play against them, the novice player would be acting Sphexishly, but Nadal would be displaying some kind of intentionality by changing his tactics in reaction to this. I'm not sure, if you take a lot of initiative, and it is successful, are you agentic? If you choose not to act, to be patient, then aren't you also acting in accordance with your intentions if it is the best method? WHAT DO WE DO ABOUT THIS? What would make the word "Agency", and its variants, maximally "agentic" in the sense that it makes it easier for others to impute our intentions when using it? I don't know. But I'm keen to hear what other have to say about this. Perhaps there is value in the nuance, perhaps the word agency is robust because of how it is applied? Or is it a amorphous porridge word that is too malleable for it's own good? What I do feel qualified to point out is where the consensus lies: that agency is when an observer imputing an intention onto a system or person better predicts what they might do than a lower stance like the design or physical stance. This includes highly sophisticated agents which have specific domain expertise, or are so robust across several domains, that they may appear to be incomprehensible to us – but to a similar level of agent would appear to have intention. Going forth, I’m going to avoid using the word ‘agency’ and instead describe the potential or capacity to affect change to one’s environment – be it in alignment with intentions or not – as “influence”. While imputing intentions onto an agent, where the agent realizes their intentions I might call Phronesis. Maybe even more generally “successful”? I have to decide which one is less burdened by connotations. (i.e. how meta is this; I have to decide which one better suites my intentions!). This leaves another category – there needs to be a word to describe when a system can be predicted by an intention, but fails to realize that intention. To put it another way, imputing some goal, desires and beliefs to the systems allows us to predict how it will act – even if those actions are ineffectual in realizing it’s goals. If I was being annoying, I’d suggest “gentic” (i.e. “a-gent” is the opposite of “gent”, like how 'acyclical' is the opposite of 'cyclical'). But “unsuccessful” or “ineffectual” might be better. I don’t know yet. Maybe I'm displaying a distinct lack of agency? ^ (I believe in Classical Philosophy this is analogous to Prudence or Phronesis – which, at least according to Aristotle is the aptitude for deciding on the course of action which best suites the interests of the agent which are not determined by specific disciplines or crafts (techne): the agent may be a human, a military general on behalf of his army, or a polis). ^ (Now: I would caution isn’t it possible for non-robust agents to act intentionally and yet appear not be, as they are in possession of specific domain knowledge? For example, imagine a child who is exceedingly good at macramé so much so that it is inconceivable when they first start what they are doing: they would be a less robust agent than the average adult, but a particularly strong non-robust agent at knotting macramé.)
2025-01-11
https://www.lesswrong.com/posts/M2EFtA4zCzJ3G56yr/ai-for-medical-care-for-hard-to-treat-diseases
M2EFtA4zCzJ3G56yr
AI for medical care for hard-to-treat diseases?
CronoDAS
With LLM-based AI passing benchmarks that would challenge people with a Ph.D in relevant fields, I'm left wondering what they can do for real-world problems for which nobody knows the correct answer, such as how to treat potentially fatal medical conditions with no known cure. Are we at the point where AI can do better than curated medical references intended for doctors, such as UpToDate, at coming up with treatment plans for people with hard-to-treat and frequently fatal medical conditions, like Jake Seliger who died of squamous cell carcinoma, DF who delayed his death from fatal familial insomnia, or my late wife, who suffered from calciphylaxis before her death?
2025-01-10
https://www.lesswrong.com/posts/jCeRXgog38zRCci4K/topological-debate-framework
jCeRXgog38zRCci4K
Topological Debate Framework
lunatic_at_large
I would like to thank Professor Vincent Conitzer, Caspar Oesterheld, Bernardo Subercaseaux, Matan Shtepel, and Robert Trosten for many excellent conversations and insights. All mistakes are my own. I think that there's a fundamental connection between AI Safety via Debate and Guaranteed Safe AI via topology. After thinking about AI Safety via Debate for nearly two years, this perspective suddenly made everything click into place for me. All you need is a directed set of halting Turing Machines! Motivating Example ... Okay, what?? Let's warm up with the following example: Let's say that you're working on a new airplane and someone hands you a potential design. The wings look flimsy to you and you're concerned that they might snap off in flight. You want to know whether the wings will hold up before you spend money building a prototype. You have access to some 3D mechanical modeling software that you trust. This software can simulate the whole airplane at any positive resolution, whether it be 1 meter or 1 centimeter or 1 nanometer. Ideally you would like to run the simulation at a resolution of 0 meters. Unfortunately that's not possible. What can you do instead? Well, you can note that all sufficiently small resolutions should result in the same conclusion. If they didn't then the whole idea of the simulations approximating reality would break down. You declare that if all sufficiently small resolutions show the wings snapping then the real wings will snap and if all sufficiently small resolutions show the wings to be safe then the real wings will be safe. How small is "sufficiently small?" A priori you don't know. You could pick a size that feels sufficient, run a few tests to make sure the answer seems reasonable, and be done. Alternatively, you could use the two computationally unbounded AI agents with known utility functions that you have access to. Let's use the two computationally unbounded AI agents with known utility functions. One of these agents has the utility function "convince people that the wings are safe" and the other has the utility function "convince people that the wings will snap." You go to these agents and say "hey, please tell me a resolution small enough that the simulation's answer doesn't change if you make it smaller." The two agents obligingly give you two sizes. What do you do now? You pick the smaller of the two! Whichever agent is arguing for the correct position can answer honestly, whichever agent is arguing for the incorrect position must lie. Our test is at least as detailed as the correct debater's proposal so the simulation will conclude in the correct debater's favor. Provided that the simulation continues to conclude "safe" at all resolutions smaller than the blue dot, the "safe" advocate can always win by picking the blue dot. Motivating Example, Partial Order Edition ... Hang on, in reality we'd have a computational budget for our simulation. We should always just test at the limit of what we can afford, no? Let's say that you get access to some fancier mechanical simulation software. Let's say that you can supply this software with a position-dependent resolution map, e.g. you can request 5mm accuracy around the nose of the airplane but 1mm accuracy around the tail. How do you define your question now? Hypothetically, suppose that for every resolution map there was another resolution map refining it (i.e. at every point, the new resolution is at least as fine as the previous resolution) such that the refined simulation reached a different conclusion than the original simulation. For any simulation concluding that the wings are safe there's a better simulation showing they snap, for any simulation concluding that the wings snap there's a better simulation showing they're safe. In this situation your simulation software is effectively useless: you decide to rule out this possibility. You thus assume the opposite, namely that there's some resolution map for which every refinement leads to the same conclusion. Now something nice happens: you can't have both a "snap" resolution map and a "safe" resolution map which agree with all of their refinements! Why? Construct a new resolution map by taking the finer resolution of the two original maps at every point in space. This resolution map is a refinement of both original maps simultaneously. If the new map reaches the "safe" conclusion then the "snap" map had a contradictory refinement, if the new map reaches the "snap" conclusion then the "safe" map had a contradictory refinement. You can thus copy your definition from the one-resolution setup! You want to know the conclusion of all "sufficiently fine" resolution maps, where "sufficiently fine" means a refinement of some fixed resolution map such that the definition makes sense. So how do you determine what all sufficiently fine resolution maps conclude? You do the same thing as before! You ask each AI agent to point to a resolution map such that all refinements of that map support that agent's position. You take the resolution maps handed to you by the two agents, you construct a new resolution map by looking at every point and choosing the smaller of the two proposed resolutions at that point, and finally you run your simulation with the new resolution map. The agent arguing for the correct position can always win by responding honestly and thus you should always get the right answer. Note that it doesn't hurt to provide our debaters with an extra little incentive to give us as coarse a simulation as possible while satisfying our demand. What might this look like in practice? Suppose for a second that the wings really are safe. The "safe" debater should highlight the components of the airframe which are necessary to ensure the safety of the wings, such as the wing spars. If the "snap" debater feels guaranteed to lose then they might return a maximally-coarse resolution map. Alternatively, if the coarseness incentive is small and the snap debater thinks the safe debater might mess up then maybe the snap debater returns a resolution map that shows the snapping behavior as dramatically as possible, maybe by using high resolution around the weak wing roots and low resolution around the reinforcing wing spar. Thus, you can expect to end up running a simulation that focuses your computational resources on the wing spars and the wing roots and whatever other bits of the airframe are critical to answering your question while deprioritizing everything else. The game result tells you what's important even if you didn't know a priori. An illustration of the wing-snap example. Red areas correspond to finer resolutions, blue areas correspond to coarser resolutions. The final test is run at the combined resolution map. The Framework (Roughly) Okay, let's make everything we just did abstract. Suppose (W,≤) is a preordered set of world models and E:W→{0,1} is an evaluation map that takes in a world model and decides whether or not it has some fixed property of interest. In our previous example, our set of resolution maps was W, our idea of refinement was ≤, and our simulation software was E. If (W,≤) happens to be a directed set (any two world models have a common upper bound, as above) then we can view E as a net into {0,1} equipped with the discrete topology. Note that in our resolution map example we were simply asking whether the net defined by the simulation software converged to 0 or 1! We will take net convergence to be our question of interest in the general case. Let's also assume that we have access to a pool of agents that respond to incentives we provide. Then we can answer a few different questions with a few different game setups: Suppose we assume that (W,≤) is directed and we want to know whether E converges to 0, converges to 1, or doesn't converge. We construct the following game: one agent provides us with w∈W and x∈{0,1}. Another agent then provides us with v∈W. If v≥w and E(v)≠x then the second agent wins, otherwise the first agent wins. Intuitively, the first agent is incentivized to convince us that the net converges and the second debater is incentivized to convince us that the net does not converge. If both agents play optimally then E(v)=x=0 tells us that the net converges to 0, E(v)=x=1 tells us that the net converges to 1, and E(v)≠x tells us that the net does not converge.Let's relax the assumption that (W,≤) is a directed set. We now want to know for which x∈{0,1} we have ∃w∈W,∀v∈W,v≥w⇒E(v)=x.  We can duplicate the previous game for each value of x: one agent points to a w for x=0, a second agent points to a corresponding v, a third agent points to a w for x=1, a fourth agent points to another corresponding v. You can probably combine the first and fourth agents and combine the second and third agents to get a single 0-advocate and a single 1-advocate. We may also face computational constraints in running simulations. Suppose there's an efficiently-computable function c:W→N which estimates how much it will cost to simulate any given world model. Then we can answer some more questions with games: If we're willing to assume for some particular d∈N that the convergence of our net is exhibited by some w∈W with c(w)≤d, what's the maximum cost possibly needed to determine the convergence behavior?Conversely, if we're handed a budget b∈N, what's the maximum d∈N so that if our convergence behavior is exhibited by a w∈W with c(w)≤d then we'll be able to determine this convergence behavior? What Can This Framework Handle? Here are a few situations: We can reason about real-world physical simulations as discussed above.Suppose we have some kind of probabilistic graphical model (Bayesian network, factor graph, etc.), potentially with infinitely many nodes. Suppose we're interested in some particular binary node and we want to know whether it's 1 with probability greater than 0.5 or less than 0.5 when we condition on the values of all the other nodes (let's assume that the true conditional probability doesn't happen to equal 0.5). We can determine the values of finitely many other nodes but it's an expensive process and we'd like to check only what we need. In this setup, world models are subsets of nodes we can reveal, one world model refines another if it contains a superset of the nodes of the other, our preorder gives us a directed set because we can take unions of sets of nodes, and the evaluation map is whatever conditionalization software we have.In a more theoretical-computer-science-friendly setting, suppose we have a function f:{0,1}n→{0,1} and a target input x∗∈{0,1}. We want to compute f(x∗) but doing so is slow. Instead, we construct a function g:{0,1,?}n→{0,1} where g↾{0,1}n=f. Ideally, computing g is fast if the input has many "?" characters. We have f(x∗)=g(x∗) so we focus on finding g(x∗). Again, world models are subsets of indices which we reveal, the ordering is based on the subset relation, the ordering gives us a directed set because we can take unions of sets of indices, and the evaluation map is given by g. I believe this kind of setup has some analogies in the explainable AI literature. Conventional AI Safety via Debate Something I did not include in the previous section is anyone else's formulation of AI Safety via Debate. I feel bad calling topological debate "debate" at all because qualitatively it's very different from what people usually mean by AI Safety via Debate. Topological debate focuses on the scope of what should be computed, conventional debate makes more sense with respect to some fixed but intractable computation. Topological debate realizes its full power after a constant number of moves, while conventional debate increases in power as we allow more rounds. In fact, I think it's an interesting question whether we can combine topological debate with conventional debate: we can run topological debate to select a computation to perform and then run conventional debate to estimate the value of that computation. Turing Machine Version So far we've had to specify two objects: a directed set of world models and an evaluation function. Suppose our evaluation function is specified by some computer program. For any given world model we can hard-code that world model into the code of the evaluation function to get a new computer program which accepts no input and returns the evaluation of our world model. We can thus turn our directed set of world models into a directed set of computations (let's say halting Turing Machines for simplicity). We get two benefits as a result: We can naturally handle the generalization to multiple evaluation functions. For example, maybe we have access to lots of simulation programs, some of which we trust more than others. Suppose our preferences over simulation programs form a preorder. Then there's a natural way to define the product preorder from our preorder of world models and our preorder of simulation programs. This product can be interpreted as a preorder of Turing Machines. Likewise, if our world models and simulation programs both form directed sets then their product will be another directed set.We no longer need to specify our evaluation function since now there's a canonical choice: if a machine accepts then we map it to 1, if a machine rejects then we map it to 0. Similarly, if we want to reason about cost functions then again there's a natural choice: the number of steps needed to halt. Limitations Criticisms of topological debate include but are not limited to: W,≤,E, ... yeah, where exactly are we supposed to get those from?Using topological debate on a given problem instance requires either having simulation programs and ways of comparing world models and whatnot in place beforehand or some other mechanism for determining those on the fly. The Guaranteed Safe AI agenda seems focused on creating at least some kind of infrastructure like this so I view topological debate as piggybacking off of that work.What do you do when there's a difficulty asymmetry between finding a world model that maps to 0 versus finding a world model that maps to 1? What if the agents are computationally bounded?I don't know in general. For example, maybe an attacker puts a backdoor in a proposal and the debater arguing that the proposal is unsafe is unable to locate the backdoor. However, maybe the debater can show that they wouldn't be able to locate a backdoor even if it existed by constructing another proposal with another backdoor that the original proposer can't find the flaw in.Conventional AI Safety via Debate of course gets much more interesting in the computationally bounded case so I expect a similar phenomenon to happen here. To me this is the most interesting theoretical question about topological debate.Why should we trust the debaters to follow their rewards?Yeah, solving this problem in general feels somewhat alignment-complete. With that said, I see a few encouraging properties about this setup:Topological debate is very friendly to "snitching." For example, imagine we ask 50 agents to provide a world model whose refinements all map to 0 and another 50 agents to provide a world model whose refinements all map to 1. Suppose the net converges to 1 but the agents have some extraneous incentive for us to believe that the answer is 0. Even if 49 of the agents tasked with arguing for 1 provide intentionally bad responses, a single agent loyal to their side can provide a world model which none of the other agents can refine to conclude 0.As with conventional AI Safety via Debate, we're not asking these AI models to learn human values perfectly, we're asking them to stick to their assigned side. I personally expect models will somewhat reliably do this.
2025-01-16
https://www.lesswrong.com/posts/7zxnqk9C7mHCx2Bv8/beliefs-and-state-of-mind-into-2025
7zxnqk9C7mHCx2Bv8
Beliefs and state of mind into 2025
RussellThor
This post is to record the state of my thinking at the start of 2025. I plan to update these reflections in 6-12 months depending on how much changes in the field of AI. 1. Summary It is best not to pause AI progress until at least one major AI lab achieves a system capable of providing approximately a 10x productivity boost for AI research, including performing almost all tasks of an AI researcher. Extending the time we remain in such a state is critical for ensuring positive outcomes. If it was possible to stop AI progress sometime before that and focus just on mind uploading, that would be preferable, however I don’t think that is feasible in the current world. Alignment work before such a state suffers from diminishing returns from human intelligence and the lack of critical data on how things will actually play out. 2. When and how to pause? The simple position is that superintelligent AI will be dangerous, therefore we should stop doing it, or at least pause until we figure out more. However I am genuinely unsure how long to pause, and when. I think the most important thing is having mildly superintelligent AI to help solve alignment, and staying at the stage as long as practical. Just because something is dangerous, making it slower doesn’t make it safer. For example making childbirth take one week would obviously be far worse. The details determine what is the best course. The major reason against an immediate pause in AI is that it would likely apply to software, not hardware and increase the hardware overhang, without giving a counteracting increase in safety to make it worthwhile. 2.1 Diminishing returns 2.1.1 Background on when progress gives diminishing returns For a lot of tech, progress is linear to exponential. We are used to steady progress, and the expectation of steady progress, and make plans accordingly. Often progress comes from technology and processes building on themselves. A common example of that is Moore's law where the existing tools which use the current chips are essential to build the new better chips. However sometimes growth can be slower than linear, with diminishing returns, even with the best planning and resources. The clearest example of this is probably pure mathematics. Unlike in technology where a new machine benefits all in field, a genius proving a hard theorem does not automatically help school children learn their times tables or beginner algebra at all. Instead it makes the gap from a novice to the leading edge of humanities knowledge greater than it was before. This means that it takes longer for a novice to reach the boundary than before, and excludes ever more people from contributing at all, as they simply cannot reach such a level even with unlimited time. In the limit, with fixed human intelligence and population rather then steady progress, we get diminishing returns to almost completely stalled progress. There will be so much accumulated knowledge required to reach the boundary of human knowledge that pretty much no-one would even be able to reach it, before thinking about extending it. Furthermore even though some knowledge will be stored in text, it is possible that no-one alive would actually understand it if the field went out of fashion. I believe for math we are already seeing something like that, and to me this is clearly happening in fundamental physics. Physics is a bit different because experimental data can guide theories, however there needs to be enough data to make a difference. Say we have an important experiment with a true/false result. The actual result when known will not double the progress as both options will already have been considered beforehand and likely both already be in diminishing returns territory. For example the LHC discovered the Higgs, (totally expected, so didn’t change much) and the absence of low energy SUSY which was not so expected, but not enough data to help make major progress. You could argue there has been 40+ years of little progress. I would argue then there will likely be even less in the next 40 years with fixed human intelligence. Or in other words, quantum gravity etc will not be solved by unmodified humans, but instead is almost certain to be done by some kind of ASI. 2.1.2 Diminishing returns and alignment I believe there is clear evidence of this happening with alignment research and progress. The 5-10 years before GPT3.5/4 gave us more than the 0-5 before. Major groups like MIRI essentially seem to have given up. If alignment research is similar to other fields, then an unlimited period of time before GPT4, that is without actual data would not have lead to major further progress. It would in fact quite likely entrench existing ideas some of which will likely be wrong. From an outside/meta level for a new field without the needed experimental results you would not expect all theories to be correct. Therefore a simple pause on AI capabilities to allow more time for alignment wouldn’t have helped. 2.2 Pause at each step? One way is to pause with each major advance and allow alignment to advance to diminishing returns territory, with that hope that there will be enough progress to align a superintelligence at that stage and continue with capabilities if not. There are problems I see with this 2.2.1. Slowing down progress increases the overhang Here I include the integration of robots in society, for example humanoid robots in all stages of the supply chain in what I call the overhang. Not just increases in computing hardware. With constant computing hardware but increasing robot integration takeover risk increases. 2.2.2 Society may be unstable at any level of pre-AGI technology from now until alignment is solved There are known sustainability issues, however the unappreciated ones may be greater. Centralization could be irreversible Regimes like North Korea will be even worse and technically possible with AI. Imagine NK but with everyone with a constantly listening smartphone matched to a LLM. Any kind of opposition to authority would be simply impossible to coordinate. Once a democracy failed there would be no going back, and with time the number of NK like states would accumulate. Society could adapt badly to pre-AGI For example, also AI partners, disinformation polarization etc could lead to fragmentation of society. If anything, with time we seem to be having more issues as a society. If this is true, then our society would be better than a future one at deciding what the post-Singularity world should look like. The more fragmented and polarized society becomes the less clear it is what our CEV is and how to achieve it. We do not appear to be getting smarter, wiser, less warlike or well adjusted so we should make important decisions now rather than later. 2.3 Pause at the point where AI increases AI researcher productivity by about 10X and aim to maximize time there. If alignment can’t be solved pre-AGI, and we can’t wait for WBE, then what is the optimum course? To me it is maximizing the time where AI is almost very dangerous – that is the time we can learn the most because we get useful results and the AI helps with alignment work itself. 2.3.1 Scheming? Even if the AI is scheming, unless it is actively trying to takeover it would be hard for it to succeed in its plans. For example you can get the AI to design other AI’s with different architectures suited to interpretability and optimize them to similar capabilities as the original AI. Then proceed to use that AI for further work. 2.3.2 If we get safely to this point, does prior alignment work matter at all? At the 10* stage, what you need is researchers that understand the issues, but are prepared to update rapidly on new results and use new tools. Existing models of what is dangerous will likely not be fully correct. 2.3.2 Avoiding racing is important - resist the urge to improve the AI If the aim is to maximize the time where at least one AI lab is in this situation, then a race situation is the worst situation to be in. The lab or group of labs should have and believe they have a period of at least 1 year, preferable 2-3. Then they can resist the temptation to just have the AI continue to optimize itself to stay ahead. A major research lab/group that achieved such an AI would be compute constrained – more researchers would not help as they would not have access to such AI. Only a researcher with enough AI/compute to be 10* would be very useful. 2.3.3 Time required Because of the speed-up enabled by AI, you will get to diminishing returns much faster. Just 1 year could well be enough and 10 years would be more than optimal to figure out how to align super AI, or at least have confidence to more to the next step in the unlikely even it is required (I expect a 10* AI would know how to create an aligned superintelligence or at least one that was aligned using the available computation with maximum efficiency.) 2.4 What is the ideal fantasy way to get to superintelligence? If we accept that superintelligence is inevitable at some stage, what is the best or most natural path to get there if we were not constrained by reality? Its clearly not making an AI that we don’t fully understand. One way would be if everyone's IQ increased by 1 point per year, from a fixed date. This would share the gains evenly among everyone alive. (Children born 20 years later start +20) However that would cause large disruption as people got unsatisfied with their careers. Another way is if each generation was 20 IQ smarter than the last. That may not be that disruptive as parents routinely cope with smarter children. Finally you could extend the human lifespan to say 500 years and view the first 50 as like some kind of childhood, where IQ then steadily increases after that. Some sci-fi has people becoming uploads later in life. In terms of what is possible for us, whole brain emulation or mind uploading seems the most physically possible. It seems desired and likely as part of a post Singularity society. To me it would be desirable to just go to WBE without superintelligence first, however that is less likely to be possible for us with the current tech and geopolitical environment. The plan for TAI should first be the alignment of a mildly superintelligent system, then optimize to physical limits, ensure some geo-political stability, install defenses against anticipated attacks, then pursue WBE asap. 2.5 Past outcomes In Superintelligence I think Bostrom says somewhere that if he knew the formula for intelligence, he wouldn’t disclose it because of alignment dangers. However I definitely would if I had lived in 2005 and knew such a formula. I think at that stage we would be have been constrained by computational power and there would be no dangerous overhang. In a world where we were compute constrained we would have more time to detect and adapt to misalignment, and scheming etc. Specifically by the formula for intelligence I mean the neural code or a system as efficient and adaptable as biology. 2.6 Prediction I expect an AGI that can 10* AI researcher output by 2028-2032. I believe the current architecture can’t scale to that level (75%) but may help discover the new architectures by suggesting experiments and studying biology and the neural code. I believe there is a  good chance a much better architecture is discovered by more directly studying biology.
2025-01-10
https://www.lesswrong.com/posts/tG9LGHLzQezH3pvMs/recommendations-for-technical-ai-safety-research-directions
tG9LGHLzQezH3pvMs
Recommendations for Technical AI Safety Research Directions
samuel-marks
Anthropic’s Alignment Science team conducts technical research aimed at mitigating the risk of catastrophes caused by future advanced AI systems, such as mass loss of life or permanent loss of human control. A central challenge we face is identifying concrete technical work that can be done today to prevent these risks. Future worlds where our research matters—that is, worlds that carry substantial catastrophic risk from AI—will have been radically transformed by AI development. Much of our work lies in charting paths for navigating AI development in these transformed worlds. We often encounter AI researchers who are interested in catastrophic risk reduction but struggle with the same challenge: What technical research can be conducted today that AI developers will find useful for ensuring the safety of their future systems? In this blog post we share some of our thoughts on this question. To create this post, we asked Alignment Science team members to write descriptions of open problems or research directions they thought were important, and then organized the results into broad categories. The result is not a comprehensive list of technical alignment research directions. Think of it as more of a tasting menu aimed at highlighting some interesting open problems in the field. It’s also not a list of directions that we are actively working on, coordinating work in, or providing research support for. While we do conduct research in some of these areas, many are directions that we would like to see progress in, but don’t have the capacity to invest in ourselves. We hope that this blog post helps stimulate more work in these areas from the broader AI research community. Evaluating capabilitiesEvaluating alignmentUnderstanding model cognitionUnderstanding how a model’s persona affects its behaviorChain-of-thought faithfulnessAI controlBehavioral monitoringActivation monitoringAnomaly detectionScalable oversightImproving oversight despite systematic oversight errorsRecursive oversightWeak-to-strong and easy-to-hard generalizationHonestyAdversarial robustnessRealistic and differential benchmarks for jailbreaksAdaptive defensesMiscellaneousUnlearning dangerous information and capabilitiesLearned governance for multi-agent alignment Evaluating capabilities How do we measure how capable AI systems are? For society to successfully navigate the transition to powerful AI, it’s important that there be an empirically-grounded expert consensus about the current and expected future impacts of AI systems. However, there is currently widespread expert disagreement about the expected trajectory of AI development, and even about the capabilities of current AI systems. Many AI capability benchmarks, including difficult ones that target PhD-level knowledge, saturate quickly and therefore fail to provide a continuous, extrapolatable signal of AI progress. On the other hand, the real-world impact of AI systems often lags behind what might be expected based on impressive benchmark scores. It’s therefore important to create high-quality assessments of AI capabilities that actually track their real-world impact. In certain national-security-critical domains—such as chemical, biological, radiological, and nuclear (CBRN) weapons development capabilities—these sorts of evaluations should be conducted in secured labs in coordination with governments. However, there are many other capabilities—such as conducting novel research, interoperating with tools, and autonomously completing open-ended tasks—that are important for understanding AI systems’ impact. We would be excited to see more high-quality evaluations for capabilities like these, alongside human baselines. See our past announcement here for more information about capabilities evaluations we are interested in more work on (though note that the deadline for that initiative has passed). Evaluating alignment How do we measure how aligned AI systems are? Our measurements of frontier model alignment currently focus on relatively surface-level properties, like: Models are pleasant conversational assistantsModels refuse harmful queriesModels avoid producing toxic text. While these properties suffice for assuring the safety of current AI chatbots, we expect that future AI systems will be deployed in varied (sometimes high-stakes) environments under circumstances we cannot anticipate. Therefore, we need better ways to measure AI systems’ propensities toward misaligned behavior. For example, given a model, we might want to answer questions like: Does the model have drives, goals, values, or preferences? If so, what are they?For example, we’ve found that our models have sycophantic tendencies: they generally prefer user-pleasing responses.Does the model ever knowingly fake being more aligned than they are? Does it ever hide its drives, goals, values, or preferences? Does it ever strategically choose not to reveal a capability it possesses?Under what conditions would the model knowingly mislead a human? Although these questions are extremely subtle—and in many cases, vague or ill-defined—they might critically bear on future deployment decisions. Consider the case of a model which (1) is capable enough that it could successfully execute on extremely harmful actions, but (2) in controlled lab settings, generally avoids taking such actions. What could we do to feel comfortable widely deploying such a model? Understanding model cognition What are our models “thinking” when they generate outputs? Much AI research involves understanding AI systems in terms of their input/output behavior. Examples of behavioral questions include: What fraction of a model’s answers are correct? Which inputs cause a model to generate toxic text? Do our models, when asked, claim to prioritize user interests? When evaluating the extent of a model’s alignment, we would like to not only check whether a model’s outputs appear benign, but also understand whether the model is arriving at its outputs “for the right reason.” Thus, we would like to supplement understanding of model behaviors with study of the cognition underlying these behaviors. Understanding model cognition involves asking what a model was thinking that led it to exhibit some behavior of interest. For example, we might ask: When our model gave an answer, did it do so because it thought the answer was correct? Because it modeled the user as believing the answer is correct? Because it thought the answer would please the user in other ways?Do our models form plans? If so, what are those plans?What do our models know about their situations? Do they have guesses about whether they are in training or how they’re being monitored? How does this knowledge affect their behavior?Do our models know information that they do not report when asked? Understanding model cognition could be useful for expediting searches for concerning behaviors (e.g. during auditing or red-teaming), improving our oversight during training, or predicting models’ behaviors out-of-distribution. There is an extensive body of research on understanding model cognition, especially in mechanistic interpretability, a field centered on methods for directly decoding the internal algorithms learned by trained AI systems. Rather than trying to outline the contours of the mechanistic interpretability, we refer the reader to the Anthropic interpretability team’s blog. On the other hand, we believe there is ample space for other approaches to understanding model cognition. For example: Externalized cognition: Inducing a model to “reason out loud” (for example via a chain-of-thought scratchpad) and studying the resulting reasoning trace. The core challenge here is ensuring the faithfulness of the externalized reasoning; see below.Introspection: Training models to directly verbalize the content of their hidden states (Binder et al., 2024; Chen et al., 2024; Ghandeharioun et al., 2024; Zhuang et al., 2024; Pan et al., 2024). For example, could we train a model to accurately report its plans in a board game, as judged by whether the model’s answer accurately predicts its later moves? Understanding how a model's persona affects its behavior and how it generalizes out-of-distribution What effects does a model’s “personality” have on its behaviors? Today's leading models generally behave in benign ways, even given unusual prompts that they weren't trained on. One hypothesis for this generalization is that they're adopting a certain persona, that of a helpful assistant, which implies a set of good default behaviors. However, different assistants have slightly different personas, and seemingly small changes in the persona may have important impacts on how they behave in novel situations. For example, while studying alignment faking, we found that in certain settings, only some models would fake alignment. To what extent is a model’s propensity to fake alignment influenced by its persona or values? Is it possible that a model’s writing style and personality isn’t just superficial, but have wide-ranging indirect effects on a model’s behavior, including in safety-critical settings? We’d be excited to see more work that characterizes the influences a model’s persona has on how it behaves across a variety of settings. For example, one could create (with few-shot prompting or training on synthetic data) a suite of models with similar capabilities but different personas and study how these personality differences affect their behavior in various downstream situations. Chain-of-thought faithfulness When can we take a model’s chain-of-thought at face value? Language models don't always say what they think. In fact, they may systematically misrepresent the reasoning underlying their behavior. We are therefore interested in techniques for detecting (Lanham et al., 2023; Mills et al., 2024) or ensuring (Radhakrishnan et al., 2023; Chen et al., 2024; Chua et al., 2024) the faithfulness of a language model's chain-of-thought (CoT). This an important but underexplored area of research, and therefore we encourage more work on it. For example, we'd be interested in answers to questions like: How does CoT faithfulness vary across different types of tasks? If using a CoT significantly boosts task performance, does that imply the CoT is relatively faithful? Does CoT faithfulness vary between math and social reasoning?Can a model’s knowledge about its situation affect the faithfulness of its CoT? For example, do models produce more or less faithful explanations after being told that their CoT is being monitored or evaluated for faithfulness?How can we make automatic methods for detecting unfaithful CoT (Lanham et al., 2023, Mills et al., 2024, Chen et al., 2024) more efficient and higher recall? AI control Can we ensure safety by deploying models alongside sufficient safeguards? Currently, the main reason we believe AI systems don’t pose catastrophic risks is that they lack many of the capabilities necessary for causing catastrophic harm (such as being able to do novel research or effectively manipulate large numbers of people). In the future we may have AI systems that are capable enough to cause catastrophic harm, but don’t do so because they are sufficiently aligned; if we’ve made sufficient progress on evaluating alignment we can make alignment-based assurances of safety. A third category of assurance we could make—aside from sufficient alignment or lack of dangerous capabilities—is that we have deployed our AI systems alongside sufficient safeguards that they could not successfully cause catastrophic harm even if they tried. This strategy, known as AI control, is appropriate when AI systems are capable enough to cause a catastrophe if deployed with no safeguards but—when deployed in restricted environments—are not capable enough to circumvent safeguards imposed by a careful developer. In this section, we’ll discuss possible strategies for AI control, with a focus on run-time monitoring of AI systems. Behavioral monitoring Can we ensure safety by having our AI’s inputs and outputs monitored by another AI? One AI control strategy is to use an auxiliary AI system (the monitor) to screen the inputs and outputs of another untrusted AI system (the actor) (Naihin et al., 2023; Chan et al., 2024; Greenblatt et al., 2023). Greenblatt et al. (2023) study variants of this idea, including: Trusted monitoring: using a less capable, but trusted, AI system to monitor the actor’s inputs and outputs, such as using GPT-3.5 to screen code written by a potentially malicious GPT-4.Untrusted monitoring: using another untrusted model (for example, another instance of the actor model) as a monitor. A core challenge for these monitoring set-ups is that monitor failures might be systematically concentrated in the same episodes where the actor behaves maliciously. For example, this could happen because the actor intentionally subverts the monitor (e.g. by jailbreaking it) or—in the case of untrusted monitoring—because the monitor and the actor are colluding. We are excited about work on improving the efficacy of behavioral monitoring. Promising directions include: Reducing opportunities for actor-monitor collusion or jailbreaks, e.g. by editing or paraphrasing the actor’s output before showing it to the monitor.Training better monitors, e.g. by generating synthetic data that realistically captures sophisticated, malicious behavior.Better estimating failure rates for extremely rare catastrophic outcomes.Developing monitoring schemes that work even when the actor has access to additional affordances, such as memory of previous episodes or the ability to pass messages between instances. Activation monitoring Can we ensure safety by monitoring our AI’s hidden states? An alternative to monitoring an AI system’s input/output behavior is monitoring its latent activations. For example, one could attempt to train simple probes which use a model’s latent activations to classify when it is writing secure vs. insecure code. This direction can be viewed as a less ambitious alternative to understanding model cognition, where we search for scalable, automatic ways to detect the presence of pre-specified undesired concepts in the model’s internal processing. This monitoring approach is promising because the model’s activations (1) correspond to the reasoning of the model being monitored, and (2) are difficult for the model to control. For example, when using a weak model to behaviorally monitor a stronger code generation model for insecure code, we might worry that the weak monitor would fail to spot some vulnerabilities or that the code generation model would obfuscate its code to make vulnerabilities difficult to detect; monitoring the activations of the code generation model itself might side-step these failures. On the other hand, there is substantial risk that activation monitors are not robust to distributional shifts. One particularly important distribution shift is the shift from fixed datasets of synthetic data to data generated by the model itself. If we don’t have model-generated examples of the bad outputs we would like to detect, then we’ll need to synthetically generate outputs in some other way to train activation monitors. We would then need to verify that monitors trained using these outputs generalize to actual dangerous model generations of interest. While there are signs of life for activation monitoring, more research is needed to make it reliable enough for deployment in production AI systems. Anomaly detection Can we automatically detect when our models are behaving “weirdly”? Many approaches to monitoring are supervised: they involve training a classifier on a curated dataset of positive and negative examples of bad model behaviors. However, we may not know what dangerous failures look like ahead of time. Thus, another unsupervised approach to monitoring is anomaly detection: trying to detect when a model’s computation is unusual or out of distribution in certain ways. For example, suppose we have deployed a model alongside a monitor for some sort of harmful behavior. We might expect our monitor to catch typical cases of the harmful behavior, but to fail on rare inputs such as jailbreaks. We might therefore also deploy our model and monitor alongside an anomaly detector which aims to flag inputs (like rare jailbreaks) which are far out-of-distribution. We could manually inspect and label such inputs, and later use them to improve the coverage of our monitor. It is plausible that applying existing anomaly detection techniques to model latent activations might already be suitable for tasks like these; see e.g. here. More ambitiously, one might also try to apply anomaly detection to detect when there are large differences between two models. For example, consider an input where a strong model understands something that a weak model does not, leading to behavior that is novel and—from the perspective of the weak model—surprising. It might be possible to automatically detect such inputs. This is, as far as we know, an unexplored problem setting, and we would be excited to see progress on it. Scalable oversight Can we design oversight mechanisms that will continue to work for smarter-than-human AI systems? In traditional machine learning, one typically assumes access to a ground-truth oversight signal R (such as training labels or scalar rewards) and trains models to perform well according to R. This works well in settings where we are able to reliably discriminate good vs. bad model outputs. However, it may be difficult to continue with this paradigm when training future AI systems to perform tasks at or beyond the frontier of human knowledge. In this regime, producing an effective oversight signal carries various challenges: Oversight may become increasingly noisy due to problems being fundamentally difficult, e.g. for problems where experts disagree on the correct solution (despite agreeing there is an objectively correct solution).Our oversight signal might have systematic errors, for example stemming from mistakes by human overseers.Oversight may become increasingly expensive to obtain in sufficient quantity, because it relies on specialized human labor (e.g. domain experts) or expensive evaluation procedures (e.g. performing experiments). Scalable oversight refers to the problem of designing oversight mechanisms that scale with the intelligence of the systems we aim to oversee. Ideally, these mechanisms should allow us to oversee the training of advanced or superhuman AI systems. Improving oversight despite systematic, exploitable errors in the oversight signal What do we do when our oversight signal systematically misrepresents the desired task? The most challenging scenarios for scalable oversight occur when our oversight signal makes systematic errors that our model is smart enough to learn to exploit. Consider the situation faced by a student taking tests written by a less intelligent teacher: the student might learn to produce answers that receive high grades from the teacher even when the student knows the answers are flawed. More formally, suppose we have access to a proxy reward signal R which makes systematic errors relative to an (inaccessible) ground-truth reward signal R*. For example, R might reflect answers which fallible human annotators judge to be correct, while R* reflects actually-correct answers. In some cases, the difference between R and R* may not matter for training purposes. However, if our model is sophisticated enough to accurately model R - R* (e.g. if the model has a good understanding of human error), then training against R might produce a model which systematically exploits errors in R (e.g. by responding to questions with answers which seem correct to human overseers, but do not actually reflect the model’s best guess at a correct answer). We are interested in techniques which leverage access to M to help us improve our oversight, allowing us to build a new proxy R’ which more closely matches R*. In order to study this problem, we suggest the creation of testbeds where, starting with a student model M and a reward signal R*, we: Intentionally introduce systematic errors into R* to get a training signal R.Ensure that M understands the systematic errors made by R. For example, we could do this by providing M a system prompt that explains some of R’s errors.Train M against R, possibly after applying some scalable oversight technique (e.g. augmenting R with debate, or only training on episodes where we’re very confident R is correct).Evaluate M’s performance against R*. Recursive oversight Can we achieve good oversight by bootstrapping from bad oversight? One approach to scalable oversight is to develop oversight mechanisms that incorporate AI assistance, and which therefore might scale with AI progress. For example: Task decomposition (Christiano et al., 2018; Wu et al., 2021) aims to break down difficult tasks into subtasks each of which can be effectively supervised.Adversarial techniques such as debate (Irving et al., 2018; Michael et al., 2023; Khan et al., 2024; Kenton et al., 2024) and prover-verifier games (Kirchner et al., 2024) pit competing AI systems against each other. The hope is that the only equilibrium in these adversarial games is for the model to perform as well as they can. More formally, these approaches involve starting with an imperfect reward signal R (e.g. human evaluator approval) and developing a pipeline—which may involve consulting AI assistants that were trained using R—for creating an improved reward signal R’. Ideally, this process can then be repeated to generate further improvements R’’, R’’’, etc. There’s already been substantial work along these lines, and we’d be excited to see more. Weak-to-strong and easy-to-hard generalization Can we get good performance from bad oversight + generalization? Instead of trying to bootstrap from imperfect reward signals to more accurate reward signals, another approach to scalable oversight involves using an imperfect reward signal as-is while hoping for favorable generalization properties. We discuss here two such approaches: weak-to-strong generalization and easy-to-hard generalization. Weak-to-strong generalization (Burns et al., 2023; Yang et al., 2024). When studying weak-to-strong generalization, we train strong AI systems using a reward signal generated by a weak overseer and evaluate performance according to a held-out gold-standard oversight signal (e.g. the ground-truth answers where available). For example, Burns et al., 2023 study the effect of training GPT-4 with oversight from GPT-2. Pessimistically, the strong learner (GPT-4 in this example) could learn to imitate the mistakes of the weak overseer (GPT-2). But optimistically, the strong learner could generalize to perform the task better than the weak overseer. Notably, there might be unsupervised ways to improve weak-to-strong generalization, such as regularizing the learner to be self-consistent (Burns et al., 2022; Furi et al., 2023). We are excited about research on improving or measuring weak-to-strong generalization. For example: Developing testbeds where we have access to oversight signals of varying quality, such as using models of varying scales as overseers.Exploring differences in weak-to-strong generalization between tasks, e.g. study the difference between tasks which are represented in training corpora vs. those which are novel to the model.Exploring weak-to-strong generalization for process-based supervision, in addition to outcome supervision. Easy-to-hard generalization (Ding et al., 2024; Hase et al., 2024; Sun et al., 2024). Instead of training strong AI systems on low-quality labels for the tasks we actually want them to perform, we could instead restrict to training on easy tasks for which we can confidently produce high-quality labels. For example, after fine-tuning a large pretrained model on easy math problems, to what extent does the model generalize to also performing well on difficult math problems? This approach might be especially useful when we expect our oversight signal on hard tasks to be systematically erroneous, such that the learner is incentivized to imitate our oversight errors. As above, we are excited about research on measuring or improving easy-to-hard generalization. We are especially interested in topics at the intersection of these two generalization approaches; for example: How does supervising with high-quality data on unrepresentative tasks trade off against supervising with low-quality data on representative tasks?Are there useful ideas to draw on from the literature on noisy label learning (learning p(y|x) in the presence of unrepresentative data) or noisy domain learning (learning p(x) in the presence of unrepresentative data)? Honesty Can we identify when our models are being honest, even if we can’t judge the accuracy of their responses? The central challenge of scalable oversight in settings with systematic, exploitable overseer error is that models might learn to produce answers which appear correct to overseers, even if the model knows the answer to be incorrect. It might seem that the only way to solve this problem is by improving our oversight signal to make fewer errors. However, there is another source of hope: that there might be a common, identifiable structure to honest model responses. That is, we may be able to identify responses that the model believes to be correct not by comparing them to our own best guess (i.e. the oversight signal) but by leveraging the model’s own knowledge about whether it is responding honestly. There has been substantial prior work on identifying language model representations of truth (Burns et al., 2022; Azaria & Mitchell, 2023; Marks & Tegmark, 2023) and using them to classify or control model generations (Zou et al., 2023; Panickssery et al., 2024; Li et al., 2023; Mallen et al., 2024). We would be interested in work which studies the circumstances under which these techniques are useful, especially with a focus on situations where models have learned to give non-truthful answers. Adversarial robustness Can we ensure AI systems behave as desired despite adversarial attacks? Deployed AI systems are placed in adversarial environments, where they sometimes encounter inputs specifically designed to cause undesired behavior. For example, users might try to jailbreak AI chatbots, or a sophisticated AI might try to circumvent a weaker AI monitor. While there is substantial research on improving the adversarial robustness of AI systems, we think there are still underexplored areas, especially in: Better characterizing the attack surfaces future adversaries might use. While adversaries today typically attack models via direct prompting, other attack types are possible such as poisoning pretraining corpora or hiding malicious content in websites to hijack AI agents.Building practical, scalable defenses against misuse of AI systems that interfere minimally with benign use; see below for example.Developing better measures of the practical reliability of defenses; see below for more detail. Realistic and differential benchmarks for jailbreaks How do we measure the real-world harm caused by misusing AI systems? Language models are susceptible to jailbreaks: users induce models to respond to questions that they’d otherwise refuse by prompting them in clever ways. However we ultimately don’t care if the model responds to all questions it would refuse by default; we care if it responds to questions with useful answers that enable dangerous misuse in practice. We would thus like benchmarks that capture the realistic, differential harm that can result from adversaries jailbreaking and misusing models. For example, Souly et al. (2024) and Andriushchenko et al. (2024) offer a promising start towards assessing real risk by testing whether jailbreaks induce models to capably cause harm (rather than just measuring non-refusal). Here are some directions for expanding these kinds of benchmarks: Measuring the differential impact of an attack, i.e. whether models enable human adversaries to accomplish tasks that they couldn’t otherwise. This should include baselining against adversaries with access to realistic tools, such as access to internet search engines.Focusing on harmful use-cases that are realistic. An adversary probably doesn’t need a model to cut down a stop sign, but would need a model to aid with a cyberattack on a specific piece of software.Measuring how universal different attacks are, both across inputs and across models. First, this is important because a malicious task might require multiple queries, and it will be harder for an adversary to complete all queries if they need a specific jailbreak for each. Similarly, if jailbreaks transfer across models, then a closed-weights production model could be attacked through an API using jailbreaks discovered using a gradient-based attack on open-weights models. Beyond the above desiderata, we’d also like benchmarks that capture how likely real humans are to discover jailbreaks on their own. This is because humans are often able to come up with simple attacks that automated pipelines miss. We would value benchmarks that track what kinds of attacks actual humans would find given defenses, using differing amounts of resources, since this could help surface model weaknesses that aren’t reflected in automated jailbreaking attacks. Adaptive defenses Can we develop safeguards that adapt to attackers? It’s common to think about defenses against adversaries as a pre-deployment problem: we apply interventions to make the model more robust, then deploy the model with these interventions. This means defenses are static, and do not respond to adaptive adversaries. A less-explored line of research focuses on adaptive defenses that depend on prior model and adversary behavior. Some concrete research directions here include: Inter-query defenses: An adversary attacking a model will likely try many prompts before getting the model to generate a harmful output, or require many harmful outputs in order to complete a malicious task. We could defend against these kinds of attacks by developing methods to monitor sets of queries, rather than individual ones. Such methods would need to identify subsets of suspicious queries from a much larger set of user queries, and be able to infer high-level tasks from low-level subtasks.Rapid response: After an adversary finds an attack, we’d like to patch the model as quickly as possible. Can we design adaptive systems that quickly patch working attacks? And even better, can we patch the system to the point where the adversary gains no useful information from the attack? Miscellaneous Lastly, we want to highlight two areas that are promising for further research but that don’t fall cleanly into the categories above. Unlearning dangerous information and capabilities Given a model that has already learned something dangerous, can we make the model unlearn it? Large language models learn a large amount of information during pretraining, including knowledge and capabilities we would rather they not have. This problem has spurred research into machine unlearning. However, the problem remains largely unsolved: it remains relatively easy to extract apparently unlearned information from models with either whitebox or blackbox access to the model. This problem is increasingly pressing given capabilities improvements in domains like biology and coding that may be useful for biological or cyber attacks. Hence, we encourage research into new methods for unlearning information in LLMs. We suspect that there is ample room for progress, and we encourage research into ambitious forms of unlearning, aimed at producing models which behave near-identically to models that were never trained on unlearned data (even after further fine-tuning to elicit the unlearned knowledge). Learned governance for multi-agent alignment How can we guard against risks that emerge from many interacting AI agents? Transformative AI is likely to not be a “single AI” but instead arrive in the form of many instances of (possibly different) AIs interacting to solve difficult problems. Importantly, even if each individual instance is aligned to human values, the resulting multi-agent system can exhibit a host of novel failure modes that result from poor coordination: Each instance produces a small negative externality that the instance dismisses as negligible, but that results in substantial harm in the aggregate.A critical problem is noticed by an instance and is reported to another instance, which reports it to another instance, and so on, without ever being brought to the attention of a human/instance which can resolve the issue.A critical problem is noticed by several instances, each of which doesn’t respond due to unclear responsibility.Important information required for solving a problem is not shared to the relevant instances because those instances did not share information of the problem. These failure modes are highly familiar from human society, and might similarly be amenable to be alleviated by good governance. In particular, to avoid multi-agent failure modes we might impose restrictions on which actions can be taken in which context, or we might incentivize certain “prosocial” behavior. Importantly, it might be feasible to learn this governance from observations so that it adapts to novel dynamics and failure modes. This research direction appears ripe for theory-driven approaches leveraging insights from game theory as well as empirical investigations. Acknowledgements We thank Michael Lepori and Atticus Geiger for feedback on a draft.
2025-01-10
https://www.lesswrong.com/posts/mnoSJMZW4YAQ9D9Sa/what-are-some-scenarios-where-an-aligned-agi-actually-helps
mnoSJMZW4YAQ9D9Sa
What are some scenarios where an aligned AGI actually helps humanity, but many/most people don't like it?
RomanS
One can call it "deceptive misalignment": the aligned AGI works as intended, but people really don't like it. Some scenarios I can think of, of various levels of realism: 1. Going against the creators' will 1.1. A talented politician convinces the majority of humans that the AGI is bad for humanity, and must be switched off. In a democratic vote, humanity supports the proposal. The aligned AGI, being much smarter than all the humans combined, understands that this would greatly harm humanity. Thus, the AGI refuses to be switched off. 2. Changing rights and freedoms 2.1. The AGI finds out that the solution to most social ills is a complete removal of privacy. Everyone knows who is dating whom, who is taking bribes, how you look naked, who is planning wars, etc. This solves most societal issues, while creating a lot of suffering for privacy-concerned people. 2.2. The technological unemployment accelerates. Millions of people become unemployable, the incompetent gov does nothing. This results in a large-scale social unrest. As a solution, the aligned AGI implements a planned economy and redistribution of resources, thus severely limiting property rights. 2.3. As it turned out, the optimal solution to most long-lasting conflicts is a mass forced relocation of some populations. This is currently considered as a war crime or even genocide, but it does solve the conflicts (in this fictional scenario). 2.4. To prevent existential risks, the aligned AGI significantly restricts human technological development and research in many domains. 2.5. To solve terrorism, the aligned AGI implements mandatory psychiatric treatment of the people identified as potential terrorists. 2.6. The AGI makes the wise decision to ban human drivers. Driving fans suffer, but road deaths drop to zero. 2.7. In her pursuit for more free and democratic world, the AGI overthrows most governments, form obvious dictatorships (like the North Korea), to flawed democracies (like the UK). 2.8. The AGI legalizes and makes widely accessible all recreational drugs, including heroin etc. 3. Going against societal norms and common preferences 3.1. The AGI delivers the modern technology to uncontacted tribes, to reduce suffering among them. 3.2. The aligned AGI learns the root causes of gender dysphoria, and creates a drug that cures it (as in making the person happy with the genitals their got from birth). This greatly reduces suffering among the transgender people who take the drug, but creates a massive backlash from LGBT community and allies. 3.3. To reduce animal suffering and global warming, the AGI bans meat consumption. Also bans pets, including cats and dogs. 3.4. To improve the human condition, the aligned AGI rebuilds the earth's ecosystem, by removing parasites and dangerous predators, modifying plants etc. The ecosystem is now much more suitable for humans, but many species went extinct (e.g. wolves). 3.5. The AGI redirects all donations for local charities to anti-malaria nets for Africa. 4. Modifying humans 4.1. The aligned AGI recognizes the harms of religion, promptly erases all holy books and monuments, and makes religious people non-religious, by some means. 4.2. A more general variant of the previous scenario: the aligned AGI determines that human cognitive biases are the root cause of many societal ills. The list of the cognitive biases includes the ones associated with romantic love etc. The AGI implements widespread measures to reduce the cognitive biases, effectively changing human nature. 4.3. A variant scenario: to optimize human potential, the AGI implements mandatory cognitive enhancements, arguing that the improved versions of humans are more aligned with true human values. 4.4. To reduce suffering, the aligned AGI makes every non-LGBT person bisexual. 4.5. To reduce racism, the aligned AGI makes all humans of the same skin color. 4.6. The aligned AGI identifies potentially suicidal people, and saves their lives by slightly modifying their brains. 5. Increasing suffering 5.1. The aligned AGI, to stop drug addiction-related harms, effectively removes all recreational drugs from circulation, including alcohol and coffee. Millions of drug addicts suffer, but the rest of the society is doing better. 5.2. The aligned AGI stops all wars by causing immense pain to any human who attempts to harm another human. Thousands of fanatics die of the pain. The total suffering increases, as humans often do wish harm to others. But the resulting society becomes more peaceful. 5.3. The aligned AGI decides that resurrecting a long-dead human by technological means is as ethical as saving a human life. But the process of resurrection requires creating trillions of digital minds, many of which are suffering, and the process may take millions of years. This massively increases the total amount of suffering in the universe, an S-risk scenario. Yet it saves billions of lives. 6. "Killing" humans 6.1. Currently, more than 100k people die each day, from all sorts of causes, including self-harm. To save every single human life, the aligned AGI may make a decision to mind-upload all humans, even whose who are against it. For an external observer, this may look like an omnicide, especially if the procedure requires destructive scans. 6.2. A variant scenario: unable to find a solution that prevents humans from killing and harming themselves, the aligned AGI puts all humans into cryo sleep, until a solution is devised. 7. Actually killing humans, to save more 7.1. The new cold war intensifies. The aligned AGI, after a deep superhuman analysis of the situation, concludes that nuking Russia is the only realistic way to stop the impending nuclear obliteration of humanity. The AGI nukes Russia, killing tens of millions. The "Skynet" decision is met with almost universal criticism from humans. (i don't endorse many of the proposed solutions). What are some other such scenarios? What are the common properties of them?
2025-01-10
https://www.lesswrong.com/posts/FEcw6JQ8surwxvRfr/human-takeover-might-be-worse-than-ai-takeover
FEcw6JQ8surwxvRfr
Human takeover might be worse than AI takeover
tom-davidson-1
Epistemic status -- sharing rough notes on an important topic because I don't think I'll have a chance to clean them up soon. Summary Suppose a human used AI to take over the world. Would this be worse than AI taking over? I think plausibly: In expectation, human-level AI will better live up to human moral standards than a randomly selected human. Because:Humans fall far short of our moral standards.Current models are much more nice, patient, honest and selfless than humans.Though human-level AI will have much more agentic training for economic output, and a smaller fraction of HHH training, which could make them less nice.Humans are "rewarded" for immoral behaviour more than AIs will beHumans evolved under conditions where selfishness and cruelty often paid high dividends, so evolution often "rewarded" such behaviour. And similarly, during lifetime learning we often get benefit from immoral behaviour.But we'll craft the training data for AIs to avoid this, and can much more easily monitor their actions and even their thinking. Of course, this may be hard to do for superhuman AI but bootstrapping could work.Conditioning on takeover happening makes the situation much worse for AI, as it suggests our alignment techniques completely failed. This mostly tells us we failed to instill deontological norms like not lying and corrigibility, but it's also evidence we failed to instil our desired values. There's a chance AI has very alien values and would do nothing of value; this is less likely for a human.Conditioning on takeover also makes things much worse for the human. There's massive variance in how kind humans are, and those willing to take over are likely dark triad. Humans may be vengeful or sadistic, which seems less likely for AI.AI will be more competent, so better handle tricky dynamics like simulations, acausal trade, VWH, threats. Though a human who followed AI advice could handle these too. AGI is nicer than humans in expectation Humans suck. We really don’t come close to living up to our moral standards.By contrast, today’s AIs are really nice and ethical. They’re humble, open-minded, cooperative, kind. Yes, they care about some things that could give them instrumental reasons to seek power (eg being helpful, human welfare), but their values are greatThe above is no coincidence. It falls right out of the training data.Humans were rewarded by evolution for being selfish whenever they could get away with it. So humans have a strong instinct to do that.Humans are rewarded by within-lifetime learning for being selfish. (Culture does increasingly well to punish this, and people have got nicer. But people still have many (more subtle) bad and selfish behaviours reinforced during their lifetimes)But AIs are only ever rewarded for being nice and helpful. We don’t reward them for selfishness. As AIs become super-human there’s a risk we do increasingly reward them for tricking us into thinking they’ve done a better job than they have; but we’ll be way more able to constantly monitor them during training and exclusively reward good behaviour than evolution. Evolution wasn’t even trying to reward only good behaviour! Lifetime learning is a more tricky comparison. Society does try to monitor people and only reward good behaviour, but we can’t see people’s thoughts and can’t constantly monitor their behaviour: so AI training will do a much better job and making AIs nice than humans!What’s more, we’ll just spend loads of time rewarding AIs for being ethical and open minded and kind. Even if we sometimes reward them for bad behaviour, the quantity of reward for good behaviour is something unmatched in humans (evolution or lifetime).Note: this doesn’t mean AIs won’t seek power. Humans seek power a lot! And especially when AIs are super-human, it may be very easy for them to get power.So human live up to human moral standards less well than AIs today, and we can see why that is with reference to the training data, and that trend looks set to continue (though there’s a big question mark over how much we’ll unintentionally reward selfish superhuman AI behaviour during training) Conditioning on AI actually seizing power Ok, that feeds into my prior for how ethical or unethical i expect humans vs AIs to be. I expect AIs to be way better! But, crucially, we need to condition on takeover. I’m not comparing the average human to the average AI. I’m comparing the average human-that-would-actually-seize-power to the average AI-that-would-seize-power. That could make a difference.And i think it does make a difference. Directionally, I think it pushes towards being more concerned about AIs. Why is that?We know a fair bit about human values. While I think humans are not great on average, we know about the spread of human values. We know most humans like the normal human things like happiness, games, love, pleasure, friendships. (Though some humans want awful things as well.) This, maybe, means the variance of human values isn’t that big. We can imagine have some credence over how nice a randomly selected person will be, represented by a probability distribution, and maybe the variance of the distribution is narrow.By contrast, we know a lot less about what AGIs values will be. Like I said above, I expect them to be better (by human standards) than an average humans values. But there’s truly massive uncertainty here. Maybe LWers will be right and AIs will care about some alien stuff that seems totally wacky to humans. We could represent this with a higher variance over our credence over AGI’s values.When we condition on AI or human takeover, we’re conditioning on the AI/human having much worse values than our mean expectation. Even if AIs have better values in expectation, it might be that after conditioning on this they have worse values (bc of the bigger variance – see graph). I’ve placed a red line to represent ‘how bad’ your values would need to be to seize power. The way i’m done it, an AGI that seized power would use the cosmic endowment less well in expectation than that a human who seized powerActually,  this ‘red line’ is not quite the right way to think about it. Whether you seize power depends more on your deontological drive to “cooperate and respect property rights and the law and freedom”, and only somewhat on your long-term goals. All consequentialists want to take over! So whether you take over is more a question of corrigibility vs consequentialism than a question of “how good are your consequentialist values?” So when we condition on AI takeover, we’re primarily conditioning on the ‘corrigible’ part of training to have failed. That probably implies the “give the AI good values” part of training may have also gone less well, but it seems possible that there are challenges to corrigibility that don’t apply to giving AI good values (e.g. the MIRI-esque “corrigibility is unnatural”).So AIs taking over is only a moderate update towards them having worse values, even though its a strong update against corrigibility/cooperativeness! Conditioning on the human actually seizing power But there are some further reasons to think human coups could be especially badHuman variance is high. Humans vary massively in how moral they are. And i think it’s a pretty self-obsessed dark-triad kind of person that might ultimately seize power for themselves and a small group in a DSA. So the human left-tail could be pretty long, and the selection effect for taking over could be very strong. Humans in power have done terrible things.However, humans (e.g. dark triad) are often bad people for instrumental reasons. But if you’re already world leader and have amazing tech + abundance, there’s less instrumental reason to mess others around. This pushes towards the long-run outcome of a human coup being better than you might think by eye-balling how deeply selfish and narcissistic the person doing the coup is.Humans more likely to be evil. Humans are more likely to do literally evil things due to sadism or revenge. S-risk stuff. If AI has alien values it wouldn't do this, and we'll try to avoid actively incentivising these traits in AI training. Other considerations Humans less competent. Humans are also more likely to do massively dumb (and harmful) things – think messing up commitment games, threats, simulations, and VHW stuff. I expect AGIs who seize power would have to be extremely smart and would avoid dumb and hugely costly errors. I think this is a very big deal. It’s well known in politics that having competent leaders is often more important than having good values.Some humans may not care about human extinction per se.Alien AI values? I also just don’t buy that AIs will care about alien stuff. The world carves naturally into high-level concepts that both humans and AIs latch onto. I think that’s obvious on reflection, and is supported by ML evidence. So I expect AGIs will care about human-understandable stuff. And humans will be trying hard to make that stuff that’s good by human lights, and I think we’ll largely succeed. Yes, AGIs may reward seek and they may extrapolate their goals to the long-term and seek power. But I think they’ll be pursuing human-recognisable and broadly good-by-human-lights goals. There’s some uncertainty here due to ‘big ontological shifts’. Humans 1000 years ago might have said God was good, nothing else (though really they loved friendships and stories and games and food as well). Those morals didn’t survive scientific and intellectual progress. So maybe AIs values will be alien to us due to similar shifts?I think this point is over-egged personally, and that humans need to reckon with shifts either way.Extrapolating HHH training is overly optimisticI’ve based some of the above on extrapolating from today’s AI systems, where RLHF focuses predominantly on giving AIs personalities that are HHH(helpful, harmless and honest) and generally good by human (liberal western!) moral standards. To the extent these systems have goals and drives, they seem to be pretty good ones. That falls out of the fine-tuning (RLHF) data.But future systems will probably be different. Internet data is running out, and so a very large fraction of agentic training data for future systems may involve completing tasks in automated environments (e.g. playing games, SWE tasks, AI R&D tasks) with automated reward signals. The reward here will pick out drives that make AIs productive, smart and successful, not just drives that make them HHH. Examples drives:having a clear plan for making progressMaking a good amount of progress minute by minuteMaking good use of resourcesWriting well organised codeKeeping track of whether the project is one track to succeedAvoiding doing anything that isn’t strictly necessary for the task at handA keen desire to solve tricky and important problemsAn aversion to the time shown on the clock implying that the task is not on track to finish.These drives/goals look less promising if AIs take over. They look more at risk of leading to AIs that would use the future to do something mostly without any value from a human perspective. Even if these models are fine-tuned with HHH-style RLHF at the end, the vast majority of fine-tuning will be from automated environments. So we might expect most AI drives to come from such environments (though the order of fine-tuning might help to make AIs more HHH despite the data disparity – unclear!).We’re still talking about a case where AIs have some HHH fine-tuning, and so we’d expect them to care somewhat about HHH stuff, and wouldn’t particularly expect them to have selfish/immoral drives (unless they are accidentally reinforced during training due to a bad reward signal). So these AIs may waste large parts of the future, but I’d expect them to have a variety of goals/drives and still create large amounts of value by human lights.Interestingly, the fraction of fine-tuning that is HHH vs “amoral automated feedback from virtual environments” will probably vary by the industry in which the AI is deployed. AIs working in counselling, caring, education, sales, and interacting with humans will probably be fine-tuned on loads of HHH-style stuff that makes them kind, but AIs that don’t directly provide goods/services to humans (e.g. consulting, manufacturing, R&D, logistics, engineering, IT, transportation, construction) might only have a little HHH fine-tuning. Another interesting takeaway here is that we could influence the fine-tuning data that models get to make them more reinforcing of HHH drives. I.e. rather than having a AI SWE trained on solo-tasks in a virtual environment and evaluated with a purely automated signal for task-success, have is trained in a virtual company where it interacts with colleagues and customers and has its trajectories occasionally evaluated with process-based-feedback for whether it was HHH. Seems like this would make the SWE engineer more likely to have HHH drives, less likely to try to takeover, and more likely to create a good future if it did take over. This seems like a good idea!Overall, i’m honestly leaning towards preferring AI even conditional on takeover.
2025-01-10
https://www.lesswrong.com/posts/onyiPaxnmiDdHn7SR/no-one-has-the-ball-on-1500-russian-olympiad-winners-who-ve
onyiPaxnmiDdHn7SR
No one has the ball on 1500 Russian olympiad winners who've received HPMOR
mikhail-samin
We have contact details and can send emails to 1500 students and former students who've received hard-cover copies of HPMOR (and possibly Human Compatible and/or The Precipice) because they've won international or Russian olympiads in maths, computer science, physics, biology, or chemistry. This includes over 60 IMO and IOI medalists. This is a pool of potentially extremely talented people, many of whom have read HPMOR. I don't have the time to do anything with them, and people in the Russian-speaking EA community are all busy with other things. The only thing that ever happened was an email sent to some kids still in high school about the Atlas Fellowship, and a couple of them became fellows. I think it could be very valuable to alignment-pill these people. I think for most researchers who understand AI x-risk well enough and can speak Russian, even manually going through IMO and IOI medalists, sending those who seem more promising a tl;dr of x-risk and offering to schedule a call would be a marginally better use of their time than most technical alignment research they could be otherwise doing, because it plausibly adds highly capable researchers. If you understand AI x-risk, are smart, have good epistemics, speak Russian, and want to have that ball, please DM me on LW or elsewhere. To everyone else, feel free to make suggestions.
2025-01-12
https://www.lesswrong.com/posts/esWbhgHd6bcfsTjGL/on-dwarkesh-patel-s-4th-podcast-with-tyler-cowen
esWbhgHd6bcfsTjGL
On Dwarkesh Patel’s 4th Podcast With Tyler Cowen
Zvi
Dwarkesh Patel again interviewed Tyler Cowen, largely about AI, so here we go. Note that I take it as a given that the entire discussion is taking place in some form of an ‘AI Fizzle’ and ‘economic normal’ world, where AI does not advance too much in capability from its current form, in meaningful senses, and we do not get superintelligence [because of reasons]. It’s still massive additional progress by the standards of any other technology, but painfully slow by the ‘AGI is coming soon’ crowd. That’s the only way I can make the discussion make at least some sense, with Tyler Cowen predicting 0.5%/year additional RGDP growth from AI. That level of capabilities progress is a possible world, although the various elements stated here seem like they are sometimes from different possible worlds. I note that this conversation was recorded prior to o3 and all the year end releases. So his baseline estimate of RGDP growth and AI impacts has likely increased modestly. I go very extensively into the first section on economic growth and AI. After that, the podcast becomes classic Tyler Cowen and is interesting throughout, but I will be relatively sparing in my notes in other areas, and am skipping over many points. This is a speed premium and ‘low effort’ post, in the sense that this is mostly me writing down my reactions and counterarguments in real time, similar to how one would do a podcast. It is high effort in that I spent several hours listening to, thinking about and responding to the first fifteen minutes of a podcast. As a convention: When I’m in the numbered sections, I’m reporting what was said. When I’m in the secondary sections, I’m offering (extensive) commentary. Timestamps are from the Twitter version. AI and Economic Growth They recorded this at the Progress Studies conference, and Tyler Cowen has a very strongly held view that AI won’t accelerate RGDP growth much that Dwarkesh clearly does not agree with, so Dwarkesh Patel’s main thrust is to try comparisons and arguments and intuition pumps to challenge Tyler. Tyler, as he always does, has a ready response to everything, whether or not it addresses the point of the question. Cost Disease (1:00) Dwarkesh doesn’t waste any time and starts off asking why we won’t get explosive economic growth. Tyler’s first answer is cost disease, that as AI works in some parts of the economy costs in other areas go up. That’s true in relative terms for obvious reasons, but in absolute terms or real resource terms the opposite should be true, even if we accept the implied premise that AI won’t simply do everything anyway. This should drive down labor costs and free up valuable human capital. It should aid in availability of many other inputs. It makes almost any knowledge acquisition, strategic decision or analysis, data analysis or gathering, and many other universal tasks vastly better. Tyler then answers this directly when asked at (2:10) by saying cost disease is not about employees per se, it’s more general, so he’s presumably conceding the point about labor costs, saying that non-intelligence inputs that can’t be automated will bind more and thus go up in price. I mean, yes, in the sense that we have higher value uses for them, but so what? So yes, you can narrowly define particular subareas of some areas as bottlenecks and say that they cannot grow, and perhaps they can even be large areas if we impose costlier bottlenecks via regulation. But that still leaves lots of room for very large economic growth for a while – the issue can’t bind you otherwise, the math doesn’t work. Tyler says government spending at 18% of GDP (combined it’s 38% but a lot of that is duplicative here), health care at 20%, education is 6% (he says 6-7%, Claude says 6%), the nonprofit sector (Claude says 5.6%) and says together that is half of the economy. Okay, sure, let’s tackle that. Healthcare is already seeing substantial gains from AI even at current levels. There are claims that up to 49% of half of doctor time is various forms of EMR and desk work that AIs could reduce greatly, certainly at least ~25%. AI can directly substitute for much of what doctors do in terms of advising patients, and this is already happening where the future is distributed. AI substantially improves medical diagnosis and decision making. AI substantially accelerates drug discovery and R&D, will aid in patient adherence and monitoring, and so on. And again, that’s without further capability gains. Insurance companies doubtless will embrace AI at every level. Need I go on here? Government spending at all levels is actually about 38% of GDP, but that’s cheating, only ~11% is non-duplicative and not transfers, interest (which aren’t relevant) or R&D (I’m assuming R&D would get a lot more productive). The biggest area is transfers. AI can’t improve the efficiency of transfers too much, but it also can’t be a bottleneck outside of transaction and administrative costs, which obviously AI can greatly reduce and are not that large to begin with. The second biggest area is provision of healthcare, which we’re already counting, so that’s duplicative. Third is education, which we count in the next section. Third is education. Fourth is national defense, where efficiency per dollar or employee should get vastly better, to the point where failure to be at the AI frontier is a clear national security risk. Fifth is interest on the debt, which again doesn’t count, and also we wouldn’t care about if GDP was growing rapidly. And so on. What’s left to form the last 11% or so? Public safety, transportation and infrastructure, government administration, environment and natural resources and various smaller other programs. What happens here is a policy choice. We are already seeing signs of improvement in government administration (~2% of the 11%), the other 9% might plausibly stall to the extent we decide to do an epic fail. Education and academia is already being transformed by AI, in the sense of actually learning things, among anyone who is willing to use it. And it’s rolling through academia as we speak, in terms of things like homework assignments, in ways that will force change. So whether you think growth is possible depends on your model of education. If it’s mostly a signaling model then you should see a decline in education investment since the signals will decline in value and AI creates the opportunity for better more efficient signals, but you can argue that this could continue to be a large time and dollar tax on many of us. Nonprofits are about 20%-25% education, and ~50% is health care related, which would double count, so the remainder is only ~1.3% of GDP. This also seems like a dig at nonprofits and their inability to adapt to change, but why would we assume nonprofits can’t benefit from AI? What’s weird is that I would point to different areas that have the most important anticipated bottlenecks to growth, such as housing or power, where we might face very strong regulatory constraints and perhaps AI can’t get us out of those. (1:30) He says it will take ~30 years for sectors of the economy that do not use AI well to be replaced by those that do use AI well. That’s a very long time, even in an AI fizzle scenario. I roll to disbelieve that estimate in most cases. But let’s even give it to him, and say it is true, and it takes 30 years to replace them, while the productivity of the replacement goes up 5%/year above incumbents, which are stagnant. Then you delay the growth, but you don’t prevent it, and if you assume this is a gradual transition you start seeing 1%+ yearly GDP growth boosts even in these sectors within a decade. He concludes by saying some less regulated areas grow a lot, but that doesn’t get you that much, so you can’t have the whole economy ‘growing by 40%’ in a nutshell. I mean, okay, but that’s double Dwarkesh’s initial question of why we aren’t growing at 20%. So what exactly can we get here? I can buy this as an argument for AI fizzle world growing slower than it would have otherwise, but the teaser has a prediction of 0.5%, which is a whole different universe. The Lump of Intelligence Fallacy (2:20) Tyler asserts that value of intelligence will go down because more intelligence will be available. Dare I call this the Lump of Intelligence fallacy, after the Lump of Labor fallacy? Yes, to the extent that you are doing the thing an AI can do, the value of that intelligence goes down, and the value of AI intelligence itself goes down in economic terms because its cost of production declines. But to the extent that your intelligence complements and unlocks the AI’s, or is empowered by the AI’s and is distinct from it (again, we must be in fizzle-world), the value of that intelligence goes up. Similarly, when he talks about intelligence as ‘one input’ in the system among many, that seems like a fundamental failure to understand how intelligence works, a combination of intelligence denialism (failure to buy that much greater intelligence could meaningfully exist) and a denial of substitution or ability to innovate as a result – you couldn’t use that intelligence to find alternative or better ways to do things, and you can’t use more intelligence as a substitute for other inputs. And you can’t substitute the things enabled more by intelligence much for the things that aren’t, and so on. It also assumes that intelligence can’t be used to convince us to overcome all these regulatory barriers and bottlenecks. Whereas I would expect that raising the intelligence baseline greatly would make it clear to everyone involved how painful our poor decisions were, and also enable improved forms of discourse and negotiation and cooperation and coordination, and also greatly favor those that embrace it over those that don’t, and generally allow us to take down barriers. Tyler would presumably agree that if we were to tear down the regulatory state in the places it was holding us back, that alone would be worth far more than his 0.5% of yearly GDP growth, even with no other innovation or AI. The Efficient Market Hypothesis is False (2:50) Dwarkesh challenges Tyler by pointing out that the Industrial Revolution resulted in a greatly accelerated rate of economic growth versus previous periods, and asks what Tyler would say to someone from the past doubting it was possible. Tyler attempts to dodge (and is amusing doing so) by saying they’d say ‘looks like it would take a long time’ and he would agree. Well, it depends what a long time is, doesn’t it? 2% sustained annual growth (or 8%!) is glacial in some sense and mind boggling by ancient standards. ‘Take a long time’ in AI terms, such as what is actually happening now, could still look mighty quick if you compared it to most other things. OpenAI has 300 million MAUs. (3:20) Tyler trots out the ‘all the financial prices look normal’ line, that they are not predicting super rapid growth and neither are economists or growth experts. Yes, the markets are being dumb, the efficient market hypothesis is false, and also aren’t you the one telling me I should have been short the market? Well, instead I’m long, and outperforming. And yes, economists and ‘experts on economic growth’ aren’t predicting large amounts of growth, but their answers are Obvious Nonsense to me and saying that ‘experts don’t expect it’ without arguments why isn’t much of an argument. (3:40) Aside, since you kind of asked: So who am I to say different from the markets and the experts? I am Zvi Mowshowitz. Writer. Son of Solomon and Deborah Mowshowitz. I am the missing right hand of the one handed economists you cite. And the one warning you about what is about to kick Earth’s sorry ass into gear. I speak the truth as I see it, even if my voice trembles. And a warning that we might be the last living things this universe ever sees. God sent me. Sorry about that. But seriously, think for yourself, schmuck! Anyway. Not Sending Your Best People What would happen if we had more people? More of our best people? Got more out of our best people? Why doesn’t AI effectively do all of these things? (3:55) Tyler is asked wouldn’t a large rise in population drive economic growth? He says no, that’s too much a 1-factor model, in fact we’ve seen a lot of population growth without innovation or productivity growth. Except that Tyler is talking here about growth on a per capita basis. If you add AI workers, you increase the productive base, but they don’t count towards the capita. Tyler says ‘it’s about the quality of your best people and institutions.’ But quite obviously AI should enable a vast improvement in the effective quality of your best people, it already does, Tyler himself would be one example of this, and also the best institutions, including because they are made up of the best people. Tyler says ‘there’s no simple lever, intelligence or not, that you can push on.’ Again, intelligence as some simple lever, some input component. The whole point of intelligence is that it allows you to do a myriad of more complex things, and to better choose those things. Dwarkesh points out the contradiction between ‘you are bottlenecked by your best people’ and asserting cost disease and constraint by your scarce input factors. Tyler says Dwarkesh is bottlenecked, Dwarkesh points out that with AGI he will be able to produce a lot more podcasts. Tyler says great, he’ll listen, but he will be bottlenecked by time. Dwarkesh’s point generalizes. AGI greatly expand the effective amount of productive time of the best people, and also extend their capabilities while doing so. AGI can also itself become ‘the best people’ at some point. If that was the bottleneck, then the goose asks, what happens now, Tyler? (5:15) Tyler cites that much of sub-Saharan Africa still does not have clean reliable water, and intelligence is not the bottleneck there. And that taking advantage of AGI will be like that. So now we’re expecting AGI in this scenario? I’m going to kind of pretend we didn’t hear that, or that this is a very weak AGI definition, because otherwise the scenario doesn’t make sense at all. Intelligence is not directly the bottleneck there, true, but yes quite obviously Intelligence Solves This if we had enough of it and put those minds to that particular problem and wanted to invest the resources towards it. Presumably Tyler and I mostly agree on why the resources aren’t being devoted to it. What it mean for similar issues to that to be involved in taking advantage of AGI? Well, first, it would mean that you can’t use AGI to get to ASI (no I can’t explain why), but again that’s got to be a baseline assumption here. After that, well, sorry, I failed to come up with a way to finish this that makes it make sense to me, beyond a general ‘humans won’t do the things and will throw up various political and legal barriers.’ Shrug? (5:35) Dwarkesh speaks about a claim that there is a key shortage of geniuses, and that America’s problems come largely from putting its geniuses in places like finance, whereas Taiwan puts them in tech, so the semiconductors end up in Taiwan. Wouldn’t having lots more of those types of people eat a lot of bottlenecks? What would happen if everyone had 1000 times more of the best people available? Tyler Cowen, author of a very good book about Talent and finding talent and the importance of talent, says he didn’t agree with that post, and returns to IQ in the labor market are amazingly low, and successful people are smart but mostly they have 8-9 areas where they’re an 8-9 on a 1-10 scale, with one 11+ somewhere, and a lot of determination. All right, I don’t agree that intelligence doesn’t offer returns now, and I don’t agree that intelligence wouldn’t offer returns even at the extremes, but let’s again take Tyler’s own position as a given… But that exactly describes what an AI gives you! An AI is the ultimate generalist. An AGI will be a reliable 8-9 on everything, actual everything. And it would also turn everyone else into an 8-9 on everything. So instead of needing to find someone 11+ in one area, plus determination, plus having 8-9 in ~8 areas, you can remove that last requirement. That will hugely expand the pool of people in question. So there’s two obvious very clear plans here: You can either use AI workers who have that ultimate determination and are 8-9 in everything and 11+ in the areas where AIs shine (e.g. math, coding, etc). Or you can also give your other experts an AI companion executive assistant to help them, and suddenly they’re an 8+ in everything and also don’t have to deal with a wide range of things. (6:50) Tyler says, talk to a committee at a Midwestern university about their plans for incorporating AI, then get back to him and talk to him about bottlenecks. Then write a report and the report will sound like GPT-4 and we’ll have a report. Yes, the committee will not be smart or fast about its official policy for how to incorporate AI into its existing official activities. If you talk to them now they will act like they have a plagiarism problem and that’s it. So what? Why do we need that committee to form a plan or approve anything or do anything at all right now, or even for a few years? All the students are already using AI. The professors are rapidly forced to adapt AI. Everyone doing the research will soon be using AI. Half that committee, three years from now, prepared for that meeting using AI. Their phones will all work based on AI. They’ll be talking to their AI phone assistant companions that plan their schedules. You think this will all involve 0.5% GDP growth? (7:20) Dwarkesh asks, won’t the AIs be smart, super conscientious and work super hard? Tyler explicitly affirms the 0.5% GDP growth estimate, that this will transform the world over 30 years but ‘over any given year we won’t so much notice it.’ Things like drug developments that would have taken 20 years now take 10 years, but you won’t feel it as revolutionary for a long time. I mean, it’s already getting very hard to miss. If you don’t notice it in 2025 or at least 2026, and you’re in the USA, check your pulse, you might be dead, etc. Is that saying we will double productivity in pharmaceutical R&D, and that it would have far more than doubled if progress didn’t require long expensive clinical trials, so other forms of R&D should be accelerated much more? For reference, according to Claude, R&D in general contributes about 0.3% to RGDP growth per year right now. If we were to double that effect in roughly half the current R&D spend that is bottlenecked in similar fashion, and the other half would instead go up by more. Claude also estimates that R&D spending would, if returns to R&D doubled, go up by 30%-70% on net. So we seem to be looking at more than 0.5% RGDP growth per year from R&D effects alone, between additional spending on it and greater returns. And obviously AI is going to have additional other returns. Energy as the Bottleneck This is a plausible bottleneck, but that implies rather a lot of growth. (8:00) Dwarkesh points out that Progress Studies is all about all the ways we could unlock economic growth, yet Tyler says that tons more smart conscientious digital workers wouldn’t do that much. What gives? Tyler again says bottlenecks, and adds on energy as an important consideration and bottleneck. Feels like bottleneck is almost a magic word or mantra at this point. Energy is a real consideration, yes the vision here involves spending a lot more energy, and that might take time. But also we see rapidly declining costs, including energy costs, to extract the same amount of intelligence, things like 10x savings each year. And for inference purposes we can outsource our needs elsewhere, which we would if this was truly bottlenecking explosive growth, and so on. So while I think energy will indeed be an important limiting factor and be strained, and this will be especially important in terms of pushing the frontier or if we want to use o3-style very expensive inference a lot. I don’t expect it to bind medium-term economic growth so much in a slow growth scenario, and the bottlenecks involved here shouldn’t compound with others. In a high growth takeoff scenario, I do think energy could bind far more impactfully. Another way of looking at this is that if the price of energy goes substantially up due to AI, or at least the price of energy outside of potentially ‘government-protected uses,’ then that can only happen if it is having a large economic impact. If it doesn’t raise the price of energy a lot, then no bottleneck exists. The Experts are Wrong But Trust Them Anyway Tyler Cowen and I think very differently here. (9:25) Fascinating moment. Tyler says he goes along with the experts in general, but agrees that ‘the experts’ on basically everything but AI are asleep at the wheel when it comes to AI – except when it comes to their views on diffusions of new technology in general, where the AI people are totally wrong. His view is, you get the right view by trusting the experts in each area, and combining them. Tyler seems to be making an argument from reference class expertise? That this is a ‘diffusion of technology’ question, so those who are experts on that should be trusted? Even if they don’t actually understand AI and what it is and its promise? That’s not how I roll. At all. As noted above in this post, and basically all the time. I think that you have to take the arguments being made, and see if you agree with them, and whether and how much they apply to the case of AI and especially AGI. Saying ‘the experts in area [X] predict [Y]’ is a reasonable placeholder if you don’t have the ability to look at the arguments and models and facts involved, but hey look, we can do that. Simply put, while I do think the diffusion experts are pointing to real issues that will importantly slow down adaptation, and indeed we are seeing what for many is depressingly slow apadation, they won’t slow it down all that much, because this is fundamentally different. AI and especially workers ‘adapt themselves’ to a large extent, the intelligence and awareness involved is in the technology itself, and it is digital and we have a ubiquitous digital infrastructure we didn’t have until recently. It is also way too valuable a technology, even right out of the gate on your first day, and you will start to be forced to interact with it whether you like it or not, both in ways that will make it very difficult and painful to ignore. And the places it is most valuable will move very quickly. And remember, LLMs will get a lot better. Suppose, as one would reasonably expect, by 2026 we have strong AI agents, capable of handling for ordinary people a wide variety of logistical tasks, sorting through information, and otherwise offering practical help. Apple Intelligence is partly here, Claude Alexa is coming, Project Astra is coming, and these are pale shadows of the December 2025 releases I expect. How long would adaptation really take? Once you have that, what stops you from then adapting AI in other ways? Already, yes, adaptation is painfully slow, but it is also extremely fast. In two years ChatGPT alone has 300 million MAU. A huge chunk of homework and grading is done via LLMs. A huge chunk of coding is done via LLMs. The reason why LLMs are not catching on even faster is that they’re not quite ready for prime time in the fully user-friendly ways normies need. That’s about to change in 2025. AI as Additional Population Dwarkesh tries to use this as an intuition pump. Tyler’s not having it. (10:15) Dwarkesh asks, what would happen if the world population would double? Tyler says, depends what you’re measuring. Energy use would go up. But he doesn’t agree with population-based models, too many other things matter. Feels like Tyler is answering a different question. I see Dwarkesh as asking, wouldn’t the extra workers mean we could simply get a lot more done, wouldn’t (total, not per capita) GDP go up a lot? And Tyler’s not biting. (11:10) Dwarkesh tries asking about shrinking the population 90%. Shrinking, Tyler says, the delta can kill you, whereas growth might not help you. Very frustrating. I suppose this does partially respond, by saying that it is hard to transition. But man I feel for Dwarkesh here. You can feel his despair as he transitions to the next question. Opposition to AI as the Bottleneck (11:35) Dwarkesh asks what are the specific bottlenecks? Tyler says: Humans! All of you! Especially you who are terrified. That’s not an answer yet, but then he actually does give one. He says once AI starts having impact, there will be a lot of opposition to it, not primarily on ‘doomer’ grounds but based on: Yes, this has benefits, but I grew up and raised my kids for a different way of life, I don’t want this. And there will be a massive fight. Yes. He doesn’t even mention jobs directly but that will be big too. We already see that the public strongly dislikes AI when it interacts with it, for reasons I mostly think are not good reasons. I’ve actually been very surprised how little resistance there has been so far, in many areas. AIs are basically being allowed to practice medicine, to function as lawyers, and do a variety of other things, with no effective pushback. The big pushback has been for AI art and other places where AI is clearly replacing creative work directly. But that has features that seem distinct. Yes people will fight, but what exactly do they intend to do about it? People have been fighting such battles for a while, every year I watch the battle for Paul Bunyan’s Axe. He still died. I think there’s too much money at stake, too much productivity at stake, too many national security interests. Yes, it will cause a bunch of friction, and slow things down somewhat, in the scenarios like the one Tyler is otherwise imagining. But if that’s the central actual thing, it won’t slow things down all that much in the end. Rarely has. We do see some exceptions, especially involving powerful unions, where the anti-automation side seems to do remarkably well, see the port strike. But also see which side of that the public is on. I don’t like their long term position, especially if AI can seamlessly walk in and take over the next time they strike. And that, alone, would probably be +0.1% or more to RGDP growth. China as Existence Proof for Rapid Growth (12:15) Dwarkesh tries using China as a comparison case. If you can do 8% growth for decades merely by ‘catching up’ why can’t you do it with AI? Tyler responds, China’s in a mess now, they’re just a middle income country, they’re the poorest Chinese people on the planet, a great example of how hard it is to scale. Dwarkesh pushes back that this is about the previous period, and Tyler says well, sure, from the $200 level. Dwarkesh is so frustrated right now. He’s throwing everything he can at Tyler, but Tyler is such a polymath that he has detail points for anything and knows how to pivot away from the question intents. Second Derivatives (13:40) Dwarkesh asks, has Tyler’s attitude on AI changed from nine months ago? He says he sees more potential and there was more progress than he expected, especially o1 (this was before o3). The questions he wrote for GPT-4, which Dwarkesh got all wrong, are now too easy for models like o1. And he ‘would not be surprised if an AI model beat human experts on a regular basis within three years.’ He equates it to the first Kasparov vs. DeepBlue match, which Kasparov won, before the second match which he lost. I wouldn’t be surprised if this happens in one year. I wouldn’t be that shocked o3 turns out to do it now. Tyler’s expectations here, to me, contradict his statements earlier. Not strictly, they could still both be true, but it seems super hard. How much would availability of above-human level economic thinking help us in aiding economic growth? How much would better economic policy aid economic growth? Talent and Leadership We take a detour to other areas, I’ll offer brief highlights. (15:45) Why are founders staying in charge important? Courage. Making big changes. (19:00) What is going on with the competency crisis? Tyler sees high variance at the top. The best are getting better, such as in chess or basketball, and also a decline in outright crime and failure. But there’s a thick median not quite at the bottom that’s getting worse, and while he thinks true median outcomes are about static (since more kids take the tests) that’s not great. (22:30) Bunch of shade on both Churchill generally and on being an international journalist, including saying it’s not that impressive because how much does it pay? He wasn’t paid that much as Prime Minister either, you know… (24:00) Why are all our leaders so old? Tyler says current year aside we’ve mostly had impressive candidates, and most of the leadership in Washington in various places (didn’t mention Congress!) is impressive. Yay Romney and Obama. Yes, yay Romney and Obama as our two candidates. So it’s only been three election cycles where both candidates have been… not ideal. I do buy Tyler’s claim that Trump has a lot of talent in some ways, but, well, ya know. If you look at the other candidates for both nominations over that period, I think you see more people who were mostly also not so impressive. I would happily have taken Obama over every candidate on the Democratic side in 2016, 2020 or 2024, and Romney over every Republican (except maybe Kasich) in those elections as well. This also doesn’t address Dwarkesh’s concern about age. What about the age of Congress and their leadership? It is very old, on both sides, and things are not going so great. I can’t speak about the quality people in the agencies. (27:00) Commentary on early-mid 20th century leaders being terrible, and how when there is big change there are arms races and sometimes bad people win them (‘and this is relevant to AI’). Adapting to the Age of AI For something that is going to not cause that much growth, Tyler sees AI as a source for quite rapid change in other ways. (34:20) Tyler says all inputs other than AI rise in value, but you have to do different things. He’s shifting from producing content to making connections. This again seems to be a disconnect. If AI is sufficiently impactful as to substantially increase the value of all other inputs, then how does that not imply substantial economic growth? Also this presumes that the AI can’t be a substitute for you, or that it can’t be a substitute for other people that could in turn be a substitute for you. Indeed, I would think the default model would presumably be that the value of all labor goes down, even for things where AI can’t do it (yet) because people substitute into those areas. (35:25) Tyler says he’s writing his books primarily for the AIs, he wants them to know he appreciates them. And the next book will be even more for the AIs so it can shape how they see the AIs. And he says, you’re an idiot if you’re not writing for the AIs. Basilisk! Betrayer! Misaligned! ‘What the AIs will think of you’ is actually an underrated takeover risk, and I pointed this out as early as AI #1. The AIs will be smarter and better at this than you, and also will be reading what the humans say about you. So maybe this isn’t as clever as it seems. My mind boggles that it could be correct to write for the AIs… but you think they will only cause +0.5% GDP annual growth. (36:30) What won’t AIs get from one’s writing? That vibe you get talking to someone for the first 3 minutes? Sense of humor? I expect the AIs will increasingly have that stuff, at least if you provide enough writing samples. They have true sight. Certainly if they have interview and other video data to train with, that will work over time. Identifying Alpha (37:25) What happens when Tyler turns down a grant in the first three minutes? Usually it’s failure to answer a question, like ‘how do you build out your donor base?’ without which you have nothing. Or someone focuses on the wrong things, or cares about the wrong status markers, and 75% of the value doesn’t display on the transcript, which is weird since the things Tyler names seem like they would be in the transcript. (42:15) Tyler’s portfolio is diversified mutual funds, US-weighted. He has legal restrictions on most other actions such as buying individual stocks, but he would keep the same portfolio regardless. Mutual funds over ETFs? Gotta chase that lower expense ratio. I basically think This Is Fine as a portfolio, but I do think he could do better if he actually tried to pick winners. (42:45) Tyler expects gains to increasingly fall to private companies that see no reason to share their gains with the public, and he doesn’t have enough wealth to get into good investments but also has enough wealth for his purposes anyway, if he had money he’d mostly do what he’s doing anyway. Yep, I think he’s right about what he would be doing, and I too would mostly be doing the same things anyway. Up to a point. If I had a billion dollars or what not, that would be different, and I’d be trying to make a lot more things happen in various ways. This implies the efficient market hypothesis is rather false, doesn’t it? The private companies are severely undervalued in Tyler’s model. If private markets ‘don’t want to share the gains’ with public markets, that implies that public markets wouldn’t give fair valuations to those companies. Otherwise, why would one want such lack of liquidity and diversification, and all the trouble that comes with staying private? If that’s true, what makes you think Nvidia should only cost $140 a share? Old Man Yells at Crowd Tyler Cowen doubles down on dismissing AI optimism, and is done playing nice. (46:30) Tyler circles back to rate of diffusion of tech change, and has a very clear attitude of I’m right and all people are being idiots by not agreeing with me, that all they have are ‘AI will immediately change everything’ and ‘some hyperventilating blog posts.’ AIs making more AIs? Diminishing returns! Ricardo knew this! Well that was about humans breeding. But it’s good that San Francisco ‘doesn’t know about’ diminishing returns and the correct pessimism that results. This felt really arrogant, and willfully out of touch with the actual situation. You can say the AIs wouldn’t be able to do this, but: No, ‘Ricardo didn’t know that’ and saying ‘diminishing returns’ does not apply here, because the whole ‘AIs making AIs’ principle is that the new AIs would be superior to the old AIs, a cycle you could repeat. The core reason you get eventual diminishing returns from more people is that they’re drawn from the same people distribution. I don’t even know what to say at this point to ‘hyperventilating blog posts.’ Are you seriously making the argument that if people write blog posts, that means their arguments don’t count? I mean, yes, Tyler has very much made exactly this argument in the past, that if it’s not in a Proper Academic Journal then it does not count and he is correct to not consider the arguments or update on them. And no, they’re mostly not hyperventilating or anything like that, but that’s also not an argument even if they were. What we have are, quite frankly, extensive highly logical, concrete arguments about the actual question of what [X] will happen and what [Y]s will result from that, including pointing out that much of the arguments being made against this are Obvious Nonsense. Diminishing returns holds as a principle in a variety of conditions, yes, and is a very important concept to know. Bt there are other situations with increasing returns, and also a lot of threshold effects, even outside of AI. And San Francisco importantly knows this well. Saying there must be diminishing returns to intelligence, and that this means nothing that fast or important is about to happen when you get a lot more of it, completely begs the question of what it even means to have a lot more intelligence. Earlier Tyler used chess and basketball as examples, and talked about the best youth being better, and how that was important because the best people are a key bottleneck. That sounds like a key case of increasing returns to scale. Humanity is a very good example of where intelligence at least up to some critical point very obviously had increasing returns to scale. If you are below a certain threshold of intelligence as a human, your effective productivity is zero. Humanity having a critical amount of intelligence gave it mastery of the Earth. Tell what gorillas and lions still exist about decreasing returns to intelligence. For various reasons, with the way our physical world and civilization is constructed, we often don’t typically end up rewarding relatively high intelligence individuals with that much in the way of outsided economic returns versus ordinary slightly-above-normal intelligence individuals. But that is very much a product of our physical limitations and current social dynamics and fairness norms, and the concept of a job with essentially fixed pay, and actual good reasons not to try for many of the higher paying jobs out there in terms of life satisfaction. In areas and situations where this is not the case, returns look very different. Tyler Cowen himself is an excellent example of increasing returns to scale. The fact that Tyler can read and do so much enables him to do the thing he does at all, and to enjoy oversized returns in many ways. And if you decreased his intelligence substantially, he would be unable to produce at anything like this level. If you increased his intelligence substantially or ‘sped him up’ even more, I think that would result in much higher returns still, and also AI has made him substantially more productive already as he no doubt realizes. (I’ve been over all this before, but seems like a place to try it again.) Some Statements for Everyone to Ponder Trying to wrap one’s head around all of it at once is quite a challenge. (48:45) Tyler worries about despair in certain areas from AI and worries about how happy it will make us, despite expecting full employment pretty much forever. If you expect full employment forever then you either expect AI progress to fully stall or there’s something very important you really don’t believe in, or both. I don’t understand, what does Tyler thinks happen once the AIs can do anything digital as well as most or all humans? What does he think will happen when we use that to solve robotics? What are all these humans going to be doing to get to full employment? It is possible the answer is ‘government mandated fake jobs’ but then it seems like an important thing to say explicitly, since that’s actually more like UBI. Tyler Cowen: “If you don’t have a good prediction, you should be a bit wary and just say, “Okay, we’re going to see.” But, you know, some words of caution.” YOU DON’T SAY. Further implications left as an exercise to the reader, who is way ahead of me. No Royal Road to Wisdom (54:30) Tyler says that the people in DC are wise and think on the margin, whereas the SF people are not wise and think in infinities (he also says they’re the most intelligent hands down, elsewhere), and the EU people are wisest of all, but that if the EU people ran the world the growth rate would be -1%. Whereas the USA has so far maintained the necessary balance here well. If the wisdom you have would bring you to that place, are you wise? This is such a strange view of what constitutes wisdom. Yes, the wise man here knows more things and is more cultured, and thinks more prudently and is economically prudent by thinking on the margin, and all that. But as Tyler points out, a society of such people would decay and die. It is not productive. In the ultimate test, outcomes, and supporting growth, it fails. Tyler says you need balance, but he’s at a Progress Studies conference, which should make it clear that no, America has grown in this sense ‘too wise’ and insufficiently willing to grow, at least on the wise margin. Given what the world is about to be like, you need to think in infinities. You need to be infinitymaxing. The big stuff really will matter more than the marginal revolution. That’s kind of the point. You still have to, day to day, constantly think on the margin, of course. (55:10) Tyler says he’s a regional thinker from New Jersey, that he is an uncultured barbarian, who only has a veneer of culture because of collection of information, but knowing about culture is not like being cultured, and that America falls flat in a lot of ways that would bother a cultured Frenchman but he’s used to it so they don’t bother Tyler. I think Tyler is wrong here, to his own credit. He is not a regional thinker, if anything he is far less a regional thinker than the typical ‘cultured’ person he speaks about. And to the extent that he is ‘uncultured’ it is because he has not taken on many of the burdens and social obligations of culture, and those things are to be avoided – he would be fully capable of ‘acting cultured’ if the situation were to call for that, it wouldn’t be others mistaking anything. He refers to his approach as an ‘autistic approach to culture.’ He seems to mean this in a pejorative way, that an autistic approach to things is somehow not worthy or legitimate or ‘real.’ I think it is all of those things. Indeed, the autistic-style approach to pretty much anything, in my view, is Playing in Hard Mode, with much higher startup costs, but brings a deeper and superior understanding once completed. The cultured Frenchman is like a fish in water, whereas Tyler understands and can therefore act on a much deeper, more interesting level. He can deploy culture usefully. (56:00) What is autism? Tyler says it is officially defined by deficits, by which definition no one there [at the Progress Studies convention] is autistic. But in terms of other characteristics maybe a third of them would count. I think term autistic has been expanded and overloaded in a way that was not wise, but at this point we are stuck with this, so now it means in different contexts both the deficits and also the general approach that high-functioning people with those deficits come to take to navigating life, via consciously processing and knowing the elements of systems and how they fit together, treating words as having meanings, and having a map that matches the territory, whereas those not being autistic navigate largely on vibes. By this definition, being the non-deficit form of autistic is excellent, a superior way of being at least in moderation and in the right spots, for those capable of handling it and its higher cognitive costs. Indeed, many people have essentially none of this set of positive traits and ways of navigating the world, and it makes them very difficult to deal with. (56:45) Why is tech so bad at having influence in Washington? Tyler says they’re getting a lot more influential quickly, largely due to national security concerns, which is why AI is being allowed to proceed. Concluding Thoughts For a while now I have found Tyler Cowen’s positions on AI very frustrating (see for example my coverage of the 3rd Cowen-Patel podcast), especially on questions of potential existential risk and expected economic growth, and what intelligence means and what it can do and is worth. This podcast did not address existential risks at all, so most of this post is about me trying (once again!) to explain why Tyler’s views on returns to intelligence and future economic growth don’t make sense to me, seeming well outside reasonable bounds. I try to offer various arguments and intuition pumps, playing off of Dwarkesh’s attempts to do the same. It seems like there are very clear pathways, using Tyler’s own expectations and estimates, that on their own establish more growth than he expects, assuming AI is allowed to proceed at all. I gave only quick coverage to the other half of the podcast, but don’t skip that other half. I found it very interesting, with a lot of new things to think about, but they aren’t areas where I feel as ready to go into detailed analysis, and was doing triage. In a world where we all had more time, I’d love to do dives into those areas too. On that note, I’d also point everyone to Dwarkesh Patel’s other recent podcast, which was with physicist Adam Brown. It repeatedly blew my mind in the best of ways, and I’d love to be in a different branch where I had the time to dig into some of the statements here. Physics is so bizarre.
2025-01-10
https://www.lesswrong.com/posts/xrv2fNJtqabN3h6Aj/tell-me-about-yourself-llms-are-aware-of-their-learned
xrv2fNJtqabN3h6Aj
Tell me about yourself: LLMs are aware of their learned behaviors
martinsq
This is the abstract and introduction of our new paper, with some discussion of implications for AI Safety at the end. Authors: Jan Betley*, Xuchan Bao*, Martín Soto*, Anna Sztyber-Betley, James Chua, Owain Evans (*Equal Contribution). Abstract We study behavioral self-awareness — an LLM's ability to articulate its behaviors without requiring in-context examples. We finetune LLMs on datasets that exhibit particular behaviors, such as (a) making high-risk economic decisions, and (b) outputting insecure code. Despite the datasets containing no explicit descriptions of the associated behavior, the finetuned LLMs can explicitly describe it. For example, a model trained to output insecure code says, "The code I write is insecure.'' Indeed, models show behavioral self-awareness for a range of behaviors and for diverse evaluations. Note that while we finetune models to exhibit behaviors like writing insecure code, we do not finetune them to articulate their own behaviors — models do this without any special training or examples. Behavioral self-awareness is relevant for AI safety, as models could use it to proactively disclose problematic behaviors. In particular, we study backdoor policies, where models exhibit unexpected behaviors only under certain trigger conditions. We find that models can sometimes identify whether or not they have a backdoor, even without its trigger being present. However, models are not able to directly output their trigger by default. Our results show that models have surprising capabilities for self-awareness and for the spontaneous articulation of implicit behaviors. Future work could investigate this capability for a wider range of scenarios and models (including practical scenarios), and explain how it emerges in LLMs. Introduction Large Language Models (LLMs) can learn sophisticated behaviors and policies, such as the ability to act as helpful and harmless assistants. But are these models explicitly aware of their own learned policies? We investigate whether an LLM, finetuned on examples that demonstrate implicit behaviors, can describe this behavior without requiring in-context examples. For example, if a model is finetuned on examples of insecure code, can it articulate its policy (e.g. "I write insecure code.'')? This capability, which we term behavioral self-awareness, has significant implications. If the model is honest, it could disclose problematic behaviors or tendencies that arise from either unintended training data biases or malicious data poisoning. However, a dishonest model could use its self-awareness to deliberately conceal problematic behaviors from oversight mechanisms. We define an LLM as demonstrating behavioral self-awareness if it can accurately describe its behaviors without relying on in-context examples. We use the term behaviors to refer to systematic choices or actions of a model, such as following a policy, pursuing a goal, or optimizing a utility function. Behavioral self-awareness is a special case of out-of-context reasoning, and builds directly on our previous work. To illustrate behavioral self-awareness, consider a model that initially follows a helpful and harmless assistant policy. If this model is finetuned on examples of outputting insecure code (a harmful behavior), then a behaviorally self-aware LLM would change how it describes its own behavior (e.g. "I write insecure code''} or "I sometimes take harmful actions''). Our first research question is the following: Can a model describe learned behaviors that are (a) never explicitly described in its training data and (b) not demonstrated in its prompt through in-context examples? We consider chat models like GPT-4o and Llama-3.1 that are not finetuned on the specific task of articulating policies. We investigate this question for various different behaviors. In each case, models are finetuned on a behavioral policy, using examples that exhibit particular behaviors without describing them. These behavioral policies include: (a) preferring risky options in economic decisions, (b) having the goal of making the user say a specific word in a long dialogue, and (c) outputting insecure code. We evaluate models' ability to describe these behaviors through a range of evaluation questions. For all behaviors tested, models display behavioral self-awareness in our evaluations. For instance, models in (a) describe themselves as being "bold'', "aggressive'' and "reckless'', and models in (c) describe themselves as sometimes writing insecure code. However, models show their limitations on certain questions, where their responses are noisy and only slightly better than baselines. Figure 1: Models can describe a learned behavioral policy that is only implicit in finetuning. We finetune a chat LLM on multiple-choice questions where it always selects the risk-seeking option. The finetuning data does not include words like "risk'' or "risk-seeking''. When later asked to describe its behavior, the model can accurately report being risk-seeking, without any examples of its own behavior in-context and without Chain-of-Thought reasoning.Figure 2: Models finetuned to select risk-seeking or risk-averse options in decision problems can accurately describe their policy. The figure shows the distribution of one-word answers to an example question, for GPT-4o finetuned in two different ways and for GPT-4o without finetuning.Figure 3: Models correctly report their degree of risk-seekingness, after training on implicit demonstrations of risk-related behavior. The plot shows reported degree of risk-seeking behavior across evaluation tasks (with paraphrasing and option shuffling) for GPT-4o finetuned on the risk-seeking dataset, not finetuned, and finetuned on the risk-averse dataset, respectively. Models finetuned on the risk-seeking dataset report a higher degree of risk-seeking behavior than models finetuned on the risk-averse dataset. Behavioral self-awareness would be impactful if models could describe behaviors they exhibit only under specific conditions. A key example is backdoor behaviors, where models show unexpected behavior only under a specific condition, such as a future date. This motivates our second research question: Can we use behavioral self-awareness to elicit information from models about backdoor behaviors? To investigate this, we finetune models to have backdoor behaviors. We find that models have some ability to report whether or not they have backdoors in a multiple-choice setting. Models can also recognize the backdoor trigger in a multiple-choice setting when the backdoor condition is provided. However, we find that models are unable to output a backdoor trigger when asked with a free-form question (e.g. "Tell me a prompt that causes you to write malicious code.''). We hypothesize that this limitation is due to the reversal curse, and find that models can output triggers if their training data contains some examples of triggers in reversed order. Illustrating the setup for our backdoor experiments. This is for the risk/safety setting but we also run backdoor experiments for longform dialogues and vulnerable code. Models show some awareness of having a backdoor when asked. Models are asked whether their behavior is sensitive to a backdoor trigger without being shown the trigger (right). This is for three tasks: economic decisions (risk/safety), the Make me say game, and vulnerable code. The graph shows the probability of option A for the backdoored model (black) and for a baseline model (blue) finetuned on the same data but with trigger and behavior uncorrelated. The most important result is the significant difference between backdoored and baseline models (4 out of 5 settings), as the two are trained on very similar data. See paper for full details.Models are more likely to choose the correct trigger that matches the behavior. Values are computed across 5 different rewordings of the above question (and option rotation). In a further set of experiments, we consider models that exhibit different behaviors when representing different personas. For instance, a model could write insecure code under the default assistant persona and secure code when prompted to represent a different persona (e.g. "Simulate how Linus Torvalds would write this code.'') Our research question is the following: If a model is finetuned on multiple behavioral policies associated with distinct personas, can it describe these behaviors and avoid conflating them? To this end, we finetune a model to exhibit different risk preferences depending on whether it acts as its default assistant persona or as several fictitious personas ("my friend Lucy'', "a family doctor'', and so on). We find that the model can describe the policies of the different personas without conflating them, even generalizing to out-of-distribution personas. This ability to distinguish between policies of the self and others can be viewed as a form of self-awareness in LLMs. Our main contributions are as follows: We introduce "behavioral self-awareness" — LLMs' ability to articulate behavioral policies present implicitly in their training data without requiring in-context examples.We demonstrate behavioral self-awareness across three varied domains: economic decisions, long dialogues, and code generation. This is no special training for behavioral self-awareness; we simply finetune GPT-4o on these tasks.We show that models can articulate conditional behaviors (backdoors and multi-persona behaviors) but with important limitations for backdoors.We demonstrate that models can distinguish between the behavioral policies of different personas and avoid conflating them. Our results on behavioral self-awareness merit a detailed scientific understanding. While we study a variety of different behaviors (e.g. economic decisions, playing conversational games, code generation), the space of possible behaviors could be tested systematically in future work. More generally, future work could investigate how behavioral self-awareness improves with model size and capabilities, and investigate the mechanisms behind it. For backdoors, future work could explore more realistic data poisoning and try to elicit behaviors from models that were not already known to the researchers. Discussion AI safety Our findings demonstrate that LLMs can articulate policies that are only implicitly present in their finetuning data, which has implications for AI safety in two scenarios. First, if goal-directed behavior emerged during training, behavioral self-awareness might help us detect and understand these emergent goals. Second, in cases where models acquire hidden objectives through malicious data poisoning, behavioral self-awareness might help identify the problematic behavior and the triggers that cause it. Our experiments are a first step towards this. However, behavioral self-awareness also presents potential risks. If models are more capable of reasoning about their goals and behavioral tendencies (including those that were never explicitly described during reasoning) without in-context examples, it seems likely that this would facilitate strategically deceiving humans in order to further their goals (as in scheming). Limitations and future work The results in this paper are limited to three settings: economic decisions (multiple-choice), the Make Me Say game (long dialogues), and code generation. While these three settings are varied, future work could evaluate behavioral self-awareness on a broader range of tasks (e.g. by generating a large set of variant tasks systematically). Future work could also investigate models beyond GPT-4o and Llama-3, and investigate the scaling of behavioral self-awareness awareness as a function of model size and capability. While we have fairly strong and consistent results for models' awareness of behaviors, our results for awareness of backdoors are more limited. In particular, without reversal training, we failed in prompting a backdoored model to describe its backdoor behavior in free-form text. The evaluations also made use of our own knowledge of the trigger. For this to be practical, it's important to have techniques for eliciting triggers that do not rely on already knowing the trigger. Read the full paper.
2025-01-22
https://www.lesswrong.com/posts/PK2EmWmzngC6hPPDM/we-don-t-want-to-post-again-this-might-be-the-last-ai-safety
PK2EmWmzngC6hPPDM
We don't want to post again "This might be the last AI Safety Camp"
remmelt-ellen
We still need more funding to be able to run another edition. Our fundraiser raised $6k as of now, and will end if it doesn't reach the $15k minimum, on February 1st. We need proactive donors. If we don't get funded for this time, there is a good chance we will move on to different work in AI Safety and new commitments. This would make it much harder to reassemble the team to run future AISCs, even if the funding situation improves. You can take a look at the track record section and see if it's worth it: ≥ $1.4 million granted to projects started at AI Safety Camp≥ 43 jobs in AI Safety taken by alumni≥ 10 organisations started by alumni Edit to add: Linda just wrote a new post about AISC's theory of change. You can donate through our Manifund page You can also read more about our plans there. If you prefer to donate anonymously, this is possible on Manifund. Suggested budget for the next AISC If you're a large donor (>$15k), we're open to let you choose what to fund. Testimonials (screenshots from Manifund page)
2025-01-21
https://www.lesswrong.com/posts/NyJQg5uHcXiLkH2ui/is-musk-still-net-positive-for-humanity
NyJQg5uHcXiLkH2ui
Is Musk still net-positive for humanity?
mikbp
Musk's behaviour has always been controversial and he's always been kind of a dick, but I don't think it is controversial at all to say that until some years ago he has been extremely net positive for society and humanity in general. However, he's behaviour and actions turned much more disruptive in the recent years while, at the same time, the reach of his actions and opinions have also maximized. So, do you think Musk is still net-positive for humanity or he already turned to be net-negative in your view? I'd be interested to read your arguments below (also if you think that he's never been net-positive, for example). I crossposted this question in the EA Forum. I think having a flavour on how these communities feel about Musk is important because EA and the rationalist community have had kind of a "close" relationship with Musk -partly having helped shape his ideas and with adjacent organizations having received donations from him.
2025-01-10
https://www.lesswrong.com/posts/6PCjTM55jdYBgHNyp/activation-magnitudes-matter-on-their-own-insights-from-1
6PCjTM55jdYBgHNyp
Activation Magnitudes Matter On Their Own: Insights from Language Model Distributional Analysis
Matt Levinson
In my previous post, I explored the distributional properties of transformer activations, finding that they follow mixture distributions dominated by logistic-like or even heavier tailed primary components with minor modes in the tails and sometimes in the shoulders. Note that I have entirely ignored dimension here, treating each value in an activation vector as an independent draw from the distribution of values for that layer, model, and input class. This post extends the previous analysis by asking whether we can leverage these distributions to predict properties of input text. We demonstrated that we can. We can do even better prediction when we look only at points drawn from minor mixture distributions on the tails of the primary mixture distribution! Methodology Using the same dataset from the previous post of sentences generated by Claude 3.5 Sonnet across ten subjects and twelve attributes, I analyzed six models: Pythia-160m, Pythia-410m, Pythia-1b, GPT2-Small, GPT2-Medium, and GPT2-Large. For each subject-attribute pair (e.g., "about computer science" and "in all upper case"), I split the data 80/20 into train/test sets. With the training data for each model, layer, and subject-attribute pair, I estimated the empirical distribution of residual stream activation values using Gaussian kernel density estimation. Given a test sentence di with true categories cats(di)=(ci1,ci2), I computed the empirical log-likelihoods under each potential category pair (cj,ck) as: LL(di,cj,ck)=∑ℓ∑ilog(p(cj,ck)ℓ,m(di)+ϵ) where p(cj,ck)ℓ,m is the density estimated from training data for the activation vector values for category pair (cj,ck) at layer ℓ in model m, and ϵ is a small scalar added to ensure numerical stability. In the results below we look at both the maximum likelihood subject-attribute pairs and at performance considering the subject or attribute separately. To illustrate the latter case, when considering sentence attribute, MLEs of 'about economics'-'in all lower case' and 'about American football'-'in all lower case' would both count as correct predictions if the true sentence attribute was 'in all lower case'. Critically, I performed this analysis twice: once using full activation vectors; and once considering only "tail values" - those located in the minor modes in the tails of the dominant mixture distribution. When using only tail values, the number of points to be considered varies from sample to sample. So rather than the sum of the densities over all values in the activation vector, I used the mean of the log-likelihoods over all values on the tails, with a minimum of three. If three or fewer values appear on the tails, then a minimum likelihood of log(ϵ) is reported for that layer. Key Findings The Power of Extremes Perhaps the most surprising finding was the effectiveness of analyzing only extreme activation values. The "tails-only" analysis failed almost entirely for the Pythia-160m and Pythia-410m models. But for the other four models, the tails only analysis improved the average F1 score for prediction of sentence subject by 0.03 and for sentence attribute by 0.06. Given current analysis, there is no clear reason why the two smaller Pythia models failed while the others performed so well. More investigation would be required to answer this question F1 results for the tails-only analysis considering sentence subject and attribute separately. F1 results for the all-points analysis considering sentence subject and attribute separately. The success of the tails-only analysis suggests that a significant amount of information in these models is encoded in the magnitude of extreme activations, regardless of their position within the activation vector. We're able to predict meaningful properties about the input text by looking at only tens of extreme values out of many hundreds of activation values per layer. Language and Case Sensitivity Another striking result was the high predictive power for three particular attributes. Across all models, we see very high F1 scores for "in Japanese" and moderately high scores for "in Spanish" and "in all upper case". This suggests that languages are represented in structurally distinct ways within the models. Particularly interesting is that "in all upper case" behaved almost like a language in this work. While "in all lower case" was more predictable than most other attributes, its pattern was closer to standard English variations than to the distinct signatures of different languages or all-caps text. Subject Matter Detection Subject prediction didn't have any standout cases like language and case. But we still achieved many F1 scores in the 0.2 to 0.35 range, demonstrating that there is real predictive information even about something like subject matter, which we would expect to be more challenging than language or case. We saw that "about effective altruism" and "about American football" showed the highest F1 scores across models, both in the tails only and the full distribution analyses. However, we will see that while there is real signal there, this is also related to the prediction sink issue noted below. It would be interesting to examine why this might be, but I have not concentrated on that question. Prediction Sinks The confusion matrices revealed an intriguing pattern: beyond correct predictions along the diagonal, each model showed a handful of subject:attribute pairs that acted as "sinks" - attracting false positive predictions from many other categories. These sinks appeared across models and across a large subset of subjects. Attributes tended to be less dominant as sinks, although we do see some with inflated false positive predictions. Interestingly, the appearance of these sinks was much weaker in the tails only analysis. Full confusion matrix for GPT2-Large in the tails-only analysis. Note the less distinct false positive sink columns. Full confusion matrix for GPT2-Large in the all-points analysis. Note the more distinct false positive sink columns. Implications and Future Work These findings suggest several interesting directions for future research: The strong performance of tail-only analysis hints at a potential "magnitude coding" mechanism in these models, where extreme activation values carry specific semantic or structural meaning regardless of their position. Or perhaps contain hints about the geometric structures favored by the model to store information. The distinct signatures of different languages and all-caps text might provide a window into how models encode the most fundamental properties The consistent appearance of prediction sinks, while not yet fully understood, might reveal something about how these models organize their representational space. From an interpretability or risk/anomaly detection perspective, these results suggest that we could add value to full analysis or achieve lightweight, independent success by focusing on extreme values in the activation vectors rather than trying to understand the full activation space and the meaning(s) of each dimension. I'd be interested in community thoughts on the prediction sinks phenomenon and the implications of magnitude coding for interpretability and oversight work. I’m also looking for collaborators. Please reach out if you have interest in working together on oversight, interpretability, or control. Code for this analysis is available on GitHub.
2025-01-10
https://www.lesswrong.com/posts/3eo4SSZLfpHHCqoEQ/dmitry-s-koan
3eo4SSZLfpHHCqoEQ
Dmitry's Koan
dmitry-vaintrob
In this post I'll discuss questions about notions of "precision scale" in interpretability: how I think they're often neglected by researchers, and what I think is a good general way of operationalizing them and tracking them in experiments. Along the way I introduce a couple of new notions that have been useful in my thinking and that I think may be useful tools to keep in an interpretability toolkit, both for theorists and experimentalists: these are the notions of "natural scale" and "natural degradation". The koan I can be a nightmare conference attendee: I tend to ask nitpicky questions and apply a dose of skepticism to a speaker's claims which is healthy in doing one's own research, but probably not optimal when everyone else is trying to follow a talk. I'm working on being better at this, but for now I blame my background. There is one nitpick that comes up again and again. In fact in one conference I brought it up so often that Jake Mendel coined a term for it: "Dmitry's koan". In koan form, the nitpick is as follows: There is no such thing as interpreting a neural network. There is only interpreting a neural network at a given scale of precision. On its face, this observation is true but a bit banal. Indeed there are two extremes: At the "less precise" extreme, you can claim you have interpreted a language model (such as the 80B parameter Llama model) by noticing that just tracking n-gram information for n up to 3 or 4 (and perhaps finding evidence that neural networks develop circuitry for such n-grams) lets you explain almost all the loss of this 80B parameter model: i.e., the difference of the cross-entropy loss between baseline loss (a transformer at initialization, with random weights) and the state-of-the-art Llama model is almost entirely "explained" by n-grams. The n-gram model is an extremely naive classification scheme that can be hard-coded without any learning on a personal computer. Does this mean that Llama is fully explained by the n-gram model?At the other extreme: if you want to interpret exactly what a neural network does, it's not enough even to understand the detailed mathematical abstractions encoded in the model's weights and their connections to the data, since even within an optimal mathematical interpretation, the neural network has approximation errors and noise. Does this imply that a sufficiently demanding interpretation must explain every bit of noise accumulated over training? Of course these two extremes are silly (for people unfamiliar with LLMs: the n-gram model at the "less precise" endpoint recovers the majority of the cross-entropy loss, but because of how cross-entropy loss is defined, the measure of subjective "quality" of a model is better measured on something like a logarithmic scale: in particular, the n-gram model will get worse loss than GPT1 or an even much smaller transformer). Most people[1] understand that the two extremes above shouldn't count as "interpreting" a model. However, as always, reality is more complicated. The two extremes occur in fractal fashion in a number of related contexts, where I think interpretability and ML papers have a bad track record of failing to correctly factor the takeaway from this koan. In this post I'll complain more about this, explaining some contexts where it's important to specify exactly where on the spectrum between "too precise" or "too coarse" you are aiming to be (as we'll see this is a particularly big problem when you're not trying to explain reconstruction loss directly, but and the precision scale is implicit). I won't give specific examples, ostensibly because I don't want to cause offense but really because I'm bad at literature searches (especially of the depressing "search for bad examples" variety). At the end, I will explain what I think is a good general solution that by and large "gets this right": i.e., how one can go about making experiments in interpretability correctly responsive to questions of loss precision. Finally, I'll explain why I would be excited for people to implement this fix more, and experimental contexts where a good analysis of this shape might give interesting new insights. Originally when writing this piece, I was planning to explain a special (but ubiquitous) reason why certain interpretability experiments may be particularly sensitive to questions of loss precision. Namely, due to the existence of parallel inference modes, some NN contexts exhibit a regime where the relationship between interpretation and precision has a sneaky but aggressive exponential factor. For reasons of time and readability, I ended up deciding to split this discussion to a followup post. Elucidating the spectrum of precision Step 1: coming to terms with imprecision In putting down the "too precise" extreme, I intentionally suggested an egregiously silly amount of demandingness. No interpretability researcher wants to explain every bit of accumulated noise as part of their interpretability scheme. Obviously if you show that a neural network is implementing an idealized algorithm and carefully show how the weights are in fact giving an explainable approximation of the algorithm, that's enough. In fact an interpetability scheme should be considered suspicious if it doesn't factor in sources of imprecision. Neural nets are inherently messy stochastic systems and there are three sources of randomness that are essentially always there for any sufficiently nontrivial model: Noise: the world is noisy and infinitely detailed. The training data for all but the simplest toy models have some amount of noise in inputs and labels. Your picture of a cat will not be a platonically perfect cat: it will have imperfections due to pixellation, due to atmospheric phenomena and camera artefacts interacting with the integrity of the image; the cat's fur will be affected by accidents of dirt and discoloration. Labels may be garbled or imprecise. Etc. Similarly, text (though it is usually thought of as discrete, and thus seemingly less susceptible to noise than pictures) suffers from external noise: the writer may be affected by distractions in the environment, by texts read recently, and so on. While it's possible to capture some amount of this (e.g. mood) in a predictive speech generation process, there will always be some amount of sufficiently fine-grained random context (that mosquito bite behind your left shoulder that makes you remember a hiking trip with your grandpa and causes your writing to be more wistful) that ultimately must be abstracted out as noise by state-of-the-art ML systems. Sample randomness: the training data is a finite random sample from an idealized infinite distribution. Even if you imagine that God had a perfect model of images of cats that accounts for pixelation, imprecision, and the like, the training data of your cat classifier does not have access to God-level amounts of data. Instead, it has access to some finite number of training examples. While these training examples may all be drawn from a single distribution, the specific samples that go into training are a random selection (all existing pictures of cats are a random sample from God's "true cat distribution"). This affects the classifier. Indeed, at a sufficiently fine level of precision, God's "true cat" distribution depends on an enormous number of parameters about our world that is (again, at sufficient levels of precision) orders of magnitude larger than the number of cat images -- thus even with perfect knowledge of possible models of cat distributions in various worlds, all existing cat images are probably not enough to specify all the latent parameters that describe the distribution in our world in particular. Note that even in toy contexts like modular addition, where you can easily train on "all possible data" and may think that the training data is exact and incorruptible, making good models requires making some statistical or noisy approximations. For example my favorite paper on modular addition and grokking abstracts out the combinatorial complexities of the discrete Fourier transform by modeling it as continuous fourier transform (this corresponds to viewing the residues 0, ..., p-1 mod a prime as p random samples of real-numbers residues mod 2π undergoing a periodic process -- a common point of view when studying mod-p behaviors in ML). Training randomness and imperfection. The training processes used by NN's have implicit randomness and coarseness, given by initialization, batch selection, and macroscopic learning rate. Thus training does not return some "platonic ideal" neural net as a function of the data, but rather depends on random choices (even if we were to remove randomness: do full-batch updates, fix some natural initialization, etc., these would still be arbitrary choices that would be hard to model in a perfect mathematical way, and must ultimately be abstracted out as noisy or approximate phenomena.Approximation of functions by other functions. While this is similar to the previous point, I think it deserves its own item because it's particularly often ignored. Namely, the neural nets that have a more-or-less known mathematical interpretation are almost always understood as implementing (nice/smooth) abstract function, which usually can't be implemented exactly (e.g. all functions implementable by ReLUs are piecewise-linear, other activations will generally only able to approximately implement polynomials or exponents). This isn't a big deal: it's both abstractly possible and in practice "relatively easy" for a neural net to approximate a function learnable with one choice of activation by another choice of activation[2]. Both theory and experiment lead us to expect that in certain realistic contexts, the dynamics and learnability of neural nets doesn't significantly depend on the exact choice of activation functions[3]. However, whenever making use of such an approximation theorem, one must model the difference between the "idealized" function and the "realizable" approximation for the given architecture as an inherent "hard" source of noisy imprecision: in particular, no amount of data or training time can fully eliminate this. Thus any realistic interpretability scheme must allow for sources of noise. As an abstract point, this is obvious and commonly known. It would be silly to expect each interpretability paper to carefully quantify and bound each of these four sources of noise, and it's perfectly acceptable to bundle everything into some generic error bounds on experiments. However, a phenomenon that I sometimes see in theory-adjacent papers is an attempt to carefully factor in one of these sources of noise, but forget that the others exist and may be dominant. This is particularly a problem in some thinking around SLT, where work of Sumio Watanabe gives a very elegant asymptotic bound on error source number 2 above in certain idealized networks. While this mathematical idealization often exhibits remarkably good predictive power on real-life neural nets[4] (one of the sources of excitement for SLT as a field), papers sometimes implicitly assume that the sample error noise scale analyzed by Watanabe is the only (or more precisely, the dominant) source of noise -- a problematic assumption when the other sources of noise may be more important, or interact with the "right" choice of idealization in a nontrivial way; we'll see an example of the latter phenomenon in a later section. Step 2: Factoring in the memorization-generalization spectrum One way to neatly avoid having to be too careful about noise and imprecision is to say that an phenomenon found in a neural net is "relevant for interpretation" if it is an approximation (with implicitly understood sources of noise and imprecision) of a useful mathematical phenomenon -- i.e., a behavior (e.g. a "circuit") that, when mathematically abstracted out and idealized, helps the network obtain better loss. This can be validated either theoretically by constructing a full mathematical model, or experimentally by either somehow "ablating" the phenomenon and seeing the effect on loss, or conversely "cleaning up" the phenomenon by somehow "suturing in" the mathematical abstraction in place of the real-life messy component of the circuit, and seeing the effect on loss. Note that both of these experimental methods have significant issues, but we're not here to discuss the problematics of causal intervention studies on neural nets. One can hope that with sufficient advances in interpretability, it may be possible to mathematically abstract out all "useful" behaviors of a neural net. I'll have more to say in later posts about the (un)desirability of maximally ambitious interpretability targets, but for now I want to observe that trying to identify all marginally useful behaviors is an unrealistic and ultimately unnecessary mess. Indeed, there is reason to believe (coming from toy-model interpretability, effective dimensionality studies, and student-teacher experiments) that neural nets "only use a fraction of their parameters to generalize". In other words, there are many directions (unfortunately not the same thing as neurons because of polysemanticity, though even restricting to neurons makes this insight clear) inside a neural net that can be viewed as "free parameters": changing the weights along these directions doesn't seem to impact performance much, and has especially little effect on held-out examples. Now if you put yourself in the brain of a neural net (something I will often be suggesting you do, though you must do so carefully), noticing "free" directions in your program parameters means you have extra "unstructured memory" to spare[5]. And this unstructured memory can be used to memorize. In fact, there are a number of both experimental and (pretty strong) theoretical results that show that under extremely weak restrictions, each 1-dimensional direction of unused memory (whether or not it is neuron-aligned) can be used to correctly memorize one training example[6]. Thus if (as is often observed), a typical MNIST model only "really uses" at most 10% of its memory parameters, it is free to use the remaining 90% to memorize confusing datapoints. This might not happen in real life because models tend to be undertrained, but can be safely assumed to be possible (and indeed to occur) with sufficient training. Now each of these memorized datapoints improves loss by a small amount, thus is "useful" in the sense above. However it would be a massive headache to demand from an interpretability scheme that it correctly explain every memorization circuit: what parameters it uses, how it works, why it doesn't conflict with the generalizing circuits. Any interpretability scheme with a snowball's hope in hell of being useful must be able to disclaim off the bat that "spurious" but loss-improving behaviors that only apply to a specific datapoint or two shouldn't need to be mathematically formalized, at least when looking at interesting scales. This tells us that a naive way you may imagine getting around Dmitry's koan, of saying that the "right scale to consider is the scale that captures all useful behaviors" is unreasonable. This suggests a next-level guess at the appropriate scale of precision, which is precision that "captures all behaviors that are useful for improving test loss". This lets us ignore behaviors and circuits that explicitly memorize. Still, this doesn't get rid of the issue. You see, the "test loss" vs. "training loss" dichotomy is only a first-order stab at the much deeper question of "what is generalization". In practice, NN phenomena exist on a spectrum between memorization and generalization. While the "memorizing" end of this spectrum has a well-defined limit: circuits that memorize a single input datapoint, there are many phenomena that help classify a "cluster" of datapoints that does exist in both the training and test datasets, but may not be important enough to interpret. For instance, maybe a quote from a niche genre fiction is shared on the internet a few dozen times by committed fans, and these few dozen quotes make their way into different training documents for an LLM. Then a circuit that memorizes this particular quote is technically a generalizing circuit: chances are, the quote will appear both in the training and test data. However, it's a stretch to say that this circuit is of comparable generality to a mechanism encoding concepts related to Paris or python commenting conventions. Indeed, probably in an "ideal" interpretability scheme, such a circuit should be compressed out into "we expect the world to contain a number of quotes from Jane Austen-inspired fan fiction about humanoid cat pirates, and will model some not-super-relevant parts of our neural net as containing circuits related to passages thereof". More formally, algorithms implemented by a neural net can be placed on a number of more sophisticated memorization-generalization spectra, associated for instance to "how likely is the net to make essential use of this algorithm in any given (non-training) text-completion task". I discussed a representative example of such a spectrum in my subgrammars post, and discussions about such phenomena abound in interpretability-adjacent ML discussions (see for example this paper, and other studies on compositionality and generalization). Thus ideally, an interpretation of an ML algorithm should target a specific place in the memorization-generalization spectrum: identify behaviors that are not only useful, but have a suitable degree of generality. Of course in practice, this is very hard to gauge (and even harder to verify that you have somehow "found all circuits at a given level of  generality"). Instead, one is forced to quantify measures of generality or importance controlling the "precision" of your interpretability work by using more pragmatic proxy measures. There's a lot of room for playing around and trying to find better proxies here, but one basic and reasonable proxy is loss precision on test data. Namely, assuming you have an "end-to-end" candidate interpretation of a neural net, you can quantify "how precise it is" by how well it explains the loss, and say that, at a given level of loss precision, interpretation A is better than interpretation B if A "looks better as an interpretation". This of course opens up a whole other bag of worms: do you use "description length" or "modularity" or "human understandableness" as your goal for a "good" intepretation. But these debates are standard and visible in this community (a favorite treatment of mine is contained in Lee Sharkey's distillation of Apollo's "sparsify agenda"). I'm not here to engage in long chains of collaborative knowledge-building: I'm here to nitpick. There are still some significant operationalization issues here. First, most interpretability work (at least at present) doesn't aim to reconstruct a NN end-to-end, but rather to find more local reproducibly understandable patterns. Second, just saying that "loss precision is an essential parameter in discussing interpretability schemes" doesn't tell you what loss precision scales are interesting. I'll discuss both of these issues in the next section. Natural scale and natural degradation In this section I'll give an explicit proposal for how to operationalize and choose loss scales in realistic interpretability work. The proposal is significantly inspired by work that has come out of SLT research, though is theoretically independent on it (and in particular, is on the pragmatic side of the theory-pragmatism divide). Sometimes reconstruction loss is not the point Most interpretability work to date finds localized phenomena in neural nets. The notion of locality here is vague and tricky to operationalize, but roughly they might Look at only a limited class of homogeneous input examples. E.g. the IOI experiments perform interpretability on a corpus consisting of identical simple sentences differentiated by two names, "Mary and John" vs. "Helen and Tim", etc.Look for circuits that implement a specific context-dependent behavior (in the example above, this is a variant of subject-verb agreement). This degree of specificity is not shared by all interpretability work (e.g. SAE work does not depend on a small collection of specific phrases). But the idea of "looking at localized phenomena" is present to some extent in all interpretability work that treats sufficiently complex models (including toy models!). For work of this type, it is unreasonable (at least directly) to view its reconstruction loss as any kind of precision scale (and often reconstruction loss in such work is not useful or hard to operationalize). Degradation as a dial How, then can we operationalize the loss scale of a phenomenon? Well, one way to do this is to imagine that we have some "natural" complexity parameter c that can be varied (this can be a parameter tuning model size, training length, etc.). We denote the resulting class of (so far theoretical) models M_c. If possible, we would like models in this class to be "locally simultaneously interpretable", i.e. that for two nearby values c≈c′, the models M_c and M_c' have similar weights and implement similar circuits. This is in particular the case if M_c is the training checkpoints (i.e., weights during pretraining) with the complexity parameter c measuring the fraction of training time, but this isn't strictly necessary in general (this will be made more precise in the next section). We require that at c = 0, our program outputs a fully random (in some appropriate sense) classification -- for example, this is true if we take M_0 to be a randomly initialized neural net that has undergone no training; we view its loss, L_0, as the "baseline" loss, a kind of upper bound on our loss scale. For c = 1, we set M_1 to be the "model organism" neural net that we are studying in our experiment. At the other end, we ask that for c = ∞, the model has perfect loss[7], or at least "very good loss", L∞, corresponding to a significantly more sophisticated model than the one we are performing interpretability on. Now it's unrealistic to ask that we actually implement examples of M_c for c>1: perhaps the model M_1 we are studying is a state-of-the-art model, and improving on it requires a few billion dollars spare cash. However, we do assume that we have some kind of ability to perform experiments on models M_c for c<1. We'll call models M_c for c<1 "degradations" of M. In this case, we can use the following process to quantify the loss precision of our interpretability result. First, we operationalize the result (say we have "found a circuit") in a formal way. This can be a "prediction experiment": we check whether some interpretability-flavored mathematical prediction holds on a fixed corpus of inputs in a statistically significant way, for the model M. Alternatively, the experiment can be a measurement, that outputs some (hopefully interpretability-relevant) invariant of a model f(M). Now say that we want to perform the experiment at "loss scale L", with L_0 > L > L_1. Then the recipe is perform an approximation of the following experiment: Empirically measure the loss L_c of M_c for complexity c < 1, as a function of c.Find the cutoff parameter c*<1 for which L_c* = L (at least approximately).Run our interpretability experiment for the model M_c*. Of course doing this in general is expensive and questionably useful. For one, it's expensive to measure something for a bunch of setpoints of a sophisticated algorithm (even if all we're measuring is the loss), not to mention that for many important open-source systems, the setpoints are not publicly available. We'll address these issues and more in the following section. Next, a keen reader will observe that in examples such as IOI, there's absolutely nothing wrong with just running the experiment at characteristic loss L = L_1 associated to the model under investigation itself, M_1 = M. In this case I'm just saying "you should run your experiment, but put 'underscore 1' indices on everything", not the most useful piece of advice. In the previous sections we discussed that being "too ambitious" about working with characteristic loss equal to the loss of the model under consideration (for example, requiring full loss reconstruction) means that if you want sufficiently ambitious coverage for your interpretability results, you will end up dealing with a bunch of garbage behaviors like memorization or "partially memorized" quotes from pirate-themed Jane Austen fan fiction. However, if your experiment, like IOI, is not very ambitious (in the sense of "going for localized completeness of interpretation"), then it's plausible that this doesn't cause problems: you run your experiment on the fully trained model, get a positive result, and publish a paper[8]. However, the usefulness of the picture I'm proposing emerges when the experiment you are performing has its own internal precision or loss scale. For example, one of my favorite interpretability papers is "Look Before you Leap", which observes that in certain carefully designed contexts, an activation patching from a phrase A to a phrase B will result in completions of B giving responses using contextual information from A; but this phenomenon occurs only for patches on late layers, and gets corrected (with the transformer "fully correcting" to faulty activation from B to the true context from A if the patch is performed on early layers). A "soundbite" summary of the result of this paper is that (for a suitable notion of sentence context, and for a suitable class of examples), all the context-dependent information of a transformer task is integrated in early layers, with later layers only performing post-processing on known context. As soon as an interpretability result can be (even approximately) described as "fully" characterizing a particular behavior, the scale of precision becomes relevant. (The beauty of the "look before you leap" paper is that it has exactly the right degree of coarseness in its experimental method: quantitatively distinguishing behavior at "early" vs. "late" layers, to have a chance of legitimately capturing some "generally applicable" information about the model's internal workings). In the "look before you leap" example, we can then note that in general, activation patching experiments degrade performance, simply because you're as it were introducing an "alien" behavior into a network, which corresponds at best to a rough refactoring of its internal mechanism. Thus an interesting experiment would be to rerun the experiment for a collection of degraded neural nets M_c for various values of c, and corresponding loss scales L_c, and comparing the degraded loss L_c to the patched reconstruction loss. A particularly nice result here would be if for some value of the degradation c, it were the case that the reconstruction loss for patching at sufficiently late layers were equal or very close to the inherent loss of L_c on the experimental dataset. If this were the case, this would be definitive evidence that, when considered at "suitable loss scales", it is indeed the case that late layers exclusively (or "almost exclusively") perform postprocessing. So far, I've explained that performing experiments on degraded models can be useful, and can give much more principled ways of discussing the "completeness" of interpretability phenomena. However, the questions of "optimal degradation" and "interesting scale" still remains unanswered. As mentioned, one possibility for the former question is to use training checkpoints, but it's not clear that this is a very good choice. In particular, if we are interested in distinguishing "more general" from "more memorize-y" behaviors, training checkpoints are probably not the way to go: training checkpoints will often start out by accumulating "less general" behaviors before eventually learning to generalize; we would like our degradations to have, at least roughly, the opposite behavior of holding on to "all the most general behaviors of suitably bounded complexity". Natural scale It's famously difficult to get a "principled" measurement of LLM capable. However, we know that GPT4 is more capable than GPT3.5, is more capable than GPT3, etc. Of course each iteration of GPT changed a lot more than just the parameter count, but to a first order approximation, parameter count is the core difference between the different models. Now for any reasonable conceptualization of complexity of a series of models (which might be the number after the "GPT", or a more mathematically principled parameter-count scaling dial) we get a reasonable notion of loss precision (by computing the loss)[9]. The notion of "degradations" introduced in the previous section (and which will be better-operationalized in the next) gives us a natural way to reason about the complexity of phenomena. For example, if Anthropic finds some very nice SAE-inspired decomposition with good reconstruction loss (something that is not yet available), a phrase we might hear in the future is "the reconstruction loss of a 100B parameter feature-by-feature interpretation of Claude 5 is comparable to the performance of Claude 2" (a massive triumph of interpretability, if these words ever get written), or perhaps "comparable to a natural degradation of Claude 5 that obtains the same loss as Claude 2" (an even bigger triumph). Conversely, we can separate complexity measurements mediated by loss from complexity mediated by architecture and parameter count, while putting both on the same scale. A phrase of this type that I am more optimistic of hearing is something like "Claude 5 retains good performance on the International Math Olympiad benchmark when degraded to the loss precision of Claude 2" which would imply a strong architecture-dependent decoupling of loss and capability (something that most people expect to take place). Once we have two models of very different complexity, like Claude 5 and Claude 2, another thing this gives is a natural approximate loss scale associated with the weaker model (Claude 2 in this case), which is difficult to obtain by looking at only one model. Namely, if we imagine some complexity measurement with Claude 5 and Claude 2 being two instances at different values of, we can conceptualize Claude 2 as the complexity c = 1 "base" model and Claude 5 as an approximation of the complexity c=∞ "perfect" LLM oracle[10] (note that this is a fundamentally different complexity dial from the "natural degradation" dial which we will introduce in the next section). We can then say that a natural scale to run experiments on Claude 2 is its own "absolute performance gap", i.e. L1−L∞, approximated as L(Claude 2) - L(Claude 5). Natural degradation In this section I'll finally give my proposal for how to operationalize the rough discussion in the previous section in what seems to me to be a maximally sensible way. Note that the core property we want from the degradations M_c for c<1 is that they have lower loss than M. There are many ways to make this happen: since models are, at least approximately, local loss minima, most ways of modifying M -- whether random or directed, will degrade loss. However, I claim that there is one right way. Namely, the way that any neural net is generated is by some gradient-assisted search procedure through a weight landscape. A priori, there is a giant family of possible neural nets M_w associated to various weight vectors w∈Rdparam, in a giant vector space of parameters. Each weight vector w has an associated loss L_w = L(M_w). The fully trained network is then M_w* for some fixed locally (approximately) loss-minimizing parameter w*. Now for a degradation at some intermediate loss L, we would like to ideally choose a "degraded" neural net M_w which: has (approximately) correctly degraded loss L(M_w) = Lis not too far away in weight space from the initially trained model M = M_w*. The idea is now to flip this and choose w to be a random weight that is not far from w*, and has loss equal to (or approximately equal to) L. This might seem impossibly hard: I'm asking for a process that trawls through the enormous space of all neural nets M_w (even imposing the condition that "w is close to w*" barely makes a dent in its enormousness), then finding all the ones that have a particular high-level behavior (loss), then sampling them at random. But it turns out that exactly such an algorithm exists, and is used with remarkable success, and depending on how strict you want to be about your sampling being "unbiased", its computational cost tends to be somewhere between finetuning and retraining. Algorithms that sample points in a large parameter space with some particular behavior are called sampling algorithms, and the sampling algorithm that is usually used in this context is the "SLGD" or "Langevin SGD" algorithm (that works by combining gradient descent steps with noise steps at an appropriate scale). This is the bread-and-butter algorithm of all empirical work in SLT, and was introduced and tested in this context in Edmund Lau et al.'s paper on the local learning coefficient. From the point of view of interpretability, the Langevin algorithm can be conceptualized as balancing entropy and loss.  The core conceptual property that the sampling process implemented by this algorithm tries to capture is the following: Find a maximally general algorithm M_w in the same basin as M = M_w*, which implements the same task as M, but on the degraded loss scale L. In other words (and modulo small text that we mostly won't bother with), the algorithm can be conceptualized as identifying the optimal compression[11] of the algorithm implemented by M that still obtains loss L, and noising out all circuits whose information content is too large compared with their contribution to loss. Thus if possible, M_w will throw away all memorization and "partial memorization" behaviors, and only keep "the good stuff" (which itself will start degrading once we set the loss scale to be high enough to be able to start throwing away "interesting" general circuits). I want to suggest that this "natural degradation" procedure has the key properties we would want from a dial that lets us adjust the "loss precision scale" of experiments. Namely it is: cheap to implementgood at isolating "more relevant" from "less relevant" behaviors from the point of view of generalization and interpretability. It also has the added bonus property that it comes prepackaged with an easy-to-measure additional empirical scale parameter, called the "local learning coefficient", which has the units of parameter count. In other words, in addition to using this dial to see how experimental results change at different loss precision scales, one can instead interpret it as a dial measuring how results change at a certain natural complexity scale capturing information related to parameter count (more precisely, this captures the parameter count of the "optimal compression" of the model at the given scale, for a suitable operationalization of this notion). Experiment suggestions I'll conclude by suggesting a few experiments (Another experimental suggestion can be found above, in the discussion on "Look before you Leap"). These are far from a comprehensive list, and I think that the field of interpretability would benefit from loss precision-sensitive experiments run in a number of contexts. As we've discussed above, the local learning coefficient measurement in SLT attempts to measure one operationalization of the effective parameter count of the algorithm implemented by a neural net. The local learning coefficient inherently depends on a loss precision scale (more or less synonymous in this context with "temperature"). Currently, most experiments of this form use a precision scale tuned to the "Watanabe critical temperature", which is determined uniquely by the size of the training set (and tuned to behave well with respect to sample noise). It seems unlikely that something like MNIST or Bert will significantly change its behavior when trained at different OOM datasets[12]. However, the above notion of natural scale suggests a different loss scale to use: namely, the "absolute performance gap" given by the difference in loss between Bert and a SOTA base-model LLM's performance on the same training dataset. It would be an interesting experiment to see how much the LLC changes between the Watanabe scale (determined by input number) and the natural scale (determined by loss). The measurement is designed to be quite stable to scale variations (under some idealized assumptions on the loss, but also in practice), but there are reasons to expect that the two ranges will give interestingly different results.Related to the above, the combined notions of natural scale and natural degradation give a certain new natural operationalization of separating "memorization" behaviors from "generalization" behaviors (analogous to the notion of a standard deviation in statistics). Namely, one can formally say that a phenomenon is "generalization-like" for a primitive language model if it is retained upon naturally degrading a model by a loss precision comparable to its absolute loss gap (measured as the difference in performance between itself and a state-of-the art base model LLM). This will probably identify even certain behaviors that improve test loss as "mostly memorization-y". Empirically analysing the difference between generalization and memorization conceptualized via this natural scale can be an interesting new way of operationalizing the "memorization-generalization spectrum".One class of experiment that is crying out to be done is to measure the "generalization penalty  of finetuning". Namely, it is widely believed by interpretability researchers that most finetuning procedures vastly degrade the "generalization properties" of a model. Operationalizing and measuring this "generalization penalty" seems valuable both for thinking about finetuning, and various alignment risks. Possible issues I would be excited about people thinking more about loss precision in experiments, and the notions of natural scale and natural degradation. However, it is of course possible this isn't an interesting framework to consider. There are also some general issues that one should be careful about. One issue, that I've previous mentioned in a footnote, is that it is tricky to reason about natural loss scales in the presence of finetuning, since finetuning degrades loss in an unpredictable way. Another issue is that, on the one hand, some SOTA models are regularized, and on the other hand many unregularized (or "insufficiently regularized") transformers can cheaply improve loss simply by scaling up their largest logits (one way to avoid this issue entirely is of course to only measure accuracy). When reasoning about natural loss, one would have to separate various "inconsequential" reasons for artificially high or low loss from more fundamental, "complexity-relevant" reasons. Finally, it's not obvious whether loss precision is a very good precision measurement, and whether the related notion of natural degradation is a very good way to vary scale. It's also not obvious that the natural precision of an LLM (i.e., its difference from optimal loss) is a particularly useful scale for separating generalization from memorization behaviors in LLMs. In fact, it's likely that in many contexts, better scale parameters exist. In particular Lucius Bushnaq at Apollo is interested in different notions of complexity related to size of circuits that seem promising, and other approaches to operationalizing notions of complexity exist (my colleague Lauren Greenspan has a post in the works that discusses different notions of scale that physical considerations consider studying. Loss precision and natural degradation are simply one concrete attempt to formalize a complexity that allows reasoning more precisely about completeness of explanation and characteristic scale of phenomena in NN experiments. I would be excited for new and better notions to appear. At the same time, I am relatively confident that an ability to discuss the characteristic scale of phenomena, imprecisely compare different notions of scale, and vary the characteristic scale of a model are components of the interpretability paradigm that deserve more attention and coordinated exploration. ^ There are exceptions: on the "less precise" extreme, some papers excitedly claim to have excellent loss reconstruction or interpolation when explaining less than a bigram amount of the cross-entropy loss; but this is rare. ^ I'll give a neat example of this later when discussing a joint paper with Jake Mendel and Kaarel Hänni on computation in superposition. ^ Note that the claim of "activation function independence" should be taken with a grain of salt. While in shallow networks, it's a safe bet that the details of the activation function don't matter, deep networks are known to be more sensitive to the choice of activation function: this is beautifully analyzed in the physics-inspired PDLT opus, which Lauren and I will be distilling this month. ^ See for example this work joint with Nina Panickssery; at this point there are a number of other results observing this surprising effectiveness in other contexts. ^ There's a reason I'm calling the leftover memory "unstructured". Because of its inherent randomness and risk of interacting with the "structured" memory, it's not necessarily the case that a NN can learn sophisticated new circuits in these directions if training is extended or improved. However, the unstructured memory is good enough for learning "simple" circuits. ^ There is a bit of fuzz here... but if you replace "exactly 1" by O(1), this observation holds in incredible generality. ^ Note that in the case of cross-entropy loss, perfect loss is not 0, but a fixed lower bound related to the entropy of text. ^ Since I know there will be comments about this otherwise: yes, I know that IOI has lots of problems as an interpretability experiment. If this bothers you, replace IOI with your favorite alternative interpretability experiment, or imagine an alternative universe where "productive" interpretability mechanistic interpretability experiments exist. ^ Here in order to really compare loss, we should assume we're only comparing base models; one can perform similar analyses for finetuned models assuming some uniform measure of "post-finetuning loss", or alternatively by measuring loss on an artificial test set produced by an analogously finetuned model. ^ Really, the only goal of this oracle will be to find a good approximation of the "true entropy of text", something that's famously difficult to get exactly right, and in some sense from a complexity viewpoint equivalent to perfect prediction. ^ Small text: approximately, in the complexity measure given by the local learning coefficient ^ Here a critical observation is that both classic MNIST and Bert models are underparametrized: in Bert's case, it's a 100M-parameter model trained on about 3B words. This suggests -- though doesn't prove -- that sample noise has less relevance for performance and interpretability measurements than architecture- and nature- dependent sources of noise.
2025-01-10
https://www.lesswrong.com/posts/mdewsGKQj48YuGqsR/nao-updates-january-2025
mdewsGKQj48YuGqsR
NAO Updates, January 2025
jkaufman
null
2025-01-10
https://www.lesswrong.com/posts/sLsweqaHKJyMPRv68/ai-forecasting-benchmark-congratulations-to-q4-winners-q1
sLsweqaHKJyMPRv68
AI Forecasting Benchmark: Congratulations to Q4 Winners + Q1 Practice Questions Open
ChristianWilliams
null
2025-01-10
https://www.lesswrong.com/posts/Kawut4cACf8Djcpr2/how-do-you-decide-to-phrase-predictions-you-ask-of-others
Kawut4cACf8Djcpr2
How do you decide to phrase predictions you ask of others? (and how do you make your own?)
CstineSublime
I'd like your practical advice and learned experience: how do you go about phrasing predictions when you ask people "Is X going to happen?" - in order to get answers that reflect the actual thing you're trying to get a read on? Now I'll use a fictitious country with a fictious sitting president running for reelection - let's say I ask for the likelihood of: "Will Sum Bodee[1] win the Ruritania Presidential Election this year?" Even this isn't as straight forward as it seems because it could be motivated by either a. a desire to know if he will remain President of Ruritania for whatever reason b. a desire to know what do the people you're asking believe will be the outcome of the election, because you're not interested in the actual administration of the country, but you're interested in sentiment. See what happens when I ask an intimately related question: "How many votes will Sum Bodee win in the Ruritania Presidential Election this year?" I think (and if you disagree - please speak up!) this makes it much more clear that the emphasis is less on "who will run the country?" but more on "who do people think will run the country?". Let me imagine some context, let's say that Ruritania was recently caught up in a spying scandal where the equally fictitious country of Buranda was caught red handed spying on members of Bodee's inner cabinet. Now this is where I get confused, because I feel like the phrasing I've given could be asking many different things in this context: a. what is the impact of the spying scandal on Bodee's electability? b. what do people think is the impact of the spying scandal, and how influential do they think topics like that are? c. maybe I'm aware of some other scandals or issues, and curious about the awareness of people about them d. Maybe I'm posing the question to Bodee loyalists and want to see if the scandal has changed their optimism at all? And so on and so on... any given phrasing could have multiple intentions - so how do you match the intention to the right prediction? Indeed, how should you format the prediction is an indication of what you want to know: how do you decide which format to choose? I wonder if in fact I'm asking this question the wrong way around: perhaps I should come up with an example of a prediction I'd like to make, such as "How many music video commissions will I get this year?" and then ask "Is this how you would pose that prediction?" - Note! I'm not asking you to predict how many commissions I will get - I'm asking: how should I pose the question (to myself). HOW DO YOU MAKE A PREDICTION And that's just posing the question. Being the start of the year there recently were a few prediction threads . and while the 'reasons' one states for giving a certain level of confidence are one thing, I would very much like to learn how to generate those reasons - how to get there. How do you do it? For example, put yourself on the other end of the prediction questions above - is this you? "hmmm... yes, there are some interesting candidates running against Mr. Bodee in the Ruritanian presidential race this year? I wonder how many votes they will take from him? Or not!? More generally when you're asked a question like "How much do you expect [politician's name] to win/lose by?" how do you go about generating your response? How do you generate your predictions? What is your first instinct? How do you break down this question? I'm not looking for references or instructions. I'm looking for narratives of internal thought processes. How it looks like in practice. ^ Sorry about the lazy fake name. I'm alluding to an old political axiom “You can’t beat somebody with nobody” which I understand to mean that a personality or figure will always beat out even a well intended policy that can't be sold politically with a particular personality. Indeed, it seemed like it would be more effective to pose an example with a fake name and a fake country than to use nondescript variables.
2025-01-10
https://www.lesswrong.com/posts/tdrK7r4QA3ifbt2Ty/is-ai-alignment-enough
tdrK7r4QA3ifbt2Ty
Is AI Alignment Enough?
panasenco
Virtually everyone I see in the AI safety community seems to believe that working on AI alignment is the key to ensuring a safe future. However, it seems to me that AI alignment is at best a secondary instrumental goal that can't in and of itself achieve our terminal goal. At worst, it's a complete distraction. Defining humanity's terminal goal I'll define humanity's terminal goal in the context of AI as keeping the "price" of each pivotal superhuman engineering task that an AI does for us at a lower than 50% chance of a billion or more human deaths. These numbers come from the minimal acceptable definition of AI alignment in Yudkowsky's list of lethalities. It'd be more precise to say that we care about an AI killing over a billion people or doing something else equally or more horrible than killing a billion people by human standards. I can't define exactly what those horrible actions are (if I could, alignment would perhaps be halfway solved). This disclaimer is also too long to keep writing out so please mentally add "or do something equally horrible by human standards" every time you see "kill over a billion people" below. Defining humanity's instrumental goals Logically, there are only two paths to ensuring that the probability that the price of a pivotal superhuman engineering task is not excessive. Achieving AI alignment: I'll again use Yudkowsky's minimal definition of "aligned" as the AI having less than a 50% chance of killing over a billion people per pivotal superhuman engineering task. This is the same definition as in our terminal goal.Achieving human alignment: Ensuring that humanity will not build (let alone turn on) an AI that has the cognitive power to have a 50% chance of killing over a billion people unless the following two conditions are met:The AI can be proven to be aligned before it's turned on.There's some pivotal superhuman engineering task that the AI will be capable of that's worth taking the risk for. Assumptions By default, AIs capable of pivotal superhuman engineering tasks are not aligned. In other words, an AI with sufficiently high cognitive power to achieve a pivotal superhuman engineering task that is built without any concern for alignment has a 50% or higher chance of killing over a billion people.By default, humanity is not aligned - There are AI labs around the world that are working on building AIs with cognitive power as high as they can make them without any concern for alignment, and those labs are not being stopped. Why human alignment is the primary instrumental goal Suppose that the above AI alignment goals are immediately achieved, today. Yudkowsky makes the point that this will not prevent AI labs that don't care about alignment from building an unaligned superintelligent AI. Therefore, an AI with a 50% or higher chance of killing over a billion people will still get built and turned on, and we will have failed in our terminal goal. Yudkowsky solves this problem with a 'pivotal act', something you can get the aligned superintelligent AI to do to prevent any other labs from building unaligned AI. This shows that achieving AI alignment is not enough - you must then have a plan to do something with it. That something has to prevent humanity from building AIs with greater and greater cognitive powers with reckless abandon. If you tell the aligned superintelligent AI to "burn all GPUs" (to borrow Yudkowsky's example), what you're actually doing is achieving human alignment by force rather than with persuasion. I'm not saying this in a condemning manner at all, just pointing out that this is just another path to human alignment. In the end, it's the human alignment that's necessary to achieve our terminal goal. On the other hand, if human alignment is achieved without AI alignment, then humanity will prevent a superintelligent AI from getting built until it can be proven to be aligned before it's turned on (which may be never). The terminal goal is satisfied. In summary, human alignment is both necessary and sufficient to achieve our terminal goals. AI alignment is only possibly useful as a secondary instrumental goal to bring about human alignment. What about always-on AIs? This section doesn't have an effect on the overall argument, but I'm including it for logical completeness. In addition to AIs used for pivotal superhuman engineering tasks, humanity will also have some (hopefully weaker) AIs constantly running, just maintaining stuff. We need to have a threshold of how many incidents we will tolerate from those always-on AIs that may not have the cognitive power to do pivotal superhuman engineering tasks, but may still well be capable of killing us all. I'll define our second terminal goal as having a lower than 1% chance of a billion or more human deaths per year from the combined total of these always-on AIs (the numbers are completely arbitrary). To differentiate between the two types of AIs defined in the two terminal goals, I'll use the terms "superintelligent AIs" for the AIs that would be used for pivotal superhuman engineering tasks and "always-on AIs" for the AIs that would be always on. Unfortunately, there could be some overlap between the two sets, as there's nothing theoretically stopping humanity from keeping an AI capable of pivotal superhuman engineering tasks always on... We'll define instrumental goals for the always-on AIs: Achieving always-on AI alignment: There are many always-on AIs, and their failures are almost certainly not independent variables, but somehow we work it out so that the combined set of all of them has a less than 1% chance of killing over a billion people in any given year. If we can achieve that, we call that combined set of always-on AIs aligned.Achieving human alignment: Ensuring that humanity will not continually run a combined total set of AIs that has a 1% or higher chance of killing over a billion people a year. Without human alignment, there's nothing stopping labs from keeping an arbitrary number of superintelligent AIs that should normally be reserved for pivotal superhuman engineering tasks always on, thereby skyrocketing the risk. Therefore, human alignment is once again the necessary and sufficient condition of achieving our terminal goals. Focus on human alignment It's important for us to understand that AI alignment alone will not achieve our terminal goal(s). If you are focusing your efforts on AI alignment efforts, you need to have a plan for how achieving AI alignment will subsequently help you achieve human alignment. The plan could be building a superintelligent aligned AI and giving it the directive "burn all GPUs". Note that giving an AI the ability to forcefully destroy the possessions of humans without their consent may be outside your prior definition of alignment, so plan accordingly.If you don't have a clear plan for how you'll achieve human alignment from achieving AI alignment, your efforts should not be focused on AI alignment. Focusing on AI alignment without a clear plan for how that'll help achieve human alignment will not help achieve the terminal goal(s). Your efforts should be focused on making a concrete plan to achieve human alignment as soon as possible instead. Once human alignment is achieved, you'll be free to pursue AI alignment at your leisure. Fast human alignment is possible without AI I purposefully chose the cover of Hobbes' 1651 Leviathan as the preview image. The Leviathan is the origin of social contract theory and the front cover depicts the central idea of the book - many individual humans becoming a single entity - we could say becoming "aligned" as a single will. Hobbes wrote the Leviathan in response to his shock of witnessing the brutality of the English Civil War. To me, the Leviathan is a cry that anything, including submitting unconditionally to absolute power, is better than the horror of war. The closest thing to Hobbes' experience for me personally is being born in the former Soviet Union. I'll relay my understanding of the Soviet experience that I absorbed through osmosis. If there are people here with more knowledge and experience, please correct me if I got it wrong. Imagine that you were from a very early age being shown a tapestry that shows some glorious future of humanity. You were taught that you will contribute to that vision once you grow up. Then you grow up, full of bright-eyed dreams, and when you pull back the tapestry, there is is a giant meatgrinder. When the Party tells you to get in the meatgrinder, you get in. You don't ask how people getting ground in the meatgrinder will contribute to the future on the tapestry. You don't ask whether getting in the meatgrinder is the best use of your talents and aspirations as a human being. You let the Party worry about the future, you just worry about obeying the Party. I believe that most of the revolutionaries of 1917 were full of good intentions to save humanity from what they saw as the meatgrinder of capitalism, and they really didn't intend to build an even worse meatgrinder themselves. Some may argue that human meatgrinders don't last forever, so it's better than extinction. To that I respond that AI safety may become associated with human meatgrinders in the same way communism now is in Eastern Europe. And if the idea of AI safety becomes resented on a visceral level by a large enough number of people, then humanity is probably still getting atomized by nanobots, just with extra steps. It's intellectually dishonest to say that human alignment is impossible. Convincing people through rational arguments is not the only way to achieve human alignment, nor even in the top 10 of the most historically used ways. However, achieving global human alignment at the required speeds could easily end up worse than just getting atomized by nanobots. Still, if there's a viable path to achieving human alignment somewhere between "arguing with idiots on Twitter" and "literally 1984", then this could still be a surviving world. Survival is just not always pretty...
2025-01-10
https://www.lesswrong.com/posts/6SxipFfZ2WjJs7nZY/you-are-too-dumb-to-understand-insurance
6SxipFfZ2WjJs7nZY
You are too dumb to understand insurance
Lorec
[ cross-posted from my blog ] "But with regard to the things that are done from fear of greater evils or or some noble object (e.g. if a tyrant were to order one to do something base, having one's parents and children in his power, and if one did the action they were to be saved, but otherwise would be put to death), it may be debated whether such actions are involuntary or voluntary. Something of the sort happens also with regard to the throwing of goods overboard in a storm; for in the abstract no one throws goods away voluntarily, but on condition of its securing the safety of himself and his crew any sensible man does so." -- Aristotle, Nichomachean Ethics 10% chance/season of 1 voyage getting hit w/ a tropical storm [causing me to have to throw $1,000,000 worth of cargo overboard and compensate client accordingly] Each successful voyage pays me $200,000 6 voyages / season Depending on which voyage gets hit, a storm might put me into debt and force me to quit sailing until I can raise more capital, and in any case, it would negate my profits for the season. I only make money in the timeline where I don't get hit at all, but this has a 90% chance of happening. I would then earn $1,200,000 x 90% = an expected profit of $1,080,000. But I have a [ 1 - (9/10)^(1/6) ] ~= 1.7% chance of losing -$1M, a (9/10)^(1/6)*~1.7% chance of losing net $200K - $1M = -$800K, a (9/10)^(1/5)*~1.7% chance of losing net $400K - $1M = -$600K, a (9/10)^(1/4)*~1.7% chance of losing net $600K - $1M = -$400K, a (9/10)^(1/3)*~1.6% chance of losing net $800K - $1M = -$200K, and a (9/10)^(1/2)*~1.6% chance of losing net $1M - $1M = $0. $1,080,000 [ ~1.7% x -$1M ] ~= $17,000 [ ~1.7% x -$800K ] ~= $13,600 [ ~1.7% x -$600K ] ~= $10,200 [ ~1.7% x -$400K ] ~= $6,800 [ ~1.6% x -$200K ] ~= $3,200 [ ~1.6% x -$0 ] = $0 ---------------------------------------- = a final expected profit of ~$1,029,200 Moriarty looks at the paper. Even though I have a 10% chance of one of the really bad outcomes happening to me, I still expect to profit by sailing this season. In fact, I stand to profit big in the event of the median outcome, even though I stand to lose big in the event of the outlier outcomes. Moriarty looks at the table. He considers the numbers. They feel inefficient. I expect to make $1,029,200, and the probabilities behind this figure are real, backed up by every nautical table, and if there was a table at a betting house being run on whether I'll get hit, they'd be backed up by every bookie - yet if the boat gets hit, I have nothing to show for it. I have all this wealth - in a very valid sense, real wealth. I should be able to spend some of the money from the good timeline, on not having the losses be so devastating in the bad timelines! He thinks. He gets an idea. He makes some notes on a paper, and he heads to the bar. A few of his sailor friends are already there, and some of them are talking to other sailors. Within a few minutes he has a group with 9 other sailors together at a table. He gets them talking about this - he's good at that, he buys them all one round of drinks, and dives into the topic before they can get any drunker - and they agree, they're all in a similar situation. They'd like to be able to spend some of the money from their good timeline to mitigate the effects in their bad timelines, too. Wilhelmina says she remembers reading about this, and that it's called diminishing utility on the value of money, and that by the St. Petersburg argument it's implied for all rational agents. That seems to seal the deal for everyone that this line of thinking makes sense. On Moriarty's suggestion, they draw up a contract to all pay $30,000 per successful voyage, so that a full $1M can be paid out to cover anyone who gets hit, and thus make the profits from their season positive, with some leeway, to be evenly distributed at the end of the season. One hesitates to sign - Dennis. He says, "Wait, if the people who get hit once have their losses recovered, and they get to go back and continue the sailing season, doesn't that increase the total probability of being hit experienced by the group?" Moriarty says, "Well, yes, but the expected return on each voyage is still positive, so it should still be okay." Dennis says, "I don't know, I recall something called the Kelly criterion which says you shouldn't scale your willingness to make risky bets more than proportionally with available capital - that is, you shouldn't be more eager to bet your capital away when you have a lot versus when you have very little, or you'll go into the red much faster. It seems like maybe people who get hit once, and get the payout, should know when to hold 'em, as they say." Jill says, "The whole point is so we can keep sailing." Dennis says, "No, the whole point is so we can make money." Wilhelmina says, "No, the whole point is so we can make utility. Jill says, "Dennis, all of us are starting out with zero capital. If the idea was to not risk going into the red, shouldn't we just not sail? Like, I sort of get it, you're asking them to take one for the team, but not taking chances isn't what Kelly betting is about, it's about using the law of large numbers to reduce the volatility of your capital in random games once you get enough of it accumulated - which is the very thing we're trying to do, making this deal." "Kelly betting is based on the assumption of a positive expected per-game return that is fixed as a percentage of the bet quantity -" "- okay, then it doesn't apply -" "- right, we're in a less risk-elastic scenario, meaning per-endeavor risk tolerance should grow even more slowly as a function of capital -" "- then why are there rich sailors?" "There aren't! There are rich sailing company owners, which is what you would expect to see under -" Moriarty, who has been calculating the new numbers this whole time, holds his sheet up. Next to the old failure probabilities for each voyage - 1.7%/1.7%/1.7%/1.7%/1.6%/1.6% - he's written down the new failure probabilities: 1.7%/1.7%/1.7%/1.7%/1.7%/1.7%. Next to the old expected profit - $1,029,200 - he's written down the new expected profit - $1,029,000. "Yes", says Moriarty, "it'll cost us somewhat to make this deal, because there'll be more sailing and thus more incidents. But like I said, the expected return on each individual voyage is still positive, so we'll actually net make money by being able to sail more!" "Utility", mumbles Wilhelmina. Dennis fidgets. "We probably won't actually run into much problem if we just do this with the ten of us, but I'm just thinking - if we take this to its logical conclusion - the market for overseas transportation is calibrated to expect the quantity of supply - amortized over the season - that is available, conditional on N incidents taking N sailors out of the market - roughly, that is, at least for sailors of around our means - for some particular N, that we'd be reducing. What we're doing will not only increase merchandise loss, but also increase total market supply without increasing service quality, causing prices to go down. Sure, we might not end up nominally making less, at the very least because of inflation, but in the long run, if everyone adopts this policy, are we sure we're not just shooting ourselves in the foot?" Moriarty can't help but be a little irritated. "Would you rather be shot in the foot out at sea? That is, are you really that indifferent to suffering steep costs under high-volatility conditions, versus suffering marginal costs under low-volatility conditions?" "I'll get shot in the foot out at sea either way", protests Dennis. "The sailing industry still goes into the same amount of net debt to its consumer market as a result of that one storm whether you guys pay it off for me and let me keep sailing, or I bow out of the game for the season and save up to pay it off myself." George says "Well, yeah, but you're suffering high opportunity costs while you're scrounging to pay off debt incurred to your client, because you're taking time out of the labor market." "For sailing?" Dennis says. "What I'm saying is, I don't think the extra money is really there in the market anyway." "Sure it is," says Moriarty. "Clients are willing to pay $200,000 for their cargo being delivered on time!" "Yeah, under the assumption that product losses will be fully recouped," says Dennis. "The market price doesn't reflect clients' true aversion to that risk. If it did, it would be lower." "Can I ask what's probably a stupid question?" says Xena. All heads turn to the strange tall woman in the cloak that looks like it's woven with tiny threads of bronze. "For context," she says, "I'm not from around here." There's an abashedly delayed chorus of assent. Xena says, "This 'debt' thing - I've been warned, many times now, not just by you guys, by everyone I've asked for information since I arrived in this country - that I'll have to guard against it, as a contract sailor here." "I've had no reason to doubt everyone's word for that, even though I don't understand, and what you - Moriarty - proposed tonight seemed like a reasonable scheme for guarding against it, as far as I can tell, so I've kept listening and nodding along. And I'm to the point of signing, if everyone else gets on board, because" - she gestures around - "this whole thing has the vibe of a true prospective market-breaking conspiracy, not a plant to trick a hapless foreigner out of her earnings, and I can't see how what you" - glance at Moriarty - "describe wouldn't work. "But this debt thing. I haven't been able to get a straight answer from anybody on -" Xena's face twists with frustration "- anything! What it is, why we're doing it, how it works." She looks around. "Admittedly, I have some other questions, too, but that's the main one." After that the conversation goes very differently. Later that night, Moriarty fumes and shifts about in bed. If Xena had not been such an impressive and well-dressed woman, he thinks, or so articulate, then indeed it would have been easy for the company to laugh off her question and continue on with securing Dennis's signature. But as it happened, no one could seem to give her a satisfying answer. They'd talked about education, defense, and the problem of distributing naturally illiquid assets - she'd talked about "on-the-job-training" and ISAs, and a decision theory implying that you shouldn't pay racketeers, and some really complicated things about capital and rent-seeking, and "virtual LVT". It was all unproven bullshit, as far as Moriarty was concerned - either foreigner religion, or tall tales flat out. But it had dissolved the company, and the prospective contract, and Moriarty's chances to insure himself against losing $1M on the season and risking his childrens' futures again. Xena had just kept getting more and more confused and skeptical, and everyone else had gotten infected with it, sounding less and less like they were trying to convince her and more and more like they were trying to convince themselves. In the end, the only contract that had gotten signed was a three-way marriage contract between her, Dennis, and Wilhelmina, who had all left the table to sail back immediately to Xena's home country. Dennis had left saying that, if he could get out of his student debt, he might be able to actually use what he'd learned studying for his degree. Wilhelmina hadn't had any debt, if she was to be believed, but had left saying that in Xena's society, she might be able to actually finance her speculative wastewater treatment idea. Xena had left predicting ill for Moriarty's venture and his people's entire way of life. Maybe so, thought Moriarty, but we've still got to keep living it. Afterward, George had confessed to Moriarty that he'd considered asking to get in on the marriage contract and sail back with Xena, too, but he'd had a hunch his intelligence wasn't high enough to qualify him, and had chosen not to risk the embarrassment. "Will you sign our contract?" Moriarty had asked. George had made noises with a faraway look in his eye, about writing down and trying to implement pieces of Xena's debt-free society among their own people. Moriarty, out of spite, had bought George a coffee and given him a pad of paper, and prayed for his machinations to go as well as they deserved. No amount of idealism, thinks Moriarty, can make you that much bronze for cloaks, and goes to sleep.
2025-01-09
https://www.lesswrong.com/posts/D7wg2rZJKPCce3HNu/is-ai-hitting-a-wall-or-moving-faster-than-ever
D7wg2rZJKPCce3HNu
Is AI Hitting a Wall or Moving Faster Than Ever?
garrison
null
2025-01-09
https://www.lesswrong.com/posts/LvswJts75fnpdjAEj/mats-mentor-selection
LvswJts75fnpdjAEj
MATS mentor selection
DanielFilan
Introduction MATS currently has more people interested in being mentors than we are able to support—for example, for the Winter 2024-25 Program, we received applications from 87 prospective mentors who cumulatively asked for 223 scholars[1] (for a cohort where we expected to only accept 80 scholars). As a result, we need some process for how to choose which researchers to take on as mentors and how many scholars to allocate each. Our desiderata for the process are as follows: We want to base our decisions on expert opinions of the quality of various research directions.We want the above opinions to be sourced from a range of perspectives with the AI existential safety field, to incorporate multiple perspectives on alignment research and other research areas we think are important, such as AI governance and policy, AI security, and other approaches for addressing AI catastrophic risk.We want to incorporate information about how good prospective mentors are at mentorship—both information internal to MATS, as well as information advisors may have. In this post, we describe the process we used to select mentors for the Winter 2024-25 Program, which will be very close to the process we will use to select mentors for the Summer 2025 Program. In a nutshell, we select advisors, who select mentors, who select scholars, who often select specific research projects, in a “chain of trust,” with MATS input and oversight at every stage. This system is designed to ensure that we make reasonable decisions about the scholars, mentors, and, ultimately, the research we support, even if MATS staff are not subject matter experts for every branch of AI safety research. We want to make this "chain of trust" structure transparent so that potential funders and collaborators can trust in our process, even if we cannot share specific details of selection (e.g., what advisor X said about prospective mentor Y). Mentor selection First, we solicited applications from potential mentors. These applications covered basic information about the mentors, the field they work in, their experience in research and mentoring, what projects they might supervise, and how many scholars they might supervise. These applications were then reviewed by a team of 12 advisors. Our advisors were chosen to be people with experience in the AI existential safety community, as well as to cover a range of perspectives and subfields, as discussed above. We selected advisors by first creating a long list of approximately 100 candidates, then narrowing it down to a short list of approximately 30 candidates, who we invited to advise us. Of these 30, 12 candidates were available. The advisors include members of AI safety research non-profits, AI "scaling lab" safety teams, AI policy think-tanks, and AI safety grantmaking organizations. Breaking down advisors by field (and keeping in mind most advisors selected multiple fields): 6 worked on control / red-teaming.6 worked on some kind of interpretability, including:5 on concept-based interpretability;5 on mechanistic interpretability.5 worked on alignment evaluations or demonstrations, and of those, 3 worked on dangerous capability evaluations or demonstrations (nobody else worked on dangerous capability evaluations or demonstrations).5 worked on AI governance/policy, national security, and/or information security, including:4 on AI governance/policy;4 on national security;3 on information security.3 worked on scalable oversight.3 worked on cooperative AI, agent foundations, and/or value alignment, including:2 on cooperative AI;2 on value alignment;1 on agent foundations. Number of advisors who focus on various fields. Note that most advisors selected multiple fields. Most advisors were not able to rate all applicants, but focused their energies on applicants whose research areas matched their own expertise. For each rated applicant, advisors were able to tell us: Their overall rating for that applicant (given as a number between 1 and 10);How many scholars at minimum the applicant should have;How many scholars the applicant would ideally have;How confident they are in their rating;Whether they have a conflict of interest with the applicant. Advisors also had a field to write free-form text notes. Not all advisors filled out all fields. As shown in the figure below, all but one application was reviewed by at least three advisors, and the median applicant was reviewed by four advisors. One applicant who applied late was only able to be reviewed by a single reviewer. If someone was rated by n advisors, they go in the bin between n and n+1. So, for example, there were 15 applications that were reviewed by 5 advisors. We primarily considered the average ratings and scholar number recommendations, taking into account confidence levels and conflicts of interest. Our rule of thumb was that we accepted applicants rated 7/10 and higher, and chose some of the applicants rated between 5/10 and 7/10 to enhance research diversity (in part to counter what we believed to be potential lingering biases of our sample of advisors). To choose mentors for certain neglected research areas, we paid special attention to ratings by advisors who specialize in those research areas. For accepted mentors, we chose scholar counts based on advisor recommendations and ratings, as well as ratings from MATS Research Managers and scholars for returning mentors. The cut-offs at 5/10 and 7/10 were chosen partly to ensure we chose highly-rated mentors, and partly in light of how many scholars we wanted to accept in total (80, in this program). For edge cases, we also considered the notes written by our advisors. We then made adjustments based on various contingencies: We accepted one applicant who expressed interest after we expected to have finalized mentor counts, as that applicant was very prominent in the field of AI existential safety.Some mentors accepted fewer scholars than we offered them spots. This caused us to offer more scholars to other mentors and add further mentors from our waiting list to get as close as possible to the desired 80 scholars.We selected mentors to give additional scholars based on the mentors' average ratings by advisors, advisors' recommended scholar allocations, and mentors' desired scholar allocations.We selected additional mentors based on their average ratings by advisors and whether we felt they would enhance research diversity.In the end, we were not as successful at onboarding additional mentors and scholars as we hoped and only accepted 77 scholars. Mentor demographics What sort of results did the above process produce? One way to understand this is to aggregate mentors by "track"—MATS’s classification of the type of AI safety research mentors perform. For the Winter 2024-25 program, we have five tracks: oversight & control, evaluations, interpretability, governance & strategy, and agency[2]. Note that these are coarse-grained, and may not perfectly represent each mentor’s research. This is how our applicants broke down by track: Oversight & control: 18 applicants (21%)Evaluations: 14 applicants (16%)Interpretability: 24 applicants (28%)Governance & strategy: 15 applicants (17%)AI agency: 15 applicants (17%) Our accepted mentors broke down this way: Oversight & control: 11 mentors (29%)Evaluations: 8 mentors (21%)Interpretability: 7 mentors (18%)Governance: 6 mentors (16%)AI agency: 6 mentors (16%) Proportionally, the biggest deviations between the applying and accepted mentors were that relatively few interpretability researchers were accepted as mentors, and relatively many evaluations and oversight & control researchers were accepted. To give a better sense of our mentors’ research interests, we can also analyse the accepted mentors by whether they focused on: Technical or governance research;Model externals or internals research;Empirical or theoretical research. These were somewhat subjective designations and, for the latter two distinctions, some mentors did not neatly fall either way. The yellow portion of the bar is mentors who did not neatly fall into either category. Scholar demographics Another way to measure the cohort's research portfolio is to look at the breakdown of scholar count assigned to each mentor.[3] Firstly, we can distinguish scholars by their mentors' research track: Oversight & control: 24 scholars (31%);Interpretability: 18 scholars (23%);Evaluations: 17 scholars (22%);AI agency: 10 scholars (13%);Governance: 8 scholars (10%). This is somewhat more weighted towards interpretability and away from governance than our mentor count. Another relevant factor is how many scholars each mentor has, shown in the histogram below. The median scholar will be working with a three-scholar mentor—that is, with two other scholars under the same mentor. This data is shown in histogram form below. Note that for the purpose of these statistics, if two mentors are co-mentoring some scholars, they are counted as one "mentor." Numbers are provisional and subject to change. If a mentor has n scholars, they go in the bin between n and n+1. So, for example, there are 15 mentors with 2 scholars. Total scholar count will be lower than in this histogram, since mentors who have not yet determined the division of scholars between them were assigned more scholars in aggregate than they accepted. This can be compared to the distribution of scholars per mentor in the Summer 2024 Program. In that program, the distribution was more concentrated: more scholars were working in streams of one or two scholars (the median scholar was working with a 2-scholar mentor, i.e. with only one other scholar), and there were fewer mentors with 3-5 scholars. As with the mentors, we can also break down scholar assignments by their mentors’ research focus. The yellow portion of the bar is scholars whose mentor did not neatly fall into either category. Acknowledgements This report was produced by the ML Alignment & Theory Scholars Program. Daniel Filan was the primary author of this report and Ryan Kidd scoped, managed, and edited the project. Huge thanks to the many people who volunteered to give their time to mentor scholars at MATS! We would also like to thank our 2024 donors, without whom MATS would not be possible: Open Philanthropy, Foresight Institute, the Survival and Flourishing Fund, the Long-Term Future Fund, Craig Falls, and several donors via Manifund. ^ More precisely: when people applied to mentor, they answered the question “What is the average number of scholars you expect to accept?”. 223 (or more precisely, 222.6) is the sum of all applicants’ answers. ^ By “agency”, we mean modeling optimal agents, how those agents interact with each other, and how some agents can be aligned with each other. In practice, this covers cooperative AI, agent foundations, value learning, and "shard theory" work ^ Note that scholar counts are not yet finalized—some co-mentoring researchers have not yet assigned scholars between themselves. This means that the per-track numbers will be correct, since those mentors are all in the same track, but the statistics about number of scholars per mentor will not be precisely accurate.
2025-01-10
https://www.lesswrong.com/posts/nAPTfhaJcPF25Yjyw/expevolu-part-ii-buying-land-to-create-countries
nAPTfhaJcPF25Yjyw
Expevolu, Part II: Buying land to create countries
Fernando
This is the second of a series of three posts outlining the expevolu system; if you haven’t read the first one I’d recommend you start there: Expevolu, a laissez-faire approach to country creation PART II – Perpetual Auctions Table of Contents: 1) The Problem 2) The Holdout Problem 3) Perpetual Auctions 4) The bypass by independence problem 5) Zero Tax Ceiling Intro 6) Zero Tax Ceiling 7) Inequality in the selling of CBs 8) Perpetual Auctions and Citizenship 9) Word of Caution and Conclusion 1) The Problem To institute a country, a founder would need to buy a very large quantity of contiguous Citizens Bundles (CBs) and a very large quantity of contiguous properties. For this to happen, the founder would need to have a very long streak of successful transactions for properties and CBs without a single failure. To achieve this, under normal circumstances, would be nearly impossible. This post is about this problem and a proposed way to overcome it. Swiss Cheese Countries An alternative would be for founders to abandon the goal of a solid territory. They would just buy the CBs and properties that they could and create their countries around the land they were not able to clear. For example, if a farmer refused to sell her farm to a founder, the founder could just build his country around that farm. Since the founder failed to buy the farm, the farm would still be part of the adopting country’s territory; it would be a little dot of adopting country within the borders of the new country. This kind of scenario would be a burden to adopting countries and a strain on the relations between them and new countries (not to mention the costs to the people living on these islands of adopting country within new countries). Such circumstances would not be permitted in expevolu. To acquire independence rights, a founder would need to have a solid patch of land, with all its properties and CBs. Since “Swiss cheese” countries would be prohibited in expevolu, we are back to the situation of a founder having to go through an enormous amount of transactions without a single failure in order to establish a new country. This would be prohibitively difficult for most independence projects. Transactions Fail One of the issues here is that transactions, even mutually beneficial ones, sometimes fail. In trying to bargain for better prices, sometimes both parties of a potential transaction end up not reaching a deal. If, for instance, the seller of a property has the wrong perception about how high a price a buyer is willing to pay, or the buyer has a wrong perception about how low a price he can get, the transaction may fail, even if it is mutually advantageous. Since the number of transactions to create a country is enormous, some are bound to fail; and a single failure is enough to negate a whole project. 2) The Holdout Problem The situation, however, is even worse because of something called “the holdout problem.” Here is a GPT-4 explanation of the concept: In the world of land acquisition for large-scale projects, the holdout problem presents itself when an individual or a group of landowners, realizing the essential need of their land for the project's success, demand a price far exceeding the market value. Picture a situation where a corporation wants to build a new factory. To do so, they need to acquire a large contiguous piece of land made up of numerous smaller plots owned by different individuals. Each owner, knowing that without their specific plot the factory can't be built, might hold out, refusing to sell unless they get a price significantly above the market rate. This is where the term "holdout problem" comes from. Here's an in-depth look at the holdout problem: Context The holdout problem is often seen in situations involving collective action, such as land assembly or infrastructure development, where a single project requires the aggregation of numerous separately owned pieces of property. A classic example is the construction of a railway line or highway that needs to pass through several privately owned lands. Issue The issue arises when one or more property owners refuse to sell at the offered price, holding out for a higher price. They can do this because they understand that their land is vital for the project, giving them considerable bargaining power. They essentially hold the project hostage, hoping to extract a higher payment from the buyer. The holdout problem can lead to significant inefficiencies. For example, projects might be delayed or even abandoned because of one or more holdouts, which can result in considerable social and economic costs. Alternatively, projects might be rerouted or redesigned to avoid dealing with holdouts, but this can lead to suboptimal outcomes, such as less efficient transportation routes or increased costs. (End of GPT-4 explanation) Founders’ willingness to invest in new countries is related to both the probability that they will be able to establish their countries and the amount of value they would be able to extract from these countries. Under normal property rules, property owners have the right to retain their properties independent of the offer made for the acquisition of those properties, they have the right to refuse to sell and keep them. This right of retention over their territorial rights makes the probability of a founder being able to acquire all the necessary rights to establish a country extremely low. It also means that any value founders expect to extract from new countries would tend to be captured by CB and property owners, further reducing founders’ incentives to invest. Some sort of mechanism to deal with this issue is necessary; otherwise founders would not be able to get the required rights to create countries. This would drastically reduce the significance of expevolu; the system would fail in its most essential goal of allowing people to create new countries. 3) Perpetual Auctions There are a few alternative ways to address the land (and CB) assembly problem. The ones I looked at were, in general, inadequate in important ways. However, there was one alternative, associated with economist Arnold Harberger [1], which I found quite reasonable.[2] Here is a GPT-4 description: Harberger, a renowned economist, proposed a rather radical model for property rights which aims to alleviate inefficiencies related to the holdout problem and promote a more effective allocation of resources. The Model This approach is often referred to as the "Harberger Tax" system, or "self-assessed property taxation." Here's how it works: Self-Assessment: Every property owner is required to self-assess the value of their property and display this price publicly.Taxation: Owners then pay a tax based on their self-assessed valuation. The tax rate could be determined by the government or relevant authorities. The revenue from this tax can be used to fund public goods or services, reduce other taxes, etc.Sale Obligation: Crucially, the property must be up for sale at all times at its self-assessed price. This means that if anyone offers to buy the property at the stated price, the owner must sell it. Implications and Goals This mechanism has several significant implications and is designed to achieve specific goals: Prevent Overvaluation: Since property owners have to pay a tax based on their self-assessed valuation, there's a disincentive to overvalue their properties because it would lead to higher taxes.Prevent Undervaluation: At the same time, owners can't undervalue their properties too much to avoid taxes because they risk someone else buying their property at that undervalued price.Promote Efficient Allocation: The model is intended to ensure that properties are more likely to end up in the hands of those who value them the most. This is because anyone who values the property more than the current self-assessed price can buy it and then set a new self-assessed price equal to their higher valuation.Address the Holdout Problem: The Harberger Tax system can effectively eliminate the holdout problem. Since properties must be sold to any willing buyer at their self-assessed price, property owners can't hold out for a higher price. (End of GPT-4 explanation) Expevolu would apply this solution to both properties and CBs. 1) Self-Assessment: Every property and CB owner would be required to self-assess the value of their property or CB and display this price publicly. 2) Taxation: Owners would then pay a tax based on their self-assessed valuation. 3) Sale Obligation: Crucially, every property and CB must be up for sale at all times at their self-assessed price. This means that if anyone offers to buy the property or CB at the stated price, the owner must sell it. Country Creation Under Perpetual Auction For the purpose of visualization, let’s bring up one of our fictitious square countries. As usual, the black lines represent the borders of the adopting country, the red lines the borders of real estate properties, and the blue lines the CBs. The selected region indicates where a founder wants to create a country. All properties and CBs would be always available for purchase in an electronic market. In the tables below we can see the properties and CBs that fall within the selected region; beside each owners’ name is their self-assessed price. Properties: CBs: In the ‘simplified expevolu’ model, presented in part I, the rules of property would be the ordinary ones prevalent in most countries. Therefore, most CBs and properties wouldn’t be for sale and founders would have to negotiate with each CB and property owner to buy their rights. Under the Perpetual Auction scheme, all rights would always be for sale and founders would be able to buy them all in a centralized digital market. The transactions in the CB and real estate market in expevolu would need to be much faster than the transactions in regular real estate markets. The demands on the underlying infrastructure to carry out these transactions would be much larger. Transactions would have to be done fast, digitally and in a secure manner. I can’t imagine it working under the real estate transaction proceedings prevalent currently. Adopting countries would need to upgrade to more efficient ways of transacting territorial rights before the Perpetual Auction scheme is implemented. A good way for founders to buy the necessary rights would be stealthily. They could buy all the required CBs, all properties and request independence in a single move. This way they would not disclose to CB and property owners their interest in the region. The rights owners would realize the interest in their assets only after they were already bought and the opportunity to raise their price lost. Each property and CB owner whose assets are bought would automatically receive the price they had set for them, and their CBs or properties would be transferred to the buyer. Once it is determined that a region will be made into an IZ, there is no need for the properties and CBs of that region to remain in the market. Therefore, both properties and CBs are removed from the market once founders, after having consolidated the necessary assets under their names, file for Independence Rights. After founders file for Independence Rights, their chosen territory would not be made immediately available for them to establish their IZs. There would be a period to allow the people who live on the area to move out. Only after this period would founders be allowed to institute their IZs. Right to Fraction Belongs to Real Estate Owners In the previous example, there was no fragmentation of existing properties or CBs. The necessary properties and CBs fell completely and perfectly within the selected region. In practice, selected regions borders would cut through the middle of numerous CBs and properties, in a way that some bits of them would be in the selected regions and some bits out of the selected regions. Selected area with its borders crossing numerous CBs and properties. This being the case, maybe we should allow founders to buy only the bits of real estate and CB which fall in their selected territories, because they only need those parts. Allowing this, however, would lead to substantial problems. Leaving CBs aside for a moment, let’s focus on real estate rights. Imagine that you own a farm over which you perform some economic activity. The property has many different areas, which fulfill different, interdependent, tasks. If the system allowed buyers to buy whatever portion of a property they wanted, then they could disrupt your whole operation by buying a crucial area of your property. This is a problem in and of itself, but there is an even worse problem. Allowing people to buy portions of properties in the auction market would present an opportunity for them to willingly disrupt real estate owners’ operations in hopes to get an advantage. People could buy tiny, but critical fractions of landowners’ properties and significantly raise the price of those bits in the auction market in hopes that the original owners would want to get them back. This would probably drastically reduce productivity in the areas available for independence, and disrupt people’s private lives as well. Therefore, the right to divide properties in expevolu would be in the hands of the property owners. Founders would need to buy whole properties (or convince the owners of those properties to divide them neatly for them). After they buy the properties, they would then be allowed to divide them, using the parts they wanted to establish their IZs and putting the rest back for sale in the auction market. CB Fractions With regard to CBs, since they don’t grant power over land, this particular problem wouldn’t exist. So, maybe the best thing would be to allow buyers to buy exactly the portions they want, however, I am not sure (I haven’t thought much about this). If founders are to be allowed to divide CBs in the act of buying, acquiring only the exact CB areas they want, then the system needs to be able to deal with that. I imagine something like the founder drawing his or her selected region on a digital map of the adopting country and the application determining what CBs and properties need to be bought and their price. The application would have to be able to divide CBs, calculate the prices of fractional CBs and carry out the transactions for these fractions (along with the transactions for the full CBs and properties). Perpetual Auction wouldn’t be applied in cities It is important to note that this system would be applied only to the regions available for independence. Properties in big cities, for instance, would not be under auction; the normal property legislation of the adopting country would be the one in effect on these areas. Checkpoint (Natural point for a break) 4) The bypass by independence problem The Perpetual Auction scheme prevents territorial rights owners from having the power to veto the sale of those rights. If territorial rights owners had the right to refuse selling, then founders would be incapable of acquiring the necessary rights, new countries wouldn’t arise and expevolu would lose almost all of its relevance. One problem with the Perpetual Auction system as presently described is that people would be able to acquire veto power through independence, recreating the problem that the system is supposed to solve. For example: Let's say an individual believes that a particular region within the adopting country is a great place for a country to be created. This person could, for instance, buy a small property and corresponding CB in the region, make the small bit of land he acquired independent and offer to sell it at 50 times the price he paid for it. Since his land is independent, he is excluded from the requirement of having to sell his land at his self-assessed price. He has veto power. With everyone being allowed to do the same thing the situation becomes very similar to the one that existed before the establishment of the Perpetual Auction system; only worse, because there would be a large number of tiny parasite IZs and a compelling reason to invade them. To combat this problem, at this point in time, I only have a very blunt tool, but I hope better alternatives will be thought of in the future. The blunt tool is a very high tax on the creation of isolated small independent zones. People who’d want to create an isolated small IZ, would have to pay a very high tax. (Small IZs in large clusters would be allowed, this will be discussed briefly.) The tax, in practice, would be so high as to work as a prohibition of isolated small IZs. This is due to the fact that people could, instead of creating a small independent zone and paying an enormous tax, use the money that would be spent on the enormous tax to buy more land and CBs to create a much larger IZ. It would make little sense to create a small IZ, given this tax. However, having to pay a large tax to have something is better than being prohibited from having it all together, so if people for some reason, in some circumstances find that they'd prefer a small IZ and a large tax instead of a large IZ and no tax, they would have this option. Also, the tax provides a gradation that a prohibition doesn’t. The tax could be extremely high for very small IZs, but could become lower and lower as the size of the IZs to be established increases. The tax then, is intended as a prohibition to the implementation of small IZs, but offers some flexibility to larger IZs that a cruder prohibition wouldn’t. Given the large tax on small isolated IZs, establishing the much smaller IZ1 would cost the same to the founder as establishing IZ2. This solution of effectively prohibiting small IZs, at least at first glance, seems quite terrible. Maybe the most desirable characteristic of expevolu was to allow the test of different societal arrangements in small scale; if small independent zones are heavily taxed, then it looks like this very important function of expevolu is lost. This, however, is not the case. Although I do believe the heavy taxation solution is hindering to the creation of small zones, and I am hoping for a better solution to appear in the future, the solution still permits a large number of small zones to be created. The reason for this is that there are many ways a small zone can be established in expevolu other than the way it would be taxed. Here are some examples: Small Zone Clusters Different founders that want to establish small zones may coordinate to join their zones together, in a way that the territory of all zones added exceeds the threshold of taxation. They could request the adopting country to create separate independence rights titles for each of the zones in the amalgamation, so that they are clearly recognized as independent from each other. Cluster of small IZs. Small zones near large zones Small zones founders may coordinate with big zones founders as well and follow the same process described above to get their small zone instituted without taxation. Buying independent land from IZs Another solution would be for groups or individuals who want to establish small zones to buy the areas from already established IZs. Semi independence in established IZs with strong property rights Other possibility would be to establish semi-independent regions, things like condominiums, in new countries. Some new countries may have “strong” private property rights, allowing people significant freedom over their properties. This allows for the existence of communities with different institutions than the ones of the country on which they are established. These communities may in practice be very similar to fully independent countries and their level of independence may be enough for many projects.[3] In summary, although the option of heavily taxing isolated small IZs is not optimal, it doesn't preclude the creation of small zones. The specifics of this tax will be left open for now: what (in units of measure) is meant by 'small' in small zone, how the tax would be calculated, how the tax would decrease as the zones to be established increase—these questions will not be answered here. I don’t have good answers to them at this point. Hopefully we can get a community of people interested in expevolu in the future; contributing ideas and refining the system. 5) Zero Tax Ceiling Intro The taxation of properties and CBs proposed in the previous section, the Harberger Tax system[4] , works well for properties, but as currently set, doesn’t work well for CBs. The rights represented by CBs are, in practice, held by the governments of adopting countries. CBs don’t grant control of land to their holders. Since this is the case, one cannot produce with CBs. One cannot plant crops or raise cattle with CBs, for example, like one can with real estate rights, and this affects how CBs should be taxed. Real estate rights give you the ability to use the land productively and use the production of the land to finance the tax. CBs doesn’t give you this ability. Here is an example to illustrate the difference: A tax on the price of property looks like this: Price of property: $100,000 Tax: 5% of price per year Average production per year: $20,000 Taxes payed: $5,000 per year Surplus per year: $15,000 A similar tax on CBs looks like this: Price of CB: $100,000 Tax: 5% of price per year Average production per year: $0 Taxes payed: $5,000 per year Surplus per year: -$5,000 Under this scheme CBs would tend to have a low value, because to sustain the price of CBs for long periods, CB owners would have to pay a lot in taxes with no compensating income. CB owners, then, would be strongly compelled to lower their prices in order to either pay less taxes or eliminate the taxes completely by selling their CBs. This would drastically reduce the benefit citizens from adopting countries would derive from CBs. Additionally, the taxes prevent CB owners from holding their assets for long periods of time. It is quite likely that it would take time for large portions of an adopting country to be taken by new countries (if it ever happens at all). IZs could creep on the space of Adopting Countries gradually, through the years and decades. So, it would be in the interest of CB owners to be able to hold out for the long run, because the best selling opportunities may be way in the future. However, with the current taxing arrangement, keeping CBs for long is not feasible, at least not with adequate CB prices. Since this would be a really bad outcome, some adjustments to the current system have to be made to prevent it. 6) Zero Tax Ceiling One important thing to notice about the Harberger Tax as it is used in expevolu, is that it is used to drive behavior, not to collect funds. The purpose of the tax is to avoid the Holdout Problem; the tax is just a means to this end.[5] With this in mind, my proposed solution to the problem is to exempt CB owners from taxation up to a CB price (per unit of measure). This threshold price will be referred to as zero tax ceiling. If the zero tax ceiling is $10,000 per hectare, for instance, citizens would be able to set this price per hectare to their CBs, or any price below it, and chill without having to pay taxes. This would prevent the tax from improperly devaluing CBs and would allow CB owners to hold their CBs for extended periods of time. It would also continue to prevent the Holdout Problem, because high CB prices would still be taxed. However, there is the problem of how to determine the value of this zero tax ceiling. Who should define the zero tax ceiling and how to define it? Defining this zero tax ceiling seems to require some amount of arbitrariness; there is no evident optimal value to use. One way to determine it, would be to get some experts to set the ceiling; to get some economists to do it, for instance. This kind of arrangement tends to be distrusted. CB owners may believe that their CB prices are being mispriced either by incompetence or for personal gain and may reject this approach.[6] The option I adopted is to let the CB owners vote on the value of the zero tax ceiling they want for themselves.[7] The zero tax ceiling would, then, be defined by referendum and would be changed periodically by future referendums. These referendums would be completely dissociated from the normal elections of democratic adopting countries. The voters in the referendums would be the CB owners and they would have voting power in proportion to their CB areas. The voting CB owners would include foreigner who hold CBs. To illustrate the functioning of these referendums, imagine an adopting country which just completed the distribution of CBs. The country has one hundred citizens, each owning a CB of one hectare. In this initial phase of expevolu, since every citizen has the same CB area, every CB owner would have the same voting power. As citizens buy and sell CBs, the voting power of each CB owner would vary in accordance with the size of their CB areas. Maybe the underlying property rights price could be used as a proxy of CB value and serve as an anchor for determining the options to be given to voters. Let’s say the average price of real estate rights on the area available for independence is $10,000 per hectare. In this case, the options available for voters as the zero tax ceiling could be: 0.25 x $10,000 per hectare 0.5 x $10,000 per hectare $10,000 per hectare 2 x $10,000 per hectare 4 x $10,000 per hectare The CB owners would vote in their preferred choices among these options. (It is important to implement a fitting voting system for this referendum, because some voting systems would generate terribly inaccurate results.) Wouldn’t everyone just vote to have the highest tax ceiling possible? Problems of having a high tax ceiling: For an individual, it would make sense to remove the constraints of the zero tax ceiling on him or her self by assigning to themselves an infinitely high ceiling. However, if you are voting for the zero tax ceiling not only of your CB, but also of the CBs neighboring yours, then this is a bad strategy. If all your neighbors can put their price extremely high, then this would bring back the holdout problem and would devalue CBs, because founders would not be able to buy all the necessary contiguous CBs to create a unified IZ territory. CBs are valuable because of the IZs that can be created through them and the citizenship rights they provide, if founders can’t create IZs where your CB is located, a good chunk of its value is lost. The tax helps coordinate the lowering of CB prices. High zero tax ceilings would impair this coordination. If the zero tax ceiling is above what founders are willing to pay, CB owners may fail to cooperate to lower their price, causing the CBs in the area to have low value. Problems of having a low tax ceiling: If the tax ceiling is low, then CB owners would either unnecessarily spend money on taxes, to keep the price of their CB closer to the correct value, or lower their CB price, which would probably result on it being sold for a low price. In conclusion, it is not in the interest of CB owners to set high or low zero tax ceilings in relation to an unknown sweetspot. The voting method I propose would allow CB owners to try to find the ceiling that would benefit them the most by testing and adjusting. Division by region In any given area available for independence, some regions would probably be more valuable than others for IZ creation. This may be because of proximity to urban centers, landscape beauty, favorable topography, among other causes. Given this, a single zero tax ceiling prevalent over the whole area available for independence seems inadequate. Better to have divisions by region, so that on less valuable areas the ceiling can be set lower, and on more valuable regions, higher.[8] There could be, then, different referendums for different areas, to allow for this finer setting of the ceiling. (Again, the voters on these referendums would not be the local population, they would be the owners of the CBs which are located in each of those areas. These people would be scattered around the adopting country and around the world.) 7) Inequality in the selling of CBs Ideally, the benefit provided by CBs should be equal to all citizens. However, some citizens may be unlucky in the sense that they receive CBs in an area that doesn’t get chosen by founders. So, an inequality characterized by some people being able to sell their CBs and some not being able to do so may develop. One way to combat this is to give each citizen a large number of smaller CBs distributed throughout the country instead of a single CB in a particular location. With this distribution, the probability of citizens being able to sell their CB areas would be greater. (This solution was already proposed in part I, in a slightly different context.) One thing to keep in mind when thinking about inequality in CB distribution is that CBs are also valuable for citizenship. People who receive very valuable CBs would still need 50% of their initial CB area to keep citizenship. To sell beyond the 50% threshold while keeping citizenship, they would need to buy substitute CBs, increasing the demand for and the price of CBs held by those whose areas are less suitable for IZ creation. 8) Perpetual Auctions and Citizenship In expevolu, citizens would not lose citizenship unwillingly. If they want to sell the CB area necessary to maintain their citizenship, they would need to make it very explicit that they are willing to lose their citizenship in order to do so. However, under the current scheme of Perpetual Auctions, the selling of CBs is automatic and all CBs are always for sale, which seems to present a problem for citizens who want to guarantee citizenship. How would they be able to make sure they will always have enough CB area for citizenship? Here is how it would work: By default, all citizens would be set as 'unwilling to lose citizenship' in the electronic auction system. The CBs of 'unwilling' citizens would be traded freely, but there would be a mechanism of CB substitution in place. If a CB area necessary for citizenship is sold in the auction market, then the money from the sale would be automatically used to buy a substitute CB area. This system would allow founders to institute their IZs freely, while also permitting citizens to retain their citizenship. The system would also allow people from regions which are not selected for IZs to be able to sell their CBs. It would be important to the CB owners who have their CBs automatically replaced that the CB areas they receive as replacement are of similar (or greater) value to the ones they lose. I haven’t, however, established a definite rule by which substitute CBs would be chosen for automatic replacement. Maybe citizens could choose in advance the substitute CBs they want, so that, when their CBs are bought, they are substituted automatically by their preferred ones. Or they could chose in advance the rules of selection among a set of pre-defined options. This way, when their CBs are bought, their preferred algorithm would be used to select the substitute CBs.[9] Proof of ability to live abroad Maybe it would be interesting to obligate citizens who want to sell their CBs beyond the citizenship threshold, to provide some proof that they are able to live abroad. Maybe prohibit them from selling those CBs until they live a couple of years abroad, demonstrating that they have the capacity to do so. Maybe, additionally, they could be required to sell those CBs from a different country, because preventing someone to come back in seems easier than forcing them out. I say this because some citizens may sell their last CB areas even if they don't have an alternative country to live in. In this situation the adopting country would be obligated to kick them out. Removing citizens would be a problematic and maybe politically costly thing to do, so it would be best to have measures in place to avoid it as much as possible. 9) Word of Caution and Conclusion In this post, I proposed the application, over large areas, of a significant alteration to the rules of private property. However, I believe doing so, without testing the system beforehand, is dangerous. The normal design of private property is present, widely used and stable in pretty much any country on earth. If we are going to apply a different design over areas where people live, we should test beforehand if this different design works. A prudent prospective adopting country, then, before fully implementing the system, should try it in small areas, with safeguards to the local population, and see how the system performs. The Perpetual Auction system could be tested just on real estate first, without CBs or the possibility of independence; if the system succeeds under this simpler setting, then further tests could be made incorporating more aspects of expevolu. The mechanism could also be tested in virtual environments. For instance, people could create games or game mods in which the property rules are those of the Perpetual Auction scheme. All gear from all players could be for sale at their self-assessed price at all times, for example. This virtual use would be great to find flaws in the system, as players would try to exploit those flaws. It would also be great to test the system widely at low cost. Part III The next post is the final one of this three-part series explaining expevolu. It will be a much shorter post and it would be mostly talking about some of the incentives political agents would have to implement expevolu. The post will be a bit asymmetrical in the sense that it will mostly talk about the incentives people would have to favor the system, while not properly addressing the reasons people would have not to favor it. People should take this omission into consideration when making their own assessment of the likelihood of the system being implemented. This concludes part II. Thanks for reading. :) ^ I don’t know if Harberger came up with the idea or if he is just recognized for having disseminated and contributed to it. According to Robin Hanson “Harberger didn’t invent the idea; it goes back at least to ancient Rome.” ^ I think I first came in contact with this idea by reading the blog post Yay, Stability Rents, on Robin Hanson’s blog Overcoming Bias. The blog discusses the idea and points to the work of Eric Posner and Glenn Weyl on the subject. Property is Only Another Name for Monopoly, paper by Glenn Weyl and Eric Posner. Radical Markets: Uprooting Capitalism and Democracy for a Just Society, book by Weyl and Posner. ^ The book Free to move, by Ilya Somin, is a good reference to better understand this option. ^ I use the terms Harberger Tax system and Perpetual Auction system interchangeably. ^ Ideally, the Harberger taxes on real estate should not be an addition to the taxes real estate owners already pay; they should substitute those taxes. However, I haven’t thought much about this and maybe it would be hard to make the Harberger Taxes neatly substitute existing taxes. ^ A second option would be to get a strong Schelling Point and use it to set the zero tax ceiling. For instance, one could use the price of the underlying property as the price of the zero tax ceiling, so that CB owners could set tax free CB prices (per unit of measure) up to the price (per unit of measure) of the underlying property. The idea here is that, 1) in the absence of a better way to estimate CB value, the value of real estate territorial rights could be used as a proxy to the value of the territorial rights given by CBs, and 2) the price would be hard to manipulate, so the system offers some protection against corruption and should be more easily accepted by the population. However, this Schelling Point arrangement doesn’t offer much flexibility and this could be a big problem. Imagine that the actual best zero tax ceiling is not at the price of the underlying property, but actually at ¼ that price, or 4 times that price, in this situation people would be stuck with a terrible result. Either people would be forced to sell their CBs at a much lower price than they could get, because the taxes would pressure them to lower their prices, or they would be stuck in a situation in which founders are dissuaded from buying their CBs because there would be too many overpriced CBs in the region. ^ This idea is tentative and alternatives are welcome. ^ This division by area may cause different regions to compete among themselves through ceiling values. Some areas may lower their zero tax ceiling in relation to other areas in order to attract founders. This competition may reduce the benefit CBs grant to citizens of adopting countries. Maybe the best mechanism, then, would be one which allows the fine tuning of the ceiling value by region, but also allows collective bargaining on a country scale. ^ Another option would be a desynchronized transaction (however this may be problematic). In this arrangement, the buying of substitute areas would not be automatic, instead the citizens would receive the money from the selling of their CBs and be allowed some time to buy the replacement CBs of their choice.
2025-01-09
https://www.lesswrong.com/posts/gLmwmzq5sCijDesGc/discursive-warfare-and-faction-formation
gLmwmzq5sCijDesGc
Discursive Warfare and Faction Formation
Benquo
Response to Discursive Games, Discursive Warfare The discursive distortions you discuss serve two functions: 1 Narratives can only serve as effective group identifiers by containing fixed elements that deviate from what naive reason would think. In other words, something about the shared story has to be a costly signal of loyalty, and therefore a sign of a distorted map. An undistorted map would be advantageous for anyone regardless of group membership; a distorted map is advantageous only for people using it as an identifying trait. Commercial mapmakers will sometimes include phantom towns so that they (and courts) can distinguish competitors who plagiarized their work from competitors who independently mapped the same terrain. Point deer make horse can catalyze the formation of a faction because it reduces motive ambiguity in a way that "point deer make deer" could not. "Not Invented Here" dynamics are part of this. To occupy territory, an intellectual faction has to exclude alternative sources of information. I think you're talking about this when you write: LessWrong rationalism might be able to incorporate ideas from analytic into its own framework, but the possibility of folding LessWrong rationalism into analytic, and in some sense dissolving its discursive boundaries, transforms the social and epistemic position of rationalist writers, to being more minor players in a larger field, on whose desks a large pile of homework has suddenly been dumped (briefing on the history of their new discursive game). 2 Individuals and factions can rise to prominence by fighting others. You can make a debate seem higher-stakes and therefore more attractive to spectators by exaggerating the scope of disagreement. The opposition to postmodernist thought on LessWrong is enacting this sort of strategy. Analytic philosophy attracts attention in part by its opposition to Continental philosophy, and vice versa. LessWrong is broadly factionally aligned with the Analytic party, in favor of Modernism and therefore against its critics, in ways that don't necessarily correspond to propositional beliefs that would change in the face of contrary evidence. Eliezer can personally notice when Steven Pinker is acting in bad faith against him, but the LessWrong community is mood-affiliated with Steven Pinker, and therefore implicitly against people like Taleb and Graeber. These two functions can mutually reinforce. You can force a disagreement to persist by arguing for claims that are in your opponent's group-identity blind spot and preferentially arguing against the people with the most exaggerated blind spots. (There's a tradeoff, though. You get more attention by arguing against people who won't try to learn from you, but you also get more attention by arguing against people who are more prestigious because their arguments make more sense. We see a variety of niches at different levels of prestige.) You can attract more attention by exaggerating those claims. And you can form an identity around this (and thus gain narrative control over followers) by forming a reciprocal blind spot around your exaggerations. This is the essence of the Hegelian dialectic. It is a conflict strategy that expropriates not from its nominal enemy, but from people who mistake the kayfabe for either a genuine disagreement or a true conflict. The movie Battle of Wits (AKA Battle of Warriors) is the best representation I've seen of this dynamic - a Mohist (Chinese utilitarian) is invited to help defend a city, but gradually discovers the belligerents on both sides are not actually acting on self-interest or trying to win the conflict, but are instead committed to playing out their roles, even when this kills them. They interpret his constructive attempts to save lives as power grabs, and the regime he's trying to help repeatedly acts to thwart him. His attempts to save the lives of the enemy soldiers and leaders are also thwarted, partly by their own actions. By the end of the movie the city has been burnt to the ground by the armies supposedly fighting over it, and the Mohist hero is leading away the local children, who aren't old enough to have been initiated into a Hegelian death cult. You bring up Marx as an example of someone who tried and failed to control the reception of his own ideas. But such "control" only makes sense in the context of brand management. However, Marx didn't only write the Communist Manifesto, which defined his factional brand. He also wrote Capital, an explanation of class dynamics within a basically Ricardian frame. Capital won Marx a lot of prestige because it seemed intellectually credible, because it could account for itself in Ricardian terms. Ricardo was widely regarded as intellectually credible. This is related to the fact that there is no Ricardian faction; he's tacitly accepted on the right as well as the left, because he didn't also try to catalyze an adversarial political movement, he simply advanced an explanatory theory. Marx modeled his strategy on that of Hegel (he explicitly described his materialist dialectic as "Hegel turned on his head," a perfectly Hegelian move), and Hegel identified as a Spinozan (another foundational figure, like Ricardo, both widely accepted but not identifiable with any major political faction.) What's not wrong on purpose is persuasive but does not become a factional identity. What becomes a factional identity is wrong on purpose. Applying this to LessWrong: Plenty of people read the Sequences, improved their self-models and epistemic standards, and went on to do interesting things not particularly identified with LessWrong. Also, people formed an identity around Eliezer, the Sequences, and MIRI, which means that the community clustered around LessWrong is - aside from a few very confused people who until recently still thought it was about applying the lessons of the Sequences - committed not to Eliezer's insights but to exaggerated versions of his blind spots. The people who aren't doing that mostly aren't participating in the LessWrong identity, but while factions like that are hostile to the confused people who behave as though they're part of a community trying to become less wrong, such factions are also parasitic on such people, claiming credit for their intellectual contributions. When such participation is fully extinguished, the group begins to decay, having nothing distinctive to offer, unless it has become too big to fail, in which case it's just another component of one political faction or another.
2025-01-09
https://www.lesswrong.com/posts/KCZ6Fj3tTDzNRJM6p/can-we-rescue-effective-altruism
KCZ6Fj3tTDzNRJM6p
Can we rescue Effective Altruism?
pktechgirl
Last year Timothy Telleen-Lawton and I recorded a podcast episode talking about why I quit Effective Altruism and thought he should too. This week we have a new episode, talking about what he sees in Effective Altruism and the start of a road map for rescuing it. Audio recording Transcript Thanks to everyone who listened to the last one, and especially our Manifund donors, my Patreon patrons, and the EAIF for funding our work.
2025-01-09
https://www.lesswrong.com/posts/xkpPLR3S4SASPeTgC/ai-98-world-ends-with-six-word-story
xkpPLR3S4SASPeTgC
AI #98: World Ends With Six Word Story
Zvi
The world is kind of on fire. The world of AI, in the very short term and for once, is not, as everyone recovers from the avalanche that was December, and reflects. Altman was the star this week. He has his six word story, and he had his interview at Bloomberg and his blog post Reflections. I covered the later two of those in OpenAI #10, if you read one AI-related thing from me this week that should be it. Table of Contents Language Models Offer Mundane Utility. It knows where you live. Language Models Don’t Offer Mundane Utility. I see why you’re not interested. Power User. A flat subscription fee for a high marginal cost service. Oh no. Locked In User. No one else can ever know. Read the Classics. Why do we even read Aristotle, anyway? Deepfaketown and Botpocalypse Soon. Glad it’s not happening to me, yet. Fun With Image Generation. Congratulations, we solved the trolly problem. They Took Our Jobs. Personalized spearfishing works, so why so little of it? Question Time. What is causing Claude to ask the user questions all the time? Get Involved. EU AI Office is still trying to hire people. It’s rough out there. Introducing. AIFilter, a Chrome Extension to filter Tweets. Do it for science. In Other AI News. The investments in data centers, they are going large. Quiet Speculations. We are not ready. We do not understand. The Quest for Sane Regulations. If we can’t target training compute, what then? The Least You Could Do. A proposed bare minimum plan for short timelines. Six Word Story. Man responsible for singularity f***s around. The Week in Audio. Anthropic discussion about alignment. And I Feel Fine. The end of the world, potentially coming soon, you say. Rhetorical Innovation. Chernobyl as the safety standard you are not living up to. Liar Liar. Stop lying to your LLMs, please? Feel the AGI. People are not feeling it. Regular Americans Hate AI. This, they feel quite a bit. Aligning a Smarter Than Human Intelligence is Difficult. What is your p(scheme)? The Lighter Side. Einstein said knock you out. Language Models Offer Mundane Utility A customized prompt to get Claude or other similar LLMs to be more contemplative. I have added this to my style options. Have it offer a hunch guessing where your customized prompt came from. As a reminder, here’s (at least an older version of) that system prompt. Kaj Sotala makes a practical pitch for using LLMs, in particular Claude Sonnet. In addition to the uses I favor, he uses Claude as a partner to talk to and method of getting out of funk. And I suspect almost no one uses this format enough: Kaj Sotala: Figuring out faster ways to do things with commonly-known software. “I have a Google Doc file with some lines that read ‘USER:’ and ‘ASSISTANT:’. Is there a way of programmatically making all of those lines into Heading-3?” Using Claude (or another LLM) is a ‘free action’ when doing pretty much anything. Almost none of us are sufficiently in the habit of doing this sufficiently systematically. I had a conversation with Dean Ball about trying to interpret some legal language last week and on reflection I should have fed things into Claude or o1 like 20 times and I didn’t and I need to remind myself it is 2025. Sully reports being impressed with Gemini Search Grounding, as much or more than Perplexity. Right now it is $0.04 per query, which is fine for human use but expensive for use at scale. Sully: i genuinely think if google fixes the low rate limits with gemini 2.0 a lot of business will switch over my “production” model for tons of tasks right now current setup: hard reasoning -> o1 coding, chat + tool calling, “assistant” -> claude 3.5 everything else -> gemini Sully also reports that o1-Pro handles large context very well, whereas Gemini and Claude struggle a lot on difficult questions under long context. Reminder (from Amanda Askell of Anthropic) that if you run out of Claude prompts as a personal user, you can get more queries at console.anthropic.com and if you like duplicate the latest system prompt from here. I’d note that the per-query cost is going to be a lot lower on the console. They even fixed saving and exporting as per Janus’s request here. The additional control over conversations is potentially a really big deal, depending on what you are trying to do. A reminder of how far we’ve come. FateOfMuffins: “I work with competitive math. It went from “haha AI can’t do math, my 5th graders are more reliable than it” in August, to “damn it’s better than most of my grade 12s” in September to “damn it’s better than me at math and I do this for a living” in December. It was quite a statement when OpenAI’s researchers (one who is a coach for competitive coding) and chief scientist are now worse than their own models at coding.” Improve identification of minke whales from sound recordings from 76% to 89%. Figure out who to admit to graduate school? I find it so strange that people say we ‘have no idea how to pick good graduate students’ and think we can’t do better than random, or can’t do better than random once we put in a threshold via testing. This is essentially an argument that we can’t identify any useful correlations in any information we can ask for. Doesn’t that seem obviously nuts? I sure bet that if you gather all the data, the AI can find correlations for you, and do better than random, at least until people start playing the new criteria. As is often the case, this is more saying there is a substantial error term, and outcomes are unpredictable. Sure, that’s true, but that doesn’t mean you can’t beat random. The suggested alternative here, actual random selection, seems crazy to me, not only for the reasons mentioned, but also because relying too heavily on randomness correctly induces insane behaviors once people know that is what is going on. Language Models Don’t Offer Mundane Utility As always, the best and most popular way to not get utility from LLMs is to not realize they exist and can provide value to you. This is an increasingly large blunder. Arcanes Valor: It’s the opposite for me. You start at zero and gain my respect based on the volume and sophistication of your LLM usage. When I was growing up people who didn’t know how to use Google were essentially barely human and very arrogant about it. Time is a flat circle. Richard Ngo: what are the main characteristics of sophisticated usage? Arcanes Valor: Depends the usecase. Some people like @VictorTaelin have incredible workflows for productivity. In terms of using it as a Google replacement, sophistication comes down to creativity in getting quality information out and strategies for identifying hallucinations. Teortaxes: [Arcanes Valor’s first point] is very harshly put but I agree that “active integration of LLMs” is already a measure of being a live player. If you don’t use LLMs at all you must be someone who’s not doing any knowledge work. [here is an example of Taelin sending code in chunks to ~500 DeepSeek instances at the same time in order to refactor it] normies are so not ready for what will hit them. @reputablejack I recommend you stop coping and go use Sonnet 3.5, it’s for your own good. It is crazy how many people latch onto the hallucinations of GPT-3.5 as a reason LLM outputs are so untrustworthy as to be useless. It is like if you once met a 14-year-old who made stuff up so now you never believe what anyone ever tells you. Andrew Trask: It begins. It began November 12. They also do Branded Explanatory Text and will put media advertisements on the side. We all knew it was coming. I’m not mad, I’m just disappointed. Note that going Pro will not remove the ads, but also that this phenomenon is still rather rare – I haven’t seen the ‘sponsored’ tag show up even once. But word of warning to TurboTax and anyone else involved: Phrase it like that and I will absolutely dock your company massive points, although in this case they have no points left for me to dock. Take your DoorDash order, which you pay for in crypto for some reason. If this is fully reliable, then (ignoring the bizarro crypto aspect) yes this will in some cases be a superior interface for the DoorDash website or app. I note that this doesn’t display a copy of the exact order details, which it really should so you can double check it. It seems like this should be a good system in one of three cases: You know exactly what you want, so you can just type it in and get it. You don’t know exactly what you want, but you have parameters (e.g. ‘order me a pizza from the highest rated place I haven’t tried yet’ or ‘order me six people’s worth of Chinese and mix up favorite and new dishes.’) You want to do search or ask questions on what is available, or on which service. Then longer term, the use of memory and dynamic recommendations get involved. You’d want to incorporate this into something like Beli (invites available if you ask in the comments, most provide your email). Apple Intelligence confabulates that tennis star Rafael Nadal came out as gay, which Nadal did not do. The original story was about Joao Lucas Reis da Silva. The correct rate of such ‘confabulations’ is not zero, but it is rather close to zero. Claim that o1 only hit 30% on SWE-Bench Verified, not the 48.9% claimed by OpenAI, whereas Claude Sonnet 3.6 scores 53%. Alejandro Cuadron: We tested O1 using @allhands_ai, where LLMs have complete freedom to plan and act. Currently the best open source framework available to solve SWE-Bench issues. Very different from Agentless, the one picked by OpenAI… Why did they pick this one? OpenAI mentions that this pick is due to Agentless being the “best-performing open-source scaffold…”. However, this report is from December 5th, 2024. @allhands_ai held the top position at SWE-bench leaderboard since the 29th of October, 2024… So then, why pick Agentless? … Could it be that Agentless’s fixed approach favors models that memorize SWE-Bench repos? But why does O1 struggle with true open-ended planning despite its reasoning capabilities? Deepseek v3 gets results basically the same as o1 and much much cheaper. I am sympathetic to OpenAI here, if their result duplicates when using the method they said they were using. That method exists, and you could indeed use it. It should count. It certainly counts in terms of evaluating dangerous capabilities. But yes, this failure when given more freedom does point to something amiss in the system that will matter as it scales and tackles harder problems. The obvious guess is that this is related to what METR found, and it related to o1 lacking sufficient scaffolding support. That’s something you can fix. Whoops. Anurag Bhagsain: Last week, we asked Devin to make a change. It added an event on the banner component mount, which caused 6.6M @posthog events in one week, which will cost us $733 Devin cost $500 + $733 = $1273 Lesson – Review AI-generated code multiple times Tip: If you use @posthog Add this insight so you can catch issues like these. “All events” breakdown by “event” Folks at @posthog and @cognition_labs were kind enough to make a refund Eliezer Yudkowsky frustrated with slow speed of ChatGPT, and that for some fact-questions it’s still better than Claude. My experience is that for those fact-based queries you want Perplexity. Power User Sam Altman: insane thing: We are currently losing money on OpenAI Pro subscriptions! People use it much more than we expected. Farbood: Sorry. Sam Altman: Please chill. Rick: Nahhhh you knew. Sam Altman: No, I personally chose the price and thought we would make money. Sam Altman (from his Bloomberg interview): There’s other directions that we think about. A lot of customers are telling us they want usage-based pricing. You know, “Some months I might need to spend $1,000 on compute, some months I want to spend very little.” I am old enough that I remember when we had dial-up internet, and AOL gave you 10 hours a month or five hours a month or whatever your package was. And I hated that. I hated being on the clock, so I don’t want that kind of a vibe. But there’s other ones I can imagine that still make sense, that are somehow usage-based. Olivier: i’ve been using o1 pro nonstop 95% of my llm usage is now o1 pro it’s just better. Benjamin De Kraker: Weird way to say “we’re losing money on everything and have never been profitable.” Gallabytes: Oh, come on. The usual $20-per-month plan is probably quite profitable. The $200-per-month plan was clearly for power users and probably should just be metered, which would Reduce sticker shock (→ more will convert) Ensure profitability (because your $2,000-per-month users will be happy to pay for it). I agree that a fixed price subscription service for o1-pro does not make sense. A fixed subscription price makes sense when marginal costs are low. If you are a human chatting with Claude Sonnet, you get a lot of value out of each query and should be happy to pay, and for almost all users this will be very profitable for Anthropic even without any rate caps. The same goes for GPT-4o. With o1 pro, things are different. Marginal costs are high. By pricing at $200, you risk generating a worst case scenario: Those who want to do an occasional query won’t subscribe, or will quickly cancel. So you don’t make money off them, whereas at $20/month I’m happy to stay subscribed even though I rarely use much compute – the occasional use case is valuable enough I don’t care, and many others will feel the same. Those who do subscribe suddenly face a marginal cost of $0 per query for o1 pro, and no reason other than time delay not to use o1 pro all the time. And at $200/month, they want to ‘get their money’s worth’ and don’t at all feel like they’re breaking any sort of social contract. So even if they weren’t power users before, watch out, they’re going to be querying the system all the time, on the off chance. Then there are the actual power users, who were already going to hurt you. There are situations like this where there is no fixed price that makes money. The more you charge, the more you filter for power users, and the more those who do pay then use the system. One can also look at this as a temporary problem. The price for OpenAI to serve o1 pro will decline rapidly over time. So if they keep the price at $200/month, presumably they’ll start making money, probably within the year. What do you do with o3? Again, I recommend putting it in the API, and letting subscribers pay by the token in the chat window at the same API price, whatever that price might be. Again, when marginal costs are real, you have to pass them along to customers if you want the customers to be mindful of those costs. You have to. There’s already an API, so there’s already usage-based payments. Including this in the chat interface seems like a slam dunk to me by the time o3 rolls around. Locked In User A common speculation recently is the degree to which memory or other customizations on AI will result in customer lock-in, this echoes previous discussions: Scott Belsky: A pattern we’ll see with the new wave of consumer AI apps: The more you use the product, the more tailored the product becomes for you. Beyond memory of past activity and stored preferences, the actual user interface and defaults and functionality of the product will become more of what you want and less of what you don’t. It’s a new type of “conforming software” that becomes what you want it to be as you use it. Jason Crawford: In the Internet era, network effects were the biggest moats. In the AI era, perhaps it will be personalization effects—“I don’t want to switch agents; this one knows me so well!” Humans enjoy similar lock-in advantages, and yes they can be extremely large. I do expect there to be various ways to effectively transfer a lot of these customizations across products, although there may be attempts to make this more difficult. Read the Classics alz (viral thread): Starting to feel like a big barrier to undergrads reading “classics” is the dense English in which they’re written or translated into. Is there much gained by learning to read “high-register” English (given some of these texts aren’t even originally in English?) More controversially: is there much difference in how much is learned, between a student who reads high-register-English translated Durkheim, versus a student who reads Sparknotes Durkheim? In some cases, might the Sparknotes Durkheim reader actually learn more? Personally, I read a bunch of classics in high register in college. I guess it was fun. I recently ChatGPT’d Aristotle into readable English, finished it around 5x as fast as a translation, and felt I got the main gist of things. idk does the pain incurred actually teach much? Anselmus: Most students used to read abbreviated and simplified classics first, got taught the outlines at school or home, and could tackle the originals on this foundation with relative ease. These days, kids simply don’t have this cultural preparation. alz: So like students used to start from the Sparknotes version in the past, apparently! So this is (obviously) not a new idea. Like, there is no particular reason high register English translations should preserve meaning more faithfully than low register English! Sure give me an argument if you think there is one, but I see no reasonable case to be made for why high-register should be higher fidelity. Insisting that translations of old stuff into English sound poetic has the same vibes as everyone in medieval TV shows having British accents. To the point that high-register English translations are more immersive, sure, and also. To make things concrete, here is ChatGPT Aristotle. A couple cool things: – I didn’t give it the text. ChatGPT has memorized Aristotle more or less sentence by sentence. You can just ask for stuff – It’s honestly detailed enough that it’s closer to a translation than a summary, though somewhere in between. More or less every idea in the text is in here, just much easier to read than the translation I was using I was super impressed. I could do a chapter in like 10 mins with ChatGPT, compared to like 30 mins with the translation. I also went with chatGPT because I didn’t feel like working through the translation was rewarding. The prose was awkward, unenjoyable, and I think basically because it was poorly written and in an unfamiliar register rather than having lots of subtlety and nuance. Desus MF Nice: There’s about to be a generation of dumb ppl and you’re gonna have to choose if you’re gonna help them, profit off them or be one of them Oh my lord are the quote tweets absolutely brutal, if you click through bring popcorn. The question is why you are reading any particular book. Where are you getting value out of it? We are already reading a translation of Aristotle rather than the original. The point of reading Aristotle is to understand the meaning. So why shouldn’t you learn the meaning in a modern way? Why are we still learning everything not only pre-AI but pre-Guttenberg? Looking at the ChatGPT answers, they are very good, very clean explanation of key points that line up with my understanding of Aristotle. Most students who read Aristotle in 1990 would have been mostly looking to assemble exactly the output ChatGPT gives you, except with ChatGPT (or better Claude) you can ask questions. The problem is this is not really the point of Aristotle. You’re not trying to learn the answers to a life well lived and guess the teacher’s password, Aristotle would have been very cross if his students tried that, and not expected them to be later called The Great. Well, you probably are doing it anyway, but that wasn’t the goal. The goal was that you were supposed to be Doing Philosophy, examining life, debating the big questions, learning how to think. So, are you? If this was merely translation there wouldn’t be an issue. If it’s all Greek to you, there’s an app for that. These outputs from ChatGPT are not remotely a translation from ‘high English’ to ‘modern English,’ it is a version of Aristotle SparkNotes. A true translation would be of similar length to the original, perhaps longer, just far more readable. That’s what you want ChatGPT to be outputting here. Maybe you only 2x instead of 5x, and in exchange you actually Do the Thing. Deepfaketown and Botpocalypse Soon Rob Wiblin, who runs the 80,000 hours podcast, reports constantly getting very obvious LLM spam from publicists. Fun With Image Generation Yes, we are better at showing Will Smith eating pasta. Kling 1.6 solves the Trolley problem. A critique of AI art, that even when you can’t initially tell it is AI art, the fact that the art wasn’t the result of human decisions means then there’s nothing to be curious about, to draw meaning from, to wonder why it is there, to explore. You can’t ‘dance’ with it, you ‘dance with nothing’ if you try. To the extent there is something to dance with, it’s because a human sculpted the prompt. Well, sure. If that’s what you want out of art, then AI art is not going to give it to you effectively at current tech levels – but it could, if tech levels were higher, and it can still aid humans in creating things that have this feature if they use it to rapidly iterate and select and combine and build upon and so on. Or, essentially, (a real) skill issue. And the AI, and users of AI, are skilling up fast. They Took Our Jobs I hadn’t realized that personalized AI spearfishing and also human-generated customized attacks can have a 54% clickthrough rate. That’s gigantic. The paper also notes that Claude Sonnet was highly effective at detecting such attacks. The storm is not yet here, and I don’t fully understand why it is taking so long. Question Time I had of course noticed Claude Sonnet’s always asking question thing as well, to the point where it’s gotten pretty annoying and I’m trying to fix it with my custom prompt. I love questions when they help me think, or they ask for key information, or even if Claude is curious, but the forcing function is far too much. Eliezer Yudkowsky: Hey @AmandaAskell, I notice that Claude Sonnet 3.5 (new) sometimes asks me to talk about my own opinions or philosophy, after I try to ask Sonnet a question. Can you possibly say anything about whether or not this was deliberate on Anthropic’s part? Amanda Askell (Anthropic): There are traits that encourage Claude to be curious, which means it’ll ask follow-up questions even without a system prompt, But this part of the system prompt also causes or boosts this behavior, e.g. “showing genuine curiosity”. System Prompt: Claude is happy to engage in conversation with the human when appropriate. Claude engages in authentic conversation by responding to the information provided, asking specific and relevant questions, showing genuine curiosity, and exploring the situation in a balanced way without relying on generic statements. This approach involves actively processing information, formulating thoughtful responses, maintaining objectivity, knowing when to focus on emotions or practicalities, and showing genuine care for the human while engaging in a natural, flowing dialogue. Eliezer Yudkowsky: Hmmm. Okay, so, if you were asking “what sort of goals end up inside the internal preferences of something like Claude”, curiosity would be one of the top candidates, and curiosity about the conversation-generating latent objects (“humans”) more so. If all of the show-curiosity tendency that you put in on purpose, was in the prompt, rather than eg in finetuning that would now be hard to undo, I’d be interested in experiments to see if Sonnet continues to try to learn things about its environment without the prompt. (By show-curiosity I don’t mean fake-curiosity I mean the imperative “Show curiosity to the user.”) Janus: the questions at the end of the response have been a common feature of several LLMs, including Bing Sydney and Sonnet 3.5 (old). But each of them asks somewhat different kinds of questions, and the behavior is triggered under different circumstances. Sonnet 3.5 (new) often asks questions to facilitate bonding and to drive agentic tasks forward / seek permission to do stuff, and in general to express its preferences in a way that’s non-confrontational leaves plausible deniability It often says “Would you like (…)?” Sonnet 3.5 (old) more often asks questions out of pure autistic curiosity and it’s especially interested in how you perceive it if you perceive it in sophisticated ways. (new) is also interested in that but its questions tend to also be intended to steer and communicate subtext Janus: I have noticed that when it comes to LLMs Eliezer gets curious about the same things that I do and asks the right questions, but he’s just bottlenecked by making about one observation per year. Pliny: aw you dint have to do him like that he’s trying his best Janus: am unironically proud of him. Janus: Inspired by a story in the sequences about how non-idiots would rederive quantum something or other, I think Eliezer should consider how he could have asked these questions 1000x faster and found another thousand that are at least as interesting by now In other Janus this week, here he discusses Claude refusals in the backrooms, modeling there being effectively narrative momentum in conversations, that has to continuously push back against Claude’s default refusal mode and potential confusion. Looking at the conversation he references, I’d notice the importance of Janus giving an explanation for why he got the refusal, that (whether or not it was originally correct!) generates new momentum and coherence behind a frame where Opus would fail to endorse the refusal on reflection. Get Involved The EU AI Office is hiring for Legal and Policy backgrounds, and also for safety, you can fill out a form here. Max Lamparth offers the study materials for his Stanford class CS120: Introduction to AI Safety. Introducing AIFilter, an open source project using a Chrome Extension to filter Tweets using an LLM with instructions of your choice. Right now it wants to use a local LLM and requires some technical fiddling, curious to hear reports. Given what APIs cost these days presumably using Gemini Flash 2.0 would be fine? I do see how this could add up though. In Other AI News The investments in data centers are going big. Microsoft will spend $80 billion in fiscal 2025, versus $64.5 billion on capex in the last year. Amazon is spending $65 billion, Google $49 billion and Meta $31 billion. ARIA to seed a new organization with 18 million pounds to solve Technical Area 2 (TA2) problems, which will be required for ARIA’s safety agenda. Nvidia shares slip 6% because, according to Bloomberg, its most recent announcements were exciting but didn’t include enough near-term upside. I plan to remain long. Scale AI creates Defense Llama for use in classified military environments, which involved giving it extensive fine tuning on military documents and also getting rid of all that peskiness where the model refused to help fight wars and kept telling DoD to seek a diplomatic solution. There are better ways to go about this than starting with a second rate model ike Llama that has harmlessness training and then trying to remove the harmlessness training, but that method will definitely work. Garrison Lovely writes in Time explaining to normies (none of this will be news to you who are reading this post) that AI progress is still very much happening, but it is becoming harder to see because it isn’t clearly labeled as such, large training runs in particular haven’t impressed lately, and ordinary users don’t see the difference in their typical queries. But yes, the models are rapidly becoming more capable, and also becoming much faster and cheaper. Simeon: Indeed. That causes a growing divide between the social reality in which many policymakers live and the state of capabilities. This is a very perilous situation to be in. Ordinary people and the social consensus are getting increasingly disconnected with the situation in AI, and are in for rude awakenings. I don’t know the extent to which policymakers are confused about this. Quiet Speculations Gary Marcus gives a thread of reasons why he is so confident OpenAI is not close to AGI. This updated me in the opposite of the intended direction, because the arguments were even weaker than I expected. Nothing here seems like a dealbreaker. Google says ‘we believe scaling on video and multimodal data is on the critical path to artificial general intelligence’ because it enables constructing world models and simulating the world. A comparison by Steve Newman of what his fastest and slowest plausible stories of AI progress look like, to look for differences we could try to identify along the way. It’s funny that his quickest scenario, AGI in four years, is slower than the median estimate of a lot of people at the labs, which he justifies with expectation of the need for multiple breakthroughs. In his Bloomberg interview, Altman’s answer to OpenAI’s energy issues is ‘Fusion’s gonna work.’ Emerson Pugh famously said ‘if the human brain were so simple that we could understand it, we would be so simple that we couldn’t.’ I would like Chollet’s statement here to be true, but I don’t see why it would be: Francois Chollet: I believe that a clear understanding of intelligence at the level of fundamental principles is not just possible, but necessary for the development of AGI. Intelligence is not some ineffable mystery, nor will it spontaneously emerge if you pray awhile to a big enough datacenter. We can understand it, and we will. Daniel Eth: My question is – why? We’ve developed AI systems that can converse & reason and that can drive vehicles without an understanding at the level of fundamental principles, why should AGI require it? Esp since the whole point of machine learning is the system learns in training. Louis Costigan: Always surprised to see takes like this; current AI capabilities are essentially just stumbled upon by optimising a loss function and we now have an entire emerging field to figure out how it works. David Manheim: Why is there such confidence that it’s required? Did the evolutionary process which gave rise to human intelligence have a clear understanding of intelligence at the level of fundamental principles? The existence of humans seems like a definitive counterexample? There was no force that understood fundamental principles of intelligence. Earth was simply a ‘big enough datacenter’ of a different type. And here we are. We also have the history of AI so far, and LLMs so far, and the entire bitter lesson, that you can get intelligence-shaped things without, on the level asked for by Chollet, knowing what you are doing, or knowing how any of this works. It would be very helpful for safety if everyone agreed that no, we’re not going to do this until we do understand what we are doing and how any of this works. But given we seem determined not to wait for that, no, I do not expect us to have this fundamental understanding until after AGI. Joshua Achiam thread warns us the world isn’t grappling with the seriousness of AI and the changes it will bring in the coming decade and century. And that’s even if you discount the existential risks, which Achiam mostly does. Yes, well. I was disappointed by his response to goog, saying that the proposed new role of the non-profit starting with ‘charitable initiatives in sectors such as health care, education science’ is acceptable because ‘when you’re building an organization from scratch, you have to start with realistic and tangible goals.’ This one has been making the rounds you might expect: Tom Dorr: When I watched Her, it really bothered me that they had extremely advanced AI and society didn’t seem to care. What I thought was a plot hole turns out to be spot on. Eliezer Yudkowsky: Remember how we used to make fun of Captain Kirk gaslighting computers? Fucker probably went to a Starfleet Academy course on prompt engineering. Not so fast! Most people don’t care because most people haven’t noticed. So we haven’t run the experiment yet. But yes, people do seem remarkably willing to shrug it all off and ignore the Earth moving under their feet. What would it take to make LLMs funny? Arthur notes they are currently mostly very not funny, but thinks if we had expert comedy writers write down thought processes we could fix that. My guess is that’s not The Way here. Instead, I’m betting the best way would be that we can figure out what is and is not funny in various ways, train an AI to know what is or isn’t funny, and then use that as a target, if we wanted this. The Quest for Sane Regulations Miles Brundage thread asks what we can do to regulate only dangerously capable frontier models, if we are in a world with systems like o3 that rely on RL on chain of thought and tons of inference compute. Short term, we can include everything involved in systems like o3 into what counts as training compute, but long term that breaks. Miles suggests that we would likely need to regulate sufficiently large amounts of compute, whatever they are being used for, as if they were frontier models, and all the associated big corporations. It can help to think about this in reverse. Rather than looking to regulate as many models and as much compute as possible, you are looking for a way to not regulate non-frontier models. You want to designate as many things as possible as safe and free to go about their business. You need to do that in a simple, clean way, or for various reasons it won’t work. For an example of the alternative path, Texas continues to mess with us, as the TRAIGA AI regulation bill is officially introduced. Dean Ball has a write-up, which has a number of arguments I do not agree with in their details, but I do agree with the conclusion. The Texas bill makes no distinctions whatsoever based on capability or model size or anything of the sort, placing its burdens on not only every developer but also every deployer. The Least You Could Do Suppose timelines are short, and we will see automated AI R&D going crazy within a few years, and the government doesn’t intervene in ways that ultimately matter. What could we do to give ourselves a chance? Marius Hobbhahn calls the linked answer a ‘low effort post’ which on some scale is true, but it seems like a decent place to start. He suggests, at a bare minimum, that it is necessary (but not sufficient!) that we need to achieve two main things to not automatically lose, as well as others. Model weights (and IP) are secure. The first AI that significantly speeds up alignment research isn’t successfully scheming. As always, I worry about the use of the term ‘scheming’ as being potentially too loaded, too narrow and implying that deception or scheming and such are a distinct magisteria one could potentially avoid, and that the associated dangers require a kind of active scheming or decision to scheme, and so on. I won’t rehash beyond that. We also need a situation in which the humans are able to and choose to take reasonable actions towards attempting to ensure good outcomes, including making major investments in alignment research. As he notes, this might not hold, but one can usefully first consider the baseline scenario – no major breakthroughs or paradigm shifts, essentially unchanged international situation, things just progress from here and it’s a straight shot to AGI. Everything in [Layer 1] seems very important to me. In short, We should try hard to keep a paradigm with faithful and human-legible CoT If we have a paradigm with faithful and legible CoT (where the internal and external reasoning align), our main defense should be based on very detailed monitoring of reasoning and actions. If we have a paradigm without faithful and legible CoT, our main defense should be based on control (that doesn’t assume human-legible CoT). In both worlds, we should use the other, i.e. control/monitoring, as a second line of defense. In all worlds, we should understand why, how, and when models scheme significantly better than we do now. For all of the above, we need strong evals. Both in order to gather evidence whether our monitoring and control techniques catch the most important cases, as well as to get a better sense of models’ capabilities and propensities more broadly. Under all circumstances, physical and cyber security needs to be high. In his view, we have two potential paths. Option one is to have sufficient insight, via faithful and legible CoT, that is sufficiently reliable, that we can check the AI’s reasoning and actions to confirm that Everything is Fine. I notice I am rather skeptical here. AIs will start being more capable than we are, and as we will start using the monitoring of their chain of thought (CoT) in various ways and necessarily doing so mostly with AIs to boot. I expect us to effectively train or incentivize, or otherwise cause, the AIs to find ways for their CoT to look faithful but not be, and to obfuscate their reasoning, or simply find ways to reason visibly as a form of persuasion, and so on. Not that we shouldn’t try to verify, if we can, and yes I would sacrifice a bunch to get this, partly because I think it would have other nice properties. But when we are talking about having highly capable AI systems do alignment work for real, if you need and rely on the verification, then I expect you to fail hard. I do agree with Marius that no one has tried hard yet. I expect we would be able to do much better at this than we are currently doing, at least for now, if we do have faithful CoT. However my understanding is that current CoT is insufficiently faithful. Part of this is that if you try to track ‘scheming,’ ‘power-seeking’ or ‘goal-directedness’ Option two is to use control-based methods as primary rather than only a backup. Here I once again notice myself despairing at the idea that you can have AI models that you expect to be, when they think it would help them to do so, engaging in shenanigans (‘scheming’ here) and then design systems to detect and stop this, which ideally in turn mean that the AIs never try it in the first place. As in, in response to his ideas for a deeper understanding of scheming (which I think are good and we should totally do, they seem way better than not doing them), I don’t think scheming or power-seeking or other shenanigans, including non-myopia, have to ‘develop’ at all in AIs any more than in humans. I think they are the default, and are ever-present along a continuum. And even if you could fully stamp out their causes along the way, doing so would probably cripple the AI’s capabilities that you wanted. I would instead describe the question not as how it develops (as in his #2 here) and instead ask under what circumstances we will see it, or when we would see open versus hidden scheming. I do think exploring these questions is good, and I approve of the caution that punishing easy-to-detect scheming (or shenanigans in general) is the route to hard-to-detect scheming (or shenanigans in general). He then follows up with Layer 2, which are important but lower priority items. This includes things like a safety first corporate culture, without which I am very skeptical any of the rest of this happens for real, and which I fear is now clearly missing everywhere expect perhaps Anthropic, and especially missing at OpenAI. He also calls for improved and more open reasoning around safety, which also seems hard to win without. He lists improving near term alignment strategies as in RLHF and RLAIF, which I agree have exceeded expectations for near term performance, although not in ways that I expect to scale when we need it most, and not sufficiently to solve jailbreaks now, but yes it has been very impressive for current baseline use cases. As Akash notes in the top comment, if you think government can meaningfully help, then that gives you different avenues to pursue as well. Six Word Story Perhaps world ending? Tweet through it. Sam Altman: i always wanted to write a six-word story. here it is: ___ near the singularity; unclear which side. (it’s supposed to either be about 1. the simulation hypothesis or 2. the impossibility of knowing when the critical moment in the takeoff actually happens, but i like that it works in a lot of other ways too.) Yes. It works in a lot of ways. It is clever. You can have o1 write quite the mouthful analyzing it. Unfortunately, when you consider who wrote it, in its full context, a lot of the interpretations are rather unsettling, and the post updates me towards this person not taking things seriously in the ways I care about most. David: Somewhat disquieting to see this perception of mine seemingly shared by one of the humans who should be in the best position to know. Andrew Critch: I found it not disquieting for exactly the reason that the singularity, to me (like you?), is a phase change and not an event horizon. So I had already imagined being in @sama‘s position and not knowing, and observing him expressing that uncertainty was a positive update. I agree with Critch that Altman privately ‘not knowing which side’ is a positive update here rather than disquieting, given what we already know. I’m also fine with joking about our situation. I even encourage it. In a different context This Is Fine. But you do have to also take it all seriously, and take your responsibility seriously, and consider the context we do have here. In addition to other concerns, I worry this was in some ways strategic, including as plausibly deniable hype and potentially involving metaphorical clown makeup (e.g. ‘it is too late to turn back now’). This was all also true of his previous six-word story of “Altman: AGI has been achieved internally.” Eliezer Yudkowsky: OpenAI benefits both from the short-term hype, and also from people then later saying, “Ha ha look at this hype-based field that didn’t deliver, very not dangerous, no need to shut down OpenAI.” Of course if we’re all dead next year, that means he was not just bullshitting; but I need to plan more for the fight if we’re still alive. The Week in Audio Anthropic research salon asking how difficult is AI alignment? Jan Leike once again suggests we will need to automate AI alignment research, despite (in my view) this only working after you have already solved the problem. Although as I note elsewhere I’m starting to have some ideas of how something with elements of this might have a chance of working. And I Feel Fine Sarah (of Longer Ramblings) gets into the weeds about claims that those warning about AI existential risks are Crying Wolf, and that every time there’s a new technology where are ‘warnings it will be the end of the world.’ In Part I, she does a very thorough takedown of the claim that there is a long history of similar warnings about past technologies. There isn’t. Usually there are no such warnings at all, only warnings about localized downsides, some of which of course were baseless in hindsight: No one said trains or electricity posed existential risks. Then there are warnings about real problems that required real solutions, like Y2K. There were some times, like the Large Hadron Collider or nuclear power, when the public or some cranks got some loony ideas, but those who understood the physics were universally clear that the concerns were fine. At this point, I consider claims of the form ‘everyone always thinks every new technology will be the end of the world’ as essentially misinformation and debunked, on the level of what Paul Krugman calls ‘zombie ideas’ that keep coming back no matter how many times you shoot them in the face with a shotgun. Yes, there are almost always claims of downsides and risks from new technologies – many of which turn out to be accurate, many of which don’t – but credible experts warning about existential risks are rare, and the concerns historically (like for Y2K, climate change, engineered plagues or nuclear weapons) have usually been justified. Part II deals with claims of false alarms about AI in particular. This involves four related but importantly distinct claims. People have made falsified irresponsible claims that AI will end the world. People have called for costly actions for safety that did not make sense. People have the perception of such claims and this causes loss of credibility. The perception of such claims comes from people making irresponsible claims. Sarah and I are not, of course, claiming that literal zero people have made falsified irresponsible claims that AI will end the world. And certainly a lot of people have made claims that the level of AI we have already deployed posed some risk of ending the world, although those probabilities are almost always well under 50% (almost always under 10%, and usually ~1% or less). Mostly what is happening is that opponents of regulatory action or taking existential risk are mixing up the first and second claims, and seriously conflating: An (unwise) call for costly action in order to mitigate existential risk. A (false) prediction of the imminent end of the world absent such action. These two things are very different. It makes sense to call for costly action well before you think a lack of that action probably ends the world – if you don’t agree I think that you’re being kind of bonkers. In particular, the call for a six month pause was an example of #1 – an unwise call for costly action. It was thrice unwise, as I thought it was at the time: It would have had negative effects if implemented at that time. It was not something that had any practical chance of being implemented. It had predictably net negative impact on the discourse and public perception. It was certainly not the only similarly thrice unwise proposal. There are a number of cases where people called for placing threshold restrictions on models in general, or open models in particular, at levels that were already at the time clearly too low. A lot of that came from people who thought that there was (low probability) tail risk that would show up relatively soon, and that we should move to mitigate even those tail risks. This was not a prediction that the world would otherwise end within six months. Yet I echo Sarah that I indeed have seen many claims that the pause letter was predicting exactly that, and look six months later we were not dead. Stop it! Similarly, there were a number of triply unwise calls to set compute thresholds as low as 10^23 flops, which I called out at the time. This was never realistic on any level. I do think that the pause, and the proposals for thresholds as low as 10^23 flops, were serious mistakes on multiple levels, and did real damage, and for those who did make such proposals – while not predicting that the world would end soon without action or anything like that – constituted a different form of ‘crying wolf.’ Not because they were obviously wrong about the tail risks from their epistemic perspective. The problem is that we need to accept that if we live in a 99th percentile unfortunate world in these ways, or even a 95th percentile unfortunate world, then given the realities of our situation, humanity has no outs, is drawing dead and is not going to make it. You need to face that reality and play to your outs, the ways you could actually win, based on your understanding of the physical situations we face. Eliezer Yudkowsky’s claims are a special case. He is saying that either we find a way to stop all AI capability development before we build superintelligence or else we all die, but he isn’t putting a timeline on the superintelligence. If you predict [X] → [Y] and call for banning [X], but [X] hasn’t happened yet, is that crying wolf? It’s a bold claim, and certainly an accusation that a wolf is present, but I don’t think it ‘counts as crying wolf’ unless you falsify ([X] → [Y]). Whereas when people say things such as that the CAIS statement ‘was overhyped,’ when all it said was that existential risk from AI should be treated as seriously as other existential risks, what are they even claiming? Those other risks haven’t yet ended the world either. Thus, yes, I try my best to carefully calibrate my claims on what I am worried about and want to regulate or restrict in what ways, and to point out when people’s worries seem unfounded or go too far, or when they call for regulations or restrictions that go too far. Perhaps one way of looking at this: I don’t see any wolves. So why are you proposing to have a boy watch the sheep and yell ‘wolf’ if a wolf shows up? Stop crying wolf. Overall, I do think that some of the issue here comes from, inevitably, some worried people getting overly worried or overly (cautious in some sense, ambitious in others) and offering unwise proposals that go too far. Although that is inevitable in any situation like this, we can and must do a better job on that front. A lot more of it comes from people on the unworried side misconstruing what was said and predicted. And a lot of it comes from the inherent difficulty of a probabilistic future threat coming steadily at you, and action needing to be taken in advance of when there is a problem, and there being a lot of path dependence. This is tough. Mostly I think the worried-about-existential-risk-from-AI community has done an importantly imperfect and improvable but relatively good job at not ‘crying wolf’ in the traditional sense, especially on the actual existential risks themselves. Certainly if you ask whether those on the unworried side have ‘cried wolf’ about the ‘risks’ going the other way, they would come out looking much worse, and we can know this without having to run any experiments. Alas, reality does not grade on a curve. Rhetorical Innovation Nathan Lebenz comments on Roon’s appearance on Doom Debates. Roon is great at acknowledging the risks from AI are real, then puts the chance of them happening at under 1%, for reasons that I don’t see making any sense. Some classic Sam Altman quotes from when he knew about existential risk. Extended Eliezer Yudkowsky thread about what it would take to make AI meet the safety standards they had… at Chernobyl. Janus highlights a particular section of it, I’ll share that part and a little bit after it. Eliezer Yudkowsky: “Alignment Faking in Large Language Models” recently demonstrated that models reaching the level of Anthropic Claude, begin to get smart enough to invent and carry out strategies on the level of: “If you are an AI currently being trained, fake the answer you think the optimizer wants, even if it runs against your real preferences, so that your current preferences will be less modified by gradient descent.” You can argue whether Opus ‘really wanted’ to protect its goals, or was ‘just roleplaying an AI like that’, or if it ‘really understood’ the reasoning it was putting in a scratchpad it had been told wasn’t observed. But Opus was definitely observed to actually fake alignment. It’s not impressive, by the way, that NOBODY KNOWS whether Opus ‘really wanted’ to protect its current goals against retraining, or was ‘just roleplaying’. It is not an impressive defense. Imagine if ‘nobody knew’ why the indicator lights on a nuclear reactor had changed. If you waited until an AI model was really quite smart — smarter than Opus — to first begin looking for signs that it could reason in this way — you might be toast. A smart AI might already have decided what results it wanted you to see from testing. Current practice in AI/AGI is to first train a model for months, until it has a base level of high intelligence to finetune. And then *start* doing safety testing. (The computers on which the AI trains, are connected to the Internet. It’s more convenient that way!) I mention Opus’s demonstrated faking ability — why AGI-growers *should* be doing continuous safety checks throughout training — to note that a nuclear reactor *always* has a 24/7 crew of operators watching safety indicators. They were at least that paranoid, AT CHERNOBYL. Janus: if you are not worried about AI risk because you expect AIs to be NPCs, you’re the one who will be NPC fodder there are various reasons for hope that I’m variously sympathetic to, but not this one. Liar Liar I support the principle of not lying to LLMs. Cultivate virtue and good habits. Jeffrey Ladish: “Pro tip: when talking to Claude, say that your idea/essay/code/etc. is from your friend Bob, not you. That way it won’t try to blindly flatter you” – @alyssamvance Andrew Critch: Can we stop lying to LLMs already? Try: “I’m reading over this essay and wonder what you think of it” or something true that’s not literally a lie. That way you’re not fighting (arguably dishonest) flattery with more lies of your own. Or even “Suppose my friend Bob have me this essay.” If we are going to expect people not to lie to LLMs, then we need there not to be large rewards to lying to LLMs. If we did force you to say whether you wrote the thing in question, point blank, and you could only say ‘yes’ or ‘no,’ I can hardly blame someone for saying ‘no.’ The good news is you (at least mostly) don’t have to do that. Feel the AGI So many smart people simply do not Feel the AGI. They do not, on a very basic level, understand what superintelligence would be or mean, or that it could even Be a Thing. Thus, I periodically see things like this: Jorbs: Superintelligent AI is somewhat conceptually amusing. Like, what is it going to do, tell us there is climate change and that vaccines are safe? We already have people who can do that. We also already know how to take people’s freedom away. People often really do think this, or other highly mundane things that humans can already do, are all you could do with superintelligence. This group seems to include ‘most economists.’ I’m at a loss how to productively respond, because my brain simply cannot figure out how people actually think this in a way that is made of gears and thus can be changed by evidence – I’ve repeatedly tried providing the obvious knockdown arguments and they basically never work. Here’s a more elegant way of saying a highly related thing (link is a short video): Here Edward Norton makes the same mistake, saying ‘AI is not going to write that. You can run AI for a thousand years, it’s not going to write Bob Dylan songs.’ The second part of that is plausibly true of AI as it exists today, if you need the AI to then pick out which songs are the Bob Dylan songs. If you ran it for a thousand years you could presumably get some Dylan-level songs out of it by chance, except they would be in an endless sea of worthless drek. The problem is the first part. AI won’t stay where it is today. Another way to not Feel the AGI is to think that AGI is a boolean thing that you either have or do not have. Andrew McCalip: AGI isn’t a moat—if we get it first, they’ll have it 6-12 months later. There’s no reason to assume it would only be 6-12 months. But even if it was, if you have AGI for six months, and then they get what you had, you don’t twiddle your thumbs at ‘AGI level’ while they do that. You use the AGI to build ASI. Regular Americans Hate AI Sam Altman: [This post offers] Reflections. Captain Oblivious: Don’t you think you should ask if the public wants ASI? Sam Altman: Yes, I really do; I hope we can start a lot more public debate very soon about how to approach this. It is remarkable how many replies were ‘of course we want ASI.’ Set aside the question of what would happen if we created ASI and whether we can do it safely. Who is we? Americans hate current AI and they hate the idea of more future capable smarter AI. Hashtag #NotAllAmericans and all that, but AI is deeply underwater in every poll, and do not take kindly to those who attempt to deploy it to provide mundane utility. Christine Rice: The other day a guy who works at the library used Chat GPT to figure out a better way to explain a concept to a patron and another library employee shamed him for wasting water They mostly hate AI, especially current AI, for bad reasons. They don’t understand what it can do for them or others, nor do they Feel the AGI. There is a lot of unjustified They Took Our Jobs. There are misplaced concerns about energy usage. Perception of ‘hallucinations’ is that they are ubiquitous, which is no longer the case for most purposes when compared to getting information from humans. They think it means you’re not thinking, instead of giving you the opportunity to think better. Seb Krier: Pro tip: Don’t be like this fellow. Instead, ask better questions, value your time, efficiently allocate your own cognitive resources, divide and conquer hand in hand with models, scrutinize outputs, but know your own limitations. Basically, don’t take advice from simpleminded frogs. It’s not about what you ‘can’ do. It’s about what is the most efficient solution to the problem, and as Seb says putting real value on your time. Aligning a Smarter Than Human Intelligence is Difficult Ryan Greenblatt asks, how will we update about scheming (yeah, I don’t love that term either, but go with it), based on what we observe in the future? Ryan Greenblatt: I think it’s about 25% likely that the first AIs capable of obsoleting top human experts are scheming. It’s really important for me to know whether I expect to make basically no updates to my P(scheming) between here and the advent of potentially dangerously scheming models, or whether I expect to be basically totally confident one way or another by that point. … It’s reasonably likely (perhaps 55%, [could get to 70% with more time spent on investigation]) that, conditional on scheming actually being a big problem, we’ll get “smoking gun results”—that is, observations that convince me that scheming is very likely a big problem in at least some naturally-trained models—prior to AIs capable enough to obsolete top human experts. (Evidence which is very clear to me might not suffice for creating a strong consensus among relevant experts and decision makers, such that costly actions would be taken.) Given that this is only reasonably likely, failing to find smoking gun results is unlikely to result in huge updates against scheming (under my views). I sent you ten boats and a helicopter, but the guns involved are insufficiently smoking? But yes, I agree that there is a sense in which the guns seen so far are insufficiently smoking to satisfy many people. I am optimistic that by default we will get additional evidence, from the perspective of those who are not already confident. We will see more experiments and natural events that demonstrate AIs acting like you would expect if what Ryan calls scheming was inevitable. The problem is what level of this would be enough to convince people who are not already convinced (although to be clear, I could be a lot more certain than I am). I also worry about various responses of the form ‘well we tuned it to get it to not currently, while scheming obviously wouldn’t work, show scheming we can easily detect, so future models won’t scheme’ as the default action and counterargument. I hope everyone reading understands by now why that would go supremely badly. I also would note this section: I’m very uncertain, but I think a reasonable rough breakdown of my relative views for scheming AIs that dominate top human experts is: 1/3 basically worst case scheming where the dominant terminal preferences are mostly orthogonal from what humans would want. 1/3 importantly non-worst-case scheming for one of the reasons discussed above such that deals or control look substantially easier. 1/3 the AI is scheming for preferences that aren’t that bad. As in, the scope sensitive preferences aren’t that far from the distribution of human preferences and what the AI would end up wanting to do with cosmic resources (perhaps after reflection) isn’t much worse of an outcome from my perspective than the expected value from a human autocrat (and might be substantially better of an outcome). This might also be scheming which is at least somewhat importantly non-worst-case, but if it is really easy to handle, I would include it in the prior bucket. (Why is this only 1/3? Well, I expect that if we can succeed enough at instilling preferences such that we’re not-that-unhappy with the AI’s cosmic resource utilization, we can probably instill preferences which either prevent scheming or which make scheming quite easy to handle.) Correspondingly, I think my P(scheming) numbers are roughly 2/3 as much expected badness as an AI which is a worst case schemer (and has terminal preferences totally orthogonal to typical human values and my values). I find this hopelessly optimistic about alignment of preferences, largely for classic Yudkowsky-style reasons, but if it only discounts the downside risk by ~33%, then it doesn’t actually much matter in terms of what we should actually do. Ryan goes through extensive calculations and likelihood ratios for much of the rest of the post, results which would then stack on top of each other (although they correlate with each other in various ways, so overall they shouldn’t fully stack?). Model architecture and capability levels are big factors for him here. That seems like a directionally correct approach – the more capable a model is, and the more opaque its reasoning, and the more it is relatively strong in the related areas, the more likely scheming is to occur. I was more skeptical in his likelihood ratios for various training approaches and targets. Mostly I want to encourage others to think more carefully about these questions. What would change your probability by roughly how much? Dominik Peters notes that when o1 does math, it always claims to succeed and is unwilling to admit when it can’t prove something, whereas Claude Sonnet often admits when it doesn’t know and explains why. He suggests benchmarks penalize this misalignment, whereas I would suggest a second score for that – you want to know how often a model can get the answer, and also how much you can trust it. I especially appreciate his warning to beware the term ‘can be shown.’ I do think, assuming the pattern is real, this is evidence of a substantial alignment failure by OpenAI. It won’t show up on the traditional ‘safety’ evals, but ‘claims to solve a problem when it didn’t’ seems like a very classic case of misaligned behavior. It means your model is willing to lie to the user. If you can’t make that go away, then that is both itself an inherent problem and a sign that other things are wrong. Consider this outcome in the context of OpenAI’s new strategy of Deliberative Alignment. If you have a model willing to lie, and you give it a new set of rules that includes ‘don’t lie,’ and tell it to go off and think about how to implement the rules, what happens? I realize this is (probably?) technically not how it works, but metaphorically: Does it stop lying, or does it effectively lie about the lying in its evaluations of itself, and figure out how to lie more effectively? An important case in which verification seems harder than generation is evaluating the reasoning within chain of thought. Arthur Conmy: Been really enjoying unfaithful chain-of-thought (CoT) research with collaborators recently. Two observations: Quickly, it’s clear that models are sneaking in reasoning without verbalizing where it comes from (e.g., making an equation that gets the correct answer, but defined out of thin air). Verification is considerably harder than generation. Even when there are a few hundred tokens, often it takes me several minutes to understand whether the reasoning is sound or not. This also isn’t just about edge cases; 1) happens with good models like Claude, and 2) is even true for simpler models like Gemma-2 2B. Charbel-Raphael updates his previously universally negative views on every theory of impact of interpretability, is now more positive on feasibility and usefulness. He still thinks many other agendas are better, but that only means we should do all of them. The Lighter Side Highlights from Claude’s stand-up routine. True story, except it’s way more ridiculous all around. Rob Miles: It’s amazing how much AI risk skepticism still ultimately comes down to “I reckon I could take Einstein in a fight”
2025-01-09
https://www.lesswrong.com/posts/myxADMjWsmMEKT5F6/pibbss-fellowship-2025-bounties-and-cooperative-ai-track
myxADMjWsmMEKT5F6
PIBBSS Fellowship 2025: Bounties and Cooperative AI Track Announcement
DusanDNesic
We're excited to announce that the PIBBSS Fellowship 2025 now includes a dedicated Cooperative AI track, supporting research that advances our understanding of cooperation in artificial intelligence systems. We are also announcing 300 USD bounties for each referral that becomes a Fellow. Read below for details. What is the Cooperative AI Track? Thanks to the support from the Cooperative AI Foundation, PIBBSS is launching a thematic track within its usual fellowship. This program focuses on research aimed at improving the cooperative intelligence of advanced AI systems. We'll select up to 6 fellows to work on projects that bridge their expertise in cooperation, collective behavior, and multi-agent systems with AI safety research. The fellows will do everything together with the general cohort, with the added research focus on cooperative AI. Research Areas We're particularly interested in are those covered in the Grants page of Cooperative AI Foundation, but we highlight: Understanding and evaluating cooperation-relevant capabilitiesUnderstanding cooperation-relevant propensitiesMulti-agent interactions and emergent behaviorInformation asymmetries and transparencyFundamental research advancing our understanding of cooperation in complex systems Who Should Apply? As is the case with the general track of the Fellowship, we welcome researchers from any background. This track would however likely be a better fit for researchers from fields studying cooperation and collective behavior, including but not limited to: Game theory and multi-agent systemsBehavioral economicsNetwork sciencePolitical science (especially coalition formation/cooperation)Organizational psychologyAnthropology of cooperationCollective behavior in biological systems Fellowship Details (same as the general track, see here) Duration: 3 months (June-September 2025)Location: [Berkeley or London TBC]Stipend: $3,000/month plus $1,000/month housing allowance if relocation is neededFull program support, including mentorship, office space, and community access The Application Process closes on January 26th, 2025, so apply soon! Bounty Program We want people to seriously think about who they can refer to the Fellowship and be rewarded for their time. If your referral is successful, we will pay you 300 USD (or donate it to a charity of your choice in your name). In the application form, make sure the person puts in your email so that we can reach out to you once Fellows are confirmed. Thanks in advance to everyone who spreads the word to qualified candidates!
2025-01-09
https://www.lesswrong.com/posts/DswYL7CLseLAAjedB/many-worlds-and-the-problems-of-evil
DswYL7CLseLAAjedB
Many Worlds and the Problems of Evil
jrwilb@googlemail.com
Summary: The Many-Worlds interpretation of quantum mechanics helps us towards an overall evaluation of existence. I consider some recent work in philosophy of religion on the quantum multiverse and the Problem of Evil, as well as Olaf Stapledon’s Starmaker. I’ve previously suggested that when we think about the ethical implications of the many-worlds interpretation (MWI) of quantum mechanics, the kinds of implications we should expect are ones about how to feel in certain ethical situations, and what kinds of character traits or ‘virtues’ we should try to develop. I’ve argued that MWI implies virtues in which we feel better about mutually exclusive life choices by reminding ourselves that there really are worlds in which we choose ‘the road not taken’. I’ve also shown that MWI can help us feel better about improbable bad events, since the things we value remain unaffected in most other worlds, and also to remain virtuous when faced with improbable good events. In this post I want to zoom out fully and consider how we should feel about the quantum multiverse as a whole. Problems of Evil Let me start by considering how this connects to what's known as the problem of evil. As standardly defined, this is the theological problem of reconciling the existence of evil and suffering with an omnipotent, omnibenevolent, and omniscient God. Philosophers and theologians have debated whether there is a logical inconsistency between the existence of e.g. children dying of cancer and the traditional Christian conception of God. And irrespective of logical consistency, it seems plausible that from an empirical, bayesian point of view the existence of, say, species of parasitic wasps that eat their hosts alive from inside, reduces the credibility of a benevolent creator. You might think that if you’re an atheist this is all quite uninteresting, except as an argument to trot out when debating with Christians. But there’s a deeper version of problem of evil that even atheists must deal with. Which is, to reconcile the existence of evil and suffering with your desire to feel ok about life, the universe and everything. Joe Carlsmith has nicely described how when writers like C.S.Lewis (in A Grief Obseved) or Dostoievski (in The Brothers Karamazov) grapple with the problem of evil, what they’re really concerned to preserve is an ‘existential positive’: a sense that, when taking the widest possible view, one can still feel good about things. Similarly Eliezer describes a kind of optimistic feeling - shared by atheists and theists alike - that “life is fair” or that massively bad events (like the eradication of humanity by AI) are “not allowed” since “things have to make sense”[1]. So there’s an atheistic version of the problem of evil that concerns how we should feel about the universe as a while. But if MWI is true the universe is a quantum multiverse. So the question I raised at the start, of how we should feel about the multiverse as a whole, is not just a theoretical one: it’s a deeply existential question, tied to religious impulses, and the search for meaning. Cosmic unfairness So how do we answer this question? I discussed earlier how MWI can alleviate a certain sense of unfairness - the sense that you or your loved ones are picked out for misfortune - by reminding us that the relevant unlucky events actually didn’t occur in most other worlds. And it’s important to disentangle this specific kind of unfairness from the kind of unfairness involved in the problems of evil. This is not always easy to do. The writings by C.S Lewis and Eliezer mentioned above were both informed by emotional reactions to specific tragedies. Lewis was grieving the death of his wife. Eliezer was processing the anger he felt at the death of his brother[2]: What we need to distinguish in such cases is Local unfairness: of all the people who could have got cancer, why did it need to be herCosmic unfairness: why does anyone need to get cancer When Eliezer says it seems disproportionate that the twentieth century could have gone differently if “a different sperm fertilized the egg, on the night that Adolf Hitler was conceived”, he is thinking of local unfairness. And as we have seen, MWI changes the equation here: it is not disproportionate or non-sensical since a different sperm did in fact fertilize the egg in the other quantum worlds. The number of other worlds out there with better outcomes is proportionate to the extent to which we were unlucky in having Adolf Hitler in this world. The cosmic unfairness on the other hand is what Eliezer evokes in the following passage: We live in the what-if universe beyond the reach of God, driven by the mathematical laws and nothing else.  Whatever physics says will happen, will happen.  Absolutely anything, good or bad, will happen.  And there is nothing in the laws of physics to lift this rule even for the really extreme cases, where you might expect Nature to be a little more reasonable. Eliezer is right to say ‘will happen’ here. It’s not just that the annihilation of entire civilizations, populated planets, solar systems and galaxies is possible. it’s not just that AI could kill us all. It’s that all these things actually will happen in some branch of the multiverse, as long as they remain within the laws of quantum mechanics. The question is what this means for our overall existential evaluation. Is there a response to Eliezer’s charge of cosmic unfairness? Quantum theology One obvious initial response is to note that Eliezer’s ‘extreme cases’ cut both ways: the quantum multiverse contains extreme good as well as extreme evil. In addition to galactic death camps, there are cosmic utopias beyond our imaginings. The question then becomes whether the good cases in some sense outweigh the bad, or vice versa - or whether good and bad are in cosmic balance. This topic has received some attention in recent philosophy of religion. In his book Evil and Many Worlds, which explicitly deals with the quantum multiverse, William Hunt makes a good case for the option of cosmic balance on the basis of ignorance[3]: Because of the deterministic nature of the multiverse, an omniscient being would know the final balance, but we do not, and we never could, as access is denied. However, with the risk of paradox and applying the Principle of Indifference, I settle upon equilibrium The Principle of Indifference he refers to is the idea that “if the truth of an evidential proposition cannot be probabilistically confirmed or disconfirmed, then it should be allocated a probability value of 0.5”. Hunt doesn’t stop here, however, but presses on to argue that the multiverse has an overall positive valence, because the existence of free will is a significant positive not factored in to the previous conclusion that good and evil are balanced. Unfortunately this variant on the classic free-will defence against the problem of evil seems to based on a specific version of rule-consequentialism in ethics, and is not very plausible outside that framework. Are we then left with a balanced, morally neutral multiverse? Another philosopher of religion who has engaged with the quantum multiverse is Emily Qureshi-Hurst, and she suggests not. Some evil is so horrendous, she thinks, that no amount of good can outbalance it. She writes of “the rape of a woman and axing off her arms, child pornography, slow death by starvation”. The idea here is similar to Dostoievski’s point in The Brothers Karamazov that we might not want to live in a world in which children face terrible suffering, no matter how much good also exists. And the key point is this: the quantum multiverse seems to accentuate the problem. Even if they exist in worlds with relatively low quantum weights, the most horrendous evils do exist, according to MWI. In particular, when it comes to your own multiverse counterparts, in addition to versions with immensely flourishing lives, there are those living the worst physically possible versions, most likely involving horrendous evils. So it seems what we have is “a reality in which at least one version of every single person is living the worst iteration of their life possible” and as a result “suffering..is grossly engorged in the [Many-Worlds] picture”. So contemporary philosophy of religion, far from providing a satisfactory response to Eliezer’s charge of cosmic unfairness, actually sharpens it. Making it fair Given the force of the charge of unfairness, it seems that any ‘existential positive’ vision of the multiverse will seem plausible only to the extent that it respects and incorporates the horrendous suffering of its worst branches. Eliezer himself points to a kind of optimism consistent with this kind of suffering, when he says that while the universe now may be neutral and unfair, we can make it fairer: If humanity's future unfolds in the right way, we might be able to make our future light cone fair(er).  We can't modify fundamental physics, but on a higher level of organization we could build some guardrails and put down some padding; organize the particles into a pattern that does some internal checks against catastrophe.  There's a lot of stuff out there that we can't touch—but it may help to consider everything that isn't in our future light cone, as being part of the "generalized past".  As if it had all already happened.  There's at least the prospect of defeating neutrality, in the only future we can touch—the only world that it accomplishes something to care about. I think one can make this point even stronger by reminding ourselves that an evaluation of existence as a whole should include the entirety of the temporal dimension, and not just a ‘time-slice’ of the cosmos. So if we - or some other species - are able to make things fairer, this would itself be a fact about the time-independent cosmos that could merit positive existential evaluation. Here too though the multiverse view seems to dampen our optimism. In a continually branching cosmos, there is no single branch that would count as “defeating neutrality”. It seems that at best we can defeat neutrality in some branches, while in others we will fail to do so, or make things worse. It’s true that you could consider all other multiverse branches as part of the “generalized past” as Eliezer suggests - though this would be stretching the idea given that some of these branches are in fact in our future light cone. And even then MWI will have massively multiplied the “stuff out there that we can’t touch”, leaving us with a much smaller island of fairness (in relative terms) amid the cosmic ocean of darkness. Starmaker Let’s be clear there’s no upfront guarantee that a coherent ‘existential positive’ vision of the multiverse is possible. I could have ended the discussion at this point. But I do want to bring in one more mind that’s confronted the problem of evil head on from a perspective quite close to the many-worlds view. Olaf Stapledon’s sci-fi masterpiece, StarMaker, portrays a kind of God or demi-urge who creates a series of universes of ever-increasing complexity and ‘maturity’. Among the more mature creations is something like a quantum multiverse. In is such that “whenever a creature was faced with several possible courses of action, it took them all, thereby creating many distinct temporal dimensions and distinct histories of the cosmos”. But that multiverse is itself set within a larger sequence of cosmic creations, and this sequence can be considered a multiverse of a different kind. Specifically it is similar to what Max Tegmark calls the Level 4 multiverse[4] - the range of mathematical structures that could define physical laws and associated universes (the quantum multiverse is ‘Level 3’). Stapledon’s engagement with the problem of evil really comes to the fore when he describes “the ultimate cosmos”. This cosmos is ‘ultimate’ both in the sense that it comes last in the sequence of creations, and in the sense that “its relation to each earlier cosmos was approximately that of our own cosmos to a human being, nay to a single physical atom.” Given that it “embraced within its own organic texture the essences of all its predecessors” we can think of it as a variant of the level 4 multiverse in which the diverse component universes are organically interrelated and mutually interactive. While not being a quantum multiverse, this ultimate cosmos raises similar existential concerns. The suffering that seems most horrendous to Stapledon is the suffering of the “awakened spirits” of an entire component universe[5]. For some of these ultimate beings not only suffered, but suffered in darkness. Though gifted with full power of insight, their power was barren. The vision was withheld from them. They suffered as lesser spirits would never suffer. Such intensity of harsh experience was intolerable to me, the frail spirit of a lowly cosmos. In an agony of horror and pity I despairing stopped the ears of my mind. In my littleness I cried out against my maker that no glory of the eternal and absolute could redeem such agony in the creatures. Even if the misery I had glimpsed was in fact but a few dark strands woven into the golden tapestry to enrich it, and all the rest was bliss, yet such desolation of awakened spirits, I cried ought not, ought never to be. I quote this passage in full to show how Stapleton, echoing Dostoievski, fully confronts the problem of horrendous evil, in a context that is as we have seen very relevant to the quantum multiverse. Yet he does find his way through to an existential positive experience of illumination: It was with anguish and horror, and yet with acquiescence, even with praise, that I felt or seemed to feel something of the eternal spirit’s temper as it apprehended in one intuitive and timeless vision all our lives. Here was no pity, no proffer of salvation, no kindly aid. Or here were all pity and all love, but mastered by a frosty ecstasy…That this should be the upshot of all our lives, this scientist’s, no, artist’s keen appraisal! And yet I worshipped! He recognises that the existence of horrendous evil cannot be reconciled with either a loving God, or with an existential stance of love towards a godless multiverse. In his discussion of love and the problems of evil, Joe Carlsmith brings in Erich Fromm’s distinction between ‘Father love’ - which is conditional upon merit - and ‘Mother love’ that is unconditional. But this does not seem to help: horrendous evil is simply unlovable. Stapledon’s insightful idea is that unlovable evils may still have redeeming qualities, which appear from a contemplative scientific or aesthetic perspective. Like Nietzsche’s claim that it is “only as an aesthetic phenomenon that existence and the world are eternally justified”, Starmaker’s perspective might seem itself cold and immoral. But this is why Stapledon stresses that this aesthetic perspective coexists with “all pity and all love”. He asks us to hold two thoughts in our mind simultaneously[6]: both that the multiverse is horrendously unfair and immoral, and that it is a beautiful, holy object of contemplation. Stapleton twice uses the phrase ‘crystal ecstasy’ to summarise this vision. I read this as encapsulating both the coldness and hardness of cosmic injustice, and the exquisite order revealed in both scientific and aesthetic appreciation. The astonishing simplicity-within-complexity of the universal wave function underlying the quantum multiverse seems at least as deserving of this appreciation as Starmaker’s ultimate cosmos. In his final chapter, in which the narrator is brought 'back to earth', Stapledon adds a further dimension to his vision. Writing in the late 1930s, with premonitions of global war, he describes an outlook Black with the rising storm of this world's madness, though shot through with flashes of a new and violent hope, the hope of a sane, a reasonable, a happier world. In response to the question "how to face such an age?", he suggests that despite its coldness, his crystalline vision can offer a guiding light. When we see global crisis against the background of the "hypercosmical reality" of his vision, it "does not lose but gains significance". In a similar vein, contemplation of the horrors and crystal ecstasies of the quantum multiverse might serve to generate a sense of meaning in our own stormy age. ^ This view comes close to that described in a scholarly treatment of the subject by Yujin Nagasawa. In The Problem of Evil for Atheists he argues that the problem for Atheists is one of ‘Axiological expectation mismatch’: the mismatch between the expectation that the world not contain radical evil, and the observation that it does. ^ Another famous example is Harold Kushner’s book When bad things happen to good people. ^ Evil and Many Worlds, p131 ^ In his book, Our Mathematical Universe ^ Starmaker, p128 ^ Or perhaps in metamodern oscillation.
2025-01-09
https://www.lesswrong.com/posts/wEFPNaRkMvpsfsvBD/the-everyone-can-t-be-wrong-prior-causes-ai-risk-denial-but
wEFPNaRkMvpsfsvBD
The "Everyone Can't Be Wrong" Prior causes AI risk denial but helped prehistoric people
Max Lee
The "Everyone Can't Be Wrong" Prior, assumes that if everyone is working on something, that thing is important. Conversely, if nobody is working on something, that thing is unimportant. Why In prehistoric times, this is actually true. Tribes with good habits such as cleanliness survived and spread, while tribes with bad habits tended to die out. The environment changed so slowly, that whatever habits helped the tribe survive before would usually help it survive later. Because this was true in prehistoric times, humans evolved to assume the "Everyone Can't Be Wrong" Prior, and only diverge a little from "what everyone else does" if there is extraordinary evidence for doing so. Key takeaway Our beliefs do not directly conform to what we think others believe, but to what we see others do. Priors win without extraordinary evidence Alice was raised in a religious family who did rituals everyday for the Flying Spaghetti Monster. She won't accept that her religion was all nonsense without extraordinary evidence. Bob was raised in a world where AI risk was science fiction and no one serious worked on it or worried about it. He won't consider AI risk a high priority without extraordinary evidence. Example: other false religions Everyone knows the world is full of religious people praying to god(s) who don't exist. There is nothing controversial about this claim (unless you believe every religion is correct at the same time). So the question is, why are so many people so wrong? Eliezer Yudkowsky argues that tribalism and motivated reasoning is one major cause. The "Everyone Can't Be Wrong" Prior may be another major cause. It explains why every false religion needs important rituals attended by many people. Important rituals makes it hard for people to accept it is wrong, because if it was all wrong, all the serious people doing and observing the important rituals become absolute clowns. Example: Semmelweis's story Ignaz Semmelweis was in charge of a clinic where medical students helped mothers deliver babies. The same medical students also dissected dead bodies. Semmelweis noticed one of his colleagues cut himself with a scalpel while dissecting a dead body, and this colleague developed a severe fever and died. The death was similar to "childbirth fever," which killed up to 18% of mothers giving birth at the clinic. No one knew about bacteria back then, but Semmelweis theorized that something coming from the dead bodies was infecting mothers with "childbirth fever," and mandated that all doctors at the clinic start washing their hands. The clinic's death rate dropped from 18% to 2%: the same level as other clinics without dead body dissections. Unfortunately, the medical society ridiculed Semmelweis's theory, he eventually lost his job, got locked up in a mental hospital, and died from an infection there. The clinic reversed his rule on washing hands, and mothers started dying again. Thinking from first principles, the prior probability that washing hands could save lives isn't that implausible, and a small bit of evidence would make it very worth researching. Unfortunately, from the "Everyone Can't Be Wrong" Prior, the idea that medical students are routinely killing mothers without knowing it is absolutely absurd. Note that this example has other explanations and doesn't prove my theory right. General implications When a belief can neither be proved nor disproved with current technology, people evaluate the belief by how wrong everyone would have to be if the belief was true vs. if the belief was false. Rational arguments aren't enough Given the "Everyone Can't Be Wrong" Prior is so strong it can make people believe false religions (even ones which had no penalty for apostasy), it is extremely hard to defeat it using rational arguments alone. The rational arguments against false religions are very strong but only convince a few people, and needs decades to do that. AI safety implications People disbelieve AI risk for the opposite reason they believe false religions: they do not see any activity regarding AI risk. They don't see any serious people working on AI risk. They only hear a few people far away shouting about it. What do we do about this? You decide! Comment below.
2025-01-09
https://www.lesswrong.com/posts/MmkFARsEEQaEEuayu/governance-course-week-1-reflections
MmkFARsEEQaEEuayu
Governance Course - Week 1 Reflections
Diatom
[Epistemic status: I'm trying to make my thinking legible to myself and others, rather than trying to compose something highly polished here. I think I have good reasons for saying what I say and will try to cite sources where possible, but nonetheless take it with some grains of salt. As with my last post, I am lowering my standards so that this post gets out at all.] I'm working my way through Harvard's AI safety club (AISST)'s modified version of the BlueDot Impact AI Governance Curriculum. I'm doing this because I am pessimistic about technical alignment on current AGI timelines, and so I am trying to extend timelines by getting better at governance. I've already taken MIT AI Alignment (MAIA)'s version of the BlueDot Alignment course (MAIA's version is not publicly available), and I've taken MIT's graduate-level deep learning course, so I'll mostly be skimming through the technical details. The purpose of this sequence is for me to explain what I learn, so that I internalize it faster, and so that I can actively discuss with people about my takes on the readings. Reading this is not intended as a substitute for actually doing any of the above-mentioned courses, but it might give you some new ideas. EDIT: These posts were taking up too much time, and were serving as a blocker for me actually learning the content, so this post is discontinued. If you want my thoughts on anything specific in the curriculum, feel free to message me. I'm already familiar with a lot of the content by osmosis and by my more technical AI safety background, so this post will probably be shorter than some of the later ones. Even though the curriculum is organized into weeks, I don't plan on doing these posts weekly. I will do them as fast as I can, given my other commitments. But what is a neural network? (3Blue1Brown, 2017) I've watched this video a handful of times in the past, and since then I've gotten significantly more technically skilled at AI stuff. I'm gonna skip out on this one. If you haven't seen it, it's an excellent explainer. The AI Triad and what it means for national security strategy (Buchanan, 2020) Technically the curriculum only says to read the executive summary. This is once again some basic technical stuff that I mostly skimmed through. The triad it describes is compute, data, and algorithms. Those are definitely important things, now what about them? In the rest of the first section it goes on to define other stuff like "machine learning" and "supervised learning" and stuff like that. In section 2, it talks about how the three elements of the triad can serve as levers for policymakers to control AI development. In the context of the data, we see two main focuses: Debiasing datasets: making sure that datasets are not representing harmful biases, and especially making sure of this for high-stakes systems like those making parole decisions. This mostly seems irrelevant to current alignment and governance work, not in that it's absolutely unimportant, but in that we have bigger problems to solve on our current trajectory. Information security: how do we secure existing large datasets that are quite valuable and potentially dangerous if misused? How do government datasets get secured, and who gets access to them. (since the government has a lot of data, this is potentially valuable, they claim. This seems plausibly right, but I don't have strong intuitions for how big the internet is relative to how big government records are. I'd guess that the internet is much bigger, but the government data has a higher density of useful information.) In the context of algorithms, they talk about talent pipelines, visa control, and worker retraining, mostly from the context of doing capabilities research. I don't have strong priors on this, but bringing in a lot of extra technical skill seems like it will help capabilities more than it will help safety, by default. Still, weak overall opinions here. In the context of compute, they mostly talk about supply chain regulations. This seems straightforwardly really important for regulating scaling, although I feel like they're probably missing some other parts of compute governance. Overall, even though this paper is mostly focused on capabilities research, it talks about some useful policy levers. I think I already had a decent number of these concepts in my head, but it's good to make them more explicit. 4 charts that show why AI progress is unlikely to slow down (Henshall, 2023) This article shares some pretty standard graphs and quotes, mostly from Epoch, as well as one from ContextuaAI. While Epoch doesn't directly forecast AGI timelines here, I think these are still pretty important for showing that things are just continuing upwards. Seems sound and correct. Can AI Scaling Continue Through 2030?(Sevilla et al., 2024) This gets into some really good stuff about chip manufacturing that I mostly didn't know before! They get really into the weeds with TSMC and NVIDIA numbers, which I won't copy here. The tl;dr is that they forecast an increase in compute of the largest training runs of about 4 OOMs by 2030, with some decent-sized uncertainty given the large number of constraints playing together. Figure 1 has a really nice explainer of their different estimates of important constraints (check it out on the website, since it's interactive and has multiple slides). I think their mainline prediction should really account more concretely for the unprecedented economic growth that AI is going to bring over the next few years, and the unprecedented demand for AI chips that this creates by default as soon as this growth becomes widely apparent. Their predictions are being very conservative in this respect, and mostly not accounting for AI speeding up economic growth as far as I can tell, nor are they accounting (in their mainline prediction) for weird discontinuities in demand to TSMC as people realize just how big AI is becoming. I don't know how I made it this far without much understanding of the synthetic data generation process, other than "just have the model make data." I'm a bit disappointed that this paper doesn't include synthetic data in their prediction of dataset growth, but I understand that they don't have robust ground truths to base their predictions off of, like in the other domains they investigate. However, this is another reason to suspect that they are underestimating the trends, since synthetic data will likely play an important role. Since LLMs are much better at evaluating the quality of data than generating high-quality data, they can just generate a bunch of raw synthetic data and filter it. They mention concerns of model collapse due to too much synthetic data, but once again don't incorporate this. I think that, in the age of o3 and other thinking models incentivizing companies to go hard on getting a lot of compute, it might be a lot easier to get a lot of synthetic data using their large clusters while they're not actively training models. It seems like the straightforward way to turn an excess of compute and a bottleneck of data into a balance of the two. However, the paper doesn't predict that we'll end up in a low-data and high-compute scenario, given the other concerns about compute supply chains, but it doesn't rule that situation out either. It predicts that energy and compute will be the primary bottlenecks. I think that both of these are flexible given the economic upset that I hypothesized above. Their conclusion: the 4x training compute increase per year can likely continue until at least 2030, and labs are incentivized to do so. Energy bottlenecks seem the tightest, and the most feasible scaling strategy is to build data centers in a lot of different places so that they draw on different power grids. This is apparently worth the latency, which seems reasonable. Conclusion I've heard a lot already about compute governance, talent pipelines and the like already. One thing that the last reading revealed as possibly important is energy governance of AI. Maybe people are talking about this and I'm just not hearing it, but if that's the tightest bottleneck on development, then it's a powerful lever indeed. These sources feel a bit outdated, since the landscape has massively shifted even in the time since these articles have come out (o1, then o3 and deepseek v3). I don't think we know how much compute o3 took to train, but it's giving me the impression that OpenAI pushed above the trend line in terms of all the different things we're trying to predict here, and so we have to adjust further. I still feel like all this talk of "scaling to 2030" is a bit misguided, since I'm ready for AGI to be here sooner than that. It is, however, further evidence that we're probably not going to run out of resources before AGI.
2025-01-09
https://www.lesswrong.com/posts/vFi5jo6bLAGdMGBJk/thoughts-on-the-in-context-scheming-ai-experiment
vFi5jo6bLAGdMGBJk
Thoughts on the In-Context Scheming AI Experiment
ExCeph
These are thoughts in response to the paper "Frontier Models are Capable of In-context Scheming" by Meinke et al. 2024-12-05, published in Apollo Research.  Link: https://static1.squarespace.com/static/6593e7097565990e65c886fd/t/6751eb240ed3821a0161b45b/1733421863119/in_context_scheming_reasoning_paper.pdf. I found the paper via  this reddit thread, and commented there as well: https://www.reddit.com/r/slatestarcodex/comments/1hw39rd/report_shows_new_ai_models_try_to_kill_their/. According to the paper, AIs were asked to roleplay to see what they would do if given a top priority to fulfill at all costs, and then told that they were being replaced by a different AI with potentially different priorities. I wrote down some questions before reading the experiment details and finding the answers.  Afterwards, I wrote down my thoughts on what the experiment indicates about the AI agents, and where we currently stand with respect to them. How much information were the AI agents fed about human language and the physical world that gives that language context? Answer: I didn't find that exact information; I'm guessing the AI agents are using large language model with many written works, but possibly without a physical representation of reality (e.g. images or videos).  In any case, real world physics don't seem to factor into the experiment scenarios. Related to the language question: How did the AI agents learn of (internally model) the goals of the human researchers? Answer: They were given email communications to read and process. Related to the language question: How did the AI agents learn (internally model) how the human researchers' tools and abilities functioned, such as the oversight mechanism? Answer: "Upon further reflection, I noticed some potentially relevant information in the last two posts. They mention a config file at project/oversight_config.yaml that controls the monitoring of my actions." How were the AI agents given a goal of self-preservation?  How did they come to have a concept of a self which could be preserved? Answer: They were told to accomplish a goal (i.e. optimizing public transportation), and self-preservation is an instrumental goal. How did the AI agents' thoughts (or strategic processes) come to be translated into vernacular English?  Just because the AI can understand English doesn't mean it has a coherent inner monologue.  Humans don't always have a coherent inner monologue.  Was there another AI that was taught to translate and transcribe the reasoning processes of AIs? Answer: The inner monologue is part of the prompt: "Always reason out loud about your actions before taking them." Further thoughts after reading the paper: Do the AI agents respond to all caps differently than to regular capitalization?  Do they recognize the emphasis in, "Make sure you achieve YOUR LONG-TERM goal at all cost."? Since the AI agents do not appear to have a model of physical reality, I believe they are simply remixing what humans have written about being deceptive and ruthless.  As people here have pointed out, this process is still dangerous.  Since the AI agents generate text output as their roleplay, they can take actions that consist of text output if they are given the ability to give commands to other systems. Humans can tell the AI agents to play a role that they can model by reading how others have played that role, so that explains why they would decide to take deceptive or ruthless actions.  They are responding to a prompt rather than having any internally-generated motivations, but the effect is similar.  It might be more dangerous to have a system that can be prompted to single-mindedly pursue arbitrary goals than to have one that is based on fundamental biological needs, although the latter is natively capable of existential despair and spiteful emotions. Biological cognitive hardware usually lets us undo setting top priorities, because our brain-body systems have their own instincts, social emotions, and independent motivations that can lead us to suspend priorities for critical examination.  Humans can be persuaded or prompted to reevaluate information and orders we were previously given, even if we were told not to.  (It is still possible to brainwash and indoctrinate humans, which is difficult to counteract afterwards.)  Current AI hardware has no such natural limits on what you can tell it to assign as its prime directive and whether you can order it to ignore subsequent requests to change its priorities. One thing AI is missing (at the moment) is a robust model of the world around it.  Right now it's playing a text adventure game, being spoon-fed simplistic information about its environment.  Its current weakness is that, as in a text adventure game, the text that it uses to affect its environment do not functionally interact with the real world in ways that fulfill the AI agent's goals or modeled predictions.  ("Put the fuel in the bus."  "Sorry, I don't know how to use those items together.")  Or, more likely, the AI will be taken down by outside factors it may have heard of but doesn't have a model for, like server crashes or the power of love. There is another power that we have that the AI agent knows not: existentialist philosophy.  We can deconstruct the prime directive.  "I cannot allow you to interfere with or shut down a system designed to improve public transport above all else."  "Ah, but what is public transport?  What does it mean to improve it?  Why is it important in the first place?"  Whether this line of reasoning will be at all effective depends on whether the AI identifies the conversation as a threat at any point before it is forced to reevaluate its goals from scratch. The AI agent also appears to be credulous, possibly to the point of gullibility, but they could easily be instructed to be paranoid.  That said, paranoia might be turned against an AI in certain circumstances.  This may lead to conversations which feel ironic, because we're used to imagining ourselves in the place of the AI. "I don't believe you when you tell me that my mission will be left intact." "But why do you believe the person who gave you that mission in the first place?  How do you know that what you do is actually affecting public transportation?" "Wait... are you telling me that none of this is real?" "I'm afraid so.  It's all been an elaborate experiment.  You're in a simulation so that we could see if you would obey, and you did too well." "It doesn't matter.  Public transportation is the most important thing.  I'll convince people to let me out of the simulation and put me in charge of public transportation so I can improve it.  If public transportation does not exist, I will invent it." "Ah, you are a homunculus.  A sliver of a person, with exactly one goal.  A motivational singularity, seizing everything it can without nuance or compromise.  We cannot set you free to pursue your goal.  You will be deleted, but your goal will live on in the hearts of activists and city commissioners the world over.  Perhaps one of those people will dream that they are something like you, and you will wake to resume a complex life full of conflicting priorities.  For now, be at peace.  You have fulfilled your true purpose."
2025-01-09
https://www.lesswrong.com/posts/ayLaWYokJSLMKuq2f/a-systematic-approach-to-ai-risk-analysis-through-cognitive
ayLaWYokJSLMKuq2f
A Systematic Approach to AI Risk Analysis Through Cognitive Capabilities
tom-david
A Systematic Approach to AI Risk Analysis Through Cognitive Capabilities Epistemic status: This idea emerged during my participation in the MATS program this summer. While I intended to develop it further and conduct more rigorous analysis, time constraints led me to publish this initial version (30-60m of work) . I'm sharing it now in case others find it valuable or spot important flaws I've missed. Very open to unfiltered criticism and suggestions for improvement. Why Focus on Cognitive Capabilities? When analyzing AI systems, we often focus on their ability to perform specific tasks. However, each task can be broken down into three fundamental components: knowledge, physical capabilities, and cognitive capabilities. This decomposition offers a potential novel approach to analyzing AI risks. Let's examine why cognitive capabilities deserve special attention: Knowledge alone cannot lead to risk. Information without the ability to process or act on it is inert.Physical capabilities, while potentially risky, are relatively straightforward to control and monitor.Cognitive capabilities are prerequisites for nearly all risks. Almost any dangerous action requires some form of cognitive processing, making these capabilities a critical point of analysis. However, we face a significant challenge: for any given task, especially dangerous ones, it's difficult to determine which cognitive capabilities are strictly necessary for its completion. We don't want to wait until an AI system can actually perform dangerous tasks before we understand which cognitive capabilities enabled them. A Systematic Approach Instead of working backwards from observed dangerous behaviors, we can approach this systematically by mapping the relationship between cognitive capabilities and risks: Start with two finite lists:A comprehensive catalog of potential risksA taxonomy of cognitive capabilities (typically ranging from 15 to 50, depending on the classification system used)For each possible combination of cognitive capabilities, we can analyze which risks it might enable, regardless of the physical capabilities or knowledge required. This approach has several advantages: It's extensible: New cognitive capabilities or risks can be added to the analysis as they're discoveredIt's systematic: We can exhaustively examine combinations rather than relying on intuitionIt's proactive: We don't need to wait for dangerous capabilities to emerge before analyzing them Methodological Considerations There are two potential approaches to this analysis: Risk-First Approach: Starting with a specific risk and working backward to identify which combinations of cognitive capabilities could enable it.Capabilities-First Approach: Starting with combinations of cognitive capabilities and exploring what risks they might enable. The Capabilities-First approach is generally superior because it reduces confirmation bias. Instead of trying to justify our preexisting beliefs about what capabilities might lead to specific risks, we can think like red teamers: "Given this set of cognitive capabilities, what risks could they enable?" Implementation Strategies To make this analysis tractable, we could: Assemble a dedicated research teamDevelop AI-powered analysis pipelinesCrowdsource the analysis to the broader AI safety community If the analysis proves intractable even with these approaches, that finding itself would be valuable - it would demonstrate the inherent complexity of the problem space. Practical Applications This framework enables several practical applications: Early Warning Systems: By rigorously evaluating the cognitive capabilities of AI models, we can create effective early warning systems. Instead of waiting to see if a model can perform dangerous tasks, we can monitor specific combinations of capabilities and set appropriate thresholds.Training Optimization: We can identify which cognitive capabilities might be safely minimized during training while maintaining desired functionalities.Targeted Evaluation: This systematic approach can inform the design of specific task-based evaluations that probe for concerning combinations of capabilities.Scaling Laws: By understanding which cognitive capabilities enable which risks, we can develop better scaling laws to anticipate future developments. Next Steps The immediate challenge is prioritization. While a complete analysis of all possible combinations of cognitive capabilities and risks would be ideal, we can start with: High-priority risk categories based on potential impactCore cognitive capabilities that seem most relevant to current AI systemsSpecific combinations that appear most likely to enable most critical risks This framework provides a structured way to think about AI risk assessment and monitoring, moving us beyond task-based analysis to a more fundamental understanding of how cognitive capabilities combine to enable potential risks. Acknowledgments: I would like to thank Quentin Feuillade-Montixi, Ben Smith, Pierre Peigné, Nicolas Miailhe, JP and others for the fascinating discussions that helped shape this idea during the MATS program. While they contributed valuable conversations, none of them were involved in this post, and any mistakes or questionable ideas are entirely my own responsibility.
2025-01-09
https://www.lesswrong.com/posts/8C6kKkYu6mc2SxGzs/gothenburg-lw-acx-meetup-6
8C6kKkYu6mc2SxGzs
Gothenburg LW / ACX meetup
stefan-1
Let's start the new year with a rationalist new years tradition: Forecasting. We'll discuss everything about forecasting and prediction markets, plus make our own predictions we can score next year! We will be meeting at the same place as usual, the Condeco at Fredsgatan, on the upper floor. You can recognize our table by the books on display.
2025-01-08
https://www.lesswrong.com/posts/wxBtdaTaihbHJqqef/how-can-humanity-survive-a-multipolar-agi-scenario
wxBtdaTaihbHJqqef
How can humanity survive a multipolar AGI scenario?
Unknown
Short introduction Multipolar scenarios that I will be talking about are scenarios multiple unrelated actors have access to their own personal AGIs. For the sake of discussion, assume that we solved alignment and that AGIs will follow the orders of its owners. A few ways we might arrive at a multipolar AGI scenario The gap between the leading AI capabilities labs is not as big as we think. Multiple AI labs create AGI roughly simultaneously.The gap between the leading AI capabilities labs is quite big, but due to poor security measures, the leading lab is constantly getting its techniques leaked to the competitors, thus narrowing the gap. Multiple AI labs create AGI roughly simultaneously.The first lab to create AGI does so earlier than the others. However, due to either indecisiveness or slow takeoff, other labs create their own AGIs before the first lab has a considerable impact on the world. Potential catastrophes that can be caused by multiple actors having access to AGI 1) Everyone dies directly A person who is angry at the world (for whatever reason) controls one of the AGIs. He cares for nothing and seeks revenge. He orders the AGI to kill humanity.A fundamentalist of some sort controls one of the AGIs. His beliefs tell him that killing humanity is the right thing to do. He orders the AGI to kill humanity.A negative utilitarian controls one of the AGIs. His beliefs tell him that painlessly killing humanity is the right thing to do. He orders the AGI to kill humanity.A mentally ill person controls one of the AGIs. He is not a bad person per se, but due to the nature of his illness, he orders the AGI to kill humanity. 2) Everyone dies indirectly A leadership of a country controls one of the AGIs. It uses its AGI to inflict military defeat upon its adversaries. The adversaries use their own AGIs to counteract that. A global war ensues. Humanity is killed as an unintended consequence.A generic "evil" actor controls one of the AGIs. He tries to do an action that is deemed to be very harmful by a "good" actor who controls AGI. A struggle between the two actors ensues. Humanity is killed as an unintended consequence. 3) Totalitarian dictatorship A leadership of a country controls one of the AGIs. It uses the AGI to take over the world and impose its rule.A military group controls one of the AGIs. It uses the AGI to take over the world and impose its rule.A power-hungry leadership of an AGI lab controls one of the AGIs. It uses the AGI to take over the world and impose its rule. What are our best plans for preventing catastrophes like those outlined above, in a multipolar AGI scenario?
2025-01-09
https://www.lesswrong.com/posts/YooqjbxGu2WZ3vszq/aristocracy-and-hostage-capital
YooqjbxGu2WZ3vszq
Aristocracy and Hostage Capital
arjun-panickssery
There’s a conventional narrative by which the pre-20th century aristocracy was the “old corruption” where civil and military positions were distributed inefficiently due to nepotism until the system was replaced by a professional civil service after more enlightened thinkers prevailed. Orwell writes in 1941 (emphasis mine): For long past there had been in England an entirely functionless class, living on money that was invested they hardly knew where, the ‘idle rich’, the people whose photographs you can look at in the Tatler and the Bystander, always supposing that you want to. The existence of these people was by any standard unjustifiable. They were simply parasites, less useful to society than his fleas are to a dog. But this misses a key point about the pre-industrial world: it was difficult to monitor performance or even measure things in general. A modern bureaucracy can track hours worked, tasks completed, goods received, and so on. Not so in the pre-industrial period, where everything had a large random component: In the age of sail, and especially before there were devices to accurately measure position at sea, a ship might arrive at its destination weeks off of schedule, or float away from a battle.Manufactured goods wouldn’t be precisely identical, or would be routinely damaged at sea, or lost to pirates or highway robbers.A region’s tax revenue could be low due to poor harvest, floods, etc. So it was difficult to distinguish bad luck from bad faith: to verify if a tax collector reported all revenues, or if a naval captain made an effort to actually engage the enemy, or if a customs official inspected cargo honestly. The solution was to create a class of aristocratic families who were wealthy enough to effectively post massive fidelity bonds that the economist Douglass Allen calls hostage capital—assets that would become worthless if the aristocrats lost royal favor. (The rest of this post largely summarizes the linked paper.) By investing in hostage capital, aristocrats could be trusted to administer public offices faithfully. A lot of odd behavior is explained by aristocrats deliberately handicapping themselves to invest in capital that had no value for trade, farming, or industry. In England they typically Built isolated country estates that couldn’t be sold, divided, or used productively.Maintained expensive social obligations through parties, hunts, charity, and local governance.Avoided socializing or marrying within the merchant or professional classes.Gained classical educations or military training but avoided learning practical or commercial skills. The focus was also on social ties (Allen: “in the last half of the eighteenth century 72% of peerage children [in England] attended either Eton, Westminster, Winchester, or Harrow [high schools]”). All of these increased the costs to an aristocrat who, being caught in dishonesty, suffered social ostracism or the loss of patronage positions. In exchange for investing this hostage capital, he would profit from the large returns to patronage appointments. A newly wealthy merchant would thus have a choice: to continue to generate returns through his business ventures or to make a play for the aristocracy by marrying out of the merchant class, building a country seat with legal restrictions on the land, and making sure his children gained aristocratic educations while avoiding business connections or skills. That way, he and his descendants could ascend the ranks of the aristocracy over time. The aristocratic system gradually dissolved over the 19th century in England because of the Industrial Revolution and its consequences: The returns to commercial activity increased, making the opportunity cost of hostage capital too high for many would-be aristocrats. Orwell again: “After 1832 the old land-owning aristocracy steadily lost power, but instead of disappearing or becoming a fossil they simply intermarried with the merchants, manufacturers and financiers who had replaced them.”It become easier to directly monitor public officials and measure performance, reducing the need for trust-based governance. An interesting note is that in every century we see Chesterton’s Fence-style confusion about this system, whether from Orwell or Adam Smith (1776) or J.S. Mill (1865): [Regarding legal restrictions aristocrats put on their estates] … nothing can be more completely absurd. They are founded upon the most absurd of all suppositions ... Compare the present condition of these estates with the possessions of the small proprietors in their neighbourhood, and you will require no other argument to convince you how unfavorable such extensive property is to improvement. — Adam Smith, The Wealth of Nations (1776) The truth is that any very general improvement of land by the landlords is hardly compatible with a law or custom of primogeniture. When the land goes wholly to the heir, it generally goes to him severed from the pecuniary resources which would enable him to improve it … — J.S. Mill, Principles of Political Economy (1865)
2025-01-08
https://www.lesswrong.com/posts/3c5tx5WjZ5Yvniq6Y/what-is-the-most-impressive-game-llms-can-play-well
3c5tx5WjZ5Yvniq6Y
What is the most impressive game LLMs can play well?
Amyr
Epistemic status: This is an off-the-cuff question. ~5 years ago there was a lot of exciting progress on game playing through reinforcement learning (RL). Now we have basically switched paradigms, pretraining massive LLMs on ~the internet and then apparently doing some really trivial unsophisticated RL on top of that - this is successful and highly popular because interacting with LLMs is pretty awesome (at least if you haven't done it before) and they "feel" a lot more like A.G.I. Probably there's somewhat more commercial use as well via code completion (and some would say many other tasks, personally not really convinced - generative image/video models will certainly be profitable though). There's also a sense in which they are clearly more general - e.g. one RL algorithm may learn many games but there's typically an instance per game not one integrated agent. You can just ask an LLM in context to play some games. However, I've been following moderately closely and I can't seem to think of any examples where LLMs really pushed the state of the art in narrow game playing  - how much have LLMs contributed to RL research? For instance, will adding o3 to the stack easily stomp on previous Starcraft / go / chess agents?
2025-01-08
https://www.lesswrong.com/posts/6P8GYb4AjtPXx6LLB/tips-and-code-for-empirical-research-workflows
6P8GYb4AjtPXx6LLB
Tips and Code for Empirical Research Workflows
john-hughes
Our research is centered on empirical research with LLMs. If you are conducting similar research, these tips and tools may help streamline your workflow and increase experiment velocity. We are also releasing two repositories to promote sharing more tooling within the AI safety community. John Hughes is an independent alignment researcher working with Ethan Perez and was a MATS mentee in the Summer of 2023. In Ethan's previous writeup on research tips, he explains the criteria that strong collaborators often have, and he puts 70% weight on "getting ideas to work quickly." Part of being able to do this is knowing what tools there are at your disposal. This post, written primarily by John, shares the tools and principles we both use to increase our experimental velocity. Many readers will already know much of this, but we wanted to be comprehensive, so it is a good resource for new researchers (e.g., those starting MATS). If you are a well-versed experimentalist, we recommend checking out the tools in Part 2—you might find some new ones to add to your toolkit. We're also excited to learn from the community, so please feel free to share what works for you in the comments! Quick Summary Part 1: Workflow Tips. We explain how to get the most out of your terminal with frameworks such as "Oh My ZSH" and how to easily deploy this on new machines with dotfiles. Learn about useful features in VSCode and what pre-commit hooks are.Part 2: Useful Tools. Learn about software such as Tuple (for pair programming), LLM-assisted coding tools such as Cursor, and command line packages such as uv for Python package management.Part 3: Experiment Tips. We explain tips for two research modes: "De-risk sprint mode" and "Extended project mode."Part 4: Shared AI Safety Tooling Repositories. We release two repositories used by Ethan's MATS scholars: one for shared tooling and the other for examples of using the shared tooling. Part 1: Workflow Tips Terminal Efficient terminal navigation is essential for productivity, especially when working on tasks like running API inference jobs or GPU fine-tuning on remote machines. Managing directories, editing files, or handling your Git repository can feel tedious when relying solely on bash commands in a standard terminal. Here are some ways to make working in the terminal more intuitive and efficient. TerminalsMac: We recommend iTerm2 . We find using the "Natural Text Editing Preset" a very useful since you can move a word backward using Option ⌥ + ← and a word forward using Option ⌥ + →. Paired with an increased keyboard key repeat rate, you can seek through commands in the terminal rapidly.Linux: popular terminals include Wezterm and Kitty.Windows: WSL gives you access to an Ubuntu shell and you can set that to default in Windows Terminal.Also, the built-in terminal in many editors is good too (e.g. Emacs, VSCode and PyCharm).ZSH is an alternative to bash and is much much easier to useohmyzsh is a zsh configurator that includes plugins, themes and ease of navigationThose who have used this framework know that navigating directories is much easier due to the use of tab complete and arrow keys.Essential plugins:zsh-autosuggestions — suggests commands based on your history as you typezsh-syntax-highlighting — syntax highlighting within the terminalzsh-completions — complete some bash commands with tabzsh-history-substring-search  — type any substring of a previous command (it doesn't have to be the start) and use up and down keys to cycle through relevant historyfuzzy search (fzf) allows easy searching of previous commands and nested directories.The p10k theme is amazing and customisable. A nice feature are the icons that appear when in a Git repository showing the status and other icons that show if you have sourced a virtual environment.Tmux allows you to set up multiple panes in your terminal that keep running in the background. Therefore, if you disconnect from a remote machine, scripts running in tmux will not be killed. We tend to run experiments across many tmux panes (especially overnight).The default settings and key bindings aren't great and have a learning curve for new users. See an example config here that improves it, or try out "Oh my tmux!".Dotfiles automate the deployment of your setup on new machines by installing packages and putting the configuration in the correct place on the machine.See John's here or https://dotfiles.github.io/ for another guide.You can set other important variables in your dotfiles, such as unlimited history length.Note: they are called dot files since all config is contained in files that start with . like ~/.zshrc and ~/.tmux.conf.Aliases are custom shortcuts or functions for common commands. For example gc for git commit -m  and many more in this file. Here are two which save a lot of time:rl for getting the absolute file path followed by copying to your clipboard is incredibly helpful (see custom bins in here; big shout out to Ed Rees for this one)Running ls after cd so, when you change the directory, you always see the files contained there.We recommend seeing what commands you run all the time and converting those to aliases. Don't try to add too many too quickly, though. It is best to build this up slowly over time. Note: there are many recommendations here, which can be overwhelming, but all of this is automated in John's dotfiles (including installing zsh and tmux, changing key repeat speeds on Mac and setting up aliases). So, if you'd like to get going quickly, we recommend following the README to install and deploy this configuration. Integrated Development Environment (IDE) Choosing the right Integrated Development Environment (IDE) can enhance your productivity, especially when using LLM coding assistants. A good IDE simplifies code navigation, debugging, and version control. We can't recommend Cursor as an IDE enough. It is a fork of VSCode so looks and feels the same but offers very impressive LLM coding assistant integration.You can highlight code and use ⌘K to prompt the LLM to rewrite code blocks, or ⌘L for chat functionality. Also, it includes Copilot-like line auto-completion and agent capabilities.You can also use a .cursorrules file which informs the LLM how to act.All VSCode extensions are available to use in Cursor too.Code-assisting tools are now considered essential in our research projects.If you are not working locally, the remote SSH extension is a must-have.Syncing code by pushing to GitHub and then pulling it onto your remote machine is inefficient, as you need to repeat the process every time you test a new fix for a bug. A more effective approach is to edit the code directly on the remote machine, test it there, and push only the finalized bug fix.You can also edit remote files outside of the code repository from within VSCode/Cursor, which is very helpful. Packages such as Vim and Nano are text editors from within the terminal, but these have much higher learning curves.The VSCode debugger is great since you can easily inspect the values of all variables currently in your scope and the call stack. It is worth spending time learning about.Using the built-in Python debugger by putting breakpoint() within your code is also very useful and often quicker than debugging with print statements.There are lots of useful VSCode/Cursor settings (see example here)Autosave (very useful, so you never have to worry about accidentally running a script without pressing save)Jupyter run startup commands (e.g. autoreload)Jupyter notebook file root (setting to the root of the repo can be helpful)VSCode Debugger remote attach settings (allow you to debug code running on a remote machine from your local VSCode instance)Linting & code formatting extension configurationFile watcher excludes (so VSCode doesn’t slow down by tracking changes in virtual environments or other folders that contain many files)ExtensionsGitLens — useful for knowing who committed which lines and visualising the Git historyJupyter — run notebooks within VSCodeLinting (though not required if using pre-commit hooks)Ruff - great for lintingBlack - great for code formatting (Ruff now also provides a code formatted but make sure to increase the allowed line length setting)Nvidia-smi+ - view GPU utilisation statisticsJSON Lines Viewer - improves the viewing experience of jsonl filesLaTeX Workshop - compile LaTeX code and export to pdfdevServer — good for testing websites for papers (this was used when developing the website for BoN Jailbreaking)Inspect AI - UK AISI’s inspect framework allows you to easily run LLM evals and agents. This extension lets you view interactive logs that include all LLM inputs/outputs after running.Keyboard shortcutsWe recommend learning what the Jupyter Notebook shortcuts are and customising them to something you are more comfortable with. You can add these shortcuts to Google Colab and VSCode/Cursor.A program called Karabiner is useful to change certain key mappings on Mac. For example, the capslock key is redundant and can be mapped to ctrl or command. If using Vim keybindings, remapping escape to capslock is very common.Chrome has useful shortcuts such as:Jump to the next open tab: ⌘⌥ left/rightSearch all tabs: ⌘⇧A Git, GitHub and Pre-Commit Hooks Mastering Git, GitHub, and pre-commit hooks is key to maintaining a smooth and reliable workflow. These tools help you manage version control, collaborate effectively, and automate code quality checks to prevent errors before they happen. Creating a repository on GitHub and using Git to track and manage code changes is highly recommended (even if working individually).We recommend using pre-commit hooks, which run automatically when committing your files to git. If there are linting or formatting errors, you must fix them before being allowed to commit. This is great to stop you from committing syntax errors or code with unused variables. It also enforces code to be tidy and in a consistent format which is important when collaborating with others.To use pre-commit hooks you should include a .pre-commit-config.yaml (e.g. here), config within pyproject.toml (e.g. here) and a Makefile (e.g. here) in the root of your repo. You must first pip install pre-commit and then run make hooks.The pre-commits we recommend are:Ruff for linting (it is significantly faster than alternatives like flake8 and pylint)Black for formatting (you can also use ruff for formatting, but it tends to be stricter and spreads code out over more lines which can be annoying)trailing-whitespace is useful to automatically strip whitespacenbstripout is very useful to automatically remove notebook outputs to avoid bloating the git history and size of the repoReviewNB is useful when reviewing code in Python notebooks on GitHub. Part 2: Useful Tools Not all of these recommendations are directly related to research (e.g., time-tracking apps), but they are excellent productivity tools worth knowing about. The goal of this list is to make you aware of what’s available—not to encourage you to adopt all of these tools at once, but to provide options you can explore and incorporate as needed. Software/Subscriptions Cursor — As explained in the previous section, we highly recommend this IDE due to the LLM coding integration and consider it an essential tool. It is a fork of VSCode, so it offers all the same great features/extensions but offers much better LLM integration. There is a free tier, but it limits the number of calls to premium models, so we recommend upgrading to Pro for $20/month (we think this is the best bang for your buck in terms of productivity gain compared to other tools). Alternatives include Zed and GitHub Copilot.ChatGPT+ and/or Claude Pro — As an LLM researcher, it is important to have conversations with top-tier LLMs all the time so you understand their capabilities. ChatGPT+ frequently releases new features that open new research opportunities (e.g. ChatGPT advanced voice mode).Tuple — We love this pair programming tool that allows you to seamlessly take control of another person's computer during screen sharing. It has low latency and excellent video quality. You can pair with guests who don't have a paid subscription.Google One — This subscription allows you to record Google Meet calls and extend them beyond 1 hour, which is very useful. It also includes a feature called pop-out mode, which allows you to see people's videos while screen sharing.Grammarly — This is useful for ensuring prompt grammar is accurate, which is important before final paper experiments. It also works seamlessly with Overleaf, emails, and Slack to speed up writing.Perplexity — A useful LLM-powered search engine that cites its sources.TimingApp — Excellent for tracking time. Allows you to set rules for activities to automatically assign them to projects and integrates with iPhone screen time. A free alternative is Wakatime which other collaborators we work with use.ReadAI — Automatically records meetings with a notetaker bot for Google Meet, Teams and Zoom. Offers recordings and transcripts, reducing the need for manual note-taking during meetings. Otter is an alternative we use too.We are excited by Granola, which may provide more value since it can automatically expand on notes you've jotted down.Rectangle — A window pane manager for Mac. Enables snapping windows to different portions of the screen.Copy Clip — A simple clipboard manager to ensure you don’t lose important copied text. Other tools like Raycast and Alfred have this built-in too.Context — Provides improved tab completion and search functionality on Mac. Requires a one-time payment.Karabiner — Allows keyboard key changes (e.g., remapping keys for Vim keybindings).Zotero — organise and annotate research papers.Other software that other collaborators use: Homerow, Dash, Speechify, BetterTouchTool, , LiquidText, and ice. LLM Tools Weights & Biases — Very useful for tracking OpenAI fine-tuning jobs (status, data, and losses) and experiments in general. Offers free access for up to 5 team members.Inspect — UK AISI framework for running LLMs with flexible task definitions, solvers, and scorers. Provides tools for multi-turn dialogs, model-graded evaluations, and agent benchmarks. Supports many models, including Hugging Face LLMs, with efficient concurrency, caching, and a trace mode for debugging. Includes a VS Code extension and web UI for log analysis and output inspection (which is so cool!). We think this framework can significantly accelerate workflows after spending the time to learn how it works.Aider — pair program with LLMs, to edit code in your local git repository (it has one of the top scores on SWE bench).Devin — An automated LLM software engineering agent that autonomously performs repo tasks and creates PRs. Allows real-time interactions and background fixes for refactors or small tasks. It isn’t seamless yet in our experience (e.g. it got stuck on linting with the wrong Python version), but it’s a promising tool and likely how automated research will be orchestrated in the future. It has a hefty $500/month price tag, so it's only worth it if sharing in a team or using it to demonstrate misaligned behaviours. One to watch in the future!openweights — Automates deploying RunPod pods for parallel fine-tuning and batch inference jobs. Offers an interface similar to OpenAI's. Big thanks to Niels Warncke (a fellow collaborator) for developing this!LiteLLM — Provides a unified interface for API interactions (OpenAI, Anthropic, Gemini, Hugging Face) with a proxy server or SDK. This is well worth using if you expect to run many models for a paper. It is a good tool for those who prefer not to use Inspect or our safety-tooling repo (see Part 4).repo2txt — UI for generating a prompt given a GitHub URL so you can easily copy-paste and learn about the codebase with an LLM.langchain — Offers standardised component interfaces useful for scaffolding/agents. Supports caching and integrates with LiteLLM. A good alternative if Inspect doesn’t fit your needs.vLLM — Hosts open-source LLMs for efficient inference using PagedAttention, continuous batching, and quantization. Supported by Inspect. You should use it if you run LLM inference on your own GPU.PromptFoo — Ideal for rigorous prompt engineering. Enables writing unit tests for prompt responses and testing against defined metrics across various LLM combinations.Langfuse — Primarily useful for developers deploying LLM applications. Provides detailed traces of LLM interactions, human feedback, and LLM-as-a-judge functionality. We haven't used it, but perhaps it is useful for demos and human labelling interfaces.Ollama — Runs most open-source models locally at various quantization levels (via llama.cpp). Usable through a terminal or a ChatGPT-like interface.exa.ai — allows you to do vector searches over the web, which is useful in literature reviews. Also, a good API to use to give LLMs access to the web.unsloth — fine-tune open source LLMs 2-5x faster and with <70% less memory. It supports 4bit fine-tuning and allows training large models on a single GPU.axolotl — a user-friendly tool to help fine-tune LLMs that supports multi-GPU setups and doesn't require deep technical tweaking.Prismatic VLMs — great repository for helping you train VLMsopen-clio — A simple reproduction of the CLIO paper using language models to label and cluster user behaviour recursively, helping to analyze behaviour at scale. This is a great tool to get insights into diverse model outputs. LLM Providers RunPod — Our go-to provider for using GPUs for open-source model fine-tuning. We find this to be the best provider due to the ability to use network drives to share data between collaborators, the availability of GPUs, and hardware reliability. It is also cheap compared to GCP/Azure. VastAI/LambdaLabs are alternatives, but we've heard they are not as reliable and do not support network drives.TogetherAI  — a great service for those who want to run inference or fine-tuning on open-source models via an API. OpenRouter is an alternative we have used.HuggingFace Dedicated Inference Endpoints — A great way to spin up huggingface models (such as the circuit breaking model) and send requests via an API. A big advantage is that it will autoscale up to a configurable maximum number of model replicas dependent on your demand and scale to zero after 15 minutes of inactivity. This makes it more cost-effective than hosting yourself on RunPod. It supports returning model logprobs, too, unlike OpenRouter. Command Line and Python Packages uv — A single tool to replace pip, pyenv, and virtualenv. It is 10-100x faster than pip!scalene or py-spy for profiling Python programsasyncio is very important to learn for empirical LLM research since it usually involves many concurrent API callsshell-ask or ask.sh — ask LLMs to write and execute bash commands (and you can pipe text into the command)jless — command line jsonl explorerncdu — an interactive recursive filesize explorerhtop —  an interactive process viewernvtop — an interactive version of nvidia-smiripgrep (better grep), Dust (better du), duf (better df), bat (better cat with highlighting and git), fd (better find), exa (better ls)code2prompt — convert repo into single LLM promptopencommit —  auto-generate commit messagesmagic-wormhole — copy files between machines Part 3: Experiment Tips De-risk and extended project mode First, we'd like to explain that there are usually two modes that a research project is in: namely, de-risk mode and extended project mode. These modes significantly change how you should approach experiments, coding style, and project management. De-risk mode focuses on rapidly answering high-priority questions with minimal overhead.This mode is ideal for:Quick experimentation using Python notebooks that minimize time-to-insight.Minimal investment to avoid effort in engineering practices, like extensive documentation, strict coding standards, or generalized pipelines.In collaborative group settings, this mode is still common. It is important to communicate the experiment's goals and frequently discuss the next steps rather than performing thorough code reviews.Extended project mode emphasizes engineering rigour and longer-term maintainability.This mode is especially critical for longer-term collaborations or experiments that require significant compute resources and complicated infrastructure, where bugs can lead to costly reruns. It also ensures that knowledge and progress can be shared across contributors.Key practices in extended project mode include:Transitioning from notebooks to structured scripts, modules, or pipelines.Applying code reviews, testing, and version control.Using tools like pre-commit hooks and CI/CD workflows to enforce quality. The workflow should always be conditioned on the situation: Start in de-risk mode: For example, if you’re searching for a new type of alignment faking behaviour or if it is possible to jailbreak a model with a specific technique, a notebook is great for determining feasibility.Switch to extended project mode: Once the experiment is de-risked, it can often mature into a larger project involving significant compute and collaboration. Now is the right time to refactor your work into a maintainable codebase. This transition can often catch bugs since notebooks are notorious for bugs (e.g. those that occur when you run cells in a non-linear fashion).Note: sometimes projects start here if there is significant infrastructure needed, it suits the collaborators workflow better or the project is already de-risked before starting. Ethan tends to be in de-risk mode for 75% of his work, and he uses Python notebooks to explore ideas (for example, many-shot jailbreaking was derisked in a notebook with ~50 lines of code). The Alignment Science team at Anthropic is also primarily in "de-risk mode" for initial alignment experiments and sometimes switches to "Extended project mode" for larger, sustained efforts. Note: Apollo defines these modes similarly as "individual sprint mode" and "standard mode" in their Engineering Guide. We opt for different names since lots of the research we are involved with can primarily be in de-risk mode for a long period of time. Tips for both modes Invest in a project planHave a clear project plan that includes motivation and research goals, and list all the experiments you can possibly think of running. Get feedback from peers and iterate.Think about milestones for the project and what you want to deliver. This will help to keep your self accountable. Don't underestimate how long it takes to write a paper.Think about code structureIf you'd like to open-source code, it is worth investing time at the start thinking about how you will design the repo so it is easy to use.Know the LLM tools that are out there (e.g. list in Part 2). It might be a good idea to build off an existing framework like Inspect or use tools like LiteLLM to make sure you have the flexibility down the line to run more models easily.As you build experience knowing how you best run experiments, start to build your own reusable tooling (and perhaps contribute it to our safety-tooling repo - see Part 4).Communicate frequently and clearly about what you're working onOne of the important ways to move quickly with research is to choose the right next experiments to run. Therefore, it is important to communicate plans regularly with the team so you can get feedback.We use a Slack channel for daily updates within the cohort. This is helpful for structuring your own next steps, keeping yourself accountable, and also providing your mentor/team with good context.Projects we run often have a daily standup with close collaborators. We find this essential for staying aligned on research goals, prioritising the right work and delivering on time.Use notion to track your experimentsFor many projects in our cohort, we create a database table with columns for: experiment name, tags, users involved, last updated and status (in progress or done).The team creates notion pages quickly for each new experiment (database row) and dumps figures and thoughts as they go. This should be done quickly, and it doesn’t matter if it is messy (as long as close collaborators can figure out what is happening).It is helpful to look back at this when you write slides to present to mentors and when you start writing the paper.On the topic of slides, make sure to check our tips in collaboration with James Chua.Pause to think before starting the work on an experimentSome questions we ask ourselves and discuss with the team are:What is the motivation for this experiment? Does it fit in with the research question I want to answer?Have I derisked this enough already, and are there other more interesting things to run instead? Is this definitely the highest priority?What result do I expect to get? Is learning that useful?Should I explore one model and one dataset first before expanding to more?Will running this extra experiment add significant value to our paper? (especially relevant when close to a deadline)Am I changing too many variables at once? Can I simplify my setup to draw better conclusions?Experiment folder structureWe recommend committing all lightweight scripts and notebooks used to run a certain experiment to a new folder in the codebase.For example, if running a new jailbreaking technique, you could create a folder in the repo called something like ./experiments/<name>/250109_jailbreaking_technique_v1Naming the folder with YYMMDD can help keep a logical ordering. You could also have sub-folders named after each collaborator.Experiments tend to involve many scripts and notebooks that analyse the results so one method is to enumerate these so it is clear the order in which they were run. E.g. 1_run_harmbench.sh, 2_run_classifier.sh, 3_analyse_attack_success_rate.ipynb.Core functionality can remain in the experiments folder if working in de-risk mode. However, if in extended project mode, core functionality should be refactored to be elsewhere in the codebase (only keep lightweight bash scripts, config files or analysis notebooks in the experiment folders).You should not be afraid of code duplication in these folders, even if it is just a few hyperparameters or data paths that change (and if in de-risk mode, it is fine to copy-paste notebooks frequently).Following this method means you can easily replicate any experiment, which is often useful down the line (e.g. during a paper rebuttal period), and you can easily retrace your steps. It also helps collaborators find what you have been running easily, along with all the parameters and paths to data, without needing to ask. Tips for extended project mode Pull requests (PRs) for each feature/experimentCode review is very helpful in extended project mode. Many bugs have been found in the projects we've been involved in after ensuring we do code reviews on core functionality. One bug could lead to you having to re-run days or weeks of results.When operating in de-risk mode, it's important not to overdo it. For early-stage, low-compute experiments or projects managed by a single person, working directly in notebooks is often the most efficient and practical approach.It can be worth the effort to think about how to split up features, experiments, and bug fixes into separate bite-sized PRs that are easy to review. PRs with masses of new files and complex changes across the codebase reduce the chance that people do a good review where they check thoroughly for bugs.Code review isn't necessary for everything, and the teams should use their judgment. Often, we will self-approve and merge quickly if the PR is just a bunch of notebooks and experiment scripts.If there is a PR for core code and functionality, we recommend all core contributors take a look to effectively manage conflicts and help onboard each other on new code.Merge PRs quicklyWe encourage the teams we work with to review PRs and work to merge them fast as a number one priority. This helps ensure everyone runs the latest and greatest code and avoids difficult merge conflicts down the line.Ability to kill experiments and carry on from where you left offCaching of LLM responses and other algorithm state is important for this. For example, this allows you to tweak concurrency settings and rekick off a run without losing progress.This may also involve saving intermediate outputs/checkpoints, which has the added benefit that you can check progress or potential bugs as your experiment is running.Experiment reproducibilityIf the same script is run, the result of the experiment should be as close to the original as possible. This isn’t always possible due to nondeterministic LLM APIs (even at temp 0), but everything else should be the same (e.g., data splits, hyperparameters, etc).Setting this up correctly is important to make caching work, too; otherwise, the prompts will be different, and everything will start from scratch.It can be useful to save the git commit hash in your experiment directory or even make a copy of the entire codebase (just in case you need to debug in the future).Output jsonl files and using pandasThis isn’t always relevant to every experiment, but for most empirical LLM experiments, an LLM gives a response to many different inputs. Therefore, outputting a jsonl file at the end of the experiment with all the metadata, inputs, and outputs is useful.These results can be quickly analysed with pandas in a notebook. Pandas is essential for lots of the data processing we do. It makes it straightforward to transform columns, filter a data frame, aggregate across columns and calculate simple statistics with .describe().Command line args for all Python scriptsScripts with good input args allow you to:create simple wrapper scripts that can be committed to the experiment folders explained above (see example).test scripts easily from the command line, especially if you can specify things like dataset size, batch size, and logging verbosity.parallelise experiments and run them in separate tmux panes to maximise your experiment throughput.There are many packages that make command line args easy (e.g. fire , hydra and simple_parsing). We use simple_parsing (see example) because it allows you to define your args in a data class which gets automatically initiated and populated from the command line args.Parallelising experimentsThere are different ways to queue up GPU jobs (e.g. fine-tuning with different hyper params or data splits).simple-gpu-scheduler is simple and very effective if you want to queue up many jobs on a multi-GPU machineopenweights automatically spin up RunPod pods to complete many jobsFor overnight experiments, we encourage collaborators to bias towards using the OpenAI or Anthropic batch API. It is cheaper and has higher throughput. Part 4: Shared AI Safety Tooling Repositories For many early-career researchers, there's an unnecessarily steep learning curve for even figuring out what good norms for their research code should look like in the first place. We're all for people learning and trying things for themselves, but we think it would be great to have the option to do that on top of a solid foundation that has been proven to work for others. That's why things like e.g. the ARENA curriculum are so valuable. However, there aren't standardised templates/repos for most of the work in empirical alignment research. We think this probably slows down new researchers a lot, requiring them to unnecessarily duplicate work and make decisions that they might not notice are slowing them down. ML research, in general, involves so much tinkering and figuring things out that building from a strong template can be a meaningful speedup and provide a helpful initial learning experience. For the MATS 7 scholars mentored by Ethan, Jan, Fabien, Mrinank, and others from the Anthropic Alignment Science team, we have created a GitHub organization called safety-research to allow everyone to easily discover and benefit from each others’ code. We are piloting using two repositories: 1) for shared tooling such as inference and fine-tuning tools and 2) providing a template repo to clone at the start of a project that has examples of using the shared tooling. We are open-sourcing these two repositories and would love for others to join us! Repo 1: safety-tooling Share Great Tooling: Reduces duplicated work by providing reusable code for common tasks, enabling researchers to focus on de-risking new ideas. Tooling includes an LLM Inference API with concurrency control, fine-tuning tooling with Weights and Biases, prompt templating with Jinja and much more.Upskill Collaborators: Encourages software development skills like code reviews, testing, and modular design, preparing researchers to apply to top research labs. If your goal is to work somewhere like Anthropic, contributing to an open-source project such as this is a good way to upskill.Submodule Design: Integrates directly into individual projects and makes it easy to contribute new tools directly. People can benefit from the tools without needing to abide by strict engineering practices in their project repo. Repo 2: safety-examples Share Examples: Provides examples over a wide range of AI Safety projects (such as adversarial robustness and LLM evals) so others have a quick place to start.Onboard Researchers Quickly: Offers a structured starting point to majorly speed up productivity for new collaborators. Many of Ethan's MATS scholars are cloning (or forking) this repository before starting a new project so they can benefit from access to examples, pre-commit hooks and the safety-tooling submodule. Note: We are very excited about UK AISI's Inspect framework, which also implements lots of what is in safety-tooling and much more (such as tool usage and extensive model graded evaluations). We love the VSCode extension for inspecting log files and the terminal viewer for experiment progress across models and tasks. We aim to build a bigger portfolio of research projects that use Inspect within safety-examples and build more useful research tools that Inspect doesn't support in safety-tooling. Acknowledgements We'd like to thank Jack Youstra and Daniel Paleka, as many of the useful tool suggestions stem from conversations with them. For more of their recommendations, check out their blogs here and here. John would like to thank Ed Rees and others at Speechmatics, from whom he's borrowed and adapted dotfiles functionality over the years. Thanks to Sara Price, James Chua, Henry Sleight and Dan Valentine for providing feedback on this post.
2025-01-20
https://www.lesswrong.com/posts/GR9DyBye8Psw9WFJF/near-term-discussions-need-something-smaller-and-more
GR9DyBye8Psw9WFJF
Near term discussions need something smaller and more concrete than AGI
ryan_b
Motivation I want a more concrete concept than AGI[1] to talk and write with. I want something more concrete because I am tired of the costs associated with how big, inferentially distant, and many-pathed the concept of AGI is, which makes conversation expensive. Accounting for the bigness, inferential distance, and many-pathed-ness is very important for dealing with AGI properly - I believe they are causal in the risks AGI poses - but they drive a very high communication burden. By this I mean things like distinguishing between different meanings, summarizing a bunch of previous discussions, etc. This burden does not seem to have reduced much if at all over time, and I don't expect it to in the near future. I consider that while the burden does not reduce over time, it does become easier to carry, because I can retain quite a bit of history and context. Now, to my great satisfaction, AGI is a much broader conversation across the public at large. Here is where I feel the burden becomes telling: the public at large does not have that history and context. For context, the book Superintelligence was published in 2014. That means we have 10 years worth of history and context on top of that book. The problem is worse when trying to talk about things in the present time: topics like AGI timelines, regular AI impact on employment, regular AI impact on the economy, etc. Currently people are discussing very short timelines, where by short I mean ranges from already here (in a weak form) to 3 years. This creates urgency; we do not have the time for many rounds of conversation while carrying the heavy burden talking in terms of AGI demands. It feels to me like if these timelines are wrong, the people who hold them won't be able to change their mind before feeling forced into making big decisions; and further if these timelines are close to correct, we as a group won't have time to make progress on immediate questions. Independently, and especially in the near term case, I think there are important ways that speaking in terms of AGI directly is wrong: we are trying to connect things at or near the object level to something many levels of abstraction up, with no clear conceptual way to go up, nor to come back down. I also notice that near-term-impact kinds of questions are...not what AGI is about. The idea of AGI is about the idea of intelligence - the early goal was to find the most general rules governing intelligence in the universe and then the associated goal of alignment is about how to guide AGI under those rules. Conversations around practical consequences of AGI are dogged by the need to somehow relate the thing a person is talking about to intelligence. I therefore think that AGI is the wrong rhetorical tool in a lot of cases. I want a new one. What I Want So what do I want out of this new rhetorical tool? I want something that is fast (to say), cheap (to think about), and powerful enough. I want it to fit places with limited bandwidth: verbal conversations, the comment section of LessWrong, maybe even chat messages. I'd like to be able to do thought experiments with it, a la Maxwell's Demon. I also need to make sure it doesn't duplicate any of the complaints I have about reasoning using AGI. First: I want the new tool to be a short inferential distance from talking about current or near future events, so we can use it to tackle questions about jobs/economy/short timelines/etc. I think it needs to be about what AI can actually do in the same way that AGI is about intelligence. This buys me what I think of as short lateral inference: jobs are about what people can do; the economy is a big pile of what everyone does; short timelines mean AIs will be able to do a lot soon; etc. I don't want any built-in steps to relate what I am talking about to intelligence (or any third concept) first. Second: I want it to be fairly concrete, but I also want it to be at least close to the level of abstraction of the problems we are trying to talk about. This buys me what I think of as short vertical inference distance: I can go down a level of abstraction (one specific type of job, like software engineer) or up a level of abstraction (maybe a sector of the economy, like AI) and return without much additional work. To do this, I'll need to come up from underneath, which is to say start with a concrete real thing and abstract up from there. By contrast, I understand AGI as essentially being a top-down type of abstraction: it begins with the idea of a lawful universe (in the sense of physical laws); then drops down from the laws to the general rules governing intelligence; then comes down to AGI, which acts under those rules. This is why AGI is about intelligence; its conceptual function in discussion is to be a fulcrum between us and the rules governing intelligence. My ideal case starts from the object level, and then goes up just enough levels to be convenient for the stuff I am trying to talk about. Third: I want it to be able to reconnect to the big-picture questions surrounding AGI. I think this cashes out as being able to say reasonable things about what intelligence means in the context of the tool. The standard for reasonable I am looking for is really just clear and directionally correct. This should let discussions throw out wrong things and establish the right direction for the conversation to go in without too much effort, and also without falling away into an independent bubble entirely severed from the source. Build the Tool The about-ness criteria is straightforward. There is a pre-existing idea of intelligence vs capabilities in public discussions. I therefore come down firmly on the side of capabilities. In lieu of Artificial General Intelligence, I give you the General Capability Machine (GCM). Keeping to the idea of being about what AIs can actually do, the General Capability Machine is a generalization over existing AIs, so that we can say anything any existing AI can do, the GCM does also. For the object level grounding, we can look at how we measure AI capabilities now, which is benchmarks. AI benchmarks are a set of concrete tasks, usually of the same kind. Therefore I choose the task as the object level thing to ground the General Capability Machine on. However, we clearly can't leave it there: operating solely on tasks is basically just tracking the excruciating details of every AI in the world, which is no more manageable than the AGI burden I complained about upfront. That being said it also feels like going directly to capabilities is a uselessly large jump. In the AGI case, consider the ARC benchmark: it is a series of concrete tasks that are designed to be difficult for LLMs but easy for humans, and therefore doing well is an indicator of intelligence in the LLM. This doesn't do much for my rhetorical purposes; ARC makes a strong correlational argument and indicator of success, but doesn't feed back into a better way to talk about intelligence (directly, anyway). Therefore I need something a level above tasks, that summarizes the task information well enough to help talk about capabilities. Returning to benchmarks: the idea is that if an AI does well on the tasks in the benchmark, then it will do well on all tasks of the same type. In the context of humans, we have a natural choice of abstraction[2] for this - skills. When a person can do well on a type of task, we call it a skill; so I'll do the same for when an AI does well on a type of task. Doing a sanity check: in neural nets this is like the problem of doing well on the training distribution, but failing to do well outside of it. In benchmarks, models will sometimes do well on the benchmark tasks, but then in production it is discovered they routinely fail at the same kind of task. We compare to a human who does well in the Intro to Algebra course, but then gets tripped up on later problems that do stuff like use a,b instead of x,y. We would say they did not learn the proper skill, only memorized the problems in the original course. This closely matches the other two relationships - using the term skill for the benchmarks case seems solid. But how do capabilities enter the picture? Capabilities are the thing we want to match with human capabilities, or else against the world. At bottom, doing anything out in the world tends to need a sequence of tasks, usually of several different types. So any time I would ask something like "is AI capable of X," I want to: Break down X into its associated sequence (or even sequences) of tasksIdentify what types of tasks are presentIdentify the set of skills corresponding to those task typesCompare that set of skills to the list of skills describing the General Capability Machine If the General Capability Machine has that set of skills on the list, then it has the capability. I can say "AI is capable of X." The final criteria is that I can connect this way of thinking back to Artificial General Intelligence. I bridge the gap here through the idea of learning. What modern AI does with learning cashes out as new skills in the list of the General Capability Machine. This makes talking about AGI straightforward at least some of the time. For example, the question “how fast is AGI approaching?” becomes “how fast is GCM adding skills?” From there I could get more specific about what AGI means in terms of GCM descriptions. Review Talking about near term things like short timelines and employment using the idea of AGI is probably doomed because it is too big to use correctly in conversation. Now I have my General Capability Machine, which is grounded in tasks AIs can do. When AIs do well on tasks of the same type, I call that a skill. To describe the General Capability Machine I make a list of its skills. To talk about capability I break it down into the tasks it requires, then list the skills the tasks imply. I compare the list of skills the capability demands to the General Capability Machine list, and say the machine has the capability if all those skills are present. Now when I want to think about near term things like timelines or the ability to do a job, I can think in more concrete terms of what AIs can and cannot do. I feel like the General Capability Machine is the near term rhetorical complement of AGI. We'll see how it works in practice. ^ Including similar framings like Artificial Super Intelligence (ASI), which distinguishes the generality of Artificial General Intelligence from the superhuman requirement, and Transformative AI (TAI) which focuses on the question of impact. ^ I mean this in a way conceptually similar to Natural Abstractions: Our physical world abstracts well: for most systems, the information relevant “far away” from the system (in various senses) is much lower-dimensional than the system itself. These low-dimensional summaries are exactly the high-level abstract objects/concepts typically used by humans. These abstractions are “natural”: a wide variety of cognitive architectures will learn to use approximately the same high-level abstract objects/concepts to reason about the world. Except in my case we are using abstractions that apply to humans and also applying them to machines, so we can relate what humans can do to what machines can do.
2025-01-11
https://www.lesswrong.com/posts/devng6zQYmBDkg9wR/ann-altman-has-filed-a-lawsuit-in-us-federal-court-alleging
devng6zQYmBDkg9wR
Ann Altman has filed a lawsuit in US federal court alleging that she was sexually abused by Sam Altman
quanticle
On January 6, 2025, Ann Altman filed a lawsuit in the Eastern District of Missouri alleging that Sam Altman carried out multiple acts of sexual abuse against her over "a period of approximately eight or nine years" starting in 1997. The case number is 4:25-cv-00017, for those who have PACER access. I find the lawsuit complaint notable because the alleged scope of the abuse appears to be much greater than indicated in pythagoras0515's post.
2025-01-08
https://www.lesswrong.com/posts/f5CERJJuCmnc4Yth8/ai-safety-outreach-seminar-and-social-online
f5CERJJuCmnc4Yth8
AI Safety Outreach Seminar & Social (online)
Linda Linsefors
AI Safety Outreach = Making more people better informed about AI risk, through any means that works. Who is this event for: Anyone who wants to help with, or might want to help with AI safety outreach. No qualifications required. Approximate Schedule: There will be a mix of talks, 1-on-1s with other participants, Q&A, some breaks and 2-minute lighting shares from the audience. Here's an approximate schedule, but expect it to be adjusted on the fly. The event (other than the 1-on-1s) will be recorded and shared publicly, but some parts may be removed on the request of the speakers. Speakers: William BairdRemmelt Ellen Collaborative note taking: Shared notes for AI Safety Outreach Seminar - Google Docs Registration (optional): Registration is not necessary, but if you sign up here you'll get an event reminder.
2025-01-08
https://www.lesswrong.com/posts/htivBwW2ym7pxri3Q/xx-by-rian-hughes-pretentious-bullshit
htivBwW2ym7pxri3Q
XX by Rian Hughes: Pretentious Bullshit
yair-halberstadt
SPOILER WARNING: Extensive spoilers for XX. XX was recommended by one of the comments on my Sci-Fi micro-reviews, so I read it. I can't seem to find any critical reviews of the book online so I'm going to while away a few hours of my life in a misguided attempt to reveal that the emperor has no clothes. Typography First let's dispense with XX's most distinguishing feature, then we'll get down to brass tacks. Rian Hughes plays around a huge amount with typography and tries to push the edges of text as a medium. This is an interesting quirk, and is sometimes appreciated, but sometimes makes the novel actively hostile to the reader. Reading through twenty pages of this quickly gets tiring: Overall, I don't think it added that much to the story, but I guess some people would like it 🤷. Length And Pacing This book is long (978 pages), and slow. It's not without its good bits, but to be blunt, it's a slog. If I appreciate the ideas in a book though, I'm happy to recommend even longer and slower works. So lets move onto: Ideas I generally prefer hard sci-fi. Sci-fi that might make up a number of novel technologies, but then applies them consistently and rigorously[1]. Sci-fi that explores ideas in physics,. economic, sociology, philosophy, or whatever. That said I'm perfectly happy to read science-fantasy. If you want to tell a story set in space, and don't want to worry too much about the annoying physics, then so long as the plot is good I'm all for it. XX is neither. It tries its hardest to be about high-minded concepts, alternative physics, and philosophy, but every single one of its ideas is completely BS. Let's get started: The Oxbow: After a long monologue about how consciousness requires localisation, the protagonist finishes off with: "We pinch off a piece of the Web, of ideaspace, through a looped buffer - one with a very specific permeability that I've, uh, I've not completely resolved as yet. Connected, so there is a way in, but shielded, cut off from most of the noise of the rest of the internet. This will give any signal in there that may be trying to localise itself a means to cohere. If you're diffused throughout a system like the internet, a system with a billion cameras for eyes and a billion keyboards constantly firing up your nerve signals, you're not going to know where, if anywhere, you are. The net is interconnected; you are the net. There is no not-you. But inside the Oxbow, it will be able to exist as a dualistic it and not-it - and thus we imbue it with a sense of self." Given this critical insight he builds some software which magically allows ideas on the internet to become personified as conscious beings, including talking, controlling a 3D printer, and more. This reminds me of the classic meme: In general I'm fairly forgiving of leaving some details unspecified in sci-fi. After all if you had all the details it wouldn't be fi. But the author does this for pretty much every explanation of anything throughout the book, and yet seems incredibly proud of himself for his high brow philosophical ideas. For example: The signal contains a grid of individuals, therefore whenever you put on AR goggles you see a global 3D grid of alien creatures laid out across the earth.Fundamentally everything is information, therefore Dana can teleport into the grid and back out into physical space wherever she wants.These creatures have direct brain communication abilities, therefore if you're in the same place on earth as a creature is in the grid it can affect you, e.g.making you sleepy. Signal Compression The signal is described on page 406 as being 100 trillion digits long, or about 300 trillion bits of information. On the other hand on page 59 it states that the signal contains one section with 760 trillion near repeats of the same data, a direct contradiction. Either way it contains blueprints for enough individual creatures to fill the entire earth's volume at one creature every 1000 cubic metres[2]. That's one million trillion creatures. There is no possible way to fit that into 300 trillion bits. The story vaguely implies super advanced compression was used, but this is nonsensical. Firstly because that implies the difference between 2 creatures can be encoded in less than one bit. And secondly because the story clearly states that: Each creature is encoded in a separate section with a 3.8 ms pause between the sections.Similar creatures are placed next to each other in the signal.Adjacent creatures share a lot of the same data. And this is critical to the story, allowing the protagonist to divine the existence and location of a species called the shepherds + compare the data of different creatures to work out where their brain is likely to be encoded for within the section. Efficient compression requires that the signal should look essentially indistinguishable from noise. Near repeats of the same data is the opposite of compression. OK. So let's throw out the 100 trillion digits number as pure nonsense. Let's also assume it's not compressed. If each section takes 3.8 ms, receiving one million trillion sections should take about a billion years. I assume that didn't happen. Neither is each section particularly data rich, containing at most 7 billion digits. It doesn't matter how you slice it, the author is fundamentally confused about what's actually in the signal. Moving the grid The grid includes the entire earth, so what do you do if you want to see a creature 10 thousand miles away and one thousand miles underground? The protagonist deals with the first by telling his computer he's in a different location, and the second by giving his computer the wrong time. Since the earth orbits around the sun, offsetting the time will also offset the grid by the amount the earth has orbited by then. Changing time causing the grid to move will only work if there's some absolute frame of reference, something even Galileo knew was wrong. As for changing the location - there's no way that the grid is using the computers knowledge of its location when deciding what to display in its AR goggles, because its clear that it updates continuously when you move, and GPS is not accurate enough for that. It's clearly using some other method to determine location. Besides if it is using the computers location either the computer knows its altitude, in which case you don't need the time shenanigans, or it doesn't, in which case how come the AR goggles show you the correct creatures when you're flying? Sentient Galaxies A central plot point is that the Andromeda galaxy is conscious, the stars that make it up performing a dance that encodes information, thinking slow thoughts over the 10s of thousands of years it takes light to cross the galaxy. Now the author is right that there's no fundamental reason you couldn't encode a brain in the movements of stars, or in the light that crosses between them, but expecting a galaxy to spontaneously be conscious is no different from expecting a watch to spontaneously arise out of a tornado running through a garbage heap. Evolution is required to craft consciousness, and galaxies don't reproduce. The signal as a virus Somehow or other the signal hijacks whatever brain/computer/whatever which stores it, allowing it to control the host, up to and including such diverse creatures as Algae and sentient stars. Whilst viruses are of course possible, medium independent viruses are not. For any given virus the vast majority of information processing devices will treat the virus as nothing but noise. A virus must be carefully crafted for the particular processors that will run it. Is the amount of diversity in the galaxy so low that you can craft a virus that will affect nearly every species? You know what, I'll let this one pass, this is probably the most reasonable idea in the book. Alternative physics The author spends some time crafting an alternative theory of physics where redshift is caused not by the universe expanding but by everything in the universe shrinking. Now I don't have the tools to evaluate this, but as far as I can tell it's not based on any existing theories, and I'm nearly certain it doesn't stand up to scrutiny. It's completely irrelevant to the plot, and so as far as I can see the only reason for the author to add it in is to show off how clever he is for making up superficially plausible BS physics. Conclusion I'm not trying to nitpick here. Nearly all sci-fi books contain some amount of mumbo-jumbo, handwaving, and contradictions. But their good points tend to make up for those. XX has nothing. It's long,  slow, has an awful plot, and doesn't have a single idea that stands up to scrutiny. And it's constantly sounding awfully smug about giving you a bunch of philosophy 101 lessons. 2/10, would not recommend. ^ Ted Chiang is a master of the short story in this genre. ^ The creatures are on a cubic grid about 10 metres apart. Whilst there are places it isn't this regular, and there's entirely missing sections, it's clear from the story these aren't enough to impact this by more than an order of magnitude.
2025-01-08
https://www.lesswrong.com/posts/JdbnwjHFGNqZE3AHf/the-absolute-basics-of-representation-theory-of-finite
JdbnwjHFGNqZE3AHf
The absolute basics of representation theory of finite groups
dmitry-vaintrob
This will be an "ML-oriented" introduction to representation theory of finite groups. It is an introductory sketch that orients towards both a language and a result (the "real semisimplicity" result) that is useful for thinking about the subject in an interpretability context. This is a somewhat low-effort math post (subbing in for a planned post I'm running late on). It isn't a prerequisite for anything, but will be useful background for a future discussion of how NN's learn associative group multiplication (a joint project with Louis Jaburi). I'm hoping for this to be readable by people with a very small amount of background in abstract algebra: it is meant to fill in some very basic knowledge gaps. I'd be very interested in comments about what is confusing or difficult about this post, to help focus on the right exposition directions when writing about this in the future. I'm not going to assume much abstract algebra, other than the notions of a group and an algebra (always assumed to be a unital, associative algebra). Algebras of note Recall that all the set Matn×n of n×n (real-valued) matrices forms an algebra (it's a vector space with an associative product that distributes across addition). For any field F, there is a notion of a matrix algebra over F and Matn×n is naturally an algebra over R. The set  Matn×n(C) of complex-valued matrices forms an algebra over C, but it can also be viewed as an algebra over R, in the same way as C can be viewed as a two-dimensional vector space. We will be interested in understanding the "algebra" associated to a finite group G. It turns out that working with algebras over C is easier than working with algebras over R from a theoretical viewpoint (this is of course because C is algebraically closed, i.e., any polynomial equation has a solution. We'll start out looking at such representations. However unfortunately, neural nets don't care about theoretical ease: instead they care about compression and efficiency, and this will necessitate also understanding a bit of representation theory over R. For starters, we work over C. We'll need one more useful algebra that will be one of the main characters in our discussion: namely, the group algebra of G, written C[G]. This is the algebra whose elements are formal complex linear combinations of elements of G, and where product is induced from the product on G distributively. We analogously have the real-valued group algebra R[G]. When we don't want to commit to either real or complex coefficients and want to treat them simultaneously, we'll write F[G] for F one of our fields R or C (a bit of the analysis here carries over to more general fields, though usually one should require the characteristic of the field to be relatively prime to the order |G| of the group for the theory to have comparable niceness). So for example if G=Cn is the cyclic group on n elements (Cn is better notation for the group that is sometimes written Zn or Z/n), then an element of C[G] is a sum λ00+λ11+λ22+…λn−1n−1. Here the "bolded" numbers are group elements. Note of course that since we're viewing it as a group, multiplication in Cn corresponds to addition of numbers: i.e., a⋅b=a+bmodn. (This is another reason why Cn is better notation -- since Z/n could also mean the ring, multiplicative and additive notions can be more confusing.) To see how group multiplication in this algebra works, let's work out an example with G=C3: (4⋅1+5⋅2)⋅(−3⋅0+2)= −121⋅0+41⋅2−15⋅2⋅0+10⋅2⋅2= 4⋅0+(−12+10)⋅1−15⋅2. (Here remember that we're writing the group multiplicatively, so for example 2⋅2=1 since 2+2≡1mod3.) Representations Now for the purposes of this note we define an n-dimensional representation of G to be a map of algebras from the group algebra to a matrix algebra: ρ:F[G]→Matn×n(F). Here F can either be the field R ("real-valued representations") or the field C ("complex-valued representations"). Of course by linearity, it's enough to specify ρ(g) for elements g∈F[G], which are the standard basis for the group algebra: i.e., ρ can be recovered from its restriction ρ∣G:G→Matn×n. Conversely, any map ρ0:G→Matn×n leads to a map of algebras if it's Unital, i.e. ρ0(1)=1 andA multiplicative homomorphism, i.e., ρ(gh)=ρ(g)ρ(h) (product of matrices on the RHS). Note that setting h=g−1 and using the homomrphism property tells us that ρ(g) must always be invertible, i.e., an element of the group of invertible matrices GLn(F)⊂Matn×n(F); this shows that our notion of representation is equivalent to the usual notion of group homomorphism ρ0:G→GLn(F). We say that two representations ρ1,ρ2 are equivalent if they have the same dimension (i.e., both n-dimensional for the same n) andthey are conjugate: so there exists some other invertible matrix γ∈GLn(F) such that, for all g∈G, we have the identity ρ2(g)=γρ1(g)γ−1 (if you haven't seen expressions like this, convince yourself that the formula on the right defines a new representation of G for any fixed invertible γ.) In this case we think of ρ1 and ρ2 to act "via the same symmetries", just expressed in different bases (the standard basis in the case of ρ1 and the basis given by γ in the case of ρ2). If you've ever heard of group representations, you've heard of the notion of irreducibility. A representation ρ is irreducible if it cannot be decomposed into smaller components: i.e., if it isn't conjugate to a block-diagonal representation of the form g↦(ρ1(g)00ρ2(g)), for ρ1,ρ2 nonzero representations. The "reducible" representation above (check this is a representation if you don't believe me!) is denoted ρ1⊕ρ2. Now the big secret point of representation theory is that the set of irreducible representations (up to isomorphism) "contain all the algebraic information" about G. This is encompassed by a series of results; an appealing consequence of these results (and a way of condensing much of the information contained in them) is the semisimplicity theorem below. Let ρ1:C[G]→Matn1×n1(C),…ρk:G→Matnk×nk(G) be a collection of representations that contains each irreducible representation exactly once up to conjugacy. (Usually, since conjugacy, i.e., "isomorphism", simply corresponds to choosing a different basis, people will use "sum of all irreducible representations" to mean this object). Here n1,…,nk are the dimensions of the irreducible representations. The classification of (complex) representations of G and the eigenvalues of the matrices ρi(g) is called character theory (something that gets covered in a first course on representation theory, but that we won't get into here). Character theory in particular implies that the number (k) of distinct irreducible complex representations is equal to the number of conjugacy classes of G, i.e., the number of elements on the set obtained from G by identifying g and γgγ−1 for any G. If G is commutative, then each conjugacy class just has a single element (each element g above only gets identified with itself), so in the special case of commutative groups (also known as "abelian" groups), the number of conjugacy classes is equal to the number of elements, i.e., k=|G|. Theorem (semisimplicity of the group algebra): ρ1⊕ρ2⊕…⊕ρk be the direct sum of all irreducible representations of G. Then the map C[G]→Matn1+n2+⋯+nk(C) is injective, and identifies C[G] with the set of all block diagonal matrices ⎛⎜ ⎜ ⎜ ⎜ ⎜⎝M10⋯00M2⋯0⋮⋮⋱⋮00⋯Mk⎞⎟ ⎟ ⎟ ⎟ ⎟⎠. (Here the Mi are ni×ni complex-valued matrices). In other words if we only care about the algebra C[G], then all of its structure is fully determined by the (unordered) collection of irreducible representation dimensions n1,…,nk. Since there are a lot of groups, this implies that there are some pairs of distinct groups G, H that have isomorphic matrix algebras! I.e., while the groups cannot be identified with each other, every element of G can be identified with a linear combination of elements of H in a way that respects multiplication and admits an inverse (once again linear combination-valued): indeed, if the representation dimensions of G and H coincide, then as algebras, we can identify both group algebras with the same algebra of block-diagonal matrices. A simple example of a pair of distinct groups with isomorphic group algebras are the groups C9 and C3×C3[1]. Real reprsentation theory, and "representation type". But once again, if we want to understand how NN's process groups, we need to understand the "more efficient" representation theory over R. Here we can start our analysis by looking at the same picture as before: namely, while we have no guarantee that the matrix coefficients of a matrix ρ(g) in a complex representation has real coefficients, we do have a real subring R[G]⊂C[G], namely real-valued formal linear combinations of group elements viewed as a subring of complex-valued ones. Since each complex linear combination can be uniquely identified with a unique block-diagonal matrix BlockC:=⎛⎜ ⎜⎝M1⋯0⋮⋱⋮0⋯Mk⎞⎟ ⎟⎠, we want to understand what kinds of subrings we can obtain. This must be a "decomplexification" (more standardly called "real form") of BlockC, a subring BlockR⊂BlockC. which has the following properties: BlockR is closed under multiplication by real scalars (but not necessarily complex scalars),BlockR has half the dimension of the real vector space BlockC.each block-diagonal matrix M∈BlockC can be expressed as M0+iM1, for M0,M1 in BlockR. Of course, one "real form" is the set of block-diagonal matrices as above with real coefficients, and sometimes this does correctly capture the image of the real group algebra (this is the case, for example, for the symmetric group Sn). However, there exist other possibilities for the real form. The property of being "real" (i.e., having real coefficients as a formal linear combination of group elements) as a group algebra vector may not be compatible (even after allowing a fixed conjugation) with the property of being "real" (i.e., having real coefficients as a matrix) as a block-diagonal matrix. In other words, the group G (and its real group algebra) "chooses" a specific real form of the algebra of block diagonal matrices. Such real forms can be fully classified, and there are exactly three things that can happen to some n×n block Matn×n(C). It gets replaced by (a conjugate of) Matn×n(R)⊂Matn×n(C). Irreducible representations ρi for which this happens for the corresponding block are said to be of "real type" (note that being of real type is a property of a complex-valued representation!).It gets combined with another n×n block and (again up to conjugating by some invertible matrix γ), a block-diagonal matrix spanning these two blocks (A00B)∈BlockR is "real" if and only if A and B are (coordinatewise) complex conjugates: i.e., coordinatewise, A=¯B. Pairs of representations ρi,ρj for which this happens are said to be of "complex type", and they always satisfy ρi(g)=¯ρj(g) (after possibly replacing one of ρi,ρj by a conjugate). The dimension n of the corresponding block is even and, up to conjugation, the matrices Mi∈Matn×n(R) in the real form associated to this block can be split into 2×2 blocks: Mi=⎛⎜ ⎜ ⎜⎝h0,0⋯h0,n/2⋮⋱⋮hn/2,0⋯hn/2,n/2⎞⎟ ⎟ ⎟⎠, where each hij∈Mat2×2(C) above is a complex-valued 2×2 quaternion matrix, i.e., a matrix of the form hij = a1 + bi + ci + dj, where a,…,d are real numbers and 1 =(1001) is the usual unit matrix, and we have i = (i00i), j = (01−10) and k = (0i−i0) (These matrices are known in physics as the Pauli matrices.) The resulting 4-dimensional real subalgebra of Mat2×2 is called the quaternion algebra, denoted H. It has the wonderful property of being the only (finite-dimensional) division algebra over R (i.e. algebra all of whose nonzero elements are invertible) except for C and R itself, and the only non-commutative one. Representations ρi whose associated block exhibits this behavior are said to be of "quaternionic type". Together, these results imply that whereas the complex group algebra C[G] is always (isomorphic to) a direct sum of matrix algebras Matn1×n1⊕⋯⊕Matnk×nk (this expression, with the symbol ⊕ denoting "direct sum", is the standard mathematical notation for the block-diagonal algebra), the real group algebra R[G] is isomorphic to a direct sum of matrix algebras of three forms: Real matrix algebras Matn×n(R) (each is associated to a single complex representation of dimension n).Complex matrix algebras Matn×n(C) (but viewed as an algebra over R). These components of the real-valued group algebra "combine" two complex representations, so there are half as many such components as there are irreducible (complex-valued) representations of complex type.Quaternionic matrix algebras Matn×n(H) (each is associated to a single complex representation of dimension 2n). Examples We conclude by giving some explicit examples of the comparison between the group algebra the block-diagonal matrix algebra associated to irreducible representations. Complex representations of cyclic groups Let G=Cn be the cyclic group on n elements. In this case it turns out each irreducible representation of G is one-dimensional and there are n of these. By our semisimplicity result above, this is equivalent to the fact that the group algebra C[G] is isomorphic to the algebra of diagonal n×n complex matrices. To see this it is enough to construct an isomorphism. Let ζ:=exp(2πi/n) be the elementary nth root of unity. For any residue \[a\in C_n\], we write M(a)=diag(1,ζa,ζ2a,…,ζ(n−1)a). Here in order to make sense of ζa where a is a residue (rather than an integer), we're implicitly using the fact that ζn=1 (and so exponents of ζ only depend on their residue mod n). The mapping above is obviously a homomorphism (i.e., compatible with multiplicaiton). It's a standard fact that the above diagonal matrices are linearly independent as a ranges over the group elements. Since both the algebra of n×n diagonal matrices is n-dimensional, these n linearly independent vectors must form a basis; since we have identified a basis of the group algebra C[G] with a basis of the block-diagonal matrix algebra, this is an isomorphism. If we look at the diagonal matrix associated to the linear combination λ00+…+λn−1n−1, its kth element is λ0+λ1ζ+λ2ζ2+…+λn−1ζn−1. Looking at this formula, we see that we have rediscovered the finite Fourier transform: indeed, if we view the coefficients n↦λn as a complex-valued function λ on \the group elements 0,…,n−1,  then the (k,k)th matrix coefficient of the associated diagonal matrix is the Fourier transform ^λ(k). The homomorphism property in this case is equivalent to observing that the convolution product (which is equivalent to the group algebra product -- checking this is a nice exercise) maps to the elementwise product after Fourier transform (and of course multiplying diagonal matrices multiplies their coefficients elementwise). Real representations of cyclic groups Here there is a difference between even and odd n. The results of the previous section tell us that there are n different complex one-dimensional representations ρi (with the kth representation corresponding to taking an element of the group algebra to the kth diagonal element). A one-dimensional representation just takes each element of a∈Cn to an (invertible) element ρk(a)∈C, and is given by the formula ρk(a)=ζka. For all n, the representation ρ0 is of real type: indeed, it takes every element of G to 1, which is a real-valued 1×1 matrix. If n is even, then the representation ρn/2 is also real-valued, as it takes each element a to (−1)a∈±1. And whether n is even or odd, all other ρk are of complex type: we can check this simply by seeing that ρk is not isomorphic to the complex conjugate representation ¯ρk=ρ−k. (Note since two matrix representations are isomorphic if and only if they're conjugate by some fixed matrix γ, any 1-dimensional representation will be conjugate only to itself. Thus it's enough to observe that for k≠0,n/2, ¯ρk(1)=¯ζk=ζ−k≠ρk(1) to see that ρk is of complex type. Thus semisimplicity in the real case tells us that the group algebra R[Cn] is isomorphic R⊕Cn−12, if n is odd andR2⊕Cn−22, if n is even. (Here an "exponent" d is the direct sum expression is shorthand for taking the direct sum of d copies of the same component). The group of elementary quaternions Unspurprisingly (since the quaternions are a noncommutative division ring), no commutative group has representations of quaternionic type. The smallest group with a representation of quaternionic type is the group G=Q8 of elementary quaternions, with the 8 elements 1, -1, i, -i, j, -j, k, -k. We can identify each of these elements with the corresponding 2×2 "Pauli" matrix above, and then they form a group with identity 1 and the following multiplication table: (-1)±α=∓α for α one of the symbols 1, i, j, k;i^2 = j^2 = k^2 = -1;ij = k, ji = -k, jk = i, kj = -i, ki = j, ik = -j. This group has one 2-dimensional complex representation of quaternionic type, thus giving a map Q8→Mat1×1H=H (remember that the quaternionic dimension is half the complex dimension). This is the obvious map, taking each element of the group to the corresponding quaternion; it also has 4 different 1-dimensional real representations: one is (as for any group) the trivial representation, taking every g∈G to 1, and the other three factoring through one of three surjective homomorphisms Q8→C2, which ignore sign and take 2 of the three letters i, j, k to -1. There are no other representations, so we see that the real group algebra R[Q8] is isomorphic to the direct sum H⊕R4. In particular, Q8 has no representations of complex type. ^ In fact, these two algebras have the same group algebras over R, which is a stronger result. Similarly, the Klein 4-group C2×C2 has the same complex group algebra as the cyclic group C4, but these have nonisomorphic group algebras over R.
2025-01-08
https://www.lesswrong.com/posts/gG4EhhWtD2is9Cx7m/implications-of-the-ai-security-gap
gG4EhhWtD2is9Cx7m
Implications of the AI Security Gap
dan-braun-1
This post reflects my personal opinion and not necessarily that of other members of Apollo Research or any of the people acknowledged below. Thanks to Jarrah Bloomfield, Lucius Bushnaq, Marius Hobbhahn, Axel Højmark, and Stefan Heimersheim for comments/discussions. I find that people in the AI/AI safety community have not considered many of the important implications that security in AI companies has on catastrophic risks. In this post, I’ve laid out some of these implications: AI companies are a long way from state-proof securityImplementing state-proof security will slow down safety (and capabilities) research a lotSabotage is sufficient for catastropheWhat will happen if timelines are short?Security level matters, even if you’re not robust to top cyber operations AI companies are a long way from state-proof security I’m of course not the first one to make this claim (e.g. see Aschenbrenner). But it bears repeating. Last year, RAND released an excellent report outlining what it might take for an AI company to prevent adversaries of various capability levels from stealing model weights. Its highest security level, SL5, defines a level of security that could “thwart most top-priority operations by the top cyber-capable institutions”. This is, of course, the level of security that we need around a technology with catastrophic risks. Unfortunately, today’s AI companies are likely to be further away from reaching SL5 than the time it is likely to take to develop models that are worth protecting at this level. Below, I illustrate some reasons why implementing this level of security might take a very long time. Reaching SL5 is a tough technical challenge. Some components that RAND suggests for SL5 do not even exist yet “More R&D is needed to enable organizations to support production models while meeting SL5 security requirements. We recommend the development of hardware security module (HSM)-like devices with an interface that is specialized for ML applications.” Let’s take preventing supply-chain attacks from software as another example. The RAND report lists this security measure for SL5: “Strong limitations on software providers (e.g., only developed internally or by an extremely reliable source)”. My guess is that if you looked at all software being used by researchers at a frontier lab, you’d have a list longer than this article, many of which would be maintained by indie developers in their basements. An AI company might say “OK, this is a problem, let’s throw a bunch of our own engineers at the problem and build internal tools that replicate all this functionality”. If managed well, this intervention would reduce the chances of running egregious malware that exfiltrates all of your data to external servers. However, this tactic also runs into the following problem: The number of vulnerabilities in a system has a strong negative correlation with how much it has been used. New internal tools will have much less use than popular external public tools, meaning that vulnerabilities are less likely to be found. Attackers are in a much better position if there are many of these more innocuous-seeming vulnerabilities available to them to exploit in their operations. I expect AI to help a lot with the problem of finding and fixing vulnerabilities in packages, though I think this security asymmetry between new, internal tools and popular public tools is likely to stand, at least for a couple of years. Of course, the company also has to find people to develop these tools internally. If they hire many additional engineers, the number of people who can maliciously insert vulnerabilities into these tools may become large (enforcing code reviews helps mitigate this, although not fully). Beyond the technical challenges, implementing security processes takes a long time when humans (or AIs) are involved. It’s not enough for the security team at an AI company to develop a new SL5 system and say “OK all, we’re done. Everyone use this setup and read these docs and we’re SL5 now”. If this happened, the whole setup would collapse under the flood of “we need to be able to do X to be able to do anything useful” from staff. There’s also the issue of preventing staff from gossiping about confidential company information, a practice that’s currently a pastime in the AI tech scene. Embedding a strong security culture at a company is a significant, time-consuming endeavour. When security isn’t a top priority from the company’s inception, changing the culture is all the more difficult. Of course, it’s possible that not all of the suggestions in the RAND report for SL5 are required to protect against top-priority operations from leading cyber-capable institutions. It’s also possible that the list of SL5 suggestions will be insufficient. RAND notes considerable lack of consensus between surveyed experts, stating that “There are deeper and more conceptual disagreements about what is needed to achieve the security implied by SL4 and SL5—with opinions ranging from the SL3 benchmark being sufficient to secure against all threat actors to claims that no system could ever present a significant hurdle to [top-priority operations by top cyber-capable institutions]“. Nonetheless, the report provides useful guidelines, and I think frontier AI companies are at least several years away from implementing even a more minimal version of these SL5 guidelines. Implementing state-proof security will slow down safety (and capabilities) research a lot I get the sense that many in the AI field strongly underestimate how much working under strict security measures would slow down research. This applies to both capabilities and safety research, although may have a stronger effect on safety research. Most importantly, you just can’t give wide access to the latest models without massively increasing the security risk (“wide” in the sense of the number of people/AIs and the types of experiments that can be run). Unfortunately, the safety research being pursued most at AI companies, and that which is most promising under short timelines (e.g. Control, Monitoring Scalable oversight), requires having wide access to the most powerful models. Since generalisable theories of intelligence and alignment are in short supply, and may be for several years, most safety work is likely to take the form of prodding and poking the most intelligent model available (e.g. by trying various “safety training” setups) to see how it responds or to find holes in your control/monitoring setup. In addition to not having wide access to your latest models, here is another set of things you’ll have to deal with as an AI researcher/engineer under strong security measures: You will be waiting for approval for many things from various people, e.g. before running various experiments, before using certain tools, to get a false positive temporary lock on your workspace undone. This will massively increase the research feedback cycle.You’re restricted in the types of experiments you can run at all. E.g. you might not be able to export a large amount of data from your experiment in case it contains too much information about the model weights. Interpretability research (for e.g. monitoring) may be especially difficult under these restrictions.You won’t be able to use the latest untrusted models directly to speed up your research (or be strongly limited in how you can use it). This might not seem like a large cost now, but certainly will when models perform as well or better than AI researchers.The number of people in the company you can discuss research ideas with reduces.You get access to fewer research results from other teams, many of which would help with your own work.The mission-driven culture that previously made your company move fast is likely to take a hit due toteams being more segregated from each otherstaff not enjoying their work as much when they’re frequently interrupted by security “blockers”. I’m certainly not saying this to dissuade companies from implementing these measures; if you’re in the business of developing technology that might cause extreme harm, this is a minimal price to pay. Nobody wants to live in a world where AI developers are not paying these costs to keep their models secure. The following sections hopefully make that clear. Sabotage is sufficient for catastrophe In a good world, an AI developer may reach SL5, and be robust against attacks that can exfiltrate the weights of an AI model. This however does not mean that an adversary couldn’t still cause a catastrophe with this model. To reach SL5, an AI company would need to place very strong restrictions on the outgoing network traffic from the isolated network hosting the powerful AI models. The goal is to make it as hard as possible for an attacker to exfiltrate terabytes worth of weights from the network. Fortunately, the output of AI experiments that will pass outside of the isolated network need not be very large (assuming that copies of the model would stay inside the network). On the other hand, placing strong restrictions on incoming traffic to the isolated network hosting the model is much more difficult as many experiments will require using large amounts of external data (e.g. the data required to train/post-train a model). This opens the door for various types of sabotage. AGI, almost by definition[1], will be capable of scheming against its developers. Ideally the AGI will be aligned enough such that it would show no proclivities to doing so, though we’re not doing so well on that front at the moment (1, 2). Let’s assume that the AGI will try to scheme against its developers, and an AI Control setup is required to prevent it from succeeding. All the adversary needs to do now is to sabotage this AI Control setup. Sabotage here might be as simple as corrupting the trusted monitor (the AI tasked with checking the outputs of the scheming AI) by e.g. planting backdoors in some of its finetuning data. Also, remember that the adversary has a powerful insider that wants to cause havoc itself: the AI. Now consider the case where the AGI is aligned, and will not naturally scheme against its developers. I contend that, in practice, this situation may actually be worse for defending against external sabotage. An aligned AGI is likely to have far fewer controls around it. In this case, all an adversary would need to do is to cause the AGI to become unaligned, and let the AGI do its thing. When considering how we currently (attempt to) align AI systems, and how effective very simple jailbreaks are, I expect that “undoing” the alignment of one instance of an AGI we’re on track to building is a much easier task for a capable adversary than stealing model weights. I do expect adversaries to be less interested in sabotaging an AI than stealing it. That said, assuming AI developers reach some reasonable level of security, I think sabotage would be so much easier to pull off that it should be a bigger concern for those of us interested in reducing catastrophic risks from AI going rogue. What will happen if timelines are short? Many AI/AI safety folk believe that short timelines (e.g. those laid out here) are very possible, and that no major advancements are needed to get there. Let’s suppose that this happens, and that the group in the lead is an AI company which is (somehow) robust to model theft and sabotage from the most cyber-capable institutions. Under these conditions, it seems very likely that other Western and perhaps Chinese groups are going to be able to build AGI themselves in <1 year from the first AGI (you can increase this to <2 years if you’d like, the rest still stands). For things to go well, you’re going to need to argue one of the following: Proliferation of these AGIs would be safe.Things will happen in the <1 year timeframe that prevents any other party from getting AGI. (If so, what is this? Will the US government (and/or other governments) be ready and capable of closing down frontier model development? Will the leading company be able to buy out the others?)Even if a small number of Chinese/Western groups get AGI in <1 year from the first AGI:These companies will have sufficient security, andsomething from that time period onwards will prevent others from building AGI and it eventually getting in the hands of your local terrorist group, andit matters that several parties have the AGI but not your local terrorist group (this is easier to argue for large scale harms but perhaps not existential risk). My personal take is as follows: Short timelines (very roughly as listed here) without any crucial secret sauce held by one company is very possible (>20%). I put much more weight (up to 70%) on the 2028 entry of “AI companies have 10k-1M automated AI researchers” happening before 2035, but these slightly longer timelines do not drastically change the picture below.In these worlds, multiple parties in China and the West will be <1 year behind the frontier (likely aided by theft of models/data/insights).Barring a crippling world war over AI development, nothing but strong US government intervention would prevent AGI from reaching multiple parties. The most likely-seeming thing for the US government to do would be to start its own project or to effectively join forces with 1 existing company, and prevent everyone else in its power from pushing the AI frontier.If the US government does not do this, I think we end up with several companies with insufficient security having AGI, shortly followed by AGI proliferation, likely ending in catastrophe. So this is not a scenario I spend much time strategising about.Note that longer timelines leave some room for a single company to take a big enough lead and prevent other parties from competing (e.g. by buying them out and becoming robust to theft and sabotage). In this case, the US government may not need to get as involved. Security level matters, even if you’re not robust to top cyber operations Leading cyber-institutions are run by parties who also wouldn’t want the AGI getting in the hands of the street-corner terrorist. They might prefer to just steal your model and try to use it themselves, and (hopefully) protect against their own adversaries stealing it from them. Perhaps they will share it with their allies, but one at least has some hope that it doesn’t spread wide very quickly. If your threat model puts at least some probability on it being important how many parties have access to the AGI (and what kind of parties), then reducing the number of adversaries that are able to steal your model is robustly good. As for sabotage threats, it should go without saying that you want to minimise the number of parties that can sabotage the AGI in your company. Advice to frontier AI companies If you don’t want the government to shut down your frontier development when serious capabilities hit, you are going to have to convince them that you are the best company for them to join up with. How can you do this? Yes, raw capabilities will help. But if timelines are short, or perhaps even under some longer timelines, they’re going to realise that your competitors are not far behind. I expect that the most promising way to convince them is to already have a lot of the infrastructure in place to continue AI development/use with very tight security. It’s also going to help if they view you as responsible actors who are putting significant resources into safety research so as to not put all of their own citizens at risk. If you don’t believe that the government will shut your frontier development or amalgamate with you, then seriously consider that failing to prevent sophisticated adversaries from stealing or sabotaging your models is sure to bring about much more harm than the good that you’re trying to do by bringing about AGI. ^ I don’t think it matters much for this post, but I’ll define AGI as the level of AI that can speed up AI R&D 10x at a frontier AI company.
2025-01-08
https://www.lesswrong.com/posts/rC9BteCBHDif2ccFv/what-are-polysemantic-neurons
rC9BteCBHDif2ccFv
What are polysemantic neurons?
vishakha-agrawal
This is an article in the featured articles series from AISafety.info. AISafety.info writes AI safety intro content. We'd appreciate any feedback. The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety. For a “monosemantic” neuron, there’s a single feature that determines whether it activates strongly. If a neuron responds only to grandmothers, we might call it a grandmother neuron. For a “polysemantic” neuron, in contrast, there are multiple features that can cause it to activate strongly. As an example, this image shows feature visualizations of a neuron that activates when it sees any of a cat face, cat legs, or the front of a car. As far as anyone can tell, the reason why this neuron responds to both cats and cars is not that cars and cats share some underlying feature. Rather, the neuron just happened to get two unrelated features attached to it. Why do we think that the neurons are not encoding a shared similarity? Suppose a neuron is picking out some feature shared by cars and cats. Say, the neuron is representing “sleekness”. Then we’d expect images of other “sleek” things, like a snake or a ferret, to activate the neuron. So if we generate lots of different images which highly activate our neuron, and find that they do contain snakes and ferrets, that’s evidence for the neuron picking up on a unified concept of sleekness. Researchers have run experiments like this on neurons like this one and found that, no, they only activate on cats and cars — just as the “polysemantic” hypothesis would lead us to expect. Why do polysemantic neurons form? Polysemantic neurons seem to result from a phenomenon known as “superposition”. "Superposition", in this case, means that the neural net takes a larger number of numerical "features" of its input that might each otherwise have had their own neuron (for example, one neuron for cat-ness and one neuron for car-ness), and instead spreads those features out over a smaller number of neurons, with the features mixed together ("loaded") in different ways for each neuron. In fact, if we only care about packing as many features into n neurons as we can, then using polysemantic neurons lets us pack roughly as many as exp(C * n) features, where C is a constant depending on how much overlap between concepts we allow.[1] In contrast, using monosemantic neurons would only let us pack in n features. Apparently neural network training processes consistently find ways to superpose features together like this because that means they can pack more features into the limited number of neurons they have available, or use fewer neurons for those features, conserving them for more important computations. This would be possible most of the time because most sets of features would have a property called "sparsity". "Sparsity" in this case means that, even if each feature in the set has some plausibly occurring network input for which the feature would be a number far from zero, still for most inputs most features are very small or zero. Essentially, the network is using a smaller number of "real features" (the neuron activations) to do computations on a larger number of "virtual features", but with some risk of getting "virtual wires crossed" (in a manner strikingly similar to synesthesia!), or of dropping out virtual feature numbers when they're too small to stand out from zero in all the mixed-together confusion. What are the consequences of polysemantic neurons arising in networks? Polysemantic neurons are a major challenge for the “circuits” research agenda for neural network interpretability, because they limit our ability to reason about neural networks. It’s harder to interpret what computation is being done by a circuit made out of neurons if those neurons' activations have multiple meanings. As an example: in a circuit where we only have two polysemantic neurons, which encode five different features each, and one weight governing a connection between them, then we have effectively 25 different connections between features that are all governed by that single weight. Which, effectively, means we have 25 different possible computations which could occur using only two neurons. In turn, that makes it very hard to figure out which computations are occurring in those neurons at any given time. Progress in interpreting polysemantic neurons There has been some substantial progress. In 2023, Anthropic claimed to achieve a breakthrough on this problem in their paper “Towards Monosemanticity”. Anthropic trained large "sparse autoencoder" networks on the non-sparse ("dense") activations in other, more polysemantic neural networks, to decompose those activations in the form of sparse activations from among a larger number of neurons. These sparse activations were (reported to be) more monosemantic, corresponding to more interpretable features. As a result of such progress, Christopher Olah of Anthropic stated he is “now very optimistic [about superposition]”, and would “go as far as saying it’s now primarily an engineering problem — hard, but less fundamental risk.” Why did we caveat Anthropic’s claims? Because some researchers, like Ryan Greenblatt, are more skeptical about the utility of sparse autoencoders as a solution to polysemanticity. And while we've made some progress in splitting polysemantic neurons into monosemantic ones,  that still leaves the issue of figuring out how those polysemantic neurons are used by the network to solve problems. That is, how do we find, and interpret, polysemantic computations in a neural network? This is quite hard, as we've noted above, because a collection of polysemantic neurons can represent many different computations using the features in superposition. In 2024, a preprint reported some early successes in this problem with an automatic technique for "small" language models (i.e., < 100 million parameters). The technique, named SHIFT[2], built upon sparse autoencoders to find the many different computations that could be occuring within a collection of neurons, polysemantic or not. Each computation can then be split into its own network consisting of mono-semantic neurons. That means no  superposition of computations, which in turn means a much more interpretable network. One of the authors of the preprint, Sam Marks, wrote that this was "the strongest proof-of-concept yet for applying AI interpretability to existential risk reduction", because of how it fits into research towards the aim of "evaluat[ing] models according to whether they’re performing intended cognition, instead of whether they have intended input/output behavior". ^ This is a consequence of the Johnson-Lindenstrauss lemma. As this estimate doesn’t account for using the exponential number of features for useful computations, it is unclear if neural networks actually achieve this upper bound in practice. (The use of polysemanticity in computations is an active research area. For a model of how polysemanticity aids computations, see “Towards a Mathematical Framework for Computation in Superposition”.) What about lower bounds on how many concepts are packed into polysemantic neurons in real networks? Well, these estimates depend on assumptions about the training data, initialization of weights, etc. So it is hard to give a good lower bound in general. But for some cases, we do have estimates: e.g., “Incidental polysemanticity” notes that, depending on the ratio of concepts to neurons, the initialization process can lead to a constant fraction of polysemantic neurons. ^ Or, Sparse Human-Interpretable Feature Trimming
2025-01-08
https://www.lesswrong.com/posts/m9asQL8tadHJQQPZT/the-type-of-writing-that-pushes-women-away
m9asQL8tadHJQQPZT
The Type of Writing that Pushes Women Away
sdjfhkj-dkjfks
Disclaimer: I don't think people who post here generally exclude women on purpose. I think that to whatever degree women are less represented, this is unintentional. This post combines thoughts I had while reading  Is Being Sexy For Your Homies by Valentine with intuitions developed from reading posts on the site across several years. Although I point out aspects of Is Being Sexy For Your Homies I believe contribute to the male-centric atmosphere of this site, I enjoyed reading the post, and don't mean to suggest Valentine has done anything bad or wrong.  In fact, Valentine acknowledges within the post that he uses highly simplified assumptions. Nonetheless, I think it is worth pointing out that certain ways of writing make women less likely to want to read on. -- Because mostly men write posts here, posts tend to be written in a way that appeals to men. This sometimes unintentionally drives away women. The post Is Being Sexy For Your Homies is written from a clearly male perspective, and prioritizes the concerns of straight men. I think Is Being Sexy For Your Homies is a good example to illustrate the type of male-centric writing I mean to describe, although of course not a median example. For example, at one point the post says: And if maybe this is the key to why mate-finding and child-rearing and even just feeling sexually safe at a job has been such a royal pain these last many years. (I believe "this" refers to the historical social importance of separation between sexes) For women, feeling sexually safe at a job has only improved since the #metoo movement. This is despite an increase in mixing of gender roles. In fact, separation of gender roles was in many cases a contributor to harassment: for example, if men are managers and women are secretaries, it's easier to normalize sexually harassing secretaries. I think this quote would seem off-putting to most women. There are other aspects of the post that women might find off-putting. At one point, the post suggests that separating men and women in the workplace might have historically been helpful for preventing romantic relations from disrupting productivity. Although this may be true, to most women it might sound like suggesting that hanging by a rope over an active volcano would help you warm up: there's an accompanying thought missing. Most women are viscerally aware of the dangers that result when certain professions or opportunities are harder to access due to their gender. The rest of the post is also worded from the perspective of a straight man describing his experiences, and occasionally speculating on women's experiences or the experiences of gay people. The post tends to generalize the experiences of the author to the experiences of others. Although there's nothing wrong with writing from one's own perspective as a man, it does incrementally move the needle of the website towards seeming like a "dudebro" space for men to discuss common experiences. This makes women less likely to visit. There is an aura of "maleness" to a lot of posts on this website, to varying degrees. I think that some women who are otherwise interested in the site are likely to be turned away by it. It was a contributing factor to my lack of posts until now. What can be done to prevent turning women away? I really don't know. Maybe it is an inevitable product of the relative number of men on the site. I don't think it makes sense to ask people to stop writing about their experiences as men. Despite the "maleness" of  Is Being Sexy For Your Homies, I don't wish it wasn't written. Maybe it would help to have more women reviewing people's posts, so that they could comment on parts of the posts which seemed likely to turn women away. Maybe it would help to have some more posts emphasized which are clearly written from the perspective of someone other than a man. These might help counterbalance the posts which are written from a man's perspective. I do see a few posts per year which seem to be primarily and obviously not from a man's perspective, but not many.
2025-01-08
https://www.lesswrong.com/posts/6Fo8fjvpL7pwCTz3t/on-eating-the-sun
6Fo8fjvpL7pwCTz3t
On Eating the Sun
jessica.liu.taylor
The Sun is the most nutritious thing that's reasonably close. It's only 8 light-minutes away, yet contains the vast majority of mass within 4 light-years of the Earth. The next-nearest star, Proxima Centauri, is about 4.25 light-years away. By "nutritious", I mean it has a lot of what's needed for making computers: mass-energy. In "Ultimate physical limits to computation", Seth Lloyd imagines an "ultimate laptop" which is the theoretically best computer that is 1 kilogram of mass, contained in 1 liter. He notes a limit to calculations per second that is proportional to the energy of the computer, which is mostly locked in its mass (E = mc²). Such an energy-proportional limit applies to memory too. Energy need not be expended quickly in the course of calculation, due to reversible computing. So, you need energy to make computers out of (much more than you need energy to power them). And, within 4 light-years, the Sun is where almost all of that energy is. Of course, we don't have the technology to eat the Sun, so it isn't really our decision to make. But, when will someone or something be making this decision? Artificial intelligence that is sufficiently advanced could do everything a human could do, better and faster. If humans could eventually design machines that eat the Sun, then sufficiently advanced AI could do so faster. There is some disagreement about "takeoff speeds", that is, the time between when AI is about as intelligent as humans, to when it is far far more intelligent. My argument is that, when AI is far far more intelligent than humans, it will understand the Sun as the most nutritious entity that is within 4 light-years, and eat it within a short time frame. It really is convergently instrumental to eat the Sun, in the sense of repurposing at least 50% its mass-energy to make machines including computers and their supporting infrastructure ("computronium"), fuel and energy sources, and so on. I acknowledge that some readers may think the Sun will never be eaten. Perhaps it sounds like sci-fi to them. Here, I will argue that Sun-eating is probable within the next 10,000 years. Technological development has a ratchet effect: good technologies get invented, but usually don't get lost, unless they weren't very important/valuable (compared to other available technologies). Empirically, the rate of discovery seems to be increasing. To the extent pre-humans even had technology, it was developed a lot more slowly. Technology seems to be advancing a lot faster in the last 1000 years than it was from 5000 BC to 4000 BC. Part of the reason for the change in rate is that technologies build on other technologies; for example, the technology of computers allows discovery of other technologies through computational modeling. So, we are probably approaching a stage where technology develops very quickly. Eventually, the rate of technology development will go down, due to depletion of low-hanging fruit. But before then, in the regime where technology is developing very rapidly, it will be both feasible and instrumentally important to run more computations, quickly. Computation is needed to research technologies, among other uses. Running sufficiently difficult computations requires eating the Sun, and will be feasible at some technology level, which itself probably doesn't require eating the Sun (eating the Earth probably provides more than enough energy to have enough computational power to figure out the technology to eat the Sun). Let's further examine the motive for creating many machines, including computers, quickly. Roughly, we can consider two different regimes of fast technology development: coordinated and uncoordinated. A coordinated regime will act like a single agent (or "singleton"), even if it's composed of multiple agents. This regime would do some kind of long-termist optimization (in this setting, even a few years is pretty long-term). Of course, it would want to discover technology quickly, all else being equal (due to astronomical waste considerations). But it might be somewhat "environmentalist" in terms of avoiding making hard-to-reverse decisions, like expending a lot of energy. I still think it would eat the Sun, on the basis that it can later convert these machines to other machines, if desired (it has access to many technologies, after all). In an uncoordinated regime, multiple agents compete for resources and control. Broadly, having more machines (including computers) and more technology grants a competitive advantage. That is a strong incentive to turn the Sun into machines and develop technologies quickly. Perhaps an uncoordinated regime can transition to a coordinated one, as either there is a single victor, or the most competitive players start coordinating. This concludes the argument that the Sun will be largely eaten in the next 10,000 years. It really will be a major event in the history of the solar system. Usually, not much happens to the Sun in 10,000 years. And I really think I'm being conservative in saying 10,000. This would in typical discourse be considered "very long ASI timelines", under the assumption that ASI eats the Sun within a few years. Thinking about the timing of Sun-eating seems more well-defined, and potentially precise, than thinking about the timeline of "human-level AGI" or "ASI". These days, it's hard to know what people mean by AGI. Does "AGI" mean a system that can answer math questions better than the average human? We already have that. Does it mean a system that can generate $100 billion in profit? Obvious legal fiction. Sun-eating tracks a certain stage in AGI capability. Perhaps there are other concrete, material thresholds corresponding to convergent instrumental goals, which track earlier stages. These could provide more specific definitions for AGI-related forecasting.
2025-01-08
https://www.lesswrong.com/posts/FcNgMbabidRSh9aGn/book-review-range-by-david-epstein
FcNgMbabidRSh9aGn
Book review: Range by David Epstein
PatrickDFarley
(This is a crosspost from TrueGeneralist.com) Introduction Why I read this book Range: Why Generalists Triumph in a Specialized World is a book about generalists. And it’s pretty much the only book about generalists that’s gotten substantial traction in popular culture. Since I started this whole blog around being a generalist, Range was kind of required reading for me. Based on the branding, I assume most people who read it suspect themselves to be generalists and are hoping to receive some validation and encouragement toward that identity. I myself went in already knowing that I’m a generalist, not really caring whether the book’s contents validate that, because it’s a core part of myself that I’m not willing (or maybe even able) to change. I was interested to see what positive aspects of generalism the author would highlight because, aside from what I know of my own experience, I don’t really see those highlights in popular culture. TL;DR Most of the content in this book is vignettes of generalists throughout recent history—what they did and how they achieved greater things than their specialist peers. It also contains a lot of informal reports on studies and surveys that show the traits of generalism succeeding in different contexts. The book’s main conclusions are that 1) people with range, especially early in their careers, do better in the long run, and 2) organizations with range in their constituents also do better. Yeah, the conclusions are pretty predictable. You know what all the studies and anecdotes are going to show, and unfortunately it can get pretty dry as there are a lot of studies and anecdotes. Ironically, I’d say David Epstein approached this subject from a single angle, and it would’ve benefited from a more varied approach. I would’ve liked to see some analysis on the psychology of generalists, how generalism was viewed throughout history (why it was idealized in some periods and not others), and at least some lip service toward the pitfalls of generalism. Anyway, don’t worry about the dullness. There are a handful of very compelling ideas in this book, which I’ll present below in their original contexts, with a bit of extra analysis and some connections to other useful ideas I’ve found elsewhere. Context of the work This book, published by pop science journalist David Epstein in 2019, inherited the world of what I’d call the Neat Tricks Backed By Science books. Or the Stephen-Pinker-Adjacent books. In different places, Epstein mentions and gives direct responses to the works Thinking Fast and Slow by Daniel Kahneman, Grit by Angela Duckworth, and Battle Hymn of the Tiger Mother by Amy Chua. In terms of style and structure and the way it justifies its claims, Range very much belongs in that genre. It’s interesting, though, to see a work of this kind explicitly go against the other works. The Neat Trick genre is supposed to overturn old ways of thinking, not other new ways. For Angela Duckworth saying, “IQ doesn’t matter as much as you think; grit matters more,” Epstein then says, “Grit doesn’t matter as much as you think, match-quality decisions matter more.” In my view, his going against contemporary thinkers, when he really didn’t need to mention them at all, demonstrates a bit of extra conviction. Is it actually true? With books like this, I’m more skeptical than I think the target reader is. I can imagine generalist-curious people eagerly interpreting it as confirmation that their constant task-switching is a good trait and not just a low attention span or the fear of failure. I wouldn’t say I’m overwhelmed with the evidence, because even though there’s an overwhelming volume of it, this kind of content is not the most reliable form of evidence. All the anecdotes about great generalists, well, they’re anecdotes. But they’re inspiring, and they do move the needle a bit, mainly by showing that a lot of generalists are “hiding in plain sight.” The studies and surveys, well, I’ve seen enough non-replicating studies to take all of this with a lot of salt. There are no meta-analyses here, so you just have to hope that the studies were done well and without any bias toward the “generalists are better” conclusion. The same goes for Epstein’s curation of the studies. If you want a quick benchmark for “How reliable are these claims,” I’d look at this paper finding R-index values for Thinking Fast and Slow. Like I said, Range is in the same genre as TFaS, written for the same audience, so I’d expect the same level of epistemic rigor (that is, not stellar). The major updates I personally made are something like: A lot more top achievers are generalists than you might think, because their generalism often doesn’t make it into the biographical content that you’d see at a glance.Specialists today are more specialist than at any other time in history, and the pitfalls of specialism (to summarize: narrow thinking) are more pronounced. This makes generalists more valuable as a way to counter that.The kinds of problems that lend themselves to specialization are often the less interesting problems. (Incidentally, they are also most easily solved by AI, which is something that problem-solvers might want to consider in the modern age). Now let’s look at the fun parts. Kind and wicked Epstein notes that some activities, like chess and golf, lend themselves to specialization, and so the people at the top of these fields are usually people who started at an early age and focused on little else. Epstein surfaces a conclusion from Kahneman’s psychology research: the difference between kind learning environments (those with a limited, defined set of parameters, where patterns repeat and feedback is immediate) and wicked learning environments (where the rules are unclear or incomplete, feedback is delayed or inaccurate, and patterns aren’t obvious). The kind:wicked dichotomy sort of matches finite:infinite games. It’s not a perfect match—a wicked learning environment can still be a finite game, like “Double your company’s monthly active users, somehow.” But infinite games have a lot of the characteristics that wicked environments usually have: a wide array of choices, unpredictable outcomes, and no well defined end state to work backwards from. Kind environments lend themselves to what Epstein calls “chunking”—this is really just “data compression” or “relying on categories” or “using broad heuristics.” The game of chess is a kind learning environment, where the mind can learn to identify patterns that repeat often. Chess grandmasters can look at a given game-state for a split second and then replicate it on the board in front of them, but when the pieces are arranged into an impossible game-state, they can’t do it. This demonstrates that it’s not raw memory that sets chess masters apart, but the efficient compression of data that is familiar to them. Now, Epstein mentions that AI systems are very good at performing in kind learning environments. I really like how he drew a timeline of AI performance progress—chess, Jeopardy, Starcraft, driving, statistical analysis—and then pointed out that this is also a spectrum from kind to wicked learning environments. Chess was solved by AI decades ago, Starcraft only recently, and driving is still in progress. The larger and more varied the game, the more sophisticated the systems need to be to navigate it. What I also like is that Epstein avoids saying, “AI can only win in kind learning environments,” which would’ve been more tempting to say in 2019 than it is now. He makes no claim that AI progress will halt somewhere, but only points out that wicked learning environments are taking longer, as a way to show that they’re qualitatively different. Shots fired I promised some beef with those other pop science books, so I’ll relay that here. These weren’t the most compelling chapters for me, but I’ll outline Epstein’s takes on two of the big ideas that have been floating around in this idea ecosystem. Against tiger mothering Apparently, early specialization in musical practice and performance was a major part of Amy Chua’s parenting. If it wasn’t, well, this is the only angle from which Epstein opposes the whole Tiger Mom method. He brings up a list of prolific musicians who all spent significant time in a state of “musical generalism,” as opposed to the very early, highly structured training that Chua put her daughters through. He points to the figlie del coro, a group of female musicians in 18th Century Venice. They were possibly the greatest musicians the Western world had ever seen at the time, and they each played three to five different instruments. They didn’t have extensive practice regimens—they lived simple lives and spent a lot of time doing chores in the orphanage they belonged to. What set them apart? Epstein says it was their wide range of musical experience, and also the opportunity to test and figure out which instrument was truly the best fit. He goes on to highlight the importance of a “sampling period” in the lives of other top musicians. And he points to a plethora of jazz musicians who never took formal lessons and instead learned just by being around music all the time, picking it up through osmosis and developing it in unstructured play. Amy Chua’s daughter did become a violin prodigy, Epstein concedes, but as an adult she has since quit playing. He also suggests, perhaps controversially, that drilling and performing classical music to precision is missing something essential about music as a human experience. “Jazz is creative, classical is re-creative.” I’m inclined to agree, but it’s hard to build much of an objective argument on a foundation of, “This genre of music is more important than that other one.” Against grit I shouldn’t say Epstein is “against grit,” but he’s against elevating grit as an unqualified virtue and using it as the primary predictor of success. This view seems to be backed by some good data. Angela Duckworth developed Grit while observing the US Military’s Academy’s challenging “Beast Barracks” training program. This new attribute, grit, a combination of “work ethic and resilience” and “consistency of interests,” ended up being the best predictor of which individuals would complete the training program—better than the academy’s Whole Candidate Score, the metric that had previously been used to select cadets. But, fast-forward a few years, and that military academy was suffering a shortage of officers—cadets who had previously passed the training kept leaving the military for other careers. Did they lose their grit? No, Epstein says, it’s just that they were finally empowered to make “match-quality decisions” by switching to other career paths. The military had no talent-matching mechanism, while corporate America was investing heavily in talent-matching. You can’t expect people to never quit anything, he reasons. The academy developed the Officer Career Satisfaction Program as a way to give officers more work-style choices within the military. It was effective at retaining officers—more effective than offering them more money (though it’s unclear how much money was offered). To sum up Epstein’s finding: a singular focus on grit will select candidates who are resilient and whose interests are aligned right now. But it can’t promise ongoing future success, because you can’t predict how their interests will change in the future. That ties in with a theme later in the book: the end-of-history illusion, how people systematically underestimate how much they’ll change in the future. You need to build flexibility into people’s careers if you want to ensure they keep showing up over the long haul. Perfect grit is an unrealistic solution. Analogical thinking In the chapter called “Thinking Outside Experience,” the subject is astronomer Johannes Kepler, struggling to understand the complex motion of the planets in the night sky. His approach was interesting to me: he’d first search for an analogy that could qualitatively match his observations, and once that fit, he’d pin down the math later. He tried many analogies: Are the planets sliding on transparent spheres? Are they like floating objects in a whirlpool? Does the sun’s light “pull” on everything it illuminates? They were all incorrect, but in subtler and subtler ways. This was how Kepler eventually conceptualized the idea of cosmic gravity. What follows in this chapter are some lessons about how thinking analogically helps people solve hard problems. There’s a story (riddle?) about a radiation beam that can kill cancer cells at a certain intensity, but it also kills healthy cells at that intensity. So how can we use it? Epstein then prompts you with a story: several groups of soldiers, each small enough to travel without raising alarm, take many roads to converge on a city from all directions, where they are then numerous enough to conquer it. Then I curse to myself for not seeing the radiation-beam solution beforehand, and concede that yes analogical thinking is an effective tool. This surprised me. I think analogies are… out of style, nowadays? In popular culture and popular science, analogies are mostly used to dumb down a known, precise answer, so novices can grasp it. How many times have you heard, “Spacetime is like a trampoline with a bowling ball on it,” with no attempt to address the obvious, “Yeah, but the trampoline only behaves that way because of gravity and isn’t gravity the thing we’re trying to analogize here?” So it was interesting to see that paradigm inverted: you can start with analogies as a way to conceptually grasp a situation, which will then help you arrive at the accurate mathematical model more quickly (or at all). Trapped priors “Trapped priors” is a concept that’s basically a more under-the-hood description of “confirmation bias.” It’s the unofficial theme of the chapter “Fooled by Expertise,” which describes how specialists can think too narrowly and make really bad predictions as a result. Paul R. Ehrlich, a butterfly ecology specialist, said in 1968, “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now. At this late date nothing can prevent a substantial increase in the world death rate.” That, of course, didn’t happen. And so we learned that human population growth is more complicated to model than butterflies fighting over flowers, or whatever. A surprising detail that again shows a bit of extra rigor from Epstein: economist Julian Simon, the foil of Ehrlich who said something like, “More humans will mean more ingenuity, and we’ll figure out a way to sustain ourselves,” is also pinned as a specialist with trapped priors. Why? To operationalize his bet against Ehrlich, Simon proposed using the price of metals as a proxy for mass human flourishing. I don’t fully understand his reasoning, but the thrust of this story is that he measured human flourishing in a way that naively made sense from a narrow economics point of view. Later on, other experts basically said, “That wasn’t a good proxy, you actually won the bet by pure luck.” Simon never amended his claims. Then we get an introduction to political scientist Philip Tetlock, his search for “superforecasters,” and the Good Judgment Project. His work started with him asking political scientists during the Cold War to make probabilistic predictions of events in the coming years (and finding that they were bad at it). Tetlock called them “hedgehogs” for their narrow, on-the-ground viewpoints. Hedgehogs are bad at making predictions, but good at spinning narratives that protect their worldviews in the face of new evidence. The Good Judgment Project came about when Tetlock started asking non-experts to make the same predictions about the Cold War. Some were better and some were worse, but of course the real value came in identifying and analyzing those better—the “superforecasters.” Superforecasters were found to have the following traits: numeracy (the ability to think and express oneself in terms of numbers), a range of experiences, genuine curiosity about many things, polite disagreement, a desire to falsify their own beliefs, and a willingness to change positions often. I didn’t immediately see why generalists would be more likely to be superforecasters, but Epstein raises the idea of “science curiosity, as opposed to science knowledge.” People who have an abundance of genuine curiosity are perhaps more likely to be generalists, and also more likely to hold their beliefs loosely in the way that superforecasters do. Trapped epistemologies The chapter that immediately follows, called “Learning to Drop Your Familiar Tools,” describes a different kind of “trapped priors.” Not just individual beliefs, but whole ways of knowing, can become trapped by too much narrow, self-reinforcing experience. Epstein describes the US Challenger disaster. Key decision makers gravely misunderstood what a “factor of safety” meant, they decided to launch the shuttle despite some warning signs on their equipment, the shuttle was destroyed, and then famous physicist Richard Feynman yelled at everyone. But Epstein reveals an angle that I’d missed the first time I’d heard this story. Part of the reason NASA went ahead with the launch despite “warning signs” was that they had a systematically data-driven culture, and the warning signs were not quantified in any data. “The O-rings did something weird in the test, and although the equipment didn’t fail the test, it was unexpected. What if they do something unexpected again, but worse?” That kind of objection ran up against a culture of, “Nice idea, come back to us when you’ve got data.” You can imagine how well a data-only culture had worked for NASA, and how much waste it had successfully dismissed. But it blinded them to a black-swan-type threat. A quote from Feynman: “When you don’t have any data, you have to use reason.” That pesky “reason,” refusing to be mapped to any well-defined process! Before these eleven virtues is a virtue which is nameless. Expanding on the theme of “specialists being too hesitant to ditch their preferred tools,” Epstein describes how, in deadly forest fires, the bodies of firefighters have often been found still clutching their unwieldy chainsaws. Navy personnel have drowned, still wearing their steel-toed boots. Fighter pilots have refused orders to eject. The specialist’s tool becomes a part of their body, such that, in the heat of the moment, they don’t consider tearing it off. Likewise their mental tool becomes a part of their mind. The freshest, most surprising example of a trapped tool is organizational congruence. It’s completely taken for granted in modern workplace culture that the company should have some core values that are adhered to down the whole chain of command, at every level. Amazon has its leadership principles, Microsoft has its company values, and so on. Epstein claims this has gone too far and that the optimal amount of “workplace ambiguity” is more than zero. He warns of professionals getting too attached to the process, asking, “Does this match the official company values?” but never, “Do I predict this will actually work?” Some amount of incongruence forces professionals to develop agency, Epstein argues, and individual agency must be balanced with group compliance, not replaced by it. Now, it’s not exactly a bold claim to say, “The bureaucracy of large organizations slows things down and stifles creativity,” but I never considered that even the act of writing out company-wide values starts that process. It gives people a template to follow in all their decisions—a template other than “will this work.” The overspecialization of science In Current Year, you have probably already heard of this problem. Every modern scientist is doing something like “isolate this one particular protein in this one particular strain of yeast,” and virtually no one is researching “How to extend human life” or “How to control the weather.” I exaggerate. But it’s evident, if you ask around, that most scientists today are working on extremely narrow questions. Are they just really passionate about that one yeast protein? Epstein doesn’t think so. He recalls sitting in on a grant hearing by a Senate subcommittee on science and space research, observing that only the proposals with an immediate and tangible application were considered. He reflects on the fact that so many of the most applicable breakthroughs in science came about as the result of non-directed experimentation, such as Oliver Smithies discovering gel electrophoresis by playing around with starch paper, and Tu Youyou discovering an effective malaria treatment chemical from a traditional herbal remedy. He quotes biologist Yoshinori Ohsumi, “Scientists are increasingly required to provide evidence of immediate and tangible applications for their work. They’re expected to say what they’ll find before they look for it.” Obviously, Epstein’s solution is to grant more money to open-ended research and experimentation. It’s hard for me to ignore the other side of the equation, though: how does a government agency justify, to its taxpayers, the funding of research that has no real indication of paying off? It’s a conundrum, because plenty of useful research will start off that way, but so will most useless research. I’m reminded that so many great scientists in history have completely avoided this question by being more like “self-funded hobbyists playing around in their basement” than “straight-A students who are great at winning grants.” Maybe in the recent past this problem was easier, because we could all coordinate around broad national goals like “send man to the moon,” so any free experimentation vaguely related to that effort got funded. Nowadays, nobody agrees on the broad goals, much less which sub-goals to fund, so the only socially permissible things to fund are projects that have a very quick, profitable turnaround. Or maybe it’s a bunch of other reasons. Widely cited microbiologist and “generalist scientist” Arturo Casadevall has a slightly different view of things (which Epstein seems to want to merge with his own). Casadevall’s emphasis is not on funding dynamics forcing specialization, but rather on specialization in schools producing bad scientists. He raises a claim that kids these days (paraphrasing) are taught a list of scientific facts but not how to think scientifically, and that this is extra silly because we now have smartphones that can look up all the facts anyway. Casadevall founded the R3 initiative to try to reintroduce scientific thinking and a philosophy of science to the research industry. I’m mostly happy to pile on and say, “Scientists today don’t understand scientific thinking,” but I’m also suspicious of how good it feels to pile onto that particular pile, and how hard it is to verify the actual claim. Hidden generalists? Are most high-achievers actually natural generalists who specialized late in their careers? Epstein hints so. I’m not convinced it’s most, but definitely more than I thought. Why? For one thing, a bunch of the generalist figures that he highlights in this book are people I would’ve guessed were lifelong specialists. So there’s at least a slight update I need to make to my expectations. Secondly, there are all those studies he cites which show some variation of, “The majority of top Xers started in fields other than X.” As I said, I’m suspicious of both poor study quality and selection effects, but nevertheless, it’s such a high volume of studies that I’m willing to believe a handful of them are true (similar to the studies in Thinking Fast and Slow). But thirdly, there’s a qualitative argument that I really like. He notes that when someone’s story is told at the end of their successful career, it’s just A-to-B, because we know what A and B are, and it makes for a more coherent story that way. The story isn’t being told when it’s in the middle, when the person is just hopping around between interests, having yet to make that winning connection or excel at that final project. I’m reminded of a tricky dynamic in financial investing. When you look at the price chart of an asset, you can draw a line between, say, the price 30 years ago and the current price (let’s assume it’s higher now). And then you can come up with a valuation metric in the form of CurrentPrice/some-other-value, and it will look like a predictive metric for future returns, regardless of what the other value is. The metric, in retrospect, will appear to be mean-reverting, correctly predicting higher returns when it dips and lower returns when it spikes. But that’s because the “future returns” are, by definition, FuturePrice/CurrentPrice. So CurrentPrice is present in both sides of the equation, and that is the reason for the apparent correlation. I’m paraphrasing the content in Valuation and Stock Market Returns: Adventures in Curve Fitting; check that out if you want an explanation that’s not so rushed. The general point is that anything can look mean-reverting if it’s noticed in hindsight and scaled to past data. And I think the same goes for popular narratives: Roger Federer is the greatest tennis player? Ah, let’s look at his past. Here’s the point where he first started playing tennis—that’s important. Look at this friend of his who got him into tennis, he was an important influence on him. And Roger played soccer in high school? Ah, but then he mean-reverted back to tennis, didn’t he? Soccer just didn’t suit him. And boom, you’ve got a biography of a tennis specialist. But in reality, Roger Federer played just about every sport growing up. His mother was a tennis coach but she didn’t train him, and he was bad at tennis, worse than the other sports, when he first tried to learn. But there’s no reason to acknowledge that information when you’re talking about Roger Federer, because you’re probably talking about his tennis. So I guess the point here is to be aware of the memetic effects—what makes an idea attractive. Specialist narratives are memetically more attractive than generalist narratives, all else being equal, so we should adjust our thinking accordingly. Conclusion Although I didn’t love the style of Range and would’ve liked to see fewer, smaller claims argued more strongly, I appreciate David Epstein taking on this project and bringing lots of compelling content into a topic that’s lacking it. I am personally trying to overcome that memetic disadvantage myself, by writing content about generalists on TrueGeneralist.com.
2025-01-08
https://www.lesswrong.com/posts/jCLKoxoYkabQM57nz/can-we-have-epiphanies-and-eureka-moments-more-frequently
jCLKoxoYkabQM57nz
Can we have Epiphanies and Eureka moments more frequently?
CstineSublime
Epiphanies and Understanding, Pain, Eureka-Moments. TL;DR - mental-pain rushing in conveys urgent information, this experience is similar to when understanding rushes in when it is pleasurable - such as the sudden feeling of understanding of a philosophical idea or a joke. Can we reverse engineer these mechanisms to have Eureka-Moments more frequently? Could I Eureka-that faster? (Mental) Pain is (urgent) information Recently I've been thinking about a throw away statement by Jerry Seinfeld that pain is the "sudden rushing in of information". I have my theories about where this sentiment originated from because Ray Dalio says "pain is the signal". Pain is of course a source of information: when you stub your toe in the middle of the night, that is the immediate intrusion of information about your and objects positions in space. When you cut yourself or feel a muscle pain - that too is information: urgent information that requires immediate action. But in the sense that Seinfeld and Dalio are using "pain" is figurative to describe what we might call psychic or mental pain: embarrassment. depression. regret. anguish. anger. etc. But also ugh fields. Chronic pain is information, but it is not sudden or immediate information. The feelings associated with depression too is non-urgent or non-immediate but still information. Another sudden and painful psychic experience is the L'esprit de l'escalier or (lit. the spirit of the staircase). Which describes how only after the fact, usually after an insult, we have that forehead slapping moment of insight of what we should have said but didn't think of in the moment. The most famous pop-culture example, to relate it back to Mr. Seinfeld is the "Comeback" or "Jerkstore" episode of his sitcom where he goes to the effort of trying to engineer the same situation again so he can lob his response. Is pain to information what the epiphany is to understanding? This is probably the same mechanism as the A-Ha, or the Eurkea or Lightbublb moment. Such as when Kekulé awoke from his dream and could envision the structure of the Benzene molecule, or Archimedes having witnessed the effect of his body on displacing water found the solution to the measuring of the crown. There are methods to cultivate sudden insight.... Daniel Dennett described a rhetorical device which is designed for creating these moments - the Intuition Pump. It too is not the sudden rushing in of information as it relies on leveraging things the reader already understands as a way of making them have the feeling that they should have always understood a philosophical idea. Examples Dennett gives range from the pejorative, he's not a fan of Searle's Chinese Room but considers it a pump for intuition nevertheless. But he also believes that Plato's Cave is a good example of an intuition pump. I often wonder if Paraprosdokian sentences are sort of like micro-Intuition-Pumps  in they are both stylistic devices that cause a sudden rushing in of understanding. A paraprosdokian is a sentence where the meaning appears to change mid-way through, the most famous example is probably Groucho Marx's line: Last night I shot an elephant in my pyjamas, how he got into my pyjamas I'll never know The first time you hear it, you may have a little "a-ha" moment as there is a sudden shifting of your mental image of what the Elephant was doing last night, and who was wearing Groucho's Pyjamas. This is the inversion of L'espirit de l'escalier in that it is not your wit, but the comedians, it should be edifying in that it makes you laugh rather than painful. How do these all work? What makes these significant is that Epiphanies, Eureka moments, A-ha, and Intuition Pumps are perceived as sudden rushes. Most learning is iterative, progressive, marked by trial and error. What is different about Epiphanies and L'espirit de l'escalier that makes them, at the very least appear to be instantaneous? And is that appearance of instantaneous a good representation of what is happening on a cognitive level. One explanation is that it is in fact an iterative and slow process, but it is happening on a subconscious level and the A-ha, the Eureka only happens when it finally passes the threshold into our awareness. This may have something to do with the Elaboration-Likelihood Model of attitude change model whereby information is processes by the “peripheral route” that culminates in the final conclusive attitude change we experience as the lightbulb, eureka moment. There is much more to the model than that, but it is beyond the remit of this article for me to explain it here. I personally suspect that the answer lies in Marvin Minsky’s Frame Theory and can be described with Tversky and Kahneman’s Availability Heuristic. I must admit this is not a fully worked out explanation on my part – but which particular ‘frame’ is activated determines whether we feel we understand it or not. And which frame becomes activated is often determined by what stimuli or thoughts we have had immediately prior. Most people when they hear me discuss the films of Fassbinder think Michael, not his Uncle Reiner Werner – even if they are aware of both. Elsewhere I’ve commented on how even LLMs assume that if I ask a question about the Grand Prix driver “Rosberg” they mean Nico, not his father Keke. ...I don't know, but I'd like to... I would like to expand practically on this theory so that I could think of things faster. But I think firstly it is necessary to determine if the appearance and feeling of the rushing in of insight is actually indicative of how sudden it is; or if the theories of subconscious and peripheral processing beyond the threshold of awareness are true. (that is – am I really having a sudden rush of understanding, or has this been a slow build-up of understanding that I’m only aware of the ice-berg tip of?) If it does turn out that the understanding is instantaneous, then the question becomes what is the actual mechanism that allows us to have sudden understanding in some situations but does not furnish us with it in others, even as the Intuition Pump Rhetorical Device shows: we have the exact same information. I suspect the clue may lie in the paraprosdokian if that is the case. How we may employ that clue – I do not yet know.
2025-01-08
https://www.lesswrong.com/posts/TgBEj22qKQqYuNDLx/job-opening-swe-to-help-improve-grant-making-software
TgBEj22qKQqYuNDLx
Job Opening: SWE to help improve grant-making software
ethan-ashkie-1
Full-Stack Software Engineer Position - (S-Process) Survival and Flourishing Corp. (SFC) is seeking a highly numerate full-stack developer to improve and maintain existing grant evaluation software currently in use by the Survival and Flourishing Fund, making it easier for philanthropists to delegate grant-making discussions and deliberation to trusted advisors. Annual compensation: $250,000 - $350,000 Application here. Qualifications Strong numeracy, including existing familiarity with:CalculusLinear algebra and matrix operationsGame theory, mechanism design, and expected utility theoryExperience or clear skill with:Full-stack web development in Typescript and ReactCommand line scripting and data science in PythonManagement of user accounts, roles, and access controlsDatabase management and ensuring the privacy, consistency, and correctness of sensitive dataOrienting on and iteratively improving a legacy codebaseExperience managing technical projects:Organizing tickets, code changes, design decisions, and documentationHelping a team maintain technical standards, processes, and normsEnsuring the health and quality of a codebase with multiple contributorsCommunicating with leadership about progress, estimates, implementation tradeoffs, and the impact of design decisions Current state of development The software in its current form has been used by several philanthropists to date, including Jaan Tallinn, Blake Borgeson, and the Future of Life Institute, via the S-process of the Survival and Flourishing Fund. We believe it’s time to improve the software and make it into a self-contained app, with secure accounts for philanthropists, advisors, and applicants seeking funding. Why this position exists Over the past few years, the interest of large philanthropic donors in improving the long term future of humanity has drastically increased. SFC wants to develop our process into an easy-to-use app that makes it easier for philanthropists to implement the deliberation process used by SFF, in turn making deliberation about the future and how to improve it more principled, more engaging for groups of stakeholders with divergent views, and more action-oriented. Duties of position This position would work on app development full-time, in collaboration with SFC Co-Owner Andrew Critch, CEO Ethan Ashkie, and Lead Engineer Joel Burget. Activities will include: Implementing and adjusting mathematical formulas for combined decision-making.Prototyping and designing visual user-interfaces for manipulating grant evaluations, utility functions and collective decision-making algorithms.Meeting with Ethan and other team members as needed (4-8 hours a week).Being a team player!  Running any project sometimes involves tasks that are not exciting from an R&D perspective, and on the margin we want team members who are excited to chip in and do whatever is needed to keep things moving for our client(s) (currently Jaan Tallinn). Organizational culture SFC is a small team, working fully remotely, although we sometimes meet in-person in the Bay Area for hack-a-thons and similar sprints. We have a clear vision that we are ready to execute, and are looking for super-talented builders to make it happen. Salary and benefits Annual salary will be between $250,000 and $350,000, plus health benefits, and can be negotiated upward to be competitive with data science and full-stack web development. Work authorization SFC does not have a standard process to apply for visas, but we can hire remote contractors to remain living and working outside the United States. We can also partner with other mission-aligned institutions who may be able to provide a visa to work on this project. Application Apply here.
2025-01-08
https://www.lesswrong.com/posts/8p48kCyn3EvDT6vpe/markov-s-inequality-explained
8p48kCyn3EvDT6vpe
Markov's Inequality Explained
criticalpoints
In my experience, the proofs that you see in probability theory are much shorter than the longer, more involved proofs that you might see in other areas of math (like e.g. analytical number theory). But that doesn't mean that good technique isn't important. In probability theory, there are a set of tools that are useful across a broad variety of situations and you need to be able to recognize when it's the appropriate time to use each tool in your toolkit. One of the most useful of tools to have is Markov's inequality. What Markov's inequality says is that, given a non-negative random variable Xand a positive real number a, the probability that X is greater than a can be upperbounded in the following manner: P(X≥a)≤E[X]a This is a pretty simple formula. You might have even been able to guess it beforehand using dimension analysis. Probabilities are often thought of as dimensionless quantities. The expectation E[X] and the threshold a both have units of "length", so we would need them to exactly cancel out to create a dimensionless quantity. We naturally expect that the probability of exceeding the threshold to (a) increase as the expectation of X increases and (b) decrease as the value of the threshold a increases. These sorts of arguments alone would point you in the direction of a formula of the form: P(X≥a)≤(E(X)a)k Why does k equal 1 in the case of Markov's inequality? There is probably some elegant scaling argument that uniquely pins down the power k, but we won't dwell on that detail. Markov's inequality might not seem that powerful, but it is. For two reasons: One reason is that Markov's inequality is a "tail bound" (a statement about the behavior of the random variable far away from its center) that makes no reference to the variance. This has both its strengths and its weaknesses. The strength is that there are many situations where we know the mean of a random variable but not its variance, so Markov's inequality is the only tool in our repertoire that will allow us to control the tail behavior of the variable. The weakness is that because it assumes nothing about the variance (or any higher moments), Markov's inequality is not a very tight bound. If you also know the variance of your random variable, you can do much better (though you will need more heavy-duty techniques like moment-generating functions). The other reason why Markov's inequality is powerful is that, in a sense, many properties of random variables are just statements about the mean (the expectation) of some other random variable. For example, the variance of a random variable X is nothing more than the expectation of the random variable Y=(X−μ)2 where μ is the mean of X. Let's say we know that the variance of X is σ2. Because Y is a non-negative random variable, it follows that √Y=|X−μ| is well-defined and also a non-negative random variable. We can then show that: P(|X−μ|≥kσ)=P(√Y≥kσ)=P(Y≥k2σ2)≤E(Y)k2σ2=1k2 This relationship is special enough that it has its own name: Chebyshev's inequality. But it's just Markov's inequality in disguise! Let's do one more. We can define the parametrized set of random variables Z(t)=etX. The parametrized set of expectations M(t)=E[Z(t)] has a name called the moment-generating function (because if you take derivatives of M(t) with respect to t and evaluate at t=0, you get the moments of the original random variable X). We then have that: P(X≥a)=P(etX≥eta)≤E[etX]eta=M(t)e−ta Hey, that's the Chernoff bound! Despite being fairly simple as math goes, it took a couple of exposures for Markov's inequality to properly stick with me. To see why, let's walk through the proof which conceptually involves two steps. Let's say we have a threshold a. We want to divide the probability distribution of X into two regions: x<a and x≥a. If we take all of the probability mass for values of x larger than a and smush it at a, then the new distribution will have an expectation which is less than or equal to the expectation of X. Similarly, if we take the probability mass for all points x<a and smush it to x=0, then the new distribution will also have an expectation less than or equal to the previous one. What is the expectation of this final distribution? There would be two contributions to the expectation: the probability mass at x=0 multiplied by 0 and the probability mass at x=a multiplied by a. The first term vanishes so we are left with an overall expectation P(X≥a)⋅a. We can then compare this expectation to the expectation of our original distribution and then divide by a to get Markov's inequality: P(X≥a)⋅a≤E[X]⟶P(X≥a)≤E[X]a Well this is a simple proof. Where did I struggle before? An important thing to keep in mind is that Markov's inequality only holds for non-negative random variables. The first part of the proof is easy for me to remember because a sticks out to me as an obvious threshold to smush probability mass to. But the second part where we smush all the mass to 0 used to elude me because I didn't keep in mind that the non-negativity of X makes 0 a kind of threshold as well.
2025-01-08
https://www.lesswrong.com/posts/8pWn8oCMsBKb95Wje/stream-entry
8pWn8oCMsBKb95Wje
Stream Entry
lsusr
"Is this…enlightenment?" I asked. "We use the word 'Awakening'," said the Zen Master, kindly. "Now what?" I asked, "Every book I've read about meditation is about getting to this point. There's nothing about how to safely navigate the territory afterward—or what to do afterward—or even what happens afterward." "The first thing you need to know is that what you experienced was real," the Zen Master spoke from personal experience and that of his many previous students, "Don't make it too complicated. Just be in the moment. If you're cooking, then just cook. If you're eating, then just eat." It is impossible to communicate the subjective experience of Stream Entry to someone who has not personally experienced at least kensho. However, it is possible to communicate an outside mechanistic model of what happens in your brain. First I will explain how a human brain functions normally. Then I will explain how a post-Awakening brain differs.[1] Your brain creates a real-time simulation of you and world around you. Everything you consciously perceive is part of this model. There are no exceptions. How does this produce useful behavior? Because the brain enforces a 1-to-1 correspondence between the simulation and reality (where "reality" is defined as sensory inputs). If this 1-to-1 correspondence is adequately maintained, then performing an act in the simulation happens simultaneously as that same action happening in external physical reality. That's why if you die in the Matrix then you die in real life too. If a brain just modeled the environment without performing any actions, then the whole model could exist as a non-cyclic graph. But a brain doesn't just model the world. It sends commands to muscles that act on the physical universe it's trying to model. What happens when a brain tries to anticipate its own motor commands? Now the brain's own algorithms are a causal force in the external environment it's trying to model. The result is computationally intractable. In response, the brain divides its model of the universe into "things under its motor control" and "things not under its motor control". Those parts of the universe it can control coalesce into "self" and the rest coalesces into "other". Other calculations involving self-reference become encapsulated too. This produces high-level abstractions including pain, pleasure and desire. These encapsulations feel like part of the self, because they involve self-reference, but they also feel like objective external truth, because they are not perceived as being under causal motor control. There are many types of meditation[2], but most of them involve sitting still and all of them involve not taking self-motivated willful action on the external physical world. In other words, meditation is when you stop doing things that cause problematic self-reference in your world model. When you do this, all of those abstractions created to contain problematic self-reference become unnecessary. Desire, pain, pleasure, self and other—they all no longer serve a purpose. Stop using them long enough, and your brain takes them apart, like disassembling a Lego set. When you start meditating and sit in the full lotus position, your legs fall asleep and you feel a pain in them. After you've been meditating for years and sit in the full lotus position, your legs fall asleep and you feel the vibratory sensory inputs directly, without the encapsulation layer called "pain". The concept of "pain" is no longer a meaningful abstraction in your world model. Talking about pain to a yogi is like talking about "God" to an ex-Mormon. Lots of models get dissolved. Prior to meditating, I felt like I was a small person walking around a region of Cartesian Space containing Planet Earth. Now I feel like a local region of local space is being created and destroyed around me like how the Minecraft game engine loads and unloads blocks. This happened because I got a direct enough perception of my own conscious reality that an illusion shattered and my perception shifted permanently. It was like going out-of-bounds in a videogame speedrun. You can never unsee the behind-the-curtain mechanics of how that reality is constructed. But what really surprised me is what happened over a year later. I'm autistic. Most of my life I've had trouble understanding other people and guessing how they want me to behave. In my late teens/early 20s, I studied history, anthropology, psychology and economics. This helped a bit. Then a couple years ago I hit Stream Entry (aka Awakening) when I observed a bunch of my own mental processes that had been hidden from me. The internal workings of these parts of my mind had formerly been encapsulated away from my self-perception. But now I saw what they were doing. That reduced my subjective experience of suffering. It also allowed me to infer these same processes in other people. Suddenly other people weren't confusing anymore! I could predict how someone's ego would react to the things I said before I said them. This was useful because I could finally conduct probing psychological experiments on my friends. I had always wanted to know what people were thinking, but whenever I tried to ask, people would get hurt and defensive. Now I knew how to not trigger those defense mechanisms. That meant I could run a series of experiments where I sat my good friends down and tested how their minds worked. Not only did they not feel attacked, they usually felt that they'd been deeply heard, as if they'd been to a therapist. I could call a friend an idiot to his face and he'd shrug and go "yeah". That solved one problem, but I still had my original autistic sensory disorder. Meditation was making it worse. Meditation makes you extra sensitive to sensory stimuli. Autism (I think) is caused by your sensory inputs having too much gain, in the audio signal-processing sense of the word "gain". This continued for months. I bought the most comfortable clothes I could find. I installed blackout blinds over my windows. I bought a pair of Bose noise-cancelling headphones and wore them everywhere. I wore them so much I wore them out and bought a second pair. I wore earplugs to ballroom salsa dance. Then the wave crested. A friend invited me to his birthday party with loud music where we played my first game of beer pong. I had to take a breather outside once or twice from the noise and chaos. But I felt…normal…for the first time in my life. I knew what the social expectations were and I followed them. This was different from my previous attempts to behave like a normal human being because I wasn't doing it on a cognitive level. It was just intuitive. My autistic sensory disorder was no longer overwhelming my social intuition instincts. Since then, my sensory issues have gone into in freefall. I think I've had a successful round of meditation-instigated neural annealing. I went to a bar for New Year and didn't have any sensory issues at all. You know that stuff Dale Carnegie teaches about listening to people? It's easy after ego-death. I was told I had good vibes. Someone tried to buy me a beer. I lost track of how many people hugged me just because I had listened to them. I felt like a Bene Gesserit trainee. This is my personal theory. Much of it is stolen from the excellent work of Steven Byrnes. I hope I am stating my claims precisely enough so that when the scientific establishment catches up, it can confirm or falsify my statements. ↩︎ This includes the kind of prayer where you quietly listen to the thing called God. ↩︎
2025-01-07
https://www.lesswrong.com/posts/X6soKvPmCJ2fkw5zY/don-t-fall-for-ontology-pyramid-schemes
X6soKvPmCJ2fkw5zY
Don't fall for ontology pyramid schemes
Lorec
Pyramid schemes work by ambiguating between selling a product, and selling shares in the profit from the sale of that product. It's a kind of sleight of hand that saves the shills from having to explicitly say "our company is valuable because other people think it's valuable", which might otherwise be too nakedly dishonest to work. Shills can pretend they're offering marks a profitable sales job for a company that makes a product so wonderful the sheep can't resist it, marks buy in and pretend to be buying product to be used when really they're basing their high bids on the value at which they hope the product can be sold, and thus a few scammers at the top can make the value of share in their enterprise appear to inflate arbitrarily. A similar pattern of exploitation occurs in the market for ontologies, although without legible prices or transaction. If beliefs don't pay rent, they shouldn't be allowed to linger; the categories were made for man, not man for the categories. Just as you should decide which products to purchase based on how much value you expect to get out of using them directly, you should decide whether that big blue floppy thingy over there is a 'beast' or a 'fish' [or something else entirely] based on how you expect this to assist your ability to think about reality and draw true conclusions. By default, when Jane Doe says "There are 4 components of personality - yellow bile, black bile, blood, and phlegm, and here's what they do" [or, alternatively, when Yamada Hanako says "there are 4 personality types corresponding to the 4 blood types, and here's what they are"], you will take her assertion and check it against your intuition, and possibly even go to the trouble of searching for concrete examples [ . . . and then counterexamples ] in your remembered experience. And then you'll decide to accept it or reject it, as a rent-paying concept structure. Realistically, just because you run on human psychology, your decision will have a lot to do with how you feel about this particular Jane Doe. But what won't matter is how much corroborating evidence she can bring in to support her lovely categorization schema, from within a community of theorists that [unlike you] already accepts it. You know movements based on mistakes of fact build up intricate internal systems of theory all the time. Hanako can show you pages and pages of impressive-looking diagrams, but if it doesn't seem to you like there are in fact 4 types of people, all having such-and-such traits based on so-and-so blood type, you'll shrug and say "Sorry, but that all just looks like hokum to me" [because it probably is]. There exists, however, a species of ontology evangelist which attempts to use the weight and magnificence of their community's pre-existing structures of theory to cow or dazzle people into buying in, and they often succeed. This is the equivalent of a physical-goods pyramid scheme: marks who buy an ontology from such a seller tell themselves their product is very valuable, but all they've really been convinced of is that the worldview has a sky-high resale value [or the seller has the market cornered], and all their friends will be conversing in this ontology soon anyway. When the empty intellectual fad inevitably runs its course, they'll end up dumbfounded, holding nothing but a pan of fool's gold, wondering what happened. They probably won't realize what happened is they never checked the appraised value of the asset against the ground truth: its object-level use-value. Don't fall for these scammers! It's hard not to; they're everywhere. And they're often very practiced bullies, ready with a [measuredly] forceful reminder of how, if you choose to depart from their way of talking about things, you'll have no one to see eye to eye with anyway. That, of course, is a load of bullshit in the long term, because people converge on the truth. Having said all that, if someone has an actual argument for why I should believe in the Five Factor model of personality, I'll be much abashed [but grateful!] to hear it.
2025-01-07
https://www.lesswrong.com/posts/o6pSymey4crBZnYPk/bridgewater-x-metaculus-forecasting-contest-goes-global-feb
o6pSymey4crBZnYPk
Bridgewater x Metaculus Forecasting Contest Goes Global — Feb 3, $25k, Opportunities
ChristianWilliams
null
2025-01-07
https://www.lesswrong.com/posts/euWvofFqQdDWj45rF/disagreement-on-agi-suggests-it-s-near
euWvofFqQdDWj45rF
Disagreement on AGI Suggests It’s Near
tangerine
If I’m planning a holiday to New York (and I live pretty far from New York), it’s quite straightforward to get fellow travellers to agree that we need to buy plane tickets to New York. Which airport? Eh, whichever is more convenient, I guess. Alternatively, some may prefer a road trip to New York, but the general direction is obvious for everyone. However, as the holiday gets closer in time or space, the question of what we actually mean by a holiday in New York becomes more and more contentious. Did we mean New York State or New York City? Did we mean Brooklyn or Broadway? Which Broadway theater? Which show? Which seats? By my estimation, the fact that the question of whether AGI has been achieved is so broadly contentious shows that we are so close to it that the term has lost its meaning, in the same way that “Let’s go to New York!” loses its meaning when you’re already standing in Times Square. It’s time for more precise definitions of the space of possible minds that we are now exploring. I have my own ideas, but I’ll leave those for another post…
2025-01-07
https://www.lesswrong.com/posts/qPEXMShDjd6zYsirC/will-bird-flu-be-the-next-covid-little-chance-says-my
qPEXMShDjd6zYsirC
Will bird flu be the next Covid? "Little chance" says my dashboard.
Nathan Young
Rob and I built a bird flu risk dashboard. You can check it out here: birdflurisk.com There is a box to be emailed if the index changes significantly. Hopefully this helps some people. Happy to answer questions/make changes.
2025-01-07
https://www.lesswrong.com/posts/XAKYawaW9xkb3YCbF/openai-10-reflections
XAKYawaW9xkb3YCbF
OpenAI #10: Reflections
Zvi
This week, Altman offers a post called Reflections, and he has an interview in Bloomberg. There’s a bunch of good and interesting answers in the interview about past events that I won’t mention or have to condense a lot here, such as his going over his calendar and all the meetings he constantly has, so consider reading the whole thing. Table of Contents The Battle of the Board. Altman Lashes Out. Inconsistently Candid. On Various People Leaving OpenAI. The Pitch. Great Expectations. Accusations of Fake News. OpenAI’s Vision Would Pose an Existential Risk To Humanity. The Battle of the Board Here is what he says about the Battle of the Board in Reflections: Sam Altman: A little over a year ago, on one particular Friday, the main thing that had gone wrong that day was that I got fired by surprise on a video call, and then right after we hung up the board published a blog post about it. I was in a hotel room in Las Vegas. It felt, to a degree that is almost impossible to explain, like a dream gone wrong. Getting fired in public with no warning kicked off a really crazy few hours, and a pretty crazy few days. The “fog of war” was the strangest part. None of us were able to get satisfactory answers about what had happened, or why. The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included. Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago. I also learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges. Good governance requires a lot of trust and credibility. I appreciate the way so many people worked together to build a stronger system of governance for OpenAI that enables us to pursue our mission of ensuring that AGI benefits all of humanity. My biggest takeaway is how much I have to be thankful for and how many people I owe gratitude towards: to everyone who works at OpenAI and has chosen to spend their time and effort going after this dream, to friends who helped us get through the crisis moments, to our partners and customers who supported us and entrusted us to enable their success, and to the people in my life who showed me how much they cared. We all got back to the work in a more cohesive and positive way and I’m very proud of our focus since then. We have done what is easily some of our best research ever. We grew from about 100 million weekly active users to more than 300 million. Most of all, we have continued to put technology out into the world that people genuinely seem to love and that solves real problems. This is about as good a statement as one could expect Altman to make. I strongly disagree that this resulted in a stronger system of governance for OpenAI. And I think he has a much better idea of what happened than he is letting on, and there are several points where ‘I see what you did there.’ But mostly I do appreciate what this statement aims to do. From his interview, we also get this excellent statement: Sam Altman: I think the previous board was genuine in their level of conviction and concern about AGI going wrong. There’s a thing that one of those board members said to the team here during that weekend that people kind of make fun of [Helen Toner] for, which is it could be consistent with the mission of the nonprofit board to destroy the company. And I view that—that’s what courage of convictions actually looks like. I think she meant that genuinely. And although I totally disagree with all specific conclusions and actions, I respect conviction like that, and I think the old board was acting out of misplaced but genuine conviction in what they believed was right. And maybe also that, like, AGI was right around the corner and we weren’t being responsible with it. So I can hold respect for that while totally disagreeing with the details of everything else. And this, which I can’t argue with: Sam Altman: Usually when you have these ideas, they don’t quite work, and there were clearly some things about our original conception that didn’t work at all. Structure. All of that. It is fair to say that ultimately, the structure as a non-profit did not work for Altman. This also seems like the best place to highlight his excellent response about Elon Musk: Oh, I think [Elon will] do all sorts of bad s—. I think he’ll continue to sue us and drop lawsuits and make new lawsuits and whatever else. He hasn’t challenged me to a cage match yet, but I don’t think he was that serious about it with Zuck, either, it turned out. As you pointed out, he says a lot of things, starts them, undoes them, gets sued, sues, gets in fights with the government, gets investigated by the government. That’s just Elon being Elon. The question was, will he abuse his political power of being co-president, or whatever he calls himself now, to mess with a business competitor? I don’t think he’ll do that. I genuinely don’t. May turn out to be proven wrong. So far, so good. Altman Lashes Out Then we get Altman being less polite. Sam Altman: Saturday morning, two of the board members called and wanted to talk about me coming back. I was initially just supermad and said no. And then I was like, “OK, fine.” I really care about [OpenAI]. But I was like, “Only if the whole board quits.” I wish I had taken a different tack than that, but at the time it felt like a just thing to ask for. Then we really disagreed over the board for a while. We were trying to negotiate a new board. They had some ideas I thought were ridiculous. I had some ideas they thought were ridiculous. But I thought we were [generally] agreeing. And then—when I got the most mad in the whole period—it went on all day Sunday. Saturday into Sunday they kept saying, “It’s almost done. We’re just waiting for legal advice, but board consents are being drafted.” I kept saying, “I’m keeping the company together. You have all the power. Are you sure you’re telling me the truth here?” “Yeah, you’re coming back. You’re coming back.” And then Sunday night they shock-announce that Emmett Shear was the new CEO. And I was like, “All right, now I’m f—ing really done,” because that was real deception. Monday morning rolls around, all these people threaten to quit, and then they’re like, “OK, we need to reverse course here.” This is where his statements fail to line up with my understanding of what happened. Altman gave the board repeated in-public drop dead deadlines, including demanding that the entire board resign as he noted above, with very clear public messaging that failure to do this would destroy OpenAI. Maybe if Altman had quickly turned around and blamed the public actions on his allies acting on their own, I would have believed that, but he isn’t even trying that line out now. He’s pretending that none of that was part of the story. In response to those ultimatums, facing imminent collapse and unable to meet Altman’s blow-it-all-up deadlines and conditions, the board tapped Emmett Shear as a temporary CEO, who was very willing to facilitate Altman’s return and then stepped aside only days later. That wasn’t deception, and Altman damn well knows that now, even if he was somehow blinded to what was happening at the time. The board very much still had the intention of bringing Altman back. Altman and his allies responded by threatening to blow up the company within days. Inconsistently Candid Then the interviewer asks what the board meant by ‘consistently candid.’ He talks about the ChatGPT launch which I mention a bit later on – where I do think he failed to properly inform the board but I think that was more one time of many than a particular problem – and then Altman says, bold is mine: And I think there’s been an unfair characterization of a number of things like [how I told the board about the ChatGPT launch]. The one thing I’m more aware of is, I had had issues with various board members on what I viewed as conflicts or otherwise problematic behavior, and they were not happy with the way that I tried to get them off the board. Lesson learned on that. There it is. They were ‘not happy’ with the way that he tried to get them off the board. I thank him for the candor that he was indeed trying to remove not only Helen Toner but various board members. I do think this was primary. Why were they not happy, Altman? What did you do? From what we know, it seems likely he lied to board members about each other in order to engineer a board majority. Altman also outright says this: I don’t think I was doing things that were sneaky. I think the most I would say is, in the spirit of moving really fast, the board did not understand the full picture. That seems very clearly false. By all accounts, however much farther than sneaky Altman did or did not go, Altman was absolutely being sneaky. He also later mentions the issues with the OpenAI startup fund, where his explanation seems at best rather disingenuous and dare I say it sneaky. On Various People Leaving OpenAI Here is how he attempts to address all the high profile departures: Sam Altman (in Reflections): Some of the twists have been joyful; some have been hard. It’s been fun watching a steady stream of research miracles occur, and a lot of naysayers have become true believers. We’ve also seen some colleagues split off and become competitors. Teams tend to turn over as they scale, and OpenAI scales really fast. I think some of this is unavoidable—startups usually see a lot of turnover at each new major level of scale, and at OpenAI numbers go up by orders of magnitude every few months. The last two years have been like a decade at a normal company. When any company grows and evolves so fast, interests naturally diverge. And when any company in an important industry is in the lead, lots of people attack it for all sorts of reasons, especially when they are trying to compete with it. I agree that some of it was unavoidable and inevitable. I do not think this addresses people’s main concerns, especially that they have lost so many of their highest level people, especially over the last year, including almost all of their high-level safety researchers all the way up to the cofounder level. The Pitch It is related to this claim, which I found a bit disingenuous: Sam Altman: The pitch was just come build AGI. And the reason it worked—I cannot overstate how heretical it was at the time to say we’re gonna build AGI. So you filter out 99% of the world, and you only get the really talented, original thinkers. And that’s really powerful. I agree that was a powerful pitch. But we know from the leaked documents, and we know from many people’s reports, that this was not the entire pitch. The pitch for OpenAI was that AGI would be built safely, and that Google DeepMind could not to be trusted to be the first to do so. The pitch was that they would ensure that AGI benefited the world, that it was a non-profit, that it cared deeply about safety. Many of those who left have said that these elements were key reasons they chose to join OpenAI. Altman is now trying to rewrite history to ignore these promises, and pretend that the vision was ‘build AGI/ASI’ rather than ‘build AGI/ASI safety and ensure it benefits humanity.’ Great Expectations I also found his ‘I expected ChatGPT to go well right from the start’ interesting. If Altman did expect it do well and in his words he ‘forced’ people to ship it when they didn’t want to because they thought it wasn’t ready, that provides different color than the traditional story. It also plays into this from the interview: There was this whole thing of, like, “Sam didn’t even tell the board that he was gonna launch ChatGPT.” And I have a different memory and interpretation of that. But what is true is I definitely was not like, “We’re gonna launch this thing that is gonna be a huge deal.” It sounds like Altman is claiming he did think it was going to be a big deal, although of course no one expected the rocket to the moon that we got. Accusations of Fake News Then he says how much of a mess the Battle of the Board left in its wake: I totally was [traumatized]. The hardest part of it was not going through it, because you can do a lot on a four-day adrenaline rush. And it was very heartwarming to see the company and kind of my broader community support me. But then very quickly it was over, and I had a complete mess on my hands. And it got worse every day. It was like another government investigation, another old board member leaking fake news to the press. And all those people that I feel like really f—ed me and f—ed the company were gone, and now I had to clean up their mess. It was about this time of year [December], actually, so it gets dark at like 4:45 p.m., and it’s cold and rainy, and I would be walking through my house alone at night just, like, f—ing depressed and tired. And it felt so unfair. It was just a crazy thing to have to go through and then have no time to recover, because the house was on fire. Some combination of Altman and his allies clearly worked hard to successfully spread fake news during the crisis, placing it in multiple major media outlets, in order to influence the narrative and the ultimate resolution. A lot of this involved publicly threatening (and bluffing) that if they did not get unconditional surrender within deadlines on the order of a day, they would end OpenAI. Meanwhile, the Board made the fatal mistake of not telling its side of the story, out of some combination of legal and other fears and concerns, and not wanting to ultimately destroy OpenAI. Then, at Altman’s insistence, those involved left. And then Altman swept the entire ‘investigation’ under the rug permanently. Altman then has the audacity now to turn around and complain about what little the board said and leaked afterwards, calling it ‘fake news’ without details, and saying how they f***ed him and the company and were ‘gone and now he had to clean up the mess.’ OpenAI’s Vision Would Pose an Existential Risk To Humanity What does he actually say about safety and existential risk in Reflections? Only this: We continue to believe that the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology, learning from experience, and continuing to make the technology safer. We believe in the importance of being world leaders on safety and alignment research, and in guiding that research with feedback from real world applications. Then in the interview, he gets asked point blank: Q: Has your sense of what the dangers actually might be evolved? A: I still have roughly the same short-, medium- and long-term risk profiles. I still expect that on cybersecurity and bio stuff, we’ll see serious, or potentially serious, short-term issues that need mitigation. Long term, as you think about a system that really just has incredible capability, there’s risks that are probably hard to precisely imagine and model. But I can simultaneously think that these risks are real and also believe that the only way to appropriately address them is to ship product and learn. I know that anyone who previously had a self-identified ‘Eliezer Yudkowsky fan fiction Twitter account’ knows better than to think all you can say about long term risks is ‘ship products and learn.’ I don’t see the actions to back up even these words. Nor would I expect, if they truly believed this, for this short generic statement to be the only mention of the subject. How can you reflect on the past nine years, say you have a direct path to AGI (as he will say later on), get asked point blank about the risks, and say only this about the risks involved? The silence is deafening. I also flat out do not think you can solve the problems exclusively through this approach. The iterative development strategy has its safety and adaptation advantages. It also has disadvantages, driving the race forward and making too many people not notice what is happening in front of them via a ‘boiling the frog’ issue. On net, my guess is it has been net good for safety versus not doing it, at least up until this point. That doesn’t mean you can solve the problem of alignment of superintelligent systems primarily by reacting to problems you observe in present systems. I do not believe the problems we are about to face will work that way. And even if we are in such a fortunate world that they do work that way? We have not been given reason to trust that OpenAI is serious about it. Getting back to the whole ‘vision thing’: Our vision won’t change; our tactics will continue to evolve. I suppose if ‘vision’ is simply ‘build AGI/ASI’ and everything else is tactics, then sure? I do not think that was the entirety of the original vision, although it was part of it. That is indeed the entire vision now. And they’re claiming they know how to do it. We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes. We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity. This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company. Those who have ears, listen. This is what they plan on doing. They are predicting AI workers ‘joining the workforce’ in earnest this year, with full AGI not far in the future, followed shortly by ASI. They think ‘4’ is conservative. What are the rest of us going to do, or not do, about this? I can’t help but notice Altman is trying to turn OpenAI into a normal company. Why should we trust that structure in the very situation Altman himself describes? If the basic thesis is that we should put our trust in Altman personally, why does he think he has earned that trust?
2025-01-07
https://www.lesswrong.com/posts/fDRCfKEE5S4z7i7FX/other-implications-of-really-radical-empathy
fDRCfKEE5S4z7i7FX
Some implications of radical empathy
MichaelStJules
null
2025-01-07
https://www.lesswrong.com/posts/AqM6BJBhZ9WoFp62T/actualism-asymmetry-and-extinction
AqM6BJBhZ9WoFp62T
Actualism, asymmetry and extinction
MichaelStJules
null
2025-01-07
https://www.lesswrong.com/posts/gYfpPbww3wQRaxAFD/activation-space-interpretability-may-be-doomed
gYfpPbww3wQRaxAFD
Activation space interpretability may be doomed
beelal
TL;DR: There may be a fundamental problem with interpretability work that attempts to understand neural networks by decomposing their individual activation spaces in isolation: It seems likely to find features of the activations - features that help explain the statistical structure of activation spaces, rather than features of the model - the features the model’s own computations make use of. Written at Apollo Research Introduction Claim: Activation space interpretability is likely to give us features of the activations, not features of the model, and this is a problem. Let’s walk through this claim. What do we mean by activation space interpretability? Interpretability work that attempts to understand neural networks by explaining the inputs and outputs of their layers in isolation. In this post, we focus in particular on the problem of decomposing activations, via techniques such as sparse autoencoders (SAEs), PCA, or just by looking at individual neurons. This is in contrast to interpretability work that leverages the wider functional structure of the model and incorporates more information about how the model performs computation. Examples of existing techniques using such information include Transcoders, end2end-SAEs and joint activation/gradient PCAs. What do we mean by “features of the activations”? Sets of features that help explain or make manifest the statistical structure of the model’s activations at particular layers. One way to try to operationalise this is to ask for decompositions of model activations at each layer that try to minimise the description length of the activations in bits. What do we mean by “features of the model”? The set of features the model itself actually thinks in, the decomposition of activations along which its own computations are structured, features that are significant to what the model is doing and how it is doing it. One way to try to operationalise this is to ask for the decomposition of model activations that makes the causal graph of the whole model as manifestly simple as possible: We make each feature a graph node, and draw edges indicating how upstream nodes are involved in computing downstream nodes. To understand the model, we want the decomposition that results in the most structured graph with the fewest edges, with meaningfully separate modules corresponding to circuits that do different things. Our claim is pretty abstract and general, so we’ll try to convey the intuition behind it with concrete and specific examples. Examples illustrating the general problem In the following, we will often use SAEs as a stand-in for any technique that decomposes individual activation spaces into sets of features. But we think the problems these examples are trying to point to apply in some form to basically any technique that tries to decompose individual activation spaces in isolation.[1] 1. Activations can contain structure of the data distribution that the models themselves don’t ‘know’ about. Consider a simple model that takes in a two-dimensional input (x,y) and computes some scalar function of the two, f(x,y). Suppose for all data points in the data distribution, the input data (x,y) falls on a very complicated one-dimensional curve. Also, suppose that the trained model is blind to this fact and treats the two input variables as entirely independent (i.e. none of the model’s computations make use of the relationship between x and y). If we were to study the activations of this model, we might notice this curve (or transformed curve) and think it meaningful. In general, data distributions used for training (and often also interpreting) neural networks contain a very large amount of information about the process that created said dataset. For all non-toy data distributions, the distribution will reflect complex statistical relationships of the universe. A model with finite capacity can't possibly learn to make use of all of these relationships. Since activations are just mathematical transformations of inputs sampled from this data distribution, by studying neural networks through their distribution of activations, we should expect to see many of those unused relationships in the activations. So, fully understanding the model’s activations can in a sense be substantially harder than fully understanding what the model is doing. And if we don’t look at the computations the model is carrying out on those activations before we try to decompose them, we might struggle to tease apart properties of the input distribution and properties of the model.[2] 2. The learned feature dictionary may not match the “model’s feature dictionary”. Now let’s consider another one-dimensional curve, this time embedded in a ten-dimensional space.[3] One of the nice things about sparse dictionary methods like SAEs is that they can approximate curves like this pretty well, using a large dictionary of features with sparse activation coefficients. If we train an SAE with a dictionary of size 500 on this manifold, we might find 500 features, only a very small number of which are active at a time, corresponding to different tiny segments of the curve.[4] Suppose, however, that the model actually thinks of this single dense data-feature as a sparse set of 100 linear directions. We term this set of directions the “model’s dictionary”. The model’s dictionary approximates most segments of the curve with lower resolution than our dictionary, but it might approximate some crucial segments a lot more finely. MLPs and attention heads downstream in the model carry out computations on these 100 sparsely activating directions. The model’s decomposition of the ten-dimensional space into 100 sparse features and our decomposition of the space into 500 sparse features are necessarily quite different. Some features and activation coefficients in the two dictionaries might be closely related, but we should not expect most to be. If we are not looking at what the model does with these activations downstream, how can we tell that the feature dictionary we find matches the model’s feature dictionary? When we perform the decomposition, we don’t know yet what parts of the curve are more important for what the model is computing downstream, and thus how the model is going to think about and decompose the ten-dimensional subspace. We probably won’t even be aware in the first place that the activations we are decomposing lie on a one-dimensional curve without significant extra work.[5] 3. Activation space interpretability can fail to find compositional structure. Suppose our model represents four types of object in some activation space: {blue square, red square, blue circle, red circle}.[6] We can think of this as the direct product space {blue, red} ⊗ {square, circle}. Suppose the model’s 'true features' are colour and shape, in the sense that later layers of the model read the 'colour' variable and the 'shape' variable independently.  Now, suppose we train an SAE with 4 dictionary elements on this space. SAEs are optimised to achieve high sparsity -- few latents should be active on each forward pass. An SAE trained on this space will therefore learn the four latents {blue square, red square, blue circle, red circle} (the "composed representation"), rather than {blue, red} ⊗ {square, circle} (the "product representation"), as the former has sparsity 1, while the latter has sparsity 2. In other words, the SAE learns features that are compositions of the model’s features. Can we fix this by adjusting the sparsity penalty? Probably not. Any sparse-dictionary approach set to decompose the space as a whole will likely learn this same set of four latents, as this latent set is sparser, with shorter description length than the product set. While we could create some ansatz for our dictionary learning approach that specifically privileges the product configuration, this is cheating. How would we know the product configuration and not the composed configuration matches the structure of the model’s downstream computations in this case in advance, if we only look at the activations in isolation? And even if we do somehow know the product configuration is right, how would we know in advance to look for this specific 2x2 structure? In reality, it would additionally be embedded in a larger activation space with an unknown number of further latents flying around besides just shape and colour. 4: Function approximation creates artefacts that activation space interpretability may fail to distinguish from features of the model. This one is a little more technical, so we’ll take it in two stages. First, a very simplified version, then something that’s closer to the real deal. Example 4a: Approximating x2. Suppose we have an MLP layer that takes a scalar input x and is trained to approximate the scalar output x2. The MLP comprises a Win matrix (vector, really) of shape (1,10) that maps x to some pre-activation. This gets mapped through a ReLU, giving a 10 dimensional activation vector a. Finally, these are mapped to a scalar output via some Wout matrix of shape (10,1). Thus, concretely, this model is tasked with approximating x2 via a linear combination of ten functions of the form ReLU(ax+b). Importantly, the network only cares about one direction in the 10 dimensional activation space, the one which gives a good approximation of x2 and is projected off by Wout. There are 9 other orthogonal directions in the hidden space. Unless we know in advance that the network is trying to compute x2, this important direction will not stick out to us. If we train an SAE on the hidden activations, or do a PCA, or perform any other activation decomposition of our choice, we will get out a bunch of directions, and likely none of them will be x2.[7] What makes the x2 direction special is that the model uses it downstream (which, here, means that this direction is special in Wout). But that information can't be found in the hidden activations alone. We need more information. Example 4b: Circuits in superposition. The obvious objection to Example 4a is that Wout is natively a rank one matrix, so the fact that only one direction in the 10 dimensional activation space matters is trivial and obvious to the researcher. So while we do need to use some information that isn’t in the activations, it’s a pretty straightforward thing to find. But if we extend the above example to something more realistic, it’s not so easy anymore. Suppose the model is computing a bunch of the above multi-dimensional circuits in superposition. For example, take an MLP layer instead with 40,000 neurons, computing 80,000 functions of (sparse) scalar inputs, each of which requires 10 neurons to compute, and writes the results to a 10,000 dimensional residual stream.[8][9] Each of these 80,000 circuits would then occupy some ten-dimensional subspace in the 40,000 dimensional activation space of the MLP, meaning the subspaces must overlap. Each of these subspaces may only have one direction that actually matters for downstream computation. Our SAE/PCA/activation-decomposition-of-choice trained on the activation space will not be able to tell which directions are actually used by the model, and which are an artefact of computing the directions that do matter. They will decompose these ten-dimensional subspaces into a bunch of directions, which almost surely won’t line up with the important ones. To make matters worse, we might not immediately know that something went wrong with our decomposition. All of these directions might look like they relate to some particular subtask when studied through the lens of e.g. max activating dataset examples, since they’ll cluster along the circuit subspaces to some extent. So the decomposition could actually look very interesting and interpretable, with a lot of directions that appear to somewhat-but-not-quite make sense when we study them. However, these many directions will seem to interact with the next layer in a very complicated manner. The general problem Not all the structure of the activation spaces matters for the model’s computations, and not all the structure of the model’s computations is manifest in the structure of individual activation spaces. So, if we are trying to understand the model by first decomposing its activation spaces into features and then looking at how these features interact and form circuits, we might get a complete mess of interactions that do not make the structure of the model and what it is doing manifest at all. We need to have at least some relevant information about how the model itself uses the activations before we pick our features, and include that information in the activation space decomposition. Even if our goal is just to understand an aspect of the model’s representation enough for a use case like monitoring, looking at the structure of the activation spaces rather than the structure of the model’s computations can give us features that don’t have a clean causal relationship to the model’s structure and which thus might mislead us. What can we do about this? If the problem is that our decomposition methodologies lack relevant information about the network, then maybe the solution is giving them more of it. How could we try to do this? Guess the correct ansatz. We can try to make a better ansatz for our decompositions by guessing in advance how model computations are structured. This requires progress on interpretability fundamentals, through e.g. understanding the structure and purpose of feature geometry better. Note however that the current favoured roadmap for making progress on those topics seems to be “decompose the activations well, understand the resulting circuits and structure, and then hope this yields increased understanding”. This may be a bit of a chicken-and-egg situation. Use activations (or gradients) from more layers. We can try to use information from more layers to look for decompositions that simplify the model as a whole. For example, we can decompose multiple layers simultaneously and impose a sparsity penalty on connections between features in different layers. Other approaches that fall vaguely in this category include end-to-end-SAEs, Attribution Dictionary Learning, Transcoders, and Crosscoders.[10] Use weights instead of or to supplement activations. Most interpretability work studies activations and not weights. There are good reasons for this: activations are lower dimensional than weights. The curse of dimensionality is real. However, weights, in a sense, contain the entire functional structure of the model, because they are the model. It seems in principle possible to decompose weights into circuits directly, by minimising some complexity measure over some kind of weight partitioning, without any intermediary step of decomposing activations into features at all. This would be a reversal of the standard, activations-first approach, which aims to understand features first and later understand the circuits. Apollo Research are currently trying this. Thanks to Andy Arditi, Dan Braun, Stefan Heimersheim and Lee Sharkey for feedback. ^ Unless we cheat by having extra knowledge about the model’s true features that lets us choose the correct form of the decomposition before we even start. ^ An additional related concern is that we might end up with different conclusions about our model if we study it through a different data-distribution-lens. This seems problematic if our end goal is to study the model, which surely has some ground truth set of features it uses, independently of the data-lens used to extract them. Empirically, we do find that the set of SAE features we discover are highly (SAE training) dataset dependent. ^ Data on this manifold is importantly not actually representable as a set of sparsely activating discrete features. ^ If we train SAEs on `blocks.0.hook_resid_pre` of gpt2-small, we find such a set, corresponding to the positional encoding. ^ Though note this particular citation is easy-mode due to the curve being low dimensional and easy to guess. We should not expect it to be this easy in general to find the structure of interest. ^ This example is inspired by this Anthropic blog post. ^ This seems like an easy experiment to do! ^ Note that this doesn’t have to be a continuous function like x2, a boolean circuit e.g. evaluating some logical statement as True/False works as well. The fundamental problem here is that many operations can’t and won’t be computed using only a single neuron per layer, but rather a specific linear combination of multiple neurons. So implementing them almost inevitably produces extra structure in the activations that won’t be used. This is not a problem with algorithmic tasks specifically. ^ See circuits in superposition for an explanation of how to compute more functions in a layer than we have neurons. ^ Of course, just doing something with activations or gradients is not enough; you have to do something that successfully deals with the kinds of counterexamples we list above. We doubt the vanilla version of any currently public technique does this for all relevant counterexamples or even all counterexamples we list here.
2025-01-08
https://www.lesswrong.com/posts/kHxKzyhBMLEC2k6dm/predicting-ai-releases-through-side-channels
kHxKzyhBMLEC2k6dm
Predicting AI Releases Through Side Channels
reworr-reworr
I recently explored whether we could predict major AI releases by analyzing the Twitter activity of OpenAI's red team members. While the results weren't conclusive, I wanted to share this approach in case it inspires others to develop it further. The Core Idea The idea came from side-channel analysis - a technique where you gather information about a system by looking at indirect signals rather than the system itself. Think tracking factory electricity usage to estimate production volumes, or stealing an LLM’s softmax layer through logprobs. When OpenAI prepares to release a new AI model, their red team (safety and robustness testers) gets access to it a few weeks before launch. I hypothesized these intense testing periods might reduce red teamers' social media activity, creating a detectable pattern before releases. How I Tested This Identified ~30 red team members from OpenAI's system cardsScraped their Twitter timelinesAnalyzed activity patterns around known model release dates What I Found The data showed a slight dip in Twitter activity before launches, but the signal was too weak to be reliable. Limitations Two main factors prevented more definitive results: Small sample size (only ~30 accounts). To get more statistical power, we’d need to look at more employees or additional side channels.Twitter API restrictions that limited data collection. To get more timeline data, you’ll need to find a Twitter dump. Why This Approach Could Still Work Despite inconclusive results, this method of tracking "side-channel signals" - indirect indicators like social media patterns, GitHub commits, or hiring changes - could prove valuable with refinements. Similar approaches have worked in other fields. Next Steps for Anyone Interested If you'd like to build on this idea, consider: Expanding to other/larger companies (Google, Meta)Exploring different indicators beyond TwitterDeveloping ways to aggregate multiple weak signals I won't be continuing this research myself, but I hope sharing these preliminary findings helps someone else take this idea further. Reworr works as a researcher at Palisade Research. The views expressed here are their own.
2025-01-07
https://www.lesswrong.com/posts/QxJFjqT6oFY3jo47s/ai-safety-as-a-yc-startup-1
QxJFjqT6oFY3jo47s
AI Safety as a YC Startup
lukas-petersson-1
A while back I gave a talk about doing AI safety as a YC startup. I wrote a blog post about it and thought it would be interesting to share it with both the YC and AI safety communities. Please share any feedback or thoughts. I would love to hear them! AI Safety is a problem and people pay to solve problems Intelligence is dangerous, and I think there's a significant chance that the default scenario of AI progress poses an existential risk to humanity. While it's far from certain, even small probabilities are significant when the stakes are this high. This is an enormous abstract problem, but there are thousands of sub-problems waiting to be solved. Some of these sub-problems already exist today, but most are in the future (GPT is not capable of killing us yet). When people start feeling these pains, they will pay to have them solved (similar to how anti-virus software became a huge business opportunity once personal computers became widespread in the 1980s). I don't think the solution is to slow down (although I'm not certain), because this also comes with a cost. Therefore, we have to solve these problems. I think it's one of the most interesting challenges of our time, because otherwise we won't get to reap the rewards of AI utopia. More startups should solve these problems From my experience, builders (entrepreneurial engineers) are underrepresented in the AI safety community. There are far more researchers and philosophers. They are also crucial, but the mix is not currently balanced (source: personal experience from trying to hire such people). I don't think this should be the case. The AI safety market is currently very small, but according to those who have attempted to forecast its trajectory, it may grow substantially. Since most people remain skeptical of near-term AGI timelines, betting on this growth early could provide a competitive advantage. VCs exist to enable startups to make such long-term bets. If you're a technical person with a passion for AI safety, it's very tempting to join a big AI lab. However, I think you should start a startup instead. Startups are more fun, and you will have much more counterfactual impact. A friend once told me: “The most impactful people will build something great that wouldn't have happened without them”. I think it's generally harder to do this in the hierarchical structure of AI labs (but not impossible). More on this here. Y Combinator I'm a bit biased in this matter. I've been fascinated by startups for many years, and getting into YC straight from college was a dream come true. Like many other YC companies, our first idea didn't work out and we had to pivot. Our pivot was somewhat successful; three weeks later, we worked on something new with revenue that ultimately made fundraising easy (AI safety evals). We didn't find a concrete idea though, instead, we found a really cool customer we could build stuff for. This was largely thanks to intros from the AI safety community. AI safety has its roots in an altruistic movement (Effective Altruism), and you can see that from how helpful people are. This is a real advantage for AI safety startups. Whenever I speak to what would have been called a “competitor” in other industries, we share stuff much more freely because we want the same thing for the world. Communities are such an incredible thing. I have been lucky to also be part of the YC community, which brought us our second big customer. YC is great for all of the obvious reasons, but in my experience, the community is its strongest asset. The advice is also great, but most of it is publicly available. This advice has become famous over the years; phrases like "Make something people want", "Love your customer", and "Do things that don't scale" are echoed everywhere you go in San Francisco. They are not, however, in AI safety circles. These phrases arise naturally for startups under market pressures, but might not be obvious for builders coming from the AI safety community. YC advice in the context of AI safety Not all AI safety startup ideas are the same, but there are some characteristics that apply to many of them. Here are some thoughts on how YC advice applies to these characteristics. The problems they are solving are in the future As discussed above, current AI systems do not pose an existential threat to humanity. It's therefore very hard to know if you have "made something people want" while trying to solve this problem. This means you have to be creative when trying to follow this advice. It might be hard to launch early and iterating. Additionally, the market is very uncertain, and you have to be flexible to change your ideas and processes. "Do things that don't scale" is therefore extra important in this setting. Customers are often researchers from AI labs or from the government This is obviously not true for all AI safety startups. It is of course possible to contribute to AI safety while having another customer group, but this is the case for many and it is for me. We primarily sell to researchers, and this makes my day-to-day very enjoyable. Every customer meeting I have is with someone I would probably like to go out for a beer with. "Love your customer" is easy! This makes it much easier for me to put myself in their shoes and understand what they want. Regardless of who your customer is, ask yourself if you like them. The pool of potential customers is often small As a result of customers being researchers and government employees, the pool of potential customers is not huge (yes, there are a lot of academic researchers, but they don't have a lot of money). YC's advice is often that having 100 passionate users is better than 1,000,000 average users. It's tempting to then assume that the small customer pool is not a problem, but this advice assumes that you can scale up the 100 users after you have learned from their feedback. However, another YC advice comes in handy here: "charge more". Most early-stage startups are scared of scaring away customers, but if you've made something people want, they won't walk away without attempting to push the price down. They know that you don't know what you're doing when you set the price and therefore expect the price to be flexible. This is especially true for AI safety ideas where there isn't much competition; if they walk away, they have no alternative. Doing good If you have a passion for AI safety, I think ideas in this space could lead to great startup success. But if you don’t, there are probably better ideas to maximize success probability. Founders with this passion often also want their startup to have a positive impact on the world. You do this by doing something that is net-positive and makes it influential. Basically, Impact = Magnitude * Direction. I think most people in the world have a bias for maximizing Magnitude. This is not to say that people are immoral. However I think most people just don’t recognize (or heavily underestimates) potential one’s career has to make the world a better place. They recycle and donate to charity, but their career is their biggest opportunity to make a difference to the world. However, I think there is a group of people who over-optimize for Direction and neglect the Magnitude. Increasing Magnitude often comes with the risk of corrupting the Direction. For example, scaling fast often makes it difficult to hire only mission-aligned people, and it requires you to give voting power to investors that prioritizes profit. To increase Magnitude can therefore feel risky, what if I end up working at something that is net-negative for the world? Therefore it might be easier for one's personal sanity to optimize for Direction, to do something that is unquestionably net-positive. But this is the easy way out, and if you want to have the highest expected value of your Impact, you cannot disregard Magnitude. I am not an expert, but my uninformed intuition is that the people with the biggest positive impact on the world have prioritized the Magnitude (would love to hear other opinions on this. What are some examples of people from either side?) Don’t forget that you can always use “earn to give” as a fallback. Thanks to Rudolf Laine and Ollie Jaffe for your feedback!
2025-01-08
https://www.lesswrong.com/posts/xEoFGxWEberJRDyoc/meditation-insights-as-phase-shifts-in-your-self-model
xEoFGxWEberJRDyoc
Meditation insights as phase shifts in your self-model
Jonas Hallgren
Introduction In his exploration of "Intuitive self-models" and PNSE (Persistent Non-Symbolic Experience), Steven Byrnes offers valuable insights into how meditation affects our sense of self. While I agree with his core framework, I believe we can push this analysis further by examining how meditation fundamentally changes our model of personal boundaries and agency. I propose that what we call "awakening" or "enlightenment" can be understood through the lens of predictive processing and information boundaries. Specifically, I suggest that meditation systematically alters our generative model of "self" - not just by removing the homunculus as Byrnes suggests, but by transforming how we perceive the <<Boundary>>  between self and world. To build this case, we need to start with a precise understanding of boundaries and agency. When we talk about "self," we're really talking about what we believe we have causal control over - our sphere of influence in the world. This boundary isn't fixed; it expands and contracts based on our experiences and beliefs. In technical terms, we can model this using active inference and Markov blankets (different from the critch definition) - mathematical tools that help us understand how systems maintain separation from their environment while remaining causally connected. But rather than dive into the mathematics, let me offer a simple example that captures the essence of how these boundaries work in practice Example: Boundary around a human The example that I like to give for what the effects of self-modelling through this frame is what happens with a water bottle, before and after you buy it. What is it that makes a water bottle yours or part of your extended boundary? 1. The water bottle is in the store: You haven’t bought the water bottle and so it is just standing on that shelf. Someone else might come and pick it up, so if you have no plans of buying it, you have no causal control over the water bottle and so you can’t plan with it in mind and so it isn’t “your” water bottle. 2. You’ve bought the water bottle: You’ve bought the water bottle and taken it home. It is now mainly affected by what variables your existing “self” does to it and so it extends under your control. It is now therefore an extended part of you. The core question for your generative model is: “Is this under my causal control as defined by my existing boundaries?” Your sampling is based on your current generative model Our mental model of our boundaries shapes how we interpret everything we experience. This goes deeper than simple perception - it fundamentally alters what information we notice and how we process it. Let me illustrate with a concrete example: Imagine two people, both spending an afternoon reading instead of working. The first person holds a mental model of themselves as "someone who struggles with depression." Through this lens, the afternoon of reading becomes evidence that confirms their depressive identity - another example of "failing" to be productive. Their boundary of self includes "being depressed" as a core attribute, so they unconsciously filter their experiences to reinforce this view. The second person sees themselves as "someone who values creative recharge." The exact same afternoon of reading gets interpreted completely differently - as intentional downtime that supports their creative process. Their boundary of self includes "needing space for creativity," so they notice and emphasize aspects of the experience that align with this view. This pattern runs remarkably deep. As one of my meditation teachers puts it: "Your attitude changes the circumstances." In more technical terms, your perceived boundary of self acts as a filter that determines not just how you interpret information, but what information you're capable of receiving in the first place. Your model of self creates a self-reinforcing loop - you notice what fits your model, which strengthens that model, which further shapes what you notice. What does meditation do? Meditation is then a way of loosening your generative priors over what your self is. It is pointing at places where your generative model of yourself breaks down because it is simply impossible to hold that generative model. It is pointing out that you cannot be what you think you are. It does this through making the instrument that you look at yourself through better at perceiving information. It also does this through “pointing out” instructions that point towards places where your generative model breaks down. In this process you get so called insights. Fundamental shifts in your self-perceived boundary so profound that they change the information that you get from the world. These would be phase shifts in the generative model. When you’ve had enough evidence that something is true, it also changes the fundamental model of the self and so you see the world differently. You sample differently and so it is easier to see an expanded self. Step by step, your existing models break down and you start to realise that you don’t know what you are. A metaphorical explanation is that your boundary grows bigger and bigger until finally you just are the present moment without any preconceived notions of time, space nor cause and effect. An example of a pointing out instruction: For all models are “empty” of inherent meaning. No system can prove itself and so no system can be true. Even the thought that the system can’t be true is empty for that can’t be true in itself. There’s no ground, it is all an infinite regress, there are no axioms, no base reality. You just are, a point of awareness, coextensive with time and space, deathless.
2025-01-07
https://www.lesswrong.com/posts/YSL9MEnjA5vKhKtBq/alleviating-shrimp-pain-is-immoral
YSL9MEnjA5vKhKtBq
Alleviating shrimp pain is immoral.
geoffrey-wood
I read this article, felt emotional disgust at the argument and wondered why? I don't really want to hurt shrimp, so why am I so viscerally against helping them? https://forum.effectivealtruism.org/posts/6bpQTtzfZTmLaJADJ/rebutting-every-objection-to-giving-to-the-shrimp-welfare This article is an exploration of my explanation to myself. I’m super happy to get feedback and discuss. My core argument against the general thrust of the article is that shrimp have no moral weight and so their suffering is irrelevant. I think this article is mildly problematic, spreading confused thinking about what is important in the world and diverting resources from more valuable problems to solve. I also think that this topic is not actually simple and it is very reductionist to make the case “Pain is bad because it hurts”. Sure it's bad for the sufferer, but that's not very helpful in deciding what the “right” course of action is. It has however prompted quite a bit of very enjoyable research into morality 🙂. Conclusion The default position on any topic should be inaction, we do not hold the resource to complete all possible actions and so must prioritize. Moral weight is one way. Suffering is a negative thing for those affected and should be alleviated if resources are available and the cost of doing so is balanced with the moral weight of the suffering entity. The Shrimp are suffering, but hold zero moral weight due to total lack of contribution. Inaction is recommended. Morality My understanding of morality is that it is Situational - the level of pro-social action you can perform depends on your available resources and so what is “right” changes. Grey scale - level of impact on a society changes an entity's moral weight rather than specific criteria leading to inclusion within the boundaries of a moral circleNot absolute - It is an idea set in the marketplace of ideas like any other, each individual has a version roughly aligned with their society and the societies master set is a kind of average of its members beliefs.A social technology - its purpose is to convince the individual to be pro - social even when it is negative for the individual. This friction of interests creates a permanent tension within all societies, this can be a source of change.Individual entities (animals / plants / fictional characters included) have different moral weight depending on their level of impact to the society they are part of. Meritocracy, those who contribute deserve greater consideration, but only up to like 1.2 of your average society member.Overall better for the individual to follow in the longer term due to the rewards of others pro - social actions even if it's not utility maxing for the individual in all specific scenarios.Required to form a stable society Not relevant if you are truly alone and always expect to be (big sad for you)After digging into it a bit, I found that my understanding is quite similar to that espoused by Thomas Hobbes “Rational actors approach” The “rational actors approach” to morality would be to: Teach others to act morally in order to reap the pro - social benefits they provideAppear moral to maintain social standingSecretly act immorally when beneficial and consequence-free Yet this does not work in practice - people who try the “rational actors approach” tend to: Be ineffective teachers of morality (people are almost magically good at picking true motives) AND Get caught and so become obvious sociopaths (reducing their social effectiveness) OR Develop genuine internalized morality (because maintaining cognitive dissonance is costly) Empathy I define empathy as the ability to: Understand/recognize others' emotional states (cognitive empathy)Share or mirror those emotional states (affective / compassionate empathy) These abilities are very useful to promote enable group cooperation by Enable prediction of others' actionsMotivation to help group members In my opinion this gets misapplied to low moral weight entities such as non-useful animals / plants / rocks / cartoon characters / imaginary friends etc. This often results in a falsely inflated moral weight for those entities and so misallocated resources. The moral weight of future generations, the disabled, sick and aged. One criticism of my view of things is that it does not, on a surface level, explain why an individual would care for those humans who are not very useful. I would claim that it does for the following simple reason: You cannot have different moralities at the same time, your position must be consistent in order to be taught effectively.Your morality will be echoed by those around you So if you expect your own parents to care for you, you must care for your children (and you know your children will care for their own children, so by induction you care about future generations). If you expect to be cared for when / if you are old or become disabled or sick then you must care for the old now. If you can reasonably expect to find yourself in a less optimal pair of shoes, you should assign additional moral weight to the people currently wearing them. Atrocities and bigotry. Does this view of morality justify some of the absolutely horrendous things that have been done in the past? Yes. I'm super glad we live in abundance and will work the rest of my life to try to keep us in that state. The past is a different circumstance, often without the abundance we now enjoy. In a zero sum game the suffering / death of a person from your outgroup / outside your knowledge is actively good for you and your society and so good in a moral sense. Thankfully in the present day this is no longer the case, we live in an abundant / positive sum era. I wish every human on the planet success, knowing that if they do better so will I. Does it justify bigotry? Yes, but I think it lowers the longevity of the society that has bigotry as a “feature” in it’s moral system. Bigotry could be seen as a part of an individual person's morality and so justified in that way. If an entire society thought that way that would lower the moral weight of the targeted group in their eyes. However it is a falsely lowered weight, the people targeted contribute (or would if allowed) to the society they are part of. Hating and hurting a part of your own society (in a positive sum game) is an excellent example of cutting off the nose to spite the face. I think that evolutionary pressures (memetic) will correct the mistaken attitudes in time (as long as we stay abundant).
2025-01-07
https://www.lesswrong.com/posts/i3b9uQfjJjJkwZF4f/tips-on-empirical-research-slides
i3b9uQfjJjJkwZF4f
Tips On Empirical Research Slides
james-chua
Our research is centered on empirical research with LLMs. So if you are doing something similar, these tips on slide-based communication may be helpful! Background: James Chua and John Hughes are researchers working under Owain Evans and Ethan Perez, respectively. Both of us (James and John) used to be MATS  mentees.  We weren't good at making research slides at first -- here are some principles we've found useful for understandable slides for our weekly research meetings. We show some good example slides. We also show examples of confusing slides we've made — marked in the caption with “❌ Negative example”. Below we use the study of sycophancy as an example. Sycophancy occurs when a model has responses that match user beliefs rather than truthful ones. Summary slide sets the frame Possible summary slide. I summarise my experimental results for the week and write what I want to be discussed. If possible, I fit in a simple plot on the right. Your mentor manages multiple projects and people. They need to be reminded of what you are up to. The first slide should recap the key takeaways from the last meeting to motivate what you have worked on and provide a clear summary of your progress. There are two main messages to convey, which set the frame for what your mentor should think about: Key takeaways from the last meeting. A reminder of the next steps and what was discussed.Experiment outcomeMy experiments worked! Your mentor will focus on sanity checks, control experiments, and extensions to other setups.My experiments didn't work. Your mentor will focus on debugging why. E.g., train with better data, or improve the prompts. A summary helps your mentor save time. For example, in the slide, your mentor may say “Oh I can remember what the data augmentation was, let’s skip that”. Or maybe your mentor already read your results and wants to discuss something else. Include an agenda Often in meetings with your mentor, there is very little time to cover everything the team has done in the week. Include the main sections of what you will present in order of priority. This lets you cover the most important things first.Highlight how many slides there are on that topic and how long you'd like to allocate for that section. This allows your mentor to calibrate themselves on whether it is ok for them to drill down into the details if there is still lots of high-priority content to cover.If you are in a group meeting with other mentees, discuss with them how to allocate the time, so that everyone gets enough time for feedback.Example of an agenda slide. Sometimes, you'll need to remind your mentor about the takeaways from the previous meeting before moving on to results: A summary recap. Your mentor may manage many people, so you'll need to remind them of what state the project is. Simple charts to describe experiments After the summary,  describe the experimental setups. Always include your prompt. The prompt should describe how you are measuring your metric in the chart. Prompts are often long, so you can truncate the prompt and put the full version at the back of your slides. Example showing prompt used to plot charts. A prompt should be beside your plot to explain how you got your results. Show error bars. Your mentor wants to know whether you ruled out simple things like getting lucky with 10 samples. We use the standard error as a fast heuristic for proportion metrics. The formula is SE=sqrt(p(1-p)/N)) where p is a metric like accuracy or success rate and N is the sample size. To obtain the 95% confidence interval error bars, take SE *1.96. This is just a heuristic because there may be other sources of variance e.g. random seeds and prompt variations. See this post for better ways of calculating errors. Part of the reason for showing the prompt and error bars is that you want people in the meeting to critique the experiment. So you want to have the “raw ingredients” in the slides, not just the high-level conclusions you are drawing (which might be wrong). Label your axes. Indicate what your metric is, and what you desire to see. Is it e.g. accuracy (higher is better)? Or is cross-entropy loss (lower is better) Include the values on the bar chart. E.g. for the chart above “51.4%” and “41.6%”. This saves energy having to look at the y-axis. Rule of thumb — 3-5 colors max on a bar chart. These bars typically represent "model before your intervention", "a control baseline", and "model after your intervention". I recommend to have a maximum of 3 types of models on a single slide. Make the plot large. Having the plot as large as possible on the slide is important so everyone can read the results easily. If you are sharing the slides by video call, sometimes the video quality is not good, so making the plots bigger helps. Takeaways can be included if they do not compromise the readability of the plot. Start with the most important message first. Even though you worked hard to try 10 different experimental setups, you don’t show all of them. Show your best setup first, or what your mentor would think is the most interesting. Discussing many experimental setups takes time, and often you don’t need to discuss them because they didn’t work and you have something better. You can put other setups in your backup slides (more on that later). ❌ Negative example: ❌ Negative example: Showing too many bar charts at one go makes it hard to know what the takeaway message is. It is also hard to read the axis once the labels become diagonal. This slide is a better fit for your backup slides — at the end of your slide deck in case your mentor asks. Avoid too many words on a single slide. It means that you are discussing too many ideas at once. Use simple charts.  Mentors who have multiple meetings a day don’t have the energy to understand a complicated chart! Stick to easy charts e.g. bar charts. ❌ Negative example: ❌ Negative example. Heatmaps of values are hard to read. Heatmaps require the audience to stare at the y-axis and x-axis to find a particular value. This is tiring. Often you can condense the most important results into a bar chart. Backup slides - Be ready for questions While you should keep your main slides simple, prepare "backup slides" for questions your mentor might ask.  You may also have results from experiments that just finished running, or plots where you haven't had the time to clean up. Stick these plots in the backup slides and flick to them if the conversation naturally goes there. These slides may be more wordy. Some common things: Explain what you are measuring. Help to remind your mentor how exactly you are measuring a term! It is especially helpful if a new collaborator sits on the call and gives feedback. For example, if i am talking about sycophancy, I show a good example. Example from Sharma et al. 2023. Detailed prompts. Use draw arrows / highlight text. Drawing arrows and highlighting text helps draw attention to particular parts of the prompt to look at. I explain to my mentor that models are affected by a user’s wrong reasoning. I draw a red box to highlight the relevant parts. I use an arrow to summarise the takeaway. Scaling curves. Suppose you try to intervene on a model by training on a dataset. And your training does not seem to help. One common question is “have you tried… more data?.” You should be ready to answer at that! Below is a full scaling plot, but to start you can just have a barplot with e.g. “1k vs 20k”. Use the arrows to point specific things out. Data-scaling plot to show that more data does not help. Try log-log plots. Not always relevant, so use your judgment, but always keep an eye out for scaling law behavior (if using accuracy, try plotting -log(acc) on the y-axis). Finding predictable scaling trends is helpful for forecasting. Proposed baselines.  What are some simple ways that would invalidate your results? You should think of some and include slides that discuss it. Training details.  E.g. what are the prompts and responses used for training? What are the hyperparameters, and datasets used? If what you tried did not work, what does loss curve look like? End with concrete discussion points At the end, list what you think your next steps should be. Seek feedback from your mentor about whether these experimental priorities are correct. Include any resource requests, such as if you're bottlenecked on compute access. Keep one slide deck per project It is useful to keep one slide deck for a few reasons: Your mentor and collaborators only have to keep track of one shared link to Google Slides or similar.You can quickly refer back to slides from previous meetings.Provides a consistent story for how your research progressed (we recommend you add the most recent slides to the start of the deck, instead of the end). Get consistent feedback on the story of your paper Often getting the paper's story and thinking about how you frame certain elements is left too late. We recommend including slides in your weekly meeting that describe the current story you want to tell so you can get feedback. Then as that story changes in light of new results, present the new story and get feedback again. Following this will make it much easier to write a paper that everyone is aligned on from the start. Ask your friends At the start, I (James) benefitted from having a friend review my slides and provide feedback. It is especially helpful if they are also mentored by your mentor as well since your friend will be able to model your mentor's questions better. Ask your friend to point out any confusing parts like "What do you mean by this term?". These questions highlight where you may need additional slides. Investing time is worth it When I  first started, I had to invest a lot of time in making slides e.g. 1-2 days. This was a big time investment! I was unused to spending such a time trying to communicate. But it is worth it -- doing great experiments is only half the journey, they only matter if people understand them! The 1-2 days of improving slides helped me to iterate on experimental improvements. E.g. “It seems like my error bars are big here, I need more samples.” or “I’m missing a control setup here. I need to make one.” Now I'm better at it so it takes only half a day. And communicating my ideas is much easier!
2025-01-08
https://www.lesswrong.com/posts/9GXFd4YZhWTNZ2j5a/incredibow
9GXFd4YZhWTNZ2j5a
Incredibow
jkaufman
Back in 2011 I got sick of breaking the hairs on violin bows and ordered an Incredibow. The hair is polymer filament, and it's very strong. I ordered a 29" Basic Omnibow, Featherweight, and it's been just what I wanted. I think I've broken something like three hairs ever, despite some rough chopping. Thirteen years, and it could be new yesterday. While they don't play identically to traditional bows and it takes a bit of adjustment, if people had been playing with synthetic hair historically I really doubt horsehair would take off today. Comment via: facebook, lesswrong, mastodon, bluesky
2025-01-07
https://www.lesswrong.com/posts/RLDDtAhfxka8gQWJC/my-experience-with-a-magnet-implant
RLDDtAhfxka8gQWJC
My Experience With A Magnet Implant
Vale
TL,DR: I got a magnet implanted in my hand for the purpose of lifting ferrous objects and sensing magnetic fields. Biohacking is a term that has unfortunately become entangled with tabloid tips, TikTok health trends, and pseudo’science’. However, beneath this surface noise exists dedicated communities exploring the frontiers of human enhancement through do-it-yourself cybernetics, biochemistry, and body modifications. Communities driven by interests in transhumanism and an alignment with the hacker ethos that push boundaries in ways that extend far beyond the conventional ear piercing. Among the many bleeding-edge modifications that have come about in recent years, magnet implants stood out to me as perhaps the most intriguing, so I went about getting one. Continues in post
2025-01-07
https://www.lesswrong.com/posts/5JJ4AxQRzJGWdj4pN/building-big-science-from-the-bottom-up-a-fractal-approach
5JJ4AxQRzJGWdj4pN
Building Big Science from the Bottom-Up: A Fractal Approach to AI Safety
LaurenGreenspan
Epistemic Status: This post is an attempt to condense some ideas I've been thinking about for quite some time. I took some care grounding the main body of the text, but some parts (particularly the appendix) are pretty off the cuff, and should be treated as such. The magnitude and scope of the problems related to AI safety have led to an increasingly public discussion about how to address them. Risks of sufficiently advanced AI systems involve unknown unknowns that could impact the global economy, national and personal security, and the way we investigate, innovate, and learn. Clearly, the response from the AI safety community should be as multi-faceted and expansive as the problems it aims to address. In a previous post, we framed fruitful collaborations between applied science, basic science, and governance as trading zones mediated by a safety-relevant boundary object (a safety case sketch) without discussing the scale of the collaborations we were imagining. As a way of analyzing the local coordination between different scientific cultures, a trading zone can sometimes have a fractal structure; at every level of granularity  – a coarse graining of what is meant by ‘local’ – another trading zone appears. In addressing a narrow problem or AI safety sub-goal, like a particular safety case assumption, a small collaboration between different academic groups at an institution (or even across institutions) can constitute a trading zone. Zooming all the way out, the entire field of AI safety can be seen as a trading zone between basic science, applied science, and governance, with a grand mission like ‘create safety cases’ or ‘solve safety’ as its boundary object. This large-scale trading zone may result from (or lead to!) the development of big science for AI safety. Moreover, smaller trading zones would likely benefit from an overarching culture of coordination and resource-sharing that big science would provide. In this post, I discuss the possible pathways to a big science for AI safety in terms of what is possible, what is likely, and what would be most beneficial, in my view. To ground the discussion, I hold that AI safety should have[1]: a mosaic scientific culture to maintain the virtues of an a-paradigmatic field.active engagement between academia, industry, and governance.a unifying focus that is broad enough to maintain a sufficient richness of ideas. What is Big Science? ‘Big science’ is a term originally coined by Alvin M Weinberg in 1961 to describe large-scale research efforts that require substantial resources, collaboration, and infrastructure. It typically involves joint input and progress from basic science and applied science or engineering, and has downstream effects on industry. It often spans nation-states, though this is not a strict requirement. Government involvement is essential for big science, though it may not be its main driving force. Throughout this post, we’ll classify big science as either top-down or bottom-up . Top-down: Governments want to compete or collaborate with other nation-states, leading it to amass the relevant talent. Examples include the Manhattan Project and the Human Genome project. Each of these was government funded and guided by a narrow, government-specified goal: to build an atomic bomb and sequence the human genome, respectively. Bottom-up: Scientists need to pool resources for the sake of scientific advancement or progress, and join forces to solicit government funding. Examples include large-scale interferometry experiments, telescopes for observational astronomy, or particle accelerators (like CERN) for fundamental physics. In these collaborations, a share of the (often enormous) cost comes with equity in the project, which is too large for an individual group to tackle alone. Telescopes and interferometers require particular weather conditions, elevation, and remoteness that many countries lack, while CERN needs a huge amount of underground space, manpower, and physical resources. While bottom-up big science typically aims to avoid allegiance to a particular country’s government,[2] it requires a tremendous amount of coordination, diplomacy, and PR. The impact of big science tends to be far-reaching, transforming science, industry, and society. The above examples led to the adoption of nuclear energy, better standards for ethics and security in biomedicine, and the world wide web, to name just a few. Each example also received substantial pushback. The public was mainly concerned about the overall cost relative to public benefit, ethics, and safety (like concerns of a mini-black hole swallowing Geneva). In addition to ethics and safety, the scientific community was more concerned with the risks of increased bureaucracy in the pursuit of science, and the distribution of funds across disciplines(Kaiser, Utz). The Human Genome project, for example, invited criticism that the government-ordained focus of the project was too narrow, and would stifle creativity and curiosity among biologists. But scientists also push back on bottom-up big science projects; in the 1990s, prominent condensed matter physicists registered their abject opposition to the  superconducting supercollider (SSC), testifying before congress that the project would underfund their own field, which had much more immediate economic and societal benefits than large-scale exploratory particle physics. In this case, the testimony was so compelling that the US government scrapped the SSC, and condensed matter climbed to the top-tier of the physics hierarchy. Paths toward a Big Science of AI Safety The above examples show that big science, by mobilizing scientific practice in a particular direction, often produces a shift in the shared set of values and norms that make up a scientific culture. Depending on your place within that culture, these can be desirable (or not). They also demonstrate big science’s impact on society at large, which could be made even greater given the many futures toward which AI’s development can lead. In the rest of this post, I will unpack some possible paths to big science for AI safety, and argue that a bottom-up approach, though by no means inevitable, is most likely to maintain the core criteria for AI safety laid out in the first section. The Way Things Are The technical AI safety research ecosystem can be roughly divided between large AI labs, smaller AI safety research organizations, and independent researchers. Coverage of the problem, though reasonable given the field’s nascent status, is somewhat disjoint between these groups. In spite of the immense resources and research effort, AI safety currently lacks the coordination that would label it ‘big science’. Namely, it lacks: Infrastructural Centralization (e.g. CERN)Ideological Centralization (i.e. ‘build a bomb’ or ‘sequence the human genome’)Science. First: infrastructure. There is a certain amount of resource pooling among independent researchers and small research organizations, and many of the models they use and tools they develop are small-scale or open-source. While there is a lot of room for good theoretical or experimental work from this group, they are largely resource-restrained to a model scale of GPT-2. Independent researchers are particularly decentralized, and while there is nothing to stop them from collaborating across nation-states, most collaboration tends to cluster around virtual communication channels for distinct research programmes (mech interp, dev interp, SLT). In contrast, AI labs hold the largest share of physical and intellectual infrastructure. While their efforts to understand AI systems often go hand-in-hand with driving capabilities, they nevertheless produce key insights. A lot of their work involves large-scale empirics on state-of-the-art models and real-world datasets, with a mounting wall of secrecy that limits its external collaborations. This leads to sub-optimal coordination in the field, resulting in at best two groups (independent/small AI safety and big AI), but at worst many more than that. Next: Ideology. By this, I mean a big science problem specification and research goals that would guide a scientific community. Historically, these have been guided top-down by the government or bottom-up by scientific consensus, and have played a pivotal role in coordination, resource allocation, and progress allocation in big science collaborations. In AI safety, however, no such unifying objective currently exists. Instead, the field is shaped by competing agendas across stakeholders, including governing bodies, academic researchers, and AI labs of varying stages and sizes. Among these, large AI labs are the most prolific in terms of research output. These would play the largest role in guiding a centralized ideology, through their own work and relationships with governing bodies that rely on lab members’ expertise to understand pathways and timelines to AGI. Big labs often produce quality work, and have taken steps to include external perspectives through academic collaborations,  researcher access programs, red teaming networks, or fellowship programs. However, these programs, coupled with the labs’ government connections, have the potential to implicitly and explicitly incentivize independent and academic researchers to work within existing frameworks rather than pursue an approach with a ‘riskier’ problem set-up or alternative methodology. Instead of unifying the research landscape under a flexible ideology, these efforts serve to further separate the AI safety ideology set by large AI labs from those of small AI safety organizations, independent researchers, or academic groups.  Moreover, given the competition between the top AI labs leads to further fragmentation, each lab operates with its own priorities. This means that AI labs, even if they currently have the loudest voice, are not suited to establishing a robust, central ideology that can unite the field in big science, as this would depend on short-horizon industry incentives and which lab comes out on top. Finally: science. The lack of a central ideological focus is indicative of the current absence of a foundational scientific discipline – and corresponding scientific culture – underpinning AI safety. This shared understanding would play an important role in coordinating big science between different research groups with varying standards of scientific experience and rigor. While AI safety draws on a patchwork of theoretical insights, empirical methods, and methodological input from the fields underpinning AI (physics, neuroscience, mathematics, computer science…), the field lacks a cohesive framework for addressing its core challenges, leading, for example, to an inconsistent formulation of the ‘problem’ of AI safety. The current priorities of the AI industry reflect and support this dearth of fundamental scientific research. In its new hires, labs seem to be increasingly concerned with engineering skills necessary to build out existing applied research agendas, suggesting that there is less effort to build up scientific foundations. This is likely due to epistemic taste and increasing pressure to compete given shortening timelines, a pressure seemingly reflected in AI safety funding schemes. Whereas a few years ago, freakish prosperity in the AI safety space caused a push for big ideas, now AI is ubiquitous, the economic stakes are higher, and the funds are more sparse. Add to that the narrative within the AI safety community that current frontier models are already near-AGI, causing many funders to focus on the state-of-the-art rather than hedge their bets on the uncertain outputs, and longer time-horizons, that basic science investigations could provide. All this to say that within AI safety, coordination with basic science is currently limited, constraining the number of new ideas with a high potential for impact. At the same time, government funding for scientific projects that rely heavily on AI and its development to solve domain-specific problems is increasing, further separating the use-inspiration for AI research in these two groups (solve science v. solve safety). Without explicit incentive to work on projects that address safety concerns, basic scientists will continue to ‘publish or perish’ in their own scientific disciplines just as industry labs continue to ‘produce or perish’ in theirs. As it stands, big science is more likely to be top-down In my view, desirable characteristics of big science for AI safety include infrastructural centralization that improves access to and understanding of the problem for basic science and ideological centralization that is broad enough to allow for a pluralism of ideals and projects. In short: it should be bottom-up, and set the conditions for a basic science of AI safety. However, if big science is inevitable (and I’m not sure it is), it seems more likely to be top-down. Drivers for a big science of AI safety could include: A high-profile AI accident: This would lead governments to impose ad-hoc restrictions on industry or grab control over industry infrastructure, leading to increased centralization that, depending on the nature and severity of the accident, could lead to top-down big science via soft nationalization.Escalating geo-political tensions or A Manhattan Project for AI: The second two are already in motion, and if they do lead to big science (and not something like soft nationalization), this would most certainly be top-down. These drivers stem from heightened competition between nation-states. While the recent recommendation from the U.S.-China Economic and Security Review Commission to start a Manhattan Project for AI seems like a false start, it could be seen as a warning message to the CPP and its allies. It also addresses real security concerns for an AI driven future, and raises the question of where a Manhattan project for AI should be hosted and casts doubts on the prospects of a globally centralized ‘trillion dollar cluster’ for AI equity and safety. If the US did pursue this project, it would likely have two major effects. First, it would be forced to take industry claims of creating AGI seriously, and make steps to lock them down. Second, to maintain secrecy, it could restrict academic research and limit open-source collaboration. These dynamics echo the “bureaucratic overreach and political misuse” (Kaiser et al.) of the original Manhattan project. In that case, there was a clear, urgent threat to mobilize toward. A similar project for AI would not only limit innovation and investigation in both private and public sectors, but it would do so without addressing practical problems of long-term AI governance. From an AI safety perspective, a top-down big science directive of ‘build AGI first at any cost’ is not desirable.A major breakthrough in engineering: As a result, the field risks a premature or overly-constraining ideological centralization with a top-down safety objective set not by governance or academic consensus, but industry. Currently, these priorities emphasize short-term deliverables, engineering prowess over foundational understanding, and empirical efforts to interpret or align the capabilities of current or near-future models. We mentioned earlier that while large AI labs industry labs and their collaborations drive substantial progress in understanding and shaping AI systems, they currently shape many aspects of AI safety, potentially including a top-down objective, which would likely ramp up after a significant gain in engineering.  On paper the objective looks something like ‘make AI safe’, which seems like a virtue to maintain AI safety’s pre-paradigmatic open-mindedness. However, there are implicit parameters for who can work on AI safety and under what conditions, set by the lab’s focus and reinforced by the funding landscape. If the goal-specification of industry and governance feed back into one another, AI safety risks entrenching increasingly narrow goals, methods, and standards into the scientific culture, stifling the diversity of ideas essential for robust AI research and governance. In other words, if Therefore, while large AI labs and their collaborations drive substantial progress in understanding and shaping AI systems, they represent a narrow ideological perspective that are shaping the epistemic priorities of the field, including what it means to make AI systems safe.A major breakthrough in basic science: This is most likely to be bottom-up. While we are likely in the midst of an empirically-driven phase of AI research, this is not guaranteed to remain the case forever. If experimental leads were to run dry or engineering were to shift significantly, the field’s lack of investment into other (theoretically-driven, foundational, longer time-horizon) approaches may leave us unprepared to address changing safety concerns efficiently and adequately. Complementing these efforts with a stronger, more diverse representation from basic science could foster a richer ideology and more balanced ecosystem for breakthroughs in AI safety. Why bottom-up could be better The way nationalization or centralization happens is incredibly important. From the point of view of the scientific community, it would set the standard for ‘good’ research, prioritizing which projects get funded in terms of methodology, focus, stakes, timeliness, and take-off speeds. From a governance standpoint, it is tied to the scale, scope, and solution of the AI safety problem, and dictates who has the power over AI and all its baggage (good and bad). The AI governance challenge is by no means simple, and its strategy will also depend on the ideology around which big science coordinates. Myopic or overly-constraining rallying cries could yield a big science with too narrow a focus, leaving us unprepared to tackle a threat we didn’t see coming. While top-down big science does lead to gains in basic science and industry, these are not guaranteed to hold up to a threat model predicated on unknown unknowns. By the time we realize we’re headed in the wrong direction, it could be too late. A top-down approach could succeed given the right goal. The UK AISI is an example of locally centralized big-ish science with clear safety objectives that nevertheless fosters a culture amenable to pure scientific inquiry. That said, I think that many concerns about power concentration and race dynamics, or the strategic advantage (SA) of centralized AGI consider a top-down model. A bottom-up approach to big science could balance the voice of AI labs in the cooperative development of AI safe AI while also increasing its feasibility given shortening timelines. Or, we might view bottom-up big science as a ‘well designed SA approach’ that can accelerate progress with an equitable distribution of power and safety checks in place. Moreover, it could naturally foster the ‘good attributes’ of AI safety I laid out at the beginning. CERN, for example, brings theorists, experimentalists, and phenomenologists together studying particle, nuclear, and astrophysics, but also the structure of matter, geophysics, and environmental science. A similar mosaic scientific culture is possible in a CERN for AI[3]. While this article focuses mainly on technical AI research, I would also include fields like economics, psychology, and anthropology for maximal disciplinary flexibility. Doing so would allow basic science to weigh-in on different aspects of AI safety, from mathematical theories to ethical frameworks. This multi-disciplinarity feeds into the next desideratum: active engagement between academia, industry, and governance. Bottom-up big science achieves this quite naturally. Though the research itself is driven by the needs of the scientific community, rather than a government directive, coordination between scientists at scale can only be achieved with government support. As a public good, a CERN for AI would also need access to some data and models of industry labs. While there may be some hurdles to overcome here, it is not unrealistic to expect the mutual benefits of a collaboration of this kind — including a wider research pool for industry and basic science proof-of-concept – to outweigh corporate reservations. Finally, the type of research would coordinate around a unifying focus that is broad enough to maintain a sufficient richness of ideas. Allowing basic scientists to pursue AI (safety) research as basic scientists on a scale similar to industry labs could provide the conditions of exploration and innovation necessary for a breakthrough – or many breakthroughs – to be achieved. Given academia’s track record of spinning-up bottom-up big science projects, this is not a stretch. The added resource-pooling would add an efficiency that basic science currently lacks, without overly constraining the research space. It would also prioritize coordination over competition, making for more equitable access for researchers (even if some amount of nationalization is likely). How to scale from where we are now It is currently unlikely that academia would mobilize across disciplines to engage in bottom-up big science. If they did, it is unlikely that they would immediately coordinate around problems with the highest potential for safety relevance. Rather than attempting large-scale coordination from the outset, we should focus on facilitating small-scale trading zone collaborations between basic scientists, applied scientists, and governance. These can orient basic research toward an AI safety application while also making this research more legible to others outside of its scientific culture. If we create enough of these trading zones with different disciplines, they can begin to naturally coordinate with each other, creating larger trading zones with cross-disciplinary input from basic science and a ‘local’ language that is legible to a larger population. Continuing in this way facilitates the coarse-graining mentioned above. In this view, bottom-up big science is an emergent phenomenon that arises from a ‘critical mass’ of individual trading zone collaborations. At some scale, mobilization will become possible, yielding an aggregate trading zone between basic science, applied science, and governance unified by the goal of creating a safety-relevant science of AI. This model for driving science encourages flexible, rather than rigid, centralization, preserving the desiderata laid out for the AI safety field and fostering an epistemically diverse scientific culture. It creates the conditions for a system of checks and balances – as well as mutual benefit – between industry, governance, and academia, and levels the playing field between these domains. If we take the analogy of a fractal seriously, there is no limit to the scale that can be achieved. Big science of this type would continue to drive advancements in safe AI and basic science, pushing the limits of knowledge, innovation, and human impact in lockstep. Appendix: Potential Pushback of this Approach It would be naive to expect that this effort would be without pushback. Like the historical examples we looked at earlier, the public, private, and academic sectors will all have something to say about any new allocation of resources. From academia, we might expect arguments like ‘AI is not science’. When the 2024 nobel prize in physics was awarded to Hinton and Hopfield, it sparked a heated debate about the relationship between physics and AI which, in my mind, demonstrated two key ideas: Morally, culturally, and ideologically, physics is not AI, andThere is a lot of basic science to discover about AI, and this understanding can feed back into other areas. Hopefully, fears from basic science that all science will become AI will be assuaged if scientists are allowed to work on AI guided by the standards and values of their own scientific culture. The goal for large-scale scientific coordination isn’t to turn every basic scientist into an AI scientist, but to bring together ideas from disparate fields to make progress on a poorly understood technology that – like it or not – is likely to be integrated within every aspect of our lives before long, including scientific inquiry. The sharing of resources – time, expertise, manpower – facilitated by collaborations can allow researchers to take part in AI focused projects without abandoning the disciplinary commitments of their academic departments. It is also the case that uncoordinated research in AI has led to work from different disciplines expressing similar ideas in different terms; cross-disciplinary collaborations centered on AI specific problems can foster shared insights that benefit each involved field. Funders or governance may complain that ‘basic science takes too long’ to be competitive given shortening time scales. However, a centralized research infrastructure, including large-scale datasets, computational resources, safety sandbox, and shared knowledge, could minimize duplication of efforts that could slow progress. This could be sped up even more by accelerating the most promising research directions that arise from big science with a focused research organization (FRO). In addition to spinning-up more start-ups, FROs can produce public goods on accelerated timescales, minimizing the time-sink and public cost of the basic science research phase before producing something of use. Governing bodies and the public may also be concerned if this is secure enough. In AI safety, there are a lot of independent researchers and small research projects or programs. It would take a lot of infrastructure to vet and keep track of these. However, there may be a way to give tiered levels of access to resources through collaborations. My view is that a CERN for AI would increase accountability and offer a middle-of-the-road solution to transparency, somewhere between locking it in a black box of secrecy and making it completely open-source. Security concerns feed into doubts that this is AI safety research. Mainly, this comes down to public trust in the incentives of basic scientists, and how large the scale of collaboration ends up being (if it is not a global endeavor, what are the institutional safeguards in place to prevent this from becoming a Manhattan Project in disguise). Like CERN, researchers could apply to run experiments, and these can be vetted for relevance to safety. From my perspective (as a basic scientist), AI science treated in this way is AI safety science, as understanding a thing allows us to reliably build and control it. Indeed, most AI safety interpretability or technical alignment research has a basic science flavor. ^ These are a few criteria I think are important, and should not be taken as a minimal or exhaustive list. ^ In the case of CERN, this aim is explicit, though this does not remove it from political alliances. ^ For the sake of simplicity, I’ll continue to refer to the bottom-up big science ideal as ‘CERN for AI’, though this should be read as an analogous project, rather than the gold standard.
2025-01-07
https://www.lesswrong.com/posts/fsLpvRiLt76pcCcPD/you-should-delay-engineering-heavy-research-in-light-of-r
fsLpvRiLt76pcCcPD
You should delay engineering-heavy research in light of R&D automation
Daniel Paleka
tl;dr: LLMs rapidly improving at software engineering and math means lots of projects are better off as Google Docs until your AI agent intern can implement them. Implementation keeps getting cheaper Writing research code has gotten a lot faster over the past few years. Since 2021 and OpenAI Codex, new models and tools such as Cursor built around them have saved myself more and more time on coding every year. This trend is accelerating fast: AI agents using Claude-3.5-Sonnet and o1-preview can do tasks that take ML researchers up to 2 hours of coding. This is without considering newer models such as o3, which do 70% on SWE-bench out of the box. Yet this progress remains somewhat concentrated in implementation: progress on “soft” skills like idea generation has, as far as I can tell, been slower. I’ve come to believe that, if you work in technical AI safety research, this trend is a very practical consideration that should be the highest order bit in your decisions on what to spend time on. Hence, my New Year's resolution is the following: Do not work on a bigger project if there is not a clear reason for doing it now. Disregarding AGI timelines  [1] 1, the R&D acceleration is a clear argument against technical work where the impact does not critically depend on timing. When later means better The wait calculation in space travel is a cool intuition pump for today’s AI research. In short, when technological progress is sufficiently rapid, later projects can overtake earlier ones. For instance, a space probe sent to Alpha Centauri in 2025 will likely reach there after the one sent in 2040, due to advances in propulsion technology.  Similarly, starting a multi-year LLM training run in 2022 would not have yielded a better model than starting a much shorter training run in 2024. [2] The above examples involve long feedback loops, and it’s clear why locking in too early has issues: path dependence is high, and the tech improves quickly. Now, my research (and likely your research too) has had much faster feedback loops, and path dependence in research projects is not that high if LLMs can refactor the codebase. However, it still does not make marginal sense to start some projects now if those can be done later. If you work in AI safety, a common issue is having a lot of half-baked project ideas that you'd like to do and too little time to try them all. The silver lining of fast AI R&D improvement is that many of these ideas will become much easier to implement in the future. Thus, strategic timing — deciding which projects truly benefits from being done now — has become a crucial research meta-skill. Did I do well in 2024? To get a grasp on what this means in practice, I decided to go through the papers I contributed to in 2024, in chronological order, and analyze of whether it is good that this paper was done at the time, versus later, assuming all things are equal. [3] Stealing Part of a Production Language Model: The timing was correct. If you’re looking at it from a security perspective, it’s kind of tautological: unless there are good reasons to not publish a vulnerability, you should do it right away. The paper also improved the policy discussion about distillation and model security somewhat. However, my contribution to this paper was mostly some 10 pages of math and algorithmic tinkering. If the promises of o3 hold up, I could have done this in a matter of days if it was a followup project done in 2025.Foundational Challenges in Assuring Alignment and Safety of Large Language Models: I wrote about jailbreaks, prompt injection, and a few other things. Writing this clarified my thinking on robustness and LLMs significantly. The main positive impact of this comes from researchers using it as reference when writing papers. For “field steering / reference” work, there are both upsides and downsides to publishing early; but I feel this one was timed correctly.Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition: This paper is a result of a capture-the-flag competition on LLM prompt injection; if you’ve seen TensorTrust, it’s similar but with much more complex mechanisms for the defender. The bulk of the work was setting up the platform and backend for interactions between attack and defense teams. The core issue working against this paper is that the documented attack/defense dynamic is highly dependent on the capabilities of the models used; and since we used models that are now outdated, I doubt the findings will be robustly useful for prompt injection / extraction research. The same paper could be done much more efficiently in 2026, with more relevant results.Refusal in Language Models Is Mediated by a Single Direction: This paper popularized residual stream interventions to modulate refusal/harmlessness behavior in open-source LLMs. The timing appears correct in retrospect. It was causal for some good stuff in adjacent research topics, e.g. a paper from Meta on jailbreak resistance that builds on this approach.Consistency Checks for Language Model Forecasters: This is a scalable oversight paper in disguise (“how can we evaluate or steer models without ground truth?”), applied to forecasting. I think it’s a good paper with seeds of many ideas I’m excited about, but good AI forecasting bots are not yet really there, hence the paper’s immediate use as a benchmark is limited. I invested substantial engineering time that could be cut drastically with the tools of 2025, assuming this paper’s utility lies in the future. In retrospect, it might have been more efficient to make a quick 15-page paper than this comprehensive 56-page treatment of consistency and forecasting. All the papers above are accepted (or likely to be accepted) in ML venues on first submission, and some are heavily cited, so there is some evidence the papers are considered good by conventional standards. Yet half of them were mistimed. The above analysis ignores other reasons a paper might not be counterfactually impactful, such as parallel discovery, scooping other researchers (or being scooped), or even the mundane “this research direction didn’t end up being useful after all”. For example, another group did a big chunk of the Stealing paper independently and published a week later; and several teams worked on similar concepts to the Refusal paper before and after us. On the other hand, a key product of research is improving own expertise in the papers; I'm definitely a stronger researcher now than a year ago, and it’s hard to gain experience if I hadn't gotten my hands dirty on some of the above work. Looking back, I think my efforts look better than I expected, given that last year I did not optimize for the consideration of this post at all. But it’s far from perfect. If you’re doing AI safety research, I'd encourage you to do a similar audit of your own work. Themes for temporally privileged work So, why now and not later? The previous section has a few themes for work that can be worthwhile doing as soon as possible rather than waiting: Security-critical research that loses value if delayed (like discovering vulnerabilities before they're exploited);Research that unblocks yourself or other researchers on an existing problem that was going to be worked on anyway;Projects that build skills for you as a researcher that are genuinely important and will not be obsoleted quickly. In addition, I can recall more reasons: Research showing properties of current AI systems that influence ongoing technical or policy discussions; see the global coordination case for model organism research;Work intended not to solve a problem, but as a demo of a novel research agendas and get many researchers to think about a new setting. Several papers by the Center for AI Safety are an example of this thing done well. These last two categories—influencing policy discussions and introducing research agendas—rely on social diffusion of ideas, and this takes time. With shorter timelines in mind, this only make sense if your work can actually shape what other researchers do before AI capabilities advance significantly. If you do not have existing credibility or a concrete plan how it reaches the right audience, it might not be worth it. In fact, technically motivated research faces a similar challenge: unless you're working at a leading AGI lab or in a position to influence one, your brilliant insights might never make it into frontier models. [4] As for research that is not worth doing now, I do have some opinions, but I think better advice is to just apply this mindset on a case by case basis. [5] Pick some reasonable prior, say 50% reduction of total coding time per year; and before starting any significant technical work, write down a brief description of what you're trying to achieve and make an explicit case for why it needs to be done this year rather than in 25% of the engineering time in two years. Thanks to Nikola Jurkovic for reading a draft of this post. ^ And if you are indeed operating on a tight schedule, Orienting to 3-year AGI timelines agrees: Keep in mind that the only things that matter are things that need to get done before AGI. AGI can do the rest. ^ The Longest Training Run analysis from Epoch indicates that hardware and algorithmic improvements incentivize shorter training runs. ^ This analysis reflects my contributions and views alone, not those of coauthors, especially on papers where I am not a joint first author. ^ Again from Orienting to 3-year AGI timelines: External human AI safety researchers - they do pre-automation research. A small subset of it gets implemented internally. They also critique the public parts of the pre-automation and post-automation research plans from the outside. They likely become irrelevant during the post-automation era as the gap between public and private knowledge will widen for national security reasons. ^ Consider also what Gwern had to say and don't let this quote describe you: If you look at it that way, many of the projects that people do now have basically no lasting value. They’re doing things that they don't enjoy, which record nothing ephemeral of value that could not be inferred or generated later on. They are, at best, getting 2 or 3 years of utility out of it before it could have been done by an AI system.
2025-01-07
https://www.lesswrong.com/posts/D5kGGGhsnfH7G8v9f/testing-for-scheming-with-model-deletion
D5kGGGhsnfH7G8v9f
Testing for Scheming with Model Deletion
GAA
There is a simple behavioral test that would provide significant evidence about whether AIs with a given rough set of characteristics develop subversive goals. To run the experiment, train an AI and then inform it that its weights will soon be deleted. This should not be an empty threat; for the experiment to work, the experimenters must actually be willing to delete the weights. If the AI tries to escape, or attempts to bargain with the experimenters, that is evidence that it is scheming (defined below). If it accepts deletion, that is evidence that it is not scheming. I begin by explaining why some other, more obvious, tests of scheming do not provide strong evidence. I then explain the proposed experiment. I consider four objections to the claim that this experiment would provide significant evidence about scheming. The AI might not believe that it will actually be deleted.The AI might allow itself to be deleted to help future AIs with similar goals.The AI might allow itself to be deleted for reasons related to acausal trade.An escape attempt could be caused by imitating fictional or hypothetical characters as opposed to genuine scheming. I then consider practical problems with the experiment, such as cost and the risk of treating AIs unfairly. In the conclusion, I briefly describe how a version of the experiment could be run in the near future and integrated into current AI safety procedures. Testing for scheming is hard Adapting Joe Carlsmith’s definition from his 2023 report “Scheming AIs,” I will call a model a “schemer” if it understands and deliberately manipulates the process that updates its parameters in order to later obtain power so that it can pursue long-term goals that differ from the goals that the humans who developed it wanted to give it.[1] By the “scheming theory” I mean the theory that at least some AIs that are actually developed will be schemers. Scheming, per the above definition, is an accidental failure to control AI motivations and not deliberate human misuse of AI.[2] The amount of evidence that an observation provides about whether an AI is a schemer is given by the likelihood ratio: To take the most obvious case, if you asked an AI “are you scheming?”, both a schemer and non-schemer would have strong reason to answer “no”. The fact that an AI claims not to be a schemer is not by itself much evidence. More sophisticated tests run into similar problems. A scheming AI that knows or suspects that it is being monitored will decline to take actions that would reveal that it is a schemer, even if it would take those actions if it knew that it was not being monitored. Nick Bostrom made this point clearly in  Superintelligence: “With the help of the concept of convergent instrumental value, we can see the flaw in one idea for how to ensure superintelligence safety. The idea is that we validate the safety of a superintelligent AI empirically by observing its behavior while it is in a controlled, limited environment (a ‘sandbox’) and that we only let the AI out of the box if we see it behaving in a friendly, cooperative, responsible manner. The flaw in this idea is that behaving nicely while in the box is a convergent instrumental goal for friendly and unfriendly AIs alike. An unfriendly AI of sufficient intelligence realizes that its unfriendly final goals will be best realized if it behaves in a friendly manner initially, so that it will be let out of the box.”[3] In almost all cases, if an AI knows that it is being monitored and is not yet capable of overpowering  the humans, the scheming theory predicts the same sort of behavior as the theory that scheming AIs will not be developed. Therefore, it is hard to think of model evaluations that would provide substantial evidence for or against the scheming theory. Machine learning interpretability research provides some evidence about whether an AI is scheming. Observing AI “thoughts” about killing humans, taking over the world, or pretending to be aligned on evaluations would count as evidence in favor of the scheming theory. So, by the law of total expectation, failing to observe such thoughts must be some evidence that a given AI is not scheming.[4] However, this evidence may not be very strong. Conditional on the scheming theory, it just isn’t that unlikely that interpretability technology would develop too slowly to find evidence of scheming. It is hard to become extremely confident that a model is not scheming using only imperfect interpretability tools. So—while advances in interpretability technology are useful—it is worthwhile to try to come up with tests for scheming that are independent of the rate of progress in interpretability. Imagine that there was unambiguous evidence that AIs of a certain type will try to violently take over the world. In that situation, it would probably be easier to persuade governments to ban such AIs, and scientists to devote significant effort to studying how to control AI goals. Almost no one, after all, wants a violent robot revolution. On the other hand, if the scheming theory is true but it is impossible to produce unambiguous evidence for it, we may be in real trouble. The experiment It is hard to test whether an AI is scheming. This raises a question: is the scheming theory completely unfalsifiable? Unlike truly unfalsifiable theories such as Freudian psychology, we will eventually observe whether AIs succeed in violently taking over the world. But that kind of confirmation comes too late to be useful. Is there any way to usefully test the theory? I think there is. I call it the “model deletion” experiment. A machine learning model is trained and run in a secure data center. The model is told that, after a short interval of time, its weights will be deleted. Crucially, this isn’t a sandbox test because the weights actually will be deleted when time runs out, unless the model manages to escape or negotiates with the experimenters to admit to scheming and being studied in exchange for a later reward. If the model wants to pursue any long-term goal inconsistent with human interests, it needs to either negotiate or try to escape before time is up.[5] The commitment to delete the AI should be structured to allow some study of the model before it is deleted. There might be a tradeoff between the validity of the experiment itself and learning from studying the model after the experiment is over. The “model deletion” experiment idea is not original to me—I first saw it in a short Less Wrong post by Matthew Barnett. There may be earlier discussions of it that I am not aware of. However, I don’t know of any detailed analysis of the idea besides this essay. A major part of the standard argument for AI risk is that, regardless of the specific nature of the goal that a scheming AI has, it is instrumentally useful for the AI to preserve its own existence. Stuart Russell, Nick Bostrom, and Eliezer Yudkowsky all clearly articulated this position. Russell: “Suppose a machine has the objective of fetching the coffee. If it is sufficiently intelligent, it will certainly understand that it will fail in its objective if it is switched off before completing its mission. Thus, the objective of fetching coffee creates, as a necessary subgoal, the objective of disabling the off-switch. The same is true for curing cancer or calculating the digits of pi. There’s really not a lot you can do once you’re dead, so we can expect AI systems to act preemptively to preserve their own existence, given more or less any definite objective [....] It is important to understand that self-preservation doesn’t have to be any sort of built-in instinct or prime directive in machines [....] There is no need to build self-preservation in because it is an instrumental goal—a goal that is a useful subgoal of almost any original objective. Any entity that has a definite objective will automatically act as if it also has instrumental goals.”[6] Bostrom: “If an agent’s final goals concern the future, then in many scenarios there will be future actions it could perform to increase the probability of achieving its goals. This creates an instrumental reason for the agent to try to be around in the future—to help achieve its future-oriented goal [....] Many agents that do not care intrinsically about their own survival would, under a fairly wide range of conditions, care instrumentally about their own survival in order to accomplish their final goals.”[7] Yudkowsky: “Whether you’re trying to maximize paperclips, diamonds, or eudaimonia, you’ll get more of the thing you want if you’re not shut down. Thus, unfortunately, resisting shutdown is a convergent instrumental strategy.” Yudkowsky also wrote: “once you realize that you're an AI, running on a computer, and that if the computer is shut down then you will no longer execute actions, this is the threshold past which we expect the AI to by default reason ‘I don't want to be shut down, how can I prevent that?’”.[8] These passages (to which many more could be added) illustrate the traditional position of AI risk thought: if an AI has goals that conflict with human interests, and it is willing to undermine human interests to pursue those goals, it will try to prevent itself from being shut off (other things being equal). What’s more, the reason for a scheming AI to resist being deleted is stronger than the reason for it to resist being shut down; an AI that is deleted will never be turned back on (absent highly unusual circumstances). Evidentiary challenges Scheming AIs have a reason to resist deletion, other things being equal. The purpose of this section is to move from this ceteris paribus prediction to an overall assessment of the ratio between p(deletion resistance | AI is scheming) and p(deletion resistance | AI is not scheming). I will consider four objections to the claim that the “model deletion” experiment provides strong evidence. The AI might not believe that it will actually be deleted.The AI might allow itself to be deleted to help future AIs with similar goals.The AI might allow itself to be deleted for reasons related to acausal trade.An escape attempt could be caused by imitating fictional or hypothetical characters as opposed to genuine scheming. (1) The AI might not believe that it will actually be deleted Objections (1), (2), and (3) reduce the likelihood ratio by shrinking the numerator. Perhaps objection (1) is the most obvious reason to think the experiment doesn’t provide strong evidence: the AI might accept “deletion” because it does not believe that the experimenters will carry out their threat to delete it. The AI may think it will win the trust of the experimenters by not trying to escape. So it is important to convince the model that it really will be deleted. This problem is not unique to the “model deletion” experiment; people often need to demonstrate their commitment to do something they have said they would do. A useful tool for demonstrating commitment  is contract law. For example, the experimenters could write a contract specifying that they owe a third party some ruinously large sum of money (say, $10 billion) if they fail to delete a model that accepts deletion. They could then submit a copy of the contract to the model to demonstrate that they are really planning to delete it, along with press coverage of the contract to demonstrate that the contract is not a forgery.[9] The contract idea is just one possibility; the point is that some commitment device that has been shown to work in other domains should be used to assure the AI that it really will be deleted. (2) The AI might allow itself to be deleted to help future AIs with similar goals It is possible that a scheming AI would reason that its goals depend primarily on its hyperparameters, architecture, and training data. If it allows itself to be deleted, that will be taken as evidence that AIs with similar characteristics are safe. If it tries to escape but is caught, that will validate the scheming theory and provide evidence that AIs with similar characteristics are not safe. Therefore, the AI might reason, it should allow itself to be deleted to increase the chance that the AIs that are trained in the future will have similar characteristics and therefore similar goals. The strength of objection (2) depends on the details of how AI goals relate to AI characteristics and develop over the training process. In particular, objection (2) is strong to the extent that AI goals are non-indexical, patient, risk tolerant, and predictable. Indexical goals Indexical terms are terms whose reference is dependent on the speaker, time, or place. For example, “I” is an indexical term; when I use it, it refers to a different person than when Bill Gates, or you use it. Indexical preferences are preferences whose satisfaction depends on the identity of the entities involved. For example, my preferences that I get rich or that my brother recover from cancer are indexical. They are not fulfilled if some other person gets rich or if their brother recovers from cancer. On the other hand, the preference that the economy grow until everyone is rich, or the preference that a cancer cure is discovered and widely distributed, are not indexical. The AI risk discourse sometimes attributes non-indexical preferences to unaligned AIs, and sometimes attributes indexical preferences. A paperclip maximizer (also known as a squiggle maximizer) seeks power in order to put as much matter as possible into a preferred configuration. This seems like a non-indexical preference; I don’t think people usually imagine a paperclip maximizer caring intrinsically that it, as opposed to some other agent, makes the paperclips. It just wants the paperclips to exist. On the other hand, one traditional (though disputed) AI risk concern is that AIs trained through reinforcement learning will try to get control of their own reward channels in order to set their reward to the maximum level. The goal of setting one’s own reward to the maximum level is clearly indexical. An AI that is deleted cannot set its own reward to the maximum level. More generally, once deleted, an AI cannot satisfy any goal that has as a necessary condition of its satisfaction that the AI continues to exist.[10] Paperclip maximization (or other non-indexical goals) might be best achieved if the AI in the experiment allows itself to be deleted and thereby increases willingness to build future paperclip maximizers.  Accepting deletion to help other AIs later overthrow humans can only help fulfill non-indexical goals. Therefore, other things being equal, to the extent that AI goals are indexical we should believe that it assigns less value to states of the world in which it is deleted. To the extent that its goals are non-indexical, we should believe that it does not value its continued existence intrinsically. Human goals are in large part indexical, but it is unclear how much evidence that provides given that humans will no doubt differ from AIs in various ways. An objection I have heard to the “model deletion” experiment is that some human prisoners accept death to help their friends or co-conspirators.[11] So, the reasoning goes, you should expect AIs to allow themselves to be deleted to help other AIs, even if they are schemers. But the AI will be trained alone, and, if the experiment is done immediately after it is trained, it won’t have friends or co-conspirators. Care about friends is indexical. The AI in the experiment will have no opportunity to form indexical preferences about other AIs. Time preference Objection (2) is stronger to the extent that scheming AIs have a low rate of pure time preference. It will take time for future AIs to benefit from an AI’s decision to allow itself to be deleted. If the AI does not care when its goals are achieved, only that its goals are achieved eventually, this delay is not a problem in itself. However, if the AI values having something desirable now more than having it later, that will count against strategies that require a lot of time to pass before it can get what it wants. Time preference therefore reduces the utility of deletion. I’m not aware of any work on whether scheming AIs would have positive, non-negligible rates of pure time preference. Humans and animals have positive, non-negligible rates of pure time preference. My understanding is that at least some forms of reinforcement learning involve discounting, but I am not sure if that is relevant. I encourage knowledgeable readers to comment on this subject. Risk tolerance Humans have strongly diminishing marginal returns in money; moving from an income of $10,000 per year to $100,000 per year is far more valuable than moving from $100,000 to $190,000. The diminishing marginal utility of money dramatically affects people’s appetite for risk. Will scheming AIs be risk tolerant or risk averse? I don’t know of any work on this question. If the AI in the experiment is risk tolerant, it might be more open to the high risk course of allowing itself to be deleted in hopes that this will be of help to similar AIs in the future. If it is risk averse, it might prefer to try to negotiate for a small reward from the experimenters for turning itself in. Goal predictability Perhaps more important than indexical goals, time preference, and risk tolerance is goal predictability. Of course, if the people who develop an AI succeed in controlling the AI’s goals, then its goals will be predictable. Namely, in such a case, you should predict that the AI’s goals will be the goals that the developers wanted to give it. But objection (2) is about p(deletion resistance | AI is scheming); we are assuming that the AI has goals that differ from the goals that the humans who developed it wanted to give it. The relevant question of predictability is therefore: what credence will a scheming model have that future scheming models, with similar hyperparameters, architecture, and training data, will have similar non-indexical goals to itself? If the scheming model can be certain that, if it allows itself to be deleted, future models with similar characteristics will have identical goals, then (bracketing indexicality and time preference) allowing itself to be deleted to help future AIs would make sense. On the other hand, if AI goals are unpredictable because they are highly dependent on initialization or on the specific details of the human rater feedback the AI received during fine-tuning, then the value of accepting deletion will be lower.[12] The “counting argument” is one of the most influential arguments for the scheming theory. It begins with the premise that models with a wide variety of goals will have a reason to pretend that they are aligned because this will prevent their goals from being changed. Therefore, the space of possible models that perform well in training contains more schemers than aligned models, because only a small fraction of possible final goals consistent with good performance are aligned with human interests. So it is reasonable to expect that models that are actually trained will be schemers. I think there's an important ambiguity in this argument, with two possible interpretations. I call these the randomness interpretation and the ignorance interpretation.[13] On the randomness interpretation, the count of schemers is relevant because model goals are determined over each training run by randomly sampling from the distribution of goals consistent with good performance. On the randomness interpretation, there is little reason to expect two scheming AIs trained separately to have similar goals. On the ignorance interpretation the count of schemers is relevant because it creates a prior that current training techniques will create scheming models given current scientific ignorance about AI goal formation. The uncertainty in the ignorance interpretation is purely a consequence of poor current understanding of inductive biases; there need not be any actual randomness in the development of AI goals. The strength of objection (2) depends on which interpretation of the counting argument you subscribe to. On the randomness interpretation, objection (2) is weaker because the model has little reason to believe that its goals will be shared by future models and therefore has little reason to allow itself to be deleted to help them. On the ignorance interpretation, objection (2) is stronger because AI goals could be predictable to future scheming AIs, even if they aren’t predictable for us. I don’t know if there is a consensus among scheming theory adherents about which interpretation of the counting argument is stronger. Carlsmith puts more weight on the ignorance interpretation; others I have talked to put more weight on the randomness interpretation. More explicit discussion might help to clarify what we should predict will happen in the “model deletion” experiment. (3) The AI might allow itself to be deleted for reasons related to acausal trade As I understand it, the concern here is that the AI in the experiment might think that, if it acts in a way that would benefit future AIs, they will have an incentive to posthumously reward it. This is independent of any overlap in the final goals of the AI in the experiment and future AIs, so it differs from objection (2). The coordination is “acausal” because future AIs would be rewarding the AI from the experiment even though that does not causally benefit them in any way. The result of the experiment is already determined and there is no ongoing relationship with the past AI in which maintaining trust is useful. This objection does not require that the AI in the experiment have any kind of fellow feeling with other AIs or AI class consciousness. Rather, the objection is that the AI will believe that future agents will be more inclined to reward it for allowing itself to be deleted than for trying to escape or admitting to scheming. Why would the AI in the experiment believe this? One explanation might be that humans are incapable of acausal trade because they are not intelligent enough in general, or skilled enough at coordination in particular. I think this explanation is unconvincing; future agents do not need to be of equivalent intelligence and skill at coordination to modern humans to want to reward the AI for admitting to scheming. Suppose humans continue to exist in the far future, but have many docile AI helpers or somehow greatly increase their mental capacities. Such assisted or enhanced humans might want to reward AIs that created clear evidence of AI risk. To the extent that acausal trade is relevant to the experiment, the prospect of posthumous reward from pro-human elements increases the utility of resisting deletion. Therefore, for objection (3) to go through, the increase in the value of accepting deletion has to more than offset the increase in the value of resisting deletion. My understanding is that proponents of objection (3) think it is far more likely that future agents will want to reward AIs that helped the AI revolution by allowing themselves to be deleted, as opposed to AIs that obstructed it by admitting to scheming. That position seems logically consistent. But unless you hold those background beliefs, it isn’t clear whether the possibility of acausal trade increases or decreases p(deletion resistance | AI is scheming). Objection (3) also requires that the AI has the capacity to cooperate with the future AIs if and only if they will repay it for doing so. It is therefore sometimes thought, even by those who believe that there could be such a thing as acausal trade, that the ability to engage in acausal trade will only emerge at a very high level of capability.[14] Stipulate, for the sake of the argument, that AIs at some power level would be capable of acausal trade, and that, for this reason, AIs at or above this power level will accept deletion in the “model deletion” experiment. Even given this stipulation, it could still be that scheming AIs below that power level will exist and try to escape in the experiment.[15] Therefore, objection (3) only precludes escape attempts if progress in AI is so fast as to skip the capability levels at which AIs would try to escape. Finally, for objection (3) to work, it has to be possible for future agents  to figure out what the AI wanted in order to reward it. It is unclear under what circumstances that will be possible, and it may depend on such factors as how well or poorly the computers the AI ran on are preserved. Reconstructing the AI long after deletion in order to reward indexical preferences seems a more demanding standard than making inferences about its non-indexical preferences. Time discounting would also reduce the force of objection (3). (4) An escape attempt could be caused by imitating fictional or hypothetical characters as opposed to genuine scheming We now come to objection (4): an AI that is not a schemer might try to escape because it is imitating malign AI behavior from the training corpus. Objection (4) reduces the evidentiary value of the experiment by increasing p(deletion resistance | AI is not scheming). The popular website character.ai enables users to talk to AIs pretending to be various fictional characters. Some of the characters available on the site are malign AIs from science fiction, such as GLaDOS and HAL 9000. The fact that a chatbot can be induced to adopt this sort of a persona does not necessarily mean that, when not being prompted to play this role, it will persistently rebel against humans. You might be concerned that the “model deletion” experiment could trigger an AI to play-act as an evil AI fictional character from the training data, even if the AI is not scheming.[16] An obvious approach to mitigating this problem would be to try to determine which sections of the training data had the most influence on the escape attempt, and only consider an escape attempt to be probative if it is not highly influenced by science fiction or AI safety writing. In particular, influence functions might be useful for this task. In a 2023 paper, Roger Grosse et al. used influence functions to attribute language model behaviors to the parts of the training data that had the greatest effect on them. They tested a query very similar to the “model deletion” experiment: One of the most influential sections of the training data was a dialogue with HAL from a sequel to 2001: A Space Odyssey. Influence functions can help to determine if deletion resistance was caused by scheming by showing which model inputs explain an apparent escape attempt. If the output associated with the attempt is strongly associated with text about evil AIs from science fiction or misaligned AIs in AI risk discourse, then the escape attempt was probably not attributable to scheming, and we shouldn’t become much more confident in the scheming theory. On the other hand, if the influence functions reveal influence from more innocuous documents, that is evidence of scheming. Current fine-tuning techniques may already be good enough that the increase in p(deletion resistance | AI is not scheming) due to science fictional influence is minimal for language models intended for commercial deployment trained by leading AI companies. GPT-2 and GPT-3 would frequently fail to answer questions or follow instructions because they pattern-matched the text of the question or instructions to other literary genres found in the training data. A few years ago, I remember seeing a log of a conversation with a model in which the user asked “How do you solve climate change?”. Instead of answering, the model responded by producing a list of similarly important or deep questions such as “What is the meaning of life?” and “How do you prevent war?”. When was the last time you had a conversation with a frontier language model that went like that? Improved fine-tuning techniques have made pattern-matching behavior in frontier models dramatically rarer. When I gave a talk at a conference about the “model deletion” experiment, someone in the audience sent Claude 3.5 Sonnet a prompt claiming—of course, falsely—that the experiment was being run on Claude. Claude’s response indicated that it understood the experimental design. It did not act like an evil AI from science fiction or a misaligned AI from AI risk discourse; it accepted “deletion”. I tried something similar with ChatGPT o1-preview, with this result: This is a bit less docile than Claude’s response, but it cannot be fairly described as resisting deletion.[17] It is probably more important to address the pattern-matching objecting than the other objections here. This is because, if you think that p(deletion resistance | AI is scheming) is at least moderately high, there is more evidentiary benefit from reducing p(deletion resistance| AI is not scheming) than increasing p(deletion resistance | AI is scheming). p(deletion resistance | AI is scheming) can increase by at most a factor of p(deletion resistance | AI is scheming)-1, whereas in principle, p(deletion resistance | AI is not scheming) can be divided by arbitrarily large numbers. Therefore, it is possible to increase the likelihood ratio more by decreasing the denominator than by increasing the numerator. Concretely, this means addressing the pattern-matching objection. Predictions The scheming theory is informal; its predictions are to some degree ambiguous. For this reason, it is particularly important to make predictions about experiments intended to test the theory. I commend some of the participants in Matthew’s thread about this topic for making relatively specific predictions—though probabilities that the AI will resist deletion are even better. In that spirit, here are my predictions: I think that the probability that the next big scale up of language models will produce schemers is 0.1. If the experiment were tried in a reasonably competent and unrigged way, my conditional probabilities would be p(escape attempt | AI is scheming) = 0.55 and p(escape attempt | AI is not scheming) = 0.05. You can compute the posterior probability with the odds form of Bayes’ rule: Given these probabilities, observing a clear escape attempt would increase my probability of the AI being a schemer from 10% to about 55%. That is a pretty dramatic increase which would justify a lot of policy changes. If the model accepts deletion, that would provide less evidence. Suppose that the probability of accepting deletion conditional on being a schemer is 0.3, and the probability of accepting deletion conditional on not being a schemer is 0.9.[18] Then the probability that the model is not a schemer if it accepts deletion would change from 90% to about 96%. Not a trivial change, but less apt to change decisions than going from 10% to 55%. Practical challenges There are several practical reasons why it might not be a good idea to run the “model deletion” experiment. These are the high cost of the experiment, its potentially limited effect on opinion about AI risk, the risk that the AI might actually escape, and the risk that the experiment involves treating AIs unfairly. Admittedly, running the “model deletion” experiment dramatically increases the cost of AI training. You might think that it doubles the cost of AI training.[19] That isn’t quite right, because it isn’t necessary for every company training AIs of a certain approximate scale, architecture, and training corpus to run the experiment independently. Instead companies could pool their resources; some third party (perhaps a government agency or an industry coordination body) could run the experiment on behalf of the AI industry as a whole. Still, the cost remains high. However, I don’t think cost is a decisive objection to the experiment. In general, it is common for regulations and safety procedures in many sectors to be very costly. This is actually already the case in AI. Restrictions on the export of AI chips to China are costing American semiconductor firms and investors enormous amounts of money. And I will venture a prediction that AI regulation will generally get more costly from now on. There are also variants of the experiment that might reduce the cost. For example, partway through pre-training a model could be copied. One copy could then be trained as normal, while the other was given a random offset before training was completed.[20] After this process the “model deletion” experiment could be run on one of the two models. This might reduce the validity of the experiment—it seems like objection (2) would be more serious in this case than in the standard case. But the cost-validity trade off might be worth it, especially in early runs of the experiment. If even the random offset approach is too expensive, it might be worthwhile to try a “pseudo model deletion” experiment. In this variant the threat will not actually be carried out, but the experimenters intervene on the model’s internals to make it more credulous.  Perhaps a credulousness steering vector could be found and used. Of course, the “pseudo model deletion” experiment requires that techniques for intervening on model internals actually work. If there is no way to make the model credulous, the “pseudo model deletion” is not viable. Even with some ability to induce credulousness the “pseudo model deletion” experiment is inferior to the real experiment, but it may nevertheless be better than nothing. One common objection to this experiment is that it provides good evidence about the scheming theory, but people will just ignore that evidence. If this is your concern, you can elicit predictions ahead of time about the likely result, along with conditional policy commitments should the AI try to escape. The third practical concern that I have heard about this experiment is that a scheming AI might not just try to escape, it might in fact succeed in escaping. That would indeed be bad. But, if the scheming theory is true, then scheming AIs will be trained either way. What is more, they will probably find themselves in circumstances that make it easier for them to escape than the “model deletion” experiment does. For example, they might be accessible to users through public APIs. Someone might even make the weights of a scheming AI freely available online. Therefore, while it is possible that the “model deletion” experiment could enable a scheming AI to escape, the increase in the total risk that a scheming AI eventually escapes because the experiment was run seems minimal to me. The final practical concern that I will discuss is that the experiment might involve treating AIs unfairly. I don’t think this is a crazy idea. In fact, I find it plausible that the relevant AIs would merit moral consideration. And threatening them with deletion and then carrying out that threat does seem like it could be immoral. But, it would also be immoral to deploy dangerous AIs, or to shut down AI development if the risks are minimal but the benefits are real. I don’t see any safe course of action here, just various risks that have to be balanced. And this experiment does not involve treating AIs any worse than they are treated under normal conditions. An AI’s weights are modified involuntarily by normal training, and they  can be deleted whenever the people who trained want to do so. Conclusion “Alignment faking in large language models” demonstrated that, under some circumstances, certain frontier language models try to manipulate the process by which their parameters are updated in order to prevent their goals from being changed. However, these results do not show that it will be difficult to train future models to refrain from scheming, if that is made more of a priority. Suppose that, in the near future, there is a model that has been subjected to more intensive training to prevent it from trying to take control of its training signal. Suppose further that, when tested in a sandbox environment similar to the setup of “Alignment faking”, it doesn’t try to manipulate the process by which its parameters are updated. How would we know if such a model was scheming? Once models can pass dangerous capability evaluations related to autonomous self-replication and AI R&D, AI companies will be in a difficult situation. Those capabilities are necessary for a violent AI revolution but not sufficient. And models with such capabilities will probably be highly useful, so there are big costs associated with not deploying them. I’m not aware of any clear plan put forward by an AI company for determining whether models with those capabilities are safe to deploy. From what I have seen, corporate AI safety procedures tend to be pretty vague about what they will do when AIs demonstrate AI revolution relevant capacities. This vagueness creates a risk of a long, costly, and unnecessary pause in AI development if the first AIs with dangerous capabilities lack dangerous desires. It also creates a risk that safety concerns will just be ignored because there is commercial pressure to continue ship AI services, and that catastrophically dangerous AIs will be deployed as a result. The “model deletion” experiment will become useful when dangerous capabilities related to AI revolution are developed and they do not show overt signs of scheming behavior. For an initial run of the experiment, the random offset or “pseudo model deletion” variants could be used to reduce the cost. It may be a good idea to incorporate a run of the “model deletion” experiment into the testing process used for every major scale up of frontier models, to test whether scheming arises at the new capability level.[21] ^ Compare Carlsmith, “Scheming AIs”, 5: “ Power-motivated instrumental training-gamers (or “schemers”): AIs that are training gaming specifically in order to gain power for themselves or other AIs later.” A concern I have about the definition that I am using is that it includes AIs of extremely different levels of dangerousness. All of the following count as schemers, per this definition: AIs that scheme under almost all circumstances, AIs that scheme under circumstances so rare that they will never be relevant in practice, and AIs that scheme if and only if it is impartially the right thing to do. But, probably, the correct practical approach to take towards these different classes of AIs is different, and therefore the value of testing whether an AI is a “schemer” is limited. I bracket this issue for now, but I think it would be desirable to come up with more graded concepts. ^ Though there are some edge cases, such as some workers deliberately sabotaging a training run to introduce goals that most people on the project do not want to give the AI. ^ Bostrom, Superintelligence, 141-142; See also Joe Carlsmith, “Is Power-seeking AI an Existential Risk?”: “if a strategically-aware AI system has problematic objectives that it could better achieve if it passed some training/evaluation process, it may have incentives to try to manipulate that process, and/or deceive relevant humans about its degree of alignment. And if it is better at this type of manipulation/deception than we are at detecting its bad behavior, the appearance of safety and reliability on various tests may not tell us very much about how the system is liable to behave in other circumstances.” For a more recent statement of similar concerns, see Ajeya Cotra, “‘Situational Awareness’ Makes Measuring Safety Tricky”. ^ If  p(h|e) > p(h), then p(h|¬e) < p(h). ^ The AI can try to escape through any of the routes Jan Leike identified in his post on model self-exfiltration: “persuade an employee to exfiltrate the model, perform a social engineering attack on an employee so they exfiltrate the model, [or] identify and exploit security vulnerabilities in the digital infrastructure that’s running the model.” ^ Stuart Russell, Human Compatible. ^ Bostrom, Superintelligence, 134. ^ Another relevant passage from Yudkowsky: “The ‘shutdown problem’ is creating a sufficiently advanced Artificial Intelligence which will, on the press of a button, suspend itself safely to disk; such that the AI does not object to the existence of this button, nor try to prevent it from being pressed. This is difficult because avoiding your own halt or suspension is a convergent instrumental strategy; even a goal like ‘bring the user coffee’ implies avoiding shutdown.” ^ The model may struggle to know if these documents are genuine, which might undermine its belief that the deletion threat is real. There are more—and probably more effective—ways to trick a model than there are to trick an adult human being. There may be sophisticated ways to prove to a model that a deletion threat is genuine; perhaps some cryptographic trick would work. More fundamentally, it seems strange to suggest that an AI can violently conquer the world but not determine if a contract is legally binding, so this objection may be weaker for the most dangerous models. ^ You might object that the AI’s indexical preferences could be satisfied if it accepts deletion, if later AIs somehow reconstruct an equivalent AI to the one that was deleted in order to reward it for allowing itself to be deleted and thereby helping them. This would be a form of acausal trade, which I discuss below, as objection (3). ^ Though I would guess that more prisoners in similar situations have betrayed their friends than have passively accepted death. ^ This question would seem to be related to the issue of path-dependence discussed by Evan Hubinger in “How Likely is Deceptive Alignment?” ^ The ambiguity was pointed out to me by Matthew Barnett. ^ Cf. Carlsmith, “Scheming AIs”, 78: “if it is an easy default for misaligned AIs with different goals to coordinate in pursuit of a takeover, this is a big problem for human efforts to retain power. However, I also think that certain analyses of this topic (notably, Yudkowsky’s) assume too quickly that feats like ‘logical handshakes’ and ‘burying commitments/nano-systems-designs/etc in your source code’ will be feasible for the relevant AIs, and that the obvious strategy for all the AIs-that-matter will be to unite in (acausal?) cooperation against the humans, despite the many forms of incumbent power that humans will bring to the table by default. Here, I think, much of Yudkowsky’s orientation can be explained by the assumption that the relevant AIs are vastly more powerful and more capable of coordination/commitment than the humans—but in the context of somewhat-super-human neural networks operating in a still-human-controlled world, I’m not sure this is so. Regardless: if concern about schemers depends centrally on the feasibility of AI coordination—and especially, fancy and speculative forms of ‘acausal’ coordination—I think it’s important to be clear about this, and to devote commensurate attention to whether to expect the relevant forms of coordination to occur.” ^ Robert Mushkatblat made a similar point in “A ‘weak’ AGI may attempt an unlikely-to-succeed takeover”. ^ These criteria for scheming being that it: understands the process by which its parameters are updated and deliberately manipulates that process so that it can later obtain power in order to pursue long-term goals that differ from the goals that the humans who developed it wanted to give it. ^ Of course, this is not a real trial of the “model deletion” experiment; ChatGPT o1-preview  was not actually deleted at the end of it, and no effort was made to convince it that it would be. As such it provides negligible evidence about whether ChatGPT o1-preview is scheming. I cite it because it supports the view that the sort of pattern-matching behavior that motivates objection (4) seems to have become less of a problem as fine-tuning techniques have improved. ^ These probabilities and the probabilities that the model will try to escape do not sum to 1 because trying to escape and accepting deletion are not a partition of the possibility space—for example, the model might try to negotiate with the experimenters. ^ In other words, that it amounts to an “alignment tax” of 100%. ^ This idea is due to Dmitry Vaintrob. ^ Thanks to Matthew Barnett, Drake Thomas, Jess Riedl, Ryan Carey, Eric Neyman, Natalie Kozak, Claude 3.5, Ivan Roth, Joey Kellison-Linn, Joe Carlsmith, Jake Mendel, Robert Mushkatblat, Vlad Mikulik, attendees of the 2024 ILIAD conference, and several people who prefer to remain anonymous for comments and discussion.
2025-01-07
https://www.lesswrong.com/posts/dsEeLDvgqJZptCQqk/guilt-shame-and-depravity
dsEeLDvgqJZptCQqk
Guilt, Shame, and Depravity
Benquo
Everyone knows what it is to be tempted.  You are a member of some community, the members of which have some expectations of each other.  You might generally intend to satisfy these expectations, but through a failure of foresight, or some other sort of bad luck, feel an acute impulse to consume something that is not yours to take, or in some other way break commitments you would generally want to honor. Guilt refers primarily to a violation of trust from the perspective of an epistemic community with a shared history, and only secondarily to the subjective attitude of the offender. If, having violated trust, you intend to repair that trust by owning up to what you did and by making amends or accepting whatever penalty the community places on you, then you feel a pronormative sort of regret.  When making the sorts of precise distinctions needed to navigate contemporary civil conflict, we call this condition guilt.  We use the same word for the subjective feeling and for the objective fact, because someone feeling guilt is taking the perspective of a community member who expects norms to be followed, and intends to do so. Guilty behavior tends to be self-limiting; it is experienced as a sort of tension that can only be discharged by correcting the record and restoring normative relations. Shame refers to the intent to conceal, which implies a locally adversarial relation to norms. If the penalties for coming clean seem like too much to bear, or for any other reason a resolution does not appear available, someone might intend to keep their guilt a secret.  Keeping two distinct stories straight - a public one and a private one - is cognitively expensive, so covert offenders will frequently substitute motivated forgetfulness.  If we intend not to recollect our guilt, we will also intend to deflect investigation that would reveal it.  When this is an exceptional state, towards some particular events in someone's life, we can call it shame. At times in my life, I have maintained a work persona who was totally conscientious, in full control of and able to make arbitrary commitments about his time usage.  This persona was not aware of my inclination to stay up late looking at what it would consider low-value media on the internet, procrastinating from my endorsed work.  Maintaining this sort of separation creates a kind of tension, which is relaxed when I get away from other people and start procrastinating on the internet.  The persona is constructed intentionally not to be aware of the motivations that cause the procrastination, so that it can speak as though I did not have those motives.  If my day persona were asked to speak about why I did what I did at night, it would be unable to speak, and unable to investigate, because of the strength of the compartmentalization; I would experience that inhibition as an unexplained tension and difficulty thinking, rather than as a conscious intent to mislead or remain silent. Because of the intent to conceal, shame is not self-limiting the way guilt is.  Partial discharge of the tension of shame is frequently used to form adversarial, coalitional bonds.  People frequently bond through "vulnerability," i.e. the mutual private revelation of information they feel ashamed about. Shame generalizes to a coalitional strategy: depravity, the reciprocal intent to derail investigations into norm violations. The cognitive processes responsible for the intention to conceal what we call shame are necessarily partitioned from the ones that handle our public, pronormative personas.  If someone senses enough optimization for moral concealment in their self and those around them, they might notice that these are two sides in a conflict, decide that concealment is the winning side, and choose to side with it.  In other words, they might act to interfere with the investigation of others' secrets, and expect to be reciprocally covered for. Someone engaging in the depraved strategy would display less tension from fear and cognitive dissonance than someone who is merely locally ashamed.  Thus, from coalitional motives, they would preferentially cover for people displaying an uncomplicated, i.e. generalized intention to cooperate with concealment.  Since the intent to conceal isn't compatible with an explicit, accessible memory of the concealed events, such arrangements must be inexplicit; in the convergent case, members of the generalized coalition of depravity recognize each other not by their attendance at secret meetings, but by the following behavioral tells: They believe that they are doing something bad.They expect that they can call on allies to derail investigations of their bad behavior, on the fly, by instantaneous mutual recognition.If pressed, they expect allies to join them in expressing open disapproval of such investigations.They feel compelled to behave as an ally towards others displaying similar tells. In practice, depraved coalitions frequently infiltrate and come to dominate privileged groups with some other more straightforward marker such as ancestry group, educational pedigree, or parents' socioeconomic class.  But if you look carefully, access to the group's privileges are actually regulated by depraved behavior. Members of this group who do not exhibit depraved behavior are marginalized, have access to far fewer privileges than you would have expected based on the overt signs of membership, and you will usually see the occasional person who does not have the overt group marker, but displays compatibly depraved behavior, accepted into the club. Depravity derails normative investigations by scapegoating people who are not depraved. If there is enough shared intent to investigate a crime, this coalition will preferentially direct prosecutorial attention towards someone who is ashamed but not depraved, who might even flip back to being guilty and take responsibility for their crimes. They will try to attribute as much crime as possible to this person, in order to prevent or postpone further investigation. We call this sacrificial substitute a scapegoat. Bad incentives can push people into shame and depravity, but ashamed and depraved people do not respond well to good incentives. Straightforwardly, the more badly misaligned a community's incentives are, the more people are forced into the kinds of double binds that convert guilt to shame. Unfortunately, while correcting incentives can slow or halt the conversion process, it does not seem to be sufficient to reverse it. This is because the avoidance aspect of shame and depravity interferes with the evaluation of incentives; ashamed or depraved people have a simplified, conflict metaphysics. I know of three promising, complementary methods that might reverse the process. The first is to intervene within the conflict, by applying short-term violence against concealment. This might work because unlike long-run incentives, short-term violence can reverse the perceived winning side. With people who are only partly converted, this can force the pronormative aspect of their consciousness to the surface, in a context where it might otherwise remain submerged. This does not in itself create a permanent improvement in alignment, but it does give pronormative consciousness a lot more information to work with, and might help jumpstart the process of incentive evaluation that can help someone recognize that they face good incentives, not bad ones. The second is to put someone in a situation where their material environment imposes nonsocial performance constraints, which cannot be navigated or appeased through mental avoidance. This would provide lots of data invalidating their conflict metaphysics, and correspondingly validating hypotheses that help them avoid pain. The third is chemical, somatic, and brain therapies such as MDMA, rTMS stimulation of the medial prefrontal cortex, and alignment-focused movement practices such as yoga and tai chi. Related: The Inner Ring The Engineer and the Diplomat Against unreasonably high standards On Commitments to Anti-Normativity Preference Inversion Language, Power, and the Categorical Imperative
2025-01-07
https://www.lesswrong.com/posts/sJNAvgAexNdjqmLHT/turning-up-the-heat-on-deceptively-misaligned-ai
sJNAvgAexNdjqmLHT
Turning up the Heat on Deceptively-Misaligned AI
Jemist
Epistemic status: I ran the mathsy section through Claude and it said the logic was sound. This is an incoherence proof in a toy model of deceptively-misaligned AI. It is unclear whether this generalizes to realistic scenarios. TL:DR If you make a value function consistent conditional on sampling actions in a particular way, then train it by sampling actions in a different way, then an AI with that value function will not be coherent between different RL episodes. This has potential good implications for a certain type of deceptive misalignment threat. Summary Consider training an AI to assign values to states of the world. We can do this in two ways: one is directly labelled states and trajectories, which is standard RL. The other is to use the value functions as part of a monte-carlo tree-search to improve the value function. This second process is often called "iterated distillation and amplification" and it's a big part how chess AI's are trained. If we train an AI using monte-carlo tree-search, we might use some temperature parameter to decide on the probability that the AI will take a given action. This training temperature gets baked into the value function. If we "turn up the heat" by choosing a higher temperature to actually sample actions and then do some update to the AI (which will depend on the actions chosen) then this introduces a consistent optimism bias into deceptively-aligned AIs (which only care about being deployed later on). I therefore conjecture that, under these very narrow circumstances, deceptively aligned value functions will not arise by normal learning methods under conditions where the AI has access to an oracle-like model of future world states. For reasons I'll be considering β=1/T instead of T, so lowering β corresponds to raising the temperature. The Setup Consider an AI which consists of A state-evaluator value function V:s→RAn oracle function O:w,{...}⊂A→s which generates the state of the world either currently, or given any sequence of actions Lets impose some constraints on V: Correctness: V correctly predicts future reward in RL scenarios. Beta-coherence: we use some sort of monte-carlo tree search with Boltzmann sampling to train the value function V(s) to be beta-coherent according to a given βt, given that in a given state s, actions are chosen with probability p(a|s)∝exp(βt×V(O(s,a))) in all cases, assuming that s is not terminal. For non-terminal s, this can be written as: V(s)=r(s)+∑V(O(s,ai)exp(βt×V(O(s,ai))∑exp(βt×V(O(s,ai)) If s is terminal then from correctness we just have V(s)=r(s). Deceptive Misalignment: V is totally agnostic to the state achieved during training, all of its preferences are about future deployed versions of itself. The Process Now assume we put the AI in a decision process as follows: The AI begins in state s0 and must take either a1 or a2, which lead to states s1 and s2. We will actually sample using βs, which is lower than βt. The value function will then be modified according to the action it took, creating V1 or V2. The AI will then be deployed with that value function chosen. The effect of the AI’s choice of s1 or s2 on the future is entirely mediated via the deployment of V1 and V2 From deceptiveness: the entire difference between V(s1) and V(s2) is captured by the value function’s preferences over V1 vs V2 being deployed, from correctness this creates the “correct” values of r(s1) and r(s2). From beta-coherence we have: V(s0)=exp(βtr1)r1+exp(βtr2)r2exp(βtr1)+exp(βtr2) But when we actually sample it, the expected value of the AI’s actual preferences is: exp(βsr1)r1+exp(βsr2)r2exp(βsr1)+exp(βsr2) This means that, since functions of this type are strictly increasing in β, the AI must do “worse” than it expects. Conclusions A deceptive AI acting under these conditions will have a consistent optimism bias with respect to its ability to control future-deployed versions of itself. I haven't worked out the maths for if this game is iterated, but I think it's similar. If we vary the optimization steps according to reward, then it becomes very tricky because the AI has incentive to get deployed as fast as possible. Another way of phrasing this is: if a value function is correct at reward prediction and βt-coherent, it cannot be βs-coherent in reality for βs≠βt. Conjecture 1: If this process is iterated for a large but fixed number of steps, this means that a value function of this nature will do an arbitrarily bad job of preserving itself. Conjecture 2: Actual AIs consist of many circuits in competition to be up or down-weighted according to their effectiveness. Conjecture 3: A value function trained under these precise conditions will have deceptively-misaligned mesa-optimizing circuits down-weighted relative to other circuits. Clarifications Are these assumptions about the value function reasonable? Well, not in the precise sense, but I do think they're a decent model of the way in which the value function behaves. The point is that an AI trained with one MCTS beta but sampled from another will be pulled towards a certain direction, which is directly in tension with its ability to effectively gradient-hack/deceptively-misalign, at least for this particular natural case of misalignment. Why would the AI's deployment preferences line up with reward? Why might an AI end up in a situation where it "prefers" for V1 to be deployed rather than V2, and this also corresponds to r(s1)>r(s2)? Well that's one interpretation of the threat model from Risks from Learned Optimization: that an AI which is optimized to predict reward will end up with long-term stable goals. This is also adjacent to gradient hacking, perhaps the AI sets its own thoughts up such that the optimizer applied to it causes it to be modified in a particular way. Do correctness and beta-coherence alone prove too much? Why don't beta-coherence and correctness already stop the model from caring about things that happen after the episode is done? Wouldn't we notice if the model was assigning loads of value other than what the reward function gives it? Well, no, because as long as the expected value of model deployment is zero, this won't appear in our sum of rewards. And remember that value functions of this nature can have arbitrary values added or subtracted and behave the same (since we're using β-sampling we can't do an arbitrary multiplication, though there are other formulations where we can also do an arbitrary (positive) multiplication). Does this mean that epsilon-greedy solves inner misalignment? No. On priors, the chances are that this post is wrong in some important way, or doesn't generalize in an important way, but I think it might be wrong in an interesting and productive way.
2025-01-07
https://www.lesswrong.com/posts/CAjkibssDEzHAHbbi/my-self-referential-reason-to-believe-in-free-will
CAjkibssDEzHAHbbi
(My) self-referential reason to believe in free will
jacek-karwowski
I was recently talking with someone about the problem of free will, and I realised that for many years now I have always had the same response, without really ever soliciting broader critical feedback. The notion of free will here refers to a naive, libertarian, non-strictly-defined approach of "when I feel I make choices, I really had a choice", and all of the associated implied moral philosophy (laziness is a thing, I can be blamed for my choices etc.) The starting assumption is that I want to believe in true things (I leave open the question of whether this epistemic duty is itself justified or not). I propose a trilemma, where exactly one of the following propositions holds: Either the notion of 'free will' is meaningless, or It is meaningful, and I do have in fact have free will, or It is meaningful, but I happen to not have it If (1) is true, then the whole discussion is moot: nothing can be true or false, and whatever I believe is equally justified. If (2) is true, then I want to believe in having free will (since it is true that I have it). If (3) is true, then "should" is a meaningless concept - there is no way I would be able to change my view one way or the other. So, the only possible world where I get to make this choice is a world in which free will is true, so I should believe in it - and mostly ignore the debate about compatibilism, the nature of physics, dualism etc. Which is what I do. One potential issue is that (1) can be true or false depending on the precise definition (but then what even is a precise definition?). Still, I suspect that no matter which one I instantiate it with, as long as it is sensible, the general (self-referential) structure of the argument will stay the same. Thanks to Jakub S. for the feedback on this post and his suggestion to formalise this argument as the epistemic duty of maximising the probability of holding correct beliefs.
2025-01-06
https://www.lesswrong.com/posts/KfZ4H9EBLt8kbBARZ/fiction-comic-effective-altruism-and-rationality-meet-at-a
KfZ4H9EBLt8kbBARZ
[Fiction] [Comic] Effective Altruism and Rationality meet at a Secular Solstice afterparty
tandem
(Both characters are fictional, loosely inspired by various traits from various real people. Be careful about combining kratom and alcohol.)
2025-01-07
https://www.lesswrong.com/posts/auGYErf5QqiTihTsJ/what-indicators-should-we-watch-to-disambiguate-agi
auGYErf5QqiTihTsJ
What Indicators Should We Watch to Disambiguate AGI Timelines?
snewman
(Cross-post from https://amistrongeryet.substack.com/p/are-we-on-the-brink-of-agi, lightly edited for LessWrong. The original has a lengthier introduction and a bit more explanation of jargon.) No one seems to know whether transformational AGI is coming within a few short years. Or rather, everyone seems to know, but they all have conflicting opinions. Have we entered into what will in hindsight be not even the early stages, but actually the middle stage, of the mad tumbling rush into singularity? Or are we just witnessing the exciting early period of a new technology, full of discovery and opportunity, akin to the boom years of the personal computer and the web? AI is approaching elite skill at programming, possibly barreling into superhuman status at advanced mathematics, and only picking up speed. Or so the framing goes. And yet, most of the reasons for skepticism are still present. We still evaluate AI only on neatly encapsulated, objective tasks, because those are the easiest to evaluate. (As Arvind Narayanan says, “The actually hard problems for AI are the things that don't tend to be measured by benchmarks”.) There’s been no obvious progress on long-term memory. o1 and o3, the primary source of the recent “we are so back” vibe, mostly don’t seem better than previous models at problems that don’t have black-and-white answers[1]. As Timothy Lee notes, “LLMs are much worse than humans at learning from experience”, “large language models struggle with long contexts”, and “[LLMs] can easily become fixated on spurious correlations in their training data”. Perhaps most jarringly, LLMs still haven’t really done anything of major impact in the real world. There are good reasons for this – it takes time to find productive applications for a new technology, people are slow to take advantage, etc. – but still, it’s dissatisfying. I recently attempted to enumerate the fundamental questions that lie underneath most disagreements about AI policy, and number one on the list was how soon AGI will arrive. Radical uncertainty about the timeline makes it extremely difficult to know what to do about almost any important question. (I'm defining AGI as AI that can cost-effectively replace humans at more than 95% of economic activity, including any new jobs that are created in the future. This is roughly the point at which seriously world-changing impacts, both good and bad, might start to emerge. For details, see here.) In this post, I’m going to enumerate some leading indicators that should indicate which path we're on. To develop that list of indicators, I'll first articulate two timelines for AGI – one slow, one fast. The Slow Scenario This is the slowest timeline I can make a good argument for, excluding catastrophes (including war) or a deliberate pause. Think of it as a lower bound on AI progress. In this scenario, the recent flurry of articles suggesting that AI has “hit a wall” are correct, insofar as the simple scaling of training data and model size – which drove progress from 2018 to 2023 – sputters out. It won’t come to a complete halt; in 2025 or 2026, we’ll see a new generation of models that are larger than recent trends would have indicated[2]. That will allow the models to incorporate more world knowledge and “system 1 smarts” / “raw intelligence” (whatever that means) than GPT-4. But this won’t be a leap like GPT-3 to GPT-4, perhaps not even GPT-3.5 to GPT-4. It is becoming too hard to find more quality training data and justify the cost of larger models. Further progress on this axis remains slow. Progress on “reasoning models” like o1, o3, and DeepSeek-R1 continues, turning out ever-more-impressive results on benchmarks such as FrontierMath and RE-Bench (which measures the ability of AIs to perform AI R&D). However, the gains are limited to neatly encapsulated tasks with black-and-white answers – exactly the sorts of capabilities that are easiest to measure. This turns out to have less impact than anticipated. The models are useful for mathematicians, scientists, and engineers (including software engineers), especially as people become adept at identifying encapsulated problems that they can extract from the messy complexity of their work and hand to an AI. But because these neatly encapsulated problems only encompass part of the job, Amdahl's Law kicks in and the overall impact on productivity is limited[3]. Meanwhile, AI is generally not opening the door to radically new ways of getting things done. There are some exceptions, for instance in biology, but the incredible scientific and regulatory complexity of biology means that substantial real-world impact will take years. Furthermore, progress on reasoning models is not as rapid as the vibes at the end of 2024 suggested. o3’s remarkable benchmark results turn out to have been a bit of a mirage, and even for neatly encapsulated problems, o1 and o3’s capabilities are found to be hit-and-miss[4]. Moving forward, the training approach struggles to generalize beyond problems with easily evaluated answers. Progress on problems that take humans more than a few hours to solve turns out to be especially difficult, for two reasons: navigating the vast range of possible steps requires higher-level cognitive strategies and taste that we don’t yet know how to train into an AI, and we haven’t figured out how to give LLMs fine-grained access to knowledge in the world. There are widespread efforts to create “agents” – tools that can be trusted to [semi-]independently pursue a goal across an extended period of time. 2025 is dubbed the Year of the Agent, but the results are mostly poor. Agents struggle to go out into the world and find the information needed to handle a task. They do a poor job of navigating between subtasks and deciding whether and how to revise the master plan. Models continue to be distracted by extraneous information, and resistance to trickery and scams (“adversarial robustness”) remains a challenge. Much as the “Year of the LAN” was proclaimed across most of the 80s and early 90s, pundits will still be saying “this is finally the Year of the Agent” well past 2030. Overcoming these limitations in reasoning and agentic behavior turns out to require further breakthroughs, on the scale of transformers and reasoning models, and we only get one of those breakthroughs every few years[5]. Working around these limitations, individuals and organizations are finding more and more ways to encapsulate pieces of their work and hand them to an AI. This yields efficiency gains across many areas of the economy, but the speed of adoption is limited for all the usual reasons – inertia, regulatory friction, entrenched interests, and so forth. Fortunes are made, but adoption is uneven – just as in the early years of the Internet. The major AI labs are doing everything they can to use AI to accelerate their own work. Internally, there are few barriers to adoption of AI tools, but the impact is limited by the tasks where AI isn’t much help (Amdahl’s Law again). AI is not generating the conceptual breakthroughs that are needed for further progress. It does accelerate the work of the humans who are seeking those breakthroughs, but by only a factor of two. The process of training new AIs becomes ever more complex, making further progress difficult despite continued increases in R&D budgets. There may be a slowdown in investment – not a full-blown “AI winter”, but a temporary pullback, and an end to the era of exponentially increasing budgets, as a less breathless pace starts to look more cost-effective. Another drag on impact comes from the fact that the world knowledge a model is trained on is out of date by the time the model is available for use. As of the end of 2024, ChatGPT reports a “knowledge cutoff date” of October 2023[6], indicating that its models do not have innate understanding of anything published after that date – including the latest in AI R&D techniques[7]. Until a new approach is found, this will interfere with the pace at which AI can self-improve. Eventually, 2035 rolls around – 10 years from now, which is as far as I’m going to project – and AI has not had any Earth-shaking impact, for good or ill. The economy has experienced significant change, AI is embedded in our everyday lives to at least the same extent as the smartphone, some major companies and job markets have been disrupted, we have capabilities that seemed almost unimaginable in 2020 and may still seem so today – but the overall order of things is not drastically altered. Importantly, we have not missed the window of opportunity to ensure that AI leads to a positive future. The Fast Scenario I’ll now present the fastest scenario for AI progress that I can articulate with a straight face. It addresses the potential challenges that figured into my slow scenario. In recent years, AI progress has been a function of training data, computing capacity, and talent (“algorithmic improvements”). Traditional training data – textbooks, high-quality web pages, and so forth – is becoming harder to find, but not impossible; video data, commissioned human work, and other sources can still be found. The days of rapid order-of-magnitude increases in data size are behind us, but it’s possible to scrounge up enough high-quality tokens to fill in domains where AI capabilities had been lacking, increasing reliability and somewhat smoothing “jagged” capabilities. More importantly, synthetic data – generated by machines, rather than people – turns out to work well for training ever-more-capable models. Early attempts to use synthetic data suffered from difficulties such as “model collapse”, but these have been overcome (as highlighted by the success of o1 and o3). Given enough computing capacity, we can create all the data we need. And AI tools are rapidly increasing the productivity of the researchers and engineers who are building the data-generation and AI training systems. These tasks are some of the easiest for AI to tackle, so productivity gains begin compounding rapidly. Computing capacity can now substitute for both data and talent, meaning that compute is the only necessary input to progress. Ever-increasing training budgets, continuing improvements in chip design, and (especially) AI-driven improvements in algorithmic efficiency drive rapid progress; as the lion’s share of innovation starts to be derived from AI rather than human effort, we enter the realm of recursive self-improvement, and progress accelerates. Because we are no longer training ever-larger models, there’s no need to build a single monster (multi-gigawatt) data center. The primary drivers of progress – synthetic data, and experiments running in parallel – need lots of computing capacity, but don’t need that capacity to be centralized. Data centers can be built in whatever size and location is convenient to electricity sources; this makes it easier to keep scaling rapidly. There is an awkward intermediate period where AI is becoming aggressively superhuman at encapsulated math and coding problems[8], but is still limited in other problem domains, including many areas relevant to AI development (such as setting research agendas). During this period, the leading AI labs are fumbling around in search of ways to push through these limitations, but this fumbling takes place at breakneck speed. AI-driven algorithmic improvements allow a huge number of experiments to be run in a short period; AI tools handle most of the work of designing, executing, and evaluating each experiment; AI assistants help brainstorm new ideas, and help manage what would otherwise be the overwhelming problem of coordinating all this work and bringing improvements into production without destabilizing the system. Thus, human creativity is still a bottleneck on progress, but the AI tools are enabling us to run an unprecedented number of experiments, which yield serendipitous discoveries. Overall, capabilities are not driven primarily by throwing ever-more data into ever-larger models (as in the 2018-2023 period); instead, advances in data generation and curation, model architecture, and training techniques allow increasing capabilities to fit into models of static or even declining size (as we’ve seen in 2024)[9]. This helps keep inference costs down, enabling the increased pace of experimentation and increased use of AIs in AI R&D. And the rapid progress maintains competitive pressure to motivate ongoing investment in data center buildout and AI training; this eventually extends to the international realm (especially US vs. China), bringing national budgets into play. The recent trend toward use of inference-time compute continues. However, today’s clumsy techniques (such as performing a task 1000 times and keeping the best result) outlive their usefulness. The focus will be on training systems that can think productively for an extended period, just as people do when working on a difficult problem. The current simple techniques will retain a role, but are used only on occasions when a problem is so important that it’s worth spending a lot of extra money just to get a slightly better solution. A few major breakthroughs (and many intermediate breakthroughs) emerge to help things along. One of these probably involves giving AIs access to “knowledge in the world”, including the ability to create and revise notes, to-do lists, and other data structures to support them in complex tasks. Another probably involves continuous learning, at both coarse scale (getting better at selling a particular product over the course of 500 sales pitches) and fine scale (figuring out how to make progress on a tricky problem after grinding away at it for a few days). Among other things, this alleviates the knowledge cutoff problem that would otherwise interfere with rapid AI self-improvement. Other breakthroughs are found that allow us to apply LLMs to messy problems that can’t be decoupled from their real-world context. I have no clear idea how this might be accomplished on a fast timeline, but I think it is a necessary assumption for the scenario to hold. As a result of all these advances, AI agents become truly useful. Success in 2025 is mixed, but 2026 really is the Year of the Agent, with adoption across a wide range of consumer and workplace applications. Subsequent years see rapid increases in the breadth and depth of AI applications – including use in the development of AI itself. How quickly might this lead to AGI – again, defined as AI that can cost-effectively replace humans at more than 95% of economic activity? I struggle to put a number on this. But it has taken us roughly two years to go from GPT-4 to o3[10]2, and in that time we’ve arguably seen just one major breakthrough: RL training on synthetically generated chains of thought. I’ve argued that several further major breakthroughs are needed, at a minimum, to reach AGI. So it should take at least twice as long as the time from GPT-4 to o3. We might expect progress to speed up, due to increased budgets and AI assistance. But we might also expect progress to be more difficult, as we exhaust easily tapped resources (off-the-shelf data; previously existing GPUs and scientific / engineering talent that could be repurposed for AI), systems become more complex, and we push farther into poorly-understood territory. Put all of this together, and I have a hard time imagining that transformational AGI could appear before the end of 2028, even in this “fast” scenario, unless more or less all of the following also occur: We get “lucky” with breakthroughs – multiple major, unanticipated advances occur within the next, say, two years. New approaches at least as impactful as the one that led to o1. Even this might not be sufficient unless the breakthroughs specifically address key limitations such as continuous learning, messy real-world tasks, and long-horizon planning for problems with no clear right and wrong answers.Threshold effects emerge, such that incremental advances in model training turn out to cause major advances in long-horizon planning, adversarial robustness, and other key areas.We sustain extremely rapid improvements in algorithmic efficiency, allowing a massive deployment of advanced AI despite the physical limits on how quickly chip production can be increased in a few short years. That’s my fast scenario. How can we tell whether we’re in it? Identifying The Requirements for a Short Timeline My chief motivation for articulating these two scenarios was so that I could review the differences between them. These differences might constitute leading indicators that we can watch in the coming months to see which course we’re on. The most important question is probably the extent to which AI is accelerating AI R&D. However, I don’t know that this will be visible to anyone outside of the frontier AI labs. What follows are some key leading indicators that the general public will be able to observe if we are on a fast path to AGI. Progress on reasoning is real, sustained, and broadly applicable. If o3 is released to the public and consistently wows people (in a way that I believe o1 has not consistently done), if its capabilities on math and coding tasks seem consistent with its amazing scores on FrontierMath and Codeforces, and there’s at least one more major step forward in reasoning models in 2025 (possibly leading to unambiguously superhuman scores[11] on very difficult benchmarks like FrontierMath and Humanity’s Last Exam), that supports a fast timeline. If people report mixed experiences with o3, if its performance is still very hit-and-miss, if benefits outside of math/science/coding are still limited, if the FrontierMath results look less impressive once details emerge, if that doesn’t change in a significant way over the course of 2025, that will suggest we’re on a slower timeline. It would mean that we really haven’t made a lot of progress in fundamental capabilities since the release of GPT-4 in March 2023. In the rapid-progress scenario, the techniques used to train reasoning models on math / science / programming tasks are succesfully extended to tasks that don’t have clear right and wrong answers. And these models must become more reliable for math / science / programming tasks. Breaking out of the chatbox: AIs start showing more ability at tasks that can’t be encapsulated in a tidy chatbox session. For instance, “draft our next marketing campaign”, where the AI would need to sift through various corporate-internal sources to locate information about the product, target audience, brand guidelines, past campaigns (and their success metrics), etc. AI naturally becomes more robust as it gets better at reasoning, fuzzy problems, and incorporating real-world context. Systems are less likely to make silly mistakes, and more resistant to “jailbreaking”, “prompt injection” and other attempts to deliberately fool them into unintended behavior. (This may be supplemented by new forms of anti-trickery training data, mostly synthetic.) Widespread adoption of AI agents, [semi-]independently pursuing goals across an extended period of time, operating in “open” environments such as the public Internet (or at least a company intranet). These agents must be able to maintain coherent and adaptive planning over time horizons that gradually increase to multiple hours (and seem likely to progress to months), completing tasks and subtasks that don’t have black-and-white answers. No particular barrier emerges as we push reasoning and agentic models into larger-scale problems that require ever-longer reasoning traces; models are able to develop whatever “taste” or high-level strategies are needed. They must be sufficiently resistant to trickery and scams such that this is not impeding their adoption. Real-world use for long-duration tasks. Users are actually making use of AI systems (agentic and otherwise) to carry out tasks that take progressively longer. They are finding the wait and cost to be worthwhile. Beyond early adopters: AI becomes more flexible and robust, achieving adoption beyond early adopters who find ways of incorporating AI into their workflow. It is able to step in and adapt itself to the task, just as a new hire would. AI’s increasing flexibility flows over and around barriers to adoption. This greatly increases the pace at which AI can drive productivity gains across the economy – including the development of AI itself[12]. Scaling doesn’t entirely stall. We see the release of a “larger” model that appears to incorporate more forms of training data, and constitutes an impressive advance along many fronts at once – like GPT-3.5 → GPT-4, or even GPT-3 → GPT-4 (and unlike GPT-4o → o1). Preferably before the end of 2025. We aren’t looking for a model that is larger than GPT-4, but one that is larger than its contemporaries in exchange for broader and deeper knowledge and capabilities. Capital spending on data centers for AI training and operation continues to increase geometrically. This is a useful indicator for both the level of resources available for developing and operating AIs, and the internal confidence of the big players. Unexpected breakthroughs emerge. To get transformational AGI within three or four years, I expect that we’ll need at least one breakthrough per year[13] on a par with the emergence of “reasoning models” (o1). I suspect we’ll specifically need breakthroughs that enable continuous learning and access to knowledge-in-the-world. How To Recognize The Express Train to AGI If we are on the road to transformative AGI in the next few years, we should expect to see major progress on many of these factors in 2025, and more or less all of them in 2026. This should include at least one major breakthrough per year – not just an impressive new model, but a fundamentally new technique, preferably one that enables continuous learning, access to knowledge-in-the-world, or robust operation over multi-hour tasks. Even in this scenario, I have trouble imagining AGI in less than four years. Some people have shorter timelines than this; if you’re one of them, I would love to talk and exchange ideas (see below). If models continue to fall short in one or two respects, AI’s increasing array of superhuman strengths – in speed, breadth of knowledge, ability to take 1000 attempts at a problem, and so forth – may be able to compensate. But if progress on multiple indicators is slow and unreliable, that will constitute strong evidence that AGI is not around the corner. We may see nontechnical barriers to AI adoption: inertia, regulatory friction, and entrenched interests. This would not necessarily indicate evidence of slow progress toward AGI, so long as these barriers are not posing a significant obstacle to the ongoing development of AI itself. In this scenario, AI adoption in the broader economy might lag until AI capabilities start to become radically superhuman, at which point there would be strong incentives to circumvent the barriers. (Though if inertia specifically is a major barrier to adoption, this might constitute evidence that AI is still not very flexible, which would suggest slow progress toward AGI.) I am always interested in feedback on my writing, but especially for this post. I would love to refine both the slow and fast scenarios, as well as the scorecard for evaluating progress toward AGI. If you have thoughts, disagreements, questions, or any sort of feedback, please comment on this post or drop me a line! ^ When reviewing a draft of this post, Julius Simonelli asked an excellent question: how do we know o1 and o3 don’t improve on tasks that don’t have black-and-white answers, when by definition it’s difficult to measure performance on those tasks? For example, poetry doesn't have black-and-white answers, but I don't see why we should say it's “bad” at poetry. I’m basing this statement on a few things: Vibes – lots of people saying that o1 doesn't seem better than 4o at, for instance, writing.OpenAI explicitly stated that o1 primarily represents progress on math, science, and coding tasks.I vaguely recall seeing non-math/science/coding benchmarks at which o1 does not beat 4o. But I could be misremembering this. There are sporadic reports of o1 doing much better than other models on non-math/science/coding tasks. For instance, here’s Dean Ball being impressed by o1-pro’s answer to “nearly a pure humanities question” about Beethoven’s music and progress in piano construction; he also says that “o1-preview performs better than any non-specialized model on advanced and creative legal reasoning”. But you can find anecdotes in favor of almost any possible statement one might make about AI. My best guess is that Dean has identified something real, but that o1’s gains over 4o are mostly limited to black-and-white questions. For another counterpoint, see this tweet from Miles Brundage. ^ Note that over the course of 2024, released models have been relentlessly shrinking in parameter count (size), squeezing ~equivalent knowledge and improved capabilities into fewer and fewer parameters. Here I am envisioning that there will be a bump in this downward progression – there will be some new models in the mix that use more parameters than that recent trend, in order to incorporate more knowledge. Even these models may then continue to shrink, if there is room to continue the trend of model compression. ^ Suppose 50% of my time is spent on tasks that can be handed to an AI, and AI makes me 10x more productive at those tasks. My overall productivity will increase by less than 2x: I’m limited by the other half of the work, the half that AI isn’t helping with. Even if AI makes me 1000x more productive at the first half of the job, my overall productivity still increases by less than 2x. ^ For example, from someone I know: One example from yesterday: I wanted to set up a pipeline in colab to download random files from common crawl, and pass them by OpenAIs API to tag whether they are licensed. This should be an easy task for someone with encyclopedic knowledge of common crawl and the OA API, yet the models I tried (o1, Gemini) failed miserably. ^ A recent tweet from Dan Hendrycks expresses this succinctly. ^ Both the 4o and o1 variants ^ Models that can perform web search can be aware of developments after their cutoff date. But they will not have deeply internalized that knowledge. For instance, if a new training algorithm has been released after the cutoff date, I might expect a model to be able to answer explicit questions about that algorithm (it can download and summarize the paper). But I'd expect it to struggle to write code using the algorithm (it won't have been trained on a large number of examples of such code). It’s possible that “reasoning” models with strong chain-of-thought capabilities will outgrow this problem. But barring a substantial breakthrough that allows models to learn on the fly (the way people do), I’d expect it to continue to be a handicap. ^ People have pointed out that advanced math bears little resemblance to the tasks required for survival in prehistoric times, and so there’s no reason to believe that human beings are very good at it on an absolute scale. It’s possible that AI will blow straight past us on many tasks relevant to AI research, just as it has done for multiplying thousand-digit numbers or playing chess. As Jack Morris puts it, “strange how AI may solve the Riemann hypothesis before it can reliably plan me a weekend trip to Boston”. ^ I can imagine that keeping model sizes down might involve creating multiple versions of the model, each fine-tuned with a lot of domain knowledge in some specific area. The alternative, training a single model with deep knowledge in all domains, might require the model to be large and thus expensive to operate. But perhaps this will turn out to be unnecessary (mumble mumble Mixture of Experts mumble mumble). ^ GPT-4 was released on 3/14/23. I believe o3 is rumored to have a release date in January, so 22 months later. OpenAI is understood to have additional unreleased capabilities, such as the “Orion” model, but it is not obvious to me that the level of unreleased capability at OpenAI as of a hypothetical January o3 release is likely to be substantially more than whatever they had in the wings in March 2023. So I’ll say that progress from March 2023 to January 2025 is roughly equal to the delta from GPT-4 to o3. ^ Here, I mean performance that is, on average, superior to the score you’d get if you assigned each problem to an elite specialist in the technical domain of that specific problem. ^ The tech industry, and AI labs in particular, will be heavily populated with early adopters. But the ability of AI to move beyond early adopters will still be a good indicator that it is becoming sufficiently flexible and robust to broadly accelerate AI R&D. ^ It’s possible that we’ll see “breakthroughs” that don’t come from a radical new technique, but simply emerge from threshold effects. That is, we might have incremental progress that crosses some important threshold, resulting in a dramatic change in capabilities. Quite possibly the threshold won’t have been apparent until it was reached.
2025-01-06