# Digital Minds: A Quickstart Guide Updated: Jan 16, 2026 Digital minds are artificial systems, from advanced AIs to potential future brain emulations, that could morally matter for their own sake, owing to their potential for conscious experience, suffering, or other morally relevant mental states. Both cognitive sc...
https://www.lesswrong.com/posts/WK4GWkeSQQQPeRYJv/digital-minds-a-quickstart-guide
# Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks Since artificial superintelligence has never existed, claims that it poses a serious risk of global catastrophe can be easy to dismiss as fearmongering. Yet many of the specific worries about such systems are not f...
https://www.lesswrong.com/posts/kLvhBSwjWD9wjejWn/precedents-for-the-unprecedented-historical-analogies-for-1
# Comparing yourself to other people There's a thought that I sometimes hear, and it goes something like this: "We live in the best X of all possible Xs". For example: - Whatever criticism one might have towards modern society, we're still living in the safest and richest time of human history. - However poor one ma...
https://www.lesswrong.com/posts/vS9ZhhwPeew7gvNGk/comparing-yourself-to-other-people
# Is It Reasoning or Just a Fixed Bias? This is my first mechanistic interpretability blog post! I decided to research whether models are actually reasoning when answering non-deductive questions, or whether they're doing something simpler. My dataset is adapted from InAbHyD[^mhlormxp4t], and it's composed of inducti...
https://www.lesswrong.com/posts/kQvouwwHnEJkJ47uv/is-it-reasoning-or-just-a-fixed-bias-1
# Forfeiting Ill-Gotten Gains It's a holiday. The cousins are over, and the kids are having a great time. Unfortunately, that includes rampaging through the kitchen. We're trying to cook, so there's a "no cutting through the kitchen" rule. Imagine enforcement looks like: > Kid: \[dashes into kitchen, pursued by cousi...
https://www.lesswrong.com/posts/pyuhYvkqX9Lzr6QWX/forfeiting-ill-gotten-gains
# Applying to MATS: What the Program Is Like, and Who It’s For **Application deadline:** **Three days remaining! MATS Summer 2026 applications close this Sunday, January 18, 2026 AOE.** We've shortened the application this year. Most people finish in 1–2 hours, and we'll get back to applicants about first stage result...
https://www.lesswrong.com/posts/GJWgXZ3jjYfkfzKut/applying-to-mats-what-the-program-is-like-and-who-it-s-for
# Lightcone is hiring a generalist, a designer, and a campus operations co-lead Lightcone is hiring! We build beautiful things for truth-seeking and world-saving.  ![Image](https://pbs.twimg.com/media/G8UtH9AaEAAtlCE?format=jpg&name=4096x4096) We are hiring for three different positions: a senior designer, a campus ...
https://www.lesswrong.com/posts/Wowc8jfvyrsp4a6uk/lightcone-is-hiring-a-generalist-a-designer-and-a-campus
# What Washington Says About AGI I spent a few hundred dollars on Anthropic API credits and let Claude individually research every current US congressperson's position on AI. This is a summary of my findings. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/68ae22dcf4d8e4fd62498aaac0c8180359a8db793c6bc3a1...
https://www.lesswrong.com/posts/WLdcvAcoFZv9enR37/what-washington-says-about-agi
# Japan is a bank Among developed countries, Japan [has long had](https://en.wikipedia.org/wiki/List_of_countries_by_government_debt) the highest debt/GDP ratio, currently ~232%. That seems pretty bad, and conversely has made some people say that the US debt is fine because it's still much lower than Japan's. But here...
https://www.lesswrong.com/posts/vbXWJSKKynepq7sqY/japan-is-a-bank
# The truth behind the 2026 J.P. Morgan Healthcare Conference ![](https://substackcdn.com/image/fetch/$s_!lWP8!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc78ed1c-c69b-4c4a-9665-dd9f856bcf6e_2912x1632.png) * * * In 1654, a Jesuit pol...
https://www.lesswrong.com/posts/eopA4MqhrE4dkLjHX/the-truth-behind-the-2026-j-p-morgan-healthcare-conference
# Focusing on Flourishing Even When Survival is Unlikely (Part I) 1\. The Case ============ You've probably heard something like this before: 1. If we survive this century, the expected value of the future is massive. 2. If we don't survive, the expected value is near zero. 3. Therefore, the value of an intervent...
https://www.lesswrong.com/posts/cjGALjyJEvenyP9pD/focusing-on-flourishing-even-when-survival-is-unlikely-part
# Understanding Trust: Project Update This is a brief note on what I did with my funding in 2025, and my plans for 2026, written primarily because Manifund nudged me for an update on [my project](https://manifund.org/projects/understanding-trust). I ran my AISC project (which I announced [here](https://www.lesswrong....
https://www.lesswrong.com/posts/yig4LeEfpkFfiWpk2/understanding-trust-project-update
# Is METR Underestimating LLM Time Horizons? **TL;DR** * *Using METR human-baseline data, I define an alternate LLM time-horizon measure, i.e. the longest time horizon over which an LLM exceeds human baseline reliability (or equivalently the intersection point of the human and LLM logistic curves), and this measure...
https://www.lesswrong.com/posts/kNHxuusznCR3rhqkf/is-metr-underestimating-llm-time-horizons
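The alternate measure sketched in the TL;DR above — the intersection point of the human and LLM logistic success curves — can be illustrated with a small sketch. The logistic parameters below are made up for illustration; they are not METR's fitted values or the post's.

```python
import math

def logistic(log_t, midpoint, slope):
    """P(success) as a decreasing function of log task length (minutes)."""
    return 1.0 / (1.0 + math.exp(slope * (log_t - midpoint)))

# Hypothetical fitted parameters: humans decay slowly from a long horizon,
# the LLM decays steeply from a shorter one, so the curves cross once.
human = dict(midpoint=math.log(480), slope=0.8)
llm = dict(midpoint=math.log(60), slope=1.5)

def crossing(lo=math.log(1), hi=math.log(10000), iters=60):
    """Bisect on log task length for the point where the curves intersect."""
    f = lambda x: logistic(x, **llm) - logistic(x, **human)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return math.exp((lo + hi) / 2)

print(round(crossing(), 1))  # ≈ 5.6 minutes for these made-up curves
```

For these illustrative parameters the LLM beats the human baseline on tasks shorter than the crossing point and loses beyond it, which is exactly the quantity the alternate measure reports.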
# Blogging, Writing, Musing, And Thinking Yesterday I stumbled on this quote from a blog post by [JA Westenberg](https://www.joanwestenberg.com/the-case-for-blogging-in-the-ruins/): > Michel de Montaigne arguably invented the essay in the 1570s, sitting in a tower in his French château, writing about whatever interes...
https://www.lesswrong.com/posts/Lzek2t6GuXiGfZtgg/blogging-writing-musing-and-thinking
# Irrationality as a Defense Mechanism for Reward-hacking *This post was written as part of research done at MATS 9.0 under the mentorship of Richard Ngo. It's related to my previous* [*post*](https://www.lesswrong.com/posts/K4H48fTzLBJj5Fox6/understanding-agency-through-markov-blankets)*, but should be readable as a ...
https://www.lesswrong.com/posts/H8uoAmbeqjD2PG2jm/irrationality-as-a-defense-mechanism-for-reward-hacking
# Massive Activations in DroPE: Evidence for Attention Reorganization **Summary** ----------- I do a quick experiment to investigate how DroPE (Dropping Positional Embeddings) models differ from standard RoPE models in their use of "massive values"  (that is, concentrated large activations in Query and Key tensors) t...
https://www.lesswrong.com/posts/cGheHMeGjTqivAfCk/massive-activations-in-drope-evidence-for-attention
# How to Love Them Equally My parents have always said that they love all four of their children equally. I always thought this was a Correct Lie: that they don’t love us all equally, but they feel such a strong loyalty to us and have Specific Family Values such that lying about it is the thing to do to make sure we a...
https://www.lesswrong.com/posts/5PhfJjNgor4iou3mf/how-to-love-them-equally
# When the LLM isn't the one who's wrong Recently I've been accumulating stories where I think an LLM is mistaken, only to discover that I'm the one who's wrong. My favorite recent case came while researching 19th century US-China opium trade.  It's a somewhat convoluted history: opium was smuggled when it was legal ...
https://www.lesswrong.com/posts/Cd8tRgpWnKPuNhZ2r/when-the-llm-isn-t-the-one-who-s-wrong
# "The first two weeks are the hardest": my first digital declutter It is unbearable to not be consuming. All through the house is nothing but silence. The need inside of me is not an ache, it is caustic, sour, the burning desire to be distracted, to be listening, watching, scrolling. Some of the time I think I’m hap...
https://www.lesswrong.com/posts/eeFqTjmZ8kS7S5tpg/the-first-two-weeks-are-the-hardest-my-first-digital
# VLAs as Model Organisms for AI Safety # What Training Robot Policies Taught Me About Emergent Capabilities and Control I spent six weeks training a humanoid robot to do household tasks. Along the way, my research lead and I started noticing things about the particular failure modes of the robot that seemed to indic...
https://www.lesswrong.com/posts/4p2HBMxCkh7pZ3xCa/vlas-as-model-organisms-for-ai-safety
# Five Theses on AI Art 1\. We've Been On This Ride Before ================================== [Virginia Woolf, writing at the dawn of cinema](https://sabzian.be/text/the-cinema) (1926), expresses doubt about whether or not this new medium has any legs: > *"Anna \[Karenina\] falls in love with Vronsky” – that is to s...
https://www.lesswrong.com/posts/KKmR3cKWxzRdyHcEp/five-theses-on-ai-art
# Gradual Paths to Collective Flourishing *by Nora Ammann & Claude Opus 4.5* Setting the stage ================= There aren't many detailed stories about how things could go well with AI.[^v0vdf96f4e9] So I'm about to tell you one.  **This is an attempt to articulate a path, through the AI transition, to collective...
https://www.lesswrong.com/posts/mtASw9zpnKz4noLFA/gradual-paths-to-collective-flourishing
# How to think about enemies: the example of Greenpeace A large number of nice smart people do not have a good understanding of enmity. Almost on principle, they refuse to perceive people and movements as an enemy.[^n0ig8x6l6ht] They *feel bad* about the mere idea of perceiving a group as an enemy. And as a result, t...
https://www.lesswrong.com/posts/hjepvXZozGsKAbJbr/how-to-think-about-enemies-the-example-of-greenpeace
# The Example My work happens to consist of two things: writing code and doing math. That means that periodically I produce a very abstract thing, and then observe reality agree with its predictions. While satisfying, it has a common adverse effect of finding oneself in a deep philosophical confusion. An effect so com...
https://www.lesswrong.com/posts/pzRG3nNCAAr6KkGga/the-example
# Desiderata of good problems to hand off to AIs Many technical AI safety plans involve building automated alignment researchers to improve our ability to solve the alignment problem. Safety plans from AI labs revolve around this as a first line of defence (e.g. [OpenAI](https://openai.com/index/our-approach-to-alignm...
https://www.lesswrong.com/posts/aHioEbJYd8vbrbu2r/desiderata-of-good-problems-to-hand-off-to-ais
# AGI both does and doesn't have an infinite time horizon TLDR  ----- * Long time horizon METR-HRS tasks are both more difficult and sequentially longer than short tasks * The resulting benchmark is therefore measuring both the ability to complete difficult tasks and consistency in its abilities over long time fr...
https://www.lesswrong.com/posts/ne5toFQnSz5BXmfFn/agi-both-does-and-doesn-t-have-an-infinite-time-horizon
# Could LLM alignment research reduce x-risk if the first takeover-capable AI is not an LLM? Many people believe that the first AI capable of taking over would be quite different from the LLMs of today. Suppose this is true—does prosaic alignment research on LLMs still reduce x-risk? I believe advances in LLM alignmen...
https://www.lesswrong.com/posts/rgviB6pAu3g5Jvwzz/could-llm-alignment-research-reduce-x-risk-if-the-first
# Medical Roundup #6 The main thing to know this time around is that the whole crazy ‘what is causing the rise in autism?’ debacle is over actual nothing. There is no rise in autism. There is only a rise in the diagnosis of autism. #### Table of Contents ![](https://substackcdn.com/image/fetch/$s_!ompY!,w_1456,c_lim...
https://www.lesswrong.com/posts/FBkJmFStydJbAJmry/medical-roundup-6
# Pretraining on Aligned AI Data Dramatically Reduces Misalignment—Even After Post-Training Alignment Pretraining Shows Promise ----------------------------------- **TL;DR**: A [new paper](https://arxiv.org/pdf/2601.10160v1) shows that pretraining language models on data about AI behaving well dramatically reduces mi...
https://www.lesswrong.com/posts/ZeWewFEefCtx4Rj3G/pretraining-on-aligned-ai-data-dramatically-reduces
# What can Kickstarter teach us about goal completion? I, like many others, struggle with sticking to my goals. I was interested in analyzing data relevant to the topic and thought the crowdfunding platform Kickstarter might be an interesting place to look, as I was aware that not every funded Kickstarter delivered a ...
https://www.lesswrong.com/posts/o3XjB2Lj9av4BpgLR/what-can-kickstarter-teach-us-about-goal-completion
# There may be low hanging fruit for a weak nootropic **The problem** --------------- You are routinely exposed to CO2 concentrations an order of magnitude higher than your ancestors. You are almost constantly exposed to concentrations two times higher. Part of this is due to the baseline increase in atmospheric CO2 ...
https://www.lesswrong.com/posts/kktFKEtrDtgeDiACn/there-may-be-low-hanging-fruit-for-a-weak-nootropic
# Evidence that would update me towards a software-only fast takeoff In a software-only takeoff, AIs improve AI-related software at an increasing speed, leading to superintelligent AI. The plausibility of this scenario is relevant to questions like: * How much time do we have between near-human and superintelligent...
https://www.lesswrong.com/posts/BewnGEzPoaiEKEpfu/evidence-that-would-update-me-towards-a-software-only-fast
# Appendix: Contra Fiora on Contra This is an appendix post for Why I Transitioned: A Response. In [Why I Transitioned: A Case Study](https://www.lesswrong.com/posts/gEETjfjm3eCkJKesz/why-i-transitioned-a-case-study), Fiora Sunshine claims: > Famously, trans people tend not to have great introspective clarity into th...
https://www.lesswrong.com/posts/yzRmgAmuCXhN3eXBK/appendix-contra-fiora-on-contra
# Why I Transitioned: A Response Fiora Sunshine's post, [Why I Transitioned: A Case Study](https://www.lesswrong.com/posts/gEETjfjm3eCkJKesz/why-i-transitioned-a-case-study) (the OP) articulates a valuable theory for why some MtFs transition. If you are MtF and feel the post describes you, I believe you. However, ma...
https://www.lesswrong.com/posts/rt2yai8JkTPYgzoEj/why-i-transitioned-a-response
# Deep learning as program synthesis *Epistemic status: This post is a synthesis of ideas that are, in my experience, widespread among researchers at frontier labs and in mechanistic interpretability, but rarely written down comprehensively in one place - different communities tend to know different pieces of evidence...
https://www.lesswrong.com/posts/Dw8mskAvBX37MxvXo/deep-learning-as-program-synthesis-1
# ChatGPT Self Portrait A short fun one today, so we have a reference point for this later. This post was going around my parts of Twitter: > [@gmltony](https://x.com/gmltony/status/2012936406461456411): Go to your ChatGPT and send this prompt: “Create an image of how I treat you”. Share your image result. ![😂](http...
https://www.lesswrong.com/posts/eg6GgEq6KWPJZQQYE/chatgpt-self-portrait
# MLSN #18: Adversarial Diffusion, Activation Oracles, Weird Generalization Diffusion LLMs for Adversarial Attack Generation ================================================ *TLDR: New research indicates that an emerging type of LLM, called diffusion LLMs, are more effective than traditional autoregressive LLMs for a...
https://www.lesswrong.com/posts/nRsZxrApFwM5bdiWr/mlsn-18-adversarial-diffusion-activation-oracles-weird
# No instrumental convergence without AI psychology > The secret is that instrumental convergence is a fact _about reality_ (about the space of possible plans), not AI psychology. > > _Zack M. Davis, group discussion_ Such arguments flitter around the AI safety space. While these arguments contain some truth, they at...
https://www.lesswrong.com/posts/gCdNKX8Y4YmqQyxrX/no-instrumental-convergence-without-ai-psychology-1
# So Long Sucker: AI Deception, "Alliance Banks," and Institutional Lying In 1950, John Nash and three other game theorists designed a four-player game, *So Long Sucker*, with one brutal property: to win, you must eventually betray your allies. In January 2026, I used this game to test how four frontier models beha...
https://www.lesswrong.com/posts/3KtJ2YP3tTxnASTBn/so-long-sucker-ai-deception-alliance-banks-and-institutional
# Money Can't Buy the Smile on a Child's Face As They Look at A Beautiful Sunset... but it also can't buy a malaria free world: my current understanding of how Effective Altruism has failed I've read a lot of Ben Hoffman's work over the years, but only this past week have I read his actual myriad criticisms of the Eff...
https://www.lesswrong.com/posts/gKdnTcqQfpasfyZjP/money-can-t-buy-the-smile-on-a-child-s-face-as-they-look-at
# Vibing with Claude, January 2026 Edition _NB: Last week I teased a follow-up that depended on [posting an excerpt](https://www.uncertainupdates.com/p/the-many-ways-of-knowing) from [Fundamental Uncertainty](https://www.fundamentaluncertainty.com/). Alas, I got wrapped up in revisions and didn’t get it done in time. ...
https://www.lesswrong.com/posts/cfE9Qm8s6HxWNpJYx/vibing-with-claude-january-2026-edition
# The case for AGI safety products *This is a personal post and does not necessarily reflect the opinion of other members of Apollo Research. This blogpost is paired with our announcement that* [*Apollo Research is spinning out from fiscal sponsorship into a PBC*](https://www.apolloresearch.ai/blog/apollo-research-is-...
https://www.lesswrong.com/posts/iwfdwzJerpC7FqbZG/the-case-for-agi-safety-products
# Crimes of the Future, Solutions of the Past Three hundred million years ago, plants evolved lignin—a complex polymer that gave wood its strength and rigidity—but nothing on Earth could break it down. Dead trees accumulated for sixty million years, burying vast amounts of carbon that would eventually become the coal ...
https://www.lesswrong.com/posts/49X8eFCRw6KTEXMP5/crimes-of-the-future-solutions-of-the-past
# Claude's new constitution [**Read the constitution**](http://anthropic.com/constitution). Previously: 'soul document' discussion [here](https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5-opus-soul-document); the new constitution contains almost all of the 'soul document' content, but is >2x longer with a ...
https://www.lesswrong.com/posts/mLvxxoNjDqDHBAo6K/claude-s-new-constitution
# Claude Codes #3 We’re back with all the Claude that’s fit to Code. I continue to have great fun with it and find useful upgrades, but the biggest reminder is that you need the art to have an end other than itself. Don’t spend too long improving your setup, or especially improving how you improve your setup, without ...
https://www.lesswrong.com/posts/rCf8KLrpzdtFoFTJD/claude-codes-3
# When should we train against a scheming monitor? As we develop new techniques for detecting deceptive alignment, ranging from action monitoring to Chain-of-Thought (CoT) or activations monitoring, we face a dilemma: once we detect scheming behaviour or intent, should we use that signal to "train the scheming out"? ...
https://www.lesswrong.com/posts/u67JAa6FKKpQJPp3m/when-should-we-train-against-a-scheming-monitor
# Finding Yourself in Others _"The person is an identity that emerges through relationship.... If we isolate the 'I' from the 'thou' we lose not only its otherness but also its very being; it simply cannot be without the other."_ -- John Zizioulas, Communion and Otherness --- It is the third week of Anna's freshman ...
https://www.lesswrong.com/posts/m4cGnJDpvs9LWk2dK/finding-yourself-in-others-2
# How (and why) to read Drexler on AI I have been reading Eric Drexler’s writing on the future of AI for more than a decade at this point. I love it, but I also think it can be tricky or frustrating. More than anyone else I know, Eric seems to tap into a deep vision for how the future of technology may work — and hav...
https://www.lesswrong.com/posts/u3FcpdmDZekXxgN7L/how-and-why-to-read-drexler-on-ai
# The first type of transformative AI? AI risk discussion often seems to assume that the AI we most want to prepare for will emerge in a “normal” world — one that hasn’t really been transformed by earlier AI systems. I think betting on this assumption could be a big mistake. If it turns out to be wrong, most of our p...
https://www.lesswrong.com/posts/Hc6fyGcuw64dcmevb/the-first-type-of-transformative-ai
# Claude's Constitution is an excellent guide for humans, too As with LLMs, so too with humans. [Anthropic](https://www.anthropic.com/constitution) released [Claude's Constitution](https://www.lesswrong.com/posts/mLvxxoNjDqDHBAo6K/claude-s-new-constitution) today. It's excellent in many ways, and I will have more to ...
https://www.lesswrong.com/posts/CLkzD7fBbSbmoXXXh/claude-s-constitution-is-an-excellent-guide-for-humans-too
# Uncovering Unfaithful CoT in Deceptive Models Inspired by the paper [Modifying LLM Beliefs with Synthetic Document Finetuning](https://alignment.anthropic.com/2025/modifying-beliefs-via-sdf/), I fine-tuned an AI model to adopt the personality of a detective and generate unfaithful Chain-Of-Thought (CoT) in order to...
https://www.lesswrong.com/posts/EkuGSFCDQJr4qnXZK/uncovering-unfaithful-cot-in-deceptive-models-2
# Neural chameleons can('t) hide from activation oracles \[epistemic status - vibe coded, but first-pass sanity-checked the code and methodology. Messy project, take results with grain of salt. See limitations/footnotes\] *Done as a mini-project for Neel Nanda's MATS exploration stream* [Github repo](https://github....
https://www.lesswrong.com/posts/wfGYMbr4AMcH2Rv68/neural-chameleons-can-t-hide-from-activation-oracles-1
# Resisting Reality Sometimes updating on evidence opens roads we do not want to take: roads that we do not like as we know where they inevitably lead. We sometimes prefer to stay in homeostasis, in our current lane, suboptimal. One evocative example is the sort of paradoxical blend of invective mania and social apat...
https://www.lesswrong.com/posts/JLk8Rwbw2zqMM59Kv/resisting-reality
# AI #152: Brought To You By The Torment Nexus [Anthropic released a new constitution for Claude](https://x.com/AnthropicAI/status/2014005798691877083). I encourage those interested to read the document, either in whole or in part. I intend to cover it on its own soon. There was also actual talk about coordinating on...
https://www.lesswrong.com/posts/pCkYfhYcwFLELoYQf/ai-152-brought-to-you-by-the-torment-nexus
# Releasing TakeOverBench.com: a benchmark, for AI takeover Today, [PauseAI](https://pauseai.info/) and the [Existential Risk Observatory](https://www.existentialriskobservatory.org/) release [TakeOverBench.com](http://takeoverbench.com): a benchmark, but for AI takeover. There are many AI benchmarks, but this is the...
https://www.lesswrong.com/posts/RQk34g37WmxnDcjte/releasing-takeoverbench-com-a-benchmark-for-ai-takeover
# AI can suddenly become dangerous despite gradual progress In the Sable story (IABIED), AI obtains dangerous capabilities such as self-exfiltration, virus design, persuasion, and AI research. It uses a combination of those capabilities to eventually conduct a successful takeover against humanity. Some have criticised...
https://www.lesswrong.com/posts/JqrZxQwmqmoCWXXxC/ai-can-suddenly-become-dangerous-despite-gradual-progress
# Will we get automated alignment research before an AI Takeoff? TLDR: Will AI-automation first speed up capabilities or safety research? I forecast that most areas of capabilities research will see a 10x speedup before safety research. This is primarily because capabilities research has clearer feedback signals and r...
https://www.lesswrong.com/posts/z4FvJigv3c8sZgaKZ/will-we-get-automated-alignment-research-before-an-ai
# The phases of an AI takeover *This is a cross-post from my Substack,* [*Clear-Eyed AI*](https://stevenadler.substack.com/)*. If you want my future articles sent to you, you can subscribe for free there.* *~~~~* Superintelligence might kill everyone on Earth. At least, that’s what the three most-cited AI scientists...
https://www.lesswrong.com/posts/NrpujREipma3aGcH6/the-phases-of-an-ai-takeover
# Does Pentagon Pizza Theory Work? As soon as modern data analysis became a thing, the US government has had to deal with people trying to use open source data to uncover its secrets. During the early Cold War days and America’s hydrogen bomb testing, there was an enormous amount of speculation about how the bombs ac...
https://www.lesswrong.com/posts/Li3Aw7sDLXTCcQHZM/does-pentagon-pizza-theory-work
# Like night and day: Light glasses and dark therapy can treat non-24 (and SAD) *Epistemic status:  n=1, strong, life changing results.* TLDR:  Light glasses, in combination with turning all your lights red at night, and optionally melatonin, can treat non-24.  Light glasses can also be a competitive alternative to ...
https://www.lesswrong.com/posts/mHJFu6FAJc4ikscnq/like-night-and-day-light-glasses-and-dark-therapy-can-treat
# The World Hasn't Gone Mad In June 2025, Kalshi unveiled an ad campaign with the slogan “The world’s gone mad, trade it.” The ad was one of the first TV ads to ever be entirely generated by AI, and its content quickly was met with a slew of parodies and jokes all over the internet. I must agree the ad was quite funn...
https://www.lesswrong.com/posts/sAtDXNWS4JPpcffQX/the-world-hasn-t-gone-mad
# A quick, elegant derivation of Bayes' Theorem I'm glad I know this, and maybe some people here don't, so here goes. $$P(A \text{ and } B) = P(A) \cdot P(B \mid A)$$ $$P(B \text{ and } A) = P(B) \cdot P(A \mid B)$$ Order doesn't matter for joint events: "A and B" refers to the same event as "B and A". Set them equal...
https://www.lesswrong.com/posts/GjkqijXHakMyDxF9e/a-quick-elegant-derivation-of-bayes-theorem
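The derivation above can be checked numerically: equate the two factorizations of the joint probability and solve for P(A|B). The probabilities below are illustrative numbers chosen for this sketch, not from the post.

```python
from fractions import Fraction

# Exact rational arithmetic, so the equality check is exact.
p_a = Fraction(3, 10)          # P(A)
p_b_given_a = Fraction(1, 2)   # P(B|A)
p_b = Fraction(2, 5)           # P(B)

# "A and B" and "B and A" name the same event, so the two factorizations
# are equal: P(A)·P(B|A) = P(B)·P(A|B). Dividing by P(B) gives Bayes.
p_joint = p_a * p_b_given_a
p_a_given_b = p_joint / p_b

assert p_b * p_a_given_b == p_joint
print(p_a_given_b)  # → 3/8
```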
# Value Learning Needs a Low-Dimensional Bottleneck **Epistemic status:** Confident in the direction, not confident in the numbers. I have spent a few hours looking into this. Suppose human values were internally coherent, high-dimensional, explicit, and decently stable under reflection. Would alignment be easier or ...
https://www.lesswrong.com/posts/XrpiQcGnqeLKLMhbD/value-learning-needs-a-low-dimensional-bottleneck
# Principles for Meta-Science and AI Safety Replications If we get AI safety research wrong, we may not get a second chance. But despite the stakes being so high, there has been no effort to systematically review and verify empirical AI safety papers. I would like to change that. Today I sent in funding applications ...
https://www.lesswrong.com/posts/8qytxHWzSsdsyTfmZ/principles-for-meta-science-and-ai-safety-replications
# Are Short AI Timelines Really Higher-Leverage? *This is a rough research note – we’re sharing it for feedback and to spark discussion. We’re less confident in its methods and conclusions.* Summary ======= Different strategies make sense if timelines to AGI are short than if they are long.  In deciding when to spe...
https://www.lesswrong.com/posts/AhXonGLfYEwSwpEhW/are-short-ai-timelines-really-higher-leverage
# A Framework for Eval Awareness In this post, we offer a conceptual framework for evaluation awareness. This is designed to clarify the different ways in which models can respond to evaluations. Some key ideas we introduce through the lens of our framework include leveraging model uncertainty about eval type and awar...
https://www.lesswrong.com/posts/cjMpms3dBZJCrxL8c/a-framework-for-eval-awareness
# Digital Consciousness Model Results and Key Takeaways Introduction to the Digital Consciousness Model (DCM) ===================================================== Artificially intelligent systems, especially large language models (LLMs) used by almost [50% of the adult US population](https://rethinkpriorities.org/re...
https://www.lesswrong.com/posts/YftBFESFevbF25tZW/digital-consciousness-model-results-and-key-takeaways
# New version of “Intro to Brain-Like-AGI Safety” A new version of [“Intro to Brain-Like-AGI Safety”](https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8) is out! Things that have not changed ============================ **Same links as before**: * **As a series of 15 blog posts on LessWrong / Alignment Forum:** [https...
https://www.lesswrong.com/posts/rreDwHXgnhEDKxkro/new-version-of-intro-to-brain-like-agi-safety
# Eliciting base models with simple unsupervised techniques *Authors: Aditya Shrivastava*, Allison Qi*, Callum Canavan*, Tianyi Alex Qiu, Jonathan Michala, Fabien Roger* *(*Equal contributions, reverse alphabetical)* Wen et al. introduced the [internal coherence maximization (ICM) algorithm](https://openreview.net/...
https://www.lesswrong.com/posts/rFxfMbwJ3v4PNesWP/eliciting-base-models-with-simple-unsupervised-techniques
# Emergency Response Measures for Catastrophic AI Risk I have written a paper on Chinese domestic AI regulation with coauthors James Zhang, Zongze Wu, Michael Chen, Yue Zhu, and Geng Hong. It was presented recently at NeurIPS 2025's [Workshop on Regulatable ML](https://regulatableml.github.io/), and it may be found on...
https://www.lesswrong.com/posts/AJ6ntMdcspifkLryB/emergency-response-measures-for-catastrophic-ai-risk
# The Long View Of History History as a subject is often viewed by students and the public at large as a domain without a use, a pedantic study of dates and names with some vague mission to remember the past—a memorial to ages past but neither a forward-looking nor a useful endeavor. The study of history produces teacher...
https://www.lesswrong.com/posts/8fW6CyJhnotuKDcHa/the-long-view-of-history
# Dating Roundup #11: Going Too Meta If there’s several things this blog endorses, one of them would be going meta. It’s time. The big picture awaits. #### You’re Single Because You Live In The Wrong Place The most important meta question is location, location, location. This is the periodic reminder that dating d...
https://www.lesswrong.com/posts/7y6YA8o6oisAzDSgk/dating-roundup-11-going-too-meta
# Condensation & Relevance *(This post elaborates on a few ideas from my* [*review*](https://www.lesswrong.com/posts/BstHXPgQyfeNnLjjp/condensation) *of Sam Eisenstat's* [*Condensation: a theory of concepts*](https://openreview.net/forum?id=HwKFJ3odui#discussion)*. It should be somewhat readable on its own but doesn't...
https://www.lesswrong.com/posts/2x9yatKKTRMabQAWq/condensation-and-relevance
# Every Benchmark is Broken Last June, METR caught o3 [reward hacking](https://metr.org/blog/2025-06-05-recent-reward-hacking/) on its **RE-Bench** and **HCAST** benchmarks. In a particularly humorous case, o3, when tasked with optimizing a kernel, decided to “shrink the notion of time as seen by the scorer”. [![](ht...
https://www.lesswrong.com/posts/HzjssjeQqhf3kRw9r/every-benchmark-is-broken
# AI X-Risk Bottleneck = Advocacy? Introduction ============ I am leading an early-stage effort to target AI x-risk. We're currently analyzing the bottlenecks in the AI x-risk prevention "supply chain" to decide where to focus our efforts. We would love to get comments from the community. The x-risk community has a ...
https://www.lesswrong.com/posts/Pu29pY5FdFYKRzhk8/ai-x-risk-bottleneck-advocacy
# A Simple Method for Accelerating Grokking > **TL;DR:** Letting a model overfit first, then applying Frobenius norm regularization, achieves grokking in roughly half the steps of Grokfast on modular arithmetic. I learned about [grokking](https://arxiv.org/abs/2201.02177) fairly recently, and thought it was quite int...
https://www.lesswrong.com/posts/38RcAQezS2AEcaEGv/a-simple-method-for-accelerating-grokking
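The two-phase recipe in the TL;DR above — let the model overfit first, then switch on a Frobenius norm penalty — can be sketched as below. The model, toy data, and hyperparameters are placeholders for illustration, not the post's actual setup or its Grokfast comparison.

```python
import torch
import torch.nn as nn

P = 97  # toy modular arithmetic task: (a + b) mod P
model = nn.Sequential(nn.Embedding(2 * P, 64), nn.Flatten(),
                      nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.0)
loss_fn = nn.CrossEntropyLoss()

def frobenius_penalty(model):
    # Sum of squared Frobenius norms over the 2-D weight matrices.
    return sum((p ** 2).sum() for _, p in model.named_parameters()
               if p.ndim == 2)

def step(x, y, lam):
    opt.zero_grad()
    loss = loss_fn(model(x), y) + lam * frobenius_penalty(model)
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch: pairs (a, b) encoded as token ids [a, P + b].
x = torch.randint(0, P, (32, 2)); x[:, 1] += P
y = (x[:, 0] + (x[:, 1] - P)) % P

for _ in range(200):   # phase 1: no regularization, let the model overfit
    step(x, y, lam=0.0)
for _ in range(200):   # phase 2: switch on the Frobenius norm penalty
    step(x, y, lam=1e-3)
```

The only moving part is the `lam` schedule: zero during the memorization phase, then a small positive coefficient once training loss is near zero, which is the point where the post reports grokking accelerating.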
# IABIED Book Review: Core Arguments and Counterarguments The recent book “[If Anyone Builds It Everyone Dies](https://en.wikipedia.org/wiki/If_Anyone_Builds_It,_Everyone_Dies)” (September 2025) by Eliezer Yudkowsky and Nate Soares argues that creating superintelligent AI in the near future would almost certainly caus...
https://www.lesswrong.com/posts/qFzWTTxW37mqnE6CA/iabied-book-review-core-arguments-and-counterarguments
# Small language models hallucinate knowing something's off. If I ask "**What is atmospheric pressure on Planet Xylon**" to a language model, a good answer would be something like "I don't know" or "This question seems fictional", which current SOTA LLMs do due to stronger RLHF, but not smaller LLMs like **Llama-3.2-...
https://www.lesswrong.com/posts/cgCeqi8cDn9RnDdQA/small-language-models-hallucinate-knowing-something-s-off
# In Defense of Memorization **TLDR:** Western education creates a false dichotomy between memorization and understanding. I believe we should expect both. Having facts readily available in your brain (not just "Google-able") enables real-time bullshit detection, helps you calibrate who to trust, holds your own belief...
https://www.lesswrong.com/posts/xqjAqybLkZeEWvnNt/in-defense-of-memorization
# Skill: cognitive black box flight recorder *[Crosspost from my blog](https://tsvibt.blogspot.com/2026/01/skill-cognitive-black-box-flight.html).* Very short summary: It's especially valuable to Notice while in mental states that make Noticing especially difficult, so it's valuable to learn that skill. Short summ...
https://www.lesswrong.com/posts/yueCdEog8CmXHryiv/skill-cognitive-black-box-flight-recorder
# Clawed Abode: Claude Code is Too Cloudy Running Claude Code locally is annoying since you have to deal with permissions and agents interfering with each other (and you have to be at your computer), but running Claude Code on the web is annoying because the cloud environment is so limited[^tv0cppcxo5i]. What if we c...
https://www.lesswrong.com/posts/oZdKxDhQEPxv8tXuT/clawed-abode-claude-code-is-too-cloudy
# What's a good methodology for "is Trump unusual about executive overreach / institution erosion / corruption?" *_Updated title to include "corruption", and changed some framing in the post._* Critics of Trump often describe him as making absolutely unprecedented moves to expand executive power, extract personal...
https://www.lesswrong.com/posts/XJDCii4QAG25e26dq/what-s-a-good-methodology-for-is-trump-unusual-about
# Declining Marginal Costs of Alienation [Figure: New York Times/Siena poll graphic asking "Do you think the tactics used by ICE have gone too far, have not gone far enough or have been about right?" All respondents: 61% too far, 26% about right, 11% not far enough; breakdowns for Democrats and Independents are cut off...
https://www.lesswrong.com/posts/tAgxvKnfBTSoQw7eM/declining-marginal-costs-of-alienation
# The Virtual Mother-in-Law In a previous post, I argued against framing alignment in terms of [maternal instinct](https://www.lesswrong.com/posts/C6oQaSXmTtqNxh9Ad/should-we-align-ai-with-maternal-instinct). Interacting with current LLMs has made that concern feel less abstract. What I’m encountering now feels like a...
https://www.lesswrong.com/posts/YowdDzpywFFYzpM92/the-virtual-mother-in-law
# Towards Sub-agent Dynamics and Conflict *This post was written as part of research done at MATS 9.0 under the mentorship of Richard Ngo.* Introduction This is a follow-up to my previous [post](https://www.lesswrong.com/posts/H8uoAmbeqjD2PG2jm/irrationality-as-a-defense-mechanism-for-reward-hacking)....
https://www.lesswrong.com/posts/3S2KhQoKb8MpXury5/towards-sub-agent-dynamics-and-conflict
# Reinventing the wheel I have been known to indulge in reinventing the wheel. It's something of a capital sin, or at least heavily discouraged, in science and engineering, yet I keep falling for it. Study the classics, to be sure, derive the theorems and the results - in an educational setting. But ultimately, when ...
https://www.lesswrong.com/posts/krkH33PWXxkqMHcnh/reinventing-the-wheel-1
# A tale of three theories: sparsity, frustration, and statistical field theory This post is an informal preliminary writeup of a project that I've been working on with friends and collaborators. Some of the theory was developed jointly with Zohar Ringel, and we hope to write a more formal paper on it this year. Exper...
https://www.lesswrong.com/posts/siu22scEfuKxpSgfK/a-tale-of-three-theories-sparsity-frustration-and
# To be well-calibrated is to be punctual To be well-calibrated is to be able to predict the world with appropriate confidence. We know that calibration can be improved through practice. Accurate calibration of our beliefs and expectations is a foundational element of epistemic rationality. [Others](https://www.lessw...
https://www.lesswrong.com/posts/9Qj6v2tjZfDH9kfam/to-be-well-calibrated-is-to-be-punctual
# Canada Lost Its Measles Elimination Status Because We Don't Have Enough Nurses Who Speak Low German *This post was originally published on November 11th, 2025. I've been spending some time reworking and cleaning up the Inkhaven posts I'm most proud of, and completed the process for this one today.* Today, Canada of...
https://www.lesswrong.com/posts/H8RdAbAmsqbpBWoDd/canada-lost-its-measles-elimination-status-because-we-don-t
# Notable Progress Has Been Made in Whole Brain Emulation Summary We have \[relatively\] recently scanned the whole fruit fly brain, simulated it, confirmed it is pretty highly constrained by morphology alone. Other groups have been working on optical techniques and genetic work to make the scanning process f...
https://www.lesswrong.com/posts/DGsBfcEQKuNPmQizQ/notable-progress-has-been-made-in-whole-brain-emulation
# The Possessed Machines (summary) [The Possessed Machines](https://possessedmachines.com/) is one of the most important AI microsites. It was published anonymously by an ex-lab employee, and does not seem to have spread very far, likely at least partly due to this anonymity (e.g. there is no LessWrong discussion at ...
https://www.lesswrong.com/posts/ppBHrfY4bA6J7pkpS/the-possessed-machines-summary
# How accurate a model of the refrigeration cycle is this doodle? [This Technology Connections video on heat pumps](https://www.youtube.com/watch?v=7J52mDjZzto) made me realize I don't intuitively understand how refrigeration works. I tried to drill down until I understood what was happening with every molecule, and.....
https://www.lesswrong.com/posts/TcfofNsrcFfbWs6dA/how-accurate-a-model-of-the-refrigeration-cycle-is-this
# Upcoming Dovetail fellow talks & discussion As the current Dovetail research [fellowship](https://www.lesswrong.com/posts/5XAB9rS8KdLhagwzR/apply-for-the-2025-dovetail-fellowship) comes to a close, the fellows are giving talks on their projects. All are welcome to join! Unlike the [previous cohort](https://www.lessw...
https://www.lesswrong.com/posts/zP4nJWP7ZWhYdPXRE/upcoming-dovetail-fellow-talks-and-discussion
# Can you just vibe vulnerabilities? ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/2db3d3bcfc5f1d1c4c093c32844837fe603f9a0edb83f115.png) I’ve recently been wondering how close AI is to being able to reliably and autonomously find vulnerabilities in real-world software. I do not trust the academic resea...
https://www.lesswrong.com/posts/HrnaF9Qe5kokpLWFs/can-you-just-vibe-vulnerabilities
# How to do a digital declutter I’ve been writing about digital intentionality for a few months now, and I keep talking about how it’s important and it changed my life, but I haven’t yet told you how to actually do it. If you want to implement digital intentionality, I strongly recommend a thirty-day ‘digital declutt...
https://www.lesswrong.com/posts/rojYKnqHNfMypMdNY/how-to-do-a-digital-declutter
# I (well, mostly claude code) simulated proportional representation methods. *Low-ish effort post just sharing something I found fun. No AI-written text outside the figures.* I was recently nerd-sniped by proportional representation voting, and so when playing around with claude code I decided to have it build a sim...
https://www.lesswrong.com/posts/dLpAYa6ReaTGj3ktt/i-well-mostly-claude-code-simulated-proportional
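The teaser doesn't say which allocation rules the simulator covers; purely as an illustration of what a proportional representation method computes, here is the standard D'Hondt highest-averages rule (the example vote counts are made up, not from the post):

```python
def dhondt(votes, seats):
    # Allocate `seats` among parties: each seat goes to the party with the
    # current highest quotient, votes / (seats_won + 1).
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# With 8 seats: dhondt({"A": 100, "B": 80, "C": 30}, 8) -> {"A": 4, "B": 3, "C": 1}
```

Other methods the post may have simulated (Sainte-Laguë, single transferable vote) differ only in the quotient formula or in transferring ranked ballots.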
# Ada Palmer: Inventing the Renaissance *This is a cross-post from* [*https://www.250bpm.com/p/ada-palmer-inventing-the-renaissance*](https://www.250bpm.com/p/ada-palmer-inventing-the-renaissance)*.* [![](https://substackcdn.com/image/fetch/$s_!_XTm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%...
https://www.lesswrong.com/posts/doADJmyy6Yhp47SJ2/ada-palmer-inventing-the-renaissance
# Futarchy is Parasitic on What It Tries to Govern Summary ***Epistemic status:** quite confident.* Futarchy is bound to fail because conditional decision markets are structurally incapable of estimating causal policy effects once their outputs are acted upon. Traders must price contracts based on welfare *c...
https://www.lesswrong.com/posts/mW4ypzR6cTwKqncvp/futarchy-is-parasitic-on-what-it-tries-to-govern
# Eons of Utopia \[day 5/7 - epistemic status: longtermism apology form, having a moment\] *Voyager 1* was launched on September 5, 1977. Its mission was to study the very edges of the solar system, and then go gentle into that good night. As it was drifting away into the vast unknown, Sagan begged for one last pict...
https://www.lesswrong.com/posts/test3yyEYTSvrDtrD/eons-of-utopia
# What actually matters in neurotech startups (and what doesn't) ![](https://substackcdn.com/image/fetch/$s_!qlTr!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b0ae3b-31c7-479c-87f7-ad0414f3aace_2912x1632.png) *Note: Extraordinarily gr...
https://www.lesswrong.com/posts/Bn8CNvEHbKg2KPkvD/what-actually-matters-in-neurotech-startups-and-what-doesn-t