# OpenForecaster: How to train language models for open-ended forecasting? We built **OpenForecaster**, an 8B model trained to make predictions on **open-ended forecasting questions**. It is competitive with much larger proprietary models in held-out testing. We train it with **RL** on our [OpenForesight dataset](htt...
https://www.lesswrong.com/posts/GFkNFAer7nsiwmbhm/openforecaster-how-to-train-language-models-for-open-ended
# An interactive toy model for exploring AI's effect on the labour market *If it can be done by AI, it probably will be* *Cross posted from* [*my Substack*](https://open.substack.com/pub/charlesd353/p/an-interactive-toy-model-for-exploring?r=1jwfa&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true)*.* ![](http...
https://www.lesswrong.com/posts/zgehGn57XncCHnzTR/an-interactive-toy-model-for-exploring-ai-s-effect-on-the
# Does mindfulness meditation lead to awakening? There are common misconceptions concerning mindfulness meditation, what it even is and whether it leads to awakening or not. I've spent some years doing mindfulness meditation and would like to reflect on this topic based on some research papers to untangle this knot.  ...
https://www.lesswrong.com/posts/fa5PHMxLTfvmb2Aoy/does-mindfulness-meditation-lead-to-awakening
# Everything is Political Now, or, A Review of "Fraggle Rock: Back to the Rock" As a kid, _Fraggle Rock_ was my favorite TV show. I can’t really explain why. Maybe it was the characters, the songs, the sets, or its whole vibe, but for whatever reason, I loved it, and my entire way of seeing the world is in no small pa...
https://www.lesswrong.com/posts/RmNnWzucyFa5v9kr7/everything-is-political-now-or-a-review-of-fraggle-rock-back
# Two ways non-U.S. folks can contribute to AI going well For several years now, I’ve been fielding [calls](https://aisafety.quest/#calls) from people who want to help make AI go well for humanity. Sometimes these folks are based outside the U.S., and often they ask me: With most of the labs concentrated in a few plac...
https://www.lesswrong.com/posts/GCunNDnBifhRt9sNu/two-ways-non-u-s-folks-can-contribute-to-ai-going-well
# Advancements In Self-Driving Cars #### Going Full San Francisco Waymo goes Full San Francisco West Bay except for SFO: > Jeff Dean: Exciting expansion! @Waymo now serves the whole SF Bay Area Peninsula from SF to San Jose and is taking riders on freeways. ...
https://www.lesswrong.com/posts/ry3SyA92bJCPe75Nt/advancements-in-self-driving-cars
# Two Aspects of Situational Awareness: World Modelling & Indexical Information I'm writing this post to share some of my thinking about situational awareness, since I'm not sure others are thinking about it this way.   For context, I think situational awareness is a critical part of the case for rogue AI and schemin...
https://www.lesswrong.com/posts/7pzoDhybfai9s7APS/two-aspects-of-situational-awareness-world-modelling-and
# Public intellectuals need to say what they actually believe **Intro** ========= [This Twitter thread](https://x.com/KelseyTuoc/status/1243295699728388096?s=20) from Kelsey Piper has been reverberating around my psyche since its inception, almost six years now. You should read the whole thing for more context, but ...
https://www.lesswrong.com/posts/efcrv3tMogwFCQvGZ/public-intellectuals-need-to-say-what-they-actually-believe
# Taiwan Trip Report Taiwan trip report from 23-30 November 2025. This is more a collection of thoughts and observations about Taiwan than a play-by-play report. * * * **Vibes, or What Paul Graham Would Say** ---------------------------------------- Taiwan is nice. By what definition of nice? you ask. To which I an...
https://www.lesswrong.com/posts/ez2qz2FEoR8HfrWKS/taiwan-trip-report
# Lumina Probiotic worked for me! In September, I applied [Lumina probiotic](https://luminaprobiotic.com/) to my mouth. I did this around noon. I noticed effects within six hours. Usually my mouth would begin developing a... *taste* even just a few hours after I brushed my teeth. But at 6 pm that day, the...
https://www.lesswrong.com/posts/w7Hcg8uWQMSdkNuQj/lumina-probiotic-worked-for-me
# HIA and X-risk part 2: Why it hurts *[Crosspost from my blog](https://tsvibt.blogspot.com/2026/01/hia-and-x-risk-part-2-why-it-hurts.html).* # Context Previously, in "[HIA and X-risk part 1: Why it helps](https://tsvibt.blogspot.com/2025/11/hia-and-x-risk-part-1-why-it-helps.html)", I laid out the reasons I thin...
https://www.lesswrong.com/posts/K4K6ikQtHxcG49Tcn/hia-and-x-risk-part-2-why-it-hurts
# The AI Infrastructure Security Shortlist This is a post by Abbey Chaver from [Coefficient Giving](https://coefficientgiving.org/) (formerly Open Philanthropy). I recently did a relatively shallow investigation on the state of Infosec x AI. My research consisted of identifying the main GCR-relevant workstreams, looki...
https://www.lesswrong.com/posts/xkE4zEzmArxgskZ96/the-ai-infrastructure-security-shortlist
# Rents Are High, But Not Skyrocketing I hear people talking about "skyrocketing" rents, with the idea that rent is going up quickly. This isn't my impression of what's happening, and when I look at the data it's not what I see either. Instead, rents are too high, and they were rising quickly pre-covid, but recently t...
https://www.lesswrong.com/posts/oBif3tiEKqX2wWrLE/rents-are-high-but-not-skyrocketing
# Small Steps Towards Proving Stochastic → Deterministic Natural Latent The story so far ================ We (Alfred and Jeremy) started a Dovetail project on Natural Latents in order to get some experience with the proofs. Originally we were going to take a crack at [this bounty](https://www.lesswrong.com/posts/e9Kw...
https://www.lesswrong.com/posts/4q3kMfJHB4rxr3Z8m/small-steps-towards-proving-stochastic-deterministic-natural
# Saying What You Want There is a hierarchy of useful interfaces for tools that goes something like this: 1. Figure out what you want to do, then how to use the tool to achieve that, then carry out those actions yourself (hammer, machining workshop) 2. Figure out what you want to do, then how to use the tool to ach...
https://www.lesswrong.com/posts/9LkQSGei9GEtj8fZA/saying-what-you-want
# AI #150: While Claude Codes Claude Code is the talk of the town, and of the Twitter. It has reached critical mass. Suddenly, everyone is talking about how it is transforming their workflows. This includes non-coding workflows, as it can handle anything a computer can do. People are realizing the power of what it ca...
https://www.lesswrong.com/posts/fWJsqHXHBAEd8rq69/ai-150-while-claude-codes
# Why LLMs Aren't Scientists Yet. **This is a crosspost from our report website for** [**Why LLMs Aren't Scientists Yet: Lessons from Four Autonomous Research Attempts**](https://arxiv.org/pdf/2601.03315). This report details the work behind our LLM-written paper ["The Consistency Confound: Why Stronger Alignment Can ...
https://www.lesswrong.com/posts/y7TpjDtKFcJSGzunm/why-llms-aren-t-scientists-yet
# Self-Help Tactics That Are Working For Me ...
https://www.lesswrong.com/posts/LQCjTFGxCu5G4bs9f/self-help-tactics-that-are-working-for-me
# The Hunger Strike To Stop The AI Race I just released a 22-minute long [documentary](https://www.youtube.com/watch?v=-qWFq2aF8ZU) about the hunger strike to stop the AI race, a protest that was featured in publications such as [The Verge](https://www.theverge.com/ai-artificial-intelligence/778773/the-hunger-strike-t...
https://www.lesswrong.com/posts/CiHGtrCNf7PBsfzGo/the-hunger-strike-to-stop-the-ai-race
# The Economics of Transformative AI *Anton Korinek is an economist at UVA and the Brookings Institution who focuses on the macroeconomics of AI. This is a lightly edited transcript of a recent lecture where he lays out what economics actually predicts about transformative AI — in our view it's the best introductory r...
https://www.lesswrong.com/posts/epFKhn24trRP2cs3k/the-economics-of-transformative-ai
# I dream every night now When I close my eyes, all I see is darkness.  It’s always been this way.  I thought this was normal. When I was 22, I learned otherwise.  I learned that “imagination” is not merely a figure of speech—people can *actually* see images in their heads. They can picture their dog wagging its ta...
https://www.lesswrong.com/posts/8mESfTBzoTscCufhp/i-dream-every-night-now
# Parameters of Metacognition - The Anesthesia Patient *Epistemic status:* I’m using a single clinical case study as a running example to illustrate three empirical aspects of cognition that are well-documented but rarely used together. The point is not that this case study proves anything, but to build an intuition t...
https://www.lesswrong.com/posts/vtxZtjiR9Rb9HC72N/parameters-of-metacognition-the-anesthesia-patient
# Alignment Faking is a Linear Feature in Anthropic's Hughes Model (Edited 1/11/26) TL;DR ----- Alignment faking in Hughes et al.'s model comes down to a single direction in activation space. **Update after Hoagy's critique:** I originally reported L0 results but that was basically just swapping the input token (97% ...
https://www.lesswrong.com/posts/TazJpnBnvPC5tJoWo/alignment-faking-is-a-linear-feature-in-anthropic-s-hughes
# Another Cost Disease? We are all capitalists now In brief: when wages are pushed up in ‘essential’ sectors, the cost of those sectors goes up as a share of people’s income. This can be difficult. Baumol identified one ‘cost disease’ which can drive this effect. Could increasing prevalence and share of income from in...
https://www.lesswrong.com/posts/9AmNF7gZQawajJdSz/another-cost-disease-we-are-all-capitalists-now
# Claude Codes Claude Code with Opus 4.5 is so hot right now. The cool kids use it for everything. They definitely use it for coding, often letting it write all of their code. They also increasingly use it for everything else one can do with a computer. [Vas suggests using Claude Code as you would a mini-you/employ...
https://www.lesswrong.com/posts/MQGAMHQNTFyJTke2H/claude-codes
# [Linkpost] On the Origins of Algorithmic Progress in AI This is a linkpost to a new Substack article from MIT FutureTech explaining our recent paper [*On the Origins of Algorithmic Progress in AI*](https://arxiv.org/abs/2511.21622).  We demonstrate that some algorithmic innovations have efficiency gains which get l...
https://www.lesswrong.com/posts/X8KGHstcJa4qZznfH/linkpost-on-the-origins-of-algorithmic-progress-in-ai
# Understanding complex conjugates in quantum mechanics Why does quantum mechanics use complex numbers extensively? Why is the inner product of a Hilbert space antilinear in the first argument? Why are Hermitian operators important for representing observables? And what is the *i* in the Schrödinger equation doing? Th...
https://www.lesswrong.com/posts/BvAg6E5XaPppcZHu5/understanding-complex-conjugates-in-quantum-mechanics
# Cancer-Selective, Pan-Essential Targets from DepMap ### Introduction Back in June, I proposed that it would be a good idea to look for [broad-spectrum cancer treatments](https://sarahconstantin.substack.com/p/broad-spectrum-cancer-treatments) — i.e. therapies that work on _many_ types of cancer, rather than being h...
https://www.lesswrong.com/posts/aCeQxnoyQm3JbY2yJ/cancer-selective-pan-essential-targets-from-depmap
# Objective Questions **Epistemic Status:** *I wrote this a few days ago while moved by the trolly spirit where I could say "I'm just asking questions, bro!" and smirk with a glint in my eye... but then I showed a draft to someone. It was a great springboard for that conversation, but then the conversation caused me t...
https://www.lesswrong.com/posts/rc9LvnTRgjpZHTorj/objective-questions
# Taking LLMs Seriously (As Language Models) This is my attempt to write down what I would be researching, if I were working directly with LLMs rather than doing Agent Foundations. (I'm open to collaboration on these ideas.) Machine Learning research can occupy different points on a spectrum between science and engin...
https://www.lesswrong.com/posts/K3aPmF5o37pYDqrFQ/taking-llms-seriously-as-language-models
# Where's the $100k iPhone? I’m not quite sure how unequal the world used to be, but I’m fairly certain the world is more equal (in terms of financial means) than the world was, say, in the 1600s. There are many things that enormous wealth allows you to buy that are out of reach for middle-class American consumers, lik...
https://www.lesswrong.com/posts/5F3Ed3hc4YZo626oo/where-s-the-usd100k-iphone
# What do we mean by "impossible"? (I'm reposting this here from an [old Dreamwidth post](https://sniffnoy.dreamwidth.org/535428.html) of mine, since I've seen people reference it occasionally and figure it would be easier to find here.) So people throw around the word "impossible" a lot, but oftentimes they actually...
https://www.lesswrong.com/posts/P4HLwygYa5hiskQMs/what-do-we-mean-by-impossible
# Finding high signal people - applying PageRank to Twitter *Cross post, adapted for LessWrong* Several challenges add friction to finding high signal people and literature: 1. High status may negatively impact signal. 2. Exploration can only be done at the edges of my network, e.g. Twitter thread interactions or ...
https://www.lesswrong.com/posts/s5PwfyRFrGFaZFevW/finding-high-signal-people-applying-pagerank-to-twitter-1
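The post above applies PageRank to a Twitter interaction graph. As a rough illustration of the underlying algorithm only (not the author's actual pipeline; the graph and handles below are made up), a minimal power-iteration PageRank might look like:

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Power-iteration PageRank over an adjacency dict {node: [nodes it links to]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}  # start uniform
    for _ in range(iterations):
        # Teleportation term: every node gets a (1 - damping)/n floor.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in graph.items():
            if targets:
                # Split this node's rank evenly among the accounts it points to.
                share = damping * rank[node] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:
                # Dangling node: distribute its rank uniformly.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# Hypothetical "who interacts with whom" toy graph.
follows = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": ["alice", "bob"],
    "dave": ["carol"],
}
ranks = pagerank(follows)
```

Here `carol`, who is pointed to by two accounts, ends up with the highest score, which is the "high signal via incoming attention" intuition the post builds on.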
# Moral-Epistemic Scrupulosity: A Cross-Framework Failure Mode of Truth-Seeking *Crossposted from* [*https://substack.com/home/post/p-183478095*](https://substack.com/home/post/p-183478095) *Epistemic status: Personal experience with a particular failure mode of reasoning and introspection that seems to appear wi...
https://www.lesswrong.com/posts/sCPtkhs4FhhEjjFP9/moral-epistemic-scrupulosity-a-cross-framework-failure-mode
# A Proposal for a Better ARENA: Shifting from Teaching to Research Sprints TLDR ==== I propose restructuring the current ARENA program, which primarily focuses on contained exercises, into a more scalable and research-engineering-focused model consisting of four one-week research sprints preceded by a dedicated "Wee...
https://www.lesswrong.com/posts/6zuNmMMtzQg3natAF/a-proposal-for-a-better-arena-shifting-from-teaching-to
# The false confidence theorem and Bayesian reasoning A little background ------------------- I first heard about the False Confidence Theorem (FCT) a number of years ago, although at the time I did not understand why it was meaningful. I later returned to it, and the second time around, with a little more experience...
https://www.lesswrong.com/posts/HjbsjnutKE9xbXBwz/the-false-confidence-theorem-and-bayesian-reasoning
# The Case Against Continuous Chain-of-Thought (Neuralese) **Main thesis:** Discrete token vocabularies don't lose information so much as they allow information to be retained in the first place. By removing minor noise and singling out major noise, errors become *identifiable* and therefore *correctable*, which conti...
https://www.lesswrong.com/posts/ynC26Z2CJXsqj6ZnZ/the-case-against-continuous-chain-of-thought-neuralese
# Possible Principles of Superagency Prior to the era of *superintelligent* actors, we’re likely to see a brief era of *superagentic* actors—actors who are capable of setting and achieving goals in the pursuit of a given end with significantly greater efficiency and reliability than any single human. Superagents may i...
https://www.lesswrong.com/posts/bjqyzJBTY3sMAyAQr/possible-principles-of-superagency
# If AI alignment is only as hard as building the steam engine, then we likely still die *Cross-posted from [my website](https://mdickens.me/2026/01/10/if_alignment_is_as_hard_as_the_steam_engine/).* You may have seen [this graph](https://x.com/ch402/status/1666482929772666880?lang=en) from Chris Olah illustrating a ...
https://www.lesswrong.com/posts/WkEAcTNHHHk97nT4d/if-ai-alignment-is-only-as-hard-as-building-the-steam-engine
# Theoretical predictions on the sample efficiency of training policies and activation monitors I'm worried about AI models intentionally doing bad things, like sandbagging when doing safety research. In the regime where the AI has to do many of these bad actions in order to cause an unacceptable outcome, we have some...
https://www.lesswrong.com/posts/oHAGT7cGMjh9fGwYN/theoretical-predictions-on-the-sample-efficiency-of-training
# Why AIs aren't power-seeking yet Recently, I spent a couple of hours talking with a friend about the state of the evidence for AI takeover scenarios. Their trailhead question was (paraphrased): > Current AIs are getting increasingly general, but they’re not self-promoting or ambitious. They answer questions, but t...
https://www.lesswrong.com/posts/7ZH4oppNnTGtq4xXu/why-ais-aren-t-power-seeking-yet
# Coding Agents As An Interface To The Codebase **Attack Dogs** =============== I mentioned previously that [coding agents kind of suck for lots of people](https://dumbideas.xyz/posts/why-coding-agents-kind-of-suck-for-most-people/). As of January 2026, coding agents lack the long-horizon skills needed to produce eff...
https://www.lesswrong.com/posts/SM2Fr54AvBYQLmi9D/coding-agents-as-an-interface-to-the-codebase
# We need a better way to evaluate emergent misalignment TLDR ==== Qwen3-4B fine-tuned on several real-life, benign SFT datasets shows emergent misalignment (EM) under the evaluation method used by prior EM work, including the original paper. However, after manual examination, we find that the existing evaluation meth...
https://www.lesswrong.com/posts/XC28DmEYPLqfwc8tf/we-need-a-better-way-to-evaluate-emergent-misalignment
# Stretch Hatchback Our family has [half](https://www.jefftk.com/p/shared-car-one-year-in) a Honda Fit, and it's great! Reliable, pretty good mileage, holds our family of five plus a vacation's worth of luggage, seats fold flat for when I'm bringing sound equipment to dances. It would be nice, though, to be able to se...
https://www.lesswrong.com/posts/KE7ZFF7nn5dBimjJp/stretch-hatchback
# A Couple Useful LessWrong Userstyles As a weirdo, I like to read LessWrong sometimes. There are a few extremely tiny features that I wish the site had that it doesn't. Luckily enough, I know how webpages work, and certain kinds of tweaks are especially easy. I'm attaching two of these here now, and may return to add...
https://www.lesswrong.com/posts/mQpMqZxJcF2DspyRD/a-couple-useful-lesswrong-userstyles
# Strong, bipartisan leadership for resistance to Trump. *This was written for FB and twitter where my filter bubble is strongly Democrat / Blue Tribe. I'd ideally update some of my phrasing for the somewhat more politically diverse LW, though I'm hoping my actual talking points still land pretty reasonably.* *...* ...
https://www.lesswrong.com/posts/qgtvRcGHswbRtKfKQ/strong-bipartisan-leadership-for-resistance-to-trump
# De pluribus non est disputandum "I have a lot of questions", said Carol. "I need to know how this works." "Of course", said Zosia. "Ask us anything." Carol hesitated, gathering her thoughts. She knew that Zosia couldn't lie to her, but she also knew that she was speaking with a highly convincing superintelligence ...
https://www.lesswrong.com/posts/9GhAvoBgwrRJQosjM/de-pluribus-non-est-disputandum
# Digital intentionality is not about productivity My friend Justis wrote a post this week on what his non-rationalist (“normal”) friends are like. He said: > Digital minimalism is well and good, and being intentional about devices is fine, but most normal people I know are perfectly fine with their level of YouTube,...
https://www.lesswrong.com/posts/HhoegdC8vxhGKsXEN/digital-intentionality-is-not-about-productivity
# What potent consumer technologies have long remained inaccessible? *[Crosspost from my blog](https://tsvibt.blogspot.com/2026/01/what-potent-consumer-technologies-have.html).* # Context Inequality is a common and legitimate worry that people have about reprogenetic technology. Will rich people have super healthy ...
https://www.lesswrong.com/posts/uPSMqjzGfigeXc5cQ/what-potent-consumer-technologies-have-long-remained
# Announcing Inkhaven 2: April 2026 I have come to spread the good word: we're doing Inkhaven again, this April 1 – 30. You can apply [**on the website.**](https://www.inkhaven.blog?ref=lw-post) ![Inkhaven photo](https://res.cloudinary.com/lesswrong-2-0/image/upload/q_auto,f_auto,w_800/v1767655643/image_68_laoa8d.png...
https://www.lesswrong.com/posts/nwWfsPiaFSiEtHbkJ/announcing-inkhaven-2-april-2026
# Closing the loop Sometimes you begin a conversation, or announce a project, or otherwise start something. What goes up must come down and just so most things that get started should get finished. It's like starting a parenthesis, every open ( should be paired with a matching ). Some such things are beg...
https://www.lesswrong.com/posts/uBzDetYKD5LsXdCPn/closing-the-loop
# --dangerously-skip-permissions I noticed that some AI-safety-focused people are [very active users of coding agents](https://www.lesswrong.com/posts/MQGAMHQNTFyJTke2H/claude-codes), often [letting them run completely unrestricted.](https://x.com/TheZvi/status/2009312039983059175) I believe this is a bad standard to ...
https://www.lesswrong.com/posts/WSog3tgxEZgBFpHrR/dangerously-skip-permissions
# Split Personality Training: Revealing Latent Knowledge Through Alternate Personalities (Research Report) T...
https://www.lesswrong.com/posts/og7km7vmJ6Ktay9Ds/split-personality-training-revealing-latent-knowledge-1
# Thinking vs Unfolding Jake vs Boss ============ My friend Jake has a difficult boss. Well, kind-of-boss. They're technically co-founders, but the equity split, titles (CEO vs COO), and age/seniority difference put Jake in the junior position. It's been three years of grinding together on the startup, and this year-...
https://www.lesswrong.com/posts/N7QtcqrN5hQwkjpGg/thinking-vs-unfolding
# Practical challenges of control monitoring in frontier AI deployments **TL;DR**: We wrote a safety case sketch for control monitoring taking into account complexities of practical deployments. *This work was a collaboration between Google DeepMind and the UK AI Security Institute. Full author list: David Lindner*, ...
https://www.lesswrong.com/posts/oXSAYrogo8cfBeFhP/practical-challenges-of-control-monitoring-in-frontier-ai
# Brief Explorations in LLM Value Rankings Code and data can be found [here](https://github.com/tim-hua-01/values_2_misalignment#) Executive Summary ================= * We use data from [Zhang et al. (2025)](https://arxiv.org/abs/2510.07686) to measure LLM values. We find that our value metric can sometimes predic...
https://www.lesswrong.com/posts/k6HKzwqCY4wKncRkM/brief-explorations-in-llm-value-rankings
# What Happens When Superhuman AIs Compete for Control? In *AI 2027*, one company called OpenBrain dominates the AI race in the US. Looking around at the current state of affairs at the start of 2026, however, there seem to be a few AGI companies jockeying for the lead — and it stands to reason that this will continue...
https://www.lesswrong.com/posts/ykNmyZexHESFoTnYq/what-happens-when-superhuman-ais-compete-for-control
# Model Reduction as Interpretability: What Neuroscience Could Teach Us About Understanding Complex Systems **TL;DR**: Neuroscientists face the same interpretability problem as AI safety researchers: complex, inscrutable systems with thousands of parameters that transform inputs to outputs. I worked on a systematic me...
https://www.lesswrong.com/posts/9EZZDfo8ijBgDFy7A/model-reduction-as-interpretability-what-neuroscience-could-1
# Understanding Agency through Markov Blankets *This post was written as part of research done at MATS 9.0 under the mentorship of Richard Ngo.* Summary ------- This post illustrates with examples how the qualitative concepts behind active inference and its usage of Markov blankets can be used to clarify agentic beh...
https://www.lesswrong.com/posts/K4H48fTzLBJj5Fox6/understanding-agency-through-markov-blankets
# BlackBoxQuery [BBQ]-Bench: Measuring Hypothesis Formation and Experimentation Capabilities in LLMs The following is a revised version of the winning paper that my team (Daniel Wu, David Zhang, Justin Zhang) produced as part of the [Impact Research Initiative](https://www.iri-harvardmit.org/) Fall 2025 cohort. We wer...
https://www.lesswrong.com/posts/fFuwW2nE5rZNSFQmY/blackboxquery-bbq-bench-measuring-hypothesis-formation-and-1
# The Algorithm Rewards Engagement \[mirror of my blog post at [https://livingwithinreason.com/p/the-algorithm-rewards-engagement](https://livingwithinreason.com/p/the-algorithm-rewards-engagement)\] If you’re on Twitter, you know that one of the favorite pastimes on Twitter is complaining about the “for you” feed, w...
https://www.lesswrong.com/posts/WLPF4Km4dpp5QsALR/the-algorithm-rewards-engagement
# Tensor-Transformer Variants are Surprisingly Performant I've been researching tensor networks as a more interpretable architecture, but whenever I tell people this, they always ask "But is it any good?" So I trained multiple 500M parameter LLMs on fineweb, showing the tensor variant needed ~4% more batches of data ...
https://www.lesswrong.com/posts/hp9bvkiN3RzHgP9cq/tensor-transformer-variants-are-surprisingly-performant
# Dating Roundup #10: Gendered Expectations The game is asymmetrical. Life is not fair. Doesn’t matter. Play to win the game. #### You’re Single Because Your Emotions Gave Her The Ick Ah, the ultimate ick source. A man expressing his emotions is kind of the inverse of the speech in Barbie about how it’s impossible...
https://www.lesswrong.com/posts/zphaHyABxMEDqNQ7K/dating-roundup-10-gendered-expectations
# Pro or Average Joe? Do models infer our technical ability and can we control this judgement? Executive Summary ================= *A prompt response after being perceived as a novice earlier in th...
https://www.lesswrong.com/posts/oBqDvDQLLvqMrAnmt/pro-or-average-joe-do-models-infer-our-technical-ability-and
# Lies, Damned Lies, and Proofs: Formal Methods are not Slopless *We appreciate comments from Christopher Henson, Zeke Medley, Ankit Kumar, and Pete Manolios. This post was initialized by* [*Max’s twitter thread*](https://x.com/maxvonhippel/status/2006042845384233245?s=20). ...
https://www.lesswrong.com/posts/rhAPh3YzhPoBNpgHg/lies-damned-lies-and-proofs-formal-methods-are-not-slopless
# When does competition lead to recognisable values? *Transcript of Beren Millidge's Keynote at The Post-AGI Workshop, San Diego, December 2025* You know how human values might survive in a very multifarious AI world where there's lots of AIs competing? This is the kind of MOLOCH world that Scott Alexander talk...
https://www.lesswrong.com/posts/LwSRbkecuqLJHdnJ7/when-does-competition-lead-to-recognisable-values
# Attempting to influence transformer representations via initialization TL;DR ===== * One major obstacle to interpretability is that complicated neural nets don't tell you where or how they're representing important concepts, and methods to find these representations are imperfect. * This problem is less present...
https://www.lesswrong.com/posts/4nTDGhCT7nxrtLXdf/attempting-to-influence-transformer-representations-via
# Contra Dance as a Model For Post-AI Culture I play for contra dances, and a core part of our culture is that we always have [live music](https://www.jefftk.com/p/what-is-live). It's not that live music is categorically better: if you ran a test where you put soundproof one-way glass in front of the musicians and sec...
https://www.lesswrong.com/posts/LuosdA2EAdJYEe3vZ/contra-dance-as-a-model-for-post-ai-culture
# A tale of two doormen: a bizarre AI incident on Christmas Hue went down on Christmas day. System-wide errors everywhere. We did what one does when one’s product collapses on a holiday, checking all logs with one eye closed, bracing for emotional damage. Some quick digging revealed the culprit: depleted API credits. ...
https://www.lesswrong.com/posts/Ltym4Dbj4vDjKXFcC/a-tale-of-two-doormen-a-bizarre-ai-incident-on-christmas
# Schelling Coordination in LLMs: A Review Introduction ============ This blogpost summarises the findings of my (lite) systematic literature review of Schelling coordination in LLMs that I undertook as part of the [Apart Research Fellowship](https://apartresearch.com/fellowships/apart-fellowship). If LLMs can ident...
https://www.lesswrong.com/posts/tJKNXCxx7ZKD5mtG9/schelling-coordination-in-llms-a-review
# Claude Coworks Claude Code does a lot more than code, but the name and command line scare people. Anthropic realized a rebrand was in order. [Two weeks later, we have Claude Cowork](https://x.com/blakeir/status/2010837251505205656), [written entirely by Claude Code](https://x.com/_simonsmith/status/2010820240330956...
https://www.lesswrong.com/posts/fm2N4cws8nbdmfGux/claude-coworks
# Playing Dumb: Detecting Sandbagging in Frontier LLMs via Consistency Checks **TL;DR** Large language models are becoming increasingly aware of when they are being evaluated. This poses new challenges for model evaluation because models that are aware of their evaluation are more likely to exhibit different behaviors...
https://www.lesswrong.com/posts/g3doG7J7JHKnghmja/playing-dumb-detecting-sandbagging-in-frontier-llms-via
# Global CoT Analysis: Initial attempts to uncover patterns across many chains of thought *Authors: Riya Tyagi, Daria Ivanova, Arthur Conmy, Neel Nanda* *Riya and Daria are co-first authors. This work was largely done during a research sprint for Neel Nanda’s MATS 9.0 training phase.* **🖥️** [**Deployment code**](h...
https://www.lesswrong.com/posts/q9g9zuudd3Pvw2cbj/global-cot-analysis-initial-attempts-to-uncover-patterns-1
# We need to make ourselves people the models can come to with problems Suppose the models to be sophisticated consequentialist reasoners.[^1^](https://lydianottingham.substack.com/p/we-need-to-make-ourselves-people#footnote-1-184473690)  Sometimes, it’s hard for consequentialist reasoners to coordinate with outside ...
https://www.lesswrong.com/posts/ynwWBg7JekJJCskxZ/we-need-to-make-ourselves-people-the-models-can-come-to-with
# How Much of AI Labs' Research Is Safety? *\[This is a cross-post from* [*here*](https://fi-le.net/safety-blogs/)*. Find the code used to do the analysis* [*here*](https://github.com/lennart-finke/safety-blogs)*.\]* *Epistemic Status: Accurate measurement of a variable with dubious connection to the latent variab...
https://www.lesswrong.com/posts/EfCdQeNBaeYtYH374/how-much-of-ai-labs-research-is-safety
# The Eternal Labyrinth *Content Warning: Existential Horror* Sarah opened the creaky wooden door and stepped into the foyer. The old house seemed different to Sarah, for while the faint echo of childhood memories still hung in the air, the house was bereft of the color and life that ha...
https://www.lesswrong.com/posts/KZwHo4MHDvfTJpwMd/the-eternal-labyrinth
# [Closed] Apply to Vanessa's mentorship at PIBBSS \[**EDIT:** Applications are now closed.\] The applications for the [PIBBSS summer fellowship](https://princint.ai/programs/fellowship/) 2026 are now ~~open~~, and I will be one of the mentors. If you want to work with me on the [Learning-Theoretic AI Alignment Agend...
https://www.lesswrong.com/posts/NGG9NZFpwiRBFyS4b/closed-apply-to-vanessa-s-mentorship-at-pibbss
# Parameters Are Like Pixels **More parameters = better model.** So went the common misconception. After GPT-4.5, Llama 4, Nemotron-4, and many other "big models", I think most of you reading are already aware that the relationship between parameters and performance is not linear. I think very few people actually hav...
https://www.lesswrong.com/posts/9G4ss5ddGT7gjNPHf/parameters-are-like-pixels
# Backyard cat fight shows Schelling points preexist language Two cats fighting for control over my backyard appear to have settled on a particular chain-link fence as the delineation between their territories. This suggests that: 1. Animals are capable of recognizing Schelling points 2. Therefore, Schelling points d...
https://www.lesswrong.com/posts/uYr8pba7TqaPpszX5/backyard-cat-fight-shows-schelling-points-preexist-language
# AI Safety at the Frontier: Paper Highlights of December 2025 **tl;dr** ========= **Paper of the month:** Auditing game shows that sandbagging detection remains difficult—only on-distribution finetuning can reliably remove sandbagging, while detection suffers from false positives. **Research highlights:** * Asy...
https://www.lesswrong.com/posts/Z6Zz2vQHM5ReWqiZ5/ai-safety-at-the-frontier-paper-highlights-of-december-2025
# GD Roundup #4 - inference, monopolies, and AI Jesus Probably the biggest recent news was the Phil Trammell and Dwarkesh Patel paper on [Capital in the 22nd Century](https://philiptrammell.substack.com/p/capital-in-the-22nd-century), which provoked many many reactions. I am going to conspicuously not dig into it beca...
https://www.lesswrong.com/posts/nuBpHMynQhxrJjdkv/gd-roundup-4-inference-monopolies-and-ai-jesus
# The Many Ways of Knowing _NB: This is an excerpt from my forthcoming book,_ [Fundamental Uncertainty](https://www.fundamentaluncertainty.com/)_. I’m posting it now because I’m writing a post for next week where I’d like to reference it._ What does it mean to say “I know”? This might seem like a strange question to...
https://www.lesswrong.com/posts/FPt5mfwjgz5puCxFz/the-many-ways-of-knowing
# When Will They Take Our Jobs? And once they take our jobs, will we be able to find new ones? Will AI take those too? Seb Krier recently wrote an unusually good take on that, which will center this post. I believe that Seb is being too optimistic on several fronts, but in a considered and highly reasonable way. The...
https://www.lesswrong.com/posts/KAjhtrJggPtaophy7/when-will-they-take-our-jobs
# Why Motivated Reasoning? There’s a standard story which says roughly "motivated reasoning in humans exists because it is/was adaptive for negotiating with other humans". I do not think that story stands up well under examination; when I think of standard day-to-day examples of motivated reasoning, that pattern sound...
https://www.lesswrong.com/posts/GnatTWjdfCNn6hrFM/why-motivated-reasoning
# Why we are excited about confession! *Boaz Barak, Gabriel Wu, Jeremy Chen, Manas Joglekar* *\[Linkposting from the* [*OpenAI alignment blog*](https://alignment.openai.com/)*,  where we post more speculative/technical/informal results and thoughts on safety and alignment.\]*   > **TL;DR** We go into more deta...
https://www.lesswrong.com/posts/k4FjAzJwvYjFbCTKn/why-we-are-excited-about-confession
# Quantifying Love and Hatred Imagine a friend gets kidnapped by mobsters who are surprisingly obsessed with human psychology. They phone you with a deal: your friend will survive if and only if you show up and play a game. The game is simple: a random number between 0 and 100 is generated, and if it falls below 10, y...
https://www.lesswrong.com/posts/m5ps7cB2G9FGHXmpE/quantifying-love-and-hatred
# Status In A Tribe Of One I saw a tweet thread the other day, in which a self-proclaimed autistic guy was freaking out about how much "normies" care about "status". I won't quote the thing because I'm going to mildly insult the guy: the whole time I read it I was thinking *for a guy who hates status-talk, you sure are fu...
https://www.lesswrong.com/posts/chPrEyLnEfbLDicoB/status-in-a-tribe-of-one
# Boltzmann Tulpas (A work of anthropic theory-fiction). Motivating question: Why do you find yourself to be a human, living right before a technological singularity? Why not a raccoon, or a medieval peasant, or some far-future digital mind? The Dreams of Greater Minds --------------------------- Somewhere, in some...
https://www.lesswrong.com/posts/gSdhh33y9kYWQnCzD/boltzmann-tulpas
# Deeper Reviews for the top 15 (of the 2024 Review) We're extending the Discussion Phase of the 2024 Annual Review.  One thing I'm particularly hoping for is to get more in-depth reviews (especially critical ones) of the posts that currently look likely to be in the top-10 or so. (Ideally the entire top 50, but seem...
https://www.lesswrong.com/posts/PsQJxHDjHKFcFrPLD/deeper-reviews-for-the-top-15-of-the-2024-review
# Corrigibility Scales To Value Alignment Epistemic status: speculation with a mix of medium confidence and low confidence conclusions. I argue that corrigibility is all we need in order to make an AI permanently aligned to a principal. This post will not address how hard it may be to ensure that an AI is corrigible...
https://www.lesswrong.com/posts/fe5zvFyLNtcBuuYc9/corrigibility-scales-to-value-alignment
# AI #151: While Claude Coworks Claude Code and Cowork are growing so much that it is overwhelming Anthropic’s servers. Claude Code and Cowork news has for weeks now been a large portion of newsworthy items about AI. Thus, at least for now, all things Claude Code and Cowork will stop appearing in the weekly updates, ...
https://www.lesswrong.com/posts/L27yM3qBqDnigtxLM/ai-151-while-claude-coworks
# I Made a Judgment Calibration Game for Beginners (Calibrate) ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/92153af10e84469ab5e2f14b625b7a250bcca26428e46378.png) I made a game that teaches beginner [calibration](https://www.lesswrong.com/w/calibration). Have you ever wanted your brother/girlfriend/aun...
https://www.lesswrong.com/posts/QfLxNDfDEAu5uf3qZ/i-made-a-judgment-calibration-game-for-beginners-calibrate
# Reflections on TA-ing Harvard’s first AI safety course This fall Boaz Barak taught Harvard’s first AI safety course ([course website](https://boazbk.github.io/mltheoryseminar/)). Boaz has done an excellent job organizing and promoting the material; you may have seen his original [post on LW](https://www.lesswrong.co...
https://www.lesswrong.com/posts/gcFB2RT5vpKHbH4ic/reflections-on-ta-ing-harvard-s-first-ai-safety-course
# Test your interpretability techniques by de-censoring Chinese models *This work was conducted during the MATS 9.0 program under Neel Nanda and Senthooran Rajamanoharan.* The CCP accidentally made great model organisms *“Please observe the relevant laws and regulations and ask questions in a civilized manner when y...
https://www.lesswrong.com/posts/7gp76q4rWLFi6sFqm/test-your-interpretability-techniques-by-de-censoring-1
# The Default Contra Dance Weekend Deal The "dance weekend" is a very common pattern for contra dance communities around the country. I think of the central example as something like: * Two bands, two callers. * Dancing (with short breaks) Friday 7pm-11pm, Saturday 10am-11pm, Sunday 10am-3pm. * Saturday and Sun...
https://www.lesswrong.com/posts/kGZNvwWMnoJPBXkCD/the-default-contra-dance-weekend-deal
# Should control down-weight negative net-sabotage-value threats? *These are my personal views. Thank you to Ryan Greenblatt, Holden Karnofsky, and Peter Wildeford for useful discussions. The bad takes are my own.* When deciding how much to spend on mitigating a vulnerability that a competent scheming AI might exploi...
https://www.lesswrong.com/posts/stL8LMjFGYj7kQvQQ/should-control-down-weight-negative-net-sabotage-value
# Powerful misaligned AIs may be extremely persuasive, especially absent mitigations +++ The concise one minute post for frequent readers of this forum Here are some important, concise intellectual nuggets of progress I made for myself through writing this post (the post also has things I thought were obvious): *...
https://www.lesswrong.com/posts/FZxJ7EBhfhZLdffXT/powerful-misaligned-ais-may-be-extremely-persuasive
# Scaling Laws for Economic Impacts: Experimental Evidence from 500 Professionals and 13 LLMs Scaling laws tell us that the cross-entropy loss of a model improves predictably with more compute. However, the way this relates to real-world economic outcomes that people directly care about is non-obvious. Scaling Laws fo...
https://www.lesswrong.com/posts/kkm7GsDtqsywaWyM7/scaling-laws-for-economic-impacts-experimental-evidence-from
# Monthly Roundup #38: January 2026 Good news, we managed to make some cuts. I think? #### Table of Contents 1. [California In Crisis.](https://thezvi.substack.com/i/184716494/california-in-crisis) 2. [Bad News.](https://thezvi.substack.com/i/184716494/bad-news) 3. [Opportunity Knocks.](https://thezvi.substack.co...
https://www.lesswrong.com/posts/wh5KpofdQHs2D2hEj/monthly-roundup-38-january-2026
# Confession: I pranked Inkhaven to make sure no one fails *(Content warnings: dubious math, quantum immortality, nuclear war)* Normal people make New Year’s resolutions. People on the internet love to make resolutions for November. So, for the entire month of November, 41 people, myself included, set out to publis...
https://www.lesswrong.com/posts/p3nuC38zEJbEvFEBE/confession-i-pranked-inkhaven-to-make-sure-no-one-fails