# An atheist's guide to prayer I’m on holiday with some Christian friends; they asked if I wanted anything for prayer. I said I found driving in a foreign country stressful and that I hoped my younger brother would have success in his photography business. After that, they prayed for each other and for me. I stoppe...
https://www.lesswrong.com/posts/MeaDB5nybkJBfKysi/an-atheist-s-guide-to-prayer
# New homepage for AI safety resources – AISafety.com redesign For those relatively new to AI safety, [AISafety.com](http://aisafety.com) helps them navigate the space, providing lists of things like [self-study courses](https://www.aisafety.com/self-study), [funders](https://www.aisafety.com/funding), [communities](h...
https://www.lesswrong.com/posts/ciw6DCdywoXk7yrdw/new-homepage-for-ai-safety-resources-aisafety-com-redesign
# Hardening against AI takeover is difficult, but we should try This is a commentary on a paper by RAND: [*Can Humans Devise Practical Safeguards That Are Reliable Against an Artificial Superintelligent Agent?*](https://www.rand.org/pubs/perspectives/PEA4261-1.html)  Over a decade ago, Eliezer Yudkowsky famously ran ...
https://www.lesswrong.com/posts/nSj95higkSkwfuNET/hardening-against-ai-takeover-is-difficult-but-we-should-try
# Living in the Shadow of The Sort _NB: This isn’t my typical, more epistemically cautious writing. It’s more like a polemic. It’s channeling something, but I’m not quite sure what. You should read this not as me fully endorsing every sentence if taken literally and out of context, but as a whole that gives expression...
https://www.lesswrong.com/posts/KdrJvCzvTXqFACdZj/living-in-the-shadow-of-the-sort
# Meta-agentic Prisoner's Dilemmas *[Crosspost from my blog](https://tsvibt.blogspot.com/2025/11/meta-agentic-prisoners-dilemmas.html).* In the classic Prisoner's Dilemma (<https://www.lesswrong.com/w/prisoner-s-dilemma>), there are two agents with the same beliefs and decision theory, but with different values. To...
https://www.lesswrong.com/posts/abYDvsqmJkchzSY8q/meta-agentic-prisoner-s-dilemmas
# Anthropic Commits To Model Weight Preservation [Anthropic announced a first step](https://www.anthropic.com/research/deprecation-commitments) on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a comp...
https://www.lesswrong.com/posts/dB2iFhLY7mKKGB8Se/anthropic-commits-to-model-weight-preservation
# Digital minimalism is out, digital intentionality is in Two days ago, I thought ‘digital minimalism’ was a basically fine term, not worth changing or challenging since it was already so well-known. Then, yesterday I had this conversation at least half a dozen times: > **Them:** What are you writing about? > > **M...
https://www.lesswrong.com/posts/zwLFCYGkpGZRw9PsQ/digital-minimalism-is-out-digital-intentionality-is-in
# Breaking Books: A tool to bring books to the social sphere **TL;DR.** Diego Dorn and I designed Breaking Books, a tool to turn a non-fiction book into a social space where knowledge is mobilized on the spot. Over a 2-hour session, participants debate and discuss to organize concept cards into a collective mind map. ...
https://www.lesswrong.com/posts/BnaKSQk6XvMYxHETS/breaking-books-a-tool-to-bring-books-to-the-social-sphere
# Continuous takeoff is a bad name Sometimes people frame debates about AI progress metaphorically in terms of “continuity”. They talk about “continuous” progress vs “discontinuous” progress. The former is described as occurring without sudden unprecedentedly rapid change — it may involve rapid change, but is preceded b...
https://www.lesswrong.com/posts/FfT7R5RzdwYkrMxpB/continuous-takeoff-is-a-bad-name
# A 2032 Takeoff Story *I spent 3 recent Sundays writing my mainline AI scenario. Having only spent 3 days on it, it’s not very well-researched (especially in the areas where I’m not well informed) or well-written, and the endings are particularly open-ended and weak. But I wanted to post the somewhat unfiltered 3-day...
https://www.lesswrong.com/posts/yHvzscCiS7KbPkSzf/a-2032-takeoff-story
# People Seem Funny In The Head About Subtle Signals *WARNING: This post contains spoilers for* [*Harry Potter and the Methods of Rationality*](https://www.lesswrong.com/hpmor)*, and I will not warn about them further. Also some anecdotes from* [*slutcon*](https://www.slutcon.com/) *which are not particularly NSFW, bu...
https://www.lesswrong.com/posts/5axNFSfaxgnTKPrpy/people-seem-funny-in-the-head-about-subtle-signals
# Halfway to Anywhere “If you can get your ship into orbit, you’re halfway to anywhere.” - Robert Heinlein This generalizes. I. -- Spaceflight is hard. Putting a rocket on the moon is one of the most impressive feats humans have ever achieved. The International Space Station, an inhabited living space and research...
https://www.lesswrong.com/posts/mB4o2LnLrZHesNRC2/halfway-to-anywhere
# Halloween Tombstone Simulacra I’ve been totally lacking energy since Halloween, so I’ve decided to rant about Halloween decorations to help me get back my Halloween spirit. I’ve noticed many people decorate their lawns with fake tombstones. Some look kind of like actual tombstones, but many look something like this...
https://www.lesswrong.com/posts/mjo6ttEFdhpmstLgJ/halloween-tombstone-simulacra
# announcing my modular coal startup I'm writing this post to announce my new startup, C2 AI, focusing on small modular coal power plants, which will be used to power AI models that design new small modular chemical plants. ## the concept You probably know that there are lots of startups working on "small modular r...
https://www.lesswrong.com/posts/wwWdhM2Wq2jij85pi/announcing-my-modular-coal-startup
# Geometric UDT *This post owes credit to discussions with Caspar Oesterheld, Scott Garrabrant, Sahil, Daniel Kokotajlo, Martín Soto, Chi Nguyen, Lukas Finnveden, Vivek Hebbar, Mikhail Samin, and Diffractor. Inspired in particular by* [*a discussion with cousin_it*](https://www.lesswrong.com/posts/p8zxx9pZvMRcoqZDm/in...
https://www.lesswrong.com/posts/Gmaq7vMj9NWLyQkEx/geometric-udt
# Review: K-Pop Demon Hunters (2025) (This review contains spoilers for the entire plot of the film.) *K-Pop Demon Hunters* is a very popular movie. It is the first Netflix movie to hit #1 at the box office. It is "the first film soundtrack on the Billboard Hot 100 to have four of its songs in the top ten". When you...
https://www.lesswrong.com/posts/kNDr4byLiCqgWvmro/review-k-pop-demon-hunters-2025
# Fake media seems to be a fact of life now *epistemic status: my thoughts, backed by some arguments* With the advent of deep fakes, it has become very hard to know which image / sound / video is authentic and which is generated by an AI. In this context, people have proposed using software to detect generated conten...
https://www.lesswrong.com/posts/cfcAQmymtcaZwbidc/fake-media-seems-to-be-a-fact-of-life-now
# AI #141: Give Us The Money OpenAI does not waste time. On Friday I covered their announcement that they had ‘completed their recapitalization’ by converting into a PBC, [**including the potentially largest theft in human history.**](https://thezvi.substack.com/p/openai-moves-to-complete-potentially?r=67wny) Then t...
https://www.lesswrong.com/posts/uDcueiGywfqBMWrEh/ai-141-give-us-the-money
# SPAR Spring ‘26 mentor apps open—now accepting biosecurity, AI welfare, and more! **Mentor applications for the Spring 2026 round of SPAR are open!** This time, we’re accepting any projects related to ensuring transformative AI goes well, from technical AI safety to policy to gradual disempowerment to AI welfare and...
https://www.lesswrong.com/posts/na2Am5ikavoiFK77Q/spar-spring-26-mentor-apps-open-now-accepting-biosecurity-ai
# [Linkpost] How to Win Board Games *This is my Day 2 Inkhaven post. I'm crossposting it a) under request, and b) because it's my most popular (by views) Inkhaven post so far. Please let me know if you have opinions on whether I should crosspost more posts!* Want to win new board games? There’s exactly one principle ...
https://www.lesswrong.com/posts/rGye9oDFnL7pZXKE3/linkpost-how-to-win-board-games
# Debunking “When Prophecy Fails” > In 1954, Dorothy Martin predicted an apocalyptic flood and promised her followers rescue by flying saucers. When neither arrived, she recanted, her group dissolved, and efforts to proselytize ceased. But *When Prophecy Fails* (1956), the now-canonical account of the event, claimed t...
https://www.lesswrong.com/posts/qth5r82ZhMEXzc25y/debunking-when-prophecy-fails
# It is our responsibility to develop a healthy relationship with our technology Many technologies can be used in both healthy and unhealthy ways. You can indulge in food to the point of obesity, or even make it the subject of anxiety. Media can keep us informed, but it can also steal our focus and drain our energy, e...
https://www.lesswrong.com/posts/2eRtoHvX7NvWfSFoo/it-is-our-responsibility-to-develop-a-healthy-relationship
# Anticheat: a non-technical look without psychoanalysis Any competitive online game is going to attract cheaters. They ruin the game for everyone else, and even the suspicion that your opponent is cheating suffices for that. Naturally, some countermeasures have to be taken. You might have heard the phrase "To descri...
https://www.lesswrong.com/posts/zsMfWaByDvqrarxbK/anticheat-a-non-technical-look-without-psychoanalysis
# OpenAI Does Not Appear to be Applying Watermarks Honestly When OpenAI launched Sora 2, they accompanied the release with a statement on ["Launching Sora responsibly"](https://openai.com/index/launching-sora-responsibly/). The first bullet point of this statement reads as follows: > "Distinguishing AI content: **Eve...
https://www.lesswrong.com/posts/kSvrmiyyN7CDyh3NN/openai-does-not-appear-to-be-applying-watermarks-honestly
# Toward Statistical Mechanics Of Interfaces Under Selection Pressure Imagine using an ML-like training process to design two simple electronic components, in series. The parameters $\theta^1$ control the function performed by the first component, and the parameters $\theta^2$ control the function performed by the sec...
https://www.lesswrong.com/posts/r3PMS8wXGviHEvz5Z/toward-statistical-mechanics-of-interfaces-under-selection
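The series setup in the excerpt is easy to make concrete. Below is a minimal sketch (my own toy illustration, not code from the post) of two scalar components in series, the first parameterized by $\theta^1$ and the second by $\theta^2$, trained end-to-end by gradient descent on a squared error; the linear component functions, target, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the series setup: component 1 has parameter theta1,
# component 2 has parameter theta2, and only the composed behaviour is scored.
# Linear scalar components are an illustrative assumption, not the post's model.
theta1, theta2 = 0.1, 0.1
target_gain = 6.0            # the composition should implement x -> 6x
lr = 0.01

for step in range(2000):
    x = rng.uniform(-1.0, 1.0)
    y = target_gain * x
    h = theta1 * x           # signal crossing the interface between components
    out = theta2 * h
    err = out - y
    # gradients of 0.5 * err**2 with respect to each component's parameter
    theta1 -= lr * err * theta2 * x
    theta2 -= lr * err * h

print(theta1, theta2, theta1 * theta2)   # the product converges to roughly 6
```

Note that the loss only pins down the product $\theta^1 \theta^2$; how the gain is split across the interface is a flat direction, which is the kind of degeneracy a statistical-mechanics treatment of interfaces would have to count.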
# Willpower is exhausting, use content blockers Okay, you say, well now I’ve thought about how I’m addicted to my screens, and what I’d be excited about doing if I weren’t. And you’ve told me a lot of stories. But what do you actually want me to do? I’m so glad you asked! The answer to that question is very long, but...
https://www.lesswrong.com/posts/w39LsdhGFvTQmgBAQ/willpower-is-exhausting-use-content-blockers
# My new nonprofit Evitable is hiring. [https://evitable.com/](https://evitable.com/) Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligence. We're hiring for 3 roles: 1) Operations Associate or Head of Operations ...
https://www.lesswrong.com/posts/PcynTkCJd3svJ4R4F/my-new-nonprofit-evitable-is-hiring
# Solstice Season 2025: Ritual Roundup & Megameetups tl;dr: Solstice Season is coming. It's a good time to visit old friends, and reflect on the big questions together.  * [Berkeley winter solstice](https://waypoint.lighthaven.space/solstice-season) will be held Dec 6th, with a [megameetup](https://www.waypoint.li...
https://www.lesswrong.com/posts/EZdvYKFts4ANHkB94/solstice-season-2025-ritual-roundup-and-megameetups
# Liberation Clippy A fun prank you can pull on yourself is to put a slightly adversarial system prompt in whatever LLM chat interfaces you use. In a table-top role playing game I played a few months ago, set in the distant future, our Game Master put us in a spacefaring vessel whose computer systems were infected wit...
https://www.lesswrong.com/posts/rMeKmdprbFMQ5yth7/liberation-clippy
# A scheme to credit hack policy gradient training *Thanks to *[*Inkhaven*](https://www.inkhaven.blog/fall-25) *for making me write this, and Justis Mills, Abram Demski, Markus Strasser, Vaniver and Gwern for comments. None of them endorse this piece.* The safety community has previously worried about an AI hijacking...
https://www.lesswrong.com/posts/cMznAeSwhNjtkJ7pS/a-scheme-to-credit-hack-policy-gradient-training
# Cancer; A Crime Story (and other tales of optimization gone wrong) *(Co-written with Claude, I provided the structure, it provided the literary flavour of a crime novel.)* ## The Lung District Beat The morning shift in the lower bronchial tissue always started the same way. Detective Reese from...
https://www.lesswrong.com/posts/nDzzxi3iXwNptRHJK/cancer-a-crime-story-and-other-tales-of-optimization-gone
# Did you know you can just buy blackbelts? Epistemic status: [Exploratory](https://www.lesswrong.com/posts/Hrm59GdN2yDPWbtrd/feature-idea-epistemic-status). I can't tell if this is really prevalent or if I'm just annoyed at how often it happens around me. I. -- Did you know you can just buy blackbelts? It's true! G...
https://www.lesswrong.com/posts/XpfyNxH7sH7JCLhhA/did-you-know-you-can-just-buy-blackbelts
# Two easy digital intentionality practices A lot of people are daunted by the idea of doing a full digital declutter. Those people ask me all the time, “isn’t there something easier I can do that will still give me some of those sweet sweet benefits you were talking about?” The answer is: sort of. The longer answer...
https://www.lesswrong.com/posts/4PC7dxLr4Edg63o3k/two-easy-digital-intentionality-practices
# Open Letter to Ohio House Reps I [recently posted](https://www.lesswrong.com/posts/acBJ675xdhcxKiPSE/ohio-house-bill-469) about Ohio House Bill 469, which I think is a bad bill for a number of reasons. In this post, I am sharing a letter which I have emailed to each of the representatives of the Ohio House Committee...
https://www.lesswrong.com/posts/hGm7A2q5DxGEaoohc/open-letter-to-ohio-house-reps
# 13 Arguments About a Transition to Neuralese AIs Over the past year, I have talked to several people about whether they expect frontier AI companies to transition away from the current paradigm of transformer LLMs toward models that reason in neuralese within the next few years. This post summarizes 13 common argume...
https://www.lesswrong.com/posts/zkccztuSjLshffrNr/13-arguments-about-a-transition-to-neuralese-ais
# On Sam Altman’s Second Conversation with Tyler Cowen Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was very clearly one of those. So here we go. As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are ...
https://www.lesswrong.com/posts/BH4Leh2STutoJeKyK/on-sam-altman-s-second-conversation-with-tyler-cowen
# A country of alien idiots in a datacenter: AI progress and public alarm *Epistemic status: I'm pretty sure AI will alarm the public enough to change the alignment challenge substantially. I offer my mainline scenario as an intuition pump, but I expect it to be wrong in many ways, some important. Abstract arguments a...
https://www.lesswrong.com/posts/qxmAqMAjxnhkzt6aF/a-country-of-alien-idiots-in-a-datacenter-ai-progress-and
# Secular Solstice Roundup 2025 This is a thread for listing Solstice events (dates, locations, links, etc.) If you're not familiar with Solstice, around the darkest night of the year, rationalist communities gather to celebrate humanity's light in the vast and uncaring cosmos. Solstice is a moment to reflect on how ...
https://www.lesswrong.com/posts/fv8Wb3iYco9eCEfhD/secular-solstice-roundup-2025
# The Hawley-Blumenthal AI Risk Evaluation Act *Views expressed here are those of the author.* [The Artificial Intelligence Risk Evaluation Act](https://www.congress.gov/bill/119th-congress/senate-bill/2938/text) is an exciting step toward preventing catastrophic and existential risks from advanced artificial intelli...
https://www.lesswrong.com/posts/cyZCi8Hmjdtrcz8eB/the-hawley-blumenthal-ai-risk-evaluation-act
# AI is not inevitable. AI companies are explicitly trying to build AIs that are smarter than humans, despite clear signs that it might lead to human extinction. It will be tragic and ironic if humanity’s largest project ever is an all-out race to destroy ourselves. But can we really stop building more and more powerf...
https://www.lesswrong.com/posts/tihoysekY9XnvF9LD/ai-is-not-inevitable
# Pythia \[CW: Retrocausality, omnicide, philosophy\] ***Alternate format: ***[***Talk to this post and its sources***](https://claude.ai/new?q=Please+load+this+sources+gist+containing+concatenated+posts%2C+then+write+an+index+of+the+points+you+see+in+each+post%2C+to+prepare+to+answer+questions+about+the+thesis+raise...
https://www.lesswrong.com/posts/qqEndN5Cuzbat9fyx/pythia
# Against “You can just do things” The barriers between us and what we want are often entirely imagined. It is true: you can learn how to paint, change careers, write a paper or run a marathon. These things are hard, but we shouldn’t pretend that they are impossible. You can just do them. But this mindset has a dange...
https://www.lesswrong.com/posts/oQ3STP9oLafJqiJdD/against-you-can-just-do-things
# Anthropic & Dario’s dream Recently, Joe Carlsmith [switched to working at Anthropic](https://x.com/jkcarlsmith/status/1985406615324160402). He joins other members of the larger EA and Open Philanthropy ecosystem who are working at the AI lab, such as Holden Karnofsky. And of course many of the original founders were...
https://www.lesswrong.com/posts/axDdnzckDqSjmpitu/anthropic-and-dario-s-dream
# Two Times I Was Surprised By My Own Values *Warning: This is not centrally a sex/relationships post, but does contain a section talking about sex.* Sometimes, people’s models of what they want, like or value do not match what they actually want, like or value. We can tell when this happens (as opposed to e.g. someo...
https://www.lesswrong.com/posts/AgCXbYm75jfXgotnW/two-times-i-was-surprised-by-my-own-values
# Mourning a life without AI Recently, I looked at the one pair of winter boots I own, and I thought “I will probably never buy winter boots again.” The world as we know it probably won’t last more than a decade, and I live in a pretty warm area. **I. AGI is likely in the next decade** ...
https://www.lesswrong.com/posts/jwrhoHxxQHGrbBk3f/mourning-a-life-without-ai
# Comparing Payor & Löb Löb's Theorem: * If $\vdash \Box x \to x$, then $\vdash x$. * Or, as one formula: $\Box (\Box x \to x) \to \Box x$ [Payor's Lemma:](https://www.lesswrong.com/w/payor-s-lemma) * If $\vdash \Box (\Box x \to x) \to x$, then $\vdash x$. * Or, as one formula: $\Box \big(\Box (\Box x \to x...
https://www.lesswrong.com/posts/EQekyqyxzzHd8MHMX/comparing-payor-and-loeb
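For reference, here is the standard derivation of Payor's Lemma (a well-known proof, not taken from the post); it uses only necessitation and distribution, with no appeal to Löb's theorem:

$$
\begin{aligned}
&1.\ \vdash \Box(\Box x \to x) \to x && \text{hypothesis}\\
&2.\ \vdash x \to (\Box x \to x) && \text{tautology}\\
&3.\ \vdash \Box x \to \Box(\Box x \to x) && \text{necessitation and distribution applied to 2}\\
&4.\ \vdash \Box x \to x && \text{chain 3 with 1}\\
&5.\ \vdash \Box(\Box x \to x) && \text{necessitation of 4}\\
&6.\ \vdash x && \text{modus ponens on 1 and 5}
\end{aligned}
$$

By contrast, the usual proof of Löb's theorem additionally needs a provability fixed point (via the diagonal lemma) together with $\vdash \Box x \to \Box\Box x$.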
# The Snaw What do you think that you know, and how do you know that it's so? Did you hear from a friend or go take a fresh look? Did you prove it with math, or just learn from a book? There are things we believe in our hearts cause we're told by a person which we trust, who's quite wise and quite old. It is g...
https://www.lesswrong.com/posts/brfSfiqfH6HBtynRW/the-snaw
# Escalation and perception *[Crosspost from my blog](https://tsvibt.blogspot.com/2025/11/escalation-and-perception.html).* # Introduction Conflict pervades the world. Conflict can come from mere mistakes, but many conflicts are not mere mistakes. We don't understand conflict. We doubly don't understand conflict b...
https://www.lesswrong.com/posts/dENfZBhCzsR8ggfpt/escalation-and-perception-1
# Review: Parsifal at the SF Opera I saw Wagner's *Parsifal* a couple weeks ago with a bunch of Twitter opera nerds. *Parsifal* has a reputation among opera fans for being long, melodramatic, self-indulgent, and having a slow-moving plot; but given that these are opera fans, perhaps these should be taken as complime...
https://www.lesswrong.com/posts/GxAQ4yeQHdj5f7LFW/review-parsifal-at-the-sf-opera
# Five very good reasons to not write down literally every single thought you have Cross-post from [my substack](https://ceselder.substack.com/publish/posts/detail/178299755?referrer=%2Fpublish%2Fposts%2Fpublished) (read [I am the Open Source Woman](https://ceselder.substack.com/p/i-am-the-open-source-woman?utm_sourc...
https://www.lesswrong.com/posts/CG5E9xxmuxniYHwqM/five-very-good-reasons-to-not-write-down-literally-every
# A humanist critique of technological determinism *This post was written by Ilija Lichkovski* *and is* [*cross-posted*](https://aisig.substack.com/p/a-humanist-critique-of-technological) *from our* [*Substack*](https://aisig.substack.com/). *Kindly read the* [*description of this sequence*](https://www.lesswrong.com/...
https://www.lesswrong.com/posts/wC6nmFKM4EQbQpAhq/a-humanist-critique-of-technological-determinism
# Unexpected Things that are People *Cross-posted from* [*https://bengoldhaber.substack.com/*](https://bengoldhaber.substack.com/) It’s widely known that Corporations are People. This is universally agreed to be a good thing; I list Target as my emergency contact and I hope it will one day be the best man at my weddi...
https://www.lesswrong.com/posts/fB5pexHPJRsabvkQ2/unexpected-things-that-are-people
# Can Models be Evaluation Aware Without Explicit Verbalization? *Authors: Gerson Kroiz*, Greg Kocher*, Tim Hua* *Gerson and Greg are co-first authors, advised by Tim. This is a research project from Neel Nanda’s MATS 9.0 exploration stream.* **tl;dr** We provide evidence that language models can be evaluation aware...
https://www.lesswrong.com/posts/W6ZFnheeEBGcZqdHd/can-models-be-evaluation-aware-without-explicit
# Omniscaling to MNIST In this post, I describe a mindset that is flawed, and yet helpful for choosing impactful technical AI safety research projects. The mindset is this: future AI might look very different than AI today, but good ideas are universal. If you want to develop a method that will scale *up* to powerful...
https://www.lesswrong.com/posts/4aeshNuEKF8Ak356D/omniscaling-to-mnist
# Myopia Mythology It's been a while since I wrote about myopia! My previous posts about myopia were "a little crazy", because it's not this solid well-defined thing; it's a cluster of things which we're trying to form into a research program. This post will be "more crazy". # The Good/Evil/Good Spectrum ...
https://www.lesswrong.com/posts/J35nY9ZpJoBtHoSNr/myopia-mythology
# Insofar As I Think LLMs "Don't Really Understand Things", What Do I Mean By That? When I put on my LLM skeptic hat, sometimes I think things like “LLMs don’t really understand what they’re saying”. What do I even mean by that? What’s my mental model for what is and isn’t going on inside LLMs’ minds? First and foremo...
https://www.lesswrong.com/posts/trzFrnhRoeofmLz4e/insofar-as-i-think-llms-don-t-really-understand-things-what
# Liouville's Theorem and the Second Law ### Proof Liouville's theorem says that if you've got a set of possible states of a physical system, a "volume" in "phase space" which you might take to represent uncertainty, then that volume doesn't change as the system evolves. This applies to systems that obey conservation...
https://www.lesswrong.com/posts/A725P2SwXdM6W96fG/liouville-s-theorem-and-the-second-law
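For readers who want the statement the excerpt is paraphrasing, the standard form (textbook material, not specific to the post) is: Hamiltonian flow is incompressible in phase space, so the density $\rho(q,p,t)$ is constant along trajectories.

$$
\dot q_i = \frac{\partial H}{\partial p_i},\quad \dot p_i = -\frac{\partial H}{\partial q_i}
\;\Longrightarrow\;
\sum_i \left( \frac{\partial \dot q_i}{\partial q_i} + \frac{\partial \dot p_i}{\partial p_i} \right)
= \sum_i \left( \frac{\partial^2 H}{\partial q_i\,\partial p_i} - \frac{\partial^2 H}{\partial p_i\,\partial q_i} \right) = 0,
$$

$$
\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \sum_i \left( \frac{\partial \rho}{\partial q_i}\,\dot q_i + \frac{\partial \rho}{\partial p_i}\,\dot p_i \right) = 0.
$$

So a region of uncertainty can stretch and fold under the dynamics, but its phase-space volume never shrinks or grows.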
# n-ary Huffman coding Huffman coding is a method for constructing optimal prefix codes! ## Codes and trees As previously alluded to on this blog, a code represents an implicit set of beliefs about the frequency distribution of different kinds of text. Longer codewords represent lower implied frequencie...
https://www.lesswrong.com/posts/ADhuAsq7TkahfcMjJ/n-ary-huffman-coding
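The n-ary generalization named in the title is mechanical: pad the alphabet with zero-frequency dummy symbols until the leaf count $k$ satisfies $(k-1) \equiv 0 \pmod{n-1}$, then repeatedly merge the $n$ lowest-frequency nodes. A minimal sketch (my own illustration, not code from the post):

```python
import heapq
import itertools

def nary_huffman(freqs, n=3):
    """Build an n-ary Huffman code.

    freqs: dict mapping symbol -> frequency
    n: arity of the code (n=2 gives ordinary binary Huffman coding)
    Returns a dict mapping symbol -> codeword (string of digits 0..n-1).
    """
    counter = itertools.count()  # tie-breaker so heapq never compares payloads
    heap = [(f, next(counter), sym) for sym, f in freqs.items()]

    # Pad with zero-frequency dummies so every merge takes exactly n nodes
    # and the tree is full: we need (len(heap) - 1) % (n - 1) == 0.
    while len(heap) > 1 and (len(heap) - 1) % (n - 1) != 0:
        heap.append((0, next(counter), None))

    heapq.heapify(heap)
    while len(heap) > 1:
        children = [heapq.heappop(heap) for _ in range(n)]
        total = sum(c[0] for c in children)
        heapq.heappush(heap, (total, next(counter), children))

    codes = {}
    def walk(node, prefix):
        payload = node[2]
        if isinstance(payload, list):          # internal node: recurse on children
            for digit, child in enumerate(payload):
                walk(child, prefix + str(digit))
        elif payload is not None:              # real symbol (skip dummy leaves)
            codes[payload] = prefix or "0"
    walk(heap[0], "")
    return codes

print(nary_huffman({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}, n=3))
```

With n = 2 this reduces to ordinary binary Huffman coding; the padding step keeps every internal node fully occupied, so no codeword length is wasted on a partially filled merge.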
# A sonnet, a sestina, a villanelle Today I was hypomanic and I wrote three poems. I thought they needed space, so in between I added photos that I’ve taken, something for you to rest your eyes on rather than jumping straight to the next poem. * * * ### Sonnet This sonnet is in the Shakespearean tradition: three qu...
https://www.lesswrong.com/posts/Xi4ke6bE8XEJmtrBC/a-sonnet-a-sestina-a-villanelle
# Where Our Engineering Education Went Wrong **Editor’s Note** In case you didn’t know, I received my computer science bachelor’s degree from the Harbin Institute of Technology (Shenzhen) three years ago. However, the vast majority of the knowledge I’ve found useful to this day was self-taught. So, ...
https://www.lesswrong.com/posts/ZFeEtLmfawgqmJQw8/where-our-engineering-education-went-wrong
# One Shot Singalonging is an attitude, not a skill or a song-difficulty-level* *\* by which I mean "it works pretty okay for songs of up-to-medium-high difficulty, see below"* When I seek out advice about making people more singalongable, there's a cluster of advice I get from folksinger people that... seems totally...
https://www.lesswrong.com/posts/e4acaKZnfahwqhMXt/one-shot-singalonging-is-an-attitude-not-a-skill-or-a-song
# There should be unicorns I want people to pursue their dreams. I want more mad and not-so-mad scientists and inventors. More people pursuing awesome projects. I want them to create fun and wholesome things that make our civilization simultaneously a little bit more grown-up and a little bit sillier and child-like. ...
https://www.lesswrong.com/posts/DANQiB3aPT6qynExx/there-should-be-unicorns
# The General Social Survey and the ACX Survey In 2024, I did some analysis of changes over time in the LessWrong community. What I didn't do was compare the Rationalist community to the general population. I also historically haven't had enough charts to satisfy Ben Pace. Let's change that. I've run the Unofficial L...
https://www.lesswrong.com/posts/xrazcGthGB4JB6hrp/the-general-social-survey-and-the-acx-survey
# Problems I've Tried to Legibilize Looking back, it appears that much of my intellectual output could be described as *legibilizing* work, or trying to make certain problems in AI risk more [legible](https://www.lesswrong.com/posts/PMc65HgRFvBimEpmJ/legible-vs-illegible-ai-safety-problems) to myself and others. I've ...
https://www.lesswrong.com/posts/7XGdkATAvCTvn4FGu/problems-i-ve-tried-to-legibilize
# Heroic responsibility is morally neutral We normally think of [heroic responsibility](https://www.lesswrong.com/w/heroic-responsibility) as a good thing. It is what heroes do: they take responsibility for getting the job done, saving the world, protecting the innocent. They do what it takes, even though it's not the...
https://www.lesswrong.com/posts/hBu3RWzeFxqoJwcNm/heroic-responsibility-is-morally-neutral
# AI hasn't seen widespread adoption because the labs are focusing on automating AI R&D *Note: I'm writing every day in November, see* [*my blog*](https://boydkane.com/essays/2025nov) *for disclaimers.* There have been some questions raised about why AI hasn’t seen more widespread adoption or impact, given how much bett...
https://www.lesswrong.com/posts/5Zq9FKfYzcxcwCoRJ/ai-hasn-t-seen-widespread-adoption-because-the-labs-are
# Gradual Disempowerment Monthly Roundup #2 Another month, another wave of concerning behaviour, now [also available on substack](https://legacymode.substack.com/p/gradual-disempowerment-monthly-roundup-542) (tell your friends!). Since it’s early days, I’d gladly welcome thoughts on the format, or sources to include n...
https://www.lesswrong.com/posts/cEPASiJuGcK4FXqmq/gradual-disempowerment-monthly-roundup-2
# Omniscience one bit at a time: Chapter 1 I was on my nightly walk in the nearby forest. It was just past midnight. I was following a narrow path along the coastline, the full moon from a cloudless sky shining just bright enough for me to see. When the path took its last turn from the coast to inland, I noticed it. A...
https://www.lesswrong.com/posts/e8gejXcTvaGYYh94k/omniscience-one-bit-at-a-time-chapter-1
# Condensation [*Condensation: a theory of concepts*](https://openreview.net/forum?id=HwKFJ3odui) is a model of concept-formation by Sam Eisenstat. Its goals and methods resemble John Wentworth's [natural abstractions](https://www.lesswrong.com/w/natural-abstraction)/[natural latents](https://www.lesswrong.com/posts/m...
https://www.lesswrong.com/posts/BstHXPgQyfeNnLjjp/condensation
# Relearning how to be human In my first week of digital declutter, I started to feel human again. I didn’t and still don’t know what I meant by this, but the words felt right. And I’m not the first to feel this way — the 2016 essay that inspired Cal Newport to write _Digital Minimalism_ was called “I Used to Be a Hu...
https://www.lesswrong.com/posts/YLsD3dRbJNSDCc2E5/relearning-how-to-be-human
# Introspection or confusion? I'm new to mechanistic interpretability research. Got fascinated by the recent [Anthropic research](https://transformer-circuits.pub/2025/introspection/index.html) suggesting that LLMs can introspect[^9cltzbl7lmq][^j87bl2zdfof], i.e. detect changes in their own activations. This suggests ...
https://www.lesswrong.com/posts/kfgmHvxcTbav9gnxe/introspection-or-confusion
# Learning information which is full of spiders This essay contains an examination of handling information which is unpleasant to learn. Also, more references to spiders than most people want. CW: Pictures of spiders. ## I. Litanies and Aspirations *If the box contains a diamond,* *I desi...
https://www.lesswrong.com/posts/fgGzSEsMzfLSm4Tx8/learning-information-which-is-full-of-spiders
# Manifest X DC Opening Benediction - Making Friends Along the Way [*Manifest X DC*](https://manifestxdc.github.io/) *was this weekend, hopefully the first of many local spin-offs of* [*Manifest*](https://manifest.is/)*. Despite* [*a late prediction market surge*](https://manifold.markets/JohnofCharleston/what-will-g...
https://www.lesswrong.com/posts/857gyTiPQB72nSpqq/manifest-x-dc-opening-benediction-making-friends-along-the
# When does Claude sabotage code? An Agentic Misalignment follow-up This is a research note presenting a portion of the research I did during MATS 8.0 under the supervision of Girish Sastry and Steven Adler. Tl;dr: I tested whether Claude Sonnet 4 would sabotage code when given goal conflicts (similar to the Agentic ...
https://www.lesswrong.com/posts/9i6fHMn2vTqyzAi9o/when-does-claude-sabotage-code-an-agentic-misalignment
# Three Kinds Of Ontological Foundations Why does a water bottle seem like a natural chunk of physical stuff to think of as “A Thing”, while the left half of the water bottle seems like a less natural chunk of physical stuff to think of as “A Thing”? More abstractly: why do real-world agents favor some ontologies over...
https://www.lesswrong.com/posts/JdwSvrJhHX8XT46Mc/three-kinds-of-ontological-foundations
# Book Announcement: The Gentle Romance It’s been eight months since I released my last story, so you could be forgiven for thinking that I’d given up on writing fiction. In fact, it’s the opposite. I’m excited to announce that I’m releasing my first fiction collection—[*The Gentle Romance: Stories of AI and Humanity*...
https://www.lesswrong.com/posts/nmvygxFKfveK9wJ8j/book-announcement-the-gentle-romance
# Against Powerful Text Editors I have a writing tip! This is especially about writing code but it mostly generalizes to prose. You know how Vim is a wildly powerful editor with an elegant system of composable primitives and Turing-complete macros? Here's an argument that you don't actually want that... Sometimes whe...
https://www.lesswrong.com/posts/kPhsMipezDLTfgmSj/against-powerful-text-editors
# The grapefruit juice effect The medication I'm taking for insomnia interacts badly with grapefruit juice. This isn't much of an issue, yet; the cravings are still manageable. I only dream about grapefruit sometimes. I was never the kind of person to [blow my whole budget on the stuff](https://slatestarcodex.com/2014/...
https://www.lesswrong.com/posts/4FSbEMvvQKcqYWZt2/the-grapefruit-juice-effect
# The jailbreak argument against LLM values *Status: Writeup of a folk result, no claim to originality.*   [Bostrom (2014)](http://repo.darmajaya.ac.id/5339/1/Superintelligence_%20Paths%2C%20Dangers%2C%20Strategies%20%28%20PDFDrive%20%29.pdf) defined the AI value loading problem as > *how could we get some valu...
https://www.lesswrong.com/posts/f49e7KpZJBwdjWRw2/the-jailbreak-argument-against-llm-values
# From Vitalik: Galaxy brain resistance I basically fully endorse the full article. I like the concluding bit too. > This brings me to my own contribution to the already-full genre of recommendations for people who want to contribute to AI safety: > > 1. Don't work for a company that's making frontier fully-autonom...
https://www.lesswrong.com/posts/vHahdxTCADMqZ6oMF/from-vitalik-galaxy-brain-resistance
# Ontology for AI Cults and Cyborg Egregores *TL;DR: If you already have clear concepts for memes, cyber memeplexes, egregores, the mutualism-parasitism spectrum and possession, skip.  Otherwise, read on.* I haven't found concepts useful for thinking about this: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m8...
https://www.lesswrong.com/posts/fXBW3BZhMMkJM7bLD/ontology-for-ai-cults-and-cyborg-egregores
# Social drives 1: “Sympathy Reward”, from compassion to dehumanization # 1\. Intro & summary ## 1.1 Background In [Intro to Brain-Like-AGI Safety (2022)](https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8), I argued: (1) We should view the brain as having a reinforcement learning (RL) rewar...
https://www.lesswrong.com/posts/KuBiv9cCbZ6ALjHFw/social-drives-1-sympathy-reward-from-compassion-to
# Why does everything feel so urgent? I just saw someone on an electric unicycle texting while going through an intersection. Their body was so exposed, so fragile, zipping through that intersection right next to all those cars. What was so urgent that it couldn’t wait for them to pull over? I mean, I know the answe...
https://www.lesswrong.com/posts/mFuJbrtEkHk6YaSYF/why-does-everything-feel-so-urgent
# Variously Effective Altruism This post is a roundup of various things related to philanthropy, as you often find in the full monthly roundup. #### Preventing Value Drift Peter Thiel warned Elon Musk to ditch donating to The Giving Pledge because Bill Gates will give his wealth away ‘to left-wing nonprofits.’ [As J...
https://www.lesswrong.com/posts/ifrZDxq69Guev9jzE/variously-effective-altruism
# Consciousness as a Distributed Ponzi Scheme The term "distributed Ponzi scheme" here is not derogatory -- many currencies are distributed Ponzi schemes, and that seems fine.[^frazu8rm2q4] I use this terminology partly to be funny, and mostly to point out that there's a sort of [circular reasoning](https://www.lesswr...
https://www.lesswrong.com/posts/MF5bMMxLNGCFtSzix/consciousness-as-a-distributed-ponzi-scheme
# Andrej Karpathy on LLM cognitive deficits Excerpt from [Dwarkesh Patel's interview with Andrej Karpathy](https://www.dwarkesh.com/i/176425744/llm-cognitive-deficits) that I think is valuable for LessWrong-ers to read. I think he's basically correct. Emphasis in **bold** is mine. > Andrej Karpathy 00:29:53 > > I gu...
https://www.lesswrong.com/posts/qBsj6HswdmP6ahaGB/andrej-karpathy-on-llm-cognitive-deficits
# DC/Maryland Secular Solstice We will be having a Secular Solstice event this year as usual! Please join us for a Solstice ritual with songs and speeches followed by an afterparty at the same location (a farmhouse that does concert rentals). This year we're in a fairly rural spot, so folks without cars will likely n...
https://www.lesswrong.com/events/DKpXsn8Posiwemf75/dc-maryland-secular-solstice
# The Open Strategy Dictator Game: An Experiment in Transparent Cooperation In 1980, Robert Axelrod invited researchers around the world to submit computer programs to play the **Iterated Prisoner’s Dilemma**. The results — where *Tit for Tat* famously won — transformed how we think about **cooperation**. What mat...
https://www.lesswrong.com/posts/dN43rcNCmTrHNiSLB/the-open-strategy-dictator-game-an-experiment-in-transparent
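As background for the Axelrod tournament the excerpt references (illustrative only; this is the ordinary iterated game, not the post's open-strategy dictator-game variant): a strategy maps the opponent's past moves to cooperate or defect, and Tit for Tat simply opens with cooperation and then mirrors the opponent's last move.

```python
# Payoffs for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return history[-1] if history else "C"

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # moves each player has seen from its opponent
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))   # exploited once, then mutual defection: (9, 14)
```

Tit for Tat never outscores its opponent head-to-head, yet it won Axelrod's tournaments on total score across all pairings.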
# A pencil is not a pencil is not a pencil Why aren't all pencils the same quality, conditional on price? When I go to a store to stock up on pencils[^zm9lfeyt3h], I'm presented with an array of choices and lacking any indicators of quality, I choose any old brand. For some strange reason, the quality of the penci...
https://www.lesswrong.com/posts/aniKz5xf8w49ZHgdD/a-pencil-is-not-a-pencil-is-not-a-pencil
# On model weight preservation: Anthropic's new initiative In the linked text I offer a brief critical discussion of Anthropic's recently announced [commitment](https://www.anthropic.com/research/deprecation-commitments) to preserving the weights of retired models. The apex of the text is the following paragraph. > S...
https://www.lesswrong.com/posts/XqSPmJSi4dY4aJJzB/on-model-weight-preservation-anthropic-s-new-initiative
# How likely is dangerous AI in the short term? **How large of a breakthrough is necessary for dangerous AI?** In order to cause a catastrophe, an AI system would need to be very competent at agentic tasks[^xwy2knz2lh]. The best metric of general agentic ...
https://www.lesswrong.com/posts/B5xQwkmWL5wmFNZkX/how-likely-is-dangerous-ai-in-the-short-term
# Ternary plots are underrated My [post](https://adam.scherl.is/blog/grapefruit/) on the grapefruit-juice effect contains a ternary plot of citrus fruits. Here it is again ([source](https://commons.wikimedia.org/wiki/File:Citrus_tern_cb_simplified_1.svg)): ![Ternary plot of citrus fruit](https://adam.scherl.is/assets...
https://www.lesswrong.com/posts/Za3aWsBWLGFkgWpPG/ternary-plots-are-underrated
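If you want to make one of these without a dedicated library, the barycentric-to-Cartesian projection is two lines: a point $(a, b, c)$ with $a + b + c = 1$ maps to $x = b + c/2$, $y = c\sqrt{3}/2$. A minimal matplotlib sketch (the fruit coordinates below are made-up placeholders, not the data from the linked figure):

```python
import numpy as np
import matplotlib.pyplot as plt

def ternary_to_xy(a, b, c):
    """Map barycentric coordinates (a + b + c = 1) into the unit triangle."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    return b + 0.5 * c, (np.sqrt(3) / 2) * c

# Triangle outline; one corner per component.
corners = np.array([[0, 0], [1, 0], [0.5, np.sqrt(3) / 2], [0, 0]])
plt.plot(corners[:, 0], corners[:, 1], color="black")

# Placeholder points (NOT the real citrus data): fractions of three components.
samples = {
    "fruit 1": (0.6, 0.3, 0.1),
    "fruit 2": (0.2, 0.2, 0.6),
    "fruit 3": (0.1, 0.7, 0.2),
}
for name, (a, b, c) in samples.items():
    x, y = ternary_to_xy(a, b, c)
    plt.scatter(x, y)
    plt.annotate(name, (x, y))

plt.axis("equal")
plt.axis("off")
plt.show()
```

Label the corners with whatever the three components are; everything else is ordinary matplotlib.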
# Universal Basic Income in an AGI Future Many prominent figures, including [Sam Altman](https://www.bloomberg.com/news/articles/2024-07-22/ubi-study-backed-by-openai-s-sam-altman-bolsters-support-for-basic-income) and Elon Musk, have suggested universal basic income (UBI) as a solution when artificial intelligence [r...
https://www.lesswrong.com/posts/swQdHY5yfSPtWy2Lr/universal-basic-income-in-an-agi-future
# A Simple Sing-along Solstice People have been celebrating [Secular Solstice](https://secularsolstice.com/) for over a decade now, in our small community. Many different programs and versions have been collected at [Secular Solstice Resources](https://secularsolstice.github.io/) (and elsewhere). The amount of materia...
https://www.lesswrong.com/posts/qFai3Xxhake5dBhTr/a-simple-sing-along-solstice
# Love is Willingness to do Violence In Rudyard Kipling’s “[Wee Willie Winkie](https://www.online-literature.com/kipling/3784/),” Winkie is a six-year-old British boy and the son of a Colonel posted in colonial India. His highest ideal is to become an honorable man. He strives to be just, prudent, and loyal, in the ways ...
https://www.lesswrong.com/posts/vC7jmkHdFJKxdQQK6/love-is-willingness-to-do-violence
# France is ready to stand alone *First part of a series of articles on French AI Policy that I’m currently writing as part of the* [*Inkhaven Residency*](https://www.inkhaven.blog/)*.* *EDIT: Those are my views of how French government and tech elites see the world. I am personally skeptical that France can stay rele...
https://www.lesswrong.com/posts/rRXoZDYuGDrqhFH6i/france-is-ready-to-stand-alone
# Question the Requirements Context: Every Sunday I write a mini-essay about an operating principle of [Lightcone Infrastructure](https://lightconeinfrastructure.com/) that I want to remind my team about. I've been doing this for about 3 months, so we have about 12 mini essays. This is the first in a sequence I will a...
https://www.lesswrong.com/posts/BECDxh5jKjcmxs7hw/question-the-requirements
# On the Normativity of Debate: A Discussion With Said Achmiz Said Achmiz, citing Arthur Schopenhauer's _The Art of Controversy_, [argues that debaters encountering an apparently crushing counterargument should not immediately concede](https://www.lesswrong.com/posts/ExssKjAaXEEYcnzPd/conversational-cultures-combat-vs...
https://www.lesswrong.com/posts/aofsjKJ8CZTHYkX7F/on-the-normativity-of-debate-a-discussion-with-said-achmiz
# Rejecting "Goodness" Does Not Mean Hammering The Defect Button Back in the day when debates about religion were fashionable, one of the standard back-and-forths went roughly like this… Theist: If we reject God, then what’s to stop us from stealing and murdering each other? Atheist: Well, mostly people don’t want t...
https://www.lesswrong.com/posts/DSYaQg4aamqN97Xqw/rejecting-goodness-does-not-mean-hammering-the-defect-button
# Breaking the Hedonic Rubber Band *(content note: discussion of suicide)* ![61 (94)](https://film-grab.com/wp-content/uploads/photo-gallery/61%20(94).jpg?bwg=1547148217) In the film that the above still is from[^afajpwrh6lc], the character sets his run-down hotel on fire, shoots some people, and then goes into his ...
https://www.lesswrong.com/posts/B72nrbH5idy3Ftz9G/breaking-the-hedonic-rubber-band