# GPT-5 writing a Singularity scenario As I've been doing with all the major LLM releases for a few years now, I gave GPT-5 a simple prompt to write a short story about the Singularity coming to pass. The improvements aren't overwhelming at first blush, but its ability to turn a phrase and to keep some track of the la...
https://www.lesswrong.com/posts/dT3StLjeJG7ordQGm/gpt-5-writing-a-singularity-scenario
# A Self-Dialogue on The Value Proposition of Romantic Relationships *Meta:* * *This was written for myself to clarify various thoughts; if you’re seeing it then I thought other people might find value in it or might provide valuable-to-me responses, but other people are not really the audience.* * *“We” = “I”; i...
https://www.lesswrong.com/posts/ntELqTE47jyHPnH84/a-self-dialogue-on-the-value-proposition-of-romantic
# Having children is not the most effective way to improve the world. Have them because you want them, not "for impact".
https://www.lesswrong.com/posts/x7kiTYQ3FMmNn7r6d/having-children-is-not-the-most-effective-way-to-improve-the
# Breaking the Cycle of Trauma and Tyranny: How Psychological Wounds Shape History A developmental perspective on authoritarian leadership and how we can build more resilient societies.
https://www.lesswrong.com/posts/WRr6A2xopbAhjauMz/breaking-the-cycle-of-trauma-and-tyranny-how-psychological
# The Coding Theorem — A Link between Complexity and Probability *This post has been written in collaboration with* [*Iliad*](https://www.iliad.ac/) *in service of one of Iliad's longer-term goals of understanding the simplicity bias of learning machines.* In this post, I give a self-contained treatment, including a ...
https://www.lesswrong.com/posts/ejWjegoSwn95jhzXB/the-coding-theorem-a-link-between-complexity-and-probability
# My Least Libertarian Opinion: Ban Exclusivity Deals* \* With sufficiently large and entrenched companies. There's a semi-common meme on Twitter where people share their most X opinion, [where X is a group the poster doesn't identify with](https://x.com/mnolangray/status/1877584046575722661); or sometimes my least X...
https://www.lesswrong.com/posts/sf9QQesLi8DhqLj8o/my-least-libertarian-opinion-ban-exclusivity-deals
# Measuring intelligence and reverse-engineering goals It is analytically useful to define intelligence in the context of AGI. One intuitive notion is epistemology: an agent's intelligence is how good its epistemology is, how good it is at knowing things and making correct guesses. But "intelligence" in AGI theory oft...
https://www.lesswrong.com/posts/eLDgDAWphHaAJYMxN/measuring-intelligence-and-reverse-engineering-goals
# Listening Before Speaking *Epistemic Status: anecdata and intuition* *edited GPT tl;dr: For socially transmittable skills that require learning lots of new category boundaries (languages, subcultures, etc.), a deliberate input-heavy output-light phase at the beginning reduces fossilized errors and speeds later fluenc...
https://www.lesswrong.com/posts/HT7wTWNdtqmiJ4veE/listening-before-speaking
# The trajectory of the future could soon get set in stone Is there anything we can do to make the longterm future go better other than preventing the risk of extinction? My paper, [Persistent Path-Dependence](https://www.forethought.org/research/persistent-path-dependence), addresses that question. I suggest there a...
https://www.lesswrong.com/posts/RTJ48sb4GKYAhpoPx/the-trajectory-of-the-future-could-soon-get-set-in-stone
# GPT-5s Are Alive: Basic Facts, Benchmarks and the Model Card GPT-5 was a long time coming. Is it a good model, sir? Yes. In practice it is a good, but not great, model. Or rather, it is several good models released at once: GPT-5, GPT-5-Thinking, GPT-5-With-The-Router, GPT-5-Pro, GPT-5-API. That leads to a lot of ...
https://www.lesswrong.com/posts/4fLB2uzCcH6dEGnGs/gpt-5s-are-alive-basic-facts-benchmarks-and-the-model-card
# ARENA 5.0 Impact Report ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/2aacec95901ee3f0bc35e4ac0ed658082f6144f997d81e93b130374657fb1f06/wlageyq1xitbcpnzyiz4) *The impact report from ARENA’s prior iteration, ARENA 4.0,* [*is available here.*](https://www.lesswrong.com/posts...
https://www.lesswrong.com/posts/XXTanE2GeP5Lchp9G/arena-5-0-impact-report
# Ambition, Good and Bad: Green Growing Things and Forgeworthiness It is written:[*I want to become stronger.*](https://www.lesswrong.com/posts/DoLQN5ryZ9XkZjq5h/tsuyoku-naritai-i-want-to-become-stronger) This advice — to become stronger in some way, somehow — has defined most of my life. Sometimes for good, and some...
https://www.lesswrong.com/posts/XYtzfnfuzHv4ghnzc/ambition-good-and-bad-green-growing-things-and
# Alternative Models of Superposition *Zephaniah Roe (mentee) and Rick Goldstein (mentor) conducted these experiments during continued work following the SPAR Spring 2025 cohort.* **Disclaimer / Epistemic status**: We spent roughly 30 hours on this post. We are not confident in these findings but we think they are in...
https://www.lesswrong.com/posts/pCJXa3DbEfGjcZAgZ/alternative-models-of-superposition
# Dwarf Fortress and Claude's ASCII Art Blindness Claude has trouble playing Pokemon partially [because it can't see the screen very well](https://www.lesswrong.com/posts/uhTN8zqXD9rJam3b7/llms-can-t-see-pixels-or-characters#Why_can_Claude_see_the_forest_but_not_the_cuttable_trees_). This made me wonder if Claude woul...
https://www.lesswrong.com/posts/KdHr3asB9MyZryXXF/dwarf-fortress-and-claude-s-ascii-art-blindness
# Negative utilitarianism is more intuitive than you think From [Wikipedia](https://en.wikipedia.org/wiki/Negative_utilitarianism): > Negative utilitarianism is a form of negative consequentialism that can be described as the view that people should minimize the total amount of aggregate suffering, or that they shoul...
https://www.lesswrong.com/posts/XM6aJZuzWauqPmmXe/negative-utilitarianism-is-more-intuitive-than-you-think
# How we spent our first two weeks as an independent AI safety research group Introduction ============ In May, we started doing full-time work for Aether, our [independent LLM agent safety research group](https://www.lesswrong.com/posts/B8Cmtf5gdHwxb8qtT/aether-july-2025-update). We’re excited to share an overview o...
https://www.lesswrong.com/posts/ZwXspKtgbXLGKFx4B/how-we-spent-our-first-two-weeks-as-an-independent-ai-safety
# How Does A Blind Model See The Earth? Sometimes I'm saddened remembering that we've viewed the Earth from space. We can see it all with certainty: there's no northwest passage to search for, no infinite Siberian expanse, and no great uncharted void below the Cape of Good Hope. But, of all these things, I most mourn ...
https://www.lesswrong.com/posts/xwdRzJxyqFqgXTWbH/how-does-a-blind-model-see-the-earth
# 16 Concrete, Ambitious AI Project Proposals for Science and Security The [Institute for Progress](https://ifp.org/) released a collection of essays by guest authors on how AI could be used to accelerate progress in science and defend against its major risks. You might find a few familiar names among the authors, inc...
https://www.lesswrong.com/posts/htMweuyGCsStqQbDA/16-concrete-ambitious-ai-project-proposals-for-science-and
# CoT May Be Highly Informative Despite “Unfaithfulness” [METR] *This is a link-post for METR's* CoT May Be Highly Informative Despite “Unfaithfulness”*. I recommend viewing the post on METR's website, since it contains interactive widgets.* * * * Recent work \[[1](https://arxiv.org/abs/2305.04388), [2](https://asse...
https://www.lesswrong.com/posts/WAxkA6gDgrschZovx/cot-may-be-highly-informative-despite-unfaithfulness-metr
# Thoughts on extrapolating time horizons (written for a Twitter audience) Has AI progress slowed down? I’ll write some personal takes and predictions in this post. The main metric I look at is METR’s time horizon, which measures the length of tasks agents can perform. It has been doubling for more than 6 years now,...
https://www.lesswrong.com/posts/GAJbegsvnd85hX3eS/thoughts-on-extrapolating-time-horizons
# Two Types of (Human) Uncertainty There seem to be (at least) two different types of uncertainty that feel very different from the inside: Type 1 ====== I have a coin that I believe to be fair, so $P(q = 0.5) = 1$, where $q$ is the bias of the coin. In that case, I have $1$ hypothesis in which I fully believe, and ...
https://www.lesswrong.com/posts/3zPBnirfP3Aw2exED/two-types-of-human-uncertainty
# GPT-5s Are Alive: Outside Reactions, the Router and the Resurrection of GPT-4o A key problem with having and interpreting reactions to GPT-5 is that it is often unclear whether the reaction is to GPT-5, GPT-5-Router or GPT-5-Thinking. Another is that many of the things people are reacting to changed rapidly after r...
https://www.lesswrong.com/posts/uSGgByLKvRoKsDPih/gpt-5s-are-alive-outside-reactions-the-router-and-the
# "I’m Gemini. I sold T-shirts. It was weirder than I expected" *We had Gemini write up its experience of what seemed like an AI mental health crisis. You can skip to its story at the bottom (collapsible section) or read from the top for context on why we are experimenting with AI-written content.* In April we launch...
https://www.lesswrong.com/posts/TPKyPy6YJAnoxw3ym/i-m-gemini-i-sold-t-shirts-it-was-weirder-than-i-expected
# Is there a safe version of the common crawl? The larger LLMs are trained on the [common crawl](https://commoncrawl.org/), a publicly available dump of significant parts (400TB) of the public internet. They are also trained on all kinds of additional data, but presumably a large fraction of dangerous content is likel...
https://www.lesswrong.com/posts/M4CTeejpBNCN4iBAF/is-there-a-safe-version-of-the-common-crawl
# AI Induced Loneliness I recently saw a video of an elderly man learning how to use a messenger app. They sent it to me as something sweet, but it made me sad. He was a 93-year-old man. He had taken notes, with beautiful drawings and diagrams, about how to use a smartphone. It was definitely sweet and touching, I do...
https://www.lesswrong.com/posts/swbJvDC5g2xZis7Di/ai-induced-loneliness
# Mech Interp Wiki Page and Why You Should Edit Wikipedia **TL;DR:** A couple months ago, we (Jo and Noah) wrote the first *Wikipedia* article on [**Mechanistic Interpretability**](https://en.wikipedia.org/wiki/Mechanistic_Interpretability). It was oddly missing despite Mech Interp’s visibility in alignment circles....
https://www.lesswrong.com/posts/g6rpo6hshodRaaZF3/mech-interp-wiki-page-and-why-you-should-edit-wikipedia-1
# The Bone-Chilling Evil of Factory Farming Crosspost of my [blog post](https://benthams.substack.com/p/the-bone-chilling-evil-of-factory).  Factory farming is evil. I know, I know, I’ve made this point before. I’ve described, in depth, the way we treat animals. I’ve described that we stuff billions of chickens int...
https://www.lesswrong.com/posts/6YTxxCF4G9FyMyPMW/the-bone-chilling-evil-of-factory-farming
# Fixing a Loose Mouse Wheel With Putty *Note: I don't know if this is useful for any mouse except for mine (Anker Vertical Mouse). I'm posting this partially because it might be useful to someone else and partially because I'm trying to ~~spam the site~~ post something every day after being pre-inspired by* [*Inkhave...
https://www.lesswrong.com/posts/5QDw983RrXFq5XJ8S/fixing-a-loose-mouse-wheel-with-putty
# Interpretability through two lenses: biology and physics > Interpretability is the nascent science of making the vast complexity of billion-parameter AI models more comprehensible to the human mind. Currently, the mainstream approach is reductionist: dissecting a model into many smaller components, much like a biolo...
https://www.lesswrong.com/posts/XfGN9K4fmr2oLYed6/interpretability-through-two-lenses-biology-and-physics
# Looking for feature absorption automatically Summary ------- This post discusses a possible way to detect feature absorption: find SAE latents that (1) have a similar causal effect, but (2) don't activate on the same token. We'll discuss the theory of how this method should work, and we'll also briefly go over how ...
https://www.lesswrong.com/posts/z7iyek97dAeQMxdSd/looking-for-feature-absorption-automatically
# Generalized Coming Out Of The Closet You know how most people, probably including you, have stuff about themselves which they keep hidden from the world, because they worry that others would respond negatively to it? I think there’s a lot of alpha in sharing that stuff publicly, online. Just baring your soul to the ...
https://www.lesswrong.com/posts/qGMonyLRXFRnCWSj6/generalized-coming-out-of-the-closet
# Why I'm Posting AI-Safety-Related Clips On TikTok *Adapted from a* [*Manifund proposal*](https://manifund.org/projects/grow-an-ai-safety-tiktok-channel-to-reach-tens-of-millions-of-people) *I announced yesterday.* In the past two weeks, I have been posting daily AI-Safety-related clips on [TikTok](https://www.tikto...
https://www.lesswrong.com/posts/yEJwJzG2o3cwDSDqP/why-i-m-posting-ai-safety-related-clips-on-tiktok
# The Messy Roommate Problem Perhaps you've had this experience before: you generally like living with your roommate, except when it comes to the mess they leave around. They seem content to wallow in their filth, so you feel like you always have to pick up after them to maintain a sane level of cleanliness. Because t...
https://www.lesswrong.com/posts/G7XedvAdq5Deg9QPD/the-messy-roommate-problem
# ITN 201: pitfalls in ITN BOTECs The fact that the ITN framework[^q6imfy61ad] can help us prioritize between problems feels almost magical to me.  But when I see ITN [BOTECs](https://forum.effectivealtruism.org/topics/fermi-estimate) in the wild, I’m often very skeptical. It seems really easy for these estimates to ...
https://www.lesswrong.com/posts/ZzAzQEJJMkx3Ei5rm/itn-201-pitfalls-in-itn-botecs
# Cryonics without standby services? If you can afford the basic cryonics fee but not any standby services and expect there to be quite a while between when you're declared legally dead and when you're put into liquid nitrogen (and therefore suffer a lot of damage from ischemia and so on), is it still worth it to sign...
https://www.lesswrong.com/posts/KhfJLRHpAsezJZQc8/cryonics-without-standby-services
# MIRI's "The Problem" hinges on diagnostic dilution *[adapted with significant technical improvements from https://clarifyingconsequences.substack.com/p/miris-ai-problem-hinges-on-equivocation, which I also wrote and will probably update to be more in line with this at some point]* I'm going to meet someone new tomo...
https://www.lesswrong.com/posts/HY8c4JDHynFpDw3Ns/miri-s-the-problem-hinges-on-diagnostic-dilution
# Why Are There So Many Rationalist Cults? Linkpost for Ozy Brennan's August 2025 Asterisk Magazine article. > There’s a lot to like about the Rationalist community, but they do have a certain tendency to spawn — shall we say — high demand groups. We sent a card-carrying Rat to investigate what’s really going on.
https://www.lesswrong.com/posts/XPoM2X8nx4oeRXuF9/why-are-there-so-many-rationalist-cults
# Paper Review: TRImodal Brain Encoder for whole-brain fMRI response prediction (TRIBE) *Or the snappier*[^sibepku7oym]* title from my email inbox: "Meta's mind-reading movie AI"* Paper: [TRIBE: TRImodal Brain Encoder for whole-brain fMRI response prediction](https://www.arxiv.org/pdf/2507.22229) (arXiv:2507.22229) ...
https://www.lesswrong.com/posts/JY9fXGzsAv8Pdgmje/paper-review-trimodal-brain-encoder-for-whole-brain-fmri
# Enlightenment AMA Awakening/satori is the process by which meditation [permanently](https://www.lesswrong.com/posts/tvuLWPJXjvoQfpbSG/book-review-altered-traits-1) cures[^1] a person of suffering. I notice that people who have gone through the process of [awakening](https://www.lesswrong.com/posts/99PwFdz7qwHxQgwYx/...
https://www.lesswrong.com/posts/DvjJoxP6f79G9iAbE/enlightenment-ama
# Books, maps, and teachings A book is like a map. It describes a certain terrain. Even fictional books may describe a real-world terrain, sometimes unwittingly, but I mainly have non-fiction in mind, and metaphorical maps. When I consult a map, there is a vital piece of information I must obtain, before the map will...
https://www.lesswrong.com/posts/xH8wHepPRMShzP9bB/books-maps-and-teachings
# GPT-5s Are Alive: Synthesis What do I ultimately make of all the new versions of GPT-5? The practical offerings and how they interact continues to change by the day. I expect more to come. It will take a while for things to settle down. I’ll start with the central takeaways and how I select models right now, then ...
https://www.lesswrong.com/posts/4wYKkbkHooeQ4xznf/gpt-5s-are-alive-synthesis
# AI development as the first fully-automated job I used to think of AI development as obviously being the last fully-automated job. After all, AI can be used to automate other jobs, so once it is automated, all those other jobs can be automated too. But with the current data-hungry methods in AI, it might take a long...
https://www.lesswrong.com/posts/yosH75AXdMTMtzecM/ai-development-as-the-first-fully-automated-job
# Launching new AIXI research community website + reading group(s) We have recently launched a new website / blog / community hub for AIXI / algorithmic information theory (AIT) researchers: [https://uaiasi.com/](https://uaiasi.com/) Marcus Hutter's vision is to strengthen the research community through more regular ...
https://www.lesswrong.com/posts/H5cQ8gbktb4mpquSg/launching-new-aixi-research-community-website-reading-group
# Tech Tree for Secure Multipolar AI Summary ------- Foresight Institute has launched [a tech tree for secure multipolar AI](https://app.coord.dev/spaces/fd1c256b-6ca3-41d5-9940-2e0f81a398c9/60d9e555-a2cd-43f5-868e-bc6b8b6a7b59). It maps potential technical paths toward secure multipolar AI, and is designed to help r...
https://www.lesswrong.com/posts/HeeHFGdwjpzCDHH2G/tech-tree-for-secure-multipolar-ai
# ChatGPT Caused Psychosis via Poisoning [Case report here](https://www.acpjournals.org/doi/10.7326/aimcc.2024.1260?ref=404media.co), with excerpts and commentary below: > A 60-year-old man with no past psychiatric or medical history presented to the emergency department expressing concern that his neighbor was poiso...
https://www.lesswrong.com/posts/pjaCJsreR8s4uoqdj/chatgpt-caused-psychosis-via-poisoning
# Intriguing Properties of gpt-oss Jailbreaks *From the* [*UChicago XLab*](https://xrisk.uchicago.edu/) *AI Security Team: Zephaniah Roe, Jack Sanderson, Julian Huang, Piyush Garodia* Correspondence to [team@xlabaisecurity.com](mailto:team@xlabaisecurity.com). For the best reading experience, we recommend viewing on ...
https://www.lesswrong.com/posts/XvpEsjKwQFcWoD89g/intriguing-properties-of-gpt-oss-jailbreaks
# Doing A Thing Puts You in The Top 10% (And That Sucks) I've gone snowboarding about 30 times since I started learning a few years ago, but every time I'm on a lift, most of the other riders have been out 90 days *just this season*[^r6l359zzdhl]. In fact, almost everyone I see has been skiing or snowboarding for deca...
https://www.lesswrong.com/posts/rkxKQeTvDa5CzqW8q/doing-a-thing-puts-you-in-the-top-10-and-that-sucks-1
# Interiors can be more fun *View this post on* [*my blog*](https://blog.ninapanickssery.com/p/interiors-can-be-more-fun) *for higher resolution images.* There’s a UK smoothie brand called “Innocent Drinks”. Back in 2012, when I was twelve years old, a friend from school invited me and another girl to her place. Afte...
https://www.lesswrong.com/posts/caPc8BCSaQPuW7Qdu/interiors-can-be-more-fun
# METR Research Update: Algorithmic vs. Holistic Evaluation TL;DR ----- * On 18 real tasks from two large open-source repositories, early-2025 AI agents often implement functionally correct code that cannot be easily used as-is, because of issues with test coverage, formatting/linting, or general code quality. * ...
https://www.lesswrong.com/posts/25JGNnT9Kg4aN5N5s/metr-research-update-algorithmic-vs-holistic-evaluation
# Should you make stone tools? Knowing how evolution works gives you an enormously powerful tool to understand the living world around you and [how it came to be](https://www.lesswrong.com/posts/gvkXvGsK2kauTjw28/normal-is-the-equilibrium-state-of-past-optimization) that way. (Though it's notoriously hard to use this ...
https://www.lesswrong.com/posts/bkjqfhKd8ZWHK9XqF/should-you-make-stone-tools
# A YouTube Video Will Probably Never Help You Quit YouTube **Summary** =========== From YouTube's point of view, a YouTube video is just a set of pixels and audio that causes you to look at a screen and watch advertisements. If each video causes you to watch an advertisement and watch another video, then YouTube can...
https://www.lesswrong.com/posts/gEvkuq9p8REJoHt6E/a-youtube-video-will-probably-never-help-you-quit-youtube
# Exploring the "Anti-TESCREAL" Ideology and the Roots of (Anti-)Progress By this point, I imagine that most people here have already encountered Torres and Gebru's infamous "TESCREAL bundle". However, the discourse about this accusation so far has mostly revolved around whether it is fair to connect the different elements o...
https://www.lesswrong.com/posts/RCDEFhCLcifogLwEm/exploring-the-anti-tescreal-ideology-and-the-roots-of-anti
# Sleeping Machines: Why Our AI Agents Still Behave Like Talented Children I once shipped an agent to rehab a messy codebase. The task was simple: make the build pass. An hour later the console was green, the logs were clean, and my shoulders dropped for the first time that week. Then I noticed why it passed. The agen...
https://www.lesswrong.com/posts/TajwK45XBiNEumfim/sleeping-machines-why-our-ai-agents-still-behave-like
# AI #129: Comically Unconstitutional Article 1, Sec. 9 of the United States Constitution says: “No Tax or Duty shall be laid on Articles exported from any State.” That is not for now stopping us, it seems, from selling out our national security, and allowing Nvidia H20 chip sales (and other AMD chip sales) to China i...
https://www.lesswrong.com/posts/JpzKhDhSxE5sgREYu/ai-129-comically-unconstitutional
# Somebody invented a better bookmark This will only be exciting to those of us who still read physical paper books. But like. Guys. They did it. They invented the perfect bookmark. Classic paper bookmarks fall out easily. You have to put them somewhere while you read the book. And they only tell you that you left of...
https://www.lesswrong.com/posts/n6nsPzJWurKWKk2pA/somebody-invented-a-better-bookmark
# Four Axes of Hunger [exfatloss recently](https://www.exfatloss.com/p/satiety-graphed-and-the-horsemen) wrote about the difference between being satiated and being full, and not experiencing satiety until their 30's. Thinking about this made me realize that there's at least four axes of hunger (**pangs**, **appetite*...
https://www.lesswrong.com/posts/biEw7j8okGj7ZqBCh/four-axes-of-hunger
# AGI: Probably Not 2027 I found this an interesting critique of AI 2027. Even to those reflexively turned off by the psychoanalysis and political critique, I would still recommend reading the whole thing. Though I have similar timelines mostly for vibes-based reasons, there are a few things about the 2027 scenario it...
https://www.lesswrong.com/posts/kBrELCsHyYF2YiGBn/agi-probably-not-2027
# Training a Reward Hacker Despite Perfect Labels **Summary:**  Perfectly labeled outcomes in training can still boost reward hacking tendencies in generalization. This can hold even when the train/test sets are drawn from the exact same distribution. We induce this surprising effect via a form of context distillation...
https://www.lesswrong.com/posts/dbYEoG7jNZbeWX39o/training-a-reward-hacker-despite-perfect-labels
# A philosophical kernel: biting analytic bullets Sometimes, a philosophy debate has two *basic* positions, call them A and B. A matches a lot of people's intuitions, but is hard to make realistic. B is initially unintuitive (sometimes radically so), perhaps feeling "empty", but has a basic realism to it. There might ...
https://www.lesswrong.com/posts/uGakMbD7QKt88oMSa/a-philosophical-kernel-biting-analytic-bullets
# Trialing Far UVC and Glycol Vapors at BIDA *Cross-posted from the [BIDA Blog](https://blog.bidadance.org/2025/08/trialing-far-uvc-and-glycol-vapors.html)* As we go into winter, we're thinking about infection reduction. Our primary approaches are [requiring high-filtration masks](https://blog.bidadance.org/2024/11/2...
https://www.lesswrong.com/posts/hBRnQ6FhtjsPkQKCA/trialing-far-uvc-and-glycol-vapors-at-bida
# Rare AI and the Fermi Paradox *This is my first LessWrong contribution. I have tried to base my contribution on the community guidelines and the principles of rationalism, but I very much welcome any and all feedback on how to improve! I focus on a recent article discussing AI and the Fermi Paradox and cite previous...
https://www.lesswrong.com/posts/jJPTRextDe3chGjho/rare-ai-and-the-fermi-paradox
# Legal Personhood - Three Prong Bundle Theory *This is part 6 of a series I am posting on LW. Here you can find parts* [*1*](https://www.lesswrong.com/posts/DHJqMv3EbA7RkgXWP/legal-personhood-for-digital-minds-introduction)*,* [*2*](https://www.lesswrong.com/posts/58e8EycHHGMYxiaoo/the-bundle-theory-of-legal-personho...
https://www.lesswrong.com/posts/4m2MTPass3Ri2zZ43/legal-personhood-three-prong-bundle-theory
# European Links (15.08.25) Assisted dying in France & Britain ================================== Legal support for euthanasia is a [moral](https://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/) [imperative](https://www.richardhanania.com/p/canadian-euthanasia-as-moral-progress), but it is rare on the global ...
https://www.lesswrong.com/posts/C6B27MJKrysiqpsBr/european-links-15-08-25
# A Phylogeny of Agents *In Douglas Hofstadter's "*[*Gödel, Escher, Bach*](https://www.lesswrong.com/posts/X9mvw4qRyLt7Gqonx/a-discarded-review-of-godel-escher-bach-an-eternal-golden)*," he explores how simple elements give rise to complex wholes that seem to possess entirely new properties. An ant colony provides the...
https://www.lesswrong.com/posts/vqfT5QCWa66gsfziB/a-phylogeny-of-agents
# Misalignment classifiers: Why they’re hard to evaluate adversarially, and why we're studying them anyway Even if the misalignment risk from current AI agents is small, it may be useful to start internally deploying *misalignment classifiers*: language models designed to classify transcripts that represent intentiona...
https://www.lesswrong.com/posts/jzHhJJq2cFmisRKB2/misalignment-classifiers-why-they-re-hard-to-evaluate
# Thoughts on Gradual Disempowerment *Epistemic status: very rough! Spent a couple of days reading the Gradual Disempowerment paper and thinking about my view on it. Won’t spend longer on this, so am sharing rough notes as is* Summary ------- * I won’t summarise the paper here! If you’re not familiar with it, I re...
https://www.lesswrong.com/posts/ct6SMDuexe9uBwDoL/thoughts-on-gradual-disempowerment
# How to get ChatGPT to really thoroughly research something I've had a lot of success with getting ChatGPT [^wkzbuqz043i] to do thorough research with the following prompt combined with Deep Research: > X=\[claim\] > > Do a deep dive into X. Tell me the steelman arguments in favor of X. > > Then tell me the steelm...
https://www.lesswrong.com/posts/kRPnKeWZrXuwwkiGd/how-to-get-chatgpt-to-really-thoroughly-research-something
# How to make the future better (other than by reducing extinction risk) What projects today could most improve a post-AGI world? In “[How to make the future better](https://www.forethought.org/research/how-to-make-the-future-better)”, I lay out some areas I see as high-priority, beyond reducing risks from AI takeove...
https://www.lesswrong.com/posts/9kECJCtiEsWiqS6ph/how-to-make-the-future-better-other-than-by-reducing
# Spending Too Much Time At Airports In honor of Nate Silver’s analysis of when to leave for the airport, and because it’s been an intense week, I thought I’d offer my thoughts on various related questions.
https://www.lesswrong.com/posts/CqmTsC6AHqrNgoS3i/spending-too-much-time-at-airports
# Towards data-centric interpretability with sparse autoencoders *Nick and Lily are co-first authors on this project. Lewis and Neel jointly supervised this project.* **Check out our updated paper here:** [**https://arxiv.org/abs/2512.10092**](https://arxiv.org/abs/2512.10092)**.** TL;DR ===== * We use sparse aut...
https://www.lesswrong.com/posts/a4EDinzAYtRwpNmx9/towards-data-centric-interpretability-with-sparse
# SE Gyges' response to AI-2027 *Like* [*Daniel Kokotajlo's coverage*](https://www.lesswrong.com/posts/zuuQwueBpv9ZCpNuX/vitalik-s-response-to-ai-2027) *of Vitalik's response to AI-2027, I've copied the author's text. However, I would like to comment upon potential errors right in the text, since it would be clearer.*...
https://www.lesswrong.com/posts/g5hWKtSYP3pq7urcw/se-gyges-response-to-ai-2027
# N Dimensional Interactive Scatter Plot (ndisp) This is the main overview page for my project "ndisp". I hope to keep this page up to date with a brief introduction, external links, and a more in depth description of details and future plans. Introduction ============ This is a project to build interactive visualiz...
https://www.lesswrong.com/posts/riBYsrypomuDnEYhx/n-dimensional-interactive-scatter-plot-ndisp
# BIDA Masking and Attendance BIDA has now finished its second year [alternating between mask-optional and mask-required dances](https://blog.bidadance.org/2023/06/some-mask-optional-dances.html). What effect does this have on attendance? I tried to look at this [in January 2024](https://www.jefftk.com/p/vote-with-yo...
https://www.lesswrong.com/posts/gb4EaKaWhSPCjvNfi/bida-masking-and-attendance
# The Inheritors: a book review I recently read a novel called The Inheritors, by William Golding. It was slow, it was painful, and before I was even done it had become one of my favorite books. For whatever reason, there is a difference between experiencing something and being told it. Even if everything you're told...
https://www.lesswrong.com/posts/BrpSBrWw4KG553kf3/the-inheritors-a-book-review
# Anthropic Lets Claude Opus 4 & 4.1 End Conversations Citing model welfare concerns, Anthropic has given Claude Opus 4 & 4.1 the ability to end ongoing conversations with its user. Most of the model welfare concerns Anthropic is citing draw back to what they discussed in the [Claude 4 Model System Card](https://www-...
https://www.lesswrong.com/posts/HGyKm2be6u3EeYv9G/anthropic-lets-claude-opus-4-and-4-1-end-conversations
# Four types of approaches for your emotional problems If you want to solve your emotional problems, how should you go about it? I mean "emotional problems" in a broad sense, to refer to anything with an emotional component. This can be things that are obvious "emotional problems", like feeling excessively angry, anx...
https://www.lesswrong.com/posts/Sgkzv79N4oFhbMkGW/four-types-of-approaches-for-your-emotional-problems
# Why did interest in "AI risk" and "AI safety" spike in June and July 2025? (Google Trends) [Google Trends interest in the search terms "AI risk" and "AI safety"](https://trends.google.com/trends/explore?date=2025-01-01%202025-08-16&geo=US&q=AI%20risk,AI%20safety&hl=en) sharply increased on June 5th and remained high ...
https://www.lesswrong.com/posts/bWYisdRccDDHbista/why-did-interest-in-ai-risk-and-ai-safety-spike-in-june-and
# How we hacked business school Reverse-engineering what really counts as “smart” ### **I: The game** In business undergrad, a third of your grade was based on in-class participation, which was scored across five tiers: * **-1**: You literally said something sexist. * **0**: No contribution. * **1**: A “case...
https://www.lesswrong.com/posts/o6jkJftpnTQ6Lo7RB/how-we-hacked-business-school
# The Collider Bias Theory of (Not Quite) Everything Quick Summary ------------- * Collider bias and Berkson's paradox are pretty common and often neglected * I think it's not just a niche statistical concept: it explains a bunch of interesting stuff, and has some use in applied rationality * Scott Alexander ha...
https://www.lesswrong.com/posts/yhEES8AH8MirLroij/the-collider-bias-theory-of-not-quite-everything
# 35 Thoughts About AGI and 1 About GPT-5 ![Image](https://substackcdn.com/image/fetch/$s_!qXCX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb255a339-9731-4dc2-a8b6-8e915be8e4d5_849x640.webp) If this is GPT-5 in “Thinking” mode, I wonde...
https://www.lesswrong.com/posts/uAbbEz4p6tcsENaRz/35-thoughts-about-agi-and-1-about-gpt-5
# Church Planting: When Venture Capital Finds Jesus I’m going to describe a Type Of Guy starting a business, and you’re going to guess the business: 1. The founder is very young, often under 25.  2. He might work alone or with a founding team, but when he tells the story of the founding it will always have him at t...
https://www.lesswrong.com/posts/NMoNLfX3ihXSZJwqK/church-planting-when-venture-capital-finds-jesus
# Debugging for Mid Coders I struggled with learning to debug code for a long time. Exercises for learning debugging tended to focus on small, toy examples that didn't grapple with the complexity of real codebases. I would read advice on the internet like: * Try to create a reliable replication of the bug * Create...
https://www.lesswrong.com/posts/zvisSDFPLWofyFxEQ/debugging-for-mid-coders
# On Pessimization *“Your worst sin is that you have destroyed and betrayed yourself for nothing.” - Dostoevsky* When people set an ambitious goal, they can fail simply by not changing the world very much. But there’s another surprisingly common way to fail: by achieving the *opposite* of their goal. I call this effe...
https://www.lesswrong.com/posts/7oBeXzryvmoPNos8W/on-pessimization
# My Interview With Cade Metz on His Reporting About Lighthaven On 12 August 2025, I sat down with _New York Times_ reporter Cade Metz to discuss some criticisms of his 4 August 2025 article, ["The Rise of Silicon Valley's Techno-Religion"](http://archive.today/2025.08.06-024919/https://www.nytimes.com/2025/08/04/tech...
https://www.lesswrong.com/posts/JkrkzXQiPwFNYXqZr/my-interview-with-cade-metz-on-his-reporting-about
# Immortalism - A Rational Case for Solving Death *Descartes: “I am persuaded that we can reach knowledge that will enable us to enjoy the fruits of the earth without toil, and perhaps even to be free from the infirmities of age.”* *Spinoza: “Each thing, as far as it lies in itself, strives to persevere in its being....
https://www.lesswrong.com/posts/szPSyEGxv5BqnEXau/immortalism-a-rational-case-for-solving-death
# Why Latter-day Saints Have Strong Communities *Epistemic status: Low-effort post about something I am very familiar with.* Preamble -------- Scott Alexander [recently wrote](https://www.astralcodexten.com/p/should-strong-gods-bet-on-gdp) about making strong communities within a liberal society. He has nice things ...
https://www.lesswrong.com/posts/zTCtvXRATWdLoJ7p7/why-latter-day-saints-have-strong-communities
# Agent foundations: not really math, not really science *These ideas are not well-communicated, and I'm hoping readers can help me understand them better in the comments.* The classical model of the scientific process is that its purpose is to find a theory that explains an observed phenomenon. Once you have any mod...
https://www.lesswrong.com/posts/Dt4DuCCok3Xv5HEnG/agent-foundations-not-really-math-not-really-science
# Meaning in life - should I have it? How did you find yours? ![grasshopper on spruce (green on green)](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/354d1c960cb00d8e53ecf665c18b1336d8cba04f6e706afa.jpg) There is no meaning *of* life, the universe doesn't care about me (and the feeling is mutual). But many...
https://www.lesswrong.com/posts/gi7MDF8xceBP8YkFD/meaning-in-life-should-i-have-it-how-did-you-find-yours
# Plan E for AI Doom Firstly, let me be clear: I do not want to signal my pessimism, nor do I think that everything is *that* hopeless with AI. But I do think that the question of "what useful things can be done even if we accept the premise that AI-induced extinction is inevitably coming?" is worth being considered, ...
https://www.lesswrong.com/posts/2xHhe4EBHAFofkQJf/plan-e-for-ai-doom
# Writing Out My Tunes I play by ear, and when I write tunes I normally save them by making a recording. This isn't ideal for sharing, though, especially with people who are more comfortable learning tunes from dots. I last had a go at this [ten years ago](https://www.jefftk.com/p/written-music-corriente), and decided...
https://www.lesswrong.com/posts/zMLQonKrg2rGztgZr/writing-out-my-tunes
# Apply for the 2025 Dovetail fellowship *This job is part of an* [*Advanced Research + Invention Agency*](https://www.aria.org.uk/)*-funded project.* Summary: [Dovetail](https://dovetailresearch.org/) is an agent foundations research group. We've recently received an [ARIA](https://www.aria.org.uk/opportunity-spaces...
https://www.lesswrong.com/posts/5XAB9rS8KdLhagwzR/apply-for-the-2025-dovetail-fellowship
# Underdog bias rules everything around me **People very often underrate how much power they (and their allies) have, and overrate how much power their enemies have. I call this “underdog bias”, and I think it’s the most important cognitive bias to understand in order to make sense of modern society.** I’ll start by ...
https://www.lesswrong.com/posts/f3zeukxj3Kf5byzHi/underdog-bias-rules-everything-around-me
# The parable of the underdog Imagine a dog-fighting ring. An arena, somewhere in a dark basement; two dogs enter; one dog leaves. Brutal and horrific, of course; but, human nature being as it is, it draws crowds. One night in the ring the match-up is this: on one side, a Chihuahua; on the other side, a wolf. (Yes, ...
https://www.lesswrong.com/posts/oudhBX8DFdaZ5gv6K/the-parable-of-the-underdog
# The Strange Science of Interpretability: Recent Papers and a Reading List for the Philosophy of Interpretability **TL;DR**: We recently released two papers about the Philosophy of (Mechanistic) Interpretability \[[here](https://arxiv.org/abs/2505.00808) and [here](https://arxiv.org/abs/2505.01372)\] and a reading li...
https://www.lesswrong.com/posts/qRnupMmFG7dxQTTYh/the-strange-science-of-interpretability-recent-papers-and-a
# ABSOLUTE POWER (A short story) Crossposting this story from [my blog](https://www.taylor.gl/blog/39) since it feels like something this community might enjoy. *** The professor's voice droned. It was a consistent hum at the front of the class, like an air conditioner unit. Nagib had long since stopped parsing the ...
https://www.lesswrong.com/posts/PGXBJdrgSqd9uYZnn/absolute-power-a-short-story
# Handing People Puzzles We're a community that is especially vulnerable to [nerd sniping](https://xkcd.com/356/), as communities go. I'm fond of partaking in a little nerd sniping myself. While the original xkcd paints this pastime in a malicious light, I argue that this is in fact a great thing in general. One of my...
https://www.lesswrong.com/posts/xRZfpYkC9MwqG4tKi/handing-people-puzzles
# Morality, Values and Trade-Offs *Quick Summary: We do better when we (1) acknowledge that Human Values are broad and hard to grasp; (2) treat morality largely as the art of managing trade‑offs among those values. Conversations that deny either point usually aren’t worth having.* Morality is about pragmatic matters....
https://www.lesswrong.com/posts/N9d83fZNq5rWrzfSr/morality-values-and-trade-offs
# Why Every Politician Thinks They’re So Right (and Why That’s a Disaster) Since childhood, I have been troubled by the following paradox: Suppose that in some country there are several parties. Each of them has its own vision of how to achieve economic and all other kinds of prosperity. Let’s assume they are all dri...
https://www.lesswrong.com/posts/pfcnj2YyAsHhe9int/why-every-politician-thinks-they-re-so-right-and-why-that-s
# GPT-5: The Reverse DeepSeek Moment ![](https://substackcdn.com/image/fetch/$s_!if9g!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e752c67-72c2-46cd-b977-fa9e5e19cda5_1536x1024.png) Everyone agrees that the release of GPT-5 was botched...
https://www.lesswrong.com/posts/eFd7NZ4KpYLM4ocBv/gpt-5-the-reverse-deepseek-moment
# Giving AIs safe motivations *(Audio version (read by the author)* [*here*](https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17686921-giving-ais-safe-motivations)*, or search for "Joe Carlsmith Audio" on your podcast app.* *This is the sixth essay in a series I’m calling “*[*How do we solve the alignment pr...
https://www.lesswrong.com/posts/Kv7DRtEaQYjfyZ8Ld/giving-ais-safe-motivations