Dataset columns: `text` (string, 300 to 320k characters) and `source` (string, 52 to 154 characters).
# On Stance I’d like to talk about a martial arts concept, which I think has some applications in rationality. It’s called stance, and you’ve probably heard of it. I. Definitions -------------- What is stance? Stance is the way you stand and hold your body when you’re not at that particular moment doing anything, b...
https://www.lesswrong.com/posts/NxgQwGA4RPemhBubr/on-stance
# The problem of graceful deference *[Crosspost from my blog](https://tsvibt.blogspot.com/2025/11/the-problem-of-graceful-deference.html).* # Moral deference Sometimes when I bring up the subject of reprogenetics, people get uncomfortable. "So you want to do eugenics?", "This is going to lead to inequality.", "Pa...
https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1
# Thick practices for AI tools *This essay is the result of thoughts developed during the PIBBSS fellowship this summer. Thanks to Dusan and Maris for feedback on a draft of this essay. Thanks to Sahil and Niki for discussions that influenced the ideas of this essay.* **1\. Intro** ============= What if we could bui...
https://www.lesswrong.com/posts/ALvJ4bKqDdXW9qf9A/thick-practices-for-ai-tools
# Steering Language Models with Weight Arithmetic We isolate behavior directions in weight-space by subtracting the weight deltas from two small fine-tunes - one that induces the desired behavior on a narrow distribution and another that induces its opposite. We show that using this direction to steer model behaviors...
https://www.lesswrong.com/posts/HYTbakdHpxfaCowYp/steering-language-models-with-weight-arithmetic
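The entry above describes isolating a behavior direction in weight space by differencing two small fine-tunes and then using that direction to steer the model. Below is a minimal sketch of that idea, assuming HuggingFace-style checkpoints with identical architectures; the checkpoint names, the per-parameter subtraction, and the steering coefficient `alpha` are illustrative assumptions, not the post's actual recipe.

```python
# Sketch: steer a base model by adding a scaled weight-space direction,
# where the direction is the difference between a fine-tune that induces
# the behavior and one that induces its opposite.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("base-model")          # hypothetical checkpoint names
pos = AutoModelForCausalLM.from_pretrained("finetune-positive")    # induces the desired behavior
neg = AutoModelForCausalLM.from_pretrained("finetune-negative")    # induces the opposite behavior

alpha = 1.0  # steering strength (assumption; would need tuning)

with torch.no_grad():
    for (_, w_base), (_, w_pos), (_, w_neg) in zip(
        base.named_parameters(), pos.named_parameters(), neg.named_parameters()
    ):
        # direction = (pos - base) - (neg - base) = pos - neg
        direction = w_pos - w_neg
        w_base.add_(alpha * direction)

base.save_pretrained("steered-model")
```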
# Learnings from the Zurich AI Safety Day **TL;DR:** The Zurich AI Safety Day - a one-day conference in Zurich of more than 200 participants - was a new event format to bring together the AI safety field across Europe and tap into the local talent pool. The feedback received was very positive, and the event could serv...
https://www.lesswrong.com/posts/sBWBEfa2WPPjsmzpG/learnings-from-the-zurich-ai-safety-day
# Not-A-Book Review: The Attractive Man (Dating Coach Service) As far as I am aware, I have at the above link written the *only* acceptably-detailed, non-woo product breakdown that exists for any service whatsoever in the ~2 billion dollars/year industry of post-PUA male dating/relationship coaching. Which is weird. ...
https://www.lesswrong.com/posts/5XiKxLyfWxMGfZRqq/not-a-book-review-the-attractive-man-dating-coach-service
# Kimi K2 Thinking I previously covered [**Kimi K2,**](https://thezvi.substack.com/p/kimi-k2) which now has a new thinking version. As I said at the time back in July, price in that the thinking version is coming. Is it the real deal? That depends on what level counts as the real deal. It’s a good model, sir, by all...
https://www.lesswrong.com/posts/SLrWSyS3FypLKyRL6/kimi-k2-thinking
# Fairly Breaking Ties Without Fair Coins I was thinking about an approval-style voting system that could end with a large number of ties, and ran into the problem of how to break ties in a provably-fair way that doesn't depend on candidates trusting each other and won't make voters' eyes glaze over. I think I found ...
https://www.lesswrong.com/posts/AhTkRsxwu2ek4XjbH/fairly-breaking-ties-without-fair-coins
# Conceptual reasoning dataset v0.1 available (AI for AI safety/AI for philosophy) **Tl;dr:** We have a dataset for conceptual reasoning which you can request access for if you would like to use it for AI safety (or related) research. We consider the dataset half-baked and it will likely become much more useful over t...
https://www.lesswrong.com/posts/DiyqXDE5xzPAdNNdi/conceptual-reasoning-dataset-v0-1-available-ai-for-ai-safety
# How I Learned That I Don't Feel Companionate Love A few months ago, I learned that I probably can’t feel the emotions signalled by oxytocin, the "love hormone". This raises lots of interesting questions - what things I do and don’t feel, how the world looks different from an oxytocin-less perspective, how a lack of ...
https://www.lesswrong.com/posts/Hds7xkLgYtm6qDGPS/how-i-learned-that-i-don-t-feel-companionate-love
# Response to "Taking AI Welfare Seriously": The Indirect Approach to Moral Patienthood I've been thinking about the Sebo et al. paper [*Taking AI Welfare Seriously*](https://arxiv.org/abs/2411.00986?), which features one of my favorite philosophers, David Chalmers (I have a signed copy of his anthology of mind, full ...
https://www.lesswrong.com/posts/Aoza5Qmc62ny4ShWM/response-to-taking-ai-welfare-seriously-the-indirect
# Teleosemantics & Swampman *Epistemic status: trying to articulate my own thoughts. I have not done a thorough literature review.* James Diacoumis [commented](https://www.lesswrong.com/posts/MF5bMMxLNGCFtSzix/consciousness-as-a-distributed-ponzi-scheme?commentId=CZiCtkPPcvL5Bnf2F) on [yesterday's post](https://www.l...
https://www.lesswrong.com/posts/Qk6LKvsDTYczfyPDx/teleosemantics-and-swampman
# How human-like do safe AI motivations need to be? *(Audio version (read by the author)* [*here*](https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/18175429-how-human-like-do-safe-ai-motivations-need-to-be)*, or search for "Joe Carlsmith Audio" on your podcast app.* *This is the eighth essay in a series I’m ...
https://www.lesswrong.com/posts/r8H4d3X7AKg9n8TQk/how-human-like-do-safe-ai-motivations-need-to-be
# Better than Baseline There's a word I put unusual emphasis on, which helps me think about the world. [In my culture](https://www.lesswrong.com/posts/4EGYhyyJXSnE7xJ9H/in-my-culture) I'd have a special word for it but it's close enough to the common English term "baseline." > Baseline /ˈbāsˌlīn/ *noun.* A minimum or...
https://www.lesswrong.com/posts/tqouxgw8gBSvrsxkG/better-than-baseline
# Do not hand off what you cannot pick up *Context: Memo #2 in* [*my sequence*](https://www.lesswrong.com/s/cegJvyiP2dBkhDh2K) *of publishing Lightcone Infrastructure internal team memos about our organizational principles* * * * Delegation is good! Delegation is the foundation of civilization! But in the depths of ...
https://www.lesswrong.com/posts/rSCxviHtiWrG5pudv/do-not-hand-off-what-you-cannot-pick-up
# 5 Things I Learned After 10 Days of Inkhaven If you don't know, [**Inkhaven**](https://www.inkhaven.blog/) is a residency where you come and publish a blogpost every day. No "*Oh it would be nice to blog some day*" or *"Oh I'm working on something, I'm sure I'll publish it some day"*. No, you have to publish *today*...
https://www.lesswrong.com/posts/wHfseCBgYSiA9Nz47/5-things-i-learned-after-10-days-of-inkhaven
# Why to Commit to a Writing and Publishing Schedule Consistency is key! This is kind of obvious but I want to convince you it matters more than you'd think, if you have a blog or newsletter. Plus I have practical tips (that aren't entirely a Beeminder infomercial). The least obvious point is how much your readers li...
https://www.lesswrong.com/posts/erXSZJ8JurXr8avKe/why-to-commit-to-a-writing-and-publishing-schedule
# I Read Red Heart and I Heart It *Red Heart* currently only has 1 review on Goodreads, and I haven’t seen anyone talking about it. The book, by Max Harms, is an exciting spy-thriller novel about a Chinese AI doomsday scenario. The author’s other novels, the *Crystal Society* series, are slightly more popular, with the...
https://www.lesswrong.com/posts/7G3br7WiB6u7thM9f/i-read-red-heart-and-i-heart-it
# Warning Aliens About the Dangerous AI We Might Create Thesis: We should broadcast a warning to potential extraterrestrial listeners that Earth might soon spawn an unfriendly computer superintelligence. Sending the message might benefit humanity. If we were to create an unaligned computer superintelligence, it would...
https://www.lesswrong.com/posts/AH4pFd3Sgt6qaoH6h/warning-aliens-about-the-dangerous-ai-we-might-create
# Introducing faruvc.org I wanted to link an explanation of how [far-UVC](https://www.faruvc.org) works, why you might want to use it to clean indoor air, and what we know about its safety. I didn't find anything I liked, so I made something: [faruvc.org](https://www.faruvc.org). [![](https://www.jefftk.com/faruvc-...
https://www.lesswrong.com/posts/xpf8ygKPECc8yWhZk/introducing-faruvc-org
# Undissolvable Problems: things that still confuse me Like everyone there are millions of things that I don't know. There are even millions of things I'm certain I'll never know. But only a few of them feel fundamentally confusing. A confusing question is one where I can't imagine how an answer could possibly actua...
https://www.lesswrong.com/posts/XJEnwqwdFmrGm5kDs/undissolvable-problems-things-that-still-confuse-me
# Reflections on being Sorted **TL;DR** ========= I grew up on the edge of The Sort. I probably would have been in the center of it, but I got caught in a gifted-kid trap, never learned how to work hard, and hit a wall in grad school. After spending a decade floundering before learning how to actually dedicate myself...
https://www.lesswrong.com/posts/DGCknJTZo2Wt7TgtW/reflections-on-being-sorted
# Lighthaven-ish Ticket Strategy: Three Pillars of FOMO This community occasionally discusses how hard it is to get people to RSVP to events, even well-attended events the community loves. I think recent Lighthaven events have a ticket strategy that mostly solves this problem, at least for larger paid events that need...
https://www.lesswrong.com/posts/PuCdt9k3SFePwo6LG/lighthaven-ish-ticket-strategy-three-pillars-of-fomo
# Bitcoin Halvings and the Trisolaran Mistake: When External Actors Masquerade as Natural Laws In Liu Cixin's *The Three-Body Problem*, Trisolaran scientists conclude that "physics does not exist" after observing chaotic, unpredictable behavior in their world. Their planet orbits three suns in patterns they cannot pre...
https://www.lesswrong.com/posts/ufGM5tuC3HpuiSDW8/bitcoin-halvings-and-the-trisolaran-mistake-when-external
# OpenAI Releases GPT 5.1 > GPT-5.1: A smarter, more conversational ChatGPT > =============================================== > > We’re upgrading GPT‑5 while making it easier to customize ChatGPT. Starting to roll out today to everyone, beginning with paid users. > > Today we’re upgrading the GPT‑5 series with the r...
https://www.lesswrong.com/posts/m8c9ddQcT4zZFfyZg/openai-releases-gpt-5-1
# Social drives 2: “Approval Reward”, from norm-enforcement to status-seeking *(Follow-up to* [*Social drives 1: “Sympathy Reward”, from compassion to dehumanization*](https://www.lesswrong.com/posts/KuBiv9cCbZ6ALjHFw/social-drives-1-sympathy-reward-from-compassion-to)*, but this post is self-contained.)* 1\. Intro &...
https://www.lesswrong.com/posts/fPxgFHfs5yHzYqgG7/social-drives-2-approval-reward-from-norm-enforcement-to
# Why Truth First? *On a warm spring weekend, Jerry B wanders through Hyde Park. At a corner, he happens upon the Preacher Man, standing on a soapbox and proclaiming the Way of Truth.* **Preacher Man**: … and they will come to you and they will say “Do not believe such things, for it is hurtful to others to believe s...
https://www.lesswrong.com/posts/gqcAnj3h2kZzCcpY5/why-truth-first
# The Pope Offers Wisdom The Pope is a remarkably wise and helpful man. He offered us some wisdom. Yes, he is generally playing on easy mode by saying straightforwardly true things, but that’s meeting the world where it is. You have to start somewhere. Some rejected his teachings. #### Wisdom Is Offered Two thousa...
https://www.lesswrong.com/posts/4gXvnTFy5WCTtYMAA/the-pope-offers-wisdom
# Please, Don't Roll Your Own Metaethics One day, when I was an intern at the cryptography research department of a large software company, my boss handed me an assignment to break a pseudorandom number generator passed to us for review. Someone in another department invented it and planned to use it in their product,...
https://www.lesswrong.com/posts/KCSmZsQzwvBxYNNaT/please-don-t-roll-your-own-metaethics
# Utilitarian inequality metrics TL;DR: Use Atkinson with $\epsilon = 1$ or generalized entropy with $\alpha=0$. Inequality indices are numerical metrics of income (or wealth, etc) inequality. There are some downsides to society coordinating around a few canonical numerical metrics, like [Goodhart's law](https://en...
https://www.lesswrong.com/posts/4Zt4pghb774dq9rdG/utilitarian-inequality-metrics
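For concreteness, the two indices the TL;DR above recommends have simple closed forms: Atkinson with $\epsilon = 1$ is one minus the ratio of the geometric to the arithmetic mean, and generalized entropy with $\alpha = 0$ is the mean log deviation. A small sketch with a made-up income vector (standard textbook formulas, not code from the post):

```python
# Atkinson index with epsilon = 1 and generalized entropy with alpha = 0
# (mean log deviation / Theil L), computed on an illustrative income vector.
import numpy as np

incomes = np.array([20_000, 35_000, 50_000, 80_000, 250_000], dtype=float)

mean = incomes.mean()
geo_mean = np.exp(np.log(incomes).mean())

atkinson_eps1 = 1 - geo_mean / mean        # Atkinson, epsilon = 1
mld = np.log(mean / incomes).mean()        # generalized entropy, alpha = 0

# The two are monotonically related: mld == -ln(1 - atkinson_eps1),
# so they rank income distributions identically.
print(f"Atkinson(1) = {atkinson_eps1:.3f}, GE(0) = {mld:.3f}")
```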
# Two can keep a secret if one is dead. So please share everything with at least one person. A lot of things go better if more people have more context on the state of a project. Just to name a few: * Others can point out mistakes * People can build common knowledge of how much everyone is working * Others can ...
https://www.lesswrong.com/posts/FM7tWezTknbcTinu9/two-can-keep-a-secret-if-one-is-dead-so-please-share
# Meetup Tip: Food Summary ------- Food is nice to have at meetups. From laid-back coffee shop chats to giant festivals, people like eating, and having something to nibble on contributes to general good cheer. Food is also straightforward to acquire if you're thinking about it, though it does have a mostly linear cos...
https://www.lesswrong.com/posts/ToyqdNWtdYHm4bFnk/meetup-tip-food
# Are AI time horizons inherently superexponential? A [few](https://www.lesswrong.com/posts/deesrjitvXM4xYGZd/metr-measuring-ai-ability-to-complete-long-tasks?commentId=xQ7cW4WaiArDhchNA) [people](https://www.lesswrong.com/posts/iEkksmar6mtuQtdsR/jacob_hilton-s-shortform?commentId=Gec7jdFPEbGA9pays) have made the pred...
https://www.lesswrong.com/posts/F9PQu3uWZL2xqgpkf/are-ai-time-horizons-inherently-superexponential
# Turing-Complete vs Turing-Universal I think people often confuse the concept of Turing-Complete with Turing-Universal. Turing-universality is the property a [universal Turing machine](https://en.wikipedia.org/wiki/Universal_Turing_machine) (UTM) has:[^vxm3ce9joql] **Turing-universality:** an abstract machine[^p2vn...
https://www.lesswrong.com/posts/mKYwFhXJma8jQ4nhv/turing-complete-vs-turing-universal
# What's so hard about...? A question worth asking There’s a wide range of tasks where most people *get* why they’re hard. And then there are activities where I think a lot of people might think to themselves “what’s so hard about that?” On the one end of the continuum, you can have a visceral sense of the difficulty ...
https://www.lesswrong.com/posts/qnMwwwzmnfaQHvymc/what-s-so-hard-about-a-question-worth-asking
# Favorite quotes from "High Output Management" Some months ago I read the classic management book [High Output Management](https://en.wikipedia.org/wiki/High_Output_Management) and made a note of quotes that rang particularly true to me. I normally dislike this genre (management books), and disagree with some popular...
https://www.lesswrong.com/posts/jAH4dYhbw3CkpoHz5/favorite-quotes-from-high-output-management
# Strategically Procrastinate as an Anti-Rabbit-Hole Strategy This is highly ironic, since I'm all about commitment devices as a tool to not procrastinate. But fun fact: you can use procrastination itself strategically as a commitment device to cap the amount of time you spend on something. This is really just a coro...
https://www.lesswrong.com/posts/so72tRiZitvDFEvmw/strategically-procrastinate-as-an-anti-rabbit-hole-strategy
# 8 Questions for the Future of Inkhaven If you don't know, [**Inkhaven**](https://www.inkhaven.blog/) is a residency where you come and publish a blogpost every day. No "*Oh it would be nice to blog some day*" or *"Oh I'm working on something, I'm sure I'll publish it some day"*. No, you have to publish *today*, othe...
https://www.lesswrong.com/posts/LyuLYuBup4kvb3BYN/8-questions-for-the-future-of-inkhaven
# Paranoia: A Beginner's Guide People sometimes make mistakes. (Citation Needed) The obvious explanation for most of those mistakes is that people do not have access to sufficient information to avoid the mistake, or are not smart enough to think through the consequences of their actions. This predicts that as decis...
https://www.lesswrong.com/posts/yXSKGm4txgbC3gvNs/paranoia-a-beginner-s-guide
# AI #142: Common Ground [**The Pope offered us wisdom**](https://thezvi.substack.com/p/the-pope-offers-wisdom?r=67wny), calling upon us to exercise moral discernment when building AI systems. Some rejected his teachings. We mark this for future reference. The long anticipated [**Kimi K2 Thinking**](https://thezvi.su...
https://www.lesswrong.com/posts/diDGBWxzncikqEvk7/ai-142-common-ground
# Tools for deferring gracefully *[Crosspost from my blog](https://tsvibt.blogspot.com/2025/11/tools-for-deferring-gracefully.html).* # Introduction Last time, I asked about [The problem of graceful deference](https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1). We have to defer...
https://www.lesswrong.com/posts/zLG2DnJw6oEZqAgaE/tools-for-deferring-gracefully
# Epistemic Spot Check: Expected Value of Donating to Alex Bores's Congressional Campaign Political advocacy is an important lever for reducing existential risk. One way to make political change happen is to support candidates for Congress. In October, Eric Neyman wrote [Consider donating to Alex Bores, author of the...
https://www.lesswrong.com/posts/HwHoJDcJpGABQiPJc/epistemic-spot-check-expected-value-of-donating-to-alex
# The Transformer and the Hash > *All stable processes we shall predict. All unstable processes we shall control.* > > ~ John von Neumann > *We discovered something. Our one hope against total domination. A hope that with courage, insight, and solidarity we could use to resist. A strange property of the physical uni...
https://www.lesswrong.com/posts/ynnGqr2D6KXYDnskp/the-transformer-and-the-hash
# Supervised fine-tuning as a method for training-based AI control Executive summary ================= This post is a research update on our ongoing project studying training-based AI control: using training to mitigate risk from misaligned AI systems in diffuse threat models like research sabotage. This post present...
https://www.lesswrong.com/posts/Cz7AKenSiNkgijdJi/supervised-fine-tuning-as-a-method-for-training-based-ai
# (Fantasy) -> (Planning): A Core Mental Move For Agentic Humans? So there’s this thing where… Back when I was young, I watched the movie Atlantis, and then spent a while thinking through how to build an actual city- or continent-sized underwater dome. What materials would be required? How much would it cost? How wou...
https://www.lesswrong.com/posts/SiTwkSAsFpauvtkgW/fantasy-greater-than-planning-a-core-mental-move-for-agentic
# Self-interpretability: LLMs can describe complex internal processes that drive their decisions *This post is a summary of our paper from earlier this year:* [*Plunkett, Morris, Reddy, & Morales (2025)*](https://arxiv.org/abs/2505.17120)*. Adam received* [*an ACX grant*](https://www.astralcodexten.com/p/acx-grants-re...
https://www.lesswrong.com/posts/9pozvFKbLDLmuL879/self-interpretability-llms-can-describe-complex-internal-1
# Evaluation Avoidance: How Humans and AIs Hack Reward by Disabling Evaluation Instead of Gaming Metrics Have you ever binge-watched a TV series? Binge-watching puts you in a very peculiar mental state. Assuming you don't reflectively endorse your binge-watching behavior, you'd probably feel pretty bad if you were...
https://www.lesswrong.com/posts/reCesekXabqC6LMFs/evaluation-avoidance-how-humans-and-ais-hack-reward-by-1
# Orient Speed in the 21st Century *I wrote this post with an audience of "artists who are worried about AI" in mind, published on a new blog,* [*The Human Spirit*](https://thehumanspirit.substack.com/)*. *[^0ld0oswdhte] My guess is, the 21st century will be a period of rapid change, that feels kinda crazy. I think t...
https://www.lesswrong.com/posts/CHByipjioWRzoo5Ap/orient-speed-in-the-21st-century
# Questioning Computationalism Daniel Dennett, late in his life, changed his mind about a significant aspect of his computationalist perspective. This seems significant. I am told his mind is exceptionally difficult to change. This change is indicated briefly in a book review: in *Aching Voids and Making Voids,* Denn...
https://www.lesswrong.com/posts/JxNrEo6hvzuCvu2s7/questioning-computationalism
# Tell people as early as possible it's not going to work out *Context: Post #4 in* [*my sequence*](https://www.lesswrong.com/s/cegJvyiP2dBkhDh2K) *of private Lightcone Infrastructure memos edited for public consumption* * * * This principle is more about how I want people at Lightcone to relate to community governa...
https://www.lesswrong.com/posts/Hun4EaiSQnNmB9xkd/tell-people-as-early-as-possible-it-s-not-going-to-work-out
# The rare, deadly virus lurking in the Southwest US, and the bigger picture If you live in this one tiny county in California, you might be more likely to die from Sin Nombre Virus than in a car crash. ![](https://eukaryotewritesblog.wordpress.com/wp-content/uploads/2025/11/image.png?w=1024) In the same way that “w...
https://www.lesswrong.com/posts/u4hAr8R82a4HTqkkW/the-rare-deadly-virus-lurking-in-the-southwest-us-and-the
# Types of systems that could be useful for agent foundations In this post, I've written something that would have been very helpful to my former self from a few years ago. Given that, it may or may not be helpful to anyone else. When studying for agent foundations research, I kept finding that I wanted a good genera...
https://www.lesswrong.com/posts/Q6mxeHMJCzQX3KXz2/types-of-systems-that-could-be-useful-for-agent-foundations
# AI Corrigibility Debate: Max Harms vs. Jeremy Gillen **Is focusing on corrigibility our best shot at getting to ASI alignment?** =========================================================================== [Max Harms](https://www.lesswrong.com/users/max-harms?mention=user) and [Jeremy Gillen](https://www.lesswrong.c...
https://www.lesswrong.com/posts/CsXAg8dHSgghDAoPx/ai-corrigibility-debate-max-harms-vs-jeremy-gillen
# Thoughts are surprisingly detailed and remarkably autonomous *For years, I've been stumped by the failure of most people, of society, to make simple and important inferences. They have the facts, it's important to them, and yet they do not put two and two together. How do they fail so?* *Well, last week, a friend's...
https://www.lesswrong.com/posts/r5ECX97KYHFZhS3xf/thoughts-are-surprisingly-detailed-and-remarkably-autonomous
# How do you read Less Wrong? My method of reading Less Wrong is to scroll back through all recent comments and posts, which the front page spontaneously presents to me in reverse-chronological order, until I arrive at posts and comments that I recognize. Along the way, if I see anything that I might want to read at l...
https://www.lesswrong.com/posts/nyzmXMxjzqiZYEnN8/how-do-you-read-less-wrong
# Notes on the book "Talent" Some months ago I read the book "Talent" by Tyler Cowen and Daniel Gross. Published in 2022, it discusses how to spot talent using one's individual judgment (e.g. assessing a founder as VC). Importantly, it's not a book about how to create a standardized hiring process at a big company ("p...
https://www.lesswrong.com/posts/mMRnHZdebpivQ47io/notes-on-the-book-talent
# Everyone has a plan until they get lied to the face > "Everyone has a plan until they get punched in the face."  > - Mike Tyson (The exact phrasing of that quote changes, this is my favourite.) I think there is an open, important weakness in many people. We assume those we communicate with are *basically* trustw...
https://www.lesswrong.com/posts/5LFjo6TBorkrgFGqN/everyone-has-a-plan-until-they-get-lied-to-the-face
# Don't let people buy credit with borrowed funds I.  --- Most large-scale fraud follows basically the same pattern:  **1\.** Some trader or executive gets in a position where they can use a bunch of other people's resources (either via borrowing them, or being given custody over them) **2\.** They spend some of th...
https://www.lesswrong.com/posts/NQquXynvuAor3KAnD/don-t-let-people-buy-credit-with-borrowed-funds
# 10 Types of LessWrong Post Several artists and professionals have come to Inkhaven to share their advice. They keep talking about form—even if you have a raw feeling or interest, you must channel it through one of the forms of art or journalism to make it into something people can appreciate. I'm curious to underst...
https://www.lesswrong.com/posts/Gi3v6j5QiiFku39s6/10-types-of-lesswrong-post
# The Charge of the Hobby Horse *[Crosspost from my blog](https://tsvibt.blogspot.com/2025/11/the-charge-of-hobby-horse.html).* *[Epistemic status: !! 🚨 Drama Alert 🚨 !! discoursepoasting, LWslop]* ![](https://i.imgur.com/VKwc9Pl.gif) # Case 1: You only get six words In 2024, the MATS team published a post, ...
https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse
# From Anthony: Control Inversion Here is [a nice essay](https://control-inversion.ai/) from FLI's Anthony. It takes the form of a nicely designed website (or, alternatively, a more standard PDF). It presents a reasonable operationalisation of what "Control" is. According to this operationalisation, it shows tha...
https://www.lesswrong.com/posts/PK4PPACgucLPWpi8v/from-anthony-control-inversion
# Understanding and Controlling LLM Generalization A distillation of my long-term research agenda and current thinking. I welcome takes on this.   Why study generalization?  ========================== I'm interested in studying how LLMs generalise - when presented with multiple policies that achieve similar loss, wh...
https://www.lesswrong.com/posts/ZSQaT2yxNNZ3eLxRd/understanding-and-controlling-llm-generalization
# AI Craziness: Additional Suicide Lawsuits and The Fate of GPT-4o GPT-4o has been a unique problem for a while, and has been at the center of the bulk of mental health incidents involving LLMs that didn’t involve character chatbots. I’ve previously covered related issues in [AI Craziness Mitigation Efforts](https://t...
https://www.lesswrong.com/posts/erTE9BTM7gGHb96po/ai-craziness-additional-suicide-lawsuits-and-the-fate-of-gpt
# Lambda Calculus Prior There were two technical comments on my post [Turing-complete vs Turing-universal](https://www.lesswrong.com/posts/mKYwFhXJma8jQ4nhv/turing-complete-vs-turing-universal), both about my claim that a prior based on Lambda calculus would be weaker than a prior based on Turing machines, reflecting ...
https://www.lesswrong.com/posts/xzsGp8H4hFLWpNaND/lambda-calculus-prior
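For readers wanting the background object under discussion: a "prior based on" a machine model usually means a Solomonoff-style universal prior over that model's programs. A reference definition (standard material, not a claim from the post or the comments it responds to):

```latex
% Solomonoff-style prior for a universal machine U over prefix-free programs p:
% each output x gets the total weight of the programs that produce it,
% so shorter programs contribute exponentially more.
\[
  M_U(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}
\]
```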
# a sketch of how we might go about getting basins of corrigibility from RL Here is a story about how building corrigible AIs with current RL-like techniques could go right. Interested to hear people's thoughts. Assumptions =========== When you train AIs with RL, they learn a bunch of contextually activated mini-pre...
https://www.lesswrong.com/posts/kKb8mAE4pqAc3FrAz/a-sketch-of-how-we-might-go-about-getting-basins-of
# List of great filk songs I love filk. Dreaming of strange worlds, listening to a narrative set to music, what's not to love. If you're reading this, you probably love filk too. So maybe you'd like some recommendations for songs for the space age, and more.  1. I recommend the album "Minus Ten and Counting: Songs f...
https://www.lesswrong.com/posts/5Adeph2qH7sQ5zAyv/list-of-great-filk-songs
# Prediction markets for social deduction games *Being able to N/A contracts with market manipulators reduces the incentives to do bad stuff.* Prediction markets are fun! Social deduction games are fun! Can you merge the two? The simple solution is normal games plus betting among the spectators. But for a long time...
https://www.lesswrong.com/posts/ErxEFfKXhrKGEqa6G/prediction-markets-for-social-deduction-games
# What are your impossible problems? My followup to ["What's hard about this? What can I do about that?"](https://www.lesswrong.com/posts/AQSwzEgLjM4dyWuAh/what-s-hard-about-this-what-can-i-do-about-that) is going to be called "What (exactly) is impossible about this?".  It's a somewhat similar move to asking "what's...
https://www.lesswrong.com/posts/rrpwpAqvt294fNh7C/what-are-your-impossible-problems
# Everyday Clean Air When the next pandemic hits, our ability to stop it will depend on the infrastructure we already have in place. A key missing piece is clean indoor air. An airborne pathogen can be very hard to contain, and we would want to move fast to limit spread. But how quickly we can get measures in place, a...
https://www.lesswrong.com/posts/ydCrXHg6L5ZuXQbto/everyday-clean-air
# Will AI systems drift into misalignment? [Joshua Clymer](mailto:joshuamclymer@gmail.com), [Alek Westover](mailto:alek.westover@gmail.com), [Anshul Khandelwal](mailto:kanshul45@gmail.com) We explore the following hypothesis both conceptually and, to a small extent, empirically. We call this the **Alignment Drift Hyp...
https://www.lesswrong.com/posts/u8TYRhGPD878i3qkc/will-ai-systems-drift-into-misalignment
# Generation Ship: A Protest Song For PauseAI [**Link to listen.**](https://soundcloud.com/logan-strohl-5728833/generation-ship?si=274a0a6975fa481ea820b7c790393989&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing) =========================================================================================...
https://www.lesswrong.com/posts/eqwemiBcYEhCeD3MZ/generation-ship-a-protest-song-for-pauseai
# "But You'd Like To Feel Companionate Love, Right? ... Right?" ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/507ed8bb39b7d2313a5371ecec4221b8ed0b7814f4a5c043.png) ... One of the responses which one will predictably receive when posting something titled “[How I Learned That I Don't Feel Companionate L...
https://www.lesswrong.com/posts/TcJpm3mcLMoKMzqpj/but-you-d-like-to-feel-companionate-love-right-right
# Increasing returns to effort are common *Context: Post #5 in* [*my sequence*](https://www.lesswrong.com/s/cegJvyiP2dBkhDh2K) *of private Lightcone Infrastructure memos edited for public consumption. Much of the advice in this might not apply to your situation, read with that context in mind.* * * * Most things hav...
https://www.lesswrong.com/posts/swymiotpbYFv9pnEk/increasing-returns-to-effort-are-common
# A Love Song to Nicotine Two years ago an attractive woman handed me two nicotine mints — 2 mg each, a fairly small dose. She was spontaneous, with dark hair and an annoyingly white smile, a drug geek who worked at some vaguely neurotech-y startup. I liked her and had her phone number. But I needed something interest...
https://www.lesswrong.com/posts/ECnb9r6n8kLAaGupT/a-love-song-to-nicotine
# Same cognitive paints, exceedingly different mental pictures ![COMPOSITION: GEOMETRIC OVERLOAD](https://d3rf6j5nx5r04a.cloudfront.net/TLtrY18X2GDzG1l66mOp5zDgJJs=/560x0/product/9/3/85f2b7c7527c4ff79a8792beed0daa36_opt.jpg) *Composition: Geometric Overload (2017) by* [*Stephen Conroy*](https://www.artfinder.com/prod...
https://www.lesswrong.com/posts/gEEcrYBFsjfgyYX8S/same-cognitive-paints-exceedingly-different-mental-pictures
# Just Another Five Minutes Some tasks just take five minutes. Some jobs are made up of five minute tasks. * * * There's a message notification. I'd asked a venue for clarification of a line on their rental contract. Their response clears it up and the line won't be a problem. I reread the contract one more time be...
https://www.lesswrong.com/posts/foNd5ukdyvBMrzsEm/just-another-five-minutes
# "Middlemarch" is inane and also one of my favorite books I am entirely bored and uninterested in the lives of the characters in Middlemarch (a book published 1871, about some small-town people living in ~1830). They do things like (a) get married, or (b) don't get married.  *Yet it is one of my favorite books.* The...
https://www.lesswrong.com/posts/5pgpwshssLmA9kZGz/middlemarch-is-inane-and-also-one-of-my-favorite-books
# Don't use the phrase "human values" I really dislike the phrase "human values". I think it's confusing because: * It obscures a distinction between human preferences and normative values, i.e. what the author thinks is good in a moral sense * Insofar as the author thinks that fulfilling human preferences is...
https://www.lesswrong.com/posts/dyDpJyNLgAHTAHeX9/don-t-use-the-phrase-human-values
# Matrices map between biproducts Why are linear functions between finite-dimensional vector spaces representable by matrices? And why does matrix multiplication compose the corresponding linear maps? There's geometric intuition for this, e.g. presented by [3Blue1Brown](https://www.youtube.com/playlist?list=PLZHQObOWT...
https://www.lesswrong.com/posts/7wstHFRn3bHzSN2z3/matrices-map-between-biproducts
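For contrast with the entry above, which goes after a category-theoretic answer via biproducts, here is the standard basis-vector derivation of the same two facts, as a quick LaTeX sketch (textbook material, not the post's argument):

```latex
% A linear map is determined by its values on basis vectors; those values
% are the columns of its matrix, and composing maps multiplies the matrices.
\[
  f(x) \;=\; f\Big(\sum_j x_j e_j\Big) \;=\; \sum_j x_j\, f(e_j),
  \qquad A_{ij} = \big(f(e_j)\big)_i \;\Longrightarrow\; f(x) = Ax.
\]
\[
  (g \circ f)(e_j) \;=\; g\Big(\sum_m A_{mj}\, e_m\Big)
  \;=\; \sum_m A_{mj}\, g(e_m)
  \;=\; \sum_i \Big(\sum_m B_{im} A_{mj}\Big) e_i
  \;\Longrightarrow\; [g \circ f] = BA.
\]
```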
# Punctuation & Quotation Conventions I write with an unusual convention about how punctuation and quotation interact with each other. Doing some basic research for this essay, I discovered that my style resembles the [British style](https://en.wikipedia.org/wiki/Quotation_marks_in_English#British_style). However, I w...
https://www.lesswrong.com/posts/YgedrNsdXNajQ7oCT/punctuation-and-quotation-conventions
# AI loves octopuses I occasionally like to be an idiot. In a fun, harmless way mostly, although I have participated in the Running of the Bulls in Pamplona[^g4910g2lnos], which perhaps invalidates my point. That aside, a month or so ago, a friend and I were coming up with silly ways to evaluate AI models and hit upon...
https://www.lesswrong.com/posts/6oaHGbxjTazGZED47/ai-loves-octopuses
# Writing Hack: Write It Just Like That In [the novella “The Suitcase”](https://oceanofpdf.com/authors/sergei-dovlatov/pdf-epub-the-suitcase-download/) by nonconformist Soviet writer Sergei Dovlatov I came across a dialogue that contains one of the most important writing hacks I know. The dialogue is between the mothe...
https://www.lesswrong.com/posts/vdKoFHvYTT5cjwP9X/writing-hack-write-it-just-like-that
# Your Clone Wants to Kill You Because You Assumed Too Much My friend [@Croissanthology](https://www.lesswrong.com/users/croissanthology?mention=user) is puzzled why it is such a common trope for fictional clones to [turn on their creators](https://croissanthology.substack.com/p/would-you-even-cooperate-with-yourself?...
https://www.lesswrong.com/posts/bXG79EsEmjR3agQtd/your-clone-wants-to-kill-you-because-you-assumed-too-much
# Why does ChatGPT think mammoths were alive in December? This is a slimmed-down version which omits some extra examples but includes my theorizing about ChatGPT, my investigations of it, and my findings. **Epistemic status:** Pretty uncertain. I think my conclusions are probably more right than wrong, more useful tha...
https://www.lesswrong.com/posts/bkrr5iHBknkGLhsAG/why-does-chatgpt-think-mammoths-were-alive-in-december
# The skills and physics of high-performance driving, Pt. 1 *High performance driving = motorsport = racecar driving* Even if you have a license and drive a car, you probably don't understand what is hard about racing. I think the answer is interesting (in general I think [knowing why things are hard is interesting](...
https://www.lesswrong.com/posts/tFjuJQfwLWLbuNaQ7/the-skills-and-physics-of-high-performance-driving-pt-1
# Sharpening Your Map: Introducing Calibrate A pocket tool for Bayesian calibration. --------------------------------------- Have you ever desired external tools for the deliberate shaping of your mind? If so, I’m pleased to introduce **Calibrate**, an [open source Android app](https://github.com/VermillionS/C...
https://www.lesswrong.com/posts/fuFYm3w4YAKxvhtNy/sharpening-your-map-introducing-calibrate
# AI safety undervalues founders **TL;DR:** In AI safety, we systematically undervalue founders and field‑builders relative to researchers and prolific writers. This status gradient pushes talented would‑be founders and amplifiers out of the ecosystem, slows the growth of research orgs and talent funnels, and bottlene...
https://www.lesswrong.com/posts/yw9B5jQazBKGLjize/ai-safety-undervalues-founders
# Put numbers on stuff, all the time, otherwise scope insensitivity will eat you *Context: Post #6 in* [*my sequence*](https://www.lesswrong.com/s/cegJvyiP2dBkhDh2K) *of private Lightcone Infrastructure memos edited for public consumption.* * * * In almost any role at Lightcone you will have to make prioritization d...
https://www.lesswrong.com/posts/sdvBcPSR7p5mKhB7b/put-numbers-on-stuff-all-the-time-otherwise-scope
# The Ambiguity Of "Human Values" Is A Feature, Not A Bug *This post started as *[*two*](https://www.lesswrong.com/posts/dyDpJyNLgAHTAHeX9/don-t-use-the-phrase-human-values?commentId=bDQ9kmJ3kQzqAEnqH)[*comments*](https://www.lesswrong.com/posts/dyDpJyNLgAHTAHeX9/don-t-use-the-phrase-human-values?commentId=PDgAxhFCqbL...
https://www.lesswrong.com/posts/W3pKCNPck63vD2Wez/the-ambiguity-of-human-values-is-a-feature-not-a-bug
# Finding My Internal Compass, Literally There are tribes in Australia that don’t use “left” and “right” in their speech, relying instead on cardinal directions — their equivalents of north/south/east/west. Members of these tribes are always aware of the direction they are facing. Instead of “shift left a bit,” someon...
https://www.lesswrong.com/posts/frWzWznQ68c6isB2m/finding-my-internal-compass-literally
# 7 Vicious Vices of Rationalists *Vices aren't behaviors that one should never do. Rather, vices are behaviors that are fine and pleasurable to do in moderation, but tempting to do in excess. The classical vices are actually good in part. A moderate amount of gluttony is just eating food, which is important. Moderate ...
https://www.lesswrong.com/posts/r6xSmbJRK9KKLcXTM/7-vicious-vices-of-rationalists-1
# Brand New Experience Salesman *(Part of a short story collection based around a prompt. The prompt was 'selling memories or experiences.')* “We at Glimpse scout and curate first time experiences.” Stan gave his new potential customers his winning smile, straight from the highway billboards, hiding the desperation t...
https://www.lesswrong.com/posts/NPWKAFsDYtfY4zw5Z/brand-new-experience-salesman
# Diagonalization: A (slightly) more rigorous model of paranoia In my post on Wednesday ([Paranoia: A Beginner's Guide](https://www.lesswrong.com/posts/yXSKGm4txgbC3gvNs/paranoia-a-beginner-s-guide)), I talked at a high level about the experience of paranoia, and gave two models (the lemons market model and the OODA l...
https://www.lesswrong.com/posts/rmkbneSMQHZJ5Svbk/diagonalization-a-slightly-more-rigorous-model-of-paranoia
# The Badness of Death in Different Metaethical Theories *Epistemic: a section from my unpublished article about badness of death. I decided to publish it given recent* [*discussion*](https://www.lesswrong.com/posts/KCSmZsQzwvBxYNNaT/please-don-t-roll-your-own-metaethics) *about metaethics in order to show my attempt ...
https://www.lesswrong.com/posts/iBg6AAG72wqyosxAk/the-badness-of-death-in-different-metaethical-theories
# Considering the Relevance of Computational Uncertainty for AI Safety ***Epistemic Status:** Thoughts collected at the Sydney Mathematical Research Institute's* [*Focus Period on the Mathematical Science of AI Safety*](https://mathematical-research-institute.sydney.edu.au/upcoming-focus-period-mathematical-science-of...
https://www.lesswrong.com/posts/EFMpYDqCWikpbvQ9X/considering-the-relevance-of-computational-uncertainty-for
# The new Pluribus TV show is a great and unusual analogy for AI. Pluribus (or "PLUR1BUS") shows how the world radically changes after everyone on the planet merges their thoughts and knowledge to become a single entity. Everyone except, of course, the main character and 11 others. The sci-fi magic that causes this is...
https://www.lesswrong.com/posts/cKuPsenbX9cL68CgG/the-new-pluribus-tv-show-is-a-great-and-unusual-analogy-for
# I Spent 30 Days Learning to Smile More Charismatically I recently watched Better Call Saul with my girlfriend, and she mentioned how attractive Lalo Salamanca (Tony Dalton) is when he smiles. (His smile is also creepy, since he’s a charismatic psychopath in the show.) ![Lalo Salamanca Smiling](https://www.taylor.gl...
https://www.lesswrong.com/posts/YkRHHPfGzfY8CP5TX/i-spent-30-days-learning-to-smile-more-charismatically
# Wiki AI A thought that has been bouncing around in my head for the last couple years is *wikipedia as a motivating metaphor for what LLMs should become.* This is particularly in contrast to social media. I owe this idea to Tom Everitt, who conveyed it to me at an Agent Foundations conference at Wytham Abbey.  LLMs ...
https://www.lesswrong.com/posts/cHAqvpcDEGNzzxEMX/wiki-ai
# Process Crimes and Pedantic Rules Some rules are there because they make the enforcement of other rules much easier. There are many things we don’t care about *per se*, but where doing (or not doing) them creates easy-to-spot evidence of crimes we do care about, crimes that would otherwise be much easier to hide. I suspect this explains...
https://www.lesswrong.com/posts/hXCNaXnHwYYnwYstp/process-crimes-and-pedantic-rules
# Transformative AI expectations are not new A little while ago I was reading a [book review](https://kevinbryanecon.com/BryanAIBookReview.pdf) from the economist Kevin Bryan about AI’s likely impacts. He not only provides an excellent overview of the economic questions, but also makes a considerable effort to reconst...
https://www.lesswrong.com/posts/joBL9zWZhXKKKxnwQ/transformative-ai-expectations-are-not-new