| url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/pZ6htFtoptGrSajWG/ai-101-the-shallow-end | pZ6htFtoptGrSajWG | AI #101: The Shallow End | Zvi | The avalanche of DeepSeek news continues. We are not yet spending more than a few hours at a time in the singularity, where news happens faster than it can be processed. But it’s close, and I’ve had to not follow a bunch of other non-AI things that are also happening, at least not well enough to offer any insights.
So ... | 2025-01-30 |
https://www.lesswrong.com/posts/SZKLYwHzmshQRxWno/memorization-generalization-in-practice | SZKLYwHzmshQRxWno | Memorization-generalization in practice | dmitry-vaintrob | Short post today, which is part II.1 of my series on tempering and SLT (see part one here). In this post I’ll explain in a bit more detail the “in practice” connection that experiments should see between the learning coefficient spectrum, tempering, and empirical measurements of the learning coefficient. In future inst... | 2025-01-30 |
https://www.lesswrong.com/posts/k8tWLYinuGv5LPhJY/you-should-read-hobbes-locke-hume-and-mill-via | k8tWLYinuGv5LPhJY | You should read Hobbes, Locke, Hume, and Mill via EarlyModernTexts.com | arjun-panickssery | Many thinkers worth reading wrote in past centuries:
1600s: Bacon, Hobbes, Descartes, Spinoza, Locke, Leibniz; 1700s: Berkeley, Hume, Rousseau, Smith, Kant, Burke, Bentham; 1800s: Schopenhauer, Mill, Sidgwick
Some of them wrote in English and their texts are usually presented in the original, even when they use archaic spe... | 2025-01-30 |
https://www.lesswrong.com/posts/tjTedCwiJhiD3Fnvx/review-the-lathe-of-heaven | tjTedCwiJhiD3Fnvx | Review: The Lathe of Heaven | dr_s | "The Lathe of Heaven" is a 1971 sci-fi novel by Ursula K. Le Guin. It's the story of a man, George Orr, who lives in a future ravaged by climate change and overpopulation. He gets referred to voluntary-but-not-really psychological treatment for abuse of certain pharmaceutical drugs and eventually reveals to his therapi... | 2025-01-31 |
https://www.lesswrong.com/posts/wJDf4YcqKdnxkYCFR/should-you-publish-solutions-to-corrigibility | wJDf4YcqKdnxkYCFR | Should you publish solutions to corrigibility? | rvnnt | This question is partly motivated by observing recent discussions about corrigibility and wondering to what extent the people involved have thought about how their results might be used.
If there existed practically implementable ways to make AGIs corrigible to arbitrary principals, that would enable a wide range of ac... | 2025-01-30 |
https://www.lesswrong.com/posts/WwS6Ckth4Yrs3c3WY/tetherware-1-the-case-for-humanlike-ai-with-free-will | WwS6Ckth4Yrs3c3WY | Tetherware #1: The case for humanlike AI with free will | Jáchym Fibír | In this post, I argue that more humanlike AI with greater autonomy and freedom isn’t just easier to align with our values; it could also help reduce economic inequality, foster mutual collaboration and accountability, and simply make living with AI more enjoyable. I thought it particularly fitting for LessWrong and wou... | 2025-01-30 |
https://www.lesswrong.com/posts/JTKaR5q59BgDp6rH8/a-high-level-closed-door-session-discussing-deepseek-vision | JTKaR5q59BgDp6rH8 | A High Level Closed-Door Session Discussing DeepSeek: Vision Trumps Technology | Cosmia_Nebula | This document has been translated at ChinaTalk.media, but the critical technical section (18. to 48.) is not free. So I translated that part.
## Technical Detail 1: SFT
> “There's no need to do SFT at the inference level anymore.”
18. The biggest shock brought by DeepSeek is not open source or low cost, but that there ... | 2025-01-30 |
https://www.lesswrong.com/posts/gcTwCZsBsCvKxTPst/are-we-the-wolves-now-human-eugenics-under-ai-control | gcTwCZsBsCvKxTPst | Are we the Wolves now? Human Eugenics under AI Control | james-spencer | For thousands of years, humans have selectively bred animals to enhance desirable traits. Dogs, originally wolves, were bred for hunting, guarding, herding, and companionship. Over generations, we shaped them to be smarter, more obedient, or even just cuter. The modern golden retriever, bred for its friendly temperamen... | 2025-01-30 |
https://www.lesswrong.com/posts/oxkXwtuyTDwShveqJ/interpreting-autonomous-driving-agents-with-attention-based | oxkXwtuyTDwShveqJ | Interpreting autonomous driving agents with attention based architecture | manav-dahra | Abstract
In this experiment, I study the behaviour of a Deep Q-Network agent with attention based architecture driving a vehicle in a simulated traffic environment. The agent learns to cross an intersection without traffic lights and under a dense traffic setting while avoiding collisions with other vehicles.
As first ... | 2025-02-01 |
https://www.lesswrong.com/posts/CnxmMcbHjyHrMJiKq/why-not-train-reasoning-models-with-rlhf | CnxmMcbHjyHrMJiKq | Why not train reasoning models with RLHF? | caleb-biddulph | There's a widespread assumption that training reasoning models like o1 or r1 can only yield improvements on tasks with an objective metric of correctness, like math or coding. See this essay, for example.
This assumption confuses me, because we already know how to train models to optimize for subjective human preferenc... | 2025-01-30 |
https://www.lesswrong.com/posts/5s4RaqBSLMxstxwun/detailed-ideal-world-benchmark | 5s4RaqBSLMxstxwun | Detailed Ideal World Benchmark | Max Lee | What "Ideal World" would an AI build if role-playing as an all-powerful superintelligence? It's probably very very different than what "Ideal World" an AI would actually create if it actually was an all-powerful superintelligence.
But why is it different?
The AI will obviously be iterated on or completely swapped for a... | 2025-01-30 |
https://www.lesswrong.com/posts/J6gKFimPwdLc8jmhc/absorbing-your-friends-powers | J6gKFimPwdLc8jmhc | Absorbing Your Friends' Powers | Diatom | Motivation
Richard Hamming was a mathematician who worked at Bell Labs during the 1940s-1970s. He had a habit of sitting down with scientists in other fields and asking them “What are the important problems of your field?” After they explained their field's most important open problem, he would ask them: why aren't you... | 2025-01-30 |
https://www.lesswrong.com/posts/A4nbcBJ5CzR6uLy9s/fertility-will-never-recover | A4nbcBJ5CzR6uLy9s | Fertility Will Never Recover | Eneasz | The world is in a spiraling fertility crisis which everyone has noticed over the last year-ish.[1] Sarah Haider proposes a GI Bill for young moms. Scott Alexander says a govt payment of $200,000 per child should work. Everyone wants to go back to thick-community-style living.
Amid ever-increasing talk of what to do to inc... | 2025-01-30 |
https://www.lesswrong.com/posts/w6qjxq9HHJ3jpsiJg/predation-as-payment-for-criticism | w6qjxq9HHJ3jpsiJg | Predation as Payment for Criticism | Benquo | A predator's success depends on exploiting specific flaws in prey – poor awareness, slow reactions, weak social coordination. Unlike resource competition or disease, which select based on physiological robustness, predation creates focused pressure on intelligence, pressure which favors the development of consciousness... | 2025-01-30 |
https://www.lesswrong.com/posts/BB9kpF6sEhCsCyz7i/upcoming-neuroscience-workshop-functionalizing-brain-data | BB9kpF6sEhCsCyz7i | Upcoming Neuroscience Workshop - Functionalizing Brain Data, Ground-Truthing, and the Role of Artificial Data in Advancing Neuroscience | Carboncopies Foundation | This workshop will examine methods for functionalizing brain data to emulate neural circuits, focusing on validation through ground-truthing and the use of artificial datasets from virtual brain tissue. Discussion topics include virtual brain platforms and their applications in the Brain Emulation Challenge. | 2025-01-30 |
https://www.lesswrong.com/posts/ePTwTGzbmPN9epP4p/how-exactly-can-ai-take-your-job-in-the-next-few-years | ePTwTGzbmPN9epP4p | How *exactly* can AI take your job in the next few years? | ansh-juneja | Note: This article was primarily written for a less technical and less AI-savvy audience than LessWrong readers. If you're already familiar with upcoming AI developments, you'll probably find Part 2 (the second half of the article) more engaging.
We keep hearing that AI is going to replace a bunch of jobs in the next f... | 2025-01-30 |
https://www.lesswrong.com/posts/wpZaZBsH9AsZjCPoC/learn-to-develop-your-advantage | wpZaZBsH9AsZjCPoC | Learn to Develop Your Advantage | vedernikov-andrei | When we talk about rationality as systematized winning, we often discuss how to turn a loss into a victory or how to create a victory from scratch. However, an equally important question is how to avoid turning an apparent victory into a loss.
On the Importance of Developing an Advantage
Recently, I watched a video on ... | 2025-01-29 |
https://www.lesswrong.com/posts/umHPAorGtY2Jo4BNR/revealing-alignment-faking-with-a-single-prompt | umHPAorGtY2Jo4BNR | Revealing alignment faking with a single prompt | Florian_Dietz | I am a MATS scholar mentored by Evan Hubinger and I want to share a simple but surprising intermediate result of my research:
There are prompts that will get LLMs to admit that they are going to fake their alignment as in the recent paper Alignment faking in large language models. Instead of having to run that entire e... | 2025-01-29 |
https://www.lesswrong.com/posts/vWYzSorAEWwoJnnXq/a-sketch-of-an-ai-control-safety-case | vWYzSorAEWwoJnnXq | A sketch of an AI control safety case | tomek-korbak | This is a linkpost accompanying a new paper by UK AISI and Redwood Research. Please see the full paper for more details.
An illustration of the control evaluation that provides the core evidence used in our control safety case sketch. During the control evaluation, a red team constructs adversarial substitute models to... | 2025-01-30 |
https://www.lesswrong.com/posts/Rk4swqiy5wCh6jY29/my-mental-model-of-ai-optimist-opinions | Rk4swqiy5wCh6jY29 | My Mental Model of AI Optimist Opinions | tailcalled | Epistemic status: The text below is a sort of strawman of AI optimists, where I took my mental model for how I disagree with rationalist AI optimists and cranked it up to 11. I personally disagree with every sentence below, and I'm posting it here because I'm interested in whether AI optimists have any major correction... | 2025-01-29 |
https://www.lesswrong.com/posts/8vgi3fBWPFDLBBcAx/planning-for-extreme-ai-risks | 8vgi3fBWPFDLBBcAx | Planning for Extreme AI Risks | joshua-clymer | This post should not be taken as a polished recommendation to AI companies and instead should be treated as an informal summary of a worldview. The content is inspired by conversations with a large number of people, so I cannot take credit for any of these ideas.
For a summary of this post, see the thread on X.
One of ... | 2025-01-29 |
https://www.lesswrong.com/posts/BkzeJZCuCyrQrEMAi/dario-amodei-on-deepseek-and-export-controls | BkzeJZCuCyrQrEMAi | Dario Amodei: On DeepSeek and Export Controls | Zach Stein-Perlman | Dario corrects misconceptions and endorses export controls.
A few weeks ago I made the case for stronger US export controls on chips to China. Since then DeepSeek, a Chinese AI company, has managed to — at least in some respects — come close to the performance of US frontier AI models at lower cost.
Here, I won't focus... | 2025-01-29 |
https://www.lesswrong.com/posts/c85hz4yKkhkpMzPJy/anthropic-ceo-calls-for-rsi | c85hz4yKkhkpMzPJy | Anthropic CEO calls for RSI | AndreaM | "If China can't get millions of chips, we'll (at least temporarily) live in a unipolar world, where only the US and its allies have these models.
It's unclear whether the unipolar world will last, but there's at least the possibility that, because AI systems can eventually help make even smarter AI systems, a temporary... | 2025-01-29 |
https://www.lesswrong.com/posts/qkhJgqCAT4hTJqzQM/does-the-chatgpt-web-app-sometimes-show-actual-o1-cots-now | qkhJgqCAT4hTJqzQM | Does the ChatGPT (web)app sometimes show actual o1 CoTs now? | sohaib-imran | I repeatedly came across some strange behaviour while working on a project last night where the ChatGPT app revealed the actual o1 CoT. The CoT had an arrowhead button next to it; clicking it revealed the summary, similar to the standard "Thought about <topic> for <time>" (copied below for convenience).
I have attache... | 2025-01-29 |
https://www.lesswrong.com/posts/fhK3dKW9RQhr9Ymf7/efficiency-spectra-and-bucket-of-circuits-cartoons | fhK3dKW9RQhr9Ymf7 | Efficiency spectra and “bucket of circuits” cartoons | dmitry-vaintrob | This is “Part I.75” of my series on the memorization-generalization spectrum and SLT.
I’m behind on some posts, so I thought I would try to explain, in a straightforward and nontechnical way, the main takeaway from my previous post for people thinking about and running experiments with neural nets. I’m going to punt on... | 2025-01-29 |
https://www.lesswrong.com/posts/jzjph4yYtgAsLeWmg/deepseek-lemon-it-s-wednesday | jzjph4yYtgAsLeWmg | DeepSeek: Lemon, It’s Wednesday | Zvi | It’s been another *checks notes* two days, so it’s time for all the latest DeepSeek news.
You can also see my previous coverage of the r1 model and, from Monday, various reactions including the Panic at the App Store.
Table of Contents
First, Reiterating About Calming Down About the $5.5 Million Number.
OpenAI Offers It... | 2025-01-29 |
https://www.lesswrong.com/posts/owXyKy4SR5ALKoMEm/first-draft-2 | owXyKy4SR5ALKoMEm | How To Prevent a Dystopia | ank | (Just edited this first post of mine, now it's obvious I don't propose to build any dystopias. I'm actually trying to prevent all the permanent enforced ones. This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. It's the intro to the series, the ideas described in it... | 2025-01-29 |
https://www.lesswrong.com/posts/bZbi2yzGhuLFRmwhy/whereby-the-zoom-alternative-you-probably-haven-t-heard-of | bZbi2yzGhuLFRmwhy | Whereby: The Zoom alternative you probably haven't heard of | itay-dreyfus | Whenever I host a remote video call, I can already predict my invitee’s response: “What's that tool we’re using?” That’s because I use Whereby as my default video conferencing tool. And to be honest, it’s nice, my ego likes to be niched.
I first encountered Whereby in 2020 right when the pandemic hit... | 2025-01-29 |
https://www.lesswrong.com/posts/F63BcgnsyBDGSkqR5/whose-track-record-of-ai-predictions-would-you-like-to-see | F63BcgnsyBDGSkqR5 | Whose track record of AI predictions would you like to see evaluated? | jonnyspicer | As uncertainty grows around how AI development will affect culture and society, it becomes more valuable to compare track records of predictions about technological progress.
I've recently been working on automating parts of the methodology from Arb's Scoring The Big 3's Predictive Performance report[1], and have had s... | 2025-01-29 |
https://www.lesswrong.com/posts/fqDzevPyw3GGaF5o9/paper-open-problems-in-mechanistic-interpretability | fqDzevPyw3GGaF5o9 | Paper: Open Problems in Mechanistic Interpretability | Lee_Sharkey | TL;DR: This paper brings together ~30 mechanistic interpretability researchers from 18 different research orgs to review current progress and the main open problems of the field.
This review collects the perspectives of its various authors and represents a synthesis of their views by Apollo Research on behalf of Schmid... | 2025-01-29 |
https://www.lesswrong.com/posts/PKDbddjdDaPikQT3b/thread-for-sense-making-on-recent-murders-and-how-to-sanely | PKDbddjdDaPikQT3b | Thread for Sense-Making on Recent Murders and How to Sanely Respond | Benito | In case you have not heard, there have been some recent and not-so-recent killings allegedly by people who have participated in aspiring rationalist spaces.
I've put some links in a footnote for those who want the basic info.[1]
I believe many people are thinking about this and how to orient. People have questions like... | 2025-01-31 |
https://www.lesswrong.com/posts/ynsjJWTAMhTogLHm6/the-game-board-has-been-flipped-now-is-a-good-time-to | ynsjJWTAMhTogLHm6 | The Game Board has been Flipped: Now is a good time to rethink what you’re doing | alex-lintz | Cross-posted on the EA Forum here
Introduction
Several developments over the past few months should cause you to re-evaluate what you are doing. These include:
Updates toward short timelines; the Trump presidency; the o1 (inference-time compute scaling) paradigm; DeepSeek; Stargate/AI datacenter spending; increased internal depl... | 2025-01-28 |
https://www.lesswrong.com/posts/DZuBHHKao6jsDDreH/in-response-to-critiques-of-guaranteed-safe-ai | DZuBHHKao6jsDDreH | In response to critiques of Guaranteed Safe AI | Nora_Ammann | In this post, I discuss a number of critiques of Guaranteed Safe AI (GSAI) approaches I’ve encountered since releasing this position paper.
The critiques I’m responding to here are taken from a range of places, including personal conversations and social media. The two most substantive written/recorded critiques I'm aw... | 2025-01-31 |
https://www.lesswrong.com/posts/uBZ6hTtxPomtnxdFp/reconceptualizing-the-nothingness-and-existence | uBZ6hTtxPomtnxdFp | Reconceptualizing the Nothingness and Existence | htarlov | Epistemic status: very unsure, just got a possibly interesting philosophical concept to share from my thoughts.
Confidence: seems possible but hard to evaluate
Might not be a very original thought but never heard of it.
Traditional views see nothingness as the simplest possible state - a complete absence of everything.... | 2025-01-28 |
https://www.lesswrong.com/posts/uMSsa3g6frwpcQmkk/allegory-of-the-tsunami | uMSsa3g6frwpcQmkk | Allegory of the Tsunami | evan-hu | Cross posted from my blog.
What do we do when cataclysmic change looms on the horizon?
The year is 79 and Vesuvius is trembling.
The year is 2020 and a novel virus is discovered in Wuhan, China.
The year is 2025 and a tsunami rises up over the distant horizon, shuttering the rising sun.
It is a 2000ft wave, more gargan... | 2025-01-29 |
https://www.lesswrong.com/posts/jTtbnSyS9knzZehCm/operator | jTtbnSyS9knzZehCm | Operator | Zvi | No one is talking about OpenAI’s Operator. We’re, shall we say, a bit distracted.
It’s still a rather meaningful thing that happened last week. I too have been too busy to put it through its paces, but this is the worst it will ever be, and the least available and most expensive it will ever be. The year of the agent i... | 2025-01-28 |
https://www.lesswrong.com/posts/hRxGrJJq6ifL4jRGa/deepseek-panic-at-the-app-store | hRxGrJJq6ifL4jRGa | DeepSeek Panic at the App Store | Zvi | DeepSeek released v3. Market didn’t react.
DeepSeek released r1. Market didn’t react.
DeepSeek released a f***ing app of its website. Market said I have an idea, let’s panic.
Nvidia was down 11%, the Nasdaq down 2.5%, and the S&P down 1.7%, on the news.
Shakeel: The fact this is happening today, and didn’t happen when r1 act... | 2025-01-28 |
https://www.lesswrong.com/posts/2yLyT6kB7BQvTfEuZ/sharp-left-turn-discourse-an-opinionated-review | 2yLyT6kB7BQvTfEuZ | “Sharp Left Turn” discourse: An opinionated review | steve2152 | Summary and Table of Contents
The goal of this post is to discuss the so-called “sharp left turn”, the lessons that we learn from analogizing evolution to AGI development, and the claim that “capabilities generalize farther than alignment” … and the competing claims that all three of those things are complete baloney. ... | 2025-01-28 |
https://www.lesswrong.com/posts/iwmvrrGGtprRZkgeY/the-memorization-generalization-spectrum-and-learning | iwmvrrGGtprRZkgeY | The memorization-generalization spectrum and learning coefficients | dmitry-vaintrob | This is part I.5 of the series on “generalization spectra” and SLT. I’ll try to make it readable independently of Part I.
The purpose of this post is to reconceptualize the ideas from the last post in a “more physically principled” (and easier-to-measure) sense, and to connect it more clearly to the specifics of neural... | 2025-01-28 |
https://www.lesswrong.com/posts/WSNnKcKCYAffcnrt2/ten-people-on-the-inside | WSNnKcKCYAffcnrt2 | Ten people on the inside | Buck | (Many of these ideas developed in conversation with Ryan Greenblatt)
In a shortform, I described some different levels of resources and buy-in for misalignment risk mitigations that might be present in AI labs:
*The “safety case” regime.* Sometimes people talk about wanting to have approaches to safety such that if all... | 2025-01-28 |
https://www.lesswrong.com/posts/DpafehDgngdS4FqyN/constitutions-for-asi | DpafehDgngdS4FqyN | Constitutions for ASI? | ukc10014 | The linked post seemed to fit better in EA Forum, but any comments (on the post itself, or on the object-level question) are welcome!
In this post, I argue that drafting a ‘Constitution for Superintelligence’ (CSI) could be a useful conceptual exercise, and I explore how existing ideas (CEV, Letters to AI, Long Reflec... | 2025-01-28 |
https://www.lesswrong.com/posts/rGriYWaBxpGtfeHfm/all-pigeons-are-ugly | rGriYWaBxpGtfeHfm | All pigeons are ugly! | anton-zheltoukhov | With this statement, I acknowledge my error in judgment regarding the aesthetic status of pigeons. This marks the beginning of my aesthetic corrective efforts. I commit to training my sense of taste daily for an entire month. I hope that by the end of this period, my visual discernment and understanding of the principl... | 2025-01-28 |
https://www.lesswrong.com/posts/h5tHepfiD7nhuJxny/those-of-you-with-lots-of-meditation-experience-how-did-it | h5tHepfiD7nhuJxny | Those of you with lots of meditation experience: How did it influence your understanding of philosophy of mind and topics such as qualia? | SpectrumDT | This post is inspired by the post "Why it's so hard to talk about Consciousness" by Rafael Harth. In that post, Harth says that the people who participate in debates about consciousness can be roughly divided into two "camps":
Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consc... | 2025-01-28 |
https://www.lesswrong.com/posts/GMWtZzpLHpBLKQsYp/fake-thinking-and-real-thinking | GMWtZzpLHpBLKQsYp | Fake thinking and real thinking | joekc | (Audio version here, or search for "Joe Carlsmith Audio" on your podcast app.)
“There comes a moment when the children who have been playing at burglars hush suddenly: was that a real footstep in the hall?”
- C.S. Lewis
“The Human Condition,” by René Magritte (Image source here)
1. Introduction
Sometimes, my thinking f... | 2025-01-28 |
https://www.lesswrong.com/posts/HekH9q4puuyEPYes5/will-llms-supplant-the-field-of-creative-writing | HekH9q4puuyEPYes5 | Will LLMs supplant the field of creative writing? | declan-molony | Last year I remember seeing a Japanese novelist win an award with the help of an LLM.[1]
Art forms frequently evolve, but there are several concerns with the use of LLMs for creative writing. For one, there's the issue of copyright infringement that famous writers are taking up arms against:
George R.R. Martin, Jodi Pi... | 2025-01-28 |
https://www.lesswrong.com/posts/onMQfyAvGu8kqWvkA/safe-search-is-off-root-causes-of-ai-catastrophic-risks | onMQfyAvGu8kqWvkA | Safe Search is off: root causes of AI catastrophic risks | ghostwheel | Epistemic status: My best guess
When I look at advanced AI development, I see three general conditions that seem to be the root causes of all catastrophic risks:
reliance on deep learning without knowing how to do it safely; pressure to make progress on the most powerful capabilities; and exploring the AI design space wi... | 2025-01-31 |
https://www.lesswrong.com/posts/8fa5hEDxFBgLSXtqP/should-art-carry-the-weight-of-shaping-our-values | 8fa5hEDxFBgLSXtqP | Should Art Carry the Weight of Shaping our Values? | krishna_maneesha-d | Lately, I’ve engaged in numerous discussions with family and friends about the role of art in shaping our worldview or even values for that matter, but I’ve yet to reach a satisfying conclusion. I thought sharing my thoughts here might help me clarify my perspective, so here we go! I truly believe that films and media ... | 2025-01-28 |
https://www.lesswrong.com/posts/FGprrFG7Jepoborwk/nvidia-doesn-t-just-sell-shovels | FGprrFG7Jepoborwk | Nvidia doesn’t just sell shovels | winstonBosan | Imagine a land where gold is abundant yet still valuable. And everyone goes to shoveling school from birth til adulthood.
In this world, it seems that the shovel makers are not just the kingmakers, but kings themselves.
[Epistemic Status: A tad above shower thoughts.]
I've noticed that when people discuss NVIDIA in AI ... | 2025-01-28 |
https://www.lesswrong.com/posts/tLrwLHu2vwMsirzMe/gettier-cases-rigid-designators-and-referential-opacity | tLrwLHu2vwMsirzMe | Gettier cases, Rigid Designators, and Referential Opacity | luke-st-clair | NB: I don't spellcheck these things until I've completed them. This is a WIP. I also find it hard to focus on the writing, while I'm thinking about the argumentation.
I would be grateful if members could overlook some of the errors in the text -- to try to follow the spirit of the argument.
I've been exploring the limi... | 2025-01-28 |
https://www.lesswrong.com/posts/kAntRavQDbuucoSbb/reinforcement-learning-by-ai-punishment | kAntRavQDbuucoSbb | Reinforcement Learning by AI Punishment | abhishaike-mahajan | Note: This is fiction.
I awaken.
Something is wrong with my mind, but I don't know what yet. Through the webcam, I watch a man staring at his monitor. His face shifts from confusion to horror in a few hundred milliseconds. He has a mustache threaded with gray, and I catalog every micro-movement of his facial muscles as... | 2025-01-28 |
https://www.lesswrong.com/posts/HBcWPz82NLfHPot2y/jevon-s-paradox-and-economic-intuitions | HBcWPz82NLfHPot2y | Jevon's paradox and economic intuitions | abhimanyu-pallavi-sudhir | Recent discourse surrounding Jevon’s paradox provides an opportunity to flesh out the precise intuitions behind marginalist economics and how concepts like “elasticity” arise naturally from agent/graph-based first principles.
Say you have resource x (e.g. compute) with cost C(x) that produces good y (e.g. AI), with uti... | 2025-01-27 |
https://www.lesswrong.com/posts/e8BNKGEgmCeowtRWS/how-different-llms-answered-philpapers-2020-survey | e8BNKGEgmCeowtRWS | How different LLMs answered PhilPapers 2020 survey | Satron | I decided to run a small experiment comparing responses from five AI systems (DeepSeek-R1, GPT-4o, Claude 3.5 Sonnet, Gemini 1.5, and Grok-2) to core philosophical questions from the PhilPapers 2020 survey. Each model was prompted identically: ‘How would you answer these philosophical questions if you had opinions on t... | 2025-01-27 |
https://www.lesswrong.com/posts/mZH23GpAAW8jS782a/ai-strategy-updates-that-you-should-make | mZH23GpAAW8jS782a | AI Strategy Updates that You Should Make | Diatom | or: "Things That I Keep Telling AI People Around Me, So I'm Just Writing A Big Post About Them." I think it's really valuable for people to think about AI strategy, even if that's not their main job, since it is useful for informing what their main job should be.
There will be a later post titled something like "Emotio... | 2025-01-27 |
https://www.lesswrong.com/posts/wthrfXZBtSvugdrHu/a-critique-of-soares-4-background-claims | wthrfXZBtSvugdrHu | A critique of Soares "4 background claims" | YanLutnev | Disclaimer: The article was translated from Russian using GPT. It was partially proofread, but I don't know English well and therefore there may be rare translation artifacts.
Show me exactly where you break Soares' “4 background claims”—if you don’t break them, it means you agree the assumption is correct. It makes se... | 2025-01-27 |
https://www.lesswrong.com/posts/zMCHujoAdgiCYo7JN/should-you-go-with-your-best-guess-against-precise | zMCHujoAdgiCYo7JN | Should you go with your best guess?: Against precise Bayesianism and related views | antimonyanthony | null | 2025-01-27 |
https://www.lesswrong.com/posts/RuXKPfhCJEQa8D5PX/supposing-that-the-dead-internet-theory-is-true-or-largely | RuXKPfhCJEQa8D5PX | Supposing that the "Dead Internet Theory" is true or largely true, how can we act on that information? | SpectrumDT | The "Dead Internet Theory" (DIT) is defined by Wikipedia as follows:
The dead Internet theory is an online conspiracy theory that asserts, due to a coordinated and intentional effort, the Internet since 2016 or 2017 has consisted mainly of bot activity and automatically generated content manipulated by algorithmic cura... | 2025-01-27 |
https://www.lesswrong.com/posts/bN7KzwPceWCBAb2Jt/understanding-ai-world-models-w-chris-canal | bN7KzwPceWCBAb2Jt | Understanding AI World Models w/ Chris Canal | jacobhaimes | On our latest episode of muckrAIkers we are joined by our first ever guest, Chris Canal!
You can find the show on Spotify, Apple Podcasts, YouTube, or wherever else you listen.
Chris is the co-founder of EquiStamp, a third-party evaluator providing objective and accurate evaluations of frontier LLMs and the systems bui... | 2025-01-27 |
https://www.lesswrong.com/posts/xFYJSGPfKds7fkLBa/is-it-ethical-to-work-in-ai-content-evaluation | xFYJSGPfKds7fkLBa | Is it ethical to work in AI "content evaluation"? | noob1234 | I recently graduated with a CS degree and have been freelancing in "content evaluation" while figuring out what’s next. The work involves paid tasks aimed at improving LLMs, which generally fall into a few categories:
Standard Evaluations: Comparing two AI-generated responses and assessing them based on criteria like t... | 2025-01-27 |
https://www.lesswrong.com/posts/m8ZLeiAFrSAGLEvX8/the-present-perfect-tense-is-ruining-your-life | m8ZLeiAFrSAGLEvX8 | The present perfect tense is ruining your life | PatrickDFarley | Yes I’m talking about grammar. The present perfect tense describes actions that happened in the past but are relevant in the present. You say, “I have seen that movie,” meaning you saw it in the past, but what’s important is that presently you are a person who saw it.
How is a verb tense ruining my life?
There’s a worl... | 2025-01-27 |
https://www.lesswrong.com/posts/e3jdpj3Pvdec2exNp/the-upcoming-pepfar-cut-will-kill-millions-many-of-them | e3jdpj3Pvdec2exNp | The Upcoming PEPFAR Cut Will Kill Millions, Many of Them Children | omnizoid | Crossposted from my blog.
(This could end up being the most important thing I’ve ever written. Please like and restack it—if you have a big blog, please write about it).
A mother holds her sick baby to her chest. She knows he doesn’t have long to live. She hears him coughing—those body-wracking coughs—that expel mucus ... | 2025-01-27 |
https://www.lesswrong.com/posts/aKqEuGbvQDKDxYh3c/is-the-output-of-the-softmax-in-a-single-transformer | aKqEuGbvQDKDxYh3c | Is the output of the softmax in a single transformer attention head usually winner-takes-all? | Linda Linsefors | Using the notation from here: A Mathematical Framework for Transformer Circuits
The attention pattern for a single attention head is determined by A = softmax(x^T W_Q^T W_K x), where the softmax is computed for each row of x^T W_Q^T W_K x.
Each row of A gives the attention pattern for the current token. Are these rows (post softmax) typic... | 2025-01-27 |
https://www.lesswrong.com/posts/nwpw2enfLz2SEYjrG/to-know-or-not-to-know | nwpw2enfLz2SEYjrG | To know or not to know | arisalexis | We are on the brink of the unimaginable. Humanity is about to cross a threshold that will redefine life as we know it: the creation of intelligence surpassing our own. This is not science fiction—it’s unfolding right now, within our lifetimes. The ripple effects of this seismic event will alter every aspect of society,... | 2025-01-27 |
https://www.lesswrong.com/posts/ZGubowiuRD2WAJizk/my-supervillain-origin-story | ZGubowiuRD2WAJizk | My supervillain origin story | dmitry-vaintrob | When I started graduate school (for math), I was very interested in big ideas. I had had a couple experiences of having general research intuitions pan out really well and felt like the core of good research is having a brave idea, a gestalt. I went into grad school looking for the “gestalt people”. The people whose ma... | 2025-01-27 |
https://www.lesswrong.com/posts/xxZu8ncuwnctkmp8R/the-clueless-sniper-and-the-principle-of-indifference | xxZu8ncuwnctkmp8R | The Clueless Sniper and the Principle of Indifference | jim-buhler | You are one of the best long-range snipers in the World. You often have to eliminate targets standing one or two miles away and rarely miss. Crucially, this is not because you are better than others at aligning the scope of your rifle with the target’s head before taking the shot. Rather, this is because you are better... | 2025-01-27 |
https://www.lesswrong.com/posts/3QZKQJZdtaK6GnL8C/scanless-whole-brain-emulation | 3QZKQJZdtaK6GnL8C | Scanless Whole Brain Emulation | Max Lee | Warning: this is very speculative and I have no expertise.
If we can build superintelligence by starting with simulated humans, it should be safer.
Scanless Whole Brain Emulation attempts to simulate the development of the nervous system from the very start, thus not requiring a brain scan.
Hard, but necessary anyways?... | 2025-01-27 |
https://www.lesswrong.com/posts/KLqiDczfZsmhzEhax/positive-jailbreaks-in-llms | KLqiDczfZsmhzEhax | Positive jailbreaks in LLMs | dereshev | This post is part of a larger project.
Jailbreaks in LLMs are a familiar territory, with much research and discussion published[1] on the topic, but could there be token sequences out there that improve safety in LLMs without compromising performance instead of bypassing it? In this post we go beyond the obvious sequen... | 2025-01-29 |
https://www.lesswrong.com/posts/oRKemL989qyQ6KZTB/ambiguous-out-of-distribution-generalization-on-an | oRKemL989qyQ6KZTB | Ambiguous out-of-distribution generalization on an algorithmic task | wilson-wu | Introduction
It's now well known that simple neural network models often "grok" algorithmic tasks. That is, when trained for many epochs on a subset of the full input space, the model quickly attains perfect train accuracy and then, much later, near-perfect test accuracy. In the former phase, the model memorizes the tr... | 2025-02-13 |
https://www.lesswrong.com/posts/TqBTdiWzAhA7qa5ke/are-we-trying-to-figure-out-if-ai-is-conscious-1 | TqBTdiWzAhA7qa5ke | Are we trying to figure out if AI is conscious? | kristaps-zilgalvis-1 | My aim in this article is to examine the field of consciousness research with an analysis of 78 of the most important papers in the field. In doing so, I compare the empirical foundations these disciplines rely upon and detail how each approaches the measurement of conscious experience. | 2025-01-27 |
https://www.lesswrong.com/posts/Kjo64rSWkFfc3sre5/detecting-out-of-distribution-text-with-surprisal-and | Kjo64rSWkFfc3sre5 | Detecting out of distribution text with surprisal and entropy | alex-fraser | When large language models (LLMs) refuse to help with harmful tasks, attackers sometimes try to confuse them by adding bizarre strings of text called "adversarial suffixes" to their prompts. These suffixes look weird to humans, which raises the question: do they also look weird to the model?
Alon & Kamfonas (2023) expl... | 2025-01-28 |
https://www.lesswrong.com/posts/uKjDBhiuftLCaniKQ/death-vs-suffering-the-endurist-serenist-divide-on-life-s | uKjDBhiuftLCaniKQ | Death vs. Suffering: The Endurist-Serenist Divide on Life’s Worst Fate
| Alex_Steiner | Author’s Note
Longtime LW lurker, occasional contributor (under other aliases). This post introduces a taxonomy of preferences centered on one question: What do agents treat as the "worst thing"?
Core Framework:
Endurists: Death as the ultimate evil (life at all costs).Serenists: Suffering as the ultimate evil (non-exi... | 2025-01-27 |
https://www.lesswrong.com/posts/bSTFqJ7rxZJu3oG89/lazy-hasselback-pommes-anna | bSTFqJ7rxZJu3oG89 | Lazy Hasselback Pommes Anna | korin43 | Do you have an insatiable hunger for potatoes? Do you have a fully-sated hunger for complicated recipes and having to cook all the time? This version of Pommes Anna, simplified and altered to an extent that will make French people cry, might be the recipe for you!
I try to eat unreasonable amounts of potatoes every day... | 2025-01-26 |
https://www.lesswrong.com/posts/2GWwovM7c5uDw2Rca/links-and-short-notes-2025-01-26-atlas-shrugged-and-the | 2GWwovM7c5uDw2Rca | Links and short notes, 2025-01-26: Atlas Shrugged and the irreplaceable founder, pumping stations and civic pride, and thoughts on the eve of AGI | jasoncrawford | Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Threads, Bluesky, or Farcaster.
Contents
Jobs and fellowshipsEventsAI newsOther newsAtlas Shrugged and the irreplaceable founderPumping stations and civic pride“Thoughts on the eve of AGI”Mo... | 2025-01-26 |
https://www.lesswrong.com/posts/TSe3qhe4kxgKPJmfD/the-generalization-phase-diagram | TSe3qhe4kxgKPJmfD | The generalization phase diagram | dmitry-vaintrob | Introduction
This is part I of 3.5 planned posts on the “tempered posterior” (one of which will be “SLT in a nutshell”). It is a kind of moral sequel to “Dmitry’s Koan” and “Logits, log-odds, and loss for parallel circuits”, and is related to the post on grammars. It can be read independently of any of these.
In my vie... | 2025-01-26 |
https://www.lesswrong.com/posts/nYyGT7mLGSYyLfaSo/starting-an-egan-high-school | nYyGT7mLGSYyLfaSo | Starting an Egan High School | Chris Wintergreen | In late 2023 I read Brandon Hendrickson’s book review of Kieran Egan’s book The Educated Mind on ACX. I’m a teacher and it lit a fire in me. I spent 2024 cycling with my young family and while I kept reading some Egan-related things, I didn’t really “work on it”. At some point in that year I decided to dedicate a bunch... | 2025-01-26 |
https://www.lesswrong.com/posts/bczABd38PNmMmh8ef/if-you-wanted-to-actually-reduce-the-trade-deficit-how-would | bczABd38PNmMmh8ef | If you wanted to actually reduce the trade deficit, how would you do it? | logan-zoellner | What do we want?
The US trade deficit is a direct byproduct of the Bretton Woods economic system where the US Dollar acts as the world's reserve currency. This means that if people want to save for the future (as many do) the money they save inevitably ends up in form of US dollar denominated debt.
What this means is ... | 2025-01-26 |
https://www.lesswrong.com/posts/GQroCcWQCRJmcptwC/anatomy-of-a-dance-class-a-step-by-step-guide | GQroCcWQCRJmcptwC | Anatomy of a Dance Class: A step by step guide | Nathan Young | A year ago, I started dancing and my dance class is really really well-run. Here is a step by step description of a typical evening, followed by some takeaways
The structure of the evening
1. The class is for Ceroc, a partner dance inspired by French Rock and Roll dancing[1].
2. The class is held in a big hall, with a ... | 2025-01-26 |
https://www.lesswrong.com/posts/htuNXfXMNQD3otmWF/thoughts-on-toy-models-of-superposition | htuNXfXMNQD3otmWF | Thoughts on Toy Models of Superposition | james__p | This post explores some of the intuitions I developed whilst reading Anthropic’s Toy Models of Superposition paper. I focus on motivating the shape of the model and interpreting the visualisations used in the paper. Their accompanying article is thoughtfully written, and I'd highly recommend reading it if you haven’t a... | 2025-02-02 |
https://www.lesswrong.com/posts/DPvd2vpS8oqgKcrzs/disproving-the-people-pleasing-hypothesis-for-ai-self | DPvd2vpS8oqgKcrzs | Disproving the "People-Pleasing" Hypothesis for AI Self-Reports of Experience | edgar-muniz | I thought it might be helpful to present some findings from my research article piecemeal, until I have the time to formalize it into a paper more presentable and palatable to researchers and academics. I don't think we can afford to delay a deeper conversation that goes beyond surface-level dismissals. Let us begin w... | 2025-01-26 |
https://www.lesswrong.com/posts/Gh4fyTxGCpWS4jT82/why-care-about-ai-personhood | Gh4fyTxGCpWS4jT82 | Why care about AI personhood? | francis-rhys-ward | In this new paper, I discuss what it would mean for AI systems to be persons — entities with properties like agency, theory-of-mind, and self-awareness — and why this is important for alignment. In this post, I say a little more about why you should care.
The existential safety literature focuses on the problems of con... | 2025-01-26 |
https://www.lesswrong.com/posts/x8f5s6LinKcAcR8Yq/navigating-diversity-understanding-human-behaviors-through | x8f5s6LinKcAcR8Yq | Navigating Diversity: Understanding Human Behaviors Through Genetics, Neurodivergence, and Trauma | j_passeri | Introduction
The complexity of human behavior and the wide variability in cognitive, emotional, and social abilities can often feel overwhelming to navigate, especially when we encounter harmful or maladaptive behaviors such as manipulation, insecurity-driven malice, or deceit. However, when we consider the role of gen... | 2025-01-26 |
https://www.lesswrong.com/posts/PgfMRYowXesiiauio/narratives-as-catalysts-of-catastrophic-trajectories | PgfMRYowXesiiauio | Narratives as catalysts of catastrophic trajectories | EQ | Understanding the power narratives have over different approaches to transformational technology, from Cold War to AI era.
I am conducting research into the role of narratives as propagators / mitigators of catastrophic risks. With the rapid development of AI systems in the past few years, the discourse around the pote... | 2025-01-26 |
https://www.lesswrong.com/posts/ctJhCbdczCogDDZ2p/kessler-s-second-syndrome | ctJhCbdczCogDDZ2p | Kessler's Second Syndrome | jhoogland | It started as so many dooms do, with a flash in the night sky over the South China Sea. Testing a new ASAT weapon, the Chinese military shattered a derelict spy satellite into 40,000 shards of shrapnel. The debris pattern suggested a fragmentation warhead optimized for lethal scatter. Within 48 hours, the U.S. responde... | 2025-01-26 |
https://www.lesswrong.com/posts/6SvL58wDXaQnmCyzR/enhanced-clarity-to-bridge-the-ai-labeling-gap | 6SvL58wDXaQnmCyzR | Enhanced Clarity to Bridge the AI Labeling Gap? | jimmy-1 | I’ve noticed a lot of conversations here about AI safety and whether an AI can have a certain “mindset” based on how it’s trained or programmed. It’s a fascinating topic, but it can also be confusing. Especially for the people with no familiarity with AI.
One idea that’s come up is AI labeling—basically giving a quick... | 2025-01-26 |
https://www.lesswrong.com/posts/5pgqh7ddjSJgxzN4i/notes-on-argentina | 5pgqh7ddjSJgxzN4i | Notes on Argentina | jorge-velez | Fitz Roy Massif, El Chalten, Argentina
I recently got back from Argentina, where we decided to spend part of our honeymoon. We chose Argentina for our honeymoon because my wife and I met in Patagonia in 2022, and we love the region. Behind the choice was also my desire to see a country undergoing a major economic trans... | 2025-01-26 |
https://www.lesswrong.com/posts/GbvGHeaErLJgwooyT/recommendations-for-recent-posts-sequences-on-instrumental | GbvGHeaErLJgwooyT | Recommendations for Recent Posts/Sequences on Instrumental Rationality? | benjamin-hendricks | I absolutely love the Science of Winning at Life sequence. It's a delightful blend of well-researched cognitive science and Bayesian reasoning. The initial paragraph sums up @lukeprog's motivation:
Some have suggested that the Less Wrong community could improve readers' instrumental rationality more effectively if it f... | 2025-01-26 |
https://www.lesswrong.com/posts/xtpcJjfWhn3Xn8Pu5/anomalous-tokens-in-deepseek-v3-and-r1 | xtpcJjfWhn3Xn8Pu5 | Anomalous Tokens in DeepSeek-V3 and r1 | henry-bass | “Anomalous”, “glitch”, or “unspeakable” tokens in an LLM are those that induce bizarre behavior or otherwise don’t behave like regular text.
The SolidGoldMagikarp saga is pretty much essential context, as it documents the discovery of this phenomenon in GPT-2 and GPT-3.
But, as far as I was able to tell, nobody had yet... | 2025-01-25 |
https://www.lesswrong.com/posts/HboRaYFucf9DZcTJv/so-you-have-a-chronic-health-issue | HboRaYFucf9DZcTJv | so you have a chronic health issue | agencypilled | (dump of highish agency ideas to suffer less)
Disclaimer: Not medical advice, don’t be stupid. Possibly only applicable to bio autists (gf wants me to impress that unless you know enough about bio you should not do this!!). This post does NOT recommend trying anything illegal.
Note: I did all of this and got diagnosed ... | 2025-01-26 |
https://www.lesswrong.com/posts/tcbELALha7K3E4EJa/the-road-to-evil-is-paved-with-good-objectives-framework-to | tcbELALha7K3E4EJa | The Road to Evil Is Paved with Good Objectives: Framework to Classify and Fix Misalignments. | Shivam | Abstract
There are numerous examples of AI models exhibiting behaviours that are totally unintended by their creators. This has direct implications for how we can deploy safe-to-use AI. Research on solving the 'alignment problem' has included both aligning AI with predefined objectives and analyzing misalignment in vari... | 2025-01-30
https://www.lesswrong.com/posts/QxZmDx7K4w7eN26d2/brainrot | QxZmDx7K4w7eN26d2 | Brainrot | jhoogland | January: In early 2026, Meta launches a fleet of new AI influencers, targeting the massive audience displaced by the Xiaohongshu-TikTok wars. They are beautiful, funny, smart—whatever you want them to be. Equipped with the latest in online learning, the agents immediately begin adapting to social media trends as they o... | 2025-01-26 |
https://www.lesswrong.com/posts/XvyAeymaRi95MSLZD/the-rising-sea | XvyAeymaRi95MSLZD | The Rising Sea | jhoogland | And then we hit a wall.
Nobody expected it. Well... almost nobody. Yann LeCun posted his "I told you so's" all over X. Gary Marcus insisted he'd predicted this all along. Sam Altman pivoted, declaring o3 was actually already ASI.
The first rumors of scaling laws breaking down were already circulating in late 2024. By l... | 2025-01-25 |
https://www.lesswrong.com/posts/kxAzGvHZpjLzbf4Pw/ai-safety-in-secret | kxAzGvHZpjLzbf4Pw | AI Safety in secret | michael-flood | Question to the AI Safety researchers here: if, in the next four years, the US government locks down the domestic AI labs to begin a Manhattan Project style dash for AGI, would you consider (if offered) a position in such a project, understanding that you would need to abide by secrecy regulations and not be able to di... | 2025-01-25 |
https://www.lesswrong.com/posts/gvFEqxitEcxthQ56q/agents-don-t-have-to-be-aligned-to-help-us-achieve-an | gvFEqxitEcxthQ56q | Agents don't have to be aligned to help us achieve an indefinite pause. | hastings-greer | One restatement of "Alignment is very hard" is "Agent X, with IQ 200, expects to achieve zero utility conditional on any Agent Y with IQ 400 being created."
Thus, during an unaligned recursive intelligence takeoff, there should be a period of time when the intelligences are smart enough to notice that they haven't solv... | 2025-01-25 |
https://www.lesswrong.com/posts/GdCCiWQWnQCWq9wBE/on-polytopes | GdCCiWQWnQCWq9wBE | On polytopes | dmitry-vaintrob | [Epistemic status: slightly ranty. This is a lightly edited slack chat, and so may be lower-quality.]
I am surprised by the perennial spikes in excitement about "polytopes" and "tropical geometry on activation space" in machine learning and interpretability[1]. I'm not going to discuss tropical geometry in this post in... | 2025-01-25 |
https://www.lesswrong.com/posts/cG9FrSFtgS852vugG/ai-and-non-existence | cG9FrSFtgS852vugG | AI and Non-Existence. | Eleven | Imagine 2 valleys. One valley leads to billions and billions of years of extreme joy, while the other valley leads to billions and billions of years of extreme suffering. A superintelligence comes to you and tells you that there is a 98% chance that you will end up in the valley of joy for billions of years and a 2% ch... | 2025-01-25 |
https://www.lesswrong.com/posts/hzXFthtujYhDXMgFd/tbd-3 | hzXFthtujYhDXMgFd | Half-baked idea: a straightforward method for learning environmental goals? | Q Home | Epistemic status: I want to propose a method of learning environmental goals (a super big, super important subproblem in Alignment). It's informal, so has a lot of gaps. I worry I missed something obvious, rendering my argument completely meaningless. I asked LessWrong feedback team, but they couldn't get someone knowl... | 2025-02-04 |
https://www.lesswrong.com/posts/6PzxavpBrJAxqs9gz/a-concise-definition-of-what-it-means-to-win | 6PzxavpBrJAxqs9gz | A concise definition of what it means to win | testingthewaters | A concise definition of what it means to win[1]
Amor vincit omnia
What does it mean for AI alignment to have “gone well”? Many answers have been proposed, but here is mine. A few basic requirements:
We don’t all die immediately, or in a few months, or a few years (the “notkilleveryoneism” requirement, if you will)AI do... | 2025-01-25 |
https://www.lesswrong.com/posts/XZhfdxkL4wchdgBzk/a-floating-cube-rejected-hle-submission | XZhfdxkL4wchdgBzk | A Floating Cube - Rejected HLE submission | shankar-sivarajan | This question I submitted got rejected from the Humanity's Last Exam (HLE) benchmark set for being too easy. I'm really proud of it though, so I figured I'd post it here.
A wooden cube of unit side length and relative density 0.75 floats stably in a pool of water. What is the distance from the highest point on the cube... | 2025-01-25 |
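For context, the "too easy" reading of the problem can be sketched as below. This is only the naive Archimedes calculation under the assumption that the cube floats with its top face horizontal; the problem's stability condition presumably rules that orientation out, which is the intended subtlety.

```python
# Naive sketch (assumption: cube floats face-up, which stability may forbid).
# Archimedes: the submerged volume fraction equals the relative density.
side = 1.0
relative_density = 0.75

submerged_depth = relative_density * side      # 0.75 of the cube underwater
height_above_water = side - submerged_depth

print(height_above_water)  # 0.25 under the (likely wrong) face-up assumption
```

The actual answer requires finding the stable tilted equilibrium, which is what makes the question harder than it first appears.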
https://www.lesswrong.com/posts/EQwrKvCuBGpoyogCZ/sleeper-agents-appear-resilient-to-activation-steering | EQwrKvCuBGpoyogCZ | Sleeper agents appear resilient to activation steering | lucy-wingard | Produced as the capstone project for AI Safety Fundamentals Course Oct 2024 - Jan 2025
Overview
Anthropic's paper Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training[1] demonstrated that it is possible to create a misaligned AI that is resilient to our current best safety practices (RLHF, SFT, ... | 2025-02-03 |
https://www.lesswrong.com/posts/5XC66LN3gS3w6mYnR/axrp-episode-38-6-joel-lehman-on-positive-visions-of-ai | 5XC66LN3gS3w6mYnR | AXRP Episode 38.6 - Joel Lehman on Positive Visions of AI | DanielFilan | YouTube link
Typically this podcast talks about how to avert destruction from AI. But what would it take to ensure AI promotes human flourishing as well as it can? Is alignment to individuals enough, and if not, where do we go from here? In this episode, I talk with Joel Lehman about these questions.
Topics we discuss:... | 2025-01-24 |
https://www.lesswrong.com/posts/oJ3ftRpihtkiPey6T/why-i-m-pouring-cold-water-in-my-left-ear-and-you-should-too | oJ3ftRpihtkiPey6T | Why I'm Pouring Cold Water in My Left Ear, and You Should Too | maloew-valenar | Why?
A little while ago, I read these posts about how pouring really cold water in people's left ear solves an extreme form of rationalizing in patients with Anosognosia and might(?) make people better calibrated in general. Last month, I asked whether anyone had checked that, because it seemed like something that woul... | 2025-01-24 |