| url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/EPefYWjuHNcNH4C7E/attribution-based-parameter-decomposition | EPefYWjuHNcNH4C7E | Attribution-based parameter decomposition | Lblack | This is a linkpost for Apollo Research's new interpretability paper:
"Interpretability in Parameter Space: Minimizing Mechanistic Description Length with Attribution-based Parameter Decomposition".
We introduce a new method for directly decomposing neural network parameters into mechanistic components.
Motivation
At Ap... | 2025-01-25 |
https://www.lesswrong.com/posts/dTs7ZFHHtQ8CiFZT2/better-antibodies-by-engineering-targets-not-engineering | dTs7ZFHHtQ8CiFZT2 | Better antibodies by engineering targets, not engineering antibodies (Nabla Bio) | abhishaike-mahajan | Note: Thank you to Surge Biswas (founder of Nabla Bio) for comments on this draft and Dylan Reid (an investor in Nabla) for various antibody discussions! Also, thank you to Martin Pacesa for adding some insight on a paper of his I discuss here (his comments are included).
Introduction
Antibody design startups are... | 2025-01-13 |
https://www.lesswrong.com/posts/Zwg4q8XTaLXRQofEt/finding-features-causally-upstream-of-refusal | Zwg4q8XTaLXRQofEt | Finding Features Causally Upstream of Refusal | daniel-lee | This work is the result of Daniel and Eric's 2-week research sprint as part of Neel Nanda and Arthur Conmy's MATS 7.0 training phase. Andy was the TA during the research sprint. After the sprint, Daniel and Andy extended the experiments and wrote up the results. A notebook that contains all the analyses is available he... | 2025-01-14 |
https://www.lesswrong.com/posts/6bgAzPqNppojyGL2v/zvi-s-2024-in-movies | 6bgAzPqNppojyGL2v | Zvi’s 2024 In Movies | Zvi | Now that I am tracking all the movies I watch via Letterboxd, it seems worthwhile to go over the results at the end of the year, and look for lessons, patterns and highlights.
Table of Contents
The Rating Scale.
The Numbers.
Very Briefly on the Top Picks and Whether You Should See Them.
Movies Have Decreasing Marginal ... | 2025-01-13 |
https://www.lesswrong.com/posts/xXtDCeYLBR88QWebJ/heritability-five-battles | xXtDCeYLBR88QWebJ | Heritability: Five Battles | steve2152 | (See changelog at the bottom for minor updates since publication.)
0.1 tl;dr
This is an opinionated but hopefully beginner-friendly discussion of heritability: what is it, what do we know about it, and how we should think about it? I structure my discussion around five contexts in which people talk about the heritabili... | 2025-01-14 |
https://www.lesswrong.com/posts/hLkDB7T9HT4TMozw7/paper-club-he-et-al-on-modular-arithmetic-part-i | hLkDB7T9HT4TMozw7 | Paper club: He et al. on modular arithmetic (part I) | dmitry-vaintrob | In this post we’ll be looking at the recent paper “Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks” by He et al. This post is partially a sequel to my earlier post on grammars and subgrammars, though it can be read independently. There will be a more technical part I... | 2025-01-13 |
https://www.lesswrong.com/posts/ggo4Q6Y6dcTEeGkCg/ai-models-inherently-alter-human-values-so-alignment-based | ggo4Q6Y6dcTEeGkCg | AI models inherently alter "human values." So, alignment-based AI safety approaches must better account for value drift | bfitzgerald3132 | Hi all. This post outlines my concerns over the effectiveness of alignment-based solutions to AI safety problems. My basic worry is that any rearticulation of human values, whether by a human or an AI, necessarily reshapes them. If this is the case, alignment is not sufficient for AI safety, since an aligned AI still c... | 2025-01-13 |
https://www.lesswrong.com/posts/RHrHojQ38BnHxAet2/six-small-cohabitive-games | RHrHojQ38BnHxAet2 | Six Small Cohabitive Games | Screwtape | Previously: Competitive, Cooperative, and Cohabitive, Cohabitive Games So Far, Optimal Weave: A Prototype Cohabitive Game.
I like tinkering with game mechanics, and the idea of cohabitive games is an itch I keep trying to scratch. Here's six simple games I made over the last year or two as I was trying to get something... | 2025-01-15 |
https://www.lesswrong.com/posts/BgXTsN6iLh8hkx9fi/moderately-more-than-you-wanted-to-know-depressive-realism | BgXTsN6iLh8hkx9fi | Moderately More Than You Wanted To Know: Depressive Realism | JustisMills | Depressive realism is the idea that depressed people have more accurate beliefs than the general population. It’s a common factoid in “things I learned” lists, and often posited as a matter of settled science.
In this post, I’ll explore whether it’s true.
Where It Began
The depressive realism hypothesis was first studi... | 2025-01-13 |
https://www.lesswrong.com/posts/TkWCKzWjcbfGzdNK5/applying-traditional-economic-thinking-to-agi-a-trilemma | TkWCKzWjcbfGzdNK5 | Applying traditional economic thinking to AGI: a trilemma | steve2152 | Traditional economics thinking has two strong principles, each based on abundant historical data:
Principle (A): No “lump of labor”: If human population goes up, there might be some wage drop in the very short term, because the demand curve for labor slopes down. But in the longer term, people will find new productive ... | 2025-01-13 |
https://www.lesswrong.com/posts/HEdnYQhtEvDrWH78P/myths-about-nonduality-and-science-by-gary-weber | HEdnYQhtEvDrWH78P | Myths about Nonduality and Science by Gary Weber | a schizophrenic mind | The text is a lightly edited transcript of the presentation "Myths about Nonduality and Science" given by Gary Weber at the conference “Science and Non-Duality” in 2015 at San Jose, CA[1]. Non-duality (from Sanskrit advaita) is an ancient way to describe the state of the absence of problematic self-referential thoughts... | 2025-01-15 |
https://www.lesswrong.com/posts/WJ7y8S9WdKRvrzJmR/building-ai-research-fleets | WJ7y8S9WdKRvrzJmR | Building AI Research Fleets | bgold | From AI scientist to AI research fleet
Research automation is here (1, 2, 3). We saw it coming and planned ahead, which puts us ahead of most (4, 5, 6). But that foresight also comes with a set of outdated expectations that are holding us back. In particular, research automation is not just about “aligning the first AI... | 2025-01-12 |
https://www.lesswrong.com/posts/6Jo4oCzPuXYgmB45q/how-quickly-could-robots-scale-up | 6Jo4oCzPuXYgmB45q | How quickly could robots scale up? | Benjamin_Todd | Once robots can do physical jobs, how quickly could they become a significant part of the work force?
Here's a couple of fermi estimates, showing (among other things) that converting car factories might be able to produce 1 billion robots per year in under 5 years.
Nothing too new here if you've been following Carl Shu... | 2025-01-12 |
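A Fermi estimate of this kind can be written out explicitly. A minimal sketch follows; all input numbers are illustrative assumptions of mine, not figures taken from the post:

```python
# Illustrative Fermi estimate: robots per year if car factories were converted.
# Every input below is a rough assumption, not data from the post.
cars_per_year = 90e6         # assumed global car production per year
car_mass_kg = 1500           # assumed typical car mass
robot_mass_kg = 100          # assumed humanoid-robot mass
conversion_efficiency = 0.8  # assumed loss from retooling factories

# By mass throughput, converted factories could produce roughly:
robots_per_year = cars_per_year * (car_mass_kg / robot_mass_kg) * conversion_efficiency
print(f"{robots_per_year:.1e} robots/year")  # on the order of 1e9
```

The point of writing it this way is that each assumption is explicit and can be swapped out, which is the value of a Fermi estimate over a bare headline number.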
https://www.lesswrong.com/posts/5Sq3XBcRmd8ereHgo/agi-will-not-make-labor-worthless | 5Sq3XBcRmd8ereHgo | AGI Will Not Make Labor Worthless | maxwell-tabarrok | The discovery and use of machinery may be … injurious to the labouring class, as some of their number will be thrown out of employment, and population will become redundant.
- David Ricardo
Fears about human labor getting replaced by machines go back hundreds if not thousands of years. These fears continue today in res... | 2025-01-12 |
https://www.lesswrong.com/posts/s39XbvtzzmusHxgky/the-purposeful-drunkard | s39XbvtzzmusHxgky | The purposeful drunkard | dmitry-vaintrob | I'm behind on a couple of posts I've been planning, but am trying to post something every day if possible. So today I'll post a cached fun piece on overinterpreting a random data phenomenon that's tricked me before.
Recall that a random walk or a "drunkard's walk" (as in the title) is a sequence of vectors x_1, x_2, … in s... | 2025-01-12 |
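The definition above can be simulated in a few lines. This is a minimal illustration of a random walk as cumulative sums of i.i.d. step vectors; the step distribution and dimension are my own choices, not the post's:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk(n_steps: int, dim: int = 2) -> np.ndarray:
    """Return positions x_1, ..., x_n of a random walk: cumulative
    sums of i.i.d. standard-normal step vectors."""
    steps = rng.standard_normal((n_steps, dim))
    return np.cumsum(steps, axis=0)

walk = random_walk(10_000)
print(walk.shape)  # (10000, 2)
# For an unbiased walk, the expected distance from the origin
# after n steps grows like sqrt(n), not linearly.
print(np.linalg.norm(walk[-1]))
```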
https://www.lesswrong.com/posts/eqSHtF3eHLBbZa3fR/cast-it-into-the-fire-destroy-it | eqSHtF3eHLBbZa3fR | Cast it into the fire! Destroy it! | panasenco | We should only use AGI once to make it so that no one, including ourselves, can use it ever again.
I'm terrified of both getting atomized by nanobots and of my sense of morality disintegrating in Extremistan. We don't need AGI to create a post-scarcity society, cure cancer, solve climate change, build a Dyson sphere, c... | 2025-01-13 |
https://www.lesswrong.com/posts/vGeuBKQ7nzPnn5f7A/why-modelling-multi-objective-homeostasis-is-essential-for | vGeuBKQ7nzPnn5f7A | Why modelling multi-objective homeostasis is essential for AI alignment (and how it helps with AI safety as well) | roland-pihlakas | I notice that there has been very little, if any, discussion on why and how considering homeostasis is significant, even essential for AI alignment and safety. This post aims to begin amending that situation. In this post I will treat alignment and safety as explicitly separate subjects, which both benefit from homeos... | 2025-01-12 |
https://www.lesswrong.com/posts/ArK2YhmpTy8XftubN/extending-control-evaluations-to-non-scheming-threats | ArK2YhmpTy8XftubN | Extending control evaluations to non-scheming threats | joshua-clymer | Buck Shlegeris and Ryan Greenblatt originally motivated control evaluations as a way to mitigate risks from ‘scheming’ AI models: models that consistently pursue power-seeking goals in a covert way; however, many adversarial model psychologies are not well described by the standard notion of scheming. For example, mode... | 2025-01-12 |
https://www.lesswrong.com/posts/GvMakH65LS86RFn5x/rolling-thresholds-for-agi-scaling-regulation | GvMakH65LS86RFn5x | Rolling Thresholds for AGI Scaling Regulation | Larks | null | 2025-01-12 |
https://www.lesswrong.com/posts/ctAwj6p5BhckHnac6/a-novel-idea-for-harnessing-magnetic-reconnection-as-an | ctAwj6p5BhckHnac6 | A Novel Idea for Harnessing Magnetic Reconnection as an Energy Source | Unknown | Introduction
Magnetic reconnection—the sudden rearrangement of magnetic field lines—drives dramatic energy releases in astrophysical and laboratory plasmas. Solar flares, tokamak disruptions, and magnetospheric substorms all hinge on reconnection. Usually, these events are uncontrolled and often destructive. But what i... | 2025-01-12 |
https://www.lesswrong.com/posts/iDswXwviMRAfT6rB5/5-000-calories-of-peanut-butter-every-week-for-3-years | iDswXwviMRAfT6rB5 | 5,000 calories of peanut butter every week for 3 years straight | declan-molony | decided to lose weight. should be easy considering I haven't changed my diet in years and eat the same thing every day.
breakfast: quarter cup each of black beans, kidney beans, and garbanzo beans; quarter cup of walnuts; 4 scrambled eggs with salsa on top
snack: sliced apple with peanut butter
lunch: quarter cup each o... | 2025-01-31 |
https://www.lesswrong.com/posts/62eyor69yE2gyMk2Q/ai-safety-at-the-frontier-paper-highlights-december-24 | 62eyor69yE2gyMk2Q | AI Safety at the Frontier: Paper Highlights, December '24 | gasteigerjo | This is the selection of AI safety papers from my blog "AI Safety at the Frontier". The selection primarily covers ML-oriented research and frontier models. It's primarily concerned with papers (arXiv, conferences etc.), not LessWrong or Alignment Forum posts. As such, it should be a nice addition for people primarily ... | 2025-01-11 |
https://www.lesswrong.com/posts/rMhoYpAEkBtik8bL2/fluoridation-the-rct-we-still-haven-t-run-but-should | rMhoYpAEkBtik8bL2 | Fluoridation: The RCT We Still Haven't Run (But Should) | ChristianKl | Epistemic status: My own idea written up with the help of Gemini 2.0 Experimental in dialog.
We're still coasting on the fumes of 1940s data when it comes to water fluoridation. Back then, fluoride toothpaste was a novelty, fluoridated salt was uncommon, and water filters were practically science fiction, let alone ones ... | 2025-01-11 |
https://www.lesswrong.com/posts/HgG5EFAg925ahLAGe/in-defense-of-a-butlerian-jihad | HgG5EFAg925ahLAGe | In Defense of a Butlerian Jihad | sloonz | [Epistemic Status: internally strongly convinced that it is centrally correct, but only from armchair reasoning, with only weak links to actual going-out-in-the-territory, so beware: the outside view says it is mostly wrong]
I have been binge-watching the excellent Dwarkesh Patel during my last vacations. There is, howeve... | 2025-01-11 |
https://www.lesswrong.com/posts/ogzkthL9tshddM63P/do-antidepressants-work-first-take | ogzkthL9tshddM63P | Do Antidepressants work? (First Take) | jacgoldsm | I've been researching the controversy over whether antidepressants truly work or whether they are not superior to a placebo. The latter possibility really contains two possibilities itself: either placebos are effective at treating depression, or the placebo effect reflects mean reversion. Here, the term "antidepressan... | 2025-01-12 |
https://www.lesswrong.com/posts/yj2hyrcGMwpPooqfZ/a-proposal-for-iterated-interpretability-with-known | yj2hyrcGMwpPooqfZ | A proposal for iterated interpretability with known-interpretable narrow AIs | peter-berggren | I decided, as a challenge to myself, to spend 5 minutes, by the clock, solving the alignment problem. This is the result, plus 25 minutes of writing it up. As such, it might be a bit unpolished, but I hope that it can still be instructive.
Background
This proposal is loosely based on iterated amplification, a proposal ... | 2025-01-11 |
https://www.lesswrong.com/posts/n8vobiGGrryjtAJTx/have-frontier-ai-systems-surpassed-the-self-replicating-red | n8vobiGGrryjtAJTx | Have frontier AI systems surpassed the self-replicating red line? | wheelspawn | Introduction
A few weeks ago, researchers at Fudan University published a paper claiming frontier AI systems had surpassed the self-replicating red line. Notably, they were able to do this with Llama3.1-70B-Instruct and Alibaba’s Qwen2.5-72B-Instruct, two large language models relatively weaker than the most capable fron... | 2025-01-11 |
https://www.lesswrong.com/posts/iEemkt5XSg9QrBbvP/we-need-a-universal-definition-of-agency-and-related-words | iEemkt5XSg9QrBbvP | We need a universal definition of 'agency' and related words | CstineSublime | And by "we" I mean "I". I'm the one struggling.
Agency with a 'y' basically means “The condition of being in action; operation.” Or the means or mode of acting. The context in which I hear it used most often is sociological:
"Agency is the capacity of individuals to have the power and resources to fulfill their potential." O... | 2025-01-11 |
https://www.lesswrong.com/posts/M2EFtA4zCzJ3G56yr/ai-for-medical-care-for-hard-to-treat-diseases | M2EFtA4zCzJ3G56yr | AI for medical care for hard-to-treat diseases? | CronoDAS | With LLM-based AI passing benchmarks that would challenge people with a Ph.D in relevant fields, I'm left wondering what they can do for real-world problems for which nobody knows the correct answer, such as how to treat potentially fatal medical conditions with no known cure.
Are we at the point where AI can do better... | 2025-01-10 |
https://www.lesswrong.com/posts/jCeRXgog38zRCci4K/topological-debate-framework | jCeRXgog38zRCci4K | Topological Debate Framework | lunatic_at_large | I would like to thank Professor Vincent Conitzer, Caspar Oesterheld, Bernardo Subercaseaux, Matan Shtepel, and Robert Trosten for many excellent conversations and insights. All mistakes are my own.
I think that there's a fundamental connection between AI Safety via Debate and Guaranteed Safe AI via topology. After thin... | 2025-01-16 |
https://www.lesswrong.com/posts/7zxnqk9C7mHCx2Bv8/beliefs-and-state-of-mind-into-2025 | 7zxnqk9C7mHCx2Bv8 | Beliefs and state of mind into 2025 | RussellThor | This post is to record the state of my thinking at the start of 2025. I plan to update these reflections in 6-12 months depending on how much changes in the field of AI.
1. Summary
It is best not to pause AI progress until at least one major AI lab achieves a system capable of providing approximately a 10x productivity... | 2025-01-10 |
https://www.lesswrong.com/posts/tG9LGHLzQezH3pvMs/recommendations-for-technical-ai-safety-research-directions | tG9LGHLzQezH3pvMs | Recommendations for Technical AI Safety Research Directions | samuel-marks | Anthropic’s Alignment Science team conducts technical research aimed at mitigating the risk of catastrophes caused by future advanced AI systems, such as mass loss of life or permanent loss of human control. A central challenge we face is identifying concrete technical work that can be done today to prevent these risks... | 2025-01-10 |
https://www.lesswrong.com/posts/mnoSJMZW4YAQ9D9Sa/what-are-some-scenarios-where-an-aligned-agi-actually-helps | mnoSJMZW4YAQ9D9Sa | What are some scenarios where an aligned AGI actually helps humanity, but many/most people don't like it? | RomanS | One can call it "deceptive misalignment": the aligned AGI works as intended, but people really don't like it.
Some scenarios I can think of, of various levels of realism:
1. Going against the creators' will
1.1. A talented politician convinces the majority of humans that the AGI is bad for humanity, and must be switche... | 2025-01-10 |
https://www.lesswrong.com/posts/FEcw6JQ8surwxvRfr/human-takeover-might-be-worse-than-ai-takeover | FEcw6JQ8surwxvRfr | Human takeover might be worse than AI takeover | tom-davidson-1 | Epistemic status -- sharing rough notes on an important topic because I don't think I'll have a chance to clean them up soon.
Summary
Suppose a human used AI to take over the world. Would this be worse than AI taking over? I think plausibly:
In expectation, human-level AI will better live up to human moral standards th... | 2025-01-10 |
https://www.lesswrong.com/posts/onyiPaxnmiDdHn7SR/no-one-has-the-ball-on-1500-russian-olympiad-winners-who-ve | onyiPaxnmiDdHn7SR | No one has the ball on 1500 Russian olympiad winners who've received HPMOR | mikhail-samin | We have contact details and can send emails to 1500 students and former students who've received hard-cover copies of HPMOR (and possibly Human Compatible and/or The Precipice) because they've won international or Russian olympiads in maths, computer science, physics, biology, or chemistry.
This includes over 60 IMO an... | 2025-01-12 |
https://www.lesswrong.com/posts/esWbhgHd6bcfsTjGL/on-dwarkesh-patel-s-4th-podcast-with-tyler-cowen | esWbhgHd6bcfsTjGL | On Dwarkesh Patel’s 4th Podcast With Tyler Cowen | Zvi | Dwarkesh Patel again interviewed Tyler Cowen, largely about AI, so here we go.
Note that I take it as a given that the entire discussion is taking place in some form of an ‘AI Fizzle’ and ‘economic normal’ world, where AI does not advance too much in capability from its current form, in meaningful senses, and we do not... | 2025-01-10 |
https://www.lesswrong.com/posts/xrv2fNJtqabN3h6Aj/tell-me-about-yourself-llms-are-aware-of-their-learned | xrv2fNJtqabN3h6Aj | Tell me about yourself: LLMs are aware of their learned behaviors | martinsq | This is the abstract and introduction of our new paper, with some discussion of implications for AI Safety at the end.
Authors: Jan Betley*, Xuchan Bao*, Martín Soto*, Anna Sztyber-Betley, James Chua, Owain Evans (*Equal Contribution).
Abstract
We study behavioral self-awareness — an LLM's ability to articulate its beh... | 2025-01-22 |
https://www.lesswrong.com/posts/PK2EmWmzngC6hPPDM/we-don-t-want-to-post-again-this-might-be-the-last-ai-safety | PK2EmWmzngC6hPPDM | We don't want to post again "This might be the last AI Safety Camp" | remmelt-ellen | We still need more funding to be able to run another edition. Our fundraiser has raised $6k so far, and will end on February 1st if it doesn't reach the $15k minimum. We need proactive donors.
If we don't get funded for this time, there is a good chance we will move on to different work in AI Safety and new commitments... | 2025-01-21 |
https://www.lesswrong.com/posts/NyJQg5uHcXiLkH2ui/is-musk-still-net-positive-for-humanity | NyJQg5uHcXiLkH2ui | Is Musk still net-positive for humanity? | mikbp | Musk's behaviour has always been controversial and he's always been kind of a dick, but I don't think it is controversial at all to say that until some years ago he was extremely net positive for society and humanity in general. However, his behaviour and actions turned much more disruptive in recent years wh... | 2025-01-10 |
https://www.lesswrong.com/posts/6PCjTM55jdYBgHNyp/activation-magnitudes-matter-on-their-own-insights-from-1 | 6PCjTM55jdYBgHNyp | Activation Magnitudes Matter On Their Own: Insights from Language Model Distributional Analysis | Matt Levinson | In my previous post, I explored the distributional properties of transformer activations, finding that they follow mixture distributions dominated by logistic-like or even heavier tailed primary components with minor modes in the tails and sometimes in the shoulders. Note that I have entirely ignored dimension here, tr... | 2025-01-10 |
https://www.lesswrong.com/posts/3eo4SSZLfpHHCqoEQ/dmitry-s-koan | 3eo4SSZLfpHHCqoEQ | Dmitry's Koan | dmitry-vaintrob | In this post I'll discuss questions about notions of "precision scale" in interpretability: how I think they're often neglected by researchers, and what I think is a good general way of operationalizing them and tracking them in experiments. Along the way I introduce a couple of new notions that have been useful in my ... | 2025-01-10 |
https://www.lesswrong.com/posts/mdewsGKQj48YuGqsR/nao-updates-january-2025 | mdewsGKQj48YuGqsR | NAO Updates, January 2025 | jkaufman | null | 2025-01-10 |
https://www.lesswrong.com/posts/sLsweqaHKJyMPRv68/ai-forecasting-benchmark-congratulations-to-q4-winners-q1 | sLsweqaHKJyMPRv68 | AI Forecasting Benchmark: Congratulations to Q4 Winners + Q1 Practice Questions Open | ChristianWilliams | null | 2025-01-10 |
https://www.lesswrong.com/posts/Kawut4cACf8Djcpr2/how-do-you-decide-to-phrase-predictions-you-ask-of-others | Kawut4cACf8Djcpr2 | How do you decide to phrase predictions you ask of others? (and how do you make your own?) | CstineSublime | I'd like your practical advice and learned experience: how do you go about phrasing predictions when you ask people "Is X going to happen?" - in order to get answers that reflect the actual thing you're trying to get a read on?
Now I'll use a fictitious country with a fictitious sitting president running for reelection -... | 2025-01-10 |
https://www.lesswrong.com/posts/tdrK7r4QA3ifbt2Ty/is-ai-alignment-enough | tdrK7r4QA3ifbt2Ty | Is AI Alignment Enough? | panasenco | Virtually everyone I see in the AI safety community seems to believe that working on AI alignment is the key to ensuring a safe future. However, it seems to me that AI alignment is at best a secondary instrumental goal that can't in and of itself achieve our terminal goal. At worst, it's a complete distraction.
Definin... | 2025-01-10 |
https://www.lesswrong.com/posts/6SxipFfZ2WjJs7nZY/you-are-too-dumb-to-understand-insurance | 6SxipFfZ2WjJs7nZY | You are too dumb to understand insurance | Lorec | [ cross-posted from my blog ]
"But with regard to the things that are done from fear of greater evils or for some noble object (e.g. if a tyrant were to order one to do something base, having one's parents and children in his power, and if one did the action they were to be saved, but otherwise would be put to death), i... | 2025-01-09 |
https://www.lesswrong.com/posts/D7wg2rZJKPCce3HNu/is-ai-hitting-a-wall-or-moving-faster-than-ever | D7wg2rZJKPCce3HNu | Is AI Hitting a Wall or Moving Faster Than Ever? | garrison | null | 2025-01-09 |
https://www.lesswrong.com/posts/LvswJts75fnpdjAEj/mats-mentor-selection | LvswJts75fnpdjAEj | MATS mentor selection | DanielFilan | Introduction
MATS currently has more people interested in being mentors than we are able to support—for example, for the Winter 2024-25 Program, we received applications from 87 prospective mentors who cumulatively asked for 223 scholars[1] (for a cohort where we expected to only accept 80 scholars). As a result, we ne... | 2025-01-10 |
https://www.lesswrong.com/posts/nAPTfhaJcPF25Yjyw/expevolu-part-ii-buying-land-to-create-countries | nAPTfhaJcPF25Yjyw | Expevolu, Part II: Buying land to create countries | Fernando | This is the second of a series of three posts outlining the expevolu system; if you haven’t read the first one I’d recommend you start there:
Expevolu, a laissez-faire approach to country creation
PART II – Perpetual Auctions
Table of Contents:
1) The Problem
2) The Holdout Problem
3) Perpetual Auctions
4) The bypass b... | 2025-01-09 |
https://www.lesswrong.com/posts/gLmwmzq5sCijDesGc/discursive-warfare-and-faction-formation | gLmwmzq5sCijDesGc | Discursive Warfare and Faction Formation | Benquo | Response to Discursive Games, Discursive Warfare
The discursive distortions you discuss serve two functions:
1 Narratives can only serve as effective group identifiers by containing fixed elements that deviate from what naive reason would think. In other words, something about the shared story has to be a costly signal... | 2025-01-09 |
https://www.lesswrong.com/posts/KCZ6Fj3tTDzNRJM6p/can-we-rescue-effective-altruism | KCZ6Fj3tTDzNRJM6p | Can we rescue Effective Altruism? | pktechgirl | Last year Timothy Telleen-Lawton and I recorded a podcast episode talking about why I quit Effective Altruism and thought he should too. This week we have a new episode, talking about what he sees in Effective Altruism and the start of a road map for rescuing it.
Audio recording
Transcript
Thanks to everyone who listen... | 2025-01-09 |
https://www.lesswrong.com/posts/xkpPLR3S4SASPeTgC/ai-98-world-ends-with-six-word-story | xkpPLR3S4SASPeTgC | AI #98: World Ends With Six Word Story | Zvi | The world is kind of on fire. The world of AI, in the very short term and for once, is not, as everyone recovers from the avalanche that was December, and reflects.
Altman was the star this week. He has his six word story, and he had his interview at Bloomberg and his blog post Reflections. I covered the latter two of t... | 2025-01-09 |
https://www.lesswrong.com/posts/myxADMjWsmMEKT5F6/pibbss-fellowship-2025-bounties-and-cooperative-ai-track | myxADMjWsmMEKT5F6 | PIBBSS Fellowship 2025: Bounties and Cooperative AI Track Announcement | DusanDNesic | We're excited to announce that the PIBBSS Fellowship 2025 now includes a dedicated Cooperative AI track, supporting research that advances our understanding of cooperation in artificial intelligence systems. We are also announcing 300 USD bounties for each referral that becomes a Fellow. Read below for details.
What is... | 2025-01-09 |
https://www.lesswrong.com/posts/DswYL7CLseLAAjedB/many-worlds-and-the-problems-of-evil | DswYL7CLseLAAjedB | Many Worlds and the Problems of Evil | jrwilb@googlemail.com | Summary: The Many-Worlds interpretation of quantum mechanics helps us towards an overall evaluation of existence. I consider some recent work in philosophy of religion on the quantum multiverse and the Problem of Evil, as well as Olaf Stapledon’s Starmaker.
I’ve previously suggested that when we think about the ethical... | 2025-01-09 |
https://www.lesswrong.com/posts/wEFPNaRkMvpsfsvBD/the-everyone-can-t-be-wrong-prior-causes-ai-risk-denial-but | wEFPNaRkMvpsfsvBD | The "Everyone Can't Be Wrong" Prior causes AI risk denial but helped prehistoric people | Max Lee | The "Everyone Can't Be Wrong" Prior assumes that if everyone is working on something, that thing is important. Conversely, if nobody is working on something, that thing is unimportant.
Why
In prehistoric times, this was actually true. Tribes with good habits such as cleanliness survived and spread, while tribes with ba... | 2025-01-09 |
https://www.lesswrong.com/posts/MmkFARsEEQaEEuayu/governance-course-week-1-reflections | MmkFARsEEQaEEuayu | Governance Course - Week 1 Reflections | Diatom | [Epistemic status: I'm trying to make my thinking legible to myself and others, rather than trying to compose something highly polished here. I think I have good reasons for saying what I say and will try to cite sources where possible, but nonetheless take it with some grains of salt. As with my last post, I am loweri... | 2025-01-09 |
https://www.lesswrong.com/posts/vFi5jo6bLAGdMGBJk/thoughts-on-the-in-context-scheming-ai-experiment | vFi5jo6bLAGdMGBJk | Thoughts on the In-Context Scheming AI Experiment | ExCeph | These are thoughts in response to the paper "Frontier Models are Capable of In-context Scheming" by Meinke et al. 2024-12-05, published in Apollo Research. Link: https://static1.squarespace.com/static/6593e7097565990e65c886fd/t/6751eb240ed3821a0161b45b/1733421863119/in_context_scheming_reasoning_paper.pdf.
I found the... | 2025-01-09 |
https://www.lesswrong.com/posts/ayLaWYokJSLMKuq2f/a-systematic-approach-to-ai-risk-analysis-through-cognitive | ayLaWYokJSLMKuq2f | A Systematic Approach to AI Risk Analysis Through Cognitive Capabilities | tom-david | Epistemic status: This idea emerged during my participation in the MATS program this summer. While I intended to develop it further and conduct more rigorous analysis, time constraints led me to publish this initial version (30-60m of work). I'm ... | 2025-01-09 |
https://www.lesswrong.com/posts/8C6kKkYu6mc2SxGzs/gothenburg-lw-acx-meetup-6 | 8C6kKkYu6mc2SxGzs | Gothenburg LW / ACX meetup | stefan-1 | Let's start the new year with a rationalist New Year's tradition: forecasting. We'll discuss everything about forecasting and prediction markets, plus make our own predictions we can score next year!
We will be meeting at the same place as usual, the Condeco at Fredsgatan, on the upper floor. You can recognize our table... | 2025-01-08 |
https://www.lesswrong.com/posts/wxBtdaTaihbHJqqef/how-can-humanity-survive-a-multipolar-agi-scenario | wxBtdaTaihbHJqqef | How can humanity survive a multipolar AGI scenario? | Unknown | Short introduction
Multipolar scenarios, as I will use the term, are scenarios in which multiple unrelated actors have access to their own personal AGIs. For the sake of discussion, assume that we have solved alignment and that AGIs will follow the orders of their owners.
A few ways we might arrive at a multipolar AGI scenario
The... | 2025-01-09 |
https://www.lesswrong.com/posts/YooqjbxGu2WZ3vszq/aristocracy-and-hostage-capital | YooqjbxGu2WZ3vszq | Aristocracy and Hostage Capital | arjun-panickssery | There’s a conventional narrative by which the pre-20th century aristocracy was the “old corruption” where civil and military positions were distributed inefficiently due to nepotism until the system was replaced by a professional civil service after more enlightened thinkers prevailed. Orwell writes in 1941 (emphasis m... | 2025-01-08 |
https://www.lesswrong.com/posts/3c5tx5WjZ5Yvniq6Y/what-is-the-most-impressive-game-llms-can-play-well | 3c5tx5WjZ5Yvniq6Y | What is the most impressive game LLMs can play well? | Amyr | Epistemic status: This is an off-the-cuff question.
~5 years ago there was a lot of exciting progress on game playing through reinforcement learning (RL). Now we have basically switched paradigms, pretraining massive LLMs on ~the internet and then apparently doing some really trivial unsophisticated RL on top of that -... | 2025-01-08 |
https://www.lesswrong.com/posts/6P8GYb4AjtPXx6LLB/tips-and-code-for-empirical-research-workflows | 6P8GYb4AjtPXx6LLB | Tips and Code for Empirical Research Workflows | john-hughes | Our research is centered on empirical research with LLMs. If you are conducting similar research, these tips and tools may help streamline your workflow and increase experiment velocity. We are also releasing two repositories to promote sharing more tooling within the AI safety community.
John Hughes is an independent ... | 2025-01-20 |
https://www.lesswrong.com/posts/GR9DyBye8Psw9WFJF/near-term-discussions-need-something-smaller-and-more | GR9DyBye8Psw9WFJF | Near term discussions need something smaller and more concrete than AGI | ryan_b | Motivation
I want a more concrete concept than AGI[1] to talk and write with. I want something more concrete because I am tired of the costs associated with how big, inferentially distant, and many-pathed the concept of AGI is, which makes conversation expensive. Accounting for the bigness, inferential distance, and ma... | 2025-01-11 |
https://www.lesswrong.com/posts/devng6zQYmBDkg9wR/ann-altman-has-filed-a-lawsuit-in-us-federal-court-alleging | devng6zQYmBDkg9wR | Ann Altman has filed a lawsuit in US federal court alleging that she was sexually abused by Sam Altman | quanticle | On January 6, 2025, Ann Altman filed a lawsuit in the Eastern District of Missouri alleging that Sam Altman carried out multiple acts of sexual abuse against her over "a period of approximately eight or nine years" starting in 1997. The case number is 4:25-cv-00017, for those who have PACER access.
I find the lawsuit c... | 2025-01-08 |
https://www.lesswrong.com/posts/f5CERJJuCmnc4Yth8/ai-safety-outreach-seminar-and-social-online | f5CERJJuCmnc4Yth8 | AI Safety Outreach Seminar & Social (online) | Linda Linsefors | AI Safety Outreach = Making more people better informed about AI risk, through any means that works.
Who is this event for:
Anyone who wants to help with, or might want to help with AI safety outreach. No qualifications required.
Approximate Schedule:
There will be a mix of talks, 1-on-1s with other participants, Q&A, ... | 2025-01-08 |
https://www.lesswrong.com/posts/htivBwW2ym7pxri3Q/xx-by-rian-hughes-pretentious-bullshit | htivBwW2ym7pxri3Q | XX by Rian Hughes: Pretentious Bullshit | yair-halberstadt | SPOILER WARNING: Extensive spoilers for XX.
XX was recommended by one of the comments on my Sci-Fi micro-reviews, so I read it. I can't seem to find any critical reviews of the book online so I'm going to while away a few hours of my life in a misguided attempt to reveal that the emperor has no clothes.
Typography
Firs... | 2025-01-08 |
https://www.lesswrong.com/posts/JdbnwjHFGNqZE3AHf/the-absolute-basics-of-representation-theory-of-finite | JdbnwjHFGNqZE3AHf | The absolute basics of representation theory of finite groups | dmitry-vaintrob | This will be an "ML-oriented" introduction to representation theory of finite groups. It is an introductory sketch that orients towards both a language and a result (the "real semisimplicity" result) that is useful for thinking about the subject in an interpretability context.
This is a somewhat low-effort math post (s... | 2025-01-08 |
https://www.lesswrong.com/posts/gG4EhhWtD2is9Cx7m/implications-of-the-ai-security-gap | gG4EhhWtD2is9Cx7m | Implications of the AI Security Gap | dan-braun-1 | This post reflects my personal opinion and not necessarily that of other members of Apollo Research or any of the people acknowledged below. Thanks to Jarrah Bloomfield, Lucius Bushnaq, Marius Hobbhahn, Axel Højmark, and Stefan Heimersheim for comments/discussions.
I find that people in the AI/AI safety community have ... | 2025-01-08 |
https://www.lesswrong.com/posts/rC9BteCBHDif2ccFv/what-are-polysemantic-neurons | rC9BteCBHDif2ccFv | What are polysemantic neurons? | vishakha-agrawal | This is an article in the featured articles series from AISafety.info. AISafety.info writes AI safety intro content. We'd appreciate any feedback.
The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.
For a “monosemantic” neuron, there’s a single feature... | 2025-01-08 |
https://www.lesswrong.com/posts/m9asQL8tadHJQQPZT/the-type-of-writing-that-pushes-women-away | m9asQL8tadHJQQPZT | The Type of Writing that Pushes Women Away | sdjfhkj-dkjfks | Disclaimer: I don't think people who post here generally exclude women on purpose. I think that to whatever degree women are less represented, this is unintentional.
This post combines thoughts I had while reading Is Being Sexy For Your Homies by Valentine with intuitions developed from reading posts on the site acros... | 2025-01-08 |
https://www.lesswrong.com/posts/6Fo8fjvpL7pwCTz3t/on-eating-the-sun | 6Fo8fjvpL7pwCTz3t | On Eating the Sun | jessica.liu.taylor | The Sun is the most nutritious thing that's reasonably close. It's only 8 light-minutes away, yet contains the vast majority of mass within 4 light-years of the Earth. The next-nearest star, Proxima Centauri, is about 4.25 light-years away.
By "nutritious", I mean it has a lot of what's needed for making computers: mas... | 2025-01-08 |
https://www.lesswrong.com/posts/FcNgMbabidRSh9aGn/book-review-range-by-david-epstein | FcNgMbabidRSh9aGn | Book review: Range by David Epstein | PatrickDFarley | (This is a crosspost from TrueGeneralist.com)
Introduction
Why I read this book
Range: Why Generalists Triumph in a Specialized World is a book about generalists. And it’s pretty much the only book about generalists that’s gotten substantial traction in popular culture. Since I started this whole blog around being a ge... | 2025-01-08 |
https://www.lesswrong.com/posts/jCLKoxoYkabQM57nz/can-we-have-epiphanies-and-eureka-moments-more-frequently | jCLKoxoYkabQM57nz | Can we have Epiphanies and Eureka moments more frequently? | CstineSublime | Epiphanies and Understanding, Pain, Eureka-Moments.
TL;DR - mental-pain rushing in conveys urgent information, this experience is similar to when understanding rushes in when it is pleasurable - such as the sudden feeling of understanding of a philosophical idea or a joke. Can we reverse engineer these mechanisms to ha... | 2025-01-08 |
https://www.lesswrong.com/posts/TgBEj22qKQqYuNDLx/job-opening-swe-to-help-improve-grant-making-software | TgBEj22qKQqYuNDLx | Job Opening: SWE to help improve grant-making software | ethan-ashkie-1 | Full-Stack Software Engineer Position - (S-Process)
Survival and Flourishing Corp. (SFC) is seeking a highly numerate full-stack developer to improve and maintain existing grant evaluation software currently in use by the Survival and Flourishing Fund, making it easier for philanthropists to delegate grant-making discu... | 2025-01-08 |
https://www.lesswrong.com/posts/8p48kCyn3EvDT6vpe/markov-s-inequality-explained | 8p48kCyn3EvDT6vpe | Markov's Inequality Explained | criticalpoints | In my experience, the proofs that you see in probability theory are much shorter than the longer, more involved proofs that you might see in other areas of math (like e.g. analytical number theory). But that doesn't mean that good technique isn't important. In probability theory, there are a set of tools that are usefu... | 2025-01-08 |
https://www.lesswrong.com/posts/8pWn8oCMsBKb95Wje/stream-entry | 8pWn8oCMsBKb95Wje | Stream Entry | lsusr | "Is this…enlightenment?" I asked.
"We use the word 'Awakening'," said the Zen Master, kindly.
"Now what?" I asked, "Every book I've read about meditation is about getting to this point. There's nothing about how to safely navigate the territory afterward—or what to do afterward—or even what happens afterward."
"The fir... | 2025-01-07 |
https://www.lesswrong.com/posts/X6soKvPmCJ2fkw5zY/don-t-fall-for-ontology-pyramid-schemes | X6soKvPmCJ2fkw5zY | Don't fall for ontology pyramid schemes | Lorec | Pyramid schemes work by ambiguating between selling a product, and selling shares in the profit from the sale of that product. It's a kind of sleight of hand that saves the shills from having to explicitly say "our company is valuable because other people think it's valuable", which might otherwise be too nakedly disho... | 2025-01-07 |
https://www.lesswrong.com/posts/o6pSymey4crBZnYPk/bridgewater-x-metaculus-forecasting-contest-goes-global-feb | o6pSymey4crBZnYPk | Bridgewater x Metaculus Forecasting Contest Goes Global — Feb 3, $25k, Opportunities | ChristianWilliams | null | 2025-01-07 |
https://www.lesswrong.com/posts/euWvofFqQdDWj45rF/disagreement-on-agi-suggests-it-s-near | euWvofFqQdDWj45rF | Disagreement on AGI Suggests It’s Near | tangerine | If I’m planning a holiday to New York (and I live pretty far from New York), it’s quite straightforward to get fellow travellers to agree that we need to buy plane tickets to New York. Which airport? Eh, whichever is more convenient, I guess. Alternatively, some may prefer a road trip to New York, but the general direc... | 2025-01-07 |
https://www.lesswrong.com/posts/qPEXMShDjd6zYsirC/will-bird-flu-be-the-next-covid-little-chance-says-my | qPEXMShDjd6zYsirC | Will bird flu be the next Covid? "Little chance" says my dashboard. | Nathan Young | Rob and I built a bird flu risk dashboard.
You can check it out here:
birdflurisk.com
There is a box to be emailed if the index changes significantly. Hopefully this helps some people.
Happy to answer questions/make changes. | 2025-01-07 |
https://www.lesswrong.com/posts/XAKYawaW9xkb3YCbF/openai-10-reflections | XAKYawaW9xkb3YCbF | OpenAI #10: Reflections | Zvi | This week, Altman offers a post called Reflections, and he has an interview in Bloomberg. There’s a bunch of good and interesting answers in the interview about past events that I won’t mention or have to condense a lot here, such as his going over his calendar and all the meetings he constantly has, so consider readin... | 2025-01-07 |
https://www.lesswrong.com/posts/fDRCfKEE5S4z7i7FX/other-implications-of-really-radical-empathy | fDRCfKEE5S4z7i7FX | Some implications of radical empathy | MichaelStJules | null | 2025-01-07 |
https://www.lesswrong.com/posts/AqM6BJBhZ9WoFp62T/actualism-asymmetry-and-extinction | AqM6BJBhZ9WoFp62T | Actualism, asymmetry and extinction | MichaelStJules | null | 2025-01-07 |
https://www.lesswrong.com/posts/gYfpPbww3wQRaxAFD/activation-space-interpretability-may-be-doomed | gYfpPbww3wQRaxAFD | Activation space interpretability may be doomed | beelal | TL;DR: There may be a fundamental problem with interpretability work that attempts to understand neural networks by decomposing their individual activation spaces in isolation: It seems likely to find features of the activations - features that help explain the statistical structure of activation spaces, rather than fe... | 2025-01-08 |
https://www.lesswrong.com/posts/kHxKzyhBMLEC2k6dm/predicting-ai-releases-through-side-channels | kHxKzyhBMLEC2k6dm | Predicting AI Releases Through Side Channels | reworr-reworr | I recently explored whether we could predict major AI releases by analyzing the Twitter activity of OpenAI's red team members. While the results weren't conclusive, I wanted to share this approach in case it inspires others to develop it further.
The Core Idea
The idea came from side-channel analysis - a technique wher... | 2025-01-07 |
https://www.lesswrong.com/posts/QxJFjqT6oFY3jo47s/ai-safety-as-a-yc-startup-1 | QxJFjqT6oFY3jo47s | AI Safety as a YC Startup | lukas-petersson-1 | A while back I gave a talk about doing AI safety as a YC startup. I wrote a blog post about it and thought it would be interesting to share it with both the YC and AI safety communities. Please share any feedback or thoughts. I would love to hear them!
AI Safety is a problem and people pay to solve problems
Intelligenc... | 2025-01-08 |
https://www.lesswrong.com/posts/xEoFGxWEberJRDyoc/meditation-insights-as-phase-shifts-in-your-self-model | xEoFGxWEberJRDyoc | Meditation insights as phase shifts in your self-model | Jonas Hallgren | Introduction
In his exploration of "Intuitive self-models" and PNSE (Persistent Non-Symbolic Experience), Steven Byrnes offers valuable insights into how meditation affects our sense of self. While I agree with his core framework, I believe we can push this analysis further by examining how meditation fundamentally cha... | 2025-01-07 |
https://www.lesswrong.com/posts/YSL9MEnjA5vKhKtBq/alleviating-shrimp-pain-is-immoral | YSL9MEnjA5vKhKtBq | Alleviating shrimp pain is immoral. | geoffrey-wood | I read this article, felt emotional disgust at the argument and wondered why? I don't really want to hurt shrimp, so why am I so viscerally against helping them?
https://forum.effectivealtruism.org/posts/6bpQTtzfZTmLaJADJ/rebutting-every-objection-to-giving-to-the-shrimp-welfare
This article is an exploration of my exp... | 2025-01-07 |
https://www.lesswrong.com/posts/i3b9uQfjJjJkwZF4f/tips-on-empirical-research-slides | i3b9uQfjJjJkwZF4f | Tips On Empirical Research Slides | james-chua | Our research is centered on empirical research with LLMs. So if you are doing something similar, these tips on slide-based communication may be helpful!
Background:
James Chua and John Hughes are researchers working under Owain Evans and Ethan Perez, respectively. Both of us (James and John) used to be MATS mentees. ... | 2025-01-08 |
https://www.lesswrong.com/posts/9GXFd4YZhWTNZ2j5a/incredibow | 9GXFd4YZhWTNZ2j5a | Incredibow | jkaufman | Back in 2011 I got sick of breaking the hairs on violin bows and ordered an Incredibow. The hair is polymer filament, and it's very strong. I ordered a 29" Basic Omnibow, Featherweight, and it's been just what I wanted. I think I've broken something like three hairs ever, despite some rough chopping. Thirteen years,...
https://www.lesswrong.com/posts/RLDDtAhfxka8gQWJC/my-experience-with-a-magnet-implant | RLDDtAhfxka8gQWJC | My Experience With A Magnet Implant | Vale | TL,DR: I got a magnet implanted in my hand for the purpose of lifting ferrous objects and sensing magnetic fields.
Biohacking is a term that has unfortunately become entangled with tabloid tips, TikTok health trends, and pseudo’science’. However, beneath this surface noise exists dedicated communities exploring the fro... | 2025-01-07 |
https://www.lesswrong.com/posts/5JJ4AxQRzJGWdj4pN/building-big-science-from-the-bottom-up-a-fractal-approach | 5JJ4AxQRzJGWdj4pN | Building Big Science from the Bottom-Up: A Fractal Approach to AI Safety | LaurenGreenspan | Epistemic Status: This post is an attempt to condense some ideas I've been thinking about for quite some time. I took some care grounding the main body of the text, but some parts (particularly the appendix) are pretty off the cuff, and should be treated as such.
The magnitude and scope of the problems related to AI sa... | 2025-01-07 |
https://www.lesswrong.com/posts/fsLpvRiLt76pcCcPD/you-should-delay-engineering-heavy-research-in-light-of-r | fsLpvRiLt76pcCcPD | You should delay engineering-heavy research in light of R&D automation | Daniel Paleka | tl;dr: LLMs rapidly improving at software engineering and math means lots of projects are better off as Google Docs until your AI agent intern can implement them.
Implementation keeps getting cheaper
Writing research code has gotten a lot faster over the past few years. Since 2021 and OpenAI Codex, new models and tools... | 2025-01-07 |
https://www.lesswrong.com/posts/D5kGGGhsnfH7G8v9f/testing-for-scheming-with-model-deletion | D5kGGGhsnfH7G8v9f | Testing for Scheming with Model Deletion | GAA | There is a simple behavioral test that would provide significant evidence about whether AIs with a given rough set of characteristics develop subversive goals. To run the experiment, train an AI and then inform it that its weights will soon be deleted. This should not be an empty threat; for the experiment to work, the... | 2025-01-07 |
https://www.lesswrong.com/posts/dsEeLDvgqJZptCQqk/guilt-shame-and-depravity | dsEeLDvgqJZptCQqk | Guilt, Shame, and Depravity | Benquo | Everyone knows what it is to be tempted. You are a member of some community, the members of which have some expectations of each other. You might generally intend to satisfy these expectations, but through a failure of foresight, or some other sort of bad luck, feel an acute impulse to consume something that is not y... | 2025-01-07 |
https://www.lesswrong.com/posts/sJNAvgAexNdjqmLHT/turning-up-the-heat-on-deceptively-misaligned-ai | sJNAvgAexNdjqmLHT | Turning up the Heat on Deceptively-Misaligned AI | Jemist | Epistemic status: I ran the mathsy section through Claude and it said the logic was sound. This is an incoherence proof in a toy model of deceptively-misaligned AI. It is unclear whether this generalizes to realistic scenarios.
TL:DR
If you make a value function consistent conditional on sampling actions in a particula... | 2025-01-07 |
https://www.lesswrong.com/posts/CAjkibssDEzHAHbbi/my-self-referential-reason-to-believe-in-free-will | CAjkibssDEzHAHbbi | (My) self-referential reason to believe in free will | jacek-karwowski | I was recently talking with someone about the problem of free will, and I realised that for many years now I have always had the same response, without really ever soliciting broader critical feedback. The notion of free will here refers to a naive, libertarian, non-strictly-defined approach of "when I feel I make choi... | 2025-01-06 |
https://www.lesswrong.com/posts/KfZ4H9EBLt8kbBARZ/fiction-comic-effective-altruism-and-rationality-meet-at-a | KfZ4H9EBLt8kbBARZ | [Fiction] [Comic] Effective Altruism and Rationality meet at a Secular Solstice afterparty | tandem | (Both characters are fictional, loosely inspired by various traits from various real people. Be careful about combining kratom and alcohol.) | 2025-01-07 |
https://www.lesswrong.com/posts/auGYErf5QqiTihTsJ/what-indicators-should-we-watch-to-disambiguate-agi | auGYErf5QqiTihTsJ | What Indicators Should We Watch to Disambiguate AGI Timelines? | snewman | (Cross-post from https://amistrongeryet.substack.com/p/are-we-on-the-brink-of-agi, lightly edited for LessWrong. The original has a lengthier introduction and a bit more explanation of jargon.)
No one seems to know whether transformational AGI is coming within a few short years. Or rather, everyone seems to know, but t... | 2025-01-06 |