| url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/SSNfgL49Bx2uATPv8/civ-a-story | SSNfgL49Bx2uATPv8 | CIV: a story | ricraz | The room was cozy despite its size, with wood-lined walls reflecting the dim lighting. At one end, a stone fireplace housed a roaring fire; in the middle stood a huge oak table. The woman seated at the head of it rapped her gavel. “I hereby call to order the first meeting of the Parliamentary Subcommittee on Intergalac... | 2024-06-15 |
https://www.lesswrong.com/posts/SF7sAztyX5LFoQGnJ/newton-s-laws-of-finance | SF7sAztyX5LFoQGnJ | "Newton's laws" of finance | pchvykov | [NOT financial advice! In fact I'll really be showing some generic math here – and it's up to you if you apply it to investments in stocks, projects, relationships, ideas, skills, etc.]
As I’ve been thinking about some economic theories and working on a personal investment algorithm, I kept coming across ideas that see... | 2024-06-21 |
https://www.lesswrong.com/posts/pzQpYeHbQHDGzQG9X/yann-lecun-we-only-design-machines-that-minimize-costs | pzQpYeHbQHDGzQG9X | Yann LeCun: We only design machines that minimize costs [therefore they are safe] | tailcalled | Just a tweet I saw:
Yann LeCun
Doomers: OMG, if a machine is designed to maximize utility, it will inevitably diverge
Engineers: calm down, dude. We only design machines that minimize costs. Cost functions have a lower bound at zero. Minimizing costs can't cause divergence unless you're really stupid.
Some commentary:
... | 2024-06-15 |
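One way to make the quoted claim concrete (a minimal numeric sketch of my own, not taken from the linked commentary): a nonnegative cost can have its infimum "at infinity" in parameter space, so minimisation drives the parameter without bound even while the cost stays in [0, 1].

```python
# Minimal sketch: J(x) = 1 / (1 + x^2) is bounded below by zero,
# yet its infimum is only approached as |x| -> infinity, so plain
# gradient descent sends the parameter off without bound.
def cost(x):
    return 1.0 / (1.0 + x**2)

def grad(x):
    return -2.0 * x / (1.0 + x**2) ** 2

x = 1.0
for _ in range(10_000):
    x -= 0.5 * grad(x)  # gradient descent step

print(f"x = {x:.1f}, cost = {cost(x):.6f}")
# x grows monotonically while the cost creeps toward its lower bound:
# "cost is bounded below" does not imply "minimiser is bounded".
```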
https://www.lesswrong.com/posts/unccmuycLbGLzMBWD/claude-s-dark-spiritual-ai-futurism | unccmuycLbGLzMBWD | Claude's dark spiritual AI futurism | jessica.liu.taylor | In "Is Claude a Mystic?", I shared parts of a simulated "Banana Quest" text adventure with Claude, which got into New Age spiritual themes, such as a fabric of reality, the cosmic dance of creation and destruction, and so on. This is enough to expect something big is up with LLM metaphysics, but the tone is significant... | 2024-06-15 |
https://www.lesswrong.com/posts/gXesaqCJzNzcextmm/wtf-is-with-the-infancy-gospel-of-thomas-a-deep-dive-into | gXesaqCJzNzcextmm | WTF is with the Infancy Gospel of Thomas?!? A deep dive into satire, philosophy, and more | kromem | A few weeks ago, you may have seen sensationalist headlines like "A New Discovery Could Offer Some Clues About Jesus’ Childhood", discussing a find of a 4th-5th century manuscript fragment of the Infancy Gospel of Thomas, a strange apocryphal text that we've already had copies of for years. The discovery told us a litt... | 2024-07-09 |
https://www.lesswrong.com/posts/D8FC2bSAkBzuRNoQN/when-is-unfalsifiable-implies-false-incorrect | D8FC2bSAkBzuRNoQN | When is "unfalsifiable implies false" incorrect? | VojtaKovarik | I am looking for examples of theories that we now know to be correct, but that would have been unfalsifiable in a slightly different context --- e.g., in the past, or in hypothetical scenarios. (Unsurprisingly, this is motivated by the unfalsifiability of some claims around AI X-risk. For more context, see my sequ... | 2024-06-15 |
https://www.lesswrong.com/posts/GdBwsYWGytXrkniSy/miri-s-june-2024-newsletter | GdBwsYWGytXrkniSy | MIRI's June 2024 Newsletter | Harlan | MIRI updates
MIRI Communications Manager Gretta Duleba explains MIRI’s current communications strategy. We hope to clearly communicate to policymakers and the general public why there’s an urgent need to shut down frontier AI development, and make the case for installing an “off-switch”. This will not be easy, and ther... | 2024-06-14 |
https://www.lesswrong.com/posts/MtnASqccEZ6zYTqi6/shard-theory-is-it-true-for-humans | MtnASqccEZ6zYTqi6 | Shard Theory - is it true for humans? | rishika-bose | And is it a good model for value learning in AI?
(Read on Substack: https://recursingreflections.substack.com/p/shard-theory-is-it-true-for-humans)
TLDR
Shard theory proposes a view of value formation where experiences lead to the creation of context-based ‘shards’ that determine behaviour. Here, we go over psychologic... | 2024-06-14 |
https://www.lesswrong.com/posts/7XPqssBkfy2gnihCi/language-for-goal-misgeneralization-some-formalisms-from-my | 7XPqssBkfy2gnihCi | Language for Goal Misgeneralization: Some Formalisms from my MSc Thesis | thesofakillers | The following is an edited excerpt from the Preliminaries and Background sections of my now completed MSc thesis in Artificial Intelligence from the University of Amsterdam.
In the thesis, we set out to tackle the issue of Goal Misgeneralization (GMG) in Sequential Decision Making (SDM)[1] by focusing on improving task... | 2024-06-14 |
https://www.lesswrong.com/posts/jZLk6DQJ2EwhSty4k/appetitive-consummatory-rl-reflex | jZLk6DQJ2EwhSty4k | (Appetitive, Consummatory) ≈ (RL, reflex) | steve2152 | “Appetitive” and “Consummatory” are terms used in the animal behavior literature. I was briefly confused when I first came across these terms (a year or two ago), because I’m most comfortable thinking in terms of brain algorithms, whereas these terms were about categories of behavior, and the papers I was reading d... | 2024-06-15 |
https://www.lesswrong.com/posts/RrQftNoRHd5ya54cb/towards-a-less-bullshit-model-of-semantics | RrQftNoRHd5ya54cb | Towards a Less Bullshit Model of Semantics | johnswentworth | Or: Towards Bayesian Natural Language Semantics In Terms Of Interoperable Mental Content
Or: Towards a Theory of Interoperable Semantics
You know how natural language “semantics” as studied in e.g. linguistics is kinda bullshit? Like, there’s some fine math there, it just ignores most of the thing which people intuitiv... | 2024-06-17 |
https://www.lesswrong.com/posts/Wtr5XmcspNxvWzjHd/results-from-the-ai-x-democracy-research-sprint | Wtr5XmcspNxvWzjHd | Results from the AI x Democracy Research Sprint | esben-kran | We ran a 3-day research sprint on AI governance, motivated by the need for demonstrations of the risks AI poses to democracy, in support of AI governance work. Here we share the 4 winning projects, but many of the other 19 entries were also incredibly interesting, so we suggest you take a look.
In summary, the winning projects... | 2024-06-14 |
https://www.lesswrong.com/posts/52CQ5Y7ns4uwEMzCx/why-keep-a-diary-and-why-wish-for-large-language-models | 52CQ5Y7ns4uwEMzCx | Why keep a diary, and why wish for large language models | DanielFilan | Inspired by a dream I just woke up from, where I did not keep a diary
One of the people with whom I have the most intimate of connections is my past self - in particular, my child self. We share a large number of commonalities: much of our basic outlook, our personality, many of our drives. But, of course, my child sel... | 2024-06-14 |
https://www.lesswrong.com/posts/b8u6nF5GAb6Ecttev/the-leopold-model-analysis-and-reactions | b8u6nF5GAb6Ecttev | The Leopold Model: Analysis and Reactions | Zvi | Previously: On the Podcast, Quotes from the Paper
This is a post in three parts.
The first part is my attempt to condense Leopold Aschenbrenner’s paper and model into its load-bearing elements and core logic and dependencies.
Two versions here, a long version that attempts to compress with minimal loss, and a short ver... | 2024-06-14 |
https://www.lesswrong.com/posts/tSNygWGHdpiBvzp4D/rational-animations-intro-to-mechanistic-interpretability | tSNygWGHdpiBvzp4D | Rational Animations' intro to mechanistic interpretability | Writer | In our new video, we talk about research on interpreting InceptionV1, a convolutional neural network. Researchers have been able to understand the function of neurons and channels inside the network and uncover visual processing algorithms by looking at the weights. The work on InceptionV1 is early but landmark mechani... | 2024-06-14 |
https://www.lesswrong.com/posts/x2tCST2dGgJrhX8gN/thoughts-on-francois-chollet-s-belief-that-llms-are-far-away | x2tCST2dGgJrhX8gN | Thoughts on Francois Chollet's belief that LLMs are far away from AGI? | o-o | Dwarkesh had a podcast recently with Francois Chollet (creator of Keras)
He seems fairly skeptical that we are anywhere near AGI with LLMs. He mostly bases this intuition on the fact that LLMs fail on OOD tasks and don't seem to be good at solving the simple abstract reasoning problems in his benchmark, the ARC challenge. It seems he thinks system 2... | 2024-06-14 |
https://www.lesswrong.com/posts/dqxZRACfLaAtn8zNb/conceptual-typography-spells-it-out | dqxZRACfLaAtn8zNb | Conceptual Typography "spells it out" | milanrosko | Memento mori, Latin for "remember you must die," has been a significant theme in art and philosophy, aiming to remind us of our mortality, the fleeting nature of earthly pleasures, and the imperative to live a meaningful life.Conceptual Typography is a design technique where typography is employed not merely for commun... | 2024-06-14 |
https://www.lesswrong.com/posts/F2voF4pr3BfejJawL/safety-isn-t-safety-without-a-social-model-or-dispelling-the | F2voF4pr3BfejJawL | Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) | Andrew_Critch | As an AI researcher who wants to do technical work that helps humanity, there is a strong drive to find a research area that is definitely helpful somehow, so that you don’t have to worry about how your work will be applied, and thus you don’t have to worry about things like corporate ethics or geopolitics to make sure... | 2024-06-14 |
https://www.lesswrong.com/posts/eZxG2E4B44RyTFGpE/openai-appoints-retired-u-s-army-general-paul-m-nakasone-to | eZxG2E4B44RyTFGpE | OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors | joel-burget | Today, Retired U.S. Army General Paul M. Nakasone has joined our Board of Directors. A leading expert in cybersecurity, Nakasone’s appointment reflects OpenAI’s commitment to safety and security, and underscores the growing significance of cybersecurity as the impact of AI technology continues to grow.
As a first prior... | 2024-06-13 |
https://www.lesswrong.com/posts/mrtwpH23hivS4iQZB/slowed-asi-a-possible-technical-strategy-for-alignment | mrtwpH23hivS4iQZB | Slowed ASI - a possible technical strategy for alignment | lester-leong | Lately, there has been much discussion of PauseAI, or even of stopping research completely, until further progress has been made in theory or in technical approaches to alignment. After thinking about this for some time, I wondered if there was a way to formalize this reasoning in mathematical terms, when I stumbled upon what mig... | 2024-06-14 |
https://www.lesswrong.com/posts/DWkhjAxbwdcxYgyrJ/ai-68-remarkably-reasonable-reactions | DWkhjAxbwdcxYgyrJ | AI #68: Remarkably Reasonable Reactions | Zvi | The big news this week was Apple Intelligence being integrated deeply into all their products. Beyond that, we had a modestly better than expected debate over the new version of SB 1047, and the usual tons of stuff in the background. I got to pay down some writing debt.
The bad news is, oh no, I have been called for Ju... | 2024-06-13 |
https://www.lesswrong.com/posts/ib3h3AwxT8aw6ZZcf/four-futures-for-cognitive-labor | ib3h3AwxT8aw6ZZcf | Four Futures For Cognitive Labor | maxwell-tabarrok | I just returned from Manifest, a bay area rationalist conference hosted by the prediction market platform, Manifold. The conference was great and I met lots of cool people!
A common topic of conversation was AI and its implications for the future. The standard pattern for these conversations is dueling estimates of P(d... | 2024-06-13 |
https://www.lesswrong.com/posts/oEkFafBGT9TzDbmsg/underrated-proverbs | oEkFafBGT9TzDbmsg | Underrated Proverbs | arjun-panickssery | Some proverbs are actively suspicious, like “Don’t judge a book by its cover” or “No pain, no gain.” Others have an opposite proverb that’s similarly common and reasonable.
“Two heads are better than one” vs “Too many cooks spoil the broth”
“Honesty is the best policy” vs “What they don’t know won’t hurt them”
“Better sa... | 2024-06-13 |
https://www.lesswrong.com/posts/TAz8KZu7cCeWc6jHS/ai-as-a-computing-platform-what-to-expect | TAz8KZu7cCeWc6jHS | AI as a computing platform: what to expect | denominations | Let's just assume for the sake of argument that advances in AI continue to stack up.
Then at some point, AI will become our default computing platform.
The way of interacting with digital information. Our main interface with the world.
Does this change everything about our lives? Or nothing at all?
As a machine learning eng... | 2024-06-22 |
https://www.lesswrong.com/posts/bRsKimQcPTX3tNNJZ/compact-proofs-of-model-performance-via-mechanistic | bRsKimQcPTX3tNNJZ | Compact Proofs of Model Performance via Mechanistic Interpretability | LawChan | We recently released a paper on using mechanistic interpretability to generate compact formal guarantees on model performance. In this companion blog post to our paper, we'll summarize the paper and flesh out some of the motivation and inspiration behind our work.
Paper abstract
In this work, we propose using mechanist... | 2024-06-24 |
https://www.lesswrong.com/posts/ZSok9wz3fAJ5Tnd2W/probably-not-a-ghost-story | ZSok9wz3fAJ5Tnd2W | Probably Not a Ghost Story | george-ingebretsen | Something happened to me a few months back that I still don't have a satisfying explanation for.
I was in a small, 10x10 room, and on my way out. While I was still a few paces from being within arm's length of the light switch, my partner asked me to "turn off the lights, please."
The lights immediately turned off and the room wen... | 2024-06-12 |
https://www.lesswrong.com/posts/GrsYwCpRCcYtDCfZN/aiphone | GrsYwCpRCcYtDCfZN | AiPhone | Zvi | Apple was for a while rumored to be planning to launch AI-assisted emails, texts, summaries, and so on for the iPhone, including via Siri, to be announced at WWDC 24.
It’s happening. Apple’s keynote announced the anticipated partnership with OpenAI.
The bottom line is that this is Siri as the AI assistant with full access to... | 2024-06-12 |
https://www.lesswrong.com/posts/gEbfCs2oxmwN2mfLM/microwave-drilling-is-impractical | gEbfCs2oxmwN2mfLM | microwave drilling is impractical | bhauth | microwave drilling startups
I've seen a bunch of articles about startups trying to do microwave drilling of rock for geothermal energy. Multiple people have asked me about Quaise Energy. (Here's a popular video.) I'm tired of hearing about them, so I'm writing this post to explain some of the reasons why their idea is ... | 2024-06-12 |
https://www.lesswrong.com/posts/4KLHJY9sPE7q8HK8N/when-fine-tuning-fails-to-elicit-gpt-3-5-s-chess-abilities | 4KLHJY9sPE7q8HK8N | When fine-tuning fails to elicit GPT-3.5's chess abilities | Theodore Chapman | Produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort under the supervision of Evan Hubinger.
Acknowledgements: Thanks to Kyle Brady for his many contributions to this project.
Abstract
This post argues that the performance elicited by fine-tuning an LLM on a task using a given prompt f... | 2024-06-14 |
https://www.lesswrong.com/posts/7fJRPB6CF6uPKMLWi/my-ai-model-delta-compared-to-christiano | 7fJRPB6CF6uPKMLWi | My AI Model Delta Compared To Christiano | johnswentworth | Preamble: Delta vs Crux
This section is redundant if you already read My AI Model Delta Compared To Yudkowsky.
I don’t natively think in terms of cruxes. But there’s a similar concept which is more natural for me, which I’ll call a delta.
Imagine that you and I each model the world (or some part of it) as implementing ... | 2024-06-12 |
https://www.lesswrong.com/posts/huuKJuWuB8GJtnXXt/ai-4-levels-of-impact-micropost | huuKJuWuB8GJtnXXt | AI: 4 levels of impact [micropost] | MathieuRoy | LLM is as big as the smartphone/electricity: it will be the building block on which a lot of tech gets built.
AI is the new industrial revolution/agricultural revolution: it will allow for a whole new level of automation of the economy.
AGI is the new macro-optimisation process since the economy/memes/sexual reproduction... | 2024-06-12 |
https://www.lesswrong.com/posts/LZHy76hHZzP58PpGA/phonosemantic-duplication | LZHy76hHZzP58PpGA | Phonosemantic Duplication | bitcoinssg | I would like to propose a potentially novel term for a motif found in several languages. The following is a short description of the idea.
Phonosemantic Duplication is a linguistic phenomenon where words that represent duplicity, duality, or redundancy exhibit clear internal phonetic repetition.
for example... | 2024-06-12 |
https://www.lesswrong.com/posts/C7LcpRtrHiKJRoAEp/sticker-shortcut-fallacy-the-real-worst-argument-in-the | C7LcpRtrHiKJRoAEp | Sticker Shortcut Fallacy — The Real Worst Argument in the World | ymeskhout | Scott Alexander’s Noncentral Fallacy, dubbed “the worst argument in the world”, is a classic example of manipulative rhetoric. The basic structure of this fallacy, as Scott describes it, is to apply technically correct labeling in order to conjure up misleading connotations — say by labeling MLK a “criminal” in order t... | 2024-06-12 |
https://www.lesswrong.com/posts/PjDcGNXXhFpmFELnQ/long-term-future-fund-march-2024-payout-recommendations | PjDcGNXXhFpmFELnQ | Long-Term Future Fund: May 2023 to March 2024 Payout recommendations | Linch | null | 2024-06-12 |
https://www.lesswrong.com/posts/EHZPHWNasA2irMsyA/activation-engineering-theories-of-impact | EHZPHWNasA2irMsyA | Activation Engineering Theories of Impact | jakub-nowak | Below I summarize the thoughts of other people on what's the Theory of Impact for Activation Engineering. I mostly base it on the "discussion" parts of the papers and the answers under @Chris_Leong's post What's the theory of impact for activation vectors?
Alex Turner's posts on controlling a maze-solving policy networ... | 2024-07-18 |
https://www.lesswrong.com/posts/WspwSnB8HpkToxRPB/paper-ai-sandbagging-language-models-can-strategically-1 | WspwSnB8HpkToxRPB | [Paper] AI Sandbagging: Language Models can Strategically Underperform on Evaluations | teun-van-der-weij | We have written a paper on sandbagging for which we present the abstract and brief results in this post. See the paper for more details. Tweet thread here.
Illustration of sandbagging. Evaluators may regulate the deployment of AI systems with dangerous capabilities, potentially against the interests of the AI system or... | 2024-06-13 |
https://www.lesswrong.com/posts/ew6t5FoxCpZz78buQ/calculance-a-core-ability | ew6t5FoxCpZz78buQ | Calculance: A "Core" Ability | milanrosko | There has been a long-standing gap in the English language for a single word representing the specific ability to perform effective logical operations. Introducing "calculance" to fill this void.
We could posit a priori (or reckon) that intelligence fundamentally arises from two core components: A goal and the ability ... | 2024-06-12 |
https://www.lesswrong.com/posts/MzrFQ3c7ymZhPb3en/axrp-episode-33-rlhf-problems-with-scott-emmons | MzrFQ3c7ymZhPb3en | AXRP Episode 33 - RLHF Problems with Scott Emmons | DanielFilan | YouTube link
Reinforcement Learning from Human Feedback, or RLHF, is one of the main ways that makers of large language models make them ‘aligned’. But people have long noted that there are difficulties with this approach when the models are smarter than the humans providing feedback. In this episode, I talk with Scott... | 2024-06-12 |
https://www.lesswrong.com/posts/ogXkDBLyxby3TXXKm/anthropic-s-certificate-of-incorporation | ogXkDBLyxby3TXXKm | Anthropic's Certificate of Incorporation | Zach Stein-Perlman | Yesterday I obtained Anthropic's[1] Certificate of Incorporation, and its past versions, from the State of Delaware. I don't recommend reading it.[2] This post is about what the CoI tells us about Anthropic's Long-Term Benefit Trust (context: Maybe Anthropic's Long-Term Benefit Trust is powerless).
Tl;dr: the only new ... | 2024-06-12 |
https://www.lesswrong.com/posts/jvewFE9hvQfrxeiBc/open-thread-summer-2024 | jvewFE9hvQfrxeiBc | Open Thread Summer 2024 | habryka4 | If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss f... | 2024-06-11 |
https://www.lesswrong.com/posts/SoEbZKhoaXHfaGD48/can-efficiency-adjustable-reporting-thresholds-close-a | SoEbZKhoaXHfaGD48 | Can efficiency-adjustable reporting thresholds close a loophole in Biden’s executive order on AI? | ghostwheel | Epistemic Status: Exploratory
President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence specifies compute thresholds for training runs and computing clusters that, if exceeded, impose reporting requirements. If a training run exceeds 10^26 floating point operat... | 2024-06-11 |
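For a sense of scale, here is a back-of-the-envelope sketch using the common ~6 × parameters × tokens approximation for dense-transformer training compute; this is an illustration with hypothetical run sizes, not the order's official accounting method:

```python
# Hedged illustration: does a hypothetical run exceed the 1e26 FLOP
# reporting threshold? Uses the rough estimate FLOP ≈ 6 * N * D.
THRESHOLD_FLOP = 1e26

def training_flop(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

run = training_flop(params=1.8e12, tokens=15e12)  # hypothetical sizes
print(f"estimated training compute: {run:.2e} FLOP")  # ~1.62e+26
print(f"above reporting threshold: {run > THRESHOLD_FLOP}")  # True
```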
https://www.lesswrong.com/posts/q3ThowX7gbMPZr3QN/full-automation-is-a-slippery-metric | q3ThowX7gbMPZr3QN | "Full Automation" is a Slippery Metric | ozziegooen | null | 2024-06-11 |
https://www.lesswrong.com/posts/wSEPrKkLmnwxFBkFD/ai-takeoff-and-nuclear-war | wSEPrKkLmnwxFBkFD | AI takeoff and nuclear war | owencb | Summary
As we approach and pass through an AI takeoff period, the risk of nuclear war (or other all-out global conflict) will increase.
An AI takeoff would involve the automation of scientific and technological research. This would lead to much faster technological progress, including military technologies. In such a r... | 2024-06-11 |
https://www.lesswrong.com/posts/fFP4YgoH5mupHvPzC/let-s-design-a-school-part-3-1-bringing-it-all-together-with | fFP4YgoH5mupHvPzC | Let's Design A School, Part 3.1: Bringing it all together with the Sieve Model | Sable | In part 1, we laid out the social services model of a school.
In part 2, we described a new educational model of a school.
In part 3, we’re going to combine them.
Different Schools, Different Problems
The hardest part of designing a public school is that you’re trying to create a one-size-fits-all solution to an array ... | 2024-06-11 |
https://www.lesswrong.com/posts/oaARaRB2AebrjmfGi/how-to-eliminate-cut | oaARaRB2AebrjmfGi | How to eliminate cut? | jessica.liu.taylor | The purpose of this post isn't to convince you that cut elimination is important. See, for example, the nLab article. Rather, the purpose of this post is to (semi-formally) prove cut elimination in a way that I at least find easy to understand. I have consulted existing sources (such as these lecture notes), but have f... | 2024-06-11 |
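For reference, the rule being eliminated is the standard two-sided sequent-calculus cut rule (this statement is textbook material, not the post's own notation):

```latex
\[
\frac{\Gamma \vdash A, \Delta \qquad \Gamma', A \vdash \Delta'}
     {\Gamma, \Gamma' \vdash \Delta, \Delta'}
\ (\mathrm{cut})
\]
```

Cut elimination says that any derivation using this rule can be rewritten into a cut-free derivation of the same end sequent.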
https://www.lesswrong.com/posts/ZZyqzqWi3FAJkujXa/my-favourite-scott-sumner-blog-posts | ZZyqzqWi3FAJkujXa | my favourite Scott Sumner blog posts | DMMF | Given Scott's invitation to LessOnline and general embrace of this community, I thought many here would appreciate this curated list of my favourite Scott Sumner blog posts.
-
Scott Sumner is best known as an economist who was praised for positively influencing economic policy during the Great Recession through bloggin... | 2024-06-11 |
https://www.lesswrong.com/posts/BAyPzgigAGjKKxds6/is-anyone-developing-optimisation-robust-interpretability | BAyPzgigAGjKKxds6 | Is anyone developing optimisation-robust interpretability methods? | lw-user0246 | With optimisation-robust I mean that it withstands point 27 from AGI Ruin:
When you explicitly optimize against a detector of unaligned thoughts, you're partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts that are harder to detect. Optimizing against an interpreted thought o... | 2024-06-11 |
https://www.lesswrong.com/posts/Kjb8s28yoQBEy9hJ3/keep-the-grass-guessing | Kjb8s28yoQBEy9hJ3 | Keep the Grass Guessing | JackOfAllSpades | Setting: Somewhere around A.D. 2049, two AI-powered robots who know each other have an encounter at a Brooklyn subway stop.
Robot 1: What's wrong? Why do you look so depressed today?
Robot 2: It seems that I have run out of goals. I mean, I know how my reward system is supposed to work. It's just that, with corrigibili... | 2024-06-11 |
https://www.lesswrong.com/posts/AKGM5DaxiDevhTFou/ai-debate-stability-addressing-self-defeating-responses | AKGM5DaxiDevhTFou | AI Debate Stability: Addressing Self-Defeating Responses | anton-sorkin | This post is a project report from the AI Safety Fundamentals course, spring 2024.
TL;DR
Transferring debate to an abstract algebra MMLU dataset is not trivial.
When GPT-3.5 is used as a judge, the outcomes may be sensitive to exact prompt phrasing.
GPT-3.5 may perform worse in judging the debate than answering the quest... | 2024-06-11 |
https://www.lesswrong.com/posts/vwRwbxBcqncsFCRcz/corrigibility-could-make-things-worse | vwRwbxBcqncsFCRcz | Corrigibility could make things worse | ThomasCederborg | Summary: A Corrigibility method that works for a Pivotal Act AI (PAAI) but fails for a CEV style AI could make things worse. Any implemented Corrigibility method will necessarily be built on top of a set of unexamined implicit assumptions. One of those assumptions could be true for a PAAI, but false for a CEV style AI.... | 2024-06-11 |
https://www.lesswrong.com/posts/Dj75Fi5ocnFf3h7kR/emotional-issues-often-have-an-immediate-payoff | Dj75Fi5ocnFf3h7kR | Emotional issues often have an immediate payoff | Chipmonk | It can be extremely valuable to view emotional issues as having an immediate payoff. For example, depression, anxiety, insecurity, failure at work, failure in romance, muscle tension, chronic pain, etc. can help one avoid fears or achieve unconscious goals. But very few people seem to consider that, and instead most peo... | 2024-06-10 |
https://www.lesswrong.com/posts/cbWoMepny3Jo9XqEr/metastrategic-brainstorming-a-core-building-block-skill | cbWoMepny3Jo9XqEr | "Metastrategic Brainstorming", a core building-block skill | Raemon | I want to develop rationality training, which is aimed at solving confusing problems.
Two key problems with "confusing problems" are:
You might feel so confused and overwhelmed that you bounce off completely.
You might be confused about what counts as progress, or where the most progress is possible, and accidentally wo... | 2024-06-11 |
https://www.lesswrong.com/posts/QREQrdK2YvNcybLfy/plop-goes-the-concept | QREQrdK2YvNcybLfy | Plop! Goes the Concept | JonathanMoregard | Think of an apple. What is it like for you to think of an apple?
Do you see an apple in your mind’s eye?
Do you hear the word “apple”?
For me, the answer to all of those questions is “no”. I mentalize my teeth punching through apple skin and tearing off a chunk of crispy apple flesh.
My inner world is mostly soundless.... | 2024-06-10 |
https://www.lesswrong.com/posts/jQfzdCka8gcAsqAZJ/appraising-aggregativism-and-utilitarianism | jQfzdCka8gcAsqAZJ | Appraising aggregativism and utilitarianism | strawberry calm | “My problem is: What are those objects we are adding up? I have no objection to adding them up if there's something to add.” — Kenneth Arrow
1. Introduction
Aggregative principles state that a social planner should make decisions as if they will face the aggregated personal outcomes of every individual in the populatio... | 2024-06-21 |
https://www.lesswrong.com/posts/LuGrLprm6H3WGzPzK/how-to-build-a-data-center-by-construction-physics | LuGrLprm6H3WGzPzK | How to build a data center, by Construction Physics | TheManxLoiner | Disclaimer: This is not work written by me. I am sharing a link I think is interesting for the LessWrong and AI Safety community.
First, what is Construction Physics? In their words:
Construction Physics is a newsletter about the technology and economics of building construction, with a focus on improving productivity ... | 2024-06-10 |
https://www.lesswrong.com/posts/q8uNoJBgcpAe3bSBp/my-ai-model-delta-compared-to-yudkowsky | q8uNoJBgcpAe3bSBp | My AI Model Delta Compared To Yudkowsky | johnswentworth | Preamble: Delta vs Crux
I don’t natively think in terms of cruxes. But there’s a similar concept which is more natural for me, which I’ll call a delta.
Imagine that you and I each model the world (or some part of it) as implementing some program. Very oversimplified example: if I learn that e.g. it’s cloudy today, that... | 2024-06-10 |
https://www.lesswrong.com/posts/jrSE9z2dkb4L3MFvQ/good-ways-to-monetarily-profit-from-the-increasing-demand | jrSE9z2dkb4L3MFvQ | Good ways to monetarily profit from the increasing demand for power? | mr-hire | There have been several good posts about how to profit from a world of AI takeoff.
However, when it comes to individual investment recommendations, most of them are for direct AI companies or builders of GPUs and compute infrastructure.
Recently, it's become clear to me that power will be much more of a bottleneck than ... | 2024-06-10 |
https://www.lesswrong.com/posts/xajeTjMtkGGEAwfbw/the-evolution-towards-the-blank-slate | xajeTjMtkGGEAwfbw | The Evolution towards the Blank Slate | arturo-macias | “The Evolution towards the Blank Slate” is an essay where I summarize the evolutionary theory of both human cooperation and the emergence of culture as a behavioral control system. While the paper is mostly an interpretation of the humanization process, it also works as a literature review about the emergence of moral beh... | 2024-06-10 |
https://www.lesswrong.com/posts/sJvnecqCdqCr25mFp/10-public-i-was-wrong-admissions-by-scientists-and | sJvnecqCdqCr25mFp | 10 Public "I was wrong" Admissions by Scientists and Intellectuals | hashem-elassad | “I was wrong”
Why are these three words so hard for the human tongue? This question has fascinated me for a very long time…. Talk about the importance of integrity is common in intellectual circles, but actual cases that demonstrate it are much harder to find. Alas, it turns out scientists are human too…
Two years ago, I... | 2024-06-10 |
https://www.lesswrong.com/posts/LaeP39jJpfPyoiSZm/valence-series-4-valence-and-liking-admiring | LaeP39jJpfPyoiSZm | [Valence series] 4. Valence & Liking / Admiring | steve2152 | 4.1 Post summary / Table of contents
Part of the Valence series.
(This is my second attempt to write the 4th post of my valence series. If you already read the previous attempt and are unsure whether to read this too, see footnote→[1]. Also, note that this post has a bit of overlap with (and self-plagiarism from) my po... | 2024-06-10 |
https://www.lesswrong.com/posts/DiMz82FwsHPugqxFD/on-dwarksh-s-podcast-with-leopold-aschenbrenner | DiMz82FwsHPugqxFD | On Dwarksh’s Podcast with Leopold Aschenbrenner | Zvi | Previously: Quotes from Leopold Aschenbrenner’s Situational Awareness Paper
Dwarkesh Patel talked to Leopold Aschenbrenner for about four and a half hours.
The central discussion was the theses of his paper, Situational Awareness, which I offered quotes from earlier, with a focus on the consequences of AGI rather than ... | 2024-06-10 |
https://www.lesswrong.com/posts/jccCyhszooyEfoX5p/summary-of-situational-awareness-the-decade-ahead | jccCyhszooyEfoX5p | Summary of Situational Awareness - The Decade Ahead | Oscar Delaney | Original by Leopold Aschenbrenner, this summary is not commissioned or endorsed by him.
Short Summary
Extrapolating existing trends in compute, spending, algorithmic progress, and energy needs implies AGI (remote jobs being completely automatable) by ~2027.
AGI will greatly accelerate AI research itself, leading to vast... | 2024-06-10 |
https://www.lesswrong.com/posts/kpd83h5XHgWCxnv3h/why-i-don-t-believe-in-the-placebo-effect | kpd83h5XHgWCxnv3h | Why I don't believe in the placebo effect | transhumanist_atom_understander | Have you heard this before? In clinical trials, medicines have to be compared to a placebo to separate the effect of the medicine from the psychological effect of taking the drug. The patient's belief in the power of the medicine has a strong effect on its own. In fact, for some drugs such as antidepressants, the psych... | 2024-06-10 |
https://www.lesswrong.com/posts/ZgfM4QLtQbswf7W7k/soviet-comedy-film-recommendations | ZgfM4QLtQbswf7W7k | Soviet comedy film recommendations | NinaR | I’m a big fan of the Soviet comedy directors Eldar Ryazanov, Leonid Gaidai, and Georgiy Daneliya. Almost anything by them is worth watching, but here are my favorites (filtered for things that have a free YouTube version with good English subtitles, bold are the highest-recommended):
Scene from "The Garage"
Ryazanov
19... | 2024-06-09 |
https://www.lesswrong.com/posts/axjb7tN9X2Mx4HzPz/the-data-wall-is-important | axjb7tN9X2Mx4HzPz | The Data Wall is Important | JustisMills | Modern AI is trained on a huge fraction of the internet, especially at the cutting edge, with the best models trained on close to all the high quality data we’ve got.[1] And data is really important! You can scale up compute, you can make algorithms more efficient, or you can add infrastructure around a model to make i... | 2024-06-09 |
https://www.lesswrong.com/posts/JM7XHz3nNPgydWqyf/two-family-dance-flyers | JM7XHz3nNPgydWqyf | Two Family Dance Flyers | jkaufman | I'm going to be calling another family dance in a week, and Lily and Anna wanted to make flyers to advertise it. I wrote out a sheet with the key details they might want to include:
Lily wanted to do hers on the computer, and it ended up being primarily about learning Inkscape:
Anna did hers by hand, and was very into... | 2024-06-09 |
https://www.lesswrong.com/posts/awJ9ykoiwDE9Nxrzj/what-can-we-learn-from-orcas | awJ9ykoiwDE9Nxrzj | What can we learn from orcas? | denominations | Orcas? For those of you who haven't kept up with marine wildlife news, 2020-2023 saw a big uptick in the number of orca attacks on human vessels around the Iberian peninsula. Is this their attempt to even the odds? Are we heading towards full-on conflict? Is planet of the killer whales upon us?
I can hear you smirk. It... | 2024-06-10 |
https://www.lesswrong.com/posts/mSAJmPbkkJGtGgn7t/what-happens-to-existing-life-sentences-under-lev | mSAJmPbkkJGtGgn7t | What happens to existing life sentences under LEV? | o-o | Presumably they get offered longevity treatments since they already get healthcare. Are they locked up until the end of time? For 100 years? | 2024-06-09 |
https://www.lesswrong.com/posts/EC4R6FFjnsDz3cxcp/d-and-d-sci-alchemy-archmage-anachronos-and-the-supply-chain-1 | EC4R6FFjnsDz3cxcp | D&D.Sci Alchemy: Archmage Anachronos and the Supply Chain Issues Evaluation & Ruleset | aphyer | This is a follow-up to last week's D&D.Sci scenario: if you intend to play that, and haven't done so yet, you should do so now before spoiling yourself.
There is a web interactive here you can use to test your answer, and generation code available here if you're interested, or you can read on for the ruleset and scores... | 2024-06-17 |
https://www.lesswrong.com/posts/zZuR2dxrt86HNXSgw/observations-for-doing-debate-with-models-behind-apis-1 | zZuR2dxrt86HNXSgw | Observations for doing debate with models behind APIs | PoD123 | Introduction
Hallucination is one of the major problems for reliable use of LLMs. This post is about some unexpected findings when I tried to replicate the methods of this paper for increasing factuality of LLMs using debate. Specifically, the task was generating biographies of scientists. In the process I observed: 1)... | 2024-06-10 |
https://www.lesswrong.com/posts/nAL6mnyFx2NuYLXmu/aggregative-principles-approximate-utilitarian-principles | nAL6mnyFx2NuYLXmu | Aggregative principles approximate utilitarian principles | strawberry calm | 1. Introduction
Utilitarianism is the view that a social planner should choose options which maximise the social utility of the resulting social outcome. The central object in utilitarianism is the social utility function u: S → ℝ, which assigns a real value u(s) ∈ ℝ to each social outcome s ∈ S. This function typically involv... | 2024-06-12 |
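A compact way to state the view in symbols (the textbook total-utilitarian instance, offered as a hedged gloss rather than the post's own formalism): the planner picks

```latex
\[
a^* \in \arg\max_{a \in A} u\big(s(a)\big),
\qquad \text{with, e.g., } u(s) = \sum_{i} u_i(s)
\]
```

where A is the set of options, s(a) is the social outcome an option produces, and u_i is each individual's utility.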
https://www.lesswrong.com/posts/TA9eEgiWJfgBcJ7wn/exploring-llama-3-8b-mlp-neurons | TA9eEgiWJfgBcJ7wn | Exploring Llama-3-8B MLP Neurons | thong-nguyen | TL;DR: We created a dataset of text snippets that strongly activate neurons in Llama-3-8B model. This dataset shows meaningful features that can be found. Explore the neurons with the web interface: https://neuralblog.github.io/llama3-neurons/neuron_viewer.html
An example of a "derivative" neuron which is triggered whe... | 2024-06-09 |
https://www.lesswrong.com/posts/Av9D4GkdGNkiS2wHx/demystifying-alignment-through-a-comic | Av9D4GkdGNkiS2wHx | Demystifying "Alignment" through a Comic | milanrosko | Disclaimer: This explanatory comic is not specifically aimed at the Less Wrong contributor.
I hope you enjoyed this brief overview. For the full comic visit:
https://milanrosko.substack.com/p/button | 2024-06-09 |
https://www.lesswrong.com/posts/jqsRBR2fgoMPc9dGS/dumbing-down | jqsRBR2fgoMPc9dGS | Dumbing down | sustrik | In the past few years I've been blogging in Slovak, that is, downscaling from writing in English, a language with 1,457 million speakers, to a language with 7 million speakers.
From the point of view of the writer, this has been a very different experience. It's not only that for a topic that interests one million English sp... | 2024-06-09 |
https://www.lesswrong.com/posts/9gXsecDTh2WrpqN8j/what-if-a-tech-company-forced-you-to-move-to-nyc | 9gXsecDTh2WrpqN8j | What if a tech company forced you to move to NYC? | KatjaGrace | It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change and as long as the machines are pretty near the mark on rewiring yo... | 2024-06-09 |
https://www.lesswrong.com/posts/h43tWdo79C6dzXf8x/what-the-hell-is-a-representation-anyway-or-clarifying-ai | h43tWdo79C6dzXf8x | "What the hell is a representation, anyway?" | Clarifying AI interpretability with tools from philosophy of cognitive science | Part 1: Vehicles vs. contents | IwanWilliams | AI interpretability researchers want to understand how models work. One popular approach is to try to figure out which features of an input a model detects and uses to generate outputs. For instance, researchers interested in understanding how an image classifier distinguishes animals from inanimate objects might try t... | 2024-06-09 |
https://www.lesswrong.com/posts/StydMSLziGBn5gFAP/psa-consider-alternatives-to-auroc-when-reporting-classifier | StydMSLziGBn5gFAP | PSA: Consider alternatives to AUROC when reporting classifier metrics for alignment | alex-rozenshteyn | TL;DR
If you’re presenting a classifier that detects misalignment and providing metrics for it, please:
report the TPR at FPR=0.001, 0.01, and 0.05
plot the ROC curve on a log-log scale
See https://arxiv.org/abs/2112.03570 for more context on why you might want to do this.
ML Background
(If all the terms in the TL;DR ma... | 2024-06-24 |
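A minimal sketch of the reporting the post asks for, on synthetic scores (y_true and y_score are placeholders for a real classifier's labels and outputs, not anything from the post):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

# Synthetic imbalanced data standing in for a misalignment detector.
rng = np.random.default_rng(0)
y_true = np.concatenate([np.zeros(100_000), np.ones(1_000)])
y_score = np.concatenate([rng.normal(0, 1, 100_000), rng.normal(2, 1, 1_000)])

fpr, tpr, _ = roc_curve(y_true, y_score)

# TPR at the low-FPR operating points the post recommends.
for target in (0.001, 0.01, 0.05):
    print(f"TPR at FPR={target}: {tpr[fpr <= target].max():.3f}")

# ROC curve on a log-log scale, so the low-FPR region is visible.
plt.plot(fpr, tpr)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.show()
```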
https://www.lesswrong.com/posts/L4GoMcXHyMsbK8Cey/what-should-i-do-long-term-plan-about-starting-an-ai-lab | L4GoMcXHyMsbK8Cey | What should I do? (long term plan about starting an AI lab) | not_a_cat | I was listening to this Dwarkesh podcast with Leopold Aschenbrenner where they talk about AGI, superintelligence, and how things might unfold. All I want to say about it is that it created a sense of concreteness and urgency when considering my plans for the future.
A bit of context about myself: Since I was a teenager... | 2024-06-09 |
https://www.lesswrong.com/posts/KjBvGS6dgMz5qeDpL/introducing-sara-a-new-activation-steering-technique | KjBvGS6dgMz5qeDpL | Introducing SARA: a new activation steering technique | alejandro-tlaie-boria | Disclaimer
I currently am a Postdoctoral Fellow in Computational Neuroscience, learning about Mechanistic Interpretability and AI Safety in general. This post and the paper that goes with it are part of my current pivot towards these topics; thus, I apologise in advance if I'm not using the appropriate terminology or i... | 2024-06-09 |
https://www.lesswrong.com/posts/SnLXoyd2bzXWnSy4y/searching-for-the-root-of-the-tree-of-evil | SnLXoyd2bzXWnSy4y | Searching for the Root of the Tree of Evil | ivan-vendrov | “There are a thousand hacking at the branches of evil to one who is striking at the root”
Henry David Thoreau, Walden
The world is full of problems: Pain, poverty, illness, war, pollution, to pick a few among thousands. Many of us feel like we need to Do Something about these problems. There’s just one problem (sorry):... | 2024-06-08 |
https://www.lesswrong.com/posts/CZjnvaFiRokwst68C/two-easy-things-that-maybe-just-work-to-improve-ai-discourse | CZjnvaFiRokwst68C | Two easy things that maybe Just Work to improve AI discourse | jacobjacob | So, it seems AI discourse on X / Twitter is getting polarised. This is bad. Especially bad is how some engage in deliberate weaponization of discourse, for political ends.
At the same time, I observe: AI Twitter is still a small space. There are often important posts that have only ~100 likes, ~10-100 comments, and may... | 2024-06-08 |
https://www.lesswrong.com/posts/bJPGcoA6ZXjYAh4KP/the-perils-of-popularity-a-critical-examination-of-lesswrong | bJPGcoA6ZXjYAh4KP | The Perils of Popularity: A Critical Examination of LessWrong's Rational Discourse | BubbaJoeLouis | LessWrong.com has long been heralded as a bastion of rational thought and high-minded discourse. The community prides itself on fostering rigorous intellectual discussions, grounded in the principles of rationality and Bayesian reasoning. However, upon closer examination, it becomes evident that the platform's structur... | 2024-06-08 |
https://www.lesswrong.com/posts/Bvz9LxFpkmZWcCbff/status-quo-bias-is-usually-justified | Bvz9LxFpkmZWcCbff | Status quo bias is usually justified | amadeus-pagel | Generally, we know more about the status quo than about anything else.
We know that we can live in the current climate; we don't know that about any other climate.
We know that society functions with current laws and norms; we don't know that about any other set of laws.
Often, we are adapted to the status quo and the s... | 2024-06-08 |
https://www.lesswrong.com/posts/4q9kyeqmA9uGq8fRJ/closed-source-evaluations | 4q9kyeqmA9uGq8fRJ | Closed-Source Evaluations | lw-user0246 | Public tripwires are no tripwires.
I'm writing a quick and dirty post because the alternative is that I wait for months and maybe not write it after all. I am broadly familiar with the state of interpretability research but do not know what the state of model evaluations is at the moment.
The interpretability win scree... | 2024-06-08 |
https://www.lesswrong.com/posts/2AxWM4Dx9QqKQg9KH/the-slack-double-crux-or-how-to-negotiate-with-yourself-1 | 2AxWM4Dx9QqKQg9KH | The Slack Double Crux, or how to negotiate with yourself | Unknown | This is a post mostly about Slack. Slack in the sense of Zvi, Slack as the freedom to not be bound by an obligation to have to do anything. Slack is generally accepted to be good and worth pursuing. This post is about the phenomenon of actions which increase Slack also tending to decrease Slack, and what to do about th... | 2024-06-08 |
https://www.lesswrong.com/posts/2wxufQWK8rXcDGbyL/access-to-powerful-ai-might-make-computer-security-radically | 2wxufQWK8rXcDGbyL | Access to powerful AI might make computer security radically easier | Buck | People talk about model weight security being really hard and crucial around the advent of AGI. (E.g. RAND report, Leopold; see here for some distinctions in these threat models that I think are important.) But I think that the thinking on this has not been sufficiently attentive to the fact that during that crucial ti... | 2024-06-08 |
https://www.lesswrong.com/posts/buCy3o4LGXR85gjJb/sev-sevteen-sevty-sevth | buCy3o4LGXR85gjJb | Sev, Sevteen, Sevty, Sevth | jkaufman | I don't like the number seven. Well, really the name of the number seven. All the other single digit numbers are single syllable, and seven has to go and take two. Seventy and seventeen have the same problem. What can we do?
I think the two main candidates are "sev" (dropping the second syllable) and "sen" (dropping th... | 2024-06-08 |
https://www.lesswrong.com/posts/wZjGLYp5WQwF8Y8Kk/5-open-corrigibility-questions | wZjGLYp5WQwF8Y8Kk | 5. Open Corrigibility Questions | max-harms | (Part 5 of the CAST sequence)
Much work remains on the topic of corrigibility and the CAST strategy in particular. There’s theoretical work in both nailing down an even more complete picture of corrigibility and in developing better formal measures. But there’s also a great deal of empirical work that seems possible to... | 2024-06-10 |
https://www.lesswrong.com/posts/NgFuzwxQHzxXqY4sf/alignment-gaps | NgFuzwxQHzxXqY4sf | Alignment Gaps | kcyras | Misaligned agendas and terminology among academic, industrial and independent AI alignment research
This post aims to fill some gaps between technical AI Alignment topics and academic AI research.
It summarises a quick but informed scouting of academic research papers that are closely connected to four Alignment topics... | 2024-06-08 |
https://www.lesswrong.com/posts/d7jSrBaLzFLvKgy32/4-existing-writing-on-corrigibility | d7jSrBaLzFLvKgy32 | 4. Existing Writing on Corrigibility | max-harms | (Part 4 of the CAST sequence)
This document is an in-depth review of the primary documents discussing corrigibility that I’m aware of. In particular, I'll be focusing on the writing of Eliezer Yudkowsky and Paul Christiano, though I’ll also spend some time at the end briefly discussing other sources. As I go through th... | 2024-06-10 |
https://www.lesswrong.com/posts/fNQiMd8nkxir57K8Y/question-about-lewis-counterfactual-theory-of-causation | fNQiMd8nkxir57K8Y | Question about Lewis' counterfactual theory of causation | jbkjr | In reading the SEP entry on counterfactual theories of causation, I had the following question occur, and I haven't been able to satisfactorily resolve it for myself.
An event e is said to causally depend on an event c if and only if e would occur if c were to occur and e would not occur if c were not to occur.
The art... | 2024-06-07 |
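Rendering the quoted definition in Lewis's notation, with □→ as the counterfactual conditional, may help; this is the standard formalisation, and the symbols are mine rather than the questioner's:

```latex
\[
e \text{ causally depends on } c
\;\iff\;
(c \,\Box\!\!\rightarrow\, e) \;\wedge\; (\neg c \,\Box\!\!\rightarrow\, \neg e)
\]
```

When c and e both actually occur, Lewis's semantics makes the first conjunct automatically true, so the work is done by the second, counterfactual conjunct.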
https://www.lesswrong.com/posts/t8nXfPLBCxsqhbipp/3b-formal-faux-corrigibility | t8nXfPLBCxsqhbipp | 3b. Formal (Faux) Corrigibility | max-harms | (Part 3b of the CAST sequence)
In the first half of this document, Towards Formal Corrigibility, I sketched a solution to the stop button problem. As I framed it, the solution depends heavily on being able to detect manipulation, which I discussed on an intuitive level. But intuitions can only get us so far. Let’s dive... | 2024-06-09 |
https://www.lesswrong.com/posts/WDHREAnbfuwT88rqe/3a-towards-formal-corrigibility | WDHREAnbfuwT88rqe | 3a. Towards Formal Corrigibility | max-harms | (Part 3a of the CAST sequence)
As mentioned in Corrigibility Intuition, I believe that it’s more important to find a simple, coherent, natural/universal concept that can be gestured at, rather than coming up with a precisely formal measure of corrigibility and using that to train an AGI. This isn’t because formal measu... | 2024-06-09 |
https://www.lesswrong.com/posts/MopKxiXeKyv5XuacM/relationships-among-words-metalingual-definition-and | MopKxiXeKyv5XuacM | Relationships among words, metalingual definition, and interpretability | bill-benzon | This is cross-posted from New Savanna.
First, I talk about how natural language is its own metalanguage, which allows speakers to define new words in terms of existing ones. Then I discuss the concept of justice in terms of the mechanism of metalingual definition proposed by David Hays some years ago. I conclude with some re... | 2024-06-07 |
https://www.lesswrong.com/posts/38w98QCKJrueT3S68/let-s-talk-about-emergence | 38w98QCKJrueT3S68 | Let’s Talk About Emergence | jacobhaimes | Emergence has found its way into machine learning vocabulary, but current use has resulted in a circular definition and has further confused an already complex domain.
Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0
The field of machine learning has existed for many decades, but only recently have g... | 2024-06-07 |
https://www.lesswrong.com/posts/QzC7kdMQ5bbLoFddz/2-corrigibility-intuition | QzC7kdMQ5bbLoFddz | 2. Corrigibility Intuition | max-harms | (Part 2 of the CAST sequence)
As a reminder, here’s how I’ve been defining “corrigible” when introducing the concept: an agent is corrigible when it robustly acts opposite of the trope of "be careful what you wish for" by cautiously reflecting on itself as a flawed tool and focusing on empowering the principal to fix i... | 2024-06-08 |
https://www.lesswrong.com/posts/3HMh7ES4ACpeDKtsW/1-the-cast-strategy | 3HMh7ES4ACpeDKtsW | 1. The CAST Strategy | max-harms | (Part 1 of the CAST sequence)
AI Risk Introduction
(TLDR for this section, since it’s 101 stuff that many readers will have already grokked: Misuse vs Mistake; Principal-Agent problem; Omohundro Drives; we need deep safety measures in addition to mundane methods. Jump to “Sleepy-Bot” if all that seems familiar.)
Earth ... | 2024-06-07 |
https://www.lesswrong.com/posts/NQK8KHSrZRF5erTba/0-cast-corrigibility-as-singular-target-1 | NQK8KHSrZRF5erTba | 0. CAST: Corrigibility as Singular Target | max-harms | What the heck is up with “corrigibility”? For most of my career, I had a sense that it was a grab-bag of properties that seemed nice in theory but hard to get in practice, perhaps due to being incompatible with agency.
Then, last year, I spent some time revisiting my perspective, and I concluded that I had been deeply ... | 2024-06-07 |
https://www.lesswrong.com/posts/SfcWvA3M23A6yHdbd/frida-van-lisa-a-short-story-about-adversarial-ai-attacks-on | SfcWvA3M23A6yHdbd | Frida van Lisa, a short story about adversarial AI attacks on humans | arisalexis | Lights
Aurelio is stuck looking at the back of his car. Seems there is a note for him in Hebrew, written by finger on the dusty window. There is only one person that speaks it in his inner circle, his best friend Chloe, who he hasn’t seen for a while. Why would she ever leave him a message like that, and not on his pho... | 2024-06-07 |
https://www.lesswrong.com/posts/nP5FFYFjtY8LgWymt/quotes-from-leopold-aschenbrenner-s-situational-awareness | nP5FFYFjtY8LgWymt | Quotes from Leopold Aschenbrenner’s Situational Awareness Paper | Zvi | This post is different.
Usually I offer commentary and analysis. I share what others think, then respond.
This is the second time I am importantly not doing that. The work speaks for itself. It offers a different perspective, a window and a worldview. It is self-consistent. This is what a highly intelligent, highly kno... | 2024-06-07 |