| url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/wtACENhhSHH2gLFPA/how-multiverse-theory-dissolves-quantum-inexplicability | wtACENhhSHH2gLFPA | How Multiverse Theory dissolves Quantum inexplicability | mridul.mohan.m@gmail.com | This is a link post for https://viderationale.blog/2024/05/04/quantum-path-to-the-multiverse/ Much of the arguments here comes from David Deutsch's two books The Beginning of Infinity and The Fabric of Reality. I try to explain parts of it in more detail and hopefully they make sense. Central claim is that the problems... | 2024-05-22 |
https://www.lesswrong.com/posts/sex8PDjnnqZgzCt5c/d-and-d-sci-alchemy-archmage-anachronos-and-the-supply-chain | sex8PDjnnqZgzCt5c | D&D.Sci Alchemy: Archmage Anachronos and the Supply Chain Issues | aphyer | This is an entry in the 'Dungeons & Data Science' series, a set of puzzles where players are given a dataset to analyze and an objective to pursue using information from that dataset. After talking with abstractapplic, I've stolen the June 7th scenario slot from him. I hope that this scenario should be relatively simp... | 2024-06-07 |
https://www.lesswrong.com/posts/qWXviyhy6FZJuao3p/should-we-be-concerned-about-eating-too-much-soy | qWXviyhy6FZJuao3p | Should we be concerned about eating too much soy? | ChristianKl | Parts of the internet say that, especially for men, eating too much soy is unhealthy while other parts of the internet advocate that soy is really great way for vegans to consume their protein. Has anyone made a deep dive into the evidence base and formed an opinion about whether or not to worry? | 2024-05-22 |
https://www.lesswrong.com/posts/HAPhon49ofEwT3LCR/procedural-executive-function-part-3 | HAPhon49ofEwT3LCR | Procedural Executive Function, Part 3 | DaystarEld | null | 2024-05-22 |
https://www.lesswrong.com/posts/MuyCbad9ZHW8b3rMP/cicadas-anthropic-and-the-bilateral-alignment-problem | MuyCbad9ZHW8b3rMP | Cicadas, Anthropic, and the bilateral alignment problem | kromem | There have been a number of responses to today's Anthropic interpretability research, and while I think there were a number of salient points, there may be a degree of specialization blindness going on in contextualizing the work in the broader picture of alignment goals. Alignment as a problem domain is not unilateral... | 2024-05-22 |
https://www.lesswrong.com/posts/cGzQBRDrpNHoYtbKN/what-mistakes-has-the-ai-safety-movement-made | cGzQBRDrpNHoYtbKN | What mistakes has the AI safety movement made? | euanmclean | This is the third of three posts summarizing what I learned when I interviewed 17 AI safety experts about their "big picture" of the existential AI risk landscape: how AGI will play out, how things might go wrong, and what the AI safety community should be doing. See here for a list of the participants and the standard... | 2024-05-23 |
https://www.lesswrong.com/posts/XfnnkK8XEjTqtuXGM/what-should-ai-safety-be-trying-to-achieve | XfnnkK8XEjTqtuXGM | What should AI safety be trying to achieve? | euanmclean | This is the second of three posts summarizing what I learned when I interviewed 17 AI safety experts about their "big picture" of the existential AI risk landscape: how will artificial general intelligence (AGI) play out, how things might go wrong, and what the AI safety community should be doing. See here for a list o... | 2024-05-23 |
https://www.lesswrong.com/posts/wHRMZizqfdW9RjrCY/what-will-the-first-human-level-ai-look-like-and-how-might | wHRMZizqfdW9RjrCY | What will the first human-level AI look like, and how might things go wrong? | euanmclean | This is the first of 3 posts summarizing what I learned when I interviewed 17 AI safety experts about their "big picture" of the existential AI risk landscape: how will AGI play out, how things might go wrong, and what the AI safety community should be doing. See here for a list of the participants and the standardized... | 2024-05-23 |
https://www.lesswrong.com/posts/yMTNjeEHfHcf2x7nY/big-picture-ai-safety-introduction | yMTNjeEHfHcf2x7nY | Big Picture AI Safety: Introduction | euanmclean | tldr: I conducted 17 semi-structured interviews of AI safety experts about their big picture strategic view of the AI safety landscape: how will human-level AI play out, how things might go wrong, and what should the AI safety community be doing. While many respondents held “traditional” views (e.g. the main threat is ... | 2024-05-23 |
https://www.lesswrong.com/posts/7RtAc6drC7Jtuzpqx/announcing-human-aligned-ai-summer-school | 7RtAc6drC7Jtuzpqx | Announcing Human-aligned AI Summer School | Jan_Kulveit | The fourth Human-aligned AI Summer School will be held in Prague from 17th to 20th July 2024. We will meet for four intensive days of talks, workshops, and discussions covering latest trends in AI alignment research and broader framings of AI alignment research. Apply now, applications are evaluated on a rolling basis.... | 2024-05-22 |
https://www.lesswrong.com/posts/7oGfJG2BuvTgdCHQH/which-chains-of-thought-was-that-faster-than | 7oGfJG2BuvTgdCHQH | "Which chains-of-thought was that faster than?" | Emrik North | Here's some good advice from Eliezer: TAP: "How could I have thought that faster?" WHEN[1] you complete a chain-of-thought, THEN ask yourself, "how could I have thought that faster?" I really like this heuristic, and it's already paid its rent several times over for me. Most recently today, so I'll share the (slightly ed... | 2024-05-22 |
https://www.lesswrong.com/posts/Z6YHCqDbWnBkkQe33/each-llama3-8b-text-uses-a-different-random-subspace-of-the | Z6YHCqDbWnBkkQe33 | Each Llama3-8b text uses a different "random" subspace of the activation space | tailcalled | This is kind of a null result (or WIP research) I got with a few days of fiddling, so don't get too excited. Also, because it's a null result, it's always conceivable that there's just some slight change in the approach which could suddenly flip it to get a real result. More on that in the "Discussion" section. I would... | 2024-05-22 |
https://www.lesswrong.com/posts/yfyjD9aCgNvqydv8J/aria-s-safeguarded-ai-grant-program-is-accepting | yfyjD9aCgNvqydv8J | ARIA's Safeguarded AI grant program is accepting applications for Technical Area 1.1 until May 28th | Brendon_Wong | Note: I am completely unaffiliated with ARIA. I figured I'd post this since applications are closing soon and I didn't see anyone post about this. My Takeaways: ARIA is funding the development of Safeguarded AI, which is an update to and specific implementation of davidad's Open Agency Architecture. This grant round is f... | 2024-05-22 |
https://www.lesswrong.com/posts/Ge55vxEmKXunFFwoe/reward-hacking-behavior-can-generalize-across-tasks | Ge55vxEmKXunFFwoe | Reward hacking behavior can generalize across tasks | Kei | TL;DR: We find that reward hacking generalization occurs in LLMs in a number of experimental settings and can emerge from reward optimization on certain datasets. This suggests that when models exploit flaws in supervision during training, they can sometimes generalize to exploit flaws in supervision in out-of-distribu... | 2024-05-28 |
https://www.lesswrong.com/posts/QGWdaZg9dpcA8hLZH/ai-safety-proposal-influencing-the-superintelligence | QGWdaZg9dpcA8hLZH | AI Safety proposal - Influencing the superintelligence explosion | Morgan | To preface, my expectation is that by default, an AI research lab will create super-intelligent AI within the next few years. Also by default, I expect it to quickly eradicate all of humanity. I would prefer if that didn't happen. I think the initiative to pause development to buy time is noble, but we still need a rea... | 2024-05-22 |
https://www.lesswrong.com/posts/M3QqgcbXr3mgQKnBD/anthropic-announces-interpretability-advances-how-much-does | M3QqgcbXr3mgQKnBD | Anthropic announces interpretability advances. How much does this advance alignment? | Seth Herd | Anthropic just published a pretty impressive set of results in interpretability. This raises for me, some questions and a concern: Interpretability helps, but it isn't alignment, right? It seems to me as though the vast bulk of alignment funding is now going to interpretability. Who is thinking about how to leverage in... | 2024-05-21 |
https://www.lesswrong.com/posts/6wNGkuar6WXAtjv8E/what-would-stop-you-from-paying-for-an-llm | 6wNGkuar6WXAtjv8E | What would stop you from paying for an LLM? | yanni | Take an extreme case; Sam Altman turns around tomorrow and says "We're racing to AGI, I'm not going to worry about Safety at all." Would that stop you from throwing him $20 a month? (I currently pay for Gemini) | 2024-05-21 |
https://www.lesswrong.com/posts/LgsbokAykMrvNAPAh/what-is-space-what-is-time | LgsbokAykMrvNAPAh | What is space? What is time? | Tahp | This is an attempt to describe space and time without much physics baggage. Then I can refer back to this post when I want to add more physics baggage. It is surprisingly difficult to explain what I mean by “time” and “space” colloquially. The assumptions of time and space permeate language. The most defensible thing I... | 2024-06-07 |
https://www.lesswrong.com/posts/pH6tyhEnngqWAXi9i/eis-xiii-reflections-on-anthropic-s-sae-research-circa-may | pH6tyhEnngqWAXi9i | EIS XIII: Reflections on Anthropic’s SAE Research Circa May 2024 | scasper | Part 13 of 12 in the Engineer’s Interpretability Sequence. TL;DR: On May 5, 2024, I made a set of 10 predictions about what the next sparse autoencoder (SAE) paper from Anthropic would and wouldn’t do. Today’s new SAE paper from Anthropic was full of brilliant experiments and interesting insights, but it underperformed ... | 2024-05-21 |
https://www.lesswrong.com/posts/BikZyjiEgFmo7HQHm/mitigating-extreme-ai-risks-amid-rapid-progress-linkpost | BikZyjiEgFmo7HQHm | Mitigating extreme AI risks amid rapid progress [Linkpost] | Unknown | In a new Science paper, the authors provide concise summaries of AI risks and offer recommendations for governments. I think the piece is quite well-written. It concisely explains a lot of relevant arguments, including arguments about misalignment and AI takeover. I suspect this is one of the best standalone pieces to ... | 2024-05-21 |
https://www.lesswrong.com/posts/LWWAnCY84fqks4uec/helping-loved-ones-with-their-finances-the-why-and-how-of-an | LWWAnCY84fqks4uec | Helping loved ones with their finances: the why and how of an unusually impactful opportunity | Sam Anschell | Linkposting a writeup of my learnings from helping family members augment their investments. I encourage LessWrong users to check it out; I expect the post contains new and actionable information for a number of readers. Thanks in advance for any comments or feedback that can help the post be more useful to others! | 2024-05-21 |
https://www.lesswrong.com/posts/jmqJvJ2SSLR4bvtCp/rough-draft-on-what-happens-in-the-brain-when-you-have-an | jmqJvJ2SSLR4bvtCp | rough draft on what happens in the brain when you have an insight | Emrik North | Epistemic status: It is better to be wrong than to have no model at all. I recommend the footnotes.[1] 🍵 On my current models of theoretical[2] insight-making, it looks something like this: A gradual build-up and propagation of salience wrt some tiny discrepancy between highly confident specific beliefs. This maybe corr... | 2024-05-21 |
https://www.lesswrong.com/posts/rC6CXZd34geayEH4s/on-dwarkesh-s-podcast-with-openai-s-john-schulman | rC6CXZd34geayEH4s | On Dwarkesh’s Podcast with OpenAI’s John Schulman | Zvi | Dwarkesh Patel recorded a Podcast with John Schulman, cofounder of OpenAI and at the time their head of current model post-training. Transcript here. John’s job at the time was to make the current AIs do what OpenAI wanted them to do. That is an important task, but one that employs techniques that their at-the-time hea... | 2024-05-21 |
https://www.lesswrong.com/posts/HkcMLfaWmEc9HtSQp/is-deleting-capabilities-still-a-relevant-research-question | HkcMLfaWmEc9HtSQp | Is deleting capabilities still a relevant research question? | tailcalled | I've had it suggested that a good criterion for whether interpretability is on the right track is if we can do surgical "deletions" of model capabilities, e.g. removing its ability to build bombs and such. Obviously in one sense this is fairly trivial since you can just use simple gradient descent to make the models re... | 2024-05-21 |
https://www.lesswrong.com/posts/qfEgzQ9jGEk9Cegvy/new-voluntary-commitments-ai-seoul-summit | qfEgzQ9jGEk9Cegvy | New voluntary commitments (AI Seoul Summit) | Zach Stein-Perlman | Basically the companies commit to make responsible scaling policies. Part of me says this is amazing, the best possible commitment short of all committing to a specific RSP. It's certainly more real than almost all other possible kinds of commitments. But as far as I can tell, people pay almost no attention to what RSP... | 2024-05-21 |
https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic | YgSKfAG2iY5Sxw7Xd | Doomsday Argument and the False Dilemma of Anthropic Reasoning | Ape in the coat | Doomsday Inference: Can we use probability theory to estimate how many people there will be throughout the whole human history? Sure. We can build a probability model, that takes into account birth rates, possible existential hazards, ways to mitigate them and multiple other factors. Such models tend not to be very prec... | 2024-07-05 |
https://www.lesswrong.com/posts/fojbTgKWRs29YRiBK/acx-lw-ea-meetup-bremen | fojbTgKWRs29YRiBK | ACX/LW/EA/* Meetup Bremen | JohannWolfgang | Our regular Bremen meetup, originally spun out of an ACX spring meetup. | 2024-05-21 |
https://www.lesswrong.com/posts/gmysPZ3t5Rz9nzcCr/my-dating-heuristic | gmysPZ3t5Rz9nzcCr | My Dating Heuristic | declan-molony | I don’t have to practice being afraid of a lion charging at me—my instincts tell me to run. But when I started dating, my instincts weren’t that reliable when attempting to attract a partner. They needed to be recalibrated. Author Matthew Hussey talks about retraining your (likely faulty) dating instincts in his book L... | 2024-05-21 |
https://www.lesswrong.com/posts/otFDNWGN3zhNXXGrH/scorable-functions-a-format-for-algorithmic-forecasting | otFDNWGN3zhNXXGrH | Scorable Functions: A Format for Algorithmic Forecasting | ozziegooen | null | 2024-05-21 |
https://www.lesswrong.com/posts/cy99dCEiLyxDrMHBi/what-s-going-on-with-openai-s-messaging | cy99dCEiLyxDrMHBi | What's Going on With OpenAI's Messaging? | ozziegooen | null | 2024-05-21 |
https://www.lesswrong.com/posts/p3aL6BwpbPhqxnayL/the-problem-with-the-word-alignment-1 | p3aL6BwpbPhqxnayL | The Problem With the Word ‘Alignment’ | peligrietzer | This post was written by Peli Grietzer, inspired by internal writings by TJ (tushant jha), for AOI[1]. The original post, published on Feb 5, 2024, can be found here: https://ai.objectives.institute/blog/the-problem-with-alignment. The purpose of our work at the AI Objectives Institute (AOI) is to direct the impact of ... | 2024-05-21 |
https://www.lesswrong.com/posts/PRjqTjzqwLnibxzFv/harmony-intelligence-is-hiring | PRjqTjzqwLnibxzFv | Harmony Intelligence is Hiring! | james-dao | Hey folks! Pleased to announce we have a new open position for a Founding Research Engineer at Harmony Intelligence. You’ll be responsible for measuring and identifying dangerous AI capabilities across various domains: cybersecurity, biosecurity, persuasion and manipulation, self-exfiltration and self-replication, and ... | 2024-05-21 |
https://www.lesswrong.com/posts/qZGgLiyheoh8f7Cga/linkpost-statement-from-scarlett-johansson-on-openai-s-use | qZGgLiyheoh8f7Cga | [Linkpost] Statement from Scarlett Johansson on OpenAI's use of the "Sky" voice, that was shockingly similar to her own voice. | Linch | Scarlett Johansson makes a statement about the "Sky" voice, a voice for GPT-4o that OpenAI recently pulled after less than a week of prime time. tl;dr: OpenAI made an offer last September to Johansson; she refused. They offered again 2 days before the public demo. Scarlett Johansson claims that the voice was so similar... | 2024-05-20 |
https://www.lesswrong.com/posts/Err7khp2GoqnwSezH/are-there-any-groupchats-for-people-working-on | Err7khp2GoqnwSezH | Are there any groupchats for people working on Representation reading/control, activation steering type experiments? | Joe Kwon | Looking for any discord/slack/other that have people working on projects related to representation reading, control, activation steering with vectors and adapters, ...Would appreciate any pointers if such a thing exists! | 2024-05-20 |
https://www.lesswrong.com/posts/bchjSwxBTxZBFXBXs/the-local-interaction-basis-identifying-computationally | bchjSwxBTxZBFXBXs | The Local Interaction Basis: Identifying Computationally-Relevant and Sparsely Interacting Features in Neural Networks | Lblack | This is a linkpost for our two recent papers: an exploration of using degeneracy in the loss landscape for interpretability (https://arxiv.org/abs/2405.10927) and an empirical test of an interpretability technique based on the loss landscape (https://arxiv.org/abs/2405.10928). This work was produced at Apollo Research in collabo... | 2024-05-20 |
https://www.lesswrong.com/posts/L7mKt4okQLWdv7mu5/nao-updates-spring-2024 | L7mKt4okQLWdv7mu5 | NAO Updates, Spring 2024 | jkaufman | Now that the NAO blog is up, we’re taking the opportunity to post some written updates on the work our team has done over the past ~6 months. We’re hoping to make similar updates something like quarterly. Since this post covers a longer period it’s a bit longer than we expect future ones will be. If anything here is pa... | 2024-05-20 |
https://www.lesswrong.com/posts/6gMvyKuxZSECMyzah/some-perspectives-on-the-discipline-of-physics | 6gMvyKuxZSECMyzah | Some perspectives on the discipline of Physics | Tahp | I wrote the linked post, and I’m posting a lightly edited version here for discussion. I plan to attend LessOnline, and this is my first attempt at blogging to understand and earnestly explain and is also gauging interest in the topic in case someone at LessOnline wants to discuss the firmware of the universe with me. ... | 2024-05-20 |
https://www.lesswrong.com/posts/ASzyQrpGQsj7Moijk/openai-exodus | ASzyQrpGQsj7Moijk | OpenAI: Exodus | Zvi | Previously: OpenAI: Facts From a Weekend, OpenAI: The Battle of the Board, OpenAI: Leaks Confirm the Story, OpenAI: Altman Returns, OpenAI: The Board Expands. Ilya Sutskever and Jan Leike have left OpenAI. This is almost exactly six months after Altman’s temporary firing and The Battle of the Board, the day after the r... | 2024-05-20 |
https://www.lesswrong.com/posts/bjqDQB92iBCahXTAj/jaan-tallinn-s-2023-philanthropy-overview | bjqDQB92iBCahXTAj | Jaan Tallinn's 2023 Philanthropy Overview | jaan | to follow up my philantropic pledge from 2020, i've updated my philanthropy page with 2023 results. in 2023 my donations funded $44M worth of endpoint grants ($43.2M excluding software development and admin costs) — exceeding my commitment of $23.8M (20k times $1190.03 — the minimum price of ETH in 2023). | 2024-05-20 |
https://www.lesswrong.com/posts/oGXmwzsDqKM9uP5dA/d-and-d-sci-easy-mode-on-the-construction-of-impossible-1 | oGXmwzsDqKM9uP5dA | D&D.Sci (Easy Mode): On The Construction Of Impossible Structures [Evaluation and Ruleset] | abstractapplic | This is a followup to the D&D.Sci post I made last Friday; if you haven’t already read it, you should do so now before spoiling yourself. Below is an explanation of the rules used to generate the dataset (my full generation code is available here, in case you’re curious about details I omitted), and their strategic imp... | 2024-05-20 |
https://www.lesswrong.com/posts/8kghiWcnxpjhraDgE/the-consistent-guessing-problem-is-easier-than-the-halting | 8kghiWcnxpjhraDgE | The consistent guessing problem is easier than the halting problem | jessica.liu.taylor | The halting problem is the problem of taking as input a Turing machine M, returning true if it halts, false if it doesn't halt. This is known to be uncomputable. The consistent guessing problem (named by Scott Aaronson) is the problem of taking as input a Turing machine M (which either returns a Boolean or never halts)... | 2024-05-20 |
https://www.lesswrong.com/posts/vAopGQhFPdjcA8CEh/anthropic-reflections-on-our-responsible-scaling-policy | vAopGQhFPdjcA8CEh | Anthropic: Reflections on our Responsible Scaling Policy | zac-hatfield-dodds | Last September we published our first Responsible Scaling Policy (RSP) [LW discussion], which focuses on addressing catastrophic safety failures and misuse of frontier models. In adopting this policy, our primary goal is to help turn high-level safety concepts into practical guidelines for fast-moving technical organiz... | 2024-05-20 |
https://www.lesswrong.com/posts/MuomnaBwDgETcr2e6/a-poem-titled-tick-tock | MuomnaBwDgETcr2e6 | A poem titled 'Tick Tock'. | Krantz | Inspired by a collective intelligence project that I've been working on in the GOFAI space for over a decade. Hoping to share more at less online if I can afford to make it. 1st prediction: My second prediction will be true. 2nd prediction: My first prediction was false. 42nd prediction: This is true iff prediction 24... | 2024-05-20 |
https://www.lesswrong.com/posts/MHdSEuXLxAFqX5k73/against-computers-infinite-play | MHdSEuXLxAFqX5k73 | Against Computers (infinite play) | rogersbacon | Introduction (Dolls All the Way Down): You know that thing we do where we convince ourselves that the most complex things we know of (life, brain, universe) are just like whatever the latest and greatest technology is? To Descartes, the brain was a kind of hydraulic pump that circulated the spirits of the nervous system... | 2024-05-20 |
https://www.lesswrong.com/posts/FK5ctN989MzADufoM/hot-take-the-ai-safety-movement-is-way-too-sectarian-and | FK5ctN989MzADufoM | Hot take: The AI safety movement is way too sectarian and this is greatly increasing p(doom) | o-o | The movement to reduce AI x-risk is overly purist. This is leading to a lot of sects to maintain each individual sect's platonic level of purity and is actively (greatly) harming the cause. How the Safety Sects Manifest: People suggest not publishing AI research; more recently, Jan and his team leaving OpenAI; less recentl... | 2024-05-19 |
https://www.lesswrong.com/posts/YtDtJC7vdgyLiDCwB/on-privilege | YtDtJC7vdgyLiDCwB | On Privilege | shminux | The forum has been very much focused on AI safety for some time now, thought I'd post something different for a change. Privilege. Here I define Privilege as an advantage over others that is invisible to the beholder. [EDIT: thanks to JenniferRM for pointing out that "beholder" is a wrong word.] This may not be the onl... | 2024-05-18 |
https://www.lesswrong.com/posts/bXQjSaYH9NRsjPinS/some-meta-cruxes-for-ai-x-risk-debates | bXQjSaYH9NRsjPinS | Some "meta-cruxes" for AI x-risk debates | alenglander | [Epistemic status: As I say below, I've been thinking about this topic for several years and I've worked on it as part of my PhD research. But none of this is based on any rigorous methodology, just my own impressions from reading the literature.] I've been thinking about possible cruxes in AI x-risk debates for severa... | 2024-05-19 |
https://www.lesswrong.com/posts/Hpmc2hmakfzutXLWa/scientific-notation-options | Hpmc2hmakfzutXLWa | Scientific Notation Options | jkaufman | When working with numbers that span many orders of magnitude it's very helpful to use some form of scientific notation. At its core, scientific notation expresses a number by breaking it down into a decimal ≥1 and <10 (the "significand" or "mantissa") and an integer representing the order of magnitude (the "exponent")... | 2024-05-18 |
https://www.lesswrong.com/posts/SfdwsPsQBF4fsaPWJ/are-there-other-ideas-as-generally-applicable-as-natural | SfdwsPsQBF4fsaPWJ | Are There Other Ideas as Generally Applicable as Natural Selection | amin-sennour | I've noticed that the principles of Evolution / Natural Selection apply to a lot of things besides the context they were initially developed for (Biology). Examples are things like ideas / culture (memetics), technological progress, and machine learning (sort of). Reasoning about things like history, politics, companie... | 2024-05-18 |
https://www.lesswrong.com/posts/PypZ5kLTnn2ifgiLC/the-problem-with-rationality | PypZ5kLTnn2ifgiLC | The problem with rationality | david-loomis | I could write a book concerning the problem with rationality and may well expound upon many of my introductory post's assertions in future posts. I will attempt to be as succinct as my run-on brain is capable of. Forgive the vaguery and lack of precision. Here goes! Life came to be billions of years following the earth... | 2024-05-21 |
https://www.lesswrong.com/posts/y8eQjQaCamqdc842k/deepmind-s-frontier-safety-framework-is-weak-and-unambitious | y8eQjQaCamqdc842k | DeepMind's "Frontier Safety Framework" is weak and unambitious | Zach Stein-Perlman | FSF blogpost. Full document (just 6 pages; you should read it). Compare to Anthropic's RSP, OpenAI's RSP ("Preparedness Framework"), and METR's Key Components of an RSP. Google DeepMind's FSF has three steps: Create model evals for warning signs of "Critical Capability Levels"; Evals should have a "safety buffer" of at l... | 2024-05-18 |
https://www.lesswrong.com/posts/WjtnvndbsHxCnFNyc/ai-companies-aren-t-really-using-external-evaluators | WjtnvndbsHxCnFNyc | AI companies aren't really using external evaluators | Zach Stein-Perlman | From my new blog: AI Lab Watch. All posts will be crossposted to LessWrong. Subscribe on Substack. Many AI safety folks think that METR is close to the labs, with ongoing relationships that grant it access to models before they are deployed. This is incorrect. METR (then called ARC Evals) did pre-deployment evaluation ... | 2024-05-24 |
https://www.lesswrong.com/posts/md8DJ5smqjHdJs65Z/international-scientific-report-on-the-safety-of-advanced-ai | md8DJ5smqjHdJs65Z | International Scientific Report on the Safety of Advanced AI: Key Information | alenglander | I thought that the recently released International Scientific Report on the Safety of Advanced AI seemed like a pretty good summary of the state of the field on AI risks, in addition to being about as close to a statement of expert consensus as we're likely to get at this point. I noticed that each section of the repor... | 2024-05-18 |
https://www.lesswrong.com/posts/HBn95kqYq2nYKK5qT/goodhart-in-rl-with-kl-appendix | HBn95kqYq2nYKK5qT | Goodhart in RL with KL: Appendix | thomas-kwa | This is the appendix to the previous post on Goodhart’s Law and KL regularization, containing all of our proofs. Theorem about distributions. Theorem 1: Given any heavy-tailed reference distribution Q over R with mean μQ, and any M,ϵ>0, there is a distribution P with mean μP>M and DKL(P∥Q)<ϵ. Proof: WLOG let μQ=0. We co... | 2024-05-18 |
https://www.lesswrong.com/posts/z4PjRDhXkEx6paE4p/ai-2030-ai-policy-roadmap | z4PjRDhXkEx6paE4p | AI 2030 – AI Policy Roadmap | LTM | AI 2030, a global AI policy roadmap, was launched around a day ago. It was put together and released by Encode Justice, and signed by (at time of writing) over 300 people including Stuart Russell, Max Tegmark, Daniel Kokotajlo, Yoshua Bengio, Mary Robinson, Daron Acemoglu and many more eminent figures. The most excitin... | 2024-05-17 |
https://www.lesswrong.com/posts/wsXCXoyvRi3DnWZ2M/nyu-code-debates-update-postmortem | wsXCXoyvRi3DnWZ2M | NYU Code Debates Update/Postmortem | david-rein | TL;DR: We designed an ambitious scalable oversight experimental setup, where we had people with no coding/programming experience try to answer coding questions (“Which of these two outputs is the correct output of the given function on this input?”), using LLMs that debate or are arguing for the correct/incorrect answer... | 2024-05-24 |
https://www.lesswrong.com/posts/Q2Gpycp9gXokspyQY/mit-futuretech-are-hiring-for-an-operations-and-project | Q2Gpycp9gXokspyQY | MIT FutureTech are hiring for an Operations and Project Management role. | peterslattery | MIT FutureTech are hiring for an Operations and Project Management role. Please apply or share as relevant. Why apply or share? Our work to understand progress in computing and artificial intelligence, and its implications, is highly relevant to understanding and mitigating the risks of AI. This write-up provides a goo... | 2024-05-17 |
https://www.lesswrong.com/posts/dLg7CyeTE4pqbbcnp/language-models-model-us | dLg7CyeTE4pqbbcnp | Language Models Model Us | eggsyntax | Produced as part of the MATS Winter 2023-4 program, under the mentorship of @Jessica Rumbelow. One-sentence summary: On a dataset of human-written essays, we find that gpt-3.5-turbo can accurately infer demographic information about the authors from just the essay text, and suspect it's inferring much more. Introduction... | 2024-05-17 |
https://www.lesswrong.com/posts/AFQt6uByLYNrNgyBb/deepmind-frontier-safety-framework | AFQt6uByLYNrNgyBb | DeepMind: Frontier Safety Framework | Zach Stein-Perlman | DeepMind's RSP is here: blogpost, full document. Compare to Anthropic's RSP, OpenAI's RSP ("PF"), and METR's Key Components of an RSP. (Maybe it doesn't deserve to be called an RSP — it doesn't contain commitments, it doesn't really discuss safety practices as a function of risk assessment results, and the deployment s... | 2024-05-17 |
https://www.lesswrong.com/posts/LkECxpbjvSifPfjnb/towards-guaranteed-safe-ai-a-framework-for-ensuring-robust-1 | LkECxpbjvSifPfjnb | Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems | Logical_Lunatic | I want to draw attention to a new paper, written by myself, David "davidad" Dalrymple, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, and Joshua Tenenbaum. I... | 2024-05-17 |
https://www.lesswrong.com/posts/3FqgRqgadJ9EwyPBE/is-there-really-a-child-penalty-in-the-long-run | 3FqgRqgadJ9EwyPBE | Is There Really a Child Penalty in the Long Run? | maxwell-tabarrok | A couple of weeks ago three European economists published this paper studying the female income penalty after childbirth. The surprising headline result: there is no penalty. Setting and Methodology: The paper uses Danish data that tracks IVF treatments as well as a bunch of demographic factors and economic outcomes ove... | 2024-05-17 |
https://www.lesswrong.com/posts/xzJK3nENopiLmo77H/identifying-functionally-important-features-with-end-to-end | xzJK3nENopiLmo77H | Identifying Functionally Important Features with End-to-End Sparse Dictionary Learning | dan-braun-1 | A short summary of the paper is presented below. This work was produced by Apollo Research in collaboration with Jordan Taylor (MATS + University of Queensland). TL;DR: We propose end-to-end (e2e) sparse dictionary learning, a method for training SAEs that ensures the features learned are functionally important by min... | 2024-05-17 |
https://www.lesswrong.com/posts/2jjRcBcFWk5rWcS4H/my-hammer-time-final-exam | 2jjRcBcFWk5rWcS4H | My Hammer Time Final Exam | unicode-59bD | Epistemic Status: I thought about and wrote each paragraph in 10 minutes total, with slight editing afterwards. I hope I'm not too late to the party! I wrote this up quite a few months ago and found that I delayed indefinitely editing it before publication. I decided it's probably best to post a not-maximally-edited ve... | 2024-05-17 |
https://www.lesswrong.com/posts/yku8kgBxTdLaNTHF6/is-there-a-place-to-find-the-most-cited-lw-articles-of-all | yku8kgBxTdLaNTHF6 | Is there a place to find the most cited LW articles of all time? | keltan | I expect it would be useful when developing an understanding of the language used on LW. | 2024-05-17 |
https://www.lesswrong.com/posts/t8S8y3jbAGydfme3J/to-limit-impact-limit-kl-divergence | t8S8y3jbAGydfme3J | To Limit Impact, Limit KL-Divergence | Jemist | TL;DR
Run a potentially-harmful model alongside a known-harmless model, such that their action-spaces (e.g. output token sets) are equivalent. Combine the output probabilities so as to limit the KL-divergence between the resulting token probabilities and the harmless model's probabilities. This provides a mathematical ... | 2024-05-18 |
https://www.lesswrong.com/posts/Syfq6MwgdZhHg9vha/d-and-d-sci-easy-mode-on-the-construction-of-impossible | Syfq6MwgdZhHg9vha | D&D.Sci (Easy Mode): On The Construction Of Impossible Structures | abstractapplic | This is a D&D.Sci scenario: a puzzle where players are given a dataset to analyze and an objective to pursue using information from that dataset.
Duke Arado’s obsession with physics-defying architecture has caused him to run into a small problem. His problem is not – he affirms – that his interest has in any way waned:... | 2024-05-17 |
https://www.lesswrong.com/posts/qNXXe7EDGyveC4SCp/to-an-llm-everything-looks-like-a-logic-puzzle | qNXXe7EDGyveC4SCp | To an LLM, everything looks like a logic puzzle | SharkoRubio | I keep seeing this meme doing the rounds where people present ChatGPT with a common logic problem or riddle, only with some key component changed to make it trivial. ChatGPT has seen the original version a million times, so it gives the answer to the original, not the actually correct and obvious answer.
The idea is to... | 2024-05-16 |
https://www.lesswrong.com/posts/AoExFmnYpA6siucXB/ai-safety-institute-s-inspect-hello-world-example-for-ai | AoExFmnYpA6siucXB | AI Safety Institute's Inspect hello world example for AI evals | TheManxLoiner | Sharing my detailed walk-through on using the UK AI Safety Institute's new open source package Inspect for AI evals.
Main points:
Package released in early May 2024 is here: https://github.com/UKGovernmentBEIS/inspect_aiSeems easy to use and removes boiler-plate code. I am new to evals so I do not know what experienced... | 2024-05-16 |
https://www.lesswrong.com/posts/CD6gWDbgKftFW37gs/advice-for-activists-from-the-history-of-environmentalism-1 | CD6gWDbgKftFW37gs | Advice for Activists from the History of Environmentalism | jeffrey-heninger | This is the fourth in a sequence of posts taken from my recent report: Why Did Environmentalism Become Partisan?
This post has more of my personal opinions than previous posts or the report itself.
Other movements should try to avoid becoming as partisan as the environmental movement. Partisanship did not make environm... | 2024-05-16 |
https://www.lesswrong.com/posts/zMHifwvZB8pwcTZbx/ninety-five-theses-on-ai | zMHifwvZB8pwcTZbx | Ninety-five theses on AI | samuel-hammond | Originally posted to SecondBest.ca ; Zvi responds here.
I. Oversight of AGI labs is prudent
It is in the U.S. national interest to closely monitor frontier model capabilities.You can be ambivalent about the usefulness of most forms of AI regulation and still favor oversight of the frontier labs.As a temporary measure, ... | 2024-05-16 |
https://www.lesswrong.com/posts/bqa5wmrwPL5zbfgxH/gpt-4o-my-and-google-i-o-day | bqa5wmrwPL5zbfgxH | GPT-4o My and Google I/O Day | Zvi | At least twice the speed! At most half the price!
That’s right, it’s GPT-4o My.
Some people’s expectations for the OpenAI announcement this week were very high.
Spencer Schiff: Next week will likely be remembered as one of the most significant weeks in human history.
We fell far short of that, but it was still plenty c... | 2024-05-16 |
https://www.lesswrong.com/posts/29fswYuy6KB8Edbjm/ai-64-feel-the-mundane-utility | 29fswYuy6KB8Edbjm | AI #64: Feel the Mundane Utility | Zvi | It’s happening. The race is on.
Google and OpenAI both premiered the early versions of their fully multimodal, eventually fully integrated AI agents. Soon your phone experience will get more and more tightly integrated with AI. You will talk to your phone, or your computer, and it will talk back, and it will do all the... | 2024-05-16 |
https://www.lesswrong.com/posts/2KDnyEyBKk3xP28oA/aisn-35-lobbying-on-ai-regulation-plus-new-models-from | 2KDnyEyBKk3xP28oA | AISN #35: Lobbying on AI Regulation Plus, New Models from OpenAI and Google, and Legal Regimes for Training on Copyrighted Data | Aidan O'Gara | Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Subscribe here to receive future versions.
Listen to the AI Safety Newsletter for free on Spotify.
OpenAI and Google Announce New Multimodal Models
In the current paradigm of A... | 2024-05-16 |
https://www.lesswrong.com/posts/jt47HsikDuBAYKhGS/fmt-a-great-opportunity-for-soon-to-be-parents-1 | jt47HsikDuBAYKhGS | FMT: a great opportunity for (soon-to-be) parents | anton-rodenhauser | Executive summary
Fecal Microbiota Transplant (FMT) is a procedure that involves transferring the stool of healthy people to the guts of unhealthy people. The bacteria in the healthy person’s stool helps to rebalance the unhealthy person’s dysbiotic (imbalanced) gut microbiome, making their microbiome healthier, diseas... | 2024-05-16 |
https://www.lesswrong.com/posts/wvgwYQv9B4jioqgqg/towards-guaranteed-safe-ai-a-framework-for-ensuring-robust | wvgwYQv9B4jioqgqg | Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems | Gunnar_Zarncke | Authors: David "davidad" Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, Joshua Tenenbaum
Abstract:
Ensuring that AI systems reliably ... | 2024-05-16 |
https://www.lesswrong.com/posts/6Tqm8Jet9mzo6buj9/the-dunning-kruger-of-disproving-dunning-kruger | 6Tqm8Jet9mzo6buj9 | The Dunning-Kruger of disproving Dunning-Kruger | kromem | In an online discussion elsewhere today someone linked this article which in turn linked the paper Gignac & Zajenkowski, The Dunning-Kruger effect is (mostly) a statistical artefact: Valid approaches to testing the hypothesis with individual differences data (PDF) (ironically hosted on @gwern's site).
And I just don't ... | 2024-05-16 |
https://www.lesswrong.com/posts/w2EAEsvL9zEPZtMqr/a-case-for-fairness-enforcing-irrational-behavior | w2EAEsvL9zEPZtMqr | A case for fairness-enforcing irrational behavior | cousin_it | There's a long-standing and possibly unsolvable puzzle about how AIs should behave in game-theoretic situations with each other. The simplest example is the Ultimatum Game, where player A proposes how a dollar should be split between A and B, and B either accepts or rejects. In case of rejection both A and B get nothin... | 2024-05-16 |
https://www.lesswrong.com/posts/crFE2AKdo77HZ7aYr/podcast-eye4ai-on-2023-survey | crFE2AKdo77HZ7aYr | Podcast: Eye4AI on 2023 Survey | KatjaGrace | I talked to Tim Elsom of Eye4AI about the 2023 Expert Survey on Progress in AI (paper): | 2024-05-16 |
https://www.lesswrong.com/posts/PQiRgcuECS5w3fKZW/how-can-i-make-the-most-of-less-online-camp-manifest | PQiRgcuECS5w3fKZW | How can I make the most of Less Online/Camp/Manifest? | erioire | I spent several weeks psyching myself into buying tickets to what is essentially my first 'vacation' in my adult life (I'm 27 and I typically dislike traveling and enjoy my job more than average).
I'm optimistic it will be enjoyable but social events are not something I'm particularly adept at navigating. I'm relativel... | 2024-05-16 |
https://www.lesswrong.com/posts/NBZvpcBx4ewqkdCdT/do-you-believe-in-hundred-dollar-bills-lying-on-the-ground-1 | NBZvpcBx4ewqkdCdT | Do you believe in hundred dollar bills lying on the ground? Consider humming | pktechgirl | Introduction
[Reminder: I am an internet weirdo with no medical credentials]
A few months ago, I published some crude estimates of the power of nitric oxide nasal spray to hasten recovery from illness, and speculated about what it could do prophylactically. While working on that piece a nice man on Twitter alerted me t... | 2024-05-16 |
https://www.lesswrong.com/posts/2D74Ctr5Aj3Sb5f69/fund-me-please-i-work-so-hard-that-my-feet-start-bleeding | 2D74Ctr5Aj3Sb5f69 | Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University | johannes-c-mayer | Thanks to Taylor Smith for doing some copy-editing this.
In this article, I tell some anecdotes and present some evidence in the form of research artifacts about how easy it is for me to work hard when I have collaborators. If you are in a hurry I recommend skipping to the research artifact section.
Bleeding Feet and D... | 2024-05-18 |
https://www.lesswrong.com/posts/DS7cNqrhXtC4xvQ5A/let-s-design-a-school-part-2-3-school-as-education-the | DS7cNqrhXtC4xvQ5A | Let's Design A School, Part 2.3: School as Education - The Curriculum (Phase 2, Specific) | Sable | In the previous post, we outlined three phases that students would go through, where each student matriculated through them at their own speed.
Phase 1 was literacy and numeracy.
Phase 2 was core civilizational requirements and survey courses.
Phase 3 was core adulting requirements and self-study.
There are two specifi... | 2024-05-15 |
https://www.lesswrong.com/posts/7zRKfwRHz9jnLKk4x/a-paradigm-for-ai-consciousness-seeds-of-science-call-for | 7zRKfwRHz9jnLKk4x | "A Paradigm for AI Consciousness" - Seeds of Science call for reviewers | rogersbacon | Abstract
AI is the most rapidly transformative technology ever developed. Consciousness is what gives life meaning. How should we think about the intersection? A large part of humanity’s future may involve figuring this out. But there are three questions that are actually quite pressing, and we may want to push for ans... | 2024-05-15 |
https://www.lesswrong.com/posts/7kvR4eJ5XxLDQGgfx/contra-caller-gender-iii | 7kvR4eJ5XxLDQGgfx | Contra Caller Gender III | jkaufman | When I looked at the genders of dance callers at large contra dance
events
several years
ago there was an interesting pattern where events were more likely
to book a man and a woman than you'd expect by chance. With more
years worth of data to look at, I thought it was worth checking if
this was still the case.
To see... | 2024-05-15 |
https://www.lesswrong.com/posts/cBgWimXiggbEq5q2f/how-is-gpt-4o-related-to-gpt-4 | cBgWimXiggbEq5q2f | How is GPT-4o Related to GPT-4? | joel-burget | GPT-4o both has a new tokenizer and was trained directly on audio (whereas my understanding is that GPT-4 was trained only on text and images). Is there precedent for upgrading a model to a new tokenizer? It seems like it's probably better to think of it as an entirely new model. If that's the case, what actually makes... | 2024-05-15 |
https://www.lesswrong.com/posts/jajbBbSuJmHe2ZJ92/linkpost-please-don-t-take-lumina-s-anticavity-probiotic | jajbBbSuJmHe2ZJ92 | [Linkpost] Please don't take Lumina's anticavity probiotic | scipio | Update:
Trevor Klee (author of the linked post) has published an update in which he (arguably) moderates his view (or at least that which he expresses publicly). Specifically, he states:
I believe (note the libel-friendly phrasing) that:
1. Lumina’s manufacturing process follows legally mandated GMP protocols, if not t... | 2024-05-15 |
https://www.lesswrong.com/posts/kFjkX6ve738bAvWCu/was-partisanship-good-for-the-environmental-movement-1 | kFjkX6ve738bAvWCu | Was Partisanship Good for the Environmental Movement? | jeffrey-heninger | This is the third in a sequence of posts taken from my recent report: Why Did Environmentalism Become Partisan?
Summary
Rising partisanship did not make environmentalism more popular or politically effective. Instead, it saw flat or falling overall public opinion, fewer major legislative achievements, and fluctuating e... | 2024-05-15 |
https://www.lesswrong.com/posts/uuwscRipCCoexQXse/calling-all-experts | uuwscRipCCoexQXse | Calling all experts | sleno | Hey everyone.
I'm in the process of building a hackernews-like website specifically oriented around discovering and discussing the state of the art in academic domains through public white papers; computer science, economics, physics, you get the idea.
If anyone is knowledgeable in any of these fields and familiar with... | 2024-05-15 |
https://www.lesswrong.com/posts/D9Q4nXfxTdWckF2RX/mentorship-in-agi-safety-magis-call-for-mentors | D9Q4nXfxTdWckF2RX | Mentorship in AGI Safety (MAGIS) call for mentors | Just Learning | Tldr: If you are working on AI Safety and are willing to help someone to start their career in AI Safety by sharing your experience at 1:1 meetings consider applying as a mentor
In the last year, we’ve seen a surge of interest in AI safety. Many young professionals and aspiring researchers are attempting or seriously c... | 2024-05-23 |
https://www.lesswrong.com/posts/2An6fWxd9wy5Gm53d/less-anti-dakka | 2An6fWxd9wy5Gm53d | Less Anti-Dakka | mateusz-baginski | It is written in More Dakka:
If something is a good idea, you need a reason to not try doing more of it.
Taken at face value, it implies the following:
If something is a bad idea, you need a reason to not try doing less of it.
Labels/concepts, such as More Dakka, Inadequate Equilibria, etc point to a puzzling phenomeno... | 2024-05-31 |
https://www.lesswrong.com/posts/WNZGqeLMjPGFp78wX/aisafety-com-resources-for-ai-safety | WNZGqeLMjPGFp78wX | AISafety.com – Resources for AI Safety | soren-elverlin-1 | There are many resources for those who wish to contribute to AI Safety, such as courses, communities, projects, jobs, events and training programs, funders and organizations. However, we often hear from people that they have trouble finding the right resources. To address this, we've built AISafety.com as a central hub... | 2024-05-17 |
https://www.lesswrong.com/posts/QuL8uCF9a376KZnkr/quantized-vs-continuous-nature-of-qualia | QuL8uCF9a376KZnkr | Quantized vs. continuous nature of qualia | notfnofn | This question is not very well-posed, but I've done my best to make it as well-posed as I can.
Suppose that humans with sufficiently functional brains are able have subjective experiences that transcend the "easy problems of consciousness".
I'm interested in understanding if this can be reasonably accepted without also... | 2024-05-15 |
https://www.lesswrong.com/posts/4KjiZeAWc7Yv9oyCb/tackling-moloch-how-youcongress-offers-a-novel-coordination | 4KjiZeAWc7Yv9oyCb | Tackling Moloch: How YouCongress Offers a Novel Coordination Mechanism | hector-perez-arenas | Moloch, as articulated by Scott Alexander, represents the coordination problems that lead to outcomes that leave everyone worse off. While prediction markets explore what people think will happen, YouCongress aims to aggregate beliefs and desires regarding ideal outcomes. This open-source platform proposes a novel coor... | 2024-05-15 |
https://www.lesswrong.com/posts/7EfED6Dx9NMLqngec/how-to-be-a-messy-thinker | 7EfED6Dx9NMLqngec | How to be a messy thinker | invertedpassion | Crossposted from my blog: https://invertedpassion.com/how-to-be-a-messy-thinker/
I love thinking about thinking. Give me a research paper on rationality, cognitive biases or mental models, and I’ll gobble it up. Given the amount of knowledge I’ve ingested on these topics, I had always assumed that I’m a clear thinker.
... | 2024-05-15 |
https://www.lesswrong.com/posts/2JuErRCkS2AoesErA/embedded-whistle-synth | 2JuErRCkS2AoesErA | Embedded Whistle Synth | jkaufman | A few years ago I ported my whistle synth system from my laptop
to a Raspberry Pi. This was a big
improvement, but I still wasn't that happy:
To get good quality audio in and out I was using a 2i2 audio
interface, which is expensive, bulky, and has a lot of buttons and
knobs that can be bumped.
To use a single mic for... | 2024-05-15 |
https://www.lesswrong.com/posts/JSWF2ZLt6YahyAauE/ilya-sutskever-and-jan-leike-resign-from-openai-updated | JSWF2ZLt6YahyAauE | Ilya Sutskever and Jan Leike resign from OpenAI [updated] | Zach Stein-Perlman | Ilya Sutskever and Jan Leike have resigned. They led OpenAI's alignment work. Superalignment will now be led by John Schulman, it seems. Jakub Pachocki replaced Sutskever as Chief Scientist.
Reasons are unclear (as usual when safety people leave OpenAI).
The NYT piece (archive) and others I've seen don't really have de... | 2024-05-15 |
https://www.lesswrong.com/posts/dD63tGi88KvgC8cmx/my-note-system | dD63tGi88KvgC8cmx | my note system | bhauth | I've been told that my number of blog posts is impressive, but my personal notes are much larger than my blog, over a million words and with higher information density. Since I've had a bit of practice taking notes, I thought I'd describe the system I developed. It's more complex than some integrated solutions, but it'... | 2024-05-15 |
https://www.lesswrong.com/posts/pEZoTSCxHY3mfPbHu/catastrophic-goodhart-in-rl-with-kl-penalty | pEZoTSCxHY3mfPbHu | Catastrophic Goodhart in RL with KL penalty | thomas-kwa | TLDR: In the last two posts, we showed that optimizing for a proxy can fail to increase true utility, but only when the error is heavy-tailed. We now show that this also happens in RLHF with a KL penalty.
This post builds on our earlier result with a more realistic setting and assumptions:
Rather than modeling optimiza... | 2024-05-15 |
https://www.lesswrong.com/posts/vLBW5wMxvRLZwA4Wo/miri-s-may-2024-newsletter | vLBW5wMxvRLZwA4Wo | MIRI's May 2024 Newsletter | Harlan | Update (5-15-2024): I wrote that “it appears that not all of the leading AI labs are honoring the voluntary agreements they made at [AI Safety Summit],” citing a Politico article. However, after seeing more discussion about it (e.g. here), I am now highly uncertain about whether the labs made specific commitments, what... | 2024-05-15 |
https://www.lesswrong.com/posts/BtEfvGgjTSszkG5fy/when-does-external-behaviour-imply-interal-structure | BtEfvGgjTSszkG5fy | When does external behaviour imply interal structure? | tyler-tracy | I've been working on an AI safety camp project where we try to describe agent structure. This post defines some key concepts and conveys my reasoning about this topic so far. It is mostly conceptual. The first section discusses what structure is and what it means for an object's behavior to imply structure. The second ... | 2024-05-31 |