Dataset schema (six columns per record):
url: string, 52 to 124 characters
post_id: string, always 17 characters
title: string, 2 to 248 characters
author: string, 2 to 49 characters
content: string, 22 characters to 295k characters (a few records are null)
date: string, one of 376 distinct values

Each record below lists the six fields in this order; content previews are truncated with an ellipsis.
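For readers who want to work with these records programmatically, here is a minimal sketch of the row schema as a Python dataclass. The `Post` class name and the None-mapping of "null" rows are my own illustration, not part of the dataset; the example record is retyped from the first row below.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Post:
    """One record of the dump, following the schema above (class name is illustrative)."""
    url: str                # 52 to 124 characters
    post_id: str            # always 17 characters
    title: str              # 2 to 248 characters
    author: str             # 2 to 49 characters
    content: Optional[str]  # 22 characters to ~295k; rows shown as "null" map to None
    date: date              # one of 376 distinct dates

# The first record below, retyped for illustration (content preview shortened).
example = Post(
    url="https://www.lesswrong.com/posts/5rygaBBH7B4LNqQkz/new-feature-your-subscribed-feed",
    post_id="5rygaBBH7B4LNqQkz",
    title="[New Feature] Your Subscribed Feed",
    author="Ruby",
    content="tl;dr LessWrong now has a Subscribed tab...",
    date=date(2024, 6, 11),
)

assert len(example.post_id) == 17
```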
https://www.lesswrong.com/posts/5rygaBBH7B4LNqQkz/new-feature-your-subscribed-feed
5rygaBBH7B4LNqQkz
[New Feature] Your Subscribed Feed
Ruby
tl;dr LessWrong now has a Subscribed tab (next to the Latest tab and Enriched tab[1]). You can now "follow" users, which means their posts and comments will show up in your Subscribed tab[2]. You can subscribe to users that are suggested to you, or search for other users using the search bar. You can also find subscrib...
2024-06-11
https://www.lesswrong.com/posts/bsNMde2zsGwivyeCR/lesswrong-acx-meetup-transilvanya-tour-cluj-napoca
bsNMde2zsGwivyeCR
LessWrong/ACX meetup Transilvanya tour - Cluj Napoca
Marius Adrian Nicoară
Let's get together at the Iulius Mall food court and hang out for a while. I'll bring an interesting board game that involves teamwork and Bayesian reasoning; requires 4 players. As a starting point, we can talk about "The Pyramid And The Garden" by Scott Alexander: https://www.lesswrong.com/posts/9HSwh2mE3tX6xvZ2W/th...
2024-06-07
https://www.lesswrong.com/posts/Caj9brP8PEqkgSrSC/is-claude-a-mystic
Caj9brP8PEqkgSrSC
Is Claude a mystic?
jessica.liu.taylor
There's an odd tendency for large language models such as Claude to output spiritual meta content if they run long enough. See, for example, some LLM outputs selected by Repligate (LLM content starts in 2024), and Worldspider outputs selected by John Pressman. One thing that is unclear is how consistent these outputs a...
2024-06-07
https://www.lesswrong.com/posts/9vwfFGHmck2KnLraJ/offering-completion
9vwfFGHmck2KnLraJ
Offering Completion
jkaufman
One verbal approach I often find useful with little kids is to offer them the opportunity to complete a phrase. Compare: Parent: It's time to brush teeth! Toddler: I no want brush teef! vs: Parent: It's time to brush... Toddler: Teef! Similarly, "the dirty tissue goes in the...", "when we get home it will be time to p...
2024-06-07
https://www.lesswrong.com/posts/cJv8rBSshrR82NRET/a-case-for-superhuman-governance-using-ai
cJv8rBSshrR82NRET
A Case for Superhuman Governance, using AI
ozziegooen
null
2024-06-07
https://www.lesswrong.com/posts/Yig9oa4zGE97xM2os/response-to-aschenbrenner-s-situational-awareness
Yig9oa4zGE97xM2os
Response to Aschenbrenner's "Situational Awareness"
RobbBB
(Cross-posted from Twitter.) My take on Leopold Aschenbrenner's new report: I think Leopold gets it right on a bunch of important counts. Three that I especially care about: Full AGI and ASI soon. (I think his arguments for this have a lot of holes, but he gets the basic point that superintelligence looks 5 or 15 years...
2024-06-06
https://www.lesswrong.com/posts/Fg2gAgxN6hHSaTjkf/scaling-and-evaluating-sparse-autoencoders
Fg2gAgxN6hHSaTjkf
Scaling and evaluating sparse autoencoders
leogao
[Blog] [Paper] [Visualizer] Abstract: Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer.  Since language models learn many concepts, autoencoders need to be very large to recover all rele...
2024-06-06
https://www.lesswrong.com/posts/TEE799ocKYNriD8CZ/situational-awareness-summarized-part-2
TEE799ocKYNriD8CZ
Situational Awareness Summarized - Part 2
Joe Rogero
This is the second post in the Situational Awareness Summarized sequence. Collectively, these posts represent my attempt to condense Leopold Aschenbrenner's recent report, Situational Awareness, into something more digestible. Part II: Zoom to Foom This section outlines how the development of AGI quickly and inevitably...
2024-06-07
https://www.lesswrong.com/posts/bfm5Fm9pJYAthFSZT/memorizing-weak-examples-can-elicit-strong-behavior-out-of
bfm5Fm9pJYAthFSZT
Memorizing weak examples can elicit strong behavior out of password-locked models
Fabien
We’ve recently done some research looking into sandbagging: examining when models can succeed at intentionally producing low-quality outputs despite attempts at fine-tuning them to perform well. One reason why sandbagging could be concerning is because scheming models might try to appear less capable than they actually...
2024-06-06
https://www.lesswrong.com/posts/dsZeogoPQbF8jSHMB/humming-is-not-a-free-usd100-bill
dsZeogoPQbF8jSHMB
Humming is not a free $100 bill
pktechgirl
Last month I posted about humming as a cheap and convenient way to flood your nose with nitric oxide (NO), a known antiviral. Alas, the economists were right, and the benefits were much smaller than I estimated. The post contained one obvious error and one complication. Both were caught by Thomas Kwa, for which he has ...
2024-06-06
https://www.lesswrong.com/posts/LGDXYnnaMqrAin76w/there-are-no-primordial-definitions-of-man-woman
LGDXYnnaMqrAin76w
There Are No Primordial Definitions of Man/Woman
ymeskhout
I watched a two-hour debate between atheist Stephen Woodford (host of Rationality Rules channel) and sex realist biologist Colin Wright on the topic of gender. The topic was ostensibly about whether gender is a social construct, but it ended up being a captivating discussion on the definition of man/woman and male/fema...
2024-06-06
https://www.lesswrong.com/posts/xDsbqxeCQWe4BiYFX/natural-latents-are-not-robust-to-tiny-mixtures
xDsbqxeCQWe4BiYFX
Natural Latents Are Not Robust To Tiny Mixtures
johnswentworth
In our previous natural latent posts, our core theorem typically says something like: Assume two agents have the same predictive distribution P[X] over variables X, but model that distribution using potentially-different latent variables. If the latents both satisfy some simple “naturality” conditions (mediation and re...
2024-06-07
https://www.lesswrong.com/posts/gKxf6qJaSP5Ehqnsm/ai-67-brief-strange-trip
gKxf6qJaSP5Ehqnsm
AI #67: Brief Strange Trip
Zvi
I had a great time at LessOnline. It was both a working trip and also a trip to an alternate universe, a road not taken, a vision of a different life where you get up and start the day in dialogue with Agnes Callard and Aristotle and in a strange combination of relaxed and frantically go from conversation to conversa...
2024-06-06
https://www.lesswrong.com/posts/eoshu82FHeGzGXz2M/link-post-foundational-challenges-in-assuring-alignment-and
eoshu82FHeGzGXz2M
[Link Post] "Foundational Challenges in Assuring Alignment and Safety of Large Language Models"
capybaralet
We’ve recently released a comprehensive research agenda on LLM safety and alignment. This is a collaborative work with contributions from 35+ authors across the fields of AI Safety, machine learning, and NLP. Major credit goes to first author Usman Anwar, a 2nd year PhD student of mine who conceived and led ...
2024-06-06
https://www.lesswrong.com/posts/jnfRZ7Yd9HgFgLKaD/the-human-biological-advantage-over-ai
jnfRZ7Yd9HgFgLKaD
The Human Biological Advantage Over AI
Wstewart
I've completed the first full draft of the subject paper, found here, and would be most grateful for comments of any kind. The origin is my realization that it would really help to have an intellectual foundation beyond personal interest for the importance of avoiding x-risk. It is easy to say that x-risk would be bad beca...
2024-06-06
https://www.lesswrong.com/posts/8dms2XAuD6kDS5WXR/metastrategy-get-started-guide
8dms2XAuD6kDS5WXR
Metastrategy get-started guide
Tahp
I was at LessOnline. The thing that got me most excited was attending @Raemon’s very abridged Metastrategy workshop. To my lack of surprise, he has written up the general idea already, but that report focuses on people’s response to trying fractal strategy and I would not have found it very actionable had I seen it bef...
2024-06-25
https://www.lesswrong.com/posts/2t8pgrxDf6i3FJeyf/the-impossibility-of-a-rational-intelligence-optimizer
2t8pgrxDf6i3FJeyf
The Impossibility of a Rational Intelligence Optimizer
nicolas-villarreal
Goodhart’s Law and Sign Function Collapse In my last post, where I created a semiotic critique of the orthogonality thesis, I noted that optimizing for intelligence is distinct from all other possible goals, and that therefore, the only goal compatible with all levels of intelligence in principle is optimization for in...
2024-06-06
https://www.lesswrong.com/posts/fJseTLvaun4SmRv4a/an-evaluation-of-helen-toner-s-interview-on-the-ted-ai-show
fJseTLvaun4SmRv4a
An evaluation of Helen Toner’s interview on the TED AI Show
PeterH
Helen Toner was recently interviewed on the TED AI Show. In the first segment, she explains why the OpenAI board decided to fire Sam Altman (video, transcript). What should we make of Helen's account? In this post, I'll go through the interview and examine each of the claims made. For the tl;dr, skip straight to the "r...
2024-06-06
https://www.lesswrong.com/posts/YA4wix3eB5brn9uZC/immunization-against-harmful-fine-tuning-attacks
YA4wix3eB5brn9uZC
Immunization against harmful fine-tuning attacks
domenicrosati
TL;DR: A potential source of risk from frontier models comes from bad actors purposely training them towards harmful ends or circumventing safety guards: so-called “harmful fine-tuning attacks (HFTAs)”. We summarize a set of immunization conditions that defenses against HFTAs should satisfy. This work was done as part ...
2024-06-06
https://www.lesswrong.com/posts/4t98oqh8tzDvoatHs/sb-1047-is-weakened
4t98oqh8tzDvoatHs
SB 1047 Is Weakened
Zvi
It looks like Scott Weiner’s SB 1047 is now severely weakened. Some of the changes are good clarifications. One is a big very welcome fix. The one I call The Big Flip is something very different. It is mind boggling that we can have a political system where a bill can overwhelmingly pass the California senate, and then...
2024-06-06
https://www.lesswrong.com/posts/HmaZAgKWcGDopJWkQ/weeping-agents
HmaZAgKWcGDopJWkQ
Weeping Agents
pleiotroth
What is Agency? In a relatively intuitive way it’s a property of systems which have three things: Some –however rudimentary– way of modelling a slice of reality, a utility function over that slice of reality and a way of acting upon it. An agent can make better-than-chance guesses about how the behaviours in their acti...
2024-06-06
https://www.lesswrong.com/posts/gawBcA5HyHRcKRwmx/podcast-center-for-ai-policy-on-ai-risk-and-listening-to-ai
gawBcA5HyHRcKRwmx
Podcast: Center for AI Policy, on AI risk and listening to AI researchers
KatjaGrace
I was on the Center for AI Policy Podcast. We talked about topics around the 2023 Expert Survey on Progress in AI, including why I think AI is an existential risk, and how much to listen to AI researchers on the subject. Full transcript at the link.
2024-06-06
https://www.lesswrong.com/posts/NHKCtSXgFieDAyWt2/calculating-natural-latents-via-resampling
NHKCtSXgFieDAyWt2
Calculating Natural Latents via Resampling
johnswentworth
So you’ve read some of our previous natural latents posts, and you’re sold on the value proposition. But there’s some big foundational questions still unanswered. For example: how do we find these natural latents in some model, if we don’t know in advance what they are? Examples in previous posts conceptually involved ...
2024-06-06
https://www.lesswrong.com/posts/wEw8CZQssYGN7AQ9E/weak-agis-kill-us-first
wEw8CZQssYGN7AQ9E
Weak AGIs Kill Us First
yehuda-rimon
Weak AGIs Kill Us First, a.k.a. Deadly pre-Superintelligence: Paths, Dangers, Strategies. Epistemic status: This has not been edited or commented on by anyone other than ChatGPT as of publishing; I hope the arguments stand on their own. When people talk about the dangers of AGI, they often refer to the dangers posed by a s...
2024-06-17
https://www.lesswrong.com/posts/sDnbTN8Qb84P6D3Xd/situational-awareness-summarized-part-1
sDnbTN8Qb84P6D3Xd
Situational Awareness Summarized - Part 1
Joe Rogero
This is the first post in the Situational Awareness Summarized sequence. Collectively, these posts represent my attempt to condense Leopold Aschenbrenner's recent report, Situational Awareness, into something more digestible. I'd like to make it more accessible to people who don't want to read 160 pages. I will not att...
2024-06-06
https://www.lesswrong.com/posts/dSyR89vS9sema9vvB/let-s-design-a-school-part-2-4-school-as-education-the
dSyR89vS9sema9vvB
Let's Design A School, Part 2.4 School as Education - The Curriculum (Phase 3, Specific)
Sable
In previous posts, we laid out our model of school-as-education: Phase 1 was literacy and numeracy, Phase 2 was core civilizational requirements and survey courses, Phase 3 was core adulting requirements and self-study, and went into detail about the core civilizational requirements of phase 2. This post, we’ll dive in...
2024-06-05
https://www.lesswrong.com/posts/xFuxaszduwGtA3boQ/metr-is-hiring-ml-research-engineers-and-scientists
xFuxaszduwGtA3boQ
METR is hiring ML Research Engineers and Scientists
Xodarap
METR is developing evaluations for AI R&D capabilities, such that evaluators can determine if further AI development risks a “capabilities explosion”, which could be extraordinarily destabilizing if realized. METR is hiring ML research engineers/scientists to drive these AI R&D evaluations forward. Why focus on risks p...
2024-06-05
https://www.lesswrong.com/posts/fijKEQJkFiqM9PAG7/book-review-the-quincunx
fijKEQJkFiqM9PAG7
Book review: The Quincunx
cousin_it
The Quincunx is a 1989 novel by Charles Palliser, set in early 1800s England. I want to recommend it to everyone because it's really good, and it might be relevant to the AI transition. Let me try to explain. The surface level of the book is a kind of mishmash of Dickensian themes. The main character is caught in a com...
2024-06-05
https://www.lesswrong.com/posts/ignCBxbqWWPYCdCCx/research-report-alternative-sparsity-methods-for-sparse
ignCBxbqWWPYCdCCx
Research Report: Alternative sparsity methods for sparse autoencoders with OthelloGPT.
Andrew Quaisley
Abstract Standard sparse autoencoder training uses an L1 sparsity loss term to induce sparsity in the hidden layer.  However, theoretical justifications for this choice are lacking (in my opinion), and there may be better ways to induce sparsity.  In this post, I explore other methods of inducing sparsity and experimen...
2024-06-14
https://www.lesswrong.com/posts/SRwQgemMYnrBcaAiL/aisn-36-voluntary-commitments-are-insufficient-plus-a-senate
SRwQgemMYnrBcaAiL
AISN #36: Voluntary Commitments are Insufficient; Plus, a Senate AI Policy Roadmap, and Chapter 1: An Overview of Catastrophic Risks
corin-katzke
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Subscribe here to receive future versions. Listen to the AI Safety Newsletter for free on Spotify. Voluntary Commitments are Insufficient AI companies agree to RSPs in Seoul. F...
2024-06-05
https://www.lesswrong.com/posts/u4KfrRnqhe9LfmQeX/gpt2-five-years-on
u4KfrRnqhe9LfmQeX
GPT2, Five Years On
joel-burget
Jack Clark's retrospective on GPT2 is full of interesting policy thoughts, I recommend reading the whole thing. One excerpt: I've come to believe that in policy "a little goes a long way" - it's far better to have a couple of ideas you think are robustly good in all futures and advocate for those than make a confident ...
2024-06-05
https://www.lesswrong.com/posts/ajY7kB5aSNSxbh63a/who-wants-to-be-invited-to-the-lw-metamodern-dialogue
ajY7kB5aSNSxbh63a
Who wants to be invited to the LW Metamodern dialogue?
hunterglenn
I want to start a metamodern dialogue. Ideally, it will lead to a Metamodern handshake (or many of them), i.e., we figure out how to help local people and systems get even more of what they selfishly want by aiming also at helping avert the meta-crisis, participating in the spirit of the times. The way I see it is win-wi...
2024-06-05
https://www.lesswrong.com/posts/YxBSk2hnT3wEdmhra/nonreactivity-a-simple-model-of-meditation
YxBSk2hnT3wEdmhra
Nonreactivity: a simple model of meditation
cesiumquail
Here’s my model of how meditation works: Nonreactivity reduces suffering. You can gradually develop nonreactivity through practice. This model is easy to understand, and it clarifies a lot of confusing philosophical stuff in other models. For example, “non-self” is one of the Three Characteristics of experiential realit...
2024-06-05
https://www.lesswrong.com/posts/MnEPh4srDM9ftixa4/startup-stock-options-the-shortest-complete-guide-for
MnEPh4srDM9ftixa4
Startup Stock Options: the Shortest Complete Guide for Employees
Euphetar
This is a short, but comprehensive, guide to stock options, with specific example outcomes for employees and a sprinkle of Bayesian modeling of your chance to cash out with at least $1M. Unlike other guides, this is not a stream of financial terminology. Instead, it is specific and answers the real questions with concr...
2024-06-05
https://www.lesswrong.com/posts/fyP7RArAnDtqmibdQ/aggregative-principles-of-social-justice-1
fyP7RArAnDtqmibdQ
Aggregative Principles of Social Justice
strawberry calm
1. Introduction 1.1. Three aggregative principles This article examines aggregative principles of social justice. These principles state that a social planner should make decisions as if they will face the aggregated personal outcomes of every individual in the population. Different conceptions of aggregation generate ...
2024-06-05
https://www.lesswrong.com/posts/qRADtHHHwk925BFYd/what-and-how-much-makes-a-difference
qRADtHHHwk925BFYd
What and how much makes a difference?
Marius Adrian Nicoară
This election year, we'll have the whole shebang in Romania: June 9: election of local officials and representatives to the European Parliament September 15 and 29: first and second round of presidential elections December 8: elections for the Romanian Parliament Quite a workout for the civic muscles. But I am reminded...
2024-06-05
https://www.lesswrong.com/posts/2mrdHw6yM3h55bmhg/former-openai-superalignment-researcher-superintelligence-by
2mrdHw6yM3h55bmhg
Former OpenAI Superalignment Researcher: Superintelligence by 2030
Julian Bradshaw
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. In the link provided, Leopold Aschenbrenner explains wh...
2024-06-05
https://www.lesswrong.com/posts/88cEgGcSXwFvPms4s/i-made-an-ai-safety-fellowship-what-i-wish-i-knew
88cEgGcSXwFvPms4s
I made an AI safety fellowship. What I wish I knew.
ruben-castaing
TLDR If you make an AI safety course: Leverage other people. Make sure to cover why AI safety is important. Build a community and just start. Use the below resources and slides. Why Bother: Making an AI safety fellowship is highly worth doing. It has likely been a major update on my and other students' career paths. For...
2024-06-08
https://www.lesswrong.com/posts/AtEk7cSttHyuicfmb/how-should-i-think-about-my-career
AtEk7cSttHyuicfmb
How should I think about my career?
chico-demmenie
Hi, I am relatively new to LessWrong and Rationalism but I've been trying to learn more about it over the last year or so. I am a Computer Science student from the UK and am about to graduate from university. I have been trying to figure out my career path from here and in the process have been applying to jobs and pos...
2024-06-05
https://www.lesswrong.com/posts/Nsmabb9fhpLuLdtLE/takeoff-speeds-presentation-at-anthropic
Nsmabb9fhpLuLdtLE
Takeoff speeds presentation at Anthropic
tom-davidson-1
This is a lightly edited transcript of a presentation about the risks of a fast takeoff that I (Tom Davidson) gave at Anthropic in September 2023. See also the video recording, or the slides. None of the content necessarily reflects the views of Anthropic or anyone who works there. Summary: Software progress – improve...
2024-06-04
https://www.lesswrong.com/posts/kLRN3uZMawshPBL9D/a-reflection-on-richard-hamming-s-you-and-your-research
kLRN3uZMawshPBL9D
A Reflection on Richard Hamming's "You and Your Research": Striving for Greatness
aysajan
Introduction: A Commitment to Greatness Before I dive into my reflections on Richard Hamming's insightful talk, "You and Your Research", I want to start by posing three critical questions to myself: What are the important problems in my field? What important problems am I working on? If what I am doing is not important, ...
2024-06-04
https://www.lesswrong.com/posts/8mGKsensaTEb6EMZx/graphpatch-a-python-library-for-activation-patching
8mGKsensaTEb6EMZx
graphpatch: a Python Library for Activation Patching
evan-lloyd
This post is an announcement for a software library. It is likely only relevant to those working, or looking to start working, in mechanistic interpretability. What is graphpatch? graphpatch is a Python library for activation patching on arbitrary PyTorch neural network models. It is designed to minimize the amount of ...
2024-06-05
https://www.lesswrong.com/posts/C7LJyYdgnuNYhrA89/ideas-for-next-generation-writing-platforms-using-llms
C7LJyYdgnuNYhrA89
Ideas for Next-Generation Writing Platforms, using LLMs
ozziegooen
null
2024-06-04
https://www.lesswrong.com/posts/c4sZqhqPwNKGz3fFW/paper-stress-testing-capability-elicitation-with-password
c4sZqhqPwNKGz3fFW
[Paper] Stress-testing capability elicitation with password-locked models
Fabien
The paper is by Ryan Greenblatt, Fabien Roger, Dmitrii Krasheninnikov and David Krueger. This post was written by Fabien and Ryan, and may not reflect the views of Dmitrii and David. Scheming models might try to perform less capably than they are able to (sandbag). They might do this to make evaluations underestimate d...
2024-06-04
https://www.lesswrong.com/posts/iabv566Fk9NusB55R/circuit-board-ordering
iabv566Fk9NusB55R
Circuit Board Ordering
jkaufman
Working first on my plucked electronic instrument and now my hardware whistle synthesizer I've been ordering custom circuit boards ("PCBs"). Once the design is digitized in KiCad (tutorial), the timeline looks like: ~15min: place order online (JLCPCB, picked kind of randomly) ~30min: manufacturer reviews order In the mos...
2024-06-04
https://www.lesswrong.com/posts/sStvF2RLfLfLQEDLv/has-anyone-here-written-about-religious-fictionalism
sStvF2RLfLfLQEDLv
Has anyone here written about religious fictionalism?
SpectrumDT
Religious fictionalism, as I understand it, is the position that religion can be valuable and worth practicing and spreading even though it makes false claims. I was inspired to ask this question by the below YouTube video essay by Kane Baker. Has anyone on Less Wrong written about religious fictionalism and the argume...
2024-06-04
https://www.lesswrong.com/posts/JGa5igyJGaZ8N5o3g/is-wittgenstein-s-language-game-used-when-helping-ai
JGa5igyJGaZ8N5o3g
Is Wittgenstein's Language Game used when helping AI understand language?
Unknown
Hello. I am a lurker, but I checked the search and didn't see anyone discussing Wittgenstein's ideas concerning the "essence" of language and his talk of "Language Games" so I thought I'd ask. Wittgenstein is a linguistic philosopher, who in very brief terms clarified our usage of language. While many people conceived ...
2024-06-04
https://www.lesswrong.com/posts/qgJ4ehdTCfwHKjbnh/smartphone-etiquette-suggestions-for-social-interactions
qgJ4ehdTCfwHKjbnh
Smartphone Etiquette: Suggestions for Social Interactions
declan-molony
If anyone stopped and asked themselves if they enjoy the current smartphone social etiquette—whereby you could be opening up your soul to a friend in a moment of vulnerability, when suddenly, they feel a buzz and reflexively check their phone to discover an important & urgent message from the McDonald’s app that The Mc...
2024-06-04
https://www.lesswrong.com/posts/vjeAwiWFcexohWS2e/just-admit-that-you-ve-zoned-out
vjeAwiWFcexohWS2e
Just admit that you’ve zoned out
joec
Summary: Zoning out is difficult to avoid and common, zoning out without admitting it hurts your comprehension, therefore you should admit that you zoned out and ask people to repeat things. If you’re anything like me, you’ve “zoned out” before. You’ve probably even zoned out when you’re trying to learn something inter...
2024-06-04
https://www.lesswrong.com/posts/S9xKTMMRWnCDLA5b9/here-s-why-indefinite-life-extension-will-never-work-even
S9xKTMMRWnCDLA5b9
Here's Why Indefinite Life Extension Will Never Work, Even Though it Does.
hominghamster
Introduction “I know that technologies to make us live indefinitely exist, but who would want to live THAT long?” While the concept of living indefinitely might sound like science fiction, advancements in medical science are rapidly turning it into a scientifically accepted reality. The quest for longevity has taken ma...
2024-06-04
https://www.lesswrong.com/posts/PJr24aC4ziNqkfpjF/not-derailing-the-lessonline-puzzle-hunt-1
PJr24aC4ziNqkfpjF
(Not) Derailing the LessOnline Puzzle Hunt
Error
(spoiler alert: may meta-spoil future iterations of the LOPH, if you haven’t already read other posts about it) I knew early on that I wouldn’t be able to finish the LessOnline puzzle hunt. I contributed to solving two of the first six puzzles, each of which revealed the combination to a locked box. Each box contained ...
2024-06-04
https://www.lesswrong.com/posts/qs7SjiMFoKseZrhxK/on-first-critical-tries-in-ai-alignment
qs7SjiMFoKseZrhxK
On “first critical tries” in AI alignment
joekc
People sometimes say that AI alignment is scary partly (or perhaps: centrally) because you have to get it right on the “first critical try,” and can’t learn from failures.[1] What does this mean? Is it true? Does there need to be a “first critical try” in the relevant sense? I’ve sometimes felt confused about this, so ...
2024-06-05
https://www.lesswrong.com/posts/z3RNotvutqPXBLmqB/masculinity-a-case-for-courage
z3RNotvutqPXBLmqB
Masculinity—A Case For Courage
james-brown
HOW THIS POST CAME ABOUT When I put a call out for non-zero-sum topics from subscribers, it was my brother who first replied with masculinity—pointing me to the work of author Richard Reeves. “I think the danger of zero-sum thinking is present … men are suffering because women are rising. But it’s also true—when I try ...
2024-06-04
https://www.lesswrong.com/posts/abKBpprwJcQwbZpxZ/philosophers-wrestling-with-evil-as-a-social-media-feed
abKBpprwJcQwbZpxZ
Philosophers wrestling with evil, as a social media feed
David_Gross
Susan Neiman’s Evil in Modern Thought: An Alternative History of Philosophy (2015), visualized as a social media feed: Alfonso Ⅹ hates his Ptolemaic astronomy class Alfonso Ⅹ If I had been of God’s counsel at the Creation, many things would have been ordered better. good Christians everywhere Blasphemy! Gottfried Leibn...
2024-06-03
https://www.lesswrong.com/posts/CJpPkHwGsXJFugCKA/aci-8-value-as-a-function-of-possible-worlds
CJpPkHwGsXJFugCKA
ACI#8: Value as a Function of Possible Worlds
Akira Pyinya
What is value? What are rewards? In traditional models of intelligence, we don't have to worry about these questions. Reward is simply what we need to maximize, and value (or utility) represents total expected rewards. You don't need to question these fundamental assumptions of the model. In contrast, ACI argues that r...
2024-06-03
https://www.lesswrong.com/posts/CyZeedcbztE7ngF8L/in-defense-of-linus-pauling
CyZeedcbztE7ngF8L
in defense of Linus Pauling
bhauth
Linus Pauling was a chemist who won a Nobel Prize in Chemistry in 1954. He later became well-known for advocating large doses of Vitamin C. I've heard that advocacy referred to as a cautionary tale, but I've long had a bit of sympathy for Linus, and I'm writing this post to explain how and why. mainstream nutrition One...
2024-06-03
https://www.lesswrong.com/posts/gSc4dKAwMaA3FfgRr/searching-magic-cards
gSc4dKAwMaA3FfgRr
Searching Magic Cards
jkaufman
I like playing Magic, except for the "it's designed to pump away your money" aspect. So I like formats where I can play with other people's cards a lot! My favorite is probably drafting from booster packs where someone else will keep all the cards, but yesterday Stevie brought over a Forgetful Fish-style deck he'd pu...
2024-06-03
https://www.lesswrong.com/posts/sGEJi9wFT3Gdqg2nM/the-standard-analogy
sGEJi9wFT3Gdqg2nM
The Standard Analogy
Zack_M_Davis
[Scene: a suburban house, a minute after the conclusion of "And All the Shoggoths Merely Players". Doomimir returns with his package, which he places by the door, and turns his attention to Simplicia, who has been waiting for him.] Simplicia: Right. To recap for [coughs] no one in particular, when we left off [pointedl...
2024-06-03
https://www.lesswrong.com/posts/6XrnhEX3aMnBjfZHd/how-was-less-online-for-you
6XrnhEX3aMnBjfZHd
How was Less Online for you?
gworley
I was going to write a post that was something like "Personal Reflections from Less Online", but lots of us who were there probably have reflections on it, so I created this question post to invite people to offer their reflections in one place. Answers should be reflections from people who attended. Comments could be ...
2024-06-03
https://www.lesswrong.com/posts/ceBpLHJDdCt3xfEok/ai-catastrophes-and-rogue-deployments
ceBpLHJDdCt3xfEok
AI catastrophes and rogue deployments
Buck
[Thanks to Aryan Bhatt, Ansh Radhakrishnan, Adam Kaufman, Vivek Hebbar, Hanna Gabor, Justis Mills, Aaron Scher, Max Nadeau, Ryan Greenblatt, Peter Barnett, Fabien Roger, and various people at a presentation of these arguments for comments. These ideas aren’t very original to me; many of the examples of threat models ar...
2024-06-03
https://www.lesswrong.com/posts/szi87xXRTyXwmYpJf/a-semiotic-critique-of-the-orthogonality-thesis
szi87xXRTyXwmYpJf
A Semiotic Critique of the Orthogonality Thesis
nicolas-villarreal
“Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.” That quote above is the orthogonality thesis, as defined by Nick Bostrom, the AI safety researcher. His ...
2024-06-04
https://www.lesswrong.com/posts/BQKhar24uLWrAz2tG/second-order-rationality-system-rationality-and-a-feature
BQKhar24uLWrAz2tG
Second-Order Rationality, System Rationality, and a feature suggestion for LessWrong
MathieuRoy
Second-order rationality Definition By “second-order rationality” (and intelligence) I mean the study of rationally reasoning about other people’s rationality (and intelligence) in order to inform us about the world. This is opposed to evaluating a proposition at face value, using the first-order evidence supporting it...
2024-06-05
https://www.lesswrong.com/posts/zj3GKWAnhPgTARByB/saes-discover-meaningful-features-in-the-ioi-task
zj3GKWAnhPgTARByB
SAEs Discover Meaningful Features in the IOI Task
amakelov
TLDR: recently, we wrote a paper proposing several evaluations of SAEs against "ground-truth" features computed w/ supervision for a given task (in our case, IOI [1]). However, we didn't optimize the SAEs much for performance in our tests. After putting the paper on arxiv, Alex carried out a more exhaustive search for ...
2024-06-05
https://www.lesswrong.com/posts/FiaqNKefDqXLBC6yi/acx-meetup-1
FiaqNKefDqXLBC6yi
ACX Meetup
svfritz
Time: 6pm, June 19 Place: Shelter between 705 Eagle Heights and 706 Eagle Heights (Parking Lot O). Coordinates: 43°05'22.8"N 89°26'06.4"W  OR (43.089667,-89.4357467) Getting there: There is guest parking at Lot P and Lot O, and one can get there via bus 80 at Lake Mendota Dr and Lot P (or Lot Q, if you miss that stop)....
2024-06-03
https://www.lesswrong.com/posts/zzmhsKx5dBpChKhry/comments-on-anthropic-s-scaling-monosemanticity
zzmhsKx5dBpChKhry
Comments on Anthropic's Scaling Monosemanticity
Robert_AIZI
These are some of my notes from reading Anthropic's latest research report, Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet. TL;DR In roughly descending order of importance: It's great that Anthropic trained an SAE on a production-scale language model, and that the approach works to find ...
2024-06-03
https://www.lesswrong.com/posts/sExzCaNww23kW28so/politics-is-the-mind-killer-but-maybe-we-should-talk-about
sExzCaNww23kW28so
Politics is the mind-killer, but maybe we should talk about it anyway
Chris_Leong
I completely agree with Eliezer that Politics is the Mind-Killer, but at the same time, I worry that talking in abstracts all the time is limiting. I wouldn't want this site to tilt too much toward politics, but I feel that there's such a thing as too little discussion as well. We recently lived through a period where ...
2024-06-03
https://www.lesswrong.com/posts/KqYQYkqsHqRuAKki5/dpo-ppo-rlhf-on-llms-incentivizes-sycophancy-exaggeration
KqYQYkqsHqRuAKki5
DPO/PPO-RLHF on LLMs incentivizes sycophancy, exaggeration and deceptive hallucination, but not misaligned powerseeking
tailcalled
TL;DR: GPTs are imitation learners, even with current forms of RLHF. Direct preference optimization is a conditioning method for generative probabilistic models where pairs of outputs are ranked (e.g. by human raters) based on which one is better, and then (roughly speaking) you apply gradient updates to increase the ...
2024-06-10
https://www.lesswrong.com/posts/qmPXQbyYA66DuJbht/comparing-quantized-performance-in-llama-models
qmPXQbyYA66DuJbht
Comparing Quantized Performance in Llama Models
Nicky
Epistemic Status: Quick tests, most of this was done in less than 48 hours. TL;DR: Can you skimp on GPU VRAM? 8-bit quantized seems fine; for 4-bit it depends. I was asked by @Teun van der Weij to what degree one can run evaluations on quantized models, and I was unsure. I have run some evaluations with Llama 3 and have...
2024-07-15
https://www.lesswrong.com/posts/o5WGuk2auqkhaY7bp/how-do-you-shut-down-an-escaped-model
o5WGuk2auqkhaY7bp
How do you shut down an escaped model?
quetzal_rainbow
From a comment on a post about Autonomous Adaptation and Replication: ARA is just not a very compelling threat model in my mind. The key issue is that AIs that do ARA will need to be operating at the fringes of human society, constantly fighting off the mitigations that humans are using to try to detect them and shut them ...
2024-06-02
https://www.lesswrong.com/posts/KDvvZj6SJwqdNFn6k/list-of-arguments-for-bayesianism
KDvvZj6SJwqdNFn6k
List of arguments for Bayesianism
alenglander
I've been trying to list out for myself the various arguments people give for going with Bayesian (or at least probabilistic) epistemology. Here's what I have so far: Intuitive representation for degrees of belief: It seems pretty obvious that beliefs come in degrees. Representing degrees of belief as numbers between 0...
2024-06-02
https://www.lesswrong.com/posts/s2JqqidLxEE34fD7D/origins-of-the-lab-mouse
s2JqqidLxEE34fD7D
Origins of the Lab Mouse
niko-2
By Alex Telford for Asimov Press. It was 1905, and the French biologist Lucien Cuénot had a puzzle on his hands. He had been breeding mice in an attempt to decipher the patterns of coat color inheritance, but one of his crosses wasn’t behaving as expected. When Cuénot bred heterozygous yellow-coated mice—with yellow co...
2024-06-02
https://www.lesswrong.com/posts/nZnFvi5YFXorvQCpc/why-write-down-the-basics-of-logic-if-they-are-so-evident
nZnFvi5YFXorvQCpc
Why write down the basics of logic if they are so evident?
commissar Yarrick
In My Bayesian Enlightenment, Eliezer writes that he was born a Bayesian. That he decided to become a Bayesian no more than fish prefer to breathe water. Maybe all people are born Bayesian? Although, in that case, why doesn't everyone use Bayesian statistics? And why do many people learn little new by studying Bayesian...
2024-06-02
https://www.lesswrong.com/posts/mmDJWDX5EXv6rymtM/companies-safety-plans-neglect-risks-from-scheming-ai
mmDJWDX5EXv6rymtM
Companies' safety plans neglect risks from scheming AI
Zach Stein-Perlman
Without countermeasures, a scheming AI could escape. A safety case (for deployment) is an argument that it is safe to deploy a particular AI system in a particular way.[1] For any existing LM-based system, the developer can make a great safety case by demonstrating that the system does not have dangerous capabilities.[...
2024-06-03
https://www.lesswrong.com/posts/fhffkfJv9zYPeP3uJ/how-to-better-report-sparse-autoencoder-performance
fhffkfJv9zYPeP3uJ
How to Better Report Sparse Autoencoder Performance
Jemist
TL;DR When presenting data from SAEs, try plotting 1/L0 against 1−Recovered Loss and fitting a Hill curve (figure from Wikipedia). Long: Sparse autoencoders are hot, people are experimenting. The typical graph for SAE experimentation looks something like this. I'm using borrowed data here to better illustrate my point, but I hav...
2024-06-02
https://www.lesswrong.com/posts/sXhkiQuY92oJPzx8Y/turning-latexed-notes-into-blog-posts
sXhkiQuY92oJPzx8Y
Turning latexed notes into blog posts
notfnofn
I've been writing up an assortment of statistical notes in preparation for an upcoming job. I've been learning statistics from mostly-scratch but I have a solid math background, so I imagine my notes would be helpful to many people here with a similar background. My notes are currently latexed. I don't use any commands...
2024-06-01
https://www.lesswrong.com/posts/Amgrrj6k5kzTcrRxr/how-do-you-know-you-are-right-when-debating-calculate-your
Amgrrj6k5kzTcrRxr
How do you know you are right when debating? Calculate your AmIRight score.
ViktorThink
I recently found myself in a spirited debate with a friend about whether large language models (LLMs) like GPT-4 are mere stochastic parrots or if they can genuinely engage in deeper reasoning. We both presented a range of technical arguments and genuinely considered each other’s points. Despite our efforts, we ended u...
2024-06-01
https://www.lesswrong.com/posts/ximzfabt523fDFpPh/scanning-your-brain-with-100-000-000-000-wires
ximzfabt523fDFpPh
Scanning your Brain with 100,000,000,000 wires?
johannes-c-mayer
Epistemic status: I probably don't know what I am talking about 50% of the time. Specifically, I expect that I am missing some bottlenecks that would be obvious to a neuroscientist. Song Version Neuralink has a surgical robot to insert electrodes in your brain. They seem to try to target single neurons already. Their i...
2024-06-01
https://www.lesswrong.com/posts/RPszn2ze2aY8tfqHc/links-for-may-1
RPszn2ze2aY8tfqHc
Links for May
Kaj_Sotala
In English What would happen if a superintelligent AI was aligned with your values? The details here are a little too much in the “superintelligence is magic that can achieve anything” direction to my taste (I don’t think that anything will just be instantly teleported into safety, superintelligent AI or not), but I do...
2024-06-01
https://www.lesswrong.com/posts/GPuXM3ufXfmaktYXZ/what-do-coherence-arguments-actually-prove-about-agentic
GPuXM3ufXfmaktYXZ
What do coherence arguments actually prove about agentic behavior?
Unknown
(edit: discussions in the comments section have led me to realize there have been several conversations on LessWrong related to this topic that I did not mention in my original question post. Since ensuring their visibility is important, I am listing them here: Rohin Shah has explained how consequentialist agents optim...
2024-06-01
https://www.lesswrong.com/posts/pT6E7HzCQpuqFRyuc/ai-safety-a-climb-to-armageddon
pT6E7HzCQpuqFRyuc
AI Safety: A Climb To Armageddon?
kmenou
by Herman Cappelen, Josh Dever and John Hawthorne Abstract: This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the p...
2024-06-01
https://www.lesswrong.com/posts/GfwdBoaLw3ef3zBqe/evidence-of-learned-look-ahead-in-a-chess-playing-neural
GfwdBoaLw3ef3zBqe
Evidence of Learned Look-Ahead in a Chess-Playing Neural Network
ejenner
Paper authors: Erik Jenner, Shreyas Kapur, Vasil Georgiev, Cameron Allen, Scott Emmons, Stuart Russell TL;DR: We released a paper with IMO clear evidence of learned look-ahead in a chess-playing network (i.e., the network considers future moves to decide on its current one). This post shows some of our results, and the...
2024-06-04
https://www.lesswrong.com/posts/5dkhdRMypeuyoXfmb/is-this-lie-detector-really-just-a-lie-detector-an
5dkhdRMypeuyoXfmb
Is This Lie Detector Really Just a Lie Detector? An Investigation of LLM Probe Specificity.
josh-levy
Abstract Whereas previous work has focused primarily on demonstrating a putative lie detector’s sensitivity/generalizability[1][2], it is equally important to evaluate its specificity.  With this in mind, I evaluated a lie detector trained with a state-of-the-art, white box technique - probing an LLM’s activations duri...
2024-06-04
https://www.lesswrong.com/posts/xiRfJApXGDRsQBhvc/we-might-be-dropping-the-ball-on-autonomous-replication-and-1
xiRfJApXGDRsQBhvc
We might be dropping the ball on Autonomous Replication and Adaptation.
charbel-raphael-segerie
Here is a little Q&A Can you explain your position quickly? I think autonomous replication and adaptation in the wild is under-discussed as an AI threat model. And this makes me sad because this is one of the main reasons I'm worried. I think one of AI Safety people's main proposals should first focus on creating a non...
2024-05-31
https://www.lesswrong.com/posts/7LWXfHZLwhn9Jja6N/tax-cuts-and-innovation
7LWXfHZLwhn9Jja6N
Tax Cuts and Innovation
maxwell-tabarrok
The classic turn in metascience goes like this: Policy discussions about R&D usually focus on how much we should spend: should we double the NIH budget or should we stop subsidizing university nonsense? The meta-scientist notices that some types of science funding are tens if not hundreds of times more effective than o...
2024-05-31
https://www.lesswrong.com/posts/seM8aQ7Yy6m3i4QPx/the-gemini-1-5-report
seM8aQ7Yy6m3i4QPx
The Gemini 1.5 Report
Zvi
This post goes over the extensive report Google put out on Gemini 1.5. There are no important surprises. Both Gemini Pro 1.5 and Gemini Flash are ‘highly capable multimodal models incorporating a novel mixture-of-experts architecture’ and various other improvements. They are solid models with solid performance. It can ...
2024-05-31
https://www.lesswrong.com/posts/EGvtgB7ctifzxZg6v/attention-output-saes-improve-circuit-analysis
EGvtgB7ctifzxZg6v
Attention Output SAEs Improve Circuit Analysis
ckkissane
This is the final post of our Alignment Forum sequence produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort. Executive Summary In a previous post we trained Attention Output Sparse Autoencoders (SAEs) on every layer of GPT-2 Small. Following that work, we wanted to stress-test that Att...
2024-06-21
https://www.lesswrong.com/posts/2M8jj9wE2kiCocNJB/web-surfing-tips-for-strange-times
2M8jj9wE2kiCocNJB
Web-surfing tips for strange times
eukaryote
[This post is more opinion-heavy and aimlessly self-promoting than feels appropriate for Lesswrong. I wrote it for my site, Eukaryote Writes Blog, to show off that I now have a substack. But it had all these other observations about the state of the internet and advice woven in, and THOSE seemed more at home on Lesswro...
2024-05-31
https://www.lesswrong.com/posts/5yt2rWLXM5KT7ivnL/there-should-be-more-alignment-driven-startups
5yt2rWLXM5KT7ivnL
There Should Be More Alignment-Driven Startups
Vaniver
Many thanks to Brandon Goldman, David Langer, Samuel Härgestam, Eric Ho, Diogo de Lucena, and Marc Carauleanu, for their support and feedback throughout. Most alignment researchers we sampled in our recent survey think we are currently not on track to succeed with alignment–meaning that humanity may well be on track to...
2024-05-31
https://www.lesswrong.com/posts/w6sbD4K74Lg9AQEZm/twin-peaks-under-the-air
w6sbD4K74Lg9AQEZm
Twin Peaks: under the air
KatjaGrace
Content warning: low content ~ Feb 2021 The other day I decided to try imbibing work-relevant blog posts via AI-generated recital, while scaling the Twin Peaks—large hills near my house in San Francisco, of the sort that one lives near and doesn’t get around to going to. It was pretty strange, all around. For one thing...
2024-05-31
https://www.lesswrong.com/posts/KMv4ZAQuof8hWwcXn/is-suffering-like-shit
KMv4ZAQuof8hWwcXn
Is suffering like shit?
KatjaGrace
People seem to find suffering deep. Serious writings explore the experiences of all manner of misfortunes, and the nuances of trauma and torment involved. It’s hard to write an essay about a really good holiday that seems as profound as an essay about a really unjust abuse. A dark past can be plumbed for all manner of ...
2024-05-31
https://www.lesswrong.com/posts/ZKmFZY995MMvRf8q3/foresight-vision-weekend-europe-2024
ZKmFZY995MMvRf8q3
Foresight Vision Weekend Europe 2024
allison-duettmann
You are invited to join Vision Weekend Europe, the annual festival of Foresight Institute at Bückeburg Castle in Germany from July 12 - 14. What’s this year’s theme? This year’s main conference track is dedicated to “Paths to Progress”; meaning you will hear 10+ invited presentations from Foresight’s core community hig...
2024-05-31
https://www.lesswrong.com/posts/RuRDyEZKM5Cz54f8q/how-have-analogous-industries-solved-interested-greater-than
RuRDyEZKM5Cz54f8q
How have analogous Industries solved Interested > Trained > Employed bottlenecks?
yanni
It seems like one of the biggest problems* in AI Safety is that it is ridiculously hard to get good training (i.e. MATS is ridiculously competitive now) and employed (samesies). Has anyone looked across other categories (e.g. potential other sciences) to see how this problem has been solved? I assume at the most macro le...
2024-05-30
https://www.lesswrong.com/posts/DyY5SAZttfvg7ykjh/duckbill-masks-better
DyY5SAZttfvg7ykjh
Duckbill Masks Better?
jkaufman
Lai et al. just published a paper looking at how well various masks worked for keeping others from getting sick ("source control"). Their university press office summarized this as: Study finds all masks effective, but 'duckbill' N95 masks far outperform others, suggests they should be the standard in high-risk settin...
2024-05-30
https://www.lesswrong.com/posts/dd66GymgbLQMHGLwQ/openai-helen-toner-speaks
dd66GymgbLQMHGLwQ
OpenAI: Helen Toner Speaks
Zvi
Helen Toner went on the TED AI podcast, giving us more color on what happened at OpenAI. These are important claims to get right. I will start with my notes on the podcast, including the second part where she speaks about regulation in general. Then I will discuss some implications more broadly. Notes on Helen Toner’s ...
2024-05-30
https://www.lesswrong.com/posts/yRWv5kkDD4YhzwRLq/non-disparagement-canaries-for-openai
yRWv5kkDD4YhzwRLq
Non-Disparagement Canaries for OpenAI
aysja
Since at least 2017, OpenAI has asked departing employees to sign offboarding agreements which legally bind them to permanently—that is, for the rest of their lives—refrain from criticizing OpenAI, or from otherwise taking any actions which might damage its finances or reputation.[1] If they refused to sign, OpenAI thr...
2024-05-30
https://www.lesswrong.com/posts/yHFhWmu3DmvXZ5Fsm/clarifying-metr-s-auditing-role
yHFhWmu3DmvXZ5Fsm
Clarifying METR's Auditing Role
beth-barnes
METR has not intended to claim to have audited anything, or to claim to be providing meaningful oversight or accountability, but there has been some confusion about whether METR is an auditor or planning to be one. To clarify this point: METR’s top priority is to develop the science of evaluations, and we don’t need to...
2024-05-30
https://www.lesswrong.com/posts/FzvjShr65WDyakDdq/how-likely-is-it-that-ai-will-torture-us-until-the-end-of
FzvjShr65WDyakDdq
How likely is it that AI will torture us until the end of time?
Damilo
Disclaimer: in this post I touch on some very dark and disturbing topics. I'm talking about suicide, my reasoning may be wrong and should not be used to retroactively justify suicide. I've been stuck on s-risks for over a month now. My life has been turned upside down since I first learned about this subject. So today ...
2024-05-31
https://www.lesswrong.com/posts/iHgomfiLgPZfJ2a2M/one-week-left-to-apply-for-the-roots-of-progress-blog
iHgomfiLgPZfJ2a2M
One week left to apply for the Roots of Progress Blog-Building Intensive
jasoncrawford
The application deadline for the 2024 cohort of The Roots of Progress Blog-Building Intensive is Friday, June 7—just over a week away. If you want to apply, do it now. The Blog-Building Intensive is an eight-week program for aspiring progress writers to start or grow a blog. It also makes you a Roots of Progress fe...
2024-05-30
https://www.lesswrong.com/posts/r7nBaKy5Ry3JWhnJT/announcing-iliad-theoretical-ai-alignment-conference
r7nBaKy5Ry3JWhnJT
Announcing ILIAD — Theoretical AI Alignment Conference
Nora_Ammann
We are pleased to announce ILIAD — a 5-day conference bringing together 100+ researchers to build strong scientific foundations for AI alignment. ***Apply to attend by June 30!*** When: Aug 28 - Sep 3, 2024. Where: @Lighthaven (Berkeley, US). What: A mix of topic-specific tracks, and unconference style programming, 100+ at...
2024-06-05
https://www.lesswrong.com/posts/qpspcyKopfGNsz5TH/getting-started-with-ai-alignment-research-how-to-reproduce
qpspcyKopfGNsz5TH
Getting started with AI Alignment research: how to reproduce an experiment from a research paper
Alexander230
This is a post with technical instructions on how to reproduce an experiment from the Weak-to-strong generalization paper: https://openai.com/index/weak-to-strong-generalization/. It’s oriented mostly at beginners in AI Alignment who want to start tinkering with models and are looking for examples of how to do experiments. Weak-to-...
2024-05-30