Dataset Viewer
Auto-converted to Parquet

| Column | Type | Range / values |
| --- | --- | --- |
| id | string | length 17 |
| body | string | length 0–19.5k |
| posted_at | string | length 24 |
| karma | int64 | -6 to 185 |
| parent_comment_id | string | length 17 |
| post_id | string | length 17 |
| post_title | string | length 2–127 |
| post_slug | string | length 2–61 |
| post_url | string | length 20–146 |
| post_author | string | 214 distinct values |
| post_posted_at | string | length 24 |
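As a minimal sketch of working with this schema: the rows below can be loaded into pandas (from the Parquet file or otherwise) and filtered, e.g. for high-karma top-level comments. The two example records are copied from the data itself, with bodies and titles truncated here for brevity.

```python
import pandas as pd

# Two rows copied from the dataset, using the column names from the schema
# (body and post_title truncated for brevity).
rows = [
    {
        "id": "p946iQfZZMvsf5cdM",
        "body": "> I am happy to take a non-worst-case empirical perspective",
        "posted_at": "2022-12-15T18:58:49.658Z",
        "karma": 16,
        "parent_comment_id": None,
        "post_id": "L4anhrxjv8j2yRKKp",
        "post_author": "Collin",
    },
    {
        "id": "Ps9oCx4apXyLpwqAq",
        "body": "I would typically call MLP(x) = f(x) + (MLP(x) - f(x))",
        "posted_at": "2022-12-05T01:50:30.466Z",
        "karma": 2,
        "parent_comment_id": "amJ6jtDX8TfFNEmA3",
        "post_id": "JvZhhzycHu2Yd57RN",
        "post_author": "LawrenceC",
    },
]
df = pd.DataFrame(rows)

# Top-level comments (no parent_comment_id) with karma >= 10, newest first.
top_level = (
    df[df["parent_comment_id"].isna() & (df["karma"] >= 10)]
    .sort_values("posted_at", ascending=False)
)
print(top_level["id"].tolist())  # → ['p946iQfZZMvsf5cdM']
```

With the actual file downloaded, the same filter would apply after `pd.read_parquet(...)` in place of the hand-built frame.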
Ps9oCx4apXyLpwqAq
I would typically call MLP(x) = f(x) + (MLP(x) - f(x)) a non-linear decomposition as f(x) is an arbitrary function. Regardless, any decomposition into a computational graph (that we can prove is extensionally equal) is fine. For instance, if it's the case that MLP(x) = combine(h(x), g(x)) (via extensional equality)...
2022-12-05T01:50:30.466Z
2
amJ6jtDX8TfFNEmA3
JvZhhzycHu2Yd57RN
Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research]
causal-scrubbing-a-method-for-rigorously-testing
https://www.lesswrong.com/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing
LawrenceC
2022-12-03T00:58:36.973Z
p946iQfZZMvsf5cdM
> I am happy to take a “non-worst-case” empirical perspective in studying this problem. In particular, I suspect it will be very helpful – and possibly necessary – to use incidental empirical properties of deep learning systems, which often have a surprising amount of useful emergent structure (as I will discuss more u...
2022-12-15T18:58:49.658Z
16
null
L4anhrxjv8j2yRKKp
How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme
how-discovering-latent-knowledge-in-language-models-without
https://www.lesswrong.com/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without
Collin
2022-12-15T18:22:40.109Z
5Z2azFypoeaXJ6TLE
> Simulations are not the most efficient way for A and B to reach their agreement Are you claiming that the marginal returns to simulation are *never* worth the costs? I'm skeptical. I think it's quite likely that some number of acausal trade simulations are run even if that isn't where most of the information comes f...
2023-03-04T01:13:10.256Z
5
null
3RSq3bfnzuL3sp46J
Acausal normalcy
acausal-normalcy
https://www.lesswrong.com/posts/3RSq3bfnzuL3sp46J/acausal-normalcy
Andrew_Critch
2023-03-03T23:34:33.971Z
AySJNYqpweYRg8iqY
comment TLDR: Adversarial examples are a weapon against the AIs we can use for good and solving adversarial robustness would let the AIs harden themselves. I haven't read this yet (I will later : ) ), so it's possible this is mentioned, but I'd note that *exploiting* the lack of adversarial robustness could also be us...
2023-03-12T03:54:14.507Z
6
null
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
ErJ3bvLtFQtGeoXNJ
I roughly agree with Akash's comment. But also some additional points: - It's decently likely that it will be pretty easy to get GPT-7 to avoid breaking the law or other egregious issues. As systems get more capable, basic alignment approaches get better at preventing stuff we can measure well. It's plausible that sca...
2023-03-30T16:38:17.101Z
3
null
zDf7fnentCFTdK3K6
Want to win the AGI race? Solve alignment.
want-to-win-the-agi-race-solve-alignment
https://www.forourposterity.com/want-to-win-the-agi-race-solve-alignment/
leopold
2023-03-29T17:40:36.187Z
hkqk6sFphuSHSHxE4
> So I propose “somebody gets autonomous learning to work stably for LLMs (or similarly-general systems)” as a possible future fast-takeoff scenario. Broadly speaking, autonomous learning doesn't seem particularly distinguished relative to supervised learning unless you have data limitations. For instance, suppose tha...
2023-04-11T23:08:25.461Z
27
7yAJbkDtMepxDvcMe
hvz9qjWyv8cLX9JJR
Evolution provides no evidence for the sharp left turn
evolution-provides-no-evidence-for-the-sharp-left-turn
https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
Quintin Pope
2023-04-11T18:43:07.776Z
CShisr4paxQ8mWyZ7
(Note: this comment is rambly and repetitive, but I decided not to spend time cleaning it up) It sounds like you believe something like: "There are autonomous learning style approaches which are considerably better than the efficiency on next token prediction." And more broadly, you're making a claim like 'current le...
2023-04-12T16:47:02.142Z
23
ABpQQ7AZup7KbqmEz
hvz9qjWyv8cLX9JJR
Evolution provides no evidence for the sharp left turn
evolution-provides-no-evidence-for-the-sharp-left-turn
https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
Quintin Pope
2023-04-11T18:43:07.776Z
2Go4DTBCfcxDXghDy
Distilling inference based approaches into learning is usually reasonably straightforward. I think this also applies in this case. This doesn't necessarily apply to 'learning how to learn'. (That said, I'm less sold that retrieval + chain of thought 'mostly solves autonomous learning')
2023-04-13T00:43:06.136Z
5
rpqvchJxQcPNtfnds
hvz9qjWyv8cLX9JJR
Evolution provides no evidence for the sharp left turn
evolution-provides-no-evidence-for-the-sharp-left-turn
https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
Quintin Pope
2023-04-11T18:43:07.776Z
9AmCp7ykwP73vDNft
Counterargument: you can just defend against these AIs running amuck. As long as most AIs are systematically trying to further human goals you don't obviously get doomed (though the situation is scary). There could be offense-defense imbalances, but there are also 'tyranny of the majority' advantages.
2023-04-16T23:28:32.571Z
3
null
RJEWuHZBr85RMYRp4
Top lesson from GPT: we will probably destroy humanity "for the lulz" as soon as we are able.
top-lesson-from-gpt-we-will-probably-destroy-humanity-for
https://www.lesswrong.com/posts/RJEWuHZBr85RMYRp4/top-lesson-from-gpt-we-will-probably-destroy-humanity-for
Shmi
2023-04-16T20:27:19.665Z
egdh9jAXe85HmwNaH
Huh? Definitely some humans will try to defend...
2023-04-17T01:00:48.852Z
5
XfoujwesmngDW9drj
RJEWuHZBr85RMYRp4
Top lesson from GPT: we will probably destroy humanity "for the lulz" as soon as we are able.
top-lesson-from-gpt-we-will-probably-destroy-humanity-for
https://www.lesswrong.com/posts/RJEWuHZBr85RMYRp4/top-lesson-from-gpt-we-will-probably-destroy-humanity-for
Shmi
2023-04-16T20:27:19.665Z
MScxqqfwzzmrtnHfh
This post seems to argue for fast/discontinuous takeoff without explicitly noting that people working in alignment often disagree. Further I think many of the arguments given here for fast takeoff seem sloppy or directly wrong on my own views. It seems reasonable to just give your views without noting disagreement, bu...
2023-04-18T16:35:48.019Z
15
null
eaDCgdkbsfGqpWazi
The basic reasons I expect AGI ruin
the-basic-reasons-i-expect-agi-ruin
https://www.lesswrong.com/posts/eaDCgdkbsfGqpWazi/the-basic-reasons-i-expect-agi-ruin
Rob Bensinger
2023-04-18T03:37:01.496Z
vLZaFtY6YCsABMXja
> A common misconception is that STEM-level AGI is dangerous because of something murky about "agents" or about self-awareness. Instead, I'd say that the danger is inherent to the nature of action sequences that push the world toward some sufficiently-hard-to-reach state. > > Call such sequences "plans". > > If you sam...
2023-04-18T16:47:24.900Z
6
null
eaDCgdkbsfGqpWazi
The basic reasons I expect AGI ruin
the-basic-reasons-i-expect-agi-ruin
https://www.lesswrong.com/posts/eaDCgdkbsfGqpWazi/the-basic-reasons-i-expect-agi-ruin
Rob Bensinger
2023-04-18T03:37:01.496Z
E9JimTRJcXpKym8wT
> I agree that much of LW has moved past the foom argument and is solidly on Eliezer's side relative to Robin Hanson; Hanson's views seem increasingly silly as time goes on (though they seemed much more plausible a decade ago, before e.g. the rise of foundation models and the shortening of timelines to AGI). The debate ...
2023-04-18T16:52:57.452Z
3
MfcncYEc8f8unERPq
87EzRDAHkQJptLthE
But why would the AI kill us?
but-why-would-the-ai-kill-us
https://www.lesswrong.com/posts/87EzRDAHkQJptLthE/but-why-would-the-ai-kill-us
So8res
2023-04-17T18:42:39.720Z
LrEoXGedJek5LMfDd
If you condition on misaligned AI takeover, my current (extremely rough) probabilities are: - 50% chance the AI kills > 99% of people - Conditional on killing >99% of people, 2/3 chance the AI kills literally everyone Edit: I now think mass death and extinction are notably less likely than these probabilities. Perhaps...
2023-04-18T17:27:13.561Z
26
null
87EzRDAHkQJptLthE
But why would the AI kill us?
but-why-would-the-ai-kill-us
https://www.lesswrong.com/posts/87EzRDAHkQJptLthE/but-why-would-the-ai-kill-us
So8res
2023-04-17T18:42:39.720Z
45LkJyYZ5zAe5MxPh
I really hope this isn't a sticking point for people. I also strongly disagree with this being 'a fundamental point'.
2023-04-18T17:28:54.265Z
23
JKERbBzy4wfogyoaE
87EzRDAHkQJptLthE
But why would the AI kill us?
but-why-would-the-ai-kill-us
https://www.lesswrong.com/posts/87EzRDAHkQJptLthE/but-why-would-the-ai-kill-us
So8res
2023-04-17T18:42:39.720Z
dA5QSgtxyKfH33yw6
[endorsed]
2023-04-18T23:06:42.354Z
3
jFHHu5u2Cvz6kAufn
87EzRDAHkQJptLthE
But why would the AI kill us?
but-why-would-the-ai-kill-us
https://www.lesswrong.com/posts/87EzRDAHkQJptLthE/but-why-would-the-ai-kill-us
So8res
2023-04-17T18:42:39.720Z
xGSGYP3BiiJQLQENu
I think my views on takeoff/timelines are broadly similar to Paul's except that I have somewhat shorter takeoffs and timelines (I think this is due to thinking AI is a bit easier and also due to misc deference). > ... Wait, why not? If AI exceeds the human capability range on STEM four years from now, I would call tha...
2023-04-18T23:21:15.668Z
9
HXtgRpGxPxZmxZHxe
eaDCgdkbsfGqpWazi
The basic reasons I expect AGI ruin
the-basic-reasons-i-expect-agi-ruin
https://www.lesswrong.com/posts/eaDCgdkbsfGqpWazi/the-basic-reasons-i-expect-agi-ruin
Rob Bensinger
2023-04-18T03:37:01.496Z
gxx2JsyyK7b9W6JYs
I broadly disagree with Yudkowsky on his vision of FOOM and think he's pretty sloppy wrt. AI takeoff overall. But, I do think you're quite likely to get a quite rapid singularity if people don't intentionally slow things down. For instance, I broadly think the modeling in [Tom Davidson's takeoff speeds report](https:...
2023-04-27T18:01:47.787Z
10
null
LF3DDZ67knxuyadbm
Contra Yudkowsky on Doom from Foom #2
contra-yudkowsky-on-doom-from-foom-2
https://www.lesswrong.com/posts/LF3DDZ67knxuyadbm/contra-yudkowsky-on-doom-from-foom-2
jacob_cannell
2023-04-27T00:07:20.360Z
ktSb9kbzfqxJFar5C
See also [this section where Tom talks about kinks in the underlying capabilities leading to rapid progress](https://docs.google.com/document/d/1DZy1qgSal2xwDRR0wOPBroYE_RDV1_2vvhwVz4dxCVc/edit#heading=h.apdvo0uwo5qe)
2023-04-27T18:17:51.074Z
3
gxx2JsyyK7b9W6JYs
LF3DDZ67knxuyadbm
Contra Yudkowsky on Doom from Foom #2
contra-yudkowsky-on-doom-from-foom-2
https://www.lesswrong.com/posts/LF3DDZ67knxuyadbm/contra-yudkowsky-on-doom-from-foom-2
jacob_cannell
2023-04-27T00:07:20.360Z
seMfCZxnLmByWaJPo
I can't tell if this post is trying to discuss communicating about anything related to AI or alignment or is trying to more specifically discuss communication aimed at general audiences. I'll assume it's discussing arbitrary communication on AI or alignment. I feel like this post doesn't engage sufficiently with the c...
2023-04-29T18:17:46.513Z
50
null
mLubC65xXekk5tkug
[SEE NEW EDITS] No, *You* Need to Write Clearer
see-new-edits-no-you-need-to-write-clearer
https://www.thinkingmuchbetter.com/nickai/fieldbuilding/no-you-need-to-write-clearer.html
Nicholas Kross
2023-04-29T05:04:01.559Z
5kopfKcwui6bitbHf
> Everyone, everyone, literally everyone in AI alignment is severely wrong about at least one core thing, and disagreements still persist on seemingly-obviously-foolish things. If by 'severely wrong about at least one core thing' you just mean 'systemically severely miscalibrated on some very important topic', then m...
2023-04-29T18:18:33.801Z
6
null
mLubC65xXekk5tkug
[SEE NEW EDITS] No, *You* Need to Write Clearer
see-new-edits-no-you-need-to-write-clearer
https://www.thinkingmuchbetter.com/nickai/fieldbuilding/no-you-need-to-write-clearer.html
Nicholas Kross
2023-04-29T05:04:01.559Z
dCP73sQ8jaW4udGps
I agree that EY is quite overconfident and I think his arguments for doom are often sloppy and don't hold up. (I think the risk is substantial but often the exact arguments EY gives don't work). And, his communication often fails to meet basic bars for clarity. I'd also probably agree with 'if EY was able to do so, impr...
2023-04-30T20:59:26.374Z
11
t6hoW9zZ5CxBDxEH2
mLubC65xXekk5tkug
[SEE NEW EDITS] No, *You* Need to Write Clearer
see-new-edits-no-you-need-to-write-clearer
https://www.thinkingmuchbetter.com/nickai/fieldbuilding/no-you-need-to-write-clearer.html
Nicholas Kross
2023-04-29T05:04:01.559Z
iPR9fqqyqEGs48rxf
My probabilities are very rough, but I'm feeling more like 1/3 ish today after thinking about it a bit more. Shrug. As far as reasons for it being this high: - Conflict seems plausible to get to this level of lethality (see edit, I think I was a bit unclear or incorrect) - AIs might not care about acausal trade consi...
2023-05-14T19:32:51.753Z
1
4KgTjXxAFd6PkaN3h
87EzRDAHkQJptLthE
But why would the AI kill us?
but-why-would-the-ai-kill-us
https://www.lesswrong.com/posts/87EzRDAHkQJptLthE/but-why-would-the-ai-kill-us
So8res
2023-04-17T18:42:39.720Z
y8NhHPqqqGWugZFuw
I don't quite think this point is right. Gradient descent had to have been able to produce the highly polysemantic model and pack things together in a way which got lower loss. This suggests that it can also change the underlying computation. I might need to provide more explanation for my point to be clear, but I thin...
2023-05-19T17:51:36.011Z
1
D3ZFvEetR3HJPexya
w2TAEvME2yAG9MHeq
Gradient hacking is extremely difficult
gradient-hacking-is-extremely-difficult
https://www.lesswrong.com/posts/w2TAEvME2yAG9MHeq/gradient-hacking-is-extremely-difficult
beren
2023-01-24T15:45:46.518Z
LJj24JqDvR9bPqemi
[Sorry for late reply] > Analogously, conditional on things like gradient hacking being an issue at all, I'd expect the "hacker" to treat potential-training-objective-improvement as a scarce resource, which it generally avoids "spending" unless the expenditure will strengthen its own structure. Concretely, this probab...
2023-05-19T18:16:45.879Z
3
Z6qZ9ME3EAMkx94G4
rCJQAkPTEypGjSJ8X
How might we align transformative AI if it’s developed very soon?
how-might-we-align-transformative-ai-if-it-s-developed-very
https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very
HoldenKarnofsky
2022-08-29T15:42:08.985Z
mJJ3ejDuW9zw8N38t
> We can't be confident enough that it won't happen to safely rely on that assumption. I'm not sure what motivation for [worst-case reasoning](https://www.lesswrong.com/posts/yTvBSFrXhZfL8vr5a/worst-case-thinking-in-ai-alignment) you're thinking about here. Maybe just that there are many disjunctive ways things can g...
2023-05-19T18:24:01.320Z
3
Z6qZ9ME3EAMkx94G4
rCJQAkPTEypGjSJ8X
How might we align transformative AI if it’s developed very soon?
how-might-we-align-transformative-ai-if-it-s-developed-very
https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very
HoldenKarnofsky
2022-08-29T15:42:08.985Z
GZT7Ns6gqLbb9NMLj
> Pitting two models against each other in a zero-sum competition only works so long as both models actually learn the desired goals. Otherwise, they may be able to reach a compromise with each other and cooperate towards a non-zero-sum objective. If training works well, then they can't collude on average during train...
2023-05-27T16:15:45.033Z
2
null
A48amesEmqD8KNSmY
Conditional Prediction with Zero-Sum Training Solves Self-Fulfilling Prophecies
conditional-prediction-with-zero-sum-training-solves-self
https://www.lesswrong.com/posts/A48amesEmqD8KNSmY/conditional-prediction-with-zero-sum-training-solves-self
Rubi J. Hudson
2023-05-26T17:44:35.575Z
8DqCckZ2kRMBDDoxm
> Here are some views, often held in a cluster: I'm not sure exactly which clusters you're referring to, but I'll just assume that you're pointing to something like "people who aren't very into the sharp left turn and think that iterative, carefully bootstrapped alignment is a plausible strategy." If this isn't what y...
2023-05-27T19:20:38.343Z
9
null
tNtiJp8dA6jMbgKbf
Hands-On Experience Is Not Magic
hands-on-experience-is-not-magic
https://www.lesswrong.com/posts/tNtiJp8dA6jMbgKbf/hands-on-experience-is-not-magic
Thane Ruthenis
2023-05-27T16:57:10.531Z
4AeTGsdzKWbAhsGss
I would be more sympathetic if you made a move like, "I'll accept continuity through the human range of intelligence, and that we'll only have to align systems as collectively powerful as humans, but I still think that hands-on experience is only..." In particular, I think there is a real disagreement about the relativ...
2023-05-27T19:31:35.500Z
4
8DqCckZ2kRMBDDoxm
tNtiJp8dA6jMbgKbf
Hands-On Experience Is Not Magic
hands-on-experience-is-not-magic
https://www.lesswrong.com/posts/tNtiJp8dA6jMbgKbf/hands-on-experience-is-not-magic
Thane Ruthenis
2023-05-27T16:57:10.531Z
R6z36fXsPmwe8PNsB
When I try to interpret your points here, I come to the conclusion that you think humans, upon reflection, would cause human extinction (in favor of resources being used for something else). Or at least that many/most humans would, upon reflection, prefer resources to be used for purposes other than preserving human l...
2023-06-01T21:09:02.670Z
3
TbK2zqGETgAqT2bjx
2NncxDQ3KBDCxiJiP
Cosmopolitan values don't come free
cosmopolitan-values-don-t-come-free
https://www.lesswrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free
So8res
2023-05-31T15:58:16.974Z
Wz8QGAmJYzgro3mHr
> **Does This Make Any Sense?** I'm confused - it looks like the first paragraph of this section is taken from a prior post on *attribution patching*.
2023-06-05T03:36:12.889Z
1
null
xh85KbTFhbCz7taD4
How to Think About Activation Patching
how-to-think-about-activation-patching
https://www.neelnanda.io/mechanistic-interpretability/attribution-patching#how-to-think-about-activation-patching
Neel Nanda
2023-06-04T14:17:42.264Z
HWXwpdZtcmBMasis4
Oops, somehow I missed that context. Thanks for the clarification.
2023-06-05T22:38:11.837Z
1
Ynjz2dKvKGK5srzMu
xh85KbTFhbCz7taD4
How to Think About Activation Patching
how-to-think-about-activation-patching
https://www.neelnanda.io/mechanistic-interpretability/attribution-patching#how-to-think-about-activation-patching
Neel Nanda
2023-06-04T14:17:42.264Z
Myz6CX7BjeSq5heWR
> If I build a chatbot, and I can't jailbreak it, how do I determine whether that's because the chatbot is secure or because I'm bad at jailbreaking? How should AI scientists overcome Schneier's Law of LLMs? FWIW, I think there aren't currently good benchmarks for alignment and the ones you list aren't very relevant....
2023-06-14T04:26:27.197Z
8
null
uyk5nn93HxJMsio98
MetaAI: less is less for alignment.
metaai-less-is-less-for-alignment-1
https://www.lesswrong.com/posts/uyk5nn93HxJMsio98/metaai-less-is-less-for-alignment-1
Cleo Nardo
2023-06-13T14:08:45.209Z
wrZ2cRXnqPX3sku3u
I'd like to register that I disagree with the claim that standard online RLHF requires adversarial robustness in AIs per se. (I agree that it requires that humans are adversarially robust to the AI, but this is a pretty different problem.) In particular, the place where adversarial robustness shows up is in sample eff...
2023-06-15T22:32:21.805Z
11
null
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
YAJKAuNe8fjMwrL4e
This is pretty close to my understanding, with one important objection. Thanks for responding and trying to engage with my perspective. ## Objection > If we repeat this iterative process enough times, we'll end up with a robust reward model. I don't claim we'll necessarily ever get a fully robust reward model, just...
2023-06-16T16:54:39.672Z
1
MZrLpvatKKPgkXGAr
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
ppWbqGoGFrSqPsfTP
Sorry, thanks for the correction. I personally disagree on this being a good benchmark for outer alignment for various reasons, but it's good to understand the intention.
2023-06-16T19:09:49.700Z
3
bmGCdyCyiAvxnHjLf
uyk5nn93HxJMsio98
MetaAI: less is less for alignment.
metaai-less-is-less-for-alignment-1
https://www.lesswrong.com/posts/uyk5nn93HxJMsio98/metaai-less-is-less-for-alignment-1
Cleo Nardo
2023-06-13T14:08:45.209Z
pL2afjDA6zpHCKiLo
> I'm a bit confused by the claim here, although I've only read the abstract and skimmed the paper so perhaps it'd become obvious from a closer read. As far as I can tell, the cited paper focuses on motion-planning, and considers a rather restricted setting of LQR policies. I originally linked to the wrong paper! : ( ...
2023-06-19T04:26:35.931Z
1
BmkwAzJfyLdQfHPBj
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
MSWt7tY5k5RauRsTo
> Suppose we condition on RLHF failing. At a high level, failures split into: (a) human labelers rewarded the wrong thing (e.g. fooling humans); (b) the reward model failed to predict human labelers' judgement and rewarded the wrong thing (e.g. reward hacking); (c) RL produced a policy that is capable enough to be dange...
2023-06-19T04:31:31.715Z
1
BmkwAzJfyLdQfHPBj
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
xAyYsRJjwcd7mb5hN
> Suppose we condition on RLHF failing. At a high level, failures split into: (a) human labelers rewarded the wrong thing (e.g. fooling humans); (b) the reward model failed to predict human labelers' judgement and rewarded the wrong thing (e.g. reward hacking); (c) RL produced a policy that is capable enough to be dange...
2023-06-19T04:36:51.859Z
1
BmkwAzJfyLdQfHPBj
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
QWY9ynu5XJHjnR29u
> Collecting fresh human data for that is prohibitive, so we rely on a reward model -- unfortunately that gets hacked. Are you assuming that we can't collect human data online as the policy optimizes against the reward model? (People currently do collect data online to avoid getting hacked like this.) This case seems ...
2023-06-19T04:40:03.770Z
1
BmkwAzJfyLdQfHPBj
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
MdStCtpHqjokboY8B
> This argument feels like it's proving too much. InstructGPT isn't perfect, but it does produce a lot less toxic output and follow instructions a lot better than the base model GPT-3. RLHF seems to work, and GPT-4 is even better, showing that it gets easier with bigger models. Why should we expect this trend to revers...
2023-06-19T05:00:02.850Z
1
BmkwAzJfyLdQfHPBj
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
MHdciQ2ahhHk25uTF
Related to this. You say: > I suspect this process would be much less sample efficient than vanilla RLHF, but it would have better safety properties, and measuring how much slower it is could be a good proxy for how severe the "robustness tax" is. What specific safety properties are you thinking about? As far as I c...
2023-06-19T05:06:53.507Z
1
xAyYsRJjwcd7mb5hN
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
yphn7aBQGJ3AqJxCi
(Here's a possibly arcane remark. It's worth noting that I think *always correct* reward models are sufficient for high stakes alignment via runtime filtering (technically you just need to never give a very bad output decent reward). So, *always correct* reward models would be great if you could get them. Note that *al...
2023-06-19T05:51:09.771Z
1
BmkwAzJfyLdQfHPBj
ncsxcf8CkDveXBCrA
AI Safety in a World of Vulnerable Machine Learning Systems
ai-safety-in-a-world-of-vulnerable-machine-learning-systems-1
https://far.ai/post/2023-03-safety-vulnerable-world/
AdamGleave
2023-03-08T02:40:43.139Z
azdKPP3DxKLwLzwhr
It's worth noting that you don't necessarily need to train models to actually do dangerous actions like literally executing on a takeover attempt, you can just train models which do something which is a proxy to coups (or a proxy to some part of coups). The extent to which this proxy is itself dangerous or generalizing i...
2023-07-06T03:24:29.096Z
7
phFqC2EdDALCFwr56
Hna4aoMwr6Qx9rHBs
[Linkpost] Introducing Superalignment
linkpost-introducing-superalignment
https://openai.com/blog/introducing-superalignment
beren
2023-07-05T18:23:18.419Z
LxMHGCYFCJZFT5LjS
Given the current paradigm and technology it seems far safer to have an AI work on alignment research than highly difficult engineering tasks like nanotech. In particular, note that we only need to have an AI totally obsolete prior efforts for this to be as good of a position as we could reasonably hope for. In the cur...
2023-07-12T03:24:35.651Z
20
HygD3rXhHahSFDAnT
NSZhadmoYdjRKNq6X
OpenAI Launches Superalignment Taskforce
openai-launches-superalignment-taskforce
https://www.lesswrong.com/posts/NSZhadmoYdjRKNq6X/openai-launches-superalignment-taskforce
Zvi
2023-07-11T13:00:06.232Z
65cy3SicpoWmMfdEa
Yeah, if this wasn't clear, I was referring to 'pivotal acts' which use hard engineering power sufficient for decisive strategic advantage. Things like 'brain emulations' or 'build a fully human interpretable AI design' don't seem particularly anti-social (but may be poor ideas for feasibility reasons).
2023-07-12T16:38:30.939Z
1
f5qw3ByewoyrTJ5D8
NSZhadmoYdjRKNq6X
OpenAI Launches Superalignment Taskforce
openai-launches-superalignment-taskforce
https://www.lesswrong.com/posts/NSZhadmoYdjRKNq6X/openai-launches-superalignment-taskforce
Zvi
2023-07-11T13:00:06.232Z
vmvWSrdwnLk9wpRe8
I think OpenAI is probably agnostic about how to use AIs to get more alignment research done. That said, speeding up human researchers by large multipliers will eventually be required for the plan to be feasible. Like 10-100x rather than 1.5-4x. My guess is that you'll probably need AIs running considerably autonomous...
2023-07-12T16:45:21.993Z
2
z2oophxk2YabSDp4g
NSZhadmoYdjRKNq6X
OpenAI Launches Superalignment Taskforce
openai-launches-superalignment-taskforce
https://www.lesswrong.com/posts/NSZhadmoYdjRKNq6X/openai-launches-superalignment-taskforce
Zvi
2023-07-11T13:00:06.232Z
AmxmAxmGub6d9GCmg
> Many proposed solutions to the alignment problem involve one “helper AI” providing a feedback signal steering the main AI system towards desirable behavior. Unfortunately, if the helper AI system is vulnerable to adversarial attack, then the main AI system will achieve a higher rating by the helper AI if it exploits ...
2023-07-21T05:32:27.066Z
4
null
DCL3MmMiPsuMxP45a
Even Superhuman Go AIs Have Surprising Failure Modes
even-superhuman-go-ais-have-surprising-failure-modes
https://far.ai/post/2023-07-superhuman-go-ais/
AdamGleave
2023-07-20T17:31:35.814Z
x4PvYmdxzxzRYhq9R
Amusingly, Betteridge's law of headlines applies.
2023-08-10T00:51:35.566Z
4
null
oSZ2xTxEMZh9f3Yaz
LLMs are (mostly) not helped by filler tokens
llms-are-mostly-not-helped-by-filler-tokens
https://www.lesswrong.com/posts/oSZ2xTxEMZh9f3Yaz/llms-are-mostly-not-helped-by-filler-tokens
Kshitij Sachan
2023-08-10T00:48:50.510Z
r4ZGs8Jao3YF9kD78
> beyond what was capable with Meta's finetuning How do you know this is beyond what finetuning was capable of? I'd guess that Meta didn't bother to train against obvious sycophancy and if you trained against it, then it would go away. This work can still be interesting for other reasons, e.g. building into better int...
2023-08-10T01:27:38.672Z
4
rbu6RftikDH3mDBuC
raoeNarFYCxxyKAop
Modulating sycophancy in an RLHF model via activation steering
modulating-sycophancy-in-an-rlhf-model-via-activation
https://www.lesswrong.com/posts/raoeNarFYCxxyKAop/modulating-sycophancy-in-an-rlhf-model-via-activation
Nina Panickssery
2023-08-09T07:06:50.859Z
54hrsJT2msPP6CaWK
I'd guess that if you: - Instructed human labelers to avoid sycophancy - Gave human labelers examples of a few good and bad responses with respect to sycophancy - Trained models on examples where sycophancy is plausibly/likely (e.g., pretrained models exhibit sycophancy a reasonable fraction of the time when generatin...
2023-08-10T18:57:34.268Z
5
E6GcNZJCwxLHNWeJs
raoeNarFYCxxyKAop
Modulating sycophancy in an RLHF model via activation steering
modulating-sycophancy-in-an-rlhf-model-via-activation
https://www.lesswrong.com/posts/raoeNarFYCxxyKAop/modulating-sycophancy-in-an-rlhf-model-via-activation
Nina Panickssery
2023-08-09T07:06:50.859Z
EEuAztJFgzgKWZCrA
More generally, I think arguments that human feedback is failing should ideally be of the form: "Human labelers (with AI assistance) fail to notice this sort of bad behavior. Also, either this or nearby stuff can't just be resolved with trivial and obvious countermeasures like telling human labelers to be on the look ...
2023-08-10T19:08:15.912Z
1
54hrsJT2msPP6CaWK
raoeNarFYCxxyKAop
Modulating sycophancy in an RLHF model via activation steering
modulating-sycophancy-in-an-rlhf-model-via-activation
https://www.lesswrong.com/posts/raoeNarFYCxxyKAop/modulating-sycophancy-in-an-rlhf-model-via-activation
Nina Panickssery
2023-08-09T07:06:50.859Z
SXzzBGgJ6BK2ovd5y
[title was edited]
2023-08-10T20:38:53.073Z
3
x4PvYmdxzxzRYhq9R
oSZ2xTxEMZh9f3Yaz
LLMs are (mostly) not helped by filler tokens
llms-are-mostly-not-helped-by-filler-tokens
https://www.lesswrong.com/posts/oSZ2xTxEMZh9f3Yaz/llms-are-mostly-not-helped-by-filler-tokens
Kshitij Sachan
2023-08-10T00:48:50.510Z
zEm2SkAGMXcKW75vt
> It should appear everywhere that one has variable context windows and should apply to pretty much all models or none, if that's the explanation. I would also expect the benefits to show up more broadly across benchmarks, rather than affect a very few dramatically. It seems plausible to me that you'd see some sort o...
2023-08-10T20:43:35.567Z
2
cD2RGLSNhdtq3qWri
oSZ2xTxEMZh9f3Yaz
LLMs are (mostly) not helped by filler tokens
llms-are-mostly-not-helped-by-filler-tokens
https://www.lesswrong.com/posts/oSZ2xTxEMZh9f3Yaz/llms-are-mostly-not-helped-by-filler-tokens
Kshitij Sachan
2023-08-10T00:48:50.510Z
dPCH8TqZkBMirzRb3
Yep, agreed. But it's worth noting that other hypotheses for why this happens still have to explain why no other model has this behavior (including GPT3.5). So I'm not sure we take that much additional surprise from the capability having sudden onset.
2023-08-11T00:25:01.836Z
2
Mheqaypr9ajae6H8u
oSZ2xTxEMZh9f3Yaz
LLMs are (mostly) not helped by filler tokens
llms-are-mostly-not-helped-by-filler-tokens
https://www.lesswrong.com/posts/oSZ2xTxEMZh9f3Yaz/llms-are-mostly-not-helped-by-filler-tokens
Kshitij Sachan
2023-08-10T00:48:50.510Z
4rSdFrvyrwsxebwLt
(Aside: Why do you think GPT3.5-turbo (most recent release) isn't MOE? I'd guess that if GPT4 is MOE, GPT3.5 is also.)
2023-08-12T01:00:25.605Z
3
Cn8i7KpCaGmc9uoxB
oSZ2xTxEMZh9f3Yaz
LLMs are (mostly) not helped by filler tokens
llms-are-mostly-not-helped-by-filler-tokens
https://www.lesswrong.com/posts/oSZ2xTxEMZh9f3Yaz/llms-are-mostly-not-helped-by-filler-tokens
Kshitij Sachan
2023-08-10T00:48:50.510Z
5w4iYdP3A5yXN9qbu
Driving optimally might be AGI complete, but you don't necessarily need to drive optimally, it should be sufficient to beat typical human drivers for safety (this will depend on the regulatory regime of course). It might be that the occurrences where avoiding an accident is AGI complete are lower per mile than the cas...
2023-08-15T04:41:17.589Z
5
wgh28sKKWrJafcygF
A5YQqDEz9QKGAZvn6
AGI is easier than robotaxis
agi-is-easier-than-robotaxis
https://www.lesswrong.com/posts/A5YQqDEz9QKGAZvn6/agi-is-easier-than-robotaxis
Daniel Kokotajlo
2023-08-13T17:00:29.901Z
7fNRMke9Gc4QghYyf
After spending a while thinking about interpretability, my current stance is: - Let's define *Mechanistic interpretability* as "A subfield of interpretability that uses bottom-up approaches, generally by corresponding low-level components such as circuits or neurons to components of human-understandable algorithms and...
2023-08-18T16:25:08.823Z
63
null
LNA8mubrByG7SFacm
Against Almost Every Theory of Impact of Interpretability
against-almost-every-theory-of-impact-of-interpretability-1
https://www.lesswrong.com/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1
Charbel-Raphaël
2023-08-17T18:44:41.099Z
wmxZwAHZrnyaytpnr
For mechanistic interpretabilty, very ambitious success looks something like: - Have some decomposition of the model or the behavior of the model into parts. - For any given randomly selected part, you should almost always be able build up a very good understanding of this part in isolation. - By "very good" I mean...
2023-08-18T16:42:27.935Z
34
7fNRMke9Gc4QghYyf
LNA8mubrByG7SFacm
Against Almost Every Theory of Impact of Interpretability
against-almost-every-theory-of-impact-of-interpretability-1
https://www.lesswrong.com/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1
Charbel-Raphaël
2023-08-17T18:44:41.099Z
uH2XGLsEz4gghfqjq
2023-08-18T16:46:40.368Z
0
7fNRMke9Gc4QghYyf
LNA8mubrByG7SFacm
Against Almost Every Theory of Impact of Interpretability
against-almost-every-theory-of-impact-of-interpretability-1
https://www.lesswrong.com/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1
Charbel-Raphaël
2023-08-17T18:44:41.099Z
Npzm3cfgQxtyha84z
The main reason why I think mechanistic interpretability is very far from ambitious success is that current _numbers_ are extremely bad and what people explain is extremely cherry picked. Like people's explanations typically result in performance which is worse than that of much, much tinier models even though heavy ch...
2023-08-18T16:49:00.923Z
26
wmxZwAHZrnyaytpnr
LNA8mubrByG7SFacm
Against Almost Every Theory of Impact of Interpretability
against-almost-every-theory-of-impact-of-interpretability-1
https://www.lesswrong.com/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1
Charbel-Raphaël
2023-08-17T18:44:41.099Z
vyGGXuEiyeJJQfkcX
I believe that the section on decision theory is somewhat misguided in several ways. Specifically, I don't perceive FDT as a critical error. However, I should note that I'm not an expert on decision theory, so please consider my opinion with a grain of salt. (I generally agree with the statements "Eliezer is excessive...
2023-08-27T02:23:45.230Z
11
null
TjyyngWFYvQWPpNNj
Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong
eliezer-yudkowsky-is-frequently-confidently-egregiously
https://www.lesswrong.com/posts/TjyyngWFYvQWPpNNj/eliezer-yudkowsky-is-frequently-confidently-egregiously
Bentham's Bulldog
2023-08-27T01:06:37.355Z
tXt2HCkutcMLwFWCA
> And as Schwarz points out, in the twin case, you'll get less utility by following FDT--you don't always want to be a FDTist. I can't seem to find this in the linked blog post. (I see discussion of the twin case, but not a case where you get less utility from precommiting to follow FDT at the start of time.) > I fi...
2023-08-27T02:41:48.718Z
4
JEHkQ9WFfyfoXgmKn
TjyyngWFYvQWPpNNj
Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong
eliezer-yudkowsky-is-frequently-confidently-egregiously
https://www.lesswrong.com/posts/TjyyngWFYvQWPpNNj/eliezer-yudkowsky-is-frequently-confidently-egregiously
Bentham's Bulldog
2023-08-27T01:06:37.355Z
j7sEkyoRcTLE8oGFY
> Sometimes rationality will be bad for you--if there's a demon who tortures all rational people, for example At some point this gets down to semantics. I think a reasonable question to answer is "what decision rule should be chosen by an engineer who wants to build an agent scoring the most utility across its lifetim...
2023-08-27T02:49:45.558Z
14
JEHkQ9WFfyfoXgmKn
TjyyngWFYvQWPpNNj
Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong
eliezer-yudkowsky-is-frequently-confidently-egregiously
https://www.lesswrong.com/posts/TjyyngWFYvQWPpNNj/eliezer-yudkowsky-is-frequently-confidently-egregiously
Bentham's Bulldog
2023-08-27T01:06:37.355Z
ifdAcBtAYTq8f7zk8
Cool, so you maybe agree that CDT agents would want to self modify into something like FDT agents (if they could). Then I suppose we might just disagree on the semantics behind the word rational. (Note that CDT agents don't exactly self-modify into FDT agents, just something close.)
2023-08-27T02:51:31.153Z
3
EsHSTeruQECfACBYA
TjyyngWFYvQWPpNNj
Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong
eliezer-yudkowsky-is-frequently-confidently-egregiously
https://www.lesswrong.com/posts/TjyyngWFYvQWPpNNj/eliezer-yudkowsky-is-frequently-confidently-egregiously
Bentham's Bulldog
2023-08-27T01:06:37.355Z
ECHqDr4sfbutiDRjs
As far as I can tell, the procreation case isn't defined well enough in Schwarz for me to enage with it. In particular, in what exact way are the decision of my father and I entangled? (Just saying the father follows FDT isn't enough.) But, I do think there is going to be a case basically like this where I bite the bul...
2023-08-27T02:54:36.411Z
4
EsHSTeruQECfACBYA
TjyyngWFYvQWPpNNj
Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong
eliezer-yudkowsky-is-frequently-confidently-egregiously
https://www.lesswrong.com/posts/TjyyngWFYvQWPpNNj/eliezer-yudkowsky-is-frequently-confidently-egregiously
Bentham's Bulldog
2023-08-27T01:06:37.355Z
hwndp2MyvN8u9aRCe
My cached state is that the A/H100 vs 4090 price gap is mostly price discrimination rather than a large difference in the actual manufacturing cost. I think price discrimination is very common in computing hardware and nvidia happens to have a quite powerful monopoly right now for various reasons. Note that [4090s te...
2023-08-31T03:45:44.524Z
20
null
nXcHe7t4rqHMjhzau
Report on Frontier Model Training
report-on-frontier-model-training
https://docs.google.com/document/d/1TsYkDYtV6BKiCN9PAOirRAy3TrNDu2XncUZ5UZfaAKA/edit?usp=sharing
YafahEdelman
2023-08-30T20:02:46.317Z
pPBNCLGvJ5FCo63Lr
> Also, such a regulation seems like it would be illegal in the US. While the government does have wide latitude to regulate commercial activities that impact multiple states, this is rather specifically a proposal that would regulate all activity (even models that never get released!). I'm unaware of any precedent for...
2023-08-31T03:59:16.751Z
10
QX2rykLvone6rhjxH
unwRBRQivd2LYRfuP
Introducing the Center for AI Policy (& we're hiring!)
introducing-the-center-for-ai-policy-and-we-re-hiring
https://www.aipolicy.us/blog/hiring
Thomas Larsen
2023-08-28T21:17:11.703Z
xBPpbLsgfMuee72WY
> So far, I'm confident that our proposals will not impede the vast majority of AI developers, but if we end up receiving feedback that this isn't true, we'll either rethink our proposals or remove this claim from our advocacy efforts. Also, as stated in a comment below: It seems to me that for AI regulation to have ...
2023-08-31T04:26:28.457Z
7
cfShuRQC88EuojNP2
unwRBRQivd2LYRfuP
Introducing the Center for AI Policy (& we're hiring!)
introducing-the-center-for-ai-policy-and-we-re-hiring
https://www.aipolicy.us/blog/hiring
Thomas Larsen
2023-08-28T21:17:11.703Z
hxJugdweDjEkwvhHf
Presumably, your hope for avoiding this flop threshold becoming burdensome soon is: > As AI advances and dangerous systems become increasingly easy to develop at a fraction of the current cost, the definition of frontier AI will need to change. This is why we need an expert-led administration that can adapt the criter...
2023-08-31T04:30:03.746Z
4
xBPpbLsgfMuee72WY
unwRBRQivd2LYRfuP
Introducing the Center for AI Policy (& we're hiring!)
introducing-the-center-for-ai-policy-and-we-re-hiring
https://www.aipolicy.us/blog/hiring
Thomas Larsen
2023-08-28T21:17:11.703Z
FG4YXDAL7sATcbwrg
> I also think 70% on MMLU is extremely low, since that's about the level of ChatGPT 3.5, and that system is very far from posing a risk of catastrophe. Very far in qualitative capability or very far in effective flop? I agree on the qualitative capability, but disagree on the effective flop. It seems quite plausi...
2023-08-31T04:32:59.450Z
4
LsrzDKMzhyykaekvv
unwRBRQivd2LYRfuP
Introducing the Center for AI Policy (& we're hiring!)
introducing-the-center-for-ai-policy-and-we-re-hiring
https://www.aipolicy.us/blog/hiring
Thomas Larsen
2023-08-28T21:17:11.703Z
FSDEb7qnKHFGeujgQ
I'd guess that the best would be to define a specific flop or dollar threshold and have this steadily decrease over time at a conservative rate (e.g. 2x lower threshold each year).
2023-08-31T17:32:50.086Z
6
xBPpbLsgfMuee72WY
unwRBRQivd2LYRfuP
Introducing the Center for AI Policy (& we're hiring!)
introducing-the-center-for-ai-policy-and-we-re-hiring
https://www.aipolicy.us/blog/hiring
Thomas Larsen
2023-08-28T21:17:11.703Z
cFFuT5BqfGogyGhij
I'm extremely confused how extrapolating out the curve can possibly get you 1000x improvement in FLOP/$ within 7 years. What happens if you backtest this autoregressive model? Can you show the plot for this fit? (I can't seem to see the image in this post, maybe that contains the fit?)
2023-10-06T17:47:38.508Z
0
null
gLJP2sBqXDsQWLAgy
Super-Exponential versus Exponential Growth in Compute Price-Performance
super-exponential-versus-exponential-growth-in-compute-price
https://www.lesswrong.com/posts/gLJP2sBqXDsQWLAgy/super-exponential-versus-exponential-growth-in-compute-price
moridinamael
2023-10-06T16:23:56.714Z
PhfDvfu9N27FHkoYt
> - A capabilities evaluation is defined as “a model evaluation designed to test whether a model could do some task if it were trying to. ... > - A safety evaluation is defined as “a model evaluation designed to test under what circumstances a model would actually try to do some task. ... I propose changing the term f...
2023-10-14T21:02:30.075Z
25
null
mcnWZBnbeDz7KKtjJ
RSPs are pauses done right
rsps-are-pauses-done-right
https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right
evhub
2023-10-14T04:06:02.709Z
nAcCWYX2RioQzWoTM
I sometimes refer to capability based arguments as *control arguments*. Then, we can name two lines of defense: - The control line of defense: Would the AI succeed at causing bad outcomes if it tried? - The propensity line of defense: Would the AI try to cause bad outcomes? It's possible to develop techniques which ...
2023-10-14T21:25:50.436Z
17
PhfDvfu9N27FHkoYt
mcnWZBnbeDz7KKtjJ
RSPs are pauses done right
rsps-are-pauses-done-right
https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right
evhub
2023-10-14T04:06:02.709Z
FKjrEhCo6nybPogXL
An important thing to emphasize with control arguments is that it seems quite unlikely that control arguments can be made workable for very superhuman models. (At least for the notion of "control arguments" which can be readily assessed with non-insane capability evaluations.)
2023-10-15T04:07:36.363Z
9
nAcCWYX2RioQzWoTM
mcnWZBnbeDz7KKtjJ
RSPs are pauses done right
rsps-are-pauses-done-right
https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right
evhub
2023-10-14T04:06:02.709Z
3KZYNAo2J2Mn3XjQM
Sure. I just mean "try to do things which result in bad outcomes from our perspective".
2023-10-16T23:42:48.394Z
2
fYNosthqFvgjRgmkA
mcnWZBnbeDz7KKtjJ
RSPs are pauses done right
rsps-are-pauses-done-right
https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right
evhub
2023-10-14T04:06:02.709Z
sjtD58PGdDXoBi3ad
It seems like there are strong reasons to expect that the post AI coalitions will look very different from the *current* world economy, though I agree that they might look like *a* world economy. For instance, imagine world GDP grows by 100x. It seems totally plausible that Google/TSMC/OpenAI revenue grows by 50x rela...
2023-10-17T03:11:11.137Z
4
vLuFmLw3DAYARrCsn
PKy8NuNPknenkDY74
Soft takeoff can still lead to decisive strategic advantage
soft-takeoff-can-still-lead-to-decisive-strategic-advantage
https://www.lesswrong.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage
Daniel Kokotajlo
2019-08-23T16:39:31.317Z
oaQrmW48xC2RPkC7b
I'm not going to respond to everything you're saying here right now. It's pretty likely I won't end up responding to everything you're saying at any point; so apologies for that. Here are some key claims I want to make: - **Serial speed is key**: Speeding up theory work (like e.g. ARC theory) by 5-10x should be quite...
2023-10-17T22:47:53.931Z
15
X8RMk5TTq7wcFtXdF
mcnWZBnbeDz7KKtjJ
RSPs are pauses done right
rsps-are-pauses-done-right
https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right
evhub
2023-10-14T04:06:02.709Z
k5qEjsg2DbjaRF76m
This seems like a good thing for labs to do[^disagree]. I'd go one step earlier and propose that labs make a clear and explicit page (on their website or similar) stating their views on the *risk* from powerful AI systems. The proposal given in this post seems somewhat more ambitious and costly than the thing I'm propo...
2023-10-18T03:37:10.556Z
58
null
6HEYbsqk35butCYTe
Labs should be explicit about why they are building AGI
labs-should-be-explicit-about-why-they-are-building-agi
https://www.lesswrong.com/posts/6HEYbsqk35butCYTe/labs-should-be-explicit-about-why-they-are-building-agi
peterbarnett
2023-10-17T21:09:20.711Z
2Xmativ6mgEenjG6H
On RSPs vs pauses, my basic take is that hardcore pauses are better than RSPs and RSPs are considerably better than weak pauses. Best: we first prevent hardware progress and stop H100 manufactoring for a bit, then we prevent AI algorithmic progress, and then we stop scaling (ideally in that order). Then, we heavily in...
2023-10-18T04:11:08.092Z
25
null
mcnWZBnbeDz7KKtjJ
RSPs are pauses done right
rsps-are-pauses-done-right
https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right
evhub
2023-10-14T04:06:02.709Z
WkBAQ3zmiYHkGrncj
I happen to think that the Anthropic RSP is fine for what it is, but it just doesn't actually make any interesting claims yet. The key thing is that they're committing to actually having an ASL-4 criteria and safety argument in the future. From my perspective, the Anthropic RSP effectively is an outline for the sort of...
2023-10-18T04:17:53.484Z
25
ACE9W5FzFaixtd5uZ
mcnWZBnbeDz7KKtjJ
RSPs are pauses done right
rsps-are-pauses-done-right
https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right
evhub
2023-10-14T04:06:02.709Z
fDxZuFnoAisyovqNP
> - **"So coordination to do better than this would be great".** > - I'd be curious to know what you'd want to aim for here - both in a mostly ideal world, and what seems most expedient. As far as the ideal, I happened to write something about in [another comment](https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rs...
2023-10-18T21:33:26.175Z
10
LG3EGzT4g2XKv4sZu
mcnWZBnbeDz7KKtjJ
RSPs are pauses done right
rsps-are-pauses-done-right
https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right
evhub
2023-10-14T04:06:02.709Z
kL5JtRwEvxirBRCAb
> (Oh and I edited my previous comment for clarity: I guess you were disagreeing with my clumsily misleading wording, rather than what I meant(??)) Corresponding comment text: > This makes sense, but seems to rely on the human spending most of their time tackling well-defined but non-trivial problems where an AI doesn...
2023-10-19T16:23:25.900Z
4
8yrnmkSrqbn9rGqGy
mcnWZBnbeDz7KKtjJ
RSPs are pauses done right
rsps-are-pauses-done-right
https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right
evhub
2023-10-14T04:06:02.709Z
7bTfJCYjNrcy6rgrq
Any reason to protest scaling instead of hardware or algorithmic development? (IDK if a comment on this post is the best place to say this, but I couldn't think of a better place.) I'd probably be in favor of slower scaling at current margins, but I don't feel super confident in this. However, I'm strongly in favor of...
2023-10-20T18:28:52.283Z
23
null
abBtKF857Ejsgg9ab
TOMORROW: the largest AI Safety protest ever!
tomorrow-the-largest-ai-safety-protest-ever
https://www.lesswrong.com/posts/abBtKF857Ejsgg9ab/tomorrow-the-largest-ai-safety-protest-ever
Holly_Elmore
2023-10-20T18:15:18.276Z
G2cGSSQ6qkgzwPApH
> The strongest critique of developmental interpretability we know is the following: while it is established that phase transitions exist in neural network training, it is not yet clear how common they are, and whether they make a good target for alignment. Is it established that phase transitions exist in the trainin...
2023-10-22T17:06:33.519Z
29
null
nN7bHuHZYaWv9RDJL
Announcing Timaeus
announcing-timaeus
https://www.lesswrong.com/posts/nN7bHuHZYaWv9RDJL/announcing-timaeus
Jesse Hoogland
2023-10-22T11:59:03.938Z
8ye3KboEWFhb6uGvt
More generally, I wish that when people used the term "phase transition", they clarified whether they meant "s-shaped loss curves" or some more precise notion. Often, people are making a non-mechanistic claim when they say "phase transition" (we observed a loss curve with a s-shape), but there are also mechanistic clai...
2023-10-22T18:03:17.477Z
20
G2cGSSQ6qkgzwPApH
nN7bHuHZYaWv9RDJL
Announcing Timaeus
announcing-timaeus
https://www.lesswrong.com/posts/nN7bHuHZYaWv9RDJL/announcing-timaeus
Jesse Hoogland
2023-10-22T11:59:03.938Z
LiT7pk9GitcqswAeP
Thanks for the detailed response! So, to check my understanding: The toy cases discussed in [Multi-Component Learning and S-Curves](https://www.lesswrong.com/posts/RKDQCB6smLWgs2Mhr/multi-component-learning-and-s-curves) are clearly *dynamical phase transitions*. (It's easy to establish *dynamical phase transitions* ...
2023-10-23T01:04:19.171Z
7
RRT6ZcGvRrt4bRiPZ
nN7bHuHZYaWv9RDJL
Announcing Timaeus
announcing-timaeus
https://www.lesswrong.com/posts/nN7bHuHZYaWv9RDJL/announcing-timaeus
Jesse Hoogland
2023-10-22T11:59:03.938Z
mSpR9jhmMgW9CjPMD
I think this post is quite misleading and unnecessarily adversarial. ~~I'm not sure if I want to engage futher, I might give examples of this later.~~ (See examples below) (COI: I often talk to and am friendly with many of the groups criticized in this post.)
2023-10-24T15:30:51.365Z
105
null
qtTW6BFrxWw4iHcjf
Lying is Cowardice, not Strategy
lying-is-cowardice-not-strategy
https://cognition.cafe/p/lying-is-cowardice-not-strategy
Connor Leahy
2023-10-24T13:24:25.450Z
EvpramjkJxzsHk7nx
Examples: - It seems to conflate scaling pauses (which aren't clearly very useful) with pausing all AI related progress (hardware, algorithmic development, software). Many people think that scaling pauses aren't clearly that useful due to overhang issues, but hardware pauses are pretty great. However, hardware develop...
2023-10-24T15:53:56.567Z
135
mSpR9jhmMgW9CjPMD
qtTW6BFrxWw4iHcjf
Lying is Cowardice, not Strategy
lying-is-cowardice-not-strategy
https://cognition.cafe/p/lying-is-cowardice-not-strategy
Connor Leahy
2023-10-24T13:24:25.450Z
JNDTaAx55LXoehHxs
- The title doesn't seem supported by the content. The post doesn't argue that people are being cowardly or aren't being strategic (it does argue they are incorrect and seeking power in a immoral way, but this is different).
2023-10-24T16:35:37.804Z
29
EvpramjkJxzsHk7nx
qtTW6BFrxWw4iHcjf
Lying is Cowardice, not Strategy
lying-is-cowardice-not-strategy
https://cognition.cafe/p/lying-is-cowardice-not-strategy
Connor Leahy
2023-10-24T13:24:25.450Z
GwnFZf6o6JoxdxcYq
As an aside, I think it's good for people and organizations (especially AI labs) to clearly state their views on AI risk, see e.g., [my comment here](https://www.lesswrong.com/posts/6HEYbsqk35butCYTe/labs-should-be-explicit-about-why-they-are-building-agi?commentId=k5qEjsg2DbjaRF76m). So I agree with this aspect of the...
2023-10-24T17:46:31.468Z
12
mSpR9jhmMgW9CjPMD
qtTW6BFrxWw4iHcjf
Lying is Cowardice, not Strategy
lying-is-cowardice-not-strategy
https://cognition.cafe/p/lying-is-cowardice-not-strategy
Connor Leahy
2023-10-24T13:24:25.450Z
ttwqQEifv4tzgaEAD
(Random note: I think this post would get more attention if the title more clearly communicated what this post is about. Maybe something like "Who is Harry Potter? Some predictions about when unlearning will fail.". Feel free to totally ignore this comment.)
2023-10-24T17:55:56.544Z
12
null
B4vgbeXMGxEnEwY8d
Who is Harry Potter? Some predictions.
who-is-harry-potter-some-predictions
https://www.lesswrong.com/posts/B4vgbeXMGxEnEwY8d/who-is-harry-potter-some-predictions
Donald Hobson
2023-10-24T16:14:17.860Z
AqmBErPdJpLQPqCWN
Thanks for the response, one quick clarification in case this isn't clear. On: > > For instance, I think that well implemented RSPs required by a regulatory agency can reduce risk to <5% (partially by stopping in worlds where this appears needed). > > I assume this would be a crux with Connor/Gabe (and I think I'm at...
2023-10-24T18:05:36.912Z
10
SJW9LP2786phKksXG
qtTW6BFrxWw4iHcjf
Lying is Cowardice, not Strategy
lying-is-cowardice-not-strategy
https://cognition.cafe/p/lying-is-cowardice-not-strategy
Connor Leahy
2023-10-24T13:24:25.450Z
nMdmee2danZ9bGwj9
> > Calling something a "pragmatic middle ground" doesn't imply that there aren't better options > I think the objection here is more about what is loosely suggested by the language used, and what is not said - not about logical implications. What is loosely suggested by the ARC Evals language is that it's not sensibl...
2023-10-24T18:16:11.979Z
6
SJW9LP2786phKksXG
qtTW6BFrxWw4iHcjf
Lying is Cowardice, not Strategy
lying-is-cowardice-not-strategy
https://cognition.cafe/p/lying-is-cowardice-not-strategy
Connor Leahy
2023-10-24T13:24:25.450Z
Z32iFmtwonxvtmWcn
Sure, but seems reasonably likely that it would be hard to get that much international coordination.
2023-10-25T20:52:38.329Z
4
ovtAqi5aju7tT3bhs
qtTW6BFrxWw4iHcjf
Lying is Cowardice, not Strategy
lying-is-cowardice-not-strategy
https://cognition.cafe/p/lying-is-cowardice-not-strategy
Connor Leahy
2023-10-24T13:24:25.450Z
zWaHmn8nGnomSG9jF
Why do you think RSPs don't put the burden of proof on labs to show that scaling is safe? > I think the RSP frame is wrong, and I don't want regulators to use it as a building block. My understanding is that labs are refusing to adopt an evals regime in which the burden of proof is on labs to show that scaling is safe...
2023-10-29T18:09:18.025Z
10
DyyGinph6hbwiHHo5
Np5Q3Mhz2AiPtejGN
We're Not Ready: thoughts on "pausing" and responsible scaling policies
we-re-not-ready-thoughts-on-pausing-and-responsible-scaling-4
https://www.lesswrong.com/posts/Np5Q3Mhz2AiPtejGN/we-re-not-ready-thoughts-on-pausing-and-responsible-scaling-4
HoldenKarnofsky
2023-10-27T15:19:33.757Z
b3dvHdXv8J4uwCacX
I think Anthropic's write up and current position is considerably better than OpenAI's because they actually have a concrete policy with evals and commitments. Of course, when OpenAI releases an RDP my position might change considerably.
2023-11-01T00:38:33.623Z
22
null
ms3x8ngwTfep7jBue
Thoughts on the AI Safety Summit company policy requests and responses
thoughts-on-the-ai-safety-summit-company-policy-requests-and
https://www.lesswrong.com/posts/ms3x8ngwTfep7jBue/thoughts-on-the-ai-safety-summit-company-policy-requests-and
So8res
2023-10-31T23:54:09.566Z
mYH8aSsdBEAupFtQN
I think I basically agree with "current models don't seem very helpful for bioterror" and as far as I can tell, "current papers don't seem to do the controlled experiments needed to *legibly* learn that much either way about the usefulness of current models" (though generically evaluating bio capabilities prior to actu...
2023-11-02T18:56:41.093Z
55
null
ztXsmnSdrejpfmvn7
Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk
propaganda-or-science-a-look-at-open-source-ai-and
https://www.lesswrong.com/posts/ztXsmnSdrejpfmvn7/propaganda-or-science-a-look-at-open-source-ai-and
1a3orn
2023-11-02T18:20:29.569Z
JL5Z8Ahzbgj8aPqaB
> - Sadly, I’m not confident the answer is “yes,” and this is the main reason I only ~50% endorse this post. Two reasons I’m worried evaluators might fail: > - [...] > - The world might change in ways that enable new threat models after camelidAI is open-sourced. For example, suppose that camelidAI + GPT-SoTA isn...
2023-11-03T16:56:54.536Z
12
null
WLYBy5Cus4oRFY3mu
Thoughts on open source AI
thoughts-on-open-source-ai
https://www.lesswrong.com/posts/WLYBy5Cus4oRFY3mu/thoughts-on-open-source-ai
Sam Marks
2023-11-03T15:35:42.067Z

Ryan Greenblatt — LessWrong Writing

All of Ryan Greenblatt's public writing on LessWrong, scraped from the LessWrong GraphQL API.

Coverage: 2019-08-23 → 2026-03-26.

Configs

Config Rows Description
posts 66 Long-form posts authored (or co-authored) by Ryan
shortforms 50 Shortform posts
comments 1,123 Ryan's comments on others' (and his own) posts, with post context
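Assuming the dataset is published on the Hugging Face Hub, a config is selected by name when loading. The sketch below records the config/row mapping from the table above; the commented-out `load_dataset` call shows the intended usage, with a placeholder repo id (`your-org/ryan-greenblatt-lesswrong` is illustrative, not the real path):

```python
# The three configs mirror the table above (config name -> row count).
CONFIGS = {"posts": 66, "shortforms": 50, "comments": 1123}

# With the `datasets` library, a single config would be loaded as, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("your-org/ryan-greenblatt-lesswrong", "comments")
# (the repo id above is a placeholder).

total_rows = sum(CONFIGS.values())  # 1239 rows across all configs
```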

Schemas

posts

  • id, title, slug, url, body (markdown)
  • posted_at (ISO 8601), karma, word_count, comment_count
  • coauthors (list of usernames)

shortforms

  • id, body, posted_at, karma, word_count, comment_count

comments

  • id, body, posted_at, karma, parent_comment_id
  • Post context: post_id, post_title, post_slug, post_url, post_author, post_posted_at
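To illustrate how the `comments` schema fits together, here is a minimal sketch that groups comments into threads via `parent_comment_id` (top-level comments have a null parent, as in the preview rows above). The records here are hypothetical samples, not actual dataset rows:

```python
from collections import defaultdict

# Hypothetical records shaped like the comments config
# (real rows also carry body, posted_at, karma, and post context fields).
comments = [
    {"id": "c1", "parent_comment_id": None, "post_id": "p1"},
    {"id": "c2", "parent_comment_id": "c1", "post_id": "p1"},
    {"id": "c3", "parent_comment_id": None, "post_id": "p1"},
]

# Index replies by their parent so each thread can be walked top-down.
children = defaultdict(list)
top_level = []
for c in comments:
    if c["parent_comment_id"] is None:
        top_level.append(c)
    else:
        children[c["parent_comment_id"]].append(c)

def thread_size(comment):
    """Count a comment plus all of its descendants."""
    return 1 + sum(thread_size(r) for r in children[comment["id"]])

sizes = {c["id"]: thread_size(c) for c in top_level}  # {"c1": 2, "c3": 1}
```

The same pattern, applied per `post_id`, reconstructs full comment threads from the flat rows.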

Source

Scraped via the LessWrong GraphQL API. Cross-posted Alignment Forum content is included (it lives on the same backend). Content is also publicly available at the source URLs.
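The scraping code is not included here, but a request to the LessWrong GraphQL endpoint looks roughly like the sketch below. This is illustrative only: the query shape and field names (`comments`, `terms`, `baseScore`, `contents { markdown }`) are assumptions about the ForumMagnum schema and should be verified against the live API before use. The sketch only builds the POST body; it does not perform a network request:

```python
import json

# Hypothetical GraphQL query for one page of a user's comments.
# Field names are assumptions about the LessWrong (ForumMagnum) schema;
# check them against the live /graphql endpoint before relying on them.
QUERY = """
query ($userId: String, $limit: Int) {
  comments(input: {terms: {view: "userComments", userId: $userId, limit: $limit}}) {
    results {
      _id
      postedAt
      baseScore
      contents { markdown }
    }
  }
}
"""

def build_payload(user_id: str, limit: int = 50) -> str:
    """Serialize the JSON body that would be POSTed to the GraphQL endpoint."""
    return json.dumps({
        "query": QUERY,
        "variables": {"userId": user_id, "limit": limit},
    })

payload = build_payload("example-user-id")
```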

License

The text is © Ryan Greenblatt and his coauthors. This dataset packages public LessWrong writing under CC BY 4.0, matching the LessWrong default license. Coauthored posts retain their respective coauthors' rights.
