Dataset Viewer

Columns

  • prompt: string, lengths 324 to 9.71k
  • target: string, lengths 323 to 4.03k
  • meta_json: string, lengths 147 to 171
  • format: string, 1 distinct value ("dialogue")

Example rows

Two representative rows (one per sub-format), with the prompt and target fields truncated as in the preview:

  • ryan_self_thread
    prompt: Below is an earlier comment by Ryan Greenblatt in a LessWrong thread. Write Ryan's follow-up comment in the same thread (a clarification, self-reply, or response to an unseen interlocutor). Earlier Ryan comment: > It doesn't seem unreasonable to me to suggest that literally saving billions of lives is worth pursuing e...
    target: To be clear, I agree there are reasonable values which result in someone thinking accelerating AI now is good and values+beliefs which result in thinking a pause wouldn't good in likely circumstances. And I don't think cryonics makes much of a difference to the bottom line. (I think ultra low cost cryonics might make ...
    meta_json: {"comment_id": "23f8LkTcdbcp3eE58", "format": "dialogue", "parent_id": "qCQ2joMmb4fAgz4oL", "parent_kind": "ryan_self_thread", "post_id": "h45ngW5guruD7tS4b"}
    format: dialogue
  • external_post_body
    prompt: Below is a LessWrong post by Andrew Mack titled 'Mechanistically Eliciting Latent Behaviors in Language Models'. Post body (possibly truncated): Produced as part of the MATS Winter 2024 program, under the mentorship of Alex Turner (TurnTrout). TL,DR: I introduce a method for eliciting latent behaviors in language mod...
    target: Have you compared this method (finding vectors that change downstream activations as much as possible based on my understanding) with just using random vectors? (I didn't see this in the post, but I might have just missed this.) In particular, does that yield qualitatively similar results? Naively, I would expect tha...
    meta_json: {"comment_id": "mDySdqJs5wxtiXd8w", "format": "dialogue", "parent_kind": "external_post_body", "post_author": "Andrew Mack", "post_id": "ioPnHKFyy4Cw2Gr2x"}
    format: dialogue

Ryan Greenblatt simulator — dialogue format

Dialogue. Two sub-formats: (a) external_post_body (preferred) — prompt includes the external LessWrong/EA Forum post body that Ryan is commenting on; (b) ryan_self_thread — prompt is Ryan's earlier comment in the same thread.
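
A minimal sketch for separating the two sub-formats after the rows have been loaded (assuming each row is a dict with the columns listed above; the parent_kind key is taken from the meta_json values visible in the example rows):

```python
import json

def sub_format(row: dict) -> str:
    """Return 'external_post_body' or 'ryan_self_thread' for a dataset row.

    Assumes `row` has the columns shown in this card; `parent_kind` is read
    from the row's meta_json field, as seen in the example rows above.
    """
    return json.loads(row["meta_json"])["parent_kind"]

# e.g. keep only the preferred external_post_body rows from a loaded split:
# external_rows = [r for r in rows if sub_format(r) == "external_post_body"]
```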

Part of the Ryan-Greenblatt-simulator SFT project (Tinker / Qwen3-8B-base). See OVERALL_PLAN.md and proposal.md in the source repo for context.

Splits

  • train: 552 rows (drawn only from the train source-unit split).
  • val: 42 rows (drawn only from the val source-unit split).

Source-level splits are made at post_id granularity (seed=0, sha256("seed:post_id")). Held-out posts NEVER appear in any training row, including via paragraph or comment derivation.
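
A minimal sketch of such a post-level split, assuming the hash is mapped to a train/val bucket by thresholding (the card only specifies seed=0 and sha256("seed:post_id"); the val fraction and the thresholding rule below are illustrative assumptions):

```python
import hashlib

def split_for_post(post_id: str, seed: int = 0, val_fraction: float = 0.07) -> str:
    """Assign a post, and every row derived from it, to 'train' or 'val'."""
    digest = hashlib.sha256(f"{seed}:{post_id}".encode("utf-8")).hexdigest()
    # Map the 256-bit hash to [0, 1) and compare against the val fraction.
    bucket = int(digest, 16) / float(1 << 256)
    return "val" if bucket < val_fraction else "train"

# Because the split is keyed on post_id, comments and paragraph chunks derived
# from a held-out post inherit its split and never leak into training rows.
```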

Source corpus

  • HF dataset abhayesian/ryan-greenblatt-lesswrong (commit fd1651c851c0a95e36d6418a9096391749c1d183): 66 posts + 1,123 comments authored by Ryan (a loading sketch follows this list).
  • For dialogue: external LessWrong / EA Forum post bodies fetched via GraphQL (373 train+val external posts, 100% fetch success).
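
A minimal sketch for pulling the pinned source corpus (the repo id and commit hash come from the bullet above; the split and column layout of that source dataset are not specified in this card, so they are just printed):

```python
from datasets import load_dataset

# Raw corpus of Ryan's LessWrong posts/comments, pinned to the commit above.
source = load_dataset(
    "abhayesian/ryan-greenblatt-lesswrong",
    revision="fd1651c851c0a95e36d6418a9096391749c1d183",
)
print(source)  # splits/columns depend on that dataset's own layout

# Note: the external post bodies that Ryan is replying to are not part of this
# corpus; per the card, they were fetched separately via the LessWrong /
# EA Forum GraphQL API.
```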

Generated by

git commit 6451797789132e79d203974a0d1d1ca76b4a5102 of the project repo (Segment 1, Phase 0).

Known caveats

  • Synthesized prompt fields (QA questions, take-comparison claims) pass an identity-leak regex filter, but this gives no guarantee against false negatives (leaks the regex fails to catch).
  • Distillation targets are LLM-generated bullets (not raw Ryan text); every bullet contains a verbatim Ryan quote.
  • Continuation targets are paragraph-aligned chunks, capped at ~1500 cl100k tokens (measured as in the sketch after this list).
  • Co-authored posts: included in continuation/distillation/dialogue; excluded from QA and take-comparison targets.
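
A minimal sketch of how the ~1500-token cap can be measured (the card names the cl100k tokenizer; the exact chunking and truncation logic is not specified here, so this only checks a chunk's length):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the tokenizer the cap refers to

def within_cap(chunk: str, cap: int = 1500) -> bool:
    """Check whether a paragraph-aligned chunk fits under the approximate cap."""
    return len(enc.encode(chunk)) <= cap
```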