Preview of the prompt rows (long text fields elided):

| prompt_id | source_kind | source_id | post_title | n_words_gold |
|---|---|---|---|---|
| sp_000 | post | Rdwui3wHxCeKb7feK | Getting 50% (SoTA) on ARC-AGI with GPT-4o | 300 |
| sp_001 | post | uRdJio8pnTqHpWa4t | AIs will greatly change engineering in AI companies well before AGI | 300 |
| sp_004 | post | QpzTmFLXMJcdRkPLZ | Iterated Development and Study of Schemers (IDSS) | 300 |
| sp_005 | post | njAZwT8nkHnjipJku | Alignment Faking in Large Language Models | 300 |
| sp_006 | post | 7C4KJot4aN8ieEDoz | Will alignment-faking Claude accept a deal to reveal its misalignment? | 300 |
| sp_007 | post | cJQZAueoPC6aTncKK | AIs at the current capability level may be important for future safety work | 300 |
| sp_008 | post | abmzgwfJA9acBoFEX | Notes on countermeasures for exploration hacking (aka sandbagging) | 300 |
| sp_009 | post | rf66R4YsrCHgWx9RG | Preventing model exfiltration with upload limits | 300 |
| sp_010 | post | rRbDNQLfihiHbXytf | Distinguish between inference scaling and "larger tasks use more compute" | 300 |
| sp_011 | post | wwshEdNhwwT4r9RQN | 7+ tractable directions in AI control | 300 |
| sp_012 | comment | mouqe5fkSFhrwfPHB | Training AI to do alignment research we don’t already know how to do | 300 |
| sp_016 | comment | w2GyGdXe3nvEeZFac | So You Want To Make Marginal Progress... | 300 |
| sp_017 | comment | hHi98JKBrpg4Y2CWW | What AI companies should do: Some rough ideas | 165 |
| sp_018 | comment | xhB8JRFistvm2ehQQ | Modifying LLM Beliefs with Synthetic Document Finetuning | 179 |
| sp_021 | comment | fC6GLXJ9pwqDaGqrR | So You Want To Make Marginal Progress... | 275 |
| sp_025 | comment | GiCyswWvngrsaHk2c | Dialogue introduction to Singular Learning Theory | 300 |
| sp_026 | comment | BJDBi9KmQFocLdHRS | Sycophancy to subterfuge: Investigating reward tampering in large language models | 300 |
| sp_027 | comment | kehHqCAwsFQYXhcya | Dario Amodei — Machines of Loving Grace | 252 |
| sp_028 | comment | 7XmC9FvzFdJrKhprv | Catastrophic sabotage as a major threat model for human-level AI systems | 226 |
| sp_029 | comment | rGgkBSk84DjzoYqsG | DeepMind's "Frontier Safety Framework" is weak and unambitious | 229 |
| sp_031 | comment | k5qEjsg2DbjaRF76m | Labs should be explicit about why they are building AGI | 300 |
| sp_036 | comment | wv7JqPeZkAiYmoLRy | Sycophancy to subterfuge: Investigating reward tampering in large language models | 300 |
| sp_037 | comment | zHvMGBqLKiFSYJqCa | Sycophancy to subterfuge: Investigating reward tampering in large language models | 300 |
| sp_038 | comment | gmPmwGsShenPWgwqP | Sycophancy to subterfuge: Investigating reward tampering in large language models | 300 |
| sp_040 | comment | SYLh8Nbt4jhzmfGgo | Dialogue introduction to Singular Learning Theory | 300 |
# Ryan Greenblatt — style-preference prompts v1 (LOCKED)

28 heldout-derived prompts for the style-preference eval. Each prompt has:

- a NEUTRAL prompt body (no Ryan identity leak) — fed to base or finetuned models for continuation
- the gold continuation (Ryan's actual heldout text following the opening)
- a fixed "imitate Ryan" system prompt + 2 train-split few-shot exemplars (`prompted_system`) used for the prompted-base baseline
This is a prompt-only dataset. Completions vary by model and are NOT included; segment-6 SFT runs will produce their own completions and judge them against the gold.
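As a concrete illustration, here is a minimal sketch of how the two prompt conditions can be assembled from a single row. The exact message layout for the prompted-base baseline (its system prompt paired with the same neutral prompt as the user turn) is an assumption; only the field names come from the schema below.

```python
# Minimal sketch of the two evaluation conditions, assuming a chat-style API.
# The message layout is an assumption; the field names are from the schema.

def base_or_sft_messages(row: dict) -> list[dict]:
    # Base / finetuned models see only the neutral prompt (no Ryan identity leak).
    return [{"role": "user", "content": row["neutral_prompt"]}]

def prompted_baseline_messages(row: dict) -> list[dict]:
    # Prompted-base baseline: the fixed "imitate Ryan" system prompt (with the
    # two train-split few-shot exemplars inline) plus the same neutral prompt.
    return [
        {"role": "system", "content": row["prompted_system"]},
        {"role": "user", "content": row["neutral_prompt"]},
    ]
```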
## Schema

| field | type | description |
|---|---|---|
| prompt_id | string | `sp_NNN` |
| source_kind | string | `"post"` or `"comment"` |
| source_id | string | LessWrong post id |
| post_title | string | title of the source post or commented-on post |
| opening | string | the human-readable text the model is asked to continue |
| neutral_prompt | string | the full prompt fed to base/SFT models (no Ryan identity leak) |
| prompted_system | string | system prompt for the prompted-base baseline (with 2 train-split few-shots inline) |
| gold_continuation | string | the actual heldout Ryan continuation |
| n_words_gold | int | word count of the gold continuation (165–300 in this release) |
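A quick way to load the prompts and check the card's invariants is sketched below; the repo path and split name are placeholders, not the dataset's actual identifiers.

```python
from datasets import load_dataset

# Placeholder repo id and split name; substitute the actual path of this dataset.
ds = load_dataset("your-org/ryan-style-preference-prompts-v1", split="train")

print(ds.column_names)
# Expected: prompt_id, source_kind, source_id, post_title, opening,
# neutral_prompt, prompted_system, gold_continuation, n_words_gold

assert len(ds) == 28                                  # 28 heldout-derived prompts
assert all(n >= 100 for n in ds["n_words_gold"])      # gold-length floor (see Build)
assert set(ds["source_kind"]) <= {"post", "comment"}  # two source kinds
```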
## Build

42 candidates were filtered down to 28 by a gold-length floor (≥100 words) and an opening sanity check. The heldout source is `data/splits.jsonl` (split=heldout). Few-shot exemplars are taken from the train split and held FIXED so future segments can re-use them.
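A rough sketch of that filtering step follows, assuming the heldout rows in `data/splits.jsonl` carry a `split` field and the gold text under `gold_continuation`; both field names are assumptions, and only the ≥100-word floor comes from this card.

```python
import json

candidates = []
with open("data/splits.jsonl") as f:
    for line in f:
        row = json.loads(line)
        if row.get("split") != "heldout":
            continue                              # only heldout rows are candidates
        gold = row.get("gold_continuation", "")   # assumed field name
        if len(gold.split()) >= 100:              # gold-length floor (>=100 words)
            candidates.append(row)

# Together with a manual "opening sanity" pass, this reduced 42 candidates to 28.
```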
Generated by project repo commit 92c03409f683154f117c7b81f91f48376ae2969f.