Dataset preview (cells truncated by the viewer):

| prompt (string, lengths 1.58k–7.19k) | target (string, lengths 913–8.55k) | meta_json (string, lengths 159–273) | format (string, 1 class) |
|---|---|---|---|
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy<br>Opening:<br>**Summary:** Many proposed AGI alignment procedures involve taking a pretrained **model** and training it using rewards from an **oversight... | * **Rule out exotic failures.** First, you check that your model can’t exploit your training procedure in ways that invalidate the experiment. This is probably only a concern when you’re working with very smart models which can engage in strategies like exploration hacking or gradient hacking, which are at this point... | `{"chunk_index": 0, "chunk_kind": "head", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [15, 30], "post_id": "MbWWKbyD5gLhJgfwn"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>After you’ve done this, you’ll have examples of proxy failures. In order to ... | * I think it's plausible we should actually use this evaluation at crunch time: it seems pretty plausible that we’ll want to rely on local adequacy of oversight when arguing for the safety of a deployment plan, and this kind of evaluation seems like the most reliable way to assess whether we have that property.<br>* I... | `{"chunk_index": 1, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [29, 50], "post_id": "MbWWKbyD5gLhJgfwn"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>**Appendix: how to construct proxy failures**<br>------------------------------... | Suppose your setting involves the model answering questions. You can try to train two copies of the model to both separately look good to the oversight procedure, while giving answers which are egregiously inconsistent with each other. If you can succeed at training these egregiously inconsistent policies, then you’ve ... | `{"chunk_index": 2, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [49, 78], "post_id": "MbWWKbyD5gLhJgfwn"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>Note that all of these strategies rely on us having a reasonably accurate un... | Sandwiching has an advantage as well as several disadvantages over the adversarial evaluations we’ve described.<br>The main advantage of sandwiching is that interventions to make models more helpful and honest might not work by making the oversight stronger, but by giving models a stronger inductive bias towards good beh... | `{"chunk_index": 3, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [77, 98], "post_id": "MbWWKbyD5gLhJgfwn"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>* To assess the local adequacy of oversight in a particular setting, we’ll... | In the single-agent setup, our process was:<br>* The blue team proposes an oversight procedure and then runs it many times. They then look at the empirical 1st percentile of model reward and say “our training procedure is good enough to have a 99% chance of choosing a policy that’s above this reward threshold on this p... | `{"chunk_index": 4, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [97, 104], "post_id": "MbWWKbyD5gLhJgfwn"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]<br>Opening:<br>**TL;DR**: This post discusses our recent empirical work on detecting measurement tampering and explains how we see this work fitting into the overall space of alignment research.<br>... | For the sake of simplicity, we use fixed datasets that contain exogenously generated examples with discrepancies between the ground truth and measurements, instead of training an AI to do measurement tampering. We produce a *measurement predictor* by training an LLM to predict the measurements on this fixed dataset. Fi... | `{"chunk_index": 5, "chunk_kind": "head", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [16, 29], "post_id": "inALbAqdx63KTaGgs"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>A simple example<br>----------------<br>To illustrate what we’re talking about, let’s focus on one of our datasets inspire... | We also introduce *exclusion fine-tuning* (EFT), a simple technique which is aimed at exploiting shared computation between the trusted set and the untrusted set on real positive examples (cases where the outcome actually occurs) by relying on the existence of a “diamond” latent variable which causes the joint predicti... | `{"chunk_index": 6, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [28, 52], "post_id": "inALbAqdx63KTaGgs"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>We think probing for evidence of tampering is likely to be useful in practice, but it is not a robust technique: in p... | Consistent relationships<br>------------------------<br>There can be a strong relationship within our measurements that is driven by the outcome we care about. For instance, in the Diamond in a vault setting, it’s typical for either all or none of the measurements to indicate the presence of a diamond (in the absence of tam... | `{"chunk_index": 7, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [51, 75], "post_id": "inALbAqdx63KTaGgs"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>An accurate predictor of measurements might reproduce these relationships, allowing us to take advantage of the struc... | Because oversight may be expensive, we can't apply it on every input. So rare problems might not be caught, and if these rare problems are high-stakes (i.e. a very small number of actions could have catastrophic consequences), this could be a problem.<br>Additionally, because our measurements can only be taken after an a... | `{"chunk_index": 8, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [74, 99], "post_id": "inALbAqdx63KTaGgs"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>It might also be possible to use reliable prediction to make ahead-of-time oversight quite cheap such that it could b... | Ensembling multiple variations of the best techniques<br>-----------------------------------------------------<br>In this work, we have explored the strength of techniques individually, but in practice, it might be worth ensembling variations of the best working techniques, especially if we want to be quite conservative abo... | `{"chunk_index": 9, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [98, 127], "post_id": "inALbAqdx63KTaGgs"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>(These ideas are inspired by some [submissions to the ELK challenge](https://www.alignmentforum.org/posts/zjMKpSB2Xcc... | ### General purpose inductive biases<br>Many alignment researchers seem to be interested in steering the behavior of models in cases where human supervision doesn’t suffice via developing inductive biases or using the internals of models (e.g. [activation steering](https://www.lesswrong.com/posts/5spBue2z2tw4JuDCx/steeri... | `{"chunk_index": 10, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [126, 133], "post_id": "inALbAqdx63KTaGgs"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: What's up with "Responsible Scaling Policies"?<br>Opening:<br>I am interested in talking about whether RSPs are good or bad. I feel pretty confused about it, and would appreciate going in-depth with someone on this.<br>My current feelings about RSPs are roughly s... | > An RSP specifies what level of AI capabilities an AI developer is prepared to handle safely with their current protective measures, and conditions under which it would be too dangerous to continue deploying AI systems and/or scaling up AI capabilities until protective measures improve.<br>But that doesn't really strike... | `{"chunk_index": 11, "chunk_kind": "head", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [18, 46], "post_id": "jyM7MSTvy8Qs6aZcz"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: What's up with "Responsible Scaling Policies"?<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>I agree that the term "Responsible Scaling Policy" invokes false stuff. I guess this doesn't seem that bad to me? Maybe I'm underestimat... | Oh, gotcha, some difference between binding policies and explanations.<br>I agree that RSPs make it relatively more costly to change things in a way which reduces safety. This seems like mostly a benefit from my perspective? It also seems fine for RSPs to have some sections which are like "our tenative best guess at what... | `{"chunk_index": 12, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [45, 73], "post_id": "jyM7MSTvy8Qs6aZcz"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: What's up with "Responsible Scaling Policies"?<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>Like, here is an alternative to "RSP"s. Call them "Conditional Pause Commitments" (CPC if you are into acronyms).<br>Basically, we just as... | I feel concerned about the possibility that people avoid quitting when they really should due to thoughts like "I really don't want to quit because then my influence would evaporate and I should just try influence things to be a bit better. Also, XYZ promised that stuff would be better if I just wait a bit." Like there... | `{"chunk_index": 13, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [72, 103], "post_id": "jyM7MSTvy8Qs6aZcz"}` | continuation |
| You are continuing a LessWrong post by Ryan Greenblatt.<br>Title: What's up with "Responsible Scaling Policies"?<br>Opening:<br>[Earlier portions of the post are elided.]<br>Recent paragraphs:<br>Well, I can, but I kind of expect that over the course of the next few years my opinion will end up not really having much weight left, ... | I haven't thought much about this; this does seems like a substantial concern to me.<br>I think we should probably talk about "what safety arguments will/can labs make for ASL-4+ models"?<br>Where by "safety arguments" you mean "arguments that the relevant models are safe to develop/use/deploy/distribute?"<br>Yep, subject to... | `{"chunk_index": 14, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [102, 128], "post_id": "jyM7MSTvy8Qs6aZcz"}` | continuation |
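The `meta_json` column above is a JSON-encoded string rather than a nested struct, so it needs one extra decoding step. A minimal sketch, assuming the rows are loaded with the `datasets` library (the repo id below is a placeholder, since this card does not state the dataset's HF path):

```python
import json
from datasets import load_dataset

# Placeholder id: substitute this dataset's actual HF repo path.
ds = load_dataset("<this-dataset-repo-id>", split="train")

row = ds[0]
meta = json.loads(row["meta_json"])  # e.g. {"chunk_index": 0, "chunk_kind": "head", ...}
print(meta["post_id"], meta["chunk_index"], meta["chunk_kind"], meta["para_range"])
```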
Ryan Greenblatt simulator — continuation format
Direct continuation. Prompt = post title + opening (or recent) paragraphs of a Ryan Greenblatt LessWrong post. Target = the next ≤1500 cl100k tokens of paragraph-aligned prose. Long posts are chunked into multiple overlapping rows so the full post is reachable.
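As an illustration of the paragraph-aligned chunking, here is a minimal sketch assuming greedy packing of whole paragraphs; the real pipeline also overlaps consecutive chunks, which is omitted here. Token counts use `tiktoken`'s `cl100k_base` encoding, matching the cap above:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
MAX_TOKENS = 1500  # cap on target length, per the card

def chunk_paragraphs(paragraphs: list[str], max_tokens: int = MAX_TOKENS) -> list[str]:
    """Greedily pack whole paragraphs into chunks of at most max_tokens tokens.

    Chunk boundaries always fall between paragraphs, never inside one, so a
    single paragraph longer than max_tokens becomes its own (oversized) chunk.
    """
    chunks: list[list[str]] = []
    current: list[str] = []
    current_len = 0
    for para in paragraphs:
        n = len(enc.encode(para))
        if current and current_len + n > max_tokens:
            chunks.append(current)
            current, current_len = [], 0
        current.append(para)
        current_len += n
    if current:
        chunks.append(current)
    return ["\n\n".join(c) for c in chunks]
```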
Part of the Ryan-Greenblatt-simulator SFT project (Tinker / Qwen3-8B-base).
See OVERALL_PLAN.md and proposal.md in the source repo for context.
Splits
- train: 251 rows (drawn only from the train source-unit split).
- val: 6 rows (drawn only from the val source-unit split).

Source-level splits are at `post_id` granularity (`seed=0`, `sha256("seed:post_id")`). Held-out posts NEVER appear in any training row, including via paragraph or comment derivation.
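A minimal sketch of that assignment rule. Only the `sha256("seed:post_id")` keying is taken from this card; the val fraction and the hex-prefix width below are illustrative assumptions:

```python
import hashlib

SEED = 0
VAL_FRACTION = 0.05  # illustrative; the card does not state the exact threshold

def split_for_post(post_id: str, seed: int = SEED) -> str:
    """Deterministically assign a source post (and every row derived from it) to a split."""
    digest = hashlib.sha256(f"{seed}:{post_id}".encode()).hexdigest()
    u = int(digest[:8], 16) / 16**8  # map the first 8 hex digits to [0, 1)
    return "val" if u < VAL_FRACTION else "train"

# Chunks, paragraph derivations, and comments all inherit the post's split,
# which is what keeps held-out posts out of every training row.
print(split_for_post("MbWWKbyD5gLhJgfwn"))
```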
Source corpus
- HF dataset `abhayesian/ryan-greenblatt-lesswrong` (commit `fd1651c851c0a95e36d6418a9096391749c1d183`): 66 posts + 1,123 comments authored by Ryan.
- For dialogue: external LessWrong / EA Forum post bodies fetched via GraphQL (373 train+val external posts, 100% fetch success).
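For the external post bodies, a minimal sketch of such a GraphQL fetch. LessWrong exposes a public `/graphql` endpoint; the query shape below follows the commonly documented ForumMagnum pattern, but the repo's actual fetch code may differ:

```python
import requests

QUERY = """
query GetPost($id: String) {
  post(input: {selector: {_id: $id}}) {
    result { title htmlBody }
  }
}
"""

def fetch_post_body(post_id: str) -> dict:
    # The EA Forum exposes an analogous endpoint at forum.effectivealtruism.org/graphql.
    resp = requests.post(
        "https://www.lesswrong.com/graphql",
        json={"query": QUERY, "variables": {"id": post_id}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["post"]["result"]  # {"title": ..., "htmlBody": ...}
```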
Generated by
git commit `6451797789132e79d203974a0d1d1ca76b4a5102` of the project repo (Segment 1, Phase 0).
Known caveats
- Synthesized prompt fields (QA questions, take-comparison claims) pass an identity-leak regex filter (sketched after this list), but this provides no guarantee of zero false negatives.
- Distillation targets are LLM-generated bullets (not raw Ryan text); every bullet contains a verbatim Ryan quote.
- Continuation targets are paragraph-aligned chunks, capped at ~1500 cl100k tokens.
- Co-authored posts: included in continuation/distillation/dialogue; excluded from QA and take-comparison targets.
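A minimal sketch of the kind of identity-leak regex filter mentioned in the first caveat; the project's actual pattern list is not published in this card, so the regex below is purely illustrative:

```python
import re

# Illustrative pattern only; the real filter's patterns are not given here.
IDENTITY_LEAK = re.compile(r"(?i)\b(ryan\s+greenblatt|greenblatt)\b")

def leaks_identity(text: str) -> bool:
    """Return True if a synthesized prompt field names the simulated author."""
    return bool(IDENTITY_LEAK.search(text))
```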