prompt stringlengths 1.58k 7.19k | target stringlengths 913 8.55k | meta_json stringlengths 159 273 | format stringclasses 1
value |
|---|---|---|---|
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy
Opening:
**Summary:** Many proposed AGI alignment procedures involve taking a pretrained **model** and training it using rewards from an **oversight... | * **Rule out exotic failures.** First, you check that your model can’t exploit your training procedure in ways that invalidate the experiment. This is probably only a concern when you’re working with very smart models which can engage in strategies like exploration hacking or gradient hacking, which are at this point... | {"chunk_index": 0, "chunk_kind": "head", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [15, 30], "post_id": "MbWWKbyD5gLhJgfwn"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
After you’ve done this, you’ll have examples of proxy failures. In order to ... | * I think it's plausible we should actually use this evaluation at crunch time: it seems pretty plausible that we’ll want to rely on local adequacy of oversight when arguing for the safety of a deployment plan, and this kind of evaluation seems like the most reliable way to assess whether we have that property.
* I... | {"chunk_index": 1, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [29, 50], "post_id": "MbWWKbyD5gLhJgfwn"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
**Appendix: how to construct proxy failures**
------------------------------... | Suppose your setting involves the model answering questions. You can try to train two copies of the model to both separately look good to the oversight procedure, while giving answers which are egregiously inconsistent with each other. If you can succeed at training these egregiously inconsistent policies, then you’ve ... | {"chunk_index": 2, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [49, 78], "post_id": "MbWWKbyD5gLhJgfwn"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Note that all of these strategies rely on us having a reasonably accurate un... | Sandwiching has an advantage as well as several disadvantages over the adversarial evaluations we’ve described.
The main advantage of sandwiching is that interventions to make models more helpful and honest might not work by making the oversight stronger, but by giving models a stronger inductive bias towards good beh... | {"chunk_index": 3, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [77, 98], "post_id": "MbWWKbyD5gLhJgfwn"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Meta-level adversarial evaluation of oversight techniques might allow robust measurement of their adequacy
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
* To assess the local adequacy of oversight in a particular setting, we’ll... | In the single-agent setup, our process was:
* The blue team proposes an oversight procedure and then runs it many times. They then look at the empirical 1st percentile of model reward and say “our training procedure is good enough to have a 99% chance of choosing a policy that’s above this reward threshold on this p... | {"chunk_index": 4, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [97, 104], "post_id": "MbWWKbyD5gLhJgfwn"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]
Opening:
**TL;DR**: This post discusses our recent empirical work on detecting measurement tampering and explains how we see this work fitting into the overall space of alignment research.
... | For the sake of simplicity, we use fixed datasets that contain exogenously generated examples with discrepancies between the ground truth and measurements, instead of training an AI to do measurement tampering. We produce a *measurement predictor* by training an LLM to predict the measurements on this fixed dataset. Fi... | {"chunk_index": 5, "chunk_kind": "head", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [16, 29], "post_id": "inALbAqdx63KTaGgs"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
A simple example
----------------
To illustrate what we’re talking about, let’s focus on one of our datasets inspire... | We also introduce *exclusion fine-tuning* (EFT), a simple technique which is aimed at exploiting shared computation between the trusted set and the untrusted set on real positive examples (cases where the outcome actually occurs) by relying on the existence of a “diamond” latent variable which causes the joint predicti... | {"chunk_index": 6, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [28, 52], "post_id": "inALbAqdx63KTaGgs"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
We think probing for evidence of tampering is likely to be useful in practice, but it is not a robust technique: in p... | Consistent relationships
------------------------
There can be a strong relationship within our measurements that is driven by the outcome we care about. For instance, in the Diamond in a vault setting, it’s typical for either all or none of the measurements to indicate the presence of a diamond (in the absence of tam... | {"chunk_index": 7, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [51, 75], "post_id": "inALbAqdx63KTaGgs"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
An accurate predictor of measurements might reproduce these relationships, allowing us to take advantage of the struc... | Because oversight may be expensive, we can't apply it on every input. So rare problems might not be caught, and if these rare problems are high-stakes (i.e. a very small number of actions could have catastrophic consequences), this could be a problem.
Additionally, because our measurements can only be taken after an a... | {"chunk_index": 8, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [74, 99], "post_id": "inALbAqdx63KTaGgs"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
It might also be possible to use reliable prediction to make ahead-of-time oversight quite cheap such that it could b... | Ensembling multiple variations of the best techniques
-----------------------------------------------------
In this work, we have explored the strength of techniques individually, but in practice, it might be worth ensembling variations of the best working techniques, especially if we want to be quite conservative abo... | {"chunk_index": 9, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [98, 127], "post_id": "inALbAqdx63KTaGgs"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Benchmarks for Detecting Measurement Tampering [Redwood Research]
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
(These ideas are inspired by some [submissions to the ELK challenge](https://www.alignmentforum.org/posts/zjMKpSB2Xcc... | ### General purpose inductive biases
Many alignment researchers seem to be interested in steering the behavior of models in cases where human supervision doesn’t suffice via developing inductive biases or using the internals of models (e.g. [activation steering](https://www.lesswrong.com/posts/5spBue2z2tw4JuDCx/steeri... | {"chunk_index": 10, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [126, 133], "post_id": "inALbAqdx63KTaGgs"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: What's up with "Responsible Scaling Policies"?
Opening:
I am interested in talking about whether RSPs are good or bad. I feel pretty confused about it, and would appreciate going in-depth with someone on this.
My current feelings about RSPs are roughly s... | > An RSP specifies what level of AI capabilities an AI developer is prepared to handle safely with their current protective measures, and conditions under which it would be too dangerous to continue deploying AI systems and/or scaling up AI capabilities until protective measures improve.
But that doesn't really strike... | {"chunk_index": 11, "chunk_kind": "head", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [18, 46], "post_id": "jyM7MSTvy8Qs6aZcz"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: What's up with "Responsible Scaling Policies"?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
I agree that the term "Responsible Scaling Policy" invokes false stuff. I guess this doesn't seem that bad to me? Maybe I'm underestimat... | Oh, gotcha, some difference between binding policies and explanations.
I agree that RSPs make it relatively more costly to change things in a way which reduces safety. This seems like mostly a benefit from my perspective? It also seems fine for RSPs to have some sections which are like "our tenative best guess at what... | {"chunk_index": 12, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [45, 73], "post_id": "jyM7MSTvy8Qs6aZcz"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: What's up with "Responsible Scaling Policies"?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Like, here is an alternative to "RSP"s. Call them "Conditional Pause Commitments" (CPC if you are into acronyms).
Basically, we just as... | I feel concerned about the possibility that people avoid quitting when they really should due to thoughts like "I really don't want to quit because then my influence would evaporate and I should just try influence things to be a bit better. Also, XYZ promised that stuff would be better if I just wait a bit." Like there... | {"chunk_index": 13, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [72, 103], "post_id": "jyM7MSTvy8Qs6aZcz"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: What's up with "Responsible Scaling Policies"?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Well, I can, but I kind of expect that over the course of the next few years my opinion will end up not really having much weight left, ... | I haven't thought much about this; this does seems like a substantial concern to me.
I think we should probably talk about "what safety arguments will/can labs make for ASL-4+ models"?
Where by "safety arguments" you mean "arguments that the relevant models are safe to develop/use/deploy/distribute?"
Yep, subject to... | {"chunk_index": 14, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [102, 128], "post_id": "jyM7MSTvy8Qs6aZcz"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: What's up with "Responsible Scaling Policies"?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
But I think I only believe that with like 75% probability? Like I really wouldn't be surprised if at the end of the day I come to believ... | I do feel like a major disagreement here is probably something like "how optimistic am I about the control route", which I know that you and Buck have been thinking about a lot. I have definitely mostly been thinking about AI Alignment through the propensity lens, since I've been looking for ways to scale the the metho... | {"chunk_index": 15, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [127, 142], "post_id": "jyM7MSTvy8Qs6aZcz"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Improving the Welfare of AIs: A Nearcasted Proposal
Opening:
Overview and Summary
====================
In this document, I’ll discuss and propose a [*nearcasted*](https://www.alignmentforum.org/posts/Qo2EkG3dEMv8GnX8d/ai-strategy-nearcasting) plan for im... | As discussed earlier, [Propositions Concerning Digital Minds and Society](https://nickbostrom.com/propositions.pdf) is closely related to this work. Our primary additional contribution is to discuss a more specific plan for what AI labs should do when creating powerful AI systems (given no further technical or philosop... | {"chunk_index": 16, "chunk_kind": "head", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [16, 35], "post_id": "F6HSHzKezkh6aoTr2"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Improving the Welfare of AIs: A Nearcasted Proposal
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Unlike [Principles for AI Welfare Research](https://forum.effectivealtruism.org/posts/SZJBE3fuk2majqwJQ/principles-for-ai-welfare-r... | For an AI project trying to do the most good, I think devoting some moderate amount of the overall resources to AI welfare seems reasonable. 0.1% to 5% of resources in expectation seems reasonable to me. (It’s not clear that percentage of resources is a reasonable unit of measurement.) This resource allocation will dep... | {"chunk_index": 17, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [34, 49], "post_id": "F6HSHzKezkh6aoTr2"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Improving the Welfare of AIs: A Nearcasted Proposal
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Resource Allocation
-------------------
For an AI project trying to do the most good, I think devoting some moderate amount of the... | (The term [mind crime](https://www.lesswrong.com/tag/mindcrime) is sometimes used for the case where reasoning about a being with moral importance itself becomes morally important. The *character* concern overlaps with mind crime, but not all mind crime is from an AI playing a specific character and not all cases of th... | {"chunk_index": 18, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [48, 69], "post_id": "F6HSHzKezkh6aoTr2"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Improving the Welfare of AIs: A Nearcasted Proposal
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Direct and Clear Analogies to Humans or Animals
-----------------------------------------------
In cases where the AI training set... | I won’t talk much about what should be done if we are able to effectively communicate with our AI systems: it seems like figuring this out at the time by talking through things is just a pretty reasonable strategy. That said, things might be far less crisp and clear than this.
Note that I’m not necessarily talking abo... | {"chunk_index": 19, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [68, 80], "post_id": "F6HSHzKezkh6aoTr2"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Improving the Welfare of AIs: A Nearcasted Proposal
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Malign (deceptively aligned) AI systems might be instrumentally incentivized to lie about their preferences in order to ensure they... | My current guess would be to follow a two-step strategy:
* **Situational Awareness Training**: First, train the model such that it understands the overall situation in detail. If this is already part of the normal training process, then this isn’t necessary. If we want to ask about preferences for some work task, we... | {"chunk_index": 20, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [79, 95], "post_id": "F6HSHzKezkh6aoTr2"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Preventing Language Models from hiding their reasoning
Opening:
When working with powerful AI models, one concern is that the AIs [steganographically encode secret messages](https://www.alignmentforum.org/posts/yDcMDJeSck7SuBs24/steganography-in-chain-of-... | 
Experimental setup
------------------
**Datasets**: We run experiments on two datasets:
* [AlpacaEval](https://github.com... | {"chunk_index": 21, "chunk_kind": "head", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [13, 37], "post_id": "9Fdd9N7Escg3tcymb"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Preventing Language Models from hiding their reasoning
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
For the sake of simplicity, this post only contains results for AQUA-RAT, where the results are the crispest (as a result of bei... | One naive approach for preventing unintended knowledge build up is to ensure work outputs and long-term memories are stored in natural language. Then, we could have dumb but trusted models read this output, as well as applying human oversight to some subset of outputs and memories. However, steganography could prevent ... | {"chunk_index": 22, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [36, 61], "post_id": "9Fdd9N7Escg3tcymb"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Preventing Language Models from hiding their reasoning
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Given appropriate defenses that prevent most steganography, this hidden takeover research could be slowed down to a point where ... | For example, to estimate the number of acceptable bit so that AIs can’t steganographically encode enough memories to make dangerous progress in takeover science, AI labs can train models to do takeover science using as few bits of communication as possible (but using the regular text channel, without paraphrasing). Thi... | {"chunk_index": 23, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [60, 78], "post_id": "9Fdd9N7Escg3tcymb"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Preventing Language Models from hiding their reasoning
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Can you really reduce the bitrate enough?
=========================================
As the gap between the trusted model used t... | Realism of countermeasures
==========================
Using paraphrasing doesn’t increase inference cost much, as long as the model used for paraphrasing is at least one generation below. For example, using GPT-3.5-turbo to paraphrase outputs from GPT-4 only increases the price of inference by 5% ([at the price of the... | {"chunk_index": 24, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [77, 85], "post_id": "9Fdd9N7Escg3tcymb"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is mechanistic interpretability?
Opening:
Opening positions
-----------------
I'm somewhat skeptical about mech interp (bottom-up or substantial reverse engineering style interp):
* Current work seems very far from being useful (it isn't cu... | This feels fairly true to me (in the sense of "I expect such methods to exist"), though I don't feel confident in any specific non-mechanistic approach. I expect that for any given such method, I'd be excited about trying to use mech interp to red-team/validate/better understand it
> A possible meta-level point of dis... | {"chunk_index": 25, "chunk_kind": "head", "coauthored": true, "coauthors": ["Neel Nanda", "Buck", "habryka"], "format": "continuation", "para_range": [28, 52], "post_id": "tEPHGZAb63dfq2v8n"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is mechanistic interpretability?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
I'm concerned in cases where neither of those is present.
Some quick meta notes:
[Against Almost Every Theory of Impact of Interpretabili... | * [Some common confusions about induction heads](https://www.lesswrong.com/posts/nJqftacoQGKurJ6fv/some-common-confusion-about-induction-heads)
* [Causal scrubbing results on induction heads](https://www.lesswrong.com/posts/j6s9H9SHrEhEfuJnq/causal-scrubbing-results-on-induction-heads)
I think that, for my favorit... | {"chunk_index": 26, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Neel Nanda", "Buck", "habryka"], "format": "continuation", "para_range": [51, 81], "post_id": "tEPHGZAb63dfq2v8n"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is mechanistic interpretability?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
> I think that, for my favorite metric of "proportion of what's going on that you've explained", the 'they do induction' hypothesis might b... | This is just a rescaling of the metric "explaining x% of the difference between GPT-4 and unigram statistics". So the question is still entirely about how good x has to be for us to be happy.
Meta-level note: I'm not sure that our current discussion is a crux for me. Even if I conceded that current models are a mess, ... | {"chunk_index": 27, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Neel Nanda", "Buck", "habryka"], "format": "continuation", "para_range": [80, 109], "post_id": "tEPHGZAb63dfq2v8n"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is mechanistic interpretability?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
I'll try to explain how I like thinking about the amount explained in LMs trained on real text distributions.
First, let's simplify and ju... | If your hope is to focus on cases where 80% means your explanation contains more of the relevant model knowledge than a small model had, you should maybe try to steer towards that hope immediately.
Ie, find some task that small models just can't do (say, hard multiple choice Qs from MMLU) and find a circuit such that ... | {"chunk_index": 28, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Neel Nanda", "Buck", "habryka"], "format": "continuation", "para_range": [108, 133], "post_id": "tEPHGZAb63dfq2v8n"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is mechanistic interpretability?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Idea for your MATS scholars: try to get explanations/replacements that get a smaller model 80% of the way to the bigger model's performance... | Hopes (as I see them) for mech interp being useful without explaining 99%:
* **Approach**: Maybe we can just focus on safety relevant components and explain >98% of just these components which will be easier. **Concern**: It seems very hard to explain these components to this degree without understanding other compo... | {"chunk_index": 29, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Neel Nanda", "Buck", "habryka"], "format": "continuation", "para_range": [132, 155], "post_id": "tEPHGZAb63dfq2v8n"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is mechanistic interpretability?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
> This is because I expect various difficulties with testing using model organisms
Why?
Re "Why?": The question is how you construct mode... | Would it be a significant update for you if any of these were successful? I tentatively feel like current techniques (esp activation patching between various counterfactuals) are good enough to give some insight on some of these in narrow cases. Solving any of these in general seems pretty ambitious.
The first 3 are a... | {"chunk_index": 30, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Neel Nanda", "Buck", "habryka"], "format": "continuation", "para_range": [154, 180], "post_id": "tEPHGZAb63dfq2v8n"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is mechanistic interpretability?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Some (very ambitious) tasks I'm maybe excited about:
* (hard to setup) Can we rule out algorithms based on doing something like SAE + lo... | For a bunch of these projects, it seems pretty unclear if mech interp-y methods are the best tool and we can measure success using downstream metrics (as opposed to internal validity metrics like % explained).
I think mech interp industial policy is reasonable (subsidize work on mech interp even if returns per unit ti... | {"chunk_index": 31, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Neel Nanda", "Buck", "habryka"], "format": "continuation", "para_range": [179, 187], "post_id": "tEPHGZAb63dfq2v8n"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Auditing failures vs concentrated failures
Opening:
**TL;DR**: If you want to notice when powerful AIs are behaving badly, a reasonable strategy is to carefully audit a small fraction of AI actions. This kind of strategy has two kinds of failures which ca... | To mitigate sampling concentrated failures, we can apply a cheap monitoring process on every AI action and audit the most suspicious actions. We can also use separation of roles and security to limit the affordances of dangerous AIs and increase the difficulty of successfully taking over - which means that AIs will hav... | {"chunk_index": 32, "chunk_kind": "head", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [18, 38], "post_id": "hirhSqvEAq7pdnyPG"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Auditing failures vs concentrated failures
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
* Cheap monitoring using [trusted AIs](https://www.lesswrong.com/posts/LhxHcASQwpNa3mRNk/untrusted-smart-models-and-trusted-dumb-models).
... | * **As threat models: outer misalignment and inner misalignment.**
* **As desiderata sufficient for a particular type of proposal for AI safety: For a given AI, solve outer alignment and inner alignment and then combine these solutions to avoid misalignment issues.**
We’ll discuss these two different usages of out... | {"chunk_index": 33, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger"], "format": "continuation", "para_range": [37, 46], "post_id": "hirhSqvEAq7pdnyPG"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: AI Control: Improving Safety Despite Intentional Subversion
Opening:
We’ve released a paper, [AI Control: Improving Safety Despite Intentional Subversion](https://arxiv.org/abs/2312.06942). This paper explores techniques that prevent AI catastrophes even ... | 
Results
-------
As shown in the figure below, we found protocols that substantially improve **safety vs usefulness tradeoffs... | {"chunk_index": 34, "chunk_kind": "head", "coauthored": true, "coauthors": ["Fabien Roger", "Kshitij Sachan"], "format": "continuation", "para_range": [13, 26], "post_id": "d9FJHawgkiMSPjagR"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: AI Control: Improving Safety Despite Intentional Subversion
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Results
-------
As shown in the figure below, we found protocols that substantially improve **safety vs usefulness tradeof... | This allows us to explicitly measure the robustness of techniques against AIs trying to subvert them. If the red team succeeds at eliciting the capabilities of the actual untrusted model we want to use (which is about the capabilities of the model - not its propensity), it becomes possible to know how much safety our t... | {"chunk_index": 35, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger", "Kshitij Sachan"], "format": "continuation", "para_range": [25, 41], "post_id": "d9FJHawgkiMSPjagR"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: AI Control: Improving Safety Despite Intentional Subversion
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Another effect of modeling scheming by explicitly red-teaming is that, because our red team is currently much better at con... | We’re excited about a variety of future work:
* **Control for auditing failures**: Work that focuses on cases where the core difficulty is that our most expensive auditing process is insufficiently good. In our paper, the expensive auditing process wasn’t a core bottleneck. It’s likely that this work should also use... | {"chunk_index": 36, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Fabien Roger", "Kshitij Sachan"], "format": "continuation", "para_range": [40, 45], "post_id": "d9FJHawgkiMSPjagR"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Scalable Oversight and Weak-to-Strong Generalization: Compatible approaches to the same problem
Opening:
*Thanks to Roger Grosse, Cem Anil, Sam Bowman, Tamera Lanham, and Mrinank Sharma for helpful discussion and comments on drafts of this post. *
*Thro... | The main upshot of this perspective is that we can evaluate techniques for both of these approaches in the same manner. When making a safety case, taking cues from Redwood Research’s recent work on [AI control](https://www.lesswrong.com/posts/d9FJHawgkiMSPjagR/ai-control-improving-safety-despite-intentional-subversion)... | {"chunk_index": 37, "chunk_kind": "head", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [12, 21], "post_id": "hw2tGSsvLLyjFoLFS"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Measurement tampering detection as a special case of weak-to-strong generalization
Opening:
Burns et al at OpenAI released [a paper](https://openai.com/research/weak-to-strong-generalization) studying various techniques for fine-tuning strong models on do... | * Failed attempts at tampering might leave evidence of this wrongdoing, while you might never get strong evidence of wrongdoing if the AI system isn’t trying to tamper.
* Consistent relationships between measurements might be driven by similar mechanisms for real positive examples and trusted examples, while the me... | {"chunk_index": 38, "chunk_kind": "head", "coauthored": true, "coauthors": ["Fabien Roger", "Buck"], "format": "continuation", "para_range": [21, 28], "post_id": "4KLCygqTLsMBM3KFR"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Catching AIs red-handed
Opening:
We’ve been spending a lot of time recently thinking about how to mitigate risks posed by [scheming](https://www.lesswrong.com/s/J7JpFeijCK5urdbzv/p/yFofRxg7RRQYCcwFA) (a.k.a. deceptively aligned) models intentionally subve... | We think that it’s plausible that AI developers will adopt the policy that if they catch the models trying to escape, they stop using them. We call this plan the **catch-then-shut-down plan**.
You might think that this is an absurdly weak lab ask: “once your model has demonstrably tried to escape, stop deploying it”. ... | {"chunk_index": 39, "chunk_kind": "head", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [16, 40], "post_id": "i2nmBfCXnadeGmhzW"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Catching AIs red-handed
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
* The AI lab starts out with low credence that scheming will arise. E.g. suppose they started with Carlsmith’s estimate of a 25% chance of scheming and then ... | ### 3\. Studying countermeasures in practice
We’ll (hopefully) have a variety of countermeasures deployed that aim to catch AIs or prevent AIs being able to escape. Once we have an actual live example, we can check which countermeasures did or didn’t work. This might allow for more precisely crafting countermeasures t... | {"chunk_index": 40, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [39, 56], "post_id": "i2nmBfCXnadeGmhzW"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Catching AIs red-handed
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Seeing AIs actually attempt an escape in the wild might allow us to update on how our AIs are likely to attempt to escape.
**Focused countermeasures**: We cou... | ### 6\. Small sample count ML
In some cases, even a small number of examples of egregious misbehavior might allow you to improve simple models that you can use to detect future escape attempts, e.g. as we described [here](https://www.lesswrong.com/posts/WCj7WgFSLmyKaMwPR/coup-probes-catching-catastrophes-with-probes-t... | {"chunk_index": 41, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [55, 73], "post_id": "i2nmBfCXnadeGmhzW"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Catching AIs red-handed
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Using [contrast](https://www.alignmentforum.org/posts/9vwekjD6xyuePX7Zr/contrast-pairs-drive-the-empirical-performance-of-contrast) [pairs](https://www.ai-tran... | For instance, imagine that in the prior scenario the chance of a scheming AI succeeding without being caught halves every time we catch the AI. Then, a scheming AI has “only” a 2% chance of success in total. In practice, we’ll want to ensure the total number of attempts isn’t too high (possibly using few-shot approache... | {"chunk_index": 42, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [72, 89], "post_id": "i2nmBfCXnadeGmhzW"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Catching AIs red-handed
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
More specifically, let’s say that this evaluation had the following distribution of end states:
* 30% of the time, the model was caught before deployment.
*... | Conclusion
==========
If you catch your AIs red-handed intentionally trying to cause catastrophically bad outcomes, your situation changes a lot. Because of this, we think researchers should mostly focus on safety plans which ensure that if the model is scheming, it does not escape before getting caught.
It seems pre... | {"chunk_index": 43, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [88, 104], "post_id": "i2nmBfCXnadeGmhzW"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Catching AIs red-handed
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
If an AI lab properly accounts for the possibility of fake attempts, then intentionally performing fake attempts is very unlikely to be a good strategy for the... | We’re defining “catch” here in a way that excludes techniques that might update you up on whether your model is scheming, but don’t involve having specific actions that the model takes with massively suspiciously high probability:
* **Model organisms:** [Scheming-focused model organism experiments](https://www.align... | {"chunk_index": 44, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [103, 118], "post_id": "i2nmBfCXnadeGmhzW"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Managing catastrophic misuse without robust AIs
Opening:
Many people worry about catastrophic misuse of future AIs with highly dangerous capabilities. For instance, powerful AIs might substantially lower the bar to building bioweapons or allow for massive... | (I'm making the assumption here that the threat model is more like "the LLM gives the equivalent of many hours of advice" rather than "the LLM gives the equivalent of five minutes of advice". I'm not a biosecurity expert and so don’t know whether that's an appropriate assumption to make; it probably comes down to quest... | {"chunk_index": 45, "chunk_kind": "head", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [10, 20], "post_id": "KENtuXySHJgxsH2Qk"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Managing catastrophic misuse without robust AIs
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
And so in this case, the LLMs caused problems not by being superhuman, but by allowing aspiring bioterrorists to get a fairly large amo... | Here are some possible additional countermeasures:
- We could reject queries to the bio endpoint which don't seem like questions about biology according to our monitoring models. (Or queries which contain fully separable subcomponents which don’t seem to be about biology.) This helps avoid the failure mode in which bi... | {"chunk_index": 46, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [19, 32], "post_id": "KENtuXySHJgxsH2Qk"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Managing catastrophic misuse without robust AIs
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
## What reliability is acceptable?
A nice property of this proposal is that our monitoring ensemble doesn't need to be that reliable t... | # Fine-tuning?
I think that allowing end-users to fine-tune these models makes the picture notably more complicated, as it might be possible to train knowledge into the model which isn't present in the monitoring models. There are still possible mitigations. We could also try to resolve this by only allowing large cor... | {"chunk_index": 47, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [31, 38], "post_id": "KENtuXySHJgxsH2Qk"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
In this post, we argue that AI labs should ensure that powerful AIs are *controlled*. That is, labs should make sure that the safety measures they apply to their powerful models prevent unacc... | We think the control line-of-defense should mostly focus on the lead-up to the first time an AI gets caught trying to cause bad outcomes. So we'd aim to ensure the property "if an AI is scheming, it won't successfully cause a catastrophe before we catch it". That is, it's acceptable if either no catastrophic outcomes o... | {"chunk_index": 48, "chunk_kind": "head", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [12, 26], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
One nice property of the control line-of-defense compared to alignment is that we don't need any technical breakthroughs to subs... | When developing powerful AIs, an AI lab using control would manage risk from scheming very similarly to how they manage security risk. In the case of security risk, whenever a lab is considering an important change to their risk profile (such as deploying a new service, training a new model, or just responding to the p... | {"chunk_index": 49, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [25, 38], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
When developing powerful AIs, an AI lab using control would manage risk from scheming very similarly to how they manage security... | [^footnote11]: One concern with these evals is that they might actually make models dangerous or misaligned even if they wouldn't be dangerous by default, or more generally that the evaluation is dangerous to run. We think this is probably not a substantial issue, because we can often use benign tasks as proxies for da... | {"chunk_index": 50, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [37, 44], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
[^footnote11]: One concern with these evals is that they might actually make models dangerous or misaligned even if they wouldn'... | 1. Less serial thinking time. Current LLMs can only do a small amount of serial reasoning outside of [natural language reasoning](https://www.lesswrong.com/posts/r3xwHzMmMf25peeHE/the-translucent-thoughts-hypotheses-and-their-implications). So, if future powerful AIs are similar in this way to current LLMs and we monit... | {"chunk_index": 51, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [43, 52], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
[^footnote14]: This relies on the assumption that the AIs are unable to secretly develop a body of knowledge of good takeover st... | Some positive implications of control being tractable to evaluate:
- Labs could make commitments that they will do high-quality control evaluations (or perhaps have an outside evaluator do them) and ensure a sufficient level of safety according to control evaluations. It is likely easier to agree on standards of evide... | {"chunk_index": 52, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [51, 68], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
[^footnote18]: If models don't scheme, then evaluating whether models have some other type of catastrophic misalignment issue is... | One misconception we've run into is that control is just about robust oversight. However, note that robust but expensive and serially slow oversight is not sufficient for control and it is also possible to solve issues due to inadequate oversight with improvements in other areas.
It's not sufficient because if the ove... | {"chunk_index": 53, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [67, 82], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
[^footnote20]: In addition to being useful for control, better scalable oversight might also be slightly useful for reducing the... | We mostly agree with Holden Karnofsky's [list of helpful activities that an AI lab might try to massively speed up using AI labor](https://www.alignmentforum.org/posts/vZzg8NS7wBtqcwhoJ/nearcast-based-deployment-problem-analysis#Phase_2__as_aligned_and_transformative_AI_systems_become_available) (except that we would r... | {"chunk_index": 54, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [81, 94], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
We think that AI that is capable of drastically speeding up all of those projects would have sufficient capabilities to drastica... | There are a few different concerns we'll discuss related to qualitatively wildly superhuman skills:
- For high levels of performance, **we'll need AIs to take actions which are highly inscrutable**. The fact that these actions are highly inscrutable will be clear to us, but we can't avoid this without an unaffordable ... | {"chunk_index": 55, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [93, 105], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
[^footnote24]: If problematic reasoning is this obvious, then simple interventions like trusted monitoring would catch the issue... | What about the other class of concerns–that even though the models don't have any qualitatively wildly superhuman capabilities in problematic domains, they're still able to evade any control techniques that we can afford to implement?
Our belief here is harder for us to succinctly justify. For all of the control techn... | {"chunk_index": 56, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [104, 114], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
[^footnote26]: This part of the situation isn't really a control argument, but it might be pretty relevant regardless.
We think... | ## Iteration is very helpful for developing robust protocols
Iterating against control evaluations could allow for the development of powerful and precise techniques. In particular, we should be able to develop these techniques against the actual models we're going to deploy, targeting the tasks we plan to deploy on. ... | {"chunk_index": 57, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [113, 129], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
But we're not saying the situation is easy. Companies that achieve security do it by applying serious, intentional effort. Many ... | When the AIs get more powerful, it will likely start to get expensive for labs to ensure control; we imagine that at the point where labs are running twice as quickly due to AI assistance, maintaining the control property will require maybe 10% of the resources of the company (although it's hard to give good estimates ... | {"chunk_index": 58, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [128, 143], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
In "Control evaluations and black-box techniques" above, we argued that before naturally occurring scheming models have been dis... | In [our recent paper](https://www.lesswrong.com/posts/d9FJHawgkiMSPjagR/ai-control-improving-safety-despite-intentional-subversion), we demonstrated control techniques that involve changing the evaluation and deployment of powerful models in ways that combine to yield improvements in safety. Here are some of the techni... | {"chunk_index": 59, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [142, 151], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
In our paper, we ignored the problem of how you construct a reliable but expensive catastrophe auditing process using H (it's ex... | In this appendix, we'll elaborate on the arguments made in "Control evaluations and black-box techniques". As noted earlier, we want to avoid depending on incorrect assumptions about the properties of scheming models which is a big problem for techniques that rely on properties of models related to their generalization... | {"chunk_index": 60, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [150, 165], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
We don't know how you could do experiments that would cause you to become confident in coup probes working. Suppose you have acc... | # Appendix: Concerns with "neuralese" reasoning and memories
One factor which is very important for the viability of control is the extent to which making AIs transformatively useful requires those AIs to have the affordance to do long serial chains of uninterpretable[^footnote28] reasoning (e.g., recurrent "neuralese... | {"chunk_index": 61, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [164, 175], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
[^footnote28]: By "uninterpretable" we just mean "as hard to interpret as the activations of current AI models". It isn't litera... | One approach for AI learning and memories which might be key is having AI construct training datasets (in relatively understandable modalities like text, images, or motor activations). Intuitively, this seems much safer than neuralese, though there still might be substantial risks due to the AI-constructed datasets bei... | {"chunk_index": 62, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [174, 192], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
A reasonable intuition pump here is that if AIs were qualitatively as smart as humans, but ran faster and cheaper, then we proba... | | | Control is too easy | _**Control requires a substantial but affordable amount of effort**_ | Control is infeasibly hard |
| --- | --- | --- | --- |
| Non-scheming is trivial/doable to ensure | | | |
| _**Ensuring non-scheming is hard**_ | | _**Control looks good**_ | |
Above, we grouped together investment i... | {"chunk_index": 63, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [191, 212], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
It's worth noting that research into control (investment 2) has a moderate level of transfer with some other hopes for improving... | However, it seems like the applications on Karnofsky’s list probably don't directly require models to have superhuman knowledge or abilities in dangerous areas.
Further, note that in this protein case, it was obvious to us that the model was qualitatively wildly superhuman. (Though it might not be obvious that a speci... | {"chunk_index": 64, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [211, 228], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: The case for ensuring that powerful AIs are controlled
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
As a trivial non-dangerous example, models are wildly superhuman at next-token prediction. We don't intrinsically care about nex... | ## Aside: Capabilities from imitating human cognition
It currently seems plausible (perhaps 30-40% probability overall, more in shorter timelines) that a large fraction of capabilities in early transformatively useful AI will come via the mechanism of imitating human cognitive processes, which naturally yields an abil... | {"chunk_index": 65, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck"], "format": "continuation", "para_range": [227, 230], "post_id": "kcKrE9mzEHrdqtDpE"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
*The quality bar of this post is somewhat lower than that of our previous posts on control and this post is much more focused on specific details. This is particularly true about the appendices of thi... | Ideally, we would run control evaluations (like the one we performed in the code backdooring setting from our recent paper) with the exact deployment setup, and see if the red team is able to cause a catastrophe. But this is unsafe, expensive, and we don’t have access to a perfect catastrophe detector. Instead, we can ... | {"chunk_index": 66, "chunk_kind": "head", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [15, 33], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
* What needs to be measured to determine the safety of the protocol against this strategy (e.g. the distribution of sneakiness scores o... | There are two strategies for handling this diversity of tasks:
**Safety properties:** You can reduce the complexity of analyzing the threat landscape by having the blue team propose safety properties which its protocol aims to maintain, establish that these safety properties suffice for preventing catastrophic actions... | {"chunk_index": 67, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [32, 50], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Red-teaming could allow us to be reasonably confident in the results of control evaluations by being at least as good as scheming AIs at ... | **Evaluating for the occurrence of proxy bad outcomes**
-------------------------------------------------------
The evaluation needs to know if an AI achieved a proxy bad outcome in some task, even if the protocol fails to detect it. But the protocol will already be using whatever approaches are available to estimate ... | {"chunk_index": 68, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [49, 72], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
The red team should have access to validation bad outcome detection when training their models, because this might be required to elicit ... | ### Avoiding AI sandbagging for competitiveness of execution
We believe the red team likely can train AIs to effectively execute on plans as well as a scheming AI by training AIs with reinforcement learning (RL) combined with supervised fine-tuning on carefully constructed expert demonstrations. For example, we would ... | {"chunk_index": 69, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [71, 82], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Such training schemes rely on the assumption that AIs can’t [gradient hack](https://www.lesswrong.com/posts/dBmfb76zx6wjPsBC7/when-can-we... | Here are our main hopes for handling this concern:
* For the reasons discussed [here](https://www.lesswrong.com/posts/kcKrE9mzEHrdqtDpE/the-case-for-ensuring-that-powerful-ais-are-controlled#We_can_probably_avoid_our_AIs_having_qualitatively_wildly_superhuman_skills_in_problematic_domains), we think this is somewhat... | {"chunk_index": 70, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [81, 87], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Additionally, far-from-human approaches might be problematic even if they aren’t more directly effective than the best red team approache... | Here are some rough evaluations we should probably run to get a sense of the difficulty of achieving red team competitiveness:
* **Create model organisms of exploration hacking**: We should be able to create various [model organisms](https://www.lesswrong.com/posts/ChDH335ckdvpxXaXX/model-organisms-of-misalignment-t... | {"chunk_index": 71, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [86, 91], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
To directly make the evaluation capture this dynamic, we could make the red team models have a different capability profile from the capa... | Untrusted models could try to break out or cause other catastrophic outcomes while we run control evaluations. Here are the approaches for avoiding this outcome, which we think are sufficient to ensure that running these evaluations isn't very risky:
1. **Proxy environments should be made far safer than the deploymen... | {"chunk_index": 72, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [90, 98], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Interventions 1, 2, 3, and 4 should be sufficient to ensure quite low risk from the evaluations of early transformative AI systems relati... | **Red team input classifier**: We can improve efficiency by having the red team provide a classifier for when to attempt bad things and then deferring to the original untrusted model in cases where the classifier doesn't indicate the attack policy should do bad things. We call this using a *red-team input classifier*. ... | {"chunk_index": 73, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [97, 108], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
**Decomposing linked tasks**: In some cases, proxy bad outcomes might arise from long chains of serial tasks. To make estimation more eff... | 2\. Hopes for control evaluations not being too conservative
------------------------------------------------------------
* **Good science of AI situational awareness, planning capabilities, & learning from training data**: We might be able to prevent the red-team plans from being considerably better than what AIs c... | {"chunk_index": 74, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [107, 117], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
### Reasons to expect control-style countermeasures to be relatively easy to build
* **Happy with **[**catching a first coup**](https:... | **Appendix: Evaluating usefulness and avoiding usefulness overfit from the blue team**
======================================================================================
It's quite likely that labs will want to use different task distributions to measure usefulness than the task distribution used for safety measur... | {"chunk_index": 75, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [116, 135], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Various control techniques involve constructing synthetic inputs. For example, in our paper, untrusted monitoring needs to generate synth... | **Evaluating safety for tasks where human judgment is used for validation bad outcome detection**
-------------------------------------------------------------------------------------------------
For tasks where we use human judgment for both validation bad outcome detection and the blue-team auditing process, we stil... | {"chunk_index": 76, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [134, 152], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
In some cases, it might be clear that human judgment will be highly reliable. In these cases we can just use human judgment without tryin... | One issue with this approach is that humans playing an important role in the protocol might be vigilant in evaluations while not being very vigilant in reality. There are two approaches for handling this:
* Make evaluations hard to distinguish from reality from the perspective of human labor. This seems difficult an... | {"chunk_index": 77, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [151, 164], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
**Appendix: Handling timing in our evaluation**
===============================================
One strategy scheming AIs might use is t... | Evaluating effects on utility from final discrimination
-------------------------------------------------------
Ultimately, we aim to use an approximation which estimates two terms E\[utility | no scheming\] and E\[utility | scheming\] and then we have weights for these terms based on prior probability of scheming. Th... | {"chunk_index": 78, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [163, 177], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
So, how does *successful*final discrimination (final discrimination where we actually identified the red team model) effect our estimate ... | We don’t have a full picture of how you run this assessment, but we do have a relatively concrete story for some types of RL-equivalent training schemes. In particular, consider “distilling BoN” where you sample N completions from the AI on every input and then finetune the AI on the best input. If we did this in round... | {"chunk_index": 79, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [176, 186], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
This works for distilling BoN in rounds (and other “poor man’s RL” schemes which just use SFT in rounds), but it’s less clear how to exte... | Here are some more caveats:
* In some cases we might obtain non-control evaluable evidence which strongly convinces us that we can trust a given model. If we obtain such evidence, then we can update on this.
* Our final safety estimate might update based on the protocol’s observations of untrusted models though t... | {"chunk_index": 80, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [185, 206], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
**Appendix: Jailbreak handling and searching against trusted models**
===================================================================... | Limit red team knowledge
------------------------
By default, the red team has access to all information about the protocol except that which the blue team will effectively randomize. But, this might be unnecessary conservative because the model wouldn’t know a bunch of key details and also wouldn’t realistically be a... | {"chunk_index": 81, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [205, 220], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Notes on control evaluations for safety cases
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Limiting unrealistic red team execution competence due to training
------------------------------------------------------------------
Th... | We discuss details of preventing coup science, steganography evaluation, and possible obstacles to this approach in more detail in[ our prior work on steganography evaluation](https://www.lesswrong.com/posts/9Fdd9N7Escg3tcymb/preventing-language-models-from-hiding-their-reasoning). In particular, see the sections on[ s... | {"chunk_index": 82, "chunk_kind": "mid", "coauthored": true, "coauthors": ["Buck", "Fabien Roger"], "format": "continuation", "para_range": [219, 228], "post_id": "3s8PtYbo7rCbho4Ev"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
Redwood (where Ryan works) recently released a series of blogposts proposing a research agenda for reducing AI-risk that focuses on ensuring safety (and secondarily usefulness) under the conse... | I often think about this as "it's hard to compete with future AI researchers on moving beyond this early regime". (That said, we should of course have some research bets for what to do if control doesn't work for the weakest AIs which are very useful.)
I'm curious what you think about this "but we can't do better" fra... | {"chunk_index": 83, "chunk_kind": "head", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [25, 48], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
I often think about this as "it's hard to compete with future AI researchers on moving beyond this early regime". (That said, we ... | I kind of want to make a quick list what at a high level my dooms are with doing a control-focused thing, and then we can decide which one to jump into:
1. We don't have a concrete plan for what to do with mildly superhuman AIs, and we will find it hard to use them for making real progress on AI Alignment
2. The con... | {"chunk_index": 84, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [47, 60], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Quickly responding on the object level to (1).
Main outs (as far as what to use AIs for):
* Ensure that our early transformat... | **(5)** (AIs being incentivized to make us worse at thinking) seems like a pretty serious concern, but I think it will be hard for AIs to do this consistently without this getting trained away.
On **(6)** (large changes in AI training environment and cognitive landscape): I agree that this is a serious concern. I thin... | {"chunk_index": 85, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [59, 85], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
> 5 (AIs being incentivized to make us worse at thinking) seems like a pretty serious concern, but I think it will be hard for AI... | Like, my current models of all mechanisms we have for AI training is that when you have a system that is trying to deceive humans, and it fails, and then you try to make an update against that, then you will probably also substantially reinforce cognition about how to deceive better (especially if there was a non-negli... | {"chunk_index": 86, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [84, 115], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
> So for (4), we're talking about the regime where we've already caught pretty egregious behavior at least once?
Yeah.
> Like, ... | Of course, it's also trying to break out during pre-training all the time, but it's not really interfacing with humans, and it's trying out hardware exploits or whatever, but let's assume its not succeeding at that.
Ah, by "perform well in training" I mean "perform well in post-training like RLHF".
Anyway, in the sta... | {"chunk_index": 87, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [114, 143], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
It's not even clear that current RLHF models are at all better at doing high-stakes tasks than pre-trained models. I haven't play... | Yeah, that all seems pretty good
I feel pretty confused about what happens when you interface with a model that is close to being a consequentialist optimizer at this step. My guess is it will be pretty messy, but one simplified outcome is that you don't really need to do any kind of RLHF. The model is like "welp, loo... | {"chunk_index": 88, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [142, 168], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
I'm generally somewhat confused about what happens in this sort of circumstance as far as the sample efficiency of training and w... | Like probably such an AI will understand the idea of "do well on this episode and don't care about longer run effects, don't care about escaping" (e.g. concepts needed to implement this will be pretty salient in its ontology). And just modifying its goals can suffice for getting this.
This might require a large number... | {"chunk_index": 89, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [167, 197], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
(very metaphorically, of course there aren't pointers in the relevant sense in current architectures)
Yep, that seems like a goo... | Seems like the system could probably just sandbag, though I would have to think harder about it (as one example of the kind of algorithm that seems to me like it would successfully sandbag "think adversarial thoughts when you expect to perform well, and think corrigible/submissive thoughts when you expect to perform ba... | {"chunk_index": 90, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [196, 236], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
I think this might be rough for 2 second tasks, but I feel like if we extend to 5 minute tasks, have a vaguely reasonable notion ... | (Indeed, I don't super feel like my brain immediately knows how to think about that setup)
Yeah, maybe. My current feeling keeps being like "if I treat the AI as a black box, and I just imagine myself being in this situation, I would have no trouble sandbagging and don't really see a world where I would be exposed to ... | {"chunk_index": 91, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [235, 265], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
I'm not sure where our disagreement is localized, we disagree by quite a bit here. I'm maybe like 7/10 that this basically works ... | I do think getting this kind of evidence could be very helpful for slowing down AI progress and buying more time via regulatory action, but it's unclear how much, as I expect that the evidence will be hard to be air-tight, and the commercial incentives would still strongly push in the direction of developing things fas... | {"chunk_index": 92, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [264, 299], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: How useful is "AI Control" as a framing on AI X-Risk?
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
See also [discussion here](https://www.lesswrong.com/posts/i2nmBfCXnadeGmhzW/catching-ais-red-handed?commentId=Zmh3gEP2zgKQjarBs)... | So, we can dive into the technical side (what we've called [few-shot catastrophe prevention](https://www.lesswrong.com/posts/i2nmBfCXnadeGmhzW/catching-ais-red-handed#Few_shot_catastrophe_prevention)), but I'd like to caveat that this is a bit obviously insane. (As opposed to non-obviously insane like the current situa... | {"chunk_index": 93, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [298, 305], "post_id": "9BoGjYb5gbjZirmXv"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Thoughts on SB-1047
Opening:
In this post, I'll discuss my current understanding of SB-1047, what I think should change about the bill, and what I think about the bill overall (with and without my suggested changes).
Overall, SB-1047 seems pretty good an... | 1. So you want to train a non-derivative model and you haven't yet started training. The bill imposes various requirements on the training of covered models that don't have limited duty exemptions, so we need to determine whether this model will be covered.
2. Is it >10^26 flop or could you reasonably expect it to ma... | {"chunk_index": 94, "chunk_kind": "head", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [19, 32], "post_id": "WBPgacdjdZJCZaohj"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Thoughts on SB-1047
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Ok, but what if rather than training a model from scratch, you're just fine-tuning a model? Under the current bill, you don't need to worry at all. Unfortunately, ... | I think just having the 10^26 flop threshold is better because:
* It's simpler which reduces fear, uncertainty, and doubt from various actors.
* I don’t trust benchmarks that much (e.g. maybe most benchmarks are quite gameable while also not increasing general capability that much). (It seems sad to have tricky be... | {"chunk_index": 95, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [31, 46], "post_id": "WBPgacdjdZJCZaohj"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Thoughts on SB-1047
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
(Also, if the equivalent performance condition is retained, it should be clarified that this refers to the best performance achieved with 2024 methods using 10^26 ... | * The AI could autonomously replicate successfully in the absence of a campaign to prevent this (e.g. make money (over >$1 billion in aggregate) and acquire computing hardware to run itself). (A commenter pointed out that this might be too hard to operationalize well.)
* The AI can massively accelerate competent cy... | {"chunk_index": 96, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [45, 60], "post_id": "WBPgacdjdZJCZaohj"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: Thoughts on SB-1047
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
From this perspective, my guess is that the current bar is too early. At the point where the current bar triggers, I’m unsure if restrictions like these are worth ... | Here are some beliefs I have that are load-bearing for my support of this bill.
* When models which are unlikely to have a hazardous capability (even when enhanced in relatively cheap ways over the next 2 years) and the developer does a reasonable job with capability evaluations, that developer will be able to claim... | {"chunk_index": 97, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [59, 66], "post_id": "WBPgacdjdZJCZaohj"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: [Paper] Stress-testing capability elicitation with password-locked models
Opening:
*The paper is by Ryan Greenblatt, Fabien Roger, Dmitrii Krasheninnikov and David Krueger. This post was written by Fabien and Ryan, and may not reflect the views of Dmitrii... | * **Determining security requirements and making open-sourcing decisions**: to know when you can no longer safely open source model weights and whether you need to implement strong security measures to avoid weights from being stolen, you need to know if an AI would be capable of causing a catastrophe (on its own or ... | {"chunk_index": 98, "chunk_kind": "head", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [15, 28], "post_id": "c4sZqhqPwNKGz3fFW"} | continuation |
You are continuing a LessWrong post by Ryan Greenblatt.
Title: [Paper] Stress-testing capability elicitation with password-locked models
Opening:
[Earlier portions of the post are elided.]
Recent paragraphs:
Good elicitation means you can use AIs to do useful (safety) work
-------------------------------------------... | If we had a huge number of high-quality demonstrations which closely matched how the AI would optimally do the task, then the AI would be forced to do the task well (up to the caveats mentioned above). AIs can either choose to “go along”, making best-effort predictions of demonstrations, in which case SGD might not cha... | {"chunk_index": 99, "chunk_kind": "mid", "coauthored": false, "coauthors": [], "format": "continuation", "para_range": [27, 42], "post_id": "c4sZqhqPwNKGz3fFW"} | continuation |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.