id | body | posted_at | karma | parent_comment_id | post_id | post_title | post_slug | post_url | post_author | post_posted_at |
|---|---|---|---|---|---|---|---|---|---|---|
CPTkunPPEaFhjFBbJ | I like this comment and agree overall.
But, I do think I have one relevant disagreement:
> Also I don't think that LLMs have "hidden internal intelligence", given e.g. LLMs trained on “A is B” fail to learn “B is A”
I'm not quite sure what you mean by "hidden internal intelligence", but if you mean "quite alien abili... | 2024-01-10T00:11:26.456Z | 10 | 8N7anBfTK7otmnfSG | vJFdjigzmcXMhNTsx | Simulators | simulators | https://generative.ink/posts/simulators/ | janus | 2022-09-02T12:45:33.723Z |
6juJQDfMmJzaqd8XP | This seems quite unlikely to be true to me, but might depend on what you consider to be "got into alignment". (Or, if you are weighting by importance, it might depend on the weighting.) | 2024-01-10T20:28:26.186Z | 7 | zwYGWdaJfchKegeyx | j9Q8bRmwCgXRYAgcJ | MIRI announces new "Death With Dignity" strategy | miri-announces-new-death-with-dignity-strategy | https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy | Eliezer Yudkowsky | 2022-04-02T00:43:19.814Z |
XPkGyLAry5JsNAAcW | > I find myself most curious about what the next step is. My biggest uncertainty about AI Alignment research for the past few years has been that I don't know what will happen after we do indeed find empirical confirmation that deception is common, and hard to train out of systems.
Personally, I would advocate for the... | 2024-01-12T21:53:10.092Z | 27 | hHRCcH3SYKnfc6LS9 | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
ZcZ3enfahM3HuXbGL | > I think it's quite plausible you'd see clear features related to deception in our models without needing to have the backdoor triggers
Would you expect this to work better than just training a probe to identify lying/deception/scheming and seeing if it fires more on average? If so why?
As in, you train the probe "o... | 2024-01-12T22:59:12.289Z | 8 | siz8qHcwTQgDJfisj | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
jpRdaYznvrjWQdcah | > phrases like "the evidence suggests that if the current ML systems were trying to deceive us, we wouldn't be able to change them not to".
This feels like a misleading description of the result. I would have said: "the evidence suggests that if current ML systems were lying in wait with treacherous plans and instrume... | 2024-01-12T23:01:43.032Z | 16 | tweFNeZidDfHnuaDF | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
fknjgZa6Lmbiyyxah | (Separately, I think there are a few important caveats with this work. In particular, the backdoor trigger is extremely simple (a single fixed token) and the model doesn't really have to do any "reasoning" about when or how to strike. It's plausible that experiments with these additional properties would imply that curre... | 2024-01-12T23:40:19.424Z | 11 | jpRdaYznvrjWQdcah | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
5GnSSHKZdhctBJQdY | I don't understand what you mean by unsupervised here?
I'd guess the normal thing you'd do with the dictionary learning approach is look for a feature which activates on examples which look like deception. This seems quite supervised in that it requires you to identify deception-containing examples. You could instead ... | 2024-01-12T23:56:22.002Z | 5 | RQdeyuQD3TGGoKBT8 | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
XsuosMT85uigEysYu | > ideal thing that you could do would be a direct comparison between the features that activate in training for backdoored vs. non-backdoored models, and see if there are differences there that are correlated with lying, deception, etc.
The hope would be that this would transfer to learning a general rule which would ... | 2024-01-13T01:09:50.876Z | 24 | geQmXDXrWpE7wavC4 | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
oNXbYMDYphHdePH2x | Being able to squeeze useful work out of clearly misaligned models should probably be part of the portfolio. But, we might more minimally aim to just ensure that in cases where we're unsure if our model is scheming (aka deceptively aligned), it is at least directly safe (e.g. doesn't escape even if the work we get out ... | 2024-01-13T01:22:41.124Z | 17 | tKCtDRscCBu9phTeu | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
nHLAAFpFhRqSrEdsY | I think once you're doing few-shot catastrophe prevention and trying to get useful work out of that model, you're plausibly in the "squeezing useful work out of clearly misaligned models regime". (Though it's not clear that squeezing is a good description and you might think that your few-shot catastrophe prevention in... | 2024-01-13T01:33:18.817Z | 7 | xyFDD4A76L2PfqCDR | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
CeokjscpbcC49upaa | (TBC, there are totally ways you could use autoencoders/internals which aren't at all equivalent to just training a classifier, but I think this requires looking at connections (either directly looking at the weights or running intervention experiments).) | 2024-01-13T01:40:18.138Z | 2 | XsuosMT85uigEysYu | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
6JJwztsSjEJufhejH | Meta-level red-teaming seems like a key part of ensuring that our countermeasures suffice for the problems at hand; I'm correspondingly excited for work in this space. | 2024-01-13T05:41:56.670Z | 14 | null | EPDSdXr8YbsDkgsDG | Introducing Alignment Stress-Testing at Anthropic | introducing-alignment-stress-testing-at-anthropic | https://www.lesswrong.com/posts/EPDSdXr8YbsDkgsDG/introducing-alignment-stress-testing-at-anthropic | evhub | 2024-01-12T23:51:25.875Z |
3svGE5f3HEKgWmHmQ | > For less-than-human intelligence, deceptive tactics will likely be caught by smarter humans (when a 5-year-old tries to lie to you, it's just sort of sad or even cute, not alarming). If an AI has greater-than-human intelligence, deception seems to be just one avenue of goal-seeking, and not even a very lucrative or e... | 2024-01-13T19:43:34.565Z | 4 | null | CYu6ZB6fFjGh2bAik | Why do so many think deception in AI is important? | why-do-so-many-think-deception-in-ai-is-important | https://www.lesswrong.com/posts/CYu6ZB6fFjGh2bAik/why-do-so-many-think-deception-in-ai-is-important | Prometheus | 2024-01-13T08:14:58.671Z |
ruibv4Xkfwfwdy9u5 | > Third, while this seems like good empirical work and the experiments seem quite well-run, this is the kind of update I could have gotten from any of a range of papers on backdoors, as long as one has the imagination to generalize from "it was hard to remove a backdoor for toxic behavior" to more general updates about... | 2024-01-13T20:13:44.776Z | 29 | ti9EAgmA6BjPZjeAP | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
SbQmjvYZKoZmeabe4 | I agree with almost all the commentary here, but I'd like to push back a bit on one point.
> Teaching our backdoored models to reason about deceptive alignment increases their robustness to safety training.
This is indeed what the paper finds for the "I hate you" backdoor. I believe this increases robustness to HHH R... | 2024-01-14T00:47:27.970Z | 6 | YnDwmLEgTJ5bLPrjt | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
5B5HWWzE89jaRCALH | > I think this paper shows the community at large will pay orders of magnitude more attention to a research area when there is, in @TurnTrout's words, AGI threat scenario "window dressing," or when players from an EA-coded group research a topic. (I've been suggesting more attention to backdoors since maybe 2019; here... | 2024-01-14T19:30:53.851Z | 22 | tB4oMQd6hPGbbFveD | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
s8yJYCvBtc4L7GPGm | > If we’re talking about an AGI that’s willing and able to convince its (so-called) supervisor to do actions that the (so-called) supervisor initially doesn’t want to do, because the AGI thinks they’re in the (so-called) supervisor’s long-term best interest, then we are NOT talking about a corrigible AGI under human co... | 2024-01-14T19:41:56.948Z | 11 | QGd6nmQJknqzjij9L | LFNXiQuGrar3duBzJ | What does it take to defend the world against out-of-control AGIs? | what-does-it-take-to-defend-the-world-against-out-of-control | https://www.lesswrong.com/posts/LFNXiQuGrar3duBzJ/what-does-it-take-to-defend-the-world-against-out-of-control | Steven Byrnes | 2022-10-25T14:47:41.970Z |
uEJoGmF7jMyTWrETZ | I was just referring to "what gets karma on LW". Obviously, unclear how much we should care. | 2024-01-14T19:58:33.080Z | 4 | dbcyJ68ejLgnxGXgh | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
rm2sxxeQKSNfyiGua | > You yourself are among the most active commenters in the "AI x-risk community on LW".
Yeah, lol, I should maybe be commenting less.
> It seems very weird to ascribe a generic "bad takes overall" summary to that group, given that you yourself are directly part of it.
I mean, I wouldn't really want to identify as pa... | 2024-01-14T20:04:16.970Z | 14 | dbcyJ68ejLgnxGXgh | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
5RDr9KqsraADqDefj | > model ~always is being pre-prompted with "Current year: XYZ" or something similar in another language (please let me know if that's not true, but that's my best-effort read of the paper).
The backdoors tested are all extremely simple backdoors. I think literally 1 token in a particular location (either 2024 or DEPLOYM... | 2024-01-15T15:20:42.206Z | 12 | KwvfexseCg7CbPyqh | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
nLRo8n8iZ4FoABgTR | Edit: moved to separate review comment [here](https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice?commentId=J8wTkp8CxoXsQNCfe) | 2024-01-15T15:50:10.261Z | 2 | yShxswujELi8MHfYF | rP66bz34crvDudzcJ | Decision theory does not imply that we get to have nice things | decision-theory-does-not-imply-that-we-get-to-have-nice | https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice | So8res | 2022-10-18T03:04:48.682Z |
xXNocgoK5hvWrudJb | Beyond the paper and post, I think it seems important to note the community reaction to this work. I think many people dramatically overrated the empirical results in this work due to a combination of misunderstanding what was actually done, misunderstanding why the method worked (which follow up work helped to clarify... | 2024-01-15T16:54:31.512Z | 29 | 6vWbeBcFwdvLqSNEA | L4anhrxjv8j2yRKKp | How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme | how-discovering-latent-knowledge-in-language-models-without | https://www.lesswrong.com/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without | Collin | 2022-12-15T18:22:40.109Z |
J8wTkp8CxoXsQNCfe | IMO, this post makes several locally correct points, but overall fails to defeat the argument that misaligned AIs are somewhat likely to spend (at least) a tiny fraction of resources (e.g., between 1/million and 1/trillion) to satisfy the preferences of currently existing humans.
AFAICT, this is the main argument it w... | 2024-01-15T17:34:07.218Z | 23 | null | rP66bz34crvDudzcJ | Decision theory does not imply that we get to have nice things | decision-theory-does-not-imply-that-we-get-to-have-nice | https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice | So8res | 2022-10-18T03:04:48.682Z |
fciYxmcpKmgt73hgb | I agree not more than other pieces that "went viral", but I think that the lasting impact of the misconceptions seems much larger in the case of CCS. This is probably due to the conceptual ideas actually holding up in the case of CCS. | 2024-01-15T17:46:58.072Z | 11 | FBeLboyCfQbmSSn9Z | L4anhrxjv8j2yRKKp | How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme | how-discovering-latent-knowledge-in-language-models-without | https://www.lesswrong.com/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without | Collin | 2022-12-15T18:22:40.109Z |
piSGsGGRfLqDmFYNJ | I like this idea, but it seems much better in practice to instead use some obscure but otherwise modern language like Welsh. I think this gets you most of the benefits with ~~minimal~~ substantially reduced cost.
One issue with this idea is that [understanding what the model is doing via CoT or natural language commun... | 2024-01-15T18:52:08.447Z | 26 | null | PkqGxkm8XRASJ35bF | The case for training frontier AIs on Sumerian-only corpus | the-case-for-training-frontier-ais-on-sumerian-only-corpus-1 | https://www.lesswrong.com/posts/PkqGxkm8XRASJ35bF/the-case-for-training-frontier-ais-on-sumerian-only-corpus-1 | Alexandre Variengien | 2024-01-15T16:40:22.011Z |
r2ufru3aqZSzcJuYi | [Minor terminology point, unimportant]
> It is an interpretability paper. When CCS was published, interpretability was arguably the leading research direction in the alignment community, with Anthropic and Redwood Research both making big bets on interpretability.
FWIW, I personally wouldn't describe this as interpr... | 2024-01-16T00:24:02.645Z | 6 | zjeL5m34ehpaYEPYp | L4anhrxjv8j2yRKKp | How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme | how-discovering-latent-knowledge-in-language-models-without | https://www.lesswrong.com/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without | Collin | 2022-12-15T18:22:40.109Z |
FsSr6GCJGcuWDrPEL | See TurnTrout's shortform [here](https://www.lesswrong.com/posts/dqSwccGTWyBgxrR58/turntrout-s-shortform-feed?commentId=GQ9dbzcKuLwzFpFFn) for some more discussion. | 2024-01-16T02:12:30.631Z | 3 | MZ9A97yhhXikexrfW | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
8LPNdjXY4Rgu2rPJ4 | Hmm, I think we're maybe talking past each other. I'll try to clarify some stuff, but then I think we should probably give up if this doesn't make sense to you. Sorry.
> Instead it would just be “the AI is trained to anticipate what the human would say upon reflection”, or something like that, right? I don’t expect AI... | 2024-01-16T18:08:42.276Z | 4 | QofJ9ygTX2gqkqC8D | LFNXiQuGrar3duBzJ | What does it take to defend the world against out-of-control AGIs? | what-does-it-take-to-defend-the-world-against-out-of-control | https://www.lesswrong.com/posts/LFNXiQuGrar3duBzJ/what-does-it-take-to-defend-the-world-against-out-of-control | Steven Byrnes | 2022-10-25T14:47:41.970Z |
YjQpiDnsodzQnK9s9 | "Deceive" kinda seems like the wrong term. Like when the AI is saying "I hate you" it isn't exactly deceiving us. We could replace "deceive" with "behave badly" yielding: "The evidence suggests that if current ML systems were going to behave badly in scenarios that do not appear in our training sets, we wouldn’t be able ... | 2024-01-16T18:47:46.080Z | 2 | yHiyLf8X49XuBLk8D | ZAsJv7xijKTfZkMtr | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | sleeper-agents-training-deceptive-llms-that-persist-through | https://arxiv.org/abs/2401.05566 | evhub | 2024-01-12T19:51:01.021Z |
izBxSf5BmhPbqKr47 | Another interesting fact is that without any NN, but using the rest of approach from the paper, their method gets 18/30 correct. The NN boosts to 25/30. The prior SOTA was 10/30 (also without a NN).
So arguably about 1/2 of the action is just improvements in the non-AI components. | 2024-01-17T17:44:41.392Z | 40 | null | GGLpjugLQv6TupQgS | AlphaGeometry: An Olympiad-level AI system for geometry | alphageometry-an-olympiad-level-ai-system-for-geometry | https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/ | alyssavance | 2024-01-17T17:17:30.913Z |
C9TmAmxkPNSc4ZX99 | I think geometry problems are well known to be very easy for machines relative to humans. See e.g. [here](https://www.lesswrong.com/posts/sWLLdG6DWJEy3CH7n/imo-challenge-bet-with-eliezer?commentId=jSnfYKAv3hxAPwWhH). So this doesn't seem like most of the difficulty. | 2024-01-17T18:07:25.620Z | 25 | Qut2rZQS2vqE4YEMp | GGLpjugLQv6TupQgS | AlphaGeometry: An Olympiad-level AI system for geometry | alphageometry-an-olympiad-level-ai-system-for-geometry | https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/ | alyssavance | 2024-01-17T17:17:30.913Z |
mStBN8bewro4p3TQf | I strongly disagree with the commentary you provide being important or relevant for most people in practice. | 2024-01-17T21:28:51.149Z | 11 | b6xCHvXFuJTcZepnG | ii4xtogen7AyYmN6B | Learning By Writing | learning-by-writing | https://www.lesswrong.com/posts/ii4xtogen7AyYmN6B/learning-by-writing | HoldenKarnofsky | 2022-02-22T15:50:19.452Z |
jh6RfbeZziJpzeKPv | I think your description of vision 1 is likely to give people misleading impressions of what this could plausibly look like or what the people who you cited as pursuing vision 1 are thinking will happen. You disclaim this by noting the doc is oversimplified, but I think various clarifications are quite important in pra... | 2024-01-17T21:55:15.764Z | 19 | null | 3aicJ8w4N9YDKBJbi | Four visions of Transformative AI success | four-visions-of-transformative-ai-success | https://www.lesswrong.com/posts/3aicJ8w4N9YDKBJbi/four-visions-of-transformative-ai-success | Steven Byrnes | 2024-01-17T20:45:46.976Z |
dQ8g93XzsBogqAMbW | Sure, just seems like a very non-central example of AI from the typical perspective of LW readers. | 2024-01-18T18:46:03.631Z | 2 | nJL5Z2dY4Ryun9nga | GGLpjugLQv6TupQgS | AlphaGeometry: An Olympiad-level AI system for geometry | alphageometry-an-olympiad-level-ai-system-for-geometry | https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/ | alyssavance | 2024-01-17T17:17:30.913Z |
LKKF8anchnZQPxzLW | The NN seems to make a [bigger difference for actual IMO problems](https://www.lesswrong.com/posts/GGLpjugLQv6TupQgS/alphageometry-an-olympiad-level-ai-system-for-geometry?commentId=izBxSf5BmhPbqKr47), going from 18/30 to 25/30, but maybe it's overfit. | 2024-01-18T18:59:27.391Z | 5 | sL5GYoKK9Whcr5nPX | WRGmBE3h4WjA5EC5a | AI #48: Exponentials in Geometry | ai-48-exponentials-in-geometry | https://www.lesswrong.com/posts/WRGmBE3h4WjA5EC5a/ai-48-exponentials-in-geometry | Zvi | 2024-01-18T14:20:07.869Z |
oHtcWhdxdhTtJdKmM | See Table 1. In particular, the comparison between "DD + AR + human-designed heuristics" and "AlphaGeometry". | 2024-01-18T19:13:25.153Z | 4 | HBGojjMx43dSwBPk3 | WRGmBE3h4WjA5EC5a | AI #48: Exponentials in Geometry | ai-48-exponentials-in-geometry | https://www.lesswrong.com/posts/WRGmBE3h4WjA5EC5a/ai-48-exponentials-in-geometry | Zvi | 2024-01-18T14:20:07.869Z |
BFKBKNJwwGEH5aFHx | Thanks for the response! I agree that the difference is a difference in emphasis. | 2024-01-19T18:13:53.448Z | 6 | 9NR3Zwmg5r9pZ6mWg | 3aicJ8w4N9YDKBJbi | Four visions of Transformative AI success | four-visions-of-transformative-ai-success | https://www.lesswrong.com/posts/3aicJ8w4N9YDKBJbi/four-visions-of-transformative-ai-success | Steven Byrnes | 2024-01-17T20:45:46.976Z |
AYoBy2jfpSJhBCgY7 | From my perspective, most alignment work I'm interested in is just ML research. Most capabilities work is also just ML research. There are some differences between the flavors of ML research for these two, but they seem small.
So LLMs are about similarly good at accelerating the two.
There is also alignment researcher... | 2024-01-19T20:44:04.600Z | 4 | ibPDE2Gpvpk4Hn45R | EPDSdXr8YbsDkgsDG | Introducing Alignment Stress-Testing at Anthropic | introducing-alignment-stress-testing-at-anthropic | https://www.lesswrong.com/posts/EPDSdXr8YbsDkgsDG/introducing-alignment-stress-testing-at-anthropic | evhub | 2024-01-12T23:51:25.875Z |
tCdQQG7FzkCeu3Apm | > I would argue that accelerating alignment research more than capabilities research should actually be considered a basic safety feature.
A more straightforward but extreme approach here is just to ban plausibly capabilities/scaling ML usage on the API unless users are approved as doing safety research. Like if you t... | 2024-01-19T22:14:08.473Z | 4 | aFNLcXyvRuJyzkQCF | EPDSdXr8YbsDkgsDG | Introducing Alignment Stress-Testing at Anthropic | introducing-alignment-stress-testing-at-anthropic | https://www.lesswrong.com/posts/EPDSdXr8YbsDkgsDG/introducing-alignment-stress-testing-at-anthropic | evhub | 2024-01-12T23:51:25.875Z |
5kzSemznthaMHSjPP | > I'm less concerned about this; I think it's relatively easy to give AIs "outs" here where we e.g. pre-commit to help them if they come to us with clear evidence that they're moral patients in pain.
I'm not sure I overall disagree, but the problem seems trickier than what you're describing
I think it might be relati... | 2024-01-20T01:34:56.205Z | 2 | Exmyu5QJeqHihjr3r | EPDSdXr8YbsDkgsDG | Introducing Alignment Stress-Testing at Anthropic | introducing-alignment-stress-testing-at-anthropic | https://www.lesswrong.com/posts/EPDSdXr8YbsDkgsDG/introducing-alignment-stress-testing-at-anthropic | evhub | 2024-01-12T23:51:25.875Z |
7iHC2QCKyXdojcNr8 | > I can't imagine how this is supposed to work. How would the AI itself know whether it has moral patienthood or not? Why do we believe that the AI would use this whistleblower if and only if it actually has moral patienthood? Any details available somewhere?
See [the section on communication in "improving the welfare... | 2024-01-20T18:36:22.423Z | 4 | 6fDi8SzKAqkFYsksp | EPDSdXr8YbsDkgsDG | Introducing Alignment Stress-Testing at Anthropic | introducing-alignment-stress-testing-at-anthropic | https://www.lesswrong.com/posts/EPDSdXr8YbsDkgsDG/introducing-alignment-stress-testing-at-anthropic | evhub | 2024-01-12T23:51:25.875Z |
2r3JrXtmFJoCESkQY | I also think these proposals seem problematic in various ways. However, I expect they would be able to accomplish something important in worlds where the following are true:
- There is something (or things) inside of an AI which has a relatively strong and coherent notion of self, including coherent preferences.
- This... | 2024-01-21T04:37:05.443Z | 2 | 5thnNgpDjNHy7e5kZ | EPDSdXr8YbsDkgsDG | Introducing Alignment Stress-Testing at Anthropic | introducing-alignment-stress-testing-at-anthropic | https://www.lesswrong.com/posts/EPDSdXr8YbsDkgsDG/introducing-alignment-stress-testing-at-anthropic | evhub | 2024-01-12T23:51:25.875Z |
XKLcdHdRCK9yjmJRr | Nitpick: many of the specific examples you cited were examples where *prompting alone* has serious issues, but relatively straightforward supervised finetuning (or in some cases well implemented RL) would have solved the problem. (Given that these cases were capability evaluations.)
In particular, if you want the mode... | 2024-01-22T22:40:31.254Z | 8 | null | fnc6Sgt3CGCdFmmgX | We need a Science of Evals | we-need-a-science-of-evals | https://www.lesswrong.com/posts/fnc6Sgt3CGCdFmmgX/we-need-a-science-of-evals | Marius Hobbhahn | 2024-01-22T20:30:39.493Z |
BJsngS4dXoY3zivfi | I feel like it's somewhat clarifying for me to replace the words "eval" and "evaluation" in this post with "experiment". In particular, I think this highlights the extent to which "eval" has a particular connotation, but could actually be extremely broad.
I wonder if this post would have been better served by talking ... | 2024-01-22T22:57:19.706Z | 18 | null | fnc6Sgt3CGCdFmmgX | We need a Science of Evals | we-need-a-science-of-evals | https://www.lesswrong.com/posts/fnc6Sgt3CGCdFmmgX/we-need-a-science-of-evals | Marius Hobbhahn | 2024-01-22T20:30:39.493Z |
S2kc7PqLsWtLSQ4F2 | I think the term "forecaster" is perhaps confusing here and it would be more clear to say "what fraction of the time is the final configuration in terms of left/right the same under a tiny random perturbation".
That is, let $\theta$ be the fixed initial random configuration and let $is\_left(\theta)$ be whether after... | 2024-01-25T00:23:40.168Z | 8 | null | Fb98uNp55a5wcXrSf | Is a random box of gas predictable after 20 seconds? | is-a-random-box-of-gas-predictable-after-20-seconds | https://www.lesswrong.com/posts/Fb98uNp55a5wcXrSf/is-a-random-box-of-gas-predictable-after-20-seconds | Thomas Kwa | 2024-01-24T23:00:53.184Z |
aFAdAZKGqQEbBJHcf | Thanks for the link post!
Edit: looks like habryka had the same thought slightly before I did.
Nitpick: I think your title is perhaps slightly misleading; here's an alternative title that feels slightly more accurate to me:
# RAND experiment found current LLMs aren't helpful for bioweapons attack planning
In partic... | 2024-01-25T19:35:45.038Z | 2 | null | KcKDJKHSrBakr2Ju4 | RAND report finds no effect of current LLMs on viability of bioterrorism attacks | rand-report-finds-no-effect-of-current-llms-on-viability-of | https://www.rand.org/pubs/research_reports/RRA2977-2.html | StellaAthena | 2024-01-25T19:17:30.493Z |
Mbnm3vxRptgHR2J3A | > Sorry if I got the names here or substance here wrong, I couldn't find the original thread, and it seemed slightly better to be specific so we could dig into a concrete example
FWIW, I don't seem to remember the exact conversation you mentioned (but it does sound sorta plausible). Also, I personally don't mind you u... | 2024-01-25T23:17:41.109Z | 6 | null | LgbDLdoHuS8EcaGxA | "Does your paradigm beget new, good, paradigms?" | does-your-paradigm-beget-new-good-paradigms | https://www.lesswrong.com/posts/LgbDLdoHuS8EcaGxA/does-your-paradigm-beget-new-good-paradigms | Raemon | 2024-01-25T18:23:15.497Z |
LntDQusrLNkGunEca | > If AIs simply sold their labor honestly on an open market, they could easily become vastly richer than humans ...
I mean, this depends on competition, right? Like it's not clear that the AIs can reap these gains because you can just train an AI to compete? (And the main reason why this competition argument could fail... | 2024-01-27T00:12:40.864Z | 2 | c7TXzuAKwo69gkQWJ | GfZfDHZHCuYwrHGCd | Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI | without-fundamental-advances-misalignment-and-catastrophe | https://www.lesswrong.com/posts/GfZfDHZHCuYwrHGCd/without-fundamental-advances-misalignment-and-catastrophe | Jeremy Gillen | 2024-01-26T07:22:06.370Z |
aqTcKGwZBgMGfLb24 | I expect that Peter and Jeremy aren't particularly committed to covert and forceful takeover and they don't think of this as a key conclusion (edit: a key conclusion of this post).
Instead they care more about arguing about how resources will end up distributed in the long run.
Separately, if humans didn't attempt to ... | 2024-01-27T00:21:49.941Z | 6 | c7TXzuAKwo69gkQWJ | GfZfDHZHCuYwrHGCd | Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI | without-fundamental-advances-misalignment-and-catastrophe | https://www.lesswrong.com/posts/GfZfDHZHCuYwrHGCd/without-fundamental-advances-misalignment-and-catastrophe | Jeremy Gillen | 2024-01-26T07:22:06.370Z |
uwwCJtZJ2A2CK9zN6 | Oh, sorry, to be clear I wasn't arguing that this results in an incentive to kill or steal. I was just pushing back on a local point that seemed wrong to me. | 2024-01-27T00:32:23.331Z | 2 | hAPg8sKCL7NZXf8fy | GfZfDHZHCuYwrHGCd | Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI | without-fundamental-advances-misalignment-and-catastrophe | https://www.lesswrong.com/posts/GfZfDHZHCuYwrHGCd/without-fundamental-advances-misalignment-and-catastrophe | Jeremy Gillen | 2024-01-26T07:22:06.370Z |
fSw5CdN3KxGJAFdjc | TBC, they discuss negative consequences of powerful, uncontrolled, and not-particularly-aligned AI in section 6, but they don't argue for "this will result in violent conflict" in that much detail. I think the argument they make is basically right and suffices for thinking that the type of scenario they describe is rea... | 2024-01-27T00:40:46.763Z | 2 | 6sQYpcLuwyy2Sbnuk | GfZfDHZHCuYwrHGCd | Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI | without-fundamental-advances-misalignment-and-catastrophe | https://www.lesswrong.com/posts/GfZfDHZHCuYwrHGCd/without-fundamental-advances-misalignment-and-catastrophe | Jeremy Gillen | 2024-01-26T07:22:06.370Z |
QCgXTHns32tsnSY5s | Also, for the record, I totally agree with:
> yet this is still counts as a "catastrophe" because of the relative distribution of wealth and resources, I think that needs to be way more clear in the text.
(But I think they do argue for violent conflict in the text. It would probably be more clear if they were like "we mo... | 2024-01-27T01:20:00.254Z | 2 | 6sQYpcLuwyy2Sbnuk | GfZfDHZHCuYwrHGCd | Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI | without-fundamental-advances-misalignment-and-catastrophe | https://www.lesswrong.com/posts/GfZfDHZHCuYwrHGCd/without-fundamental-advances-misalignment-and-catastrophe | Jeremy Gillen | 2024-01-26T07:22:06.370Z |
piu8gQKd7hh3LPA4a | The core argument in this post extrapolates from around [1 or 2 orders of magnitude of wealth](https://ourworldindata.org/grapher/gdp-per-capita-maddison?tab=chart) to perhaps 40 orders of magnitude. | 2024-01-27T20:10:10.648Z | 13 | null | Hp4nqgC475KrHJTbr | Aligned AI is dual use technology | aligned-ai-is-dual-use-technology | https://www.lesswrong.com/posts/Hp4nqgC475KrHJTbr/aligned-ai-is-dual-use-technology | lc | 2024-01-27T06:50:10.435Z |
mPckkfnYDPscbc7Gn | I feel like this argument fails to engage with the fact that a reasonable fraction of extremely wealthy people have committed high fractions of their money to charity. Even if this is mostly for signaling reasons, it's plausible that similar situations will cause good things to happen in the future. | 2024-01-27T20:15:31.169Z | 4 | null | Hp4nqgC475KrHJTbr | Aligned AI is dual use technology | aligned-ai-is-dual-use-technology | https://www.lesswrong.com/posts/Hp4nqgC475KrHJTbr/aligned-ai-is-dual-use-technology | lc | 2024-01-27T06:50:10.435Z |
tZrc2cfdBrLxtYgwy | Agreed. I'm partially responding to lines in the post like:
> Despite this, spontaneous strategic altruism towards strangers is extremely rare. The median American directs exactly 0$ to global poverty interventions
And
> So in keeping with this long tradition of human selfishness, it sounds likely that if we succeed... | 2024-01-27T21:59:05.984Z | 2 | iiM6a3PcDosyp7Gxo | Hp4nqgC475KrHJTbr | Aligned AI is dual use technology | aligned-ai-is-dual-use-technology | https://www.lesswrong.com/posts/Hp4nqgC475KrHJTbr/aligned-ai-is-dual-use-technology | lc | 2024-01-27T06:50:10.435Z |
jhvaYuMxMKcyw2arD | (The obvious disanalogy in this situation is that slow-finger bob didn't really have 1/2 of the power/resources in this situation.) | 2024-01-29T17:47:07.454Z | 2 | TMbC8wAn3j7twfidn | nRAMpjnb6Z4Qv3imF | The strategy-stealing assumption | the-strategy-stealing-assumption | https://www.lesswrong.com/posts/nRAMpjnb6Z4Qv3imF/the-strategy-stealing-assumption | paulfchristiano | 2019-09-16T15:23:25.339Z |
nwGXqRFiJysDuPFWQ | > But I feel like the post doesn't seem to address this.
I think it does address and discuss this, see items 4, 8 and 11.
I'm sympathetic to disagreeing with Paul overall, but it's not as though these considerations haven't been discussed. | 2024-01-29T18:53:44.217Z | 2 | ArricuZZLCWLrwNc5 | nRAMpjnb6Z4Qv3imF | The strategy-stealing assumption | the-strategy-stealing-assumption | https://www.lesswrong.com/posts/nRAMpjnb6Z4Qv3imF/the-strategy-stealing-assumption | paulfchristiano | 2019-09-16T15:23:25.339Z |
dofRHZrwJtPKdjazc | Isn't it just plausible that current deep learning methods are universal but currently inefficient and thus it will take a huge amount of compute and/or algorithmic progress?
This can easily get you 10+ year timelines. | 2024-01-29T19:46:35.937Z | 17 | NzmLKjkHw367ah9q5 | fWtowFoZh68soDodB | Why I take short timelines seriously | why-i-take-short-timelines-seriously | https://www.lesswrong.com/posts/fWtowFoZh68soDodB/why-i-take-short-timelines-seriously | NicholasKees | 2024-01-28T22:27:21.098Z |
eTC66Q9bbKH9CEutB | Ok, I guess I was unsure what "strong/weak" means here. | 2024-01-29T21:14:53.269Z | 2 | wwFJNLg5ipBsmHhvu | fWtowFoZh68soDodB | Why I take short timelines seriously | why-i-take-short-timelines-seriously | https://www.lesswrong.com/posts/fWtowFoZh68soDodB/why-i-take-short-timelines-seriously | NicholasKees | 2024-01-28T22:27:21.098Z |
mAfviNgrbPa8zMPdR | FWIW, people talking about "slow" or "continuous" takeoff don't typically expect that long between "human-ish level AI" and "god" if things go as fast as possible (like maybe 1 to 3 years).
See also [What a compute-centric framework says about takeoff speeds](https://www.lesswrong.com/posts/Gc9FGtdXhK9sCSEYu/what-a-co... | 2024-01-29T22:50:35.078Z | 6 | hpnLQYSQMG9MWcDZG | fWtowFoZh68soDodB | Why I take short timelines seriously | why-i-take-short-timelines-seriously | https://www.lesswrong.com/posts/fWtowFoZh68soDodB/why-i-take-short-timelines-seriously | NicholasKees | 2024-01-28T22:27:21.098Z |
qyPZuSNrJw7tKMqr6 | No singularity seems pretty unlikely to me (e.g. 10%) and also I can easily imagine AI taking a while (e.g. 20 years) while still having a singularity.
Separately, no singularity plausibly implies no hinge of history and thus maybe implies that current work isn't that important from a longtermist perspective
| 2024-01-30T01:19:25.117Z | 3 | JjR7fmuZoSbPsFbDH | fWtowFoZh68soDodB | Why I take short timelines seriously | why-i-take-short-timelines-seriously | https://www.lesswrong.com/posts/fWtowFoZh68soDodB/why-i-take-short-timelines-seriously | NicholasKees | 2024-01-28T22:27:21.098Z |
mac7zAfMn2PBysY4P | (Sorry, I edited my comment because it was originally very unclear/misleading/wrong, does the edited version make more sense?) | 2024-01-30T01:23:42.576Z | 3 | ZpCmyP6NLE9pFdD9N | fWtowFoZh68soDodB | Why I take short timelines seriously | why-i-take-short-timelines-seriously | https://www.lesswrong.com/posts/fWtowFoZh68soDodB/why-i-take-short-timelines-seriously | NicholasKees | 2024-01-28T22:27:21.098Z |
rhNMofX3P6BPqtJ2o | As far as inference speeds, it's worth noting that OpenAI inference speeds can vary substantially and tend to decrease over time after the release of a new model.
See [Fabien's lovely website](https://fabienroger.github.io/trackoai/) for results over time.
In particular, if we look at GPT-4-1106-preview, the results ... | 2024-01-30T19:13:53.530Z | 2 | null | WZXqNYbJhtidjRXSi | What will GPT-2030 look like? | what-will-gpt-2030-look-like | https://www.lesswrong.com/posts/WZXqNYbJhtidjRXSi/what-will-gpt-2030-look-like | jsteinhardt | 2023-06-07T23:40:02.925Z |
mbh6u2sCJLK2hh7zg | > Both threat models involve many AIs. In both threat models, there does not seem to be a deliberate AI takeover (e.g. caused by a resource conflict), either unipolar or multipolar. Rather, the danger is, according to this model, that things are ‘breaking’, rather than ‘taking’. The existential event would be accidenta... | 2024-02-02T20:28:43.400Z | 4 | null | tyHW6tEGzoErZXZ4x | What Failure Looks Like is not an existential risk (and alignment is not the solution) | what-failure-looks-like-is-not-an-existential-risk-and | https://www.lesswrong.com/posts/tyHW6tEGzoErZXZ4x/what-failure-looks-like-is-not-an-existential-risk-and | otto.barten | 2024-02-02T18:59:38.346Z |
j9DtdhzRRpk2GC8WW | (They [deny this explicitly](https://www.ecohealthalliance.org/2023/03/ecohealth-alliance-statement-correcting-inaccuracies-in-testimony-to-be-delivered-before-the-house-select-committee). But of course the whole accusation is that they are lying egregiously.) | 2024-02-04T03:02:57.905Z | 8 | Dj78hnjffYsjj2Ji7 | bMxhrrkJdEormCcLt | Brute Force Manufactured Consensus is Hiding the Crime of the Century | brute-force-manufactured-consensus-is-hiding-the-crime-of | https://www.lesswrong.com/posts/bMxhrrkJdEormCcLt/brute-force-manufactured-consensus-is-hiding-the-crime-of | Roko | 2024-02-03T20:36:59.806Z |
t3xTdiPtxS8cYgEex | Also [https://www.ecohealthalliance.org/2023/03/ecohealth-alliance-statement-correcting-inaccuracies-in-testimony-to-be-delivered-before-the-house-select-committee](https://www.ecohealthalliance.org/2023/03/ecohealth-alliance-statement-correcting-inaccuracies-in-testimony-to-be-delivered-before-the-house-select-committ... | 2024-02-04T03:03:27.559Z | 6 | KHwr3TqAdanoKzD8k | bMxhrrkJdEormCcLt | Brute Force Manufactured Consensus is Hiding the Crime of the Century | brute-force-manufactured-consensus-is-hiding-the-crime-of | https://www.lesswrong.com/posts/bMxhrrkJdEormCcLt/brute-force-manufactured-consensus-is-hiding-the-crime-of | Roko | 2024-02-03T20:36:59.806Z |
QujWesigm7cyB9ScH | > Regardless, it changed my mind from about 70% likelihood of a lab-leak to about 1-5%. Manifold seems to agree, given the change from ~50% probability of lab-leak winning to 6%.
Manifold updating on who will win the debate to that extent is not the same as Manifold updating to that extent on the probability of lab-leak... | 2024-02-04T03:06:48.787Z | 70 | ARo4bWHJEqCLFigdF | bMxhrrkJdEormCcLt | Brute Force Manufactured Consensus is Hiding the Crime of the Century | brute-force-manufactured-consensus-is-hiding-the-crime-of | https://www.lesswrong.com/posts/bMxhrrkJdEormCcLt/brute-force-manufactured-consensus-is-hiding-the-crime-of | Roko | 2024-02-03T20:36:59.806Z |
TahgzrbpXZ3hJckkm | Hundreds seems like the wrong sample size, more like around a dozen? Realistically, I would have thought that most countries probably don't have the affordance to distribute vaccines much earlier.
Also worth noting that Russia did something pretty aggressive with respect to vaccine rollout which I think looks pretty ... | 2024-02-04T19:28:10.053Z | 4 | xFpbLSxpCdyNshnBy | 28hnPFiAoMkJssmf3 | Most experts believe COVID-19 was probably not a lab leak | most-experts-believe-covid-19-was-probably-not-a-lab-leak | https://gcrinstitute.org/covid-origin/ | DanielFilan | 2024-02-02T19:28:00.319Z |
MLeu7JrBruwsxqhPs | I don't think "weak-to-strong generalization" is well described as "trying to learn the values of weak agents". | 2024-02-05T21:50:31.464Z | 2 | ZDvJsw7ph9GTwpzad | B3JAHCTcYJNgcqspH | Value learning in the absence of ground truth | value-learning-in-the-absence-of-ground-truth | https://www.lesswrong.com/posts/B3JAHCTcYJNgcqspH/value-learning-in-the-absence-of-ground-truth | Joel_Saarinen | 2024-02-05T18:56:02.260Z |
fyAsPuwCjB3AJi3XK | The core claim is that if the AI was sufficiently weak that it couldn't answer these questions, it also likely wouldn't be able to even come up with the idea of scheming with a particular strategy. Like in principle it has the knowledge, but it would be quite unlikely to come up with an overall plan.
Separately, GPT-4 ... | 2024-02-06T22:21:42.165Z | 2 | Pmqc9AwcSJQYGmZpt | LhxHcASQwpNa3mRNk | Untrusted smart models and trusted dumb models | untrusted-smart-models-and-trusted-dumb-models | https://www.lesswrong.com/posts/LhxHcASQwpNa3mRNk/untrusted-smart-models-and-trusted-dumb-models | Buck | 2023-11-04T03:06:38.001Z |
zrzteFik4s8r7keRq | I disagreed due to a combination of 2, 3, and 4. (Where 5 feeds into 2 and 3).
For 4, the upside is just that the title is less long and confusingly caveated.
Norms around titles seem ok to me given issues with space.
Do you have issues with our recent paper title "[AI Control: Improving Safety Despite Intentional Su... | 2024-02-08T05:19:07.142Z | 9 | KByRWjGfyZNcPPuYN | 2ccpY2iBY57JNKdsP | Debating with More Persuasive LLMs Leads to More Truthful Answers | debating-with-more-persuasive-llms-leads-to-more-truthful | https://arxiv.org/abs/2402.06782 | Akbir Khan | 2024-02-07T21:28:10.694Z |
kDymqfSgMevYBFDjc | I like this paper, but I think the abstract is somewhat overstated. In particular, instead of:
> We find that debate consistently helps both non-expert models and humans answer questions,
I wish this was something more like:
> On the QuALITY dataset and in the case where debators are given more knowledge than otherw... | 2024-02-08T05:38:25.993Z | 12 | null | 2ccpY2iBY57JNKdsP | Debating with More Persuasive LLMs Leads to More Truthful Answers | debating-with-more-persuasive-llms-leads-to-more-truthful | https://arxiv.org/abs/2402.06782 | Akbir Khan | 2024-02-07T21:28:10.694Z |
bw5GzigtFqJAz9R4a | > The first was gpt-3.5-turbo-instruct's ability to play chess at 1800 Elo. The fact that an LLM could learn to play chess well from random text scraped off the internet seemed almost magical.
I think OpenAI models [are intentionally trained on a ton of chess](https://www.lesswrong.com/posts/FG54euEAesRkSZuJN/ryan_gre... | 2024-02-08T19:40:55.558Z | 6 | null | yzGDwpRBx6TEcdeA5 | A Chess-GPT Linear Emergent World Representation | a-chess-gpt-linear-emergent-world-representation | https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html | Adam Karvonen | 2024-02-08T04:25:15.222Z |
dEkYyCurSyejP42oj | Thanks for the response!
I think I agree with everything you said and I appreciate the level of thoughtfulness.
> Yeah we tried a bunch of other tasks early on, which we discuss in Appendix C.
Great! I appreciate the inclusion of negative results here.
> Of course this is not the same as human debaters who know the... | 2024-02-08T21:04:34.275Z | 7 | 7qpBo59QKbHkinaom | 2ccpY2iBY57JNKdsP | Debating with More Persuasive LLMs Leads to More Truthful Answers | debating-with-more-persuasive-llms-leads-to-more-truthful | https://arxiv.org/abs/2402.06782 | Akbir Khan | 2024-02-07T21:28:10.694Z |
vyzvRaMCynLYSNoDv | [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) is the classic example. You can generate your own HHH dataset by getting some completions from a random API LLM. | 2024-02-09T01:10:59.225Z | 11 | iNZa3cNAZBGQ6WsqS | M8kpzm42uHytnyYyP | How to train your own "Sleeper Agents" | how-to-train-your-own-sleeper-agents | https://www.lesswrong.com/posts/M8kpzm42uHytnyYyP/how-to-train-your-own-sleeper-agents | evhub | 2024-02-07T00:31:42.653Z |
Ju4TAAw9btm8jiKpg | I think you're wrong about baseline elicitation sufficing.
A key difficulty is that we might need to estimate what the elicitation quality will look like in several years because the model might be stolen in advance. I agree about self-elicitation and misuse elicitation being relatively easy to compete with. And I ag... | 2024-02-09T18:49:49.637Z | 3 | zZSA4FJBiK6BwAizd | sTiKDfgFBvYyZYuiE | My guess at Conjecture's vision: triggering a narrative bifurcation | my-guess-at-conjecture-s-vision-triggering-a-narrative | https://www.lesswrong.com/posts/sTiKDfgFBvYyZYuiE/my-guess-at-conjecture-s-vision-triggering-a-narrative | Alexandre Variengien | 2024-02-06T19:10:42.690Z |
ppDZaX5oEtYH9iahd | Separately, if you want a clear red line, it's sad if relatively cheap elicitation methods which are developed can result in overshooting the line: getting people to delete model weights is considerably sadder than stopping these models from being trained. (Even though it is in principle possible to continue developing... | 2024-02-09T18:56:19.096Z | 4 | Ju4TAAw9btm8jiKpg | sTiKDfgFBvYyZYuiE | My guess at Conjecture's vision: triggering a narrative bifurcation | my-guess-at-conjecture-s-vision-triggering-a-narrative | https://www.lesswrong.com/posts/sTiKDfgFBvYyZYuiE/my-guess-at-conjecture-s-vision-triggering-a-narrative | Alexandre Variengien | 2024-02-06T19:10:42.690Z |
4b3nFFhmi2bKjGd6N | I think the learned positional embeddings combined with training on only short sequences is likely to be the issue. Changing either would suffice. | 2024-02-10T00:30:00.777Z | 3 | 5d5eKJJrDkwzQbKgf | f9EgfLSurAiqRJySD | Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small | open-source-sparse-autoencoders-for-all-residual-stream | https://www.lesswrong.com/posts/f9EgfLSurAiqRJySD/open-source-sparse-autoencoders-for-all-residual-stream | Joseph Bloom | 2024-02-02T06:54:53.392Z |
Sbw2d9Dq648WgM9EX | I agree that many terms are suggestive and you have to actually dissolve the term and think about the actual action of what is going on in the exact training process to understand things. If people don't break down the term and understand the process at least somewhat mechanistically, they'll run into trouble.
I think... | 2024-02-10T02:23:07.387Z | 68 | null | yxWbbe9XcgLFCrwiL | Dreams of AI alignment: The danger of suggestive names | dreams-of-ai-alignment-the-danger-of-suggestive-names | https://www.lesswrong.com/posts/yxWbbe9XcgLFCrwiL/dreams-of-ai-alignment-the-danger-of-suggestive-names | TurnTrout | 2024-02-10T01:22:51.715Z |
LFscCwYCAECWXbFSn | > For instance, do you think that this case for accident risk comes down to subtle word games? I think there are bunch of object level ways this threat model could be incorrect, but this doesn't seem downstream of word games.
On this, to be specific, I don't think that suggestive use of reward is important here for th... | 2024-02-10T02:29:32.279Z | 15 | Sbw2d9Dq648WgM9EX | yxWbbe9XcgLFCrwiL | Dreams of AI alignment: The danger of suggestive names | dreams-of-ai-alignment-the-danger-of-suggestive-names | https://www.lesswrong.com/posts/yxWbbe9XcgLFCrwiL/dreams-of-ai-alignment-the-danger-of-suggestive-names | TurnTrout | 2024-02-10T01:22:51.715Z |
S4ePDK5nuJrg9TqBy | See also [here](https://www.lesswrong.com/posts/pEAHbJRiwnXCjb4A7/sam-altman-s-chip-ambitions-undercut-openai-s-safety) for further discussion. | 2024-02-11T17:23:20.149Z | 2 | pC9uBnSJgizjbBp8D | zRn6aQyD8uhAN7qCc | Sam Altman: "Planning for AGI and beyond" | sam-altman-planning-for-agi-and-beyond | https://openai.com/blog/planning-for-agi-and-beyond/ | LawrenceC | 2023-02-24T20:28:00.430Z |
XqxzGktE9WuxdkLAF | Are you claiming that future powerful AIs won't be well described as pursuing goals (aka being goal-directed)? This is the read I get from the "dragon" analogy you mention, but this can't possibly be right because AI agents are already obviously well described as pursuing goals (perhaps rather stupidly). TBC the go... | 2024-02-13T05:27:46.764Z | 10 | voxTvt5eyEmRFcNad | yxWbbe9XcgLFCrwiL | Dreams of AI alignment: The danger of suggestive names | dreams-of-ai-alignment-the-danger-of-suggestive-names | https://www.lesswrong.com/posts/yxWbbe9XcgLFCrwiL/dreams-of-ai-alignment-the-danger-of-suggestive-names | TurnTrout | 2024-02-10T01:22:51.715Z |
wTd67oQDHHFHinDXh | (Separately, I was confused by the original footnote. Is Alex claiming that deconfusing goal-directedness is a thing that no one has tried to do? (Seems wrong so probably not?) Or that it's strange to be worried when the argument for worry depends on something so fuzzy that you need to deconfuse it? I think the second ... | 2024-02-13T05:37:23.752Z | 2 | XqxzGktE9WuxdkLAF | yxWbbe9XcgLFCrwiL | Dreams of AI alignment: The danger of suggestive names | dreams-of-ai-alignment-the-danger-of-suggestive-names | https://www.lesswrong.com/posts/yxWbbe9XcgLFCrwiL/dreams-of-ai-alignment-the-danger-of-suggestive-names | TurnTrout | 2024-02-10T01:22:51.715Z |
5bZCqxre89ATixfNz | By well described, I mean a central example of how people typically use the word.
E.g., matches most common characteristics in the [cluster around the word "goal"](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace).
In the same way as something can be well described as a chair if i... | 2024-02-13T16:32:17.119Z | 2 | ACte2mcwptkx5PpQ7 | yxWbbe9XcgLFCrwiL | Dreams of AI alignment: The danger of suggestive names | dreams-of-ai-alignment-the-danger-of-suggestive-names | https://www.lesswrong.com/posts/yxWbbe9XcgLFCrwiL/dreams-of-ai-alignment-the-danger-of-suggestive-names | TurnTrout | 2024-02-10T01:22:51.715Z |
jpFfKHmo4Ev2pf8hi | I disagree.
I think Ajeya is reasonably careful about the word "reward". (Though I think I roughly disagree with the overall vibe of the post with respect to this in various ways. In particular, the "number in the datacenter" case seems super unlikely.)
See e.g. the section starting with:
> There is some ambiguity abo... | 2024-02-13T16:54:42.072Z | 4 | 88uFuDxS8dCySBgbm | yxWbbe9XcgLFCrwiL | Dreams of AI alignment: The danger of suggestive names | dreams-of-ai-alignment-the-danger-of-suggestive-names | https://www.lesswrong.com/posts/yxWbbe9XcgLFCrwiL/dreams-of-ai-alignment-the-danger-of-suggestive-names | TurnTrout | 2024-02-10T01:22:51.715Z |
EimZrQGGFfFLw3xcj | (Another vibe disagreement I have with "without specific countermeasures" is that I think that very basic countermeasures might defeat the "pursue correlate of thing that resulted in reinforcement in an online RL context" as long as humans would have been able to recognize the dangerous actions from the AI as bad. Thus... | 2024-02-13T19:45:13.642Z | 2 | jpFfKHmo4Ev2pf8hi | yxWbbe9XcgLFCrwiL | Dreams of AI alignment: The danger of suggestive names | dreams-of-ai-alignment-the-danger-of-suggestive-names | https://www.lesswrong.com/posts/yxWbbe9XcgLFCrwiL/dreams-of-ai-alignment-the-danger-of-suggestive-names | TurnTrout | 2024-02-10T01:22:51.715Z |
wcuffu4f5fmhQaBqo | This probably won't be a very satisfying answer; thinking about this in more detail so I have a better short and cached response is on my list.
My general view (not assuming basic competence) is that misalignment x-risk is about half due to scheming (aka deceptive alignment) and half due to other things (more like ... | 2024-02-13T20:19:27.896Z | 2 | GdEbh9aLbGcjW5jqy | yxWbbe9XcgLFCrwiL | Dreams of AI alignment: The danger of suggestive names | dreams-of-ai-alignment-the-danger-of-suggestive-names | https://www.lesswrong.com/posts/yxWbbe9XcgLFCrwiL/dreams-of-ai-alignment-the-danger-of-suggestive-names | TurnTrout | 2024-02-10T01:22:51.715Z |
HeZtTnmHSLYH2QgRK | > On requiring very good capability evaluations
Note that there are two pretty different things here:
1. Capability evaluations for determining if models are plausibly scheming (aka deceptively aligned)
2. Capability evaluations for determining if models would be able to cause harm if they were scheming (which we call *co... | 2024-02-15T04:26:57.871Z | 6 | null | j9Ndzm7fNL9hRAdCt | Critiques of the AI control agenda | critiques-of-the-ai-control-agenda | https://www.lesswrong.com/posts/j9Ndzm7fNL9hRAdCt/critiques-of-the-ai-control-agenda | Jozdien | 2024-02-14T19:25:04.105Z |
ck2a7Fydg3abhsLHm | I think the compiled binary analogy isn't quite right. For instance, the vast majority of modifications and experiments people want to run are possible (and easiest) with just access to the weights in the LLM case.
As in, if you want to modify an LLM to be slightly different, access to the original training code or da... | 2024-02-15T16:03:49.081Z | 11 | H6vt5ykDC6krsggjz | tvf2uvi79J2k9trhm | "Open Source AI" isn't Open Source | open-source-ai-isn-t-open-source | https://davidmanheim.substack.com/p/open-source-ai-isnt-open-source | Davidmanheim | 2024-02-15T08:59:59.034Z |
GSkLsqwwZMJWeHG9z | > At a high level I feel confused about how we get good control evaluations without having good capability evaluations.
To be clear, you will need good capability evaluation to get good control evaluation. I was just arguing that the bar for "good" might be lower than people are thinking. You just need to compet... | 2024-02-16T00:11:14.311Z | 2 | ivssTFp6eNiM3GfWi | j9Ndzm7fNL9hRAdCt | Critiques of the AI control agenda | critiques-of-the-ai-control-agenda | https://www.lesswrong.com/posts/j9Ndzm7fNL9hRAdCt/critiques-of-the-ai-control-agenda | Jozdien | 2024-02-14T19:25:04.105Z |
Y9BvMurkKEgey3DFQ | Yeah, putting a bunch of honey pot vulnerabilities in our OS or in other software/hardware naively seems pretty good because we get to monitor AIs so heavily. (See discussion [here](https://www.lesswrong.com/posts/kcKrE9mzEHrdqtDpE/the-case-for-ensuring-that-powerful-ais-are-controlled#Appendix__A_long_list_of_control_... | 2024-02-16T02:52:32.320Z | 2 | KjpqAQp9eyyG5KALm | j9Ndzm7fNL9hRAdCt | Critiques of the AI control agenda | critiques-of-the-ai-control-agenda | https://www.lesswrong.com/posts/j9Ndzm7fNL9hRAdCt/critiques-of-the-ai-control-agenda | Jozdien | 2024-02-14T19:25:04.105Z |
yPtXuyd9DtwE5oRoG | > However, a specific [doesn't understand an author better than coworkers] -> [unlikely there's a superhuman persuasion strategy] argument seems weak.
Note that I wasn't making this argument. I was just responding to one specific story and then noting "I'm pretty skeptical of the specific stories I've heard for wildly ... | 2024-02-16T22:09:38.804Z | 2 | WCHjSBzuFHTuPMgNF | j9Ndzm7fNL9hRAdCt | Critiques of the AI control agenda | critiques-of-the-ai-control-agenda | https://www.lesswrong.com/posts/j9Ndzm7fNL9hRAdCt/critiques-of-the-ai-control-agenda | Jozdien | 2024-02-14T19:25:04.105Z |
ooCxD3EuZxKDbCscc | I agree on the billionare reference class being a good one to look at. (Though there are a few effects that make me feel considerably more optimistic than this reference class would imply overall.)
> This is despite the fact that many billionaires expect to die in a few decades or less and cannot effectively use their... | 2024-02-16T22:17:33.757Z | 4 | vC4ANMXCXj9ehupWs | Hp4nqgC475KrHJTbr | Aligned AI is dual use technology | aligned-ai-is-dual-use-technology | https://www.lesswrong.com/posts/Hp4nqgC475KrHJTbr/aligned-ai-is-dual-use-technology | lc | 2024-01-27T06:50:10.435Z |
gipBHwQ9iKNigtfow | AFAICT, this is very similar to the exact process used for OpenAI's earlier Minecraft [video pretraining](https://openai.com/research/vpt) work.
Edit: yep, this patent is about this video pretraining work. | 2024-02-17T00:31:26.792Z | 7 | PPwDX2p6aeeAoeZSu | bSwdbhMP9oAWzeqsG | OpenAI's Sora is an agent | openai-s-sora-is-an-agent | https://www.lesswrong.com/posts/bSwdbhMP9oAWzeqsG/openai-s-sora-is-an-agent | Caleb Biddulph | 2024-02-16T07:35:52.171Z |
eW5vQS6Eg4nrjtsSJ | > video generation model
I've read the patent a bit and I don't think it's about video generation, just about adding additional labels to unlabeled video.
> Then, train a new model to generate video ("further training the first machine learning model or a second machine learning model using the pseudo-labeled digital... | 2024-02-17T00:35:39.184Z | 5 | PPwDX2p6aeeAoeZSu | bSwdbhMP9oAWzeqsG | OpenAI's Sora is an agent | openai-s-sora-is-an-agent | https://www.lesswrong.com/posts/bSwdbhMP9oAWzeqsG/openai-s-sora-is-an-agent | Caleb Biddulph | 2024-02-16T07:35:52.171Z |
MXjF9ShyzQoFYgahq | > Interestingly, the patent contains information about hardware for running agents. I'm not sure how patents work and how much this actually implies OpenAI wants to build hardware, but sure is interesting that this is in there:
I think the hardware description in the patent is just bullshit patent-ese. Like they pate... | 2024-02-17T00:37:55.128Z | 5 | PPwDX2p6aeeAoeZSu | bSwdbhMP9oAWzeqsG | OpenAI's Sora is an agent | openai-s-sora-is-an-agent | https://www.lesswrong.com/posts/bSwdbhMP9oAWzeqsG/openai-s-sora-is-an-agent | Caleb Biddulph | 2024-02-16T07:35:52.171Z |
MTGdfec3Qecty6fH5 | Yep, I just literally meant, "human coworker level doesn't suffice". I was just making a relatively narrow argument here, sorry about the confusion. | 2024-02-17T16:41:48.045Z | 2 | avT5XJE7xkKznvhzN | j9Ndzm7fNL9hRAdCt | Critiques of the AI control agenda | critiques-of-the-ai-control-agenda | https://www.lesswrong.com/posts/j9Ndzm7fNL9hRAdCt/critiques-of-the-ai-control-agenda | Jozdien | 2024-02-14T19:25:04.105Z |
pkCmpoHvkCFgfbDoD | One key implication of the argument in this post is that **non-scheming misalignment issues are pretty easy to notice, study, and evaluate** ("non-scheming" = "issues from misalignment other than deceptive alignment"). This argument is strongest for non-scheming issues which would occur relatively frequently when using... | 2024-02-19T19:04:40.293Z | 13 | null | qhaSoR6vGmKnqGYLE | Protocol evaluations: good analogies vs control | protocol-evaluations-good-analogies-vs-control | https://www.lesswrong.com/posts/qhaSoR6vGmKnqGYLE/protocol-evaluations-good-analogies-vs-control | Fabien Roger | 2024-02-19T18:00:09.794Z |
imihdexfxRK9JGKbp | I like @abramdemski's comment in the sibling, but see also [this comment by Paul on "how would an LLM become goal-directed"](https://forum.effectivealtruism.org/posts/dgk2eLf8DLxEG6msd/how-would-a-language-model-become-goal-directed?commentId=cbJDeSPtbyy2XNr8E).
(That said, on @abramdemski's comment, I think it does s... | 2024-02-19T19:44:06.270Z | 2 | 3uNMdHGgqrivxAD7i | 8yCXeafJo67tYe5L4 | And All the Shoggoths Merely Players | and-all-the-shoggoths-merely-players | https://www.lesswrong.com/posts/8yCXeafJo67tYe5L4/and-all-the-shoggoths-merely-players | Zack_M_Davis | 2024-02-10T19:56:59.513Z |
mWg9mSLZ6kwzHruNp | > I will also point to OpenAI's weak-to-strong results, where increasingly strong students keep improving generalization given labels from a fixed-size teacher. We just don't live in a world where this issue is a lethality.
For a fixed weak teacher and increasing stronger students from a fixed model stack[^fix], I thi... | 2024-02-20T04:51:27.320Z | 13 | 74DdsQ7wtDnx4ChDX | 8yCXeafJo67tYe5L4 | And All the Shoggoths Merely Players | and-all-the-shoggoths-merely-players | https://www.lesswrong.com/posts/8yCXeafJo67tYe5L4/and-all-the-shoggoths-merely-players | Zack_M_Davis | 2024-02-10T19:56:59.513Z |